Adding Code Coverage Reports To Domino-Container-Run Tests

Mon Mar 11 15:33:02 EDT 2024

Tags: docker testing

When you're writing test suites for your code, it can be very useful to use a tool to analyze the code coverage of your tests. While people can get a little obsessive about coverage percentages, there's certainly no denying that it's helpful to know how much of your code actually runs during testing and to be able to drill down into the specifics of what's covered.

With Java, one of the preeminent tools for this is JaCoCo, a venerable open-source library that you can integrate with your test suites to give reports of your coverage. In a normal project, such as a build run via Maven, you can use the Maven plugin in tandem with the Maven Surefire and Failsafe plugins. However, things get more complicated if the code you're actually testing isn't in the Surefire JVM, but rather inside a container.

That's exactly the situation I have with the integration-test suite of the XPages Jakarta EE project, which creates a Docker container with the current build of the project deployed as OSGi plugins and then executes HTTP calls against OSGi bundles and NSFs. I figured this was a solvable problem, so I set out to solve it.

I first came across this blog post, which describes the general idea well, but unfortunately references Gists that seem to no longer exist. Still, it gave me a good starting point.

Installing JaCoCo in Domino

The first thing I had to do was to get the JaCoCo Java agent into the container. I added it as a Maven dependency to the IT suite project:

<dependency>
	<groupId>org.jacoco</groupId>
	<artifactId>org.jacoco.agent</artifactId>
	<version>0.8.11</version>
	<scope>test</scope>
</dependency>

Conveniently, this dependency is itself a wrapper for the agent JAR and comes with a convenience method for accessing the JAR data. I used that to read it into memory and send it to the Docker runtime during the container build:

byte[] agentData;
try(InputStream is = AgentJar.getResourceAsStream()) {
	agentData = IOUtils.toByteArray(is);
}
withFileFromTransferable("staging/jacoco.jar", Transferable.of(agentData)); //$NON-NLS-1$

The use of Transferable here allows me to keep the process independent of whether Docker is running locally or remotely - I run remotely almost all the time nowadays, due to Domino's continued lack of an ARM port.

With the file in place, I modified my Dockerfile to copy it to a known location in the container:

COPY --chown=notes:notes staging/jacoco.jar /local/
COPY --chown=notes:notes staging/JavaOptionsFile.txt /local/

The JavaOptionsFile.txt was already there for another ARM-related reason, but it's important to note for the next step. A file like this is how you enable JaCoCo in the Domino JVM: I set JavaUserOptionsFile=/local/JavaOptionsFile.txt in the server's notes.ini, and the JVM reads additional options from that file. Following the instructions, I added -javaagent:/local/jacoco.jar=output=file,destfile=/tmp/jacoco.exec on its own line in this file. This causes JaCoCo to be loaded automatically with the HTTP JVM and to store its report in the named file on shutdown.
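
Put together, the configuration is just two lines across the two files:

# In notes.ini:
JavaUserOptionsFile=/local/JavaOptionsFile.txt

# In /local/JavaOptionsFile.txt:
-javaagent:/local/jacoco.jar=output=file,destfile=/tmp/jacoco.exec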

Reading the Data

That said, this didn't work immediately. The file "/tmp/jacoco.exec" was created properly inside the container, so the agent was running, but the file content was always zero bytes. I realized that this was due to the merciless way in which the container is killed by my test suite: there's no proper shutdown step, and so JaCoCo's shutdown hook never fires.

Fortunately, writing to a file isn't the only way JaCoCo can do its reporting - you can also have it open up a TCP port to connect to and read. So I changed the Java option line to:

-javaagent:/local/jacoco.jar=output=tcpserver,address=*,port=6300

I modified the withExposedPorts(...) call inside the class that builds my Testcontainers container to also include 6300, and then used getMappedPort(6300) to identify the actual randomized port mapped by Docker.
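
In sketch form, that wiring looks something like this (simplified from the real container class; the variable names are illustrative):

// In the GenericContainer subclass that builds the Domino image
withExposedPorts(80, 6300); // HTTP plus the JaCoCo agent's TCP port

// In the test code, to find the host-side mapping
String jacocoHost = container.getHost();
int jacocoPort = container.getMappedPort(6300); // the randomized port chosen by Docker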

The remaining task was to figure out the little protocol used by JaCoCo to signal that it should collect and return its data. I get the impression that it's not too complicated, but I still figured it'd be best to use an existing implementation. I found jacocotogo, a Maven plugin that reads the data, and it looked promising. However, it had two problems: being a Maven plugin, it came with a bunch of transitive dependencies I didn't want, and it's also 11 years old and thus a bit out of date.

I ended up forking the main utility class, trimming out the parts I didn't need (like JMX), switching it to NIO, and going from there.
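
The heart of the result looks something like this - a minimal sketch along the lines of JaCoCo's own ExecutionDataClient example rather than the exact forked code:

import java.io.FileOutputStream;
import java.net.Socket;

import org.jacoco.core.data.ExecutionDataWriter;
import org.jacoco.core.runtime.RemoteControlReader;
import org.jacoco.core.runtime.RemoteControlWriter;

public class JacocoDumpClient {
	public static void fetchData(String host, int port, String destFile) throws Exception {
		try(
			FileOutputStream localFile = new FileOutputStream(destFile);
			Socket socket = new Socket(host, port)
		) {
			// Writes the .exec header and receives the incoming data
			ExecutionDataWriter localWriter = new ExecutionDataWriter(localFile);
			// Sends the dump command to the agent and streams the results back
			RemoteControlWriter writer = new RemoteControlWriter(socket.getOutputStream());
			RemoteControlReader reader = new RemoteControlReader(socket.getInputStream());
			reader.setSessionInfoVisitor(localWriter);
			reader.setExecutionDataVisitor(localWriter);
			writer.visitDumpCommand(true, false); // dump=true, reset=false
			if(!reader.read()) {
				throw new IllegalStateException("Socket closed unexpectedly during dump");
			}
		}
	}
}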

Using the Data

With that all in place, a test run will end up with a file named "jacoco.exec" inside the "target" directory. Using this file varies by IDE, but, in Eclipse, you can install the EclEmma tool, open the "Coverage" view, right-click in the table area, and choose "Import Session...". That will let you locate the file and then choose the projects from your workspace that you're looking to analyze.

When I did that, I got my results:

Screenshot of Eclipse's Coverage tool detailing my test suite's coverage of somewhere around 50-65%

This is surprisingly good for the project, especially when you consider that large chunks of the red bars are things like the servlet wrapper package, which includes a lot of delegating code that's obligatory to match the interface but unlikely to actually be used in practice.

While this is currently the only project where I've needed to do this, it'll certainly be good to keep these techniques in mind. The TCP port thing in particular should be handy in future edge cases even without the Docker part.

Homelab Rework: Phase 3 - TrueNAS Core to Scale

Sat Mar 02 14:00:08 EST 2024

  1. Planning a Homelab Rework
  2. Homelab Rework: Phase 1
  3. Homelab Rework: Phase 2
  4. Homelab Rework: Phase 3 - TrueNAS Core to Scale

When I last talked about the ragtag fleet of computers I now generously call a "homelab", I had converted my gaming/VM machine back from Proxmox to Windows, where it remains (successfully) to this day.

For a while, though, I've been eyeing converting my NAS from TrueNAS Core to Scale. While I really like FreeBSD technically and philosophically, running Linux was very appealing for a number of reasons. Still, it was a high-risk operation, even though the actual process of migration looked almost impossibly easy. For some reason, I decided to take the plunge this week.

The Setup

Before going into the actual process, I'll describe the setup a bit. The machine in question is a Mac Pro 1,1: two Xeon 5150s, four traditional HDDs for storage, and a handful of M.2 drives. The machine itself is far, far too old to have NVMe on the motherboard, but it does have PCIe, so I got a couple adapter cards. The boot volume is a SATA M.2 disk on one of them, while I have some actual NVMe ones serving as cache/log devices in the ZFS pool. Also, though everything says that the maximum RAM capacity is 32 GB, I actually have 64 in there and it's worked perfectly.

It's a bit of a weird beast this way, but those old Mac Pros were built to last, and it's holding up.

Also, if you're not familiar with TrueNAS and its different variants, it's worth a bit of explanation. TrueNAS Core (née FreeNAS) is a FreeBSD-based NAS-focused OS. You primarily interact with it via a web-based GUI and its various features heavily revolve around the use of ZFS, while its app system uses FreeBSD jails and its VM system uses Bhyve. TrueNAS Scale is a related system, but based on Debian Linux instead of FreeBSD. It still uses ZFS, and its GUI is similar to Core, but it implements its apps and VMs differently (more on this in a bit). For NAS/file-share uses, there's actually less of a difference than you might think based on their different underlying OSes, but the distinctions come into play once you go beyond the basics.

The Conversion

If anything, the above-linked documentation overstates the complexity of the operation. I didn't even need to go the "manual update" route: I went to the Update panel, switched from the TrueNAS Core train to the current non-beta TrueNAS Scale one, hit Update, and let it go. It took a long time, presumably due to the age of the machine, but it did its job and came back up on its own.

Well, mostly: for some reason, the actual data ZFS pool was sort of half-detached. The OS knew it was supposed to have a pool by its name, but didn't match it up to the existing disks. To fix this, I deleted the configuration for the pool (but did not delete the connected service configuration) and then went to Import Pool, where the real one existed. Once it was imported, everything lined back up without further issue.

Scale being basically a completely different OS, there are a number of features that Core supports but Scale doesn't. Of that list, the only one I was using was the plugin/jail system, but I had whittled my use down to just Postgres (containing only discardable dev data) and Plex. These are both readily available in Scale's app system, and it was quick enough to get Plex set back up with the same library data.

Apps

As I mentioned, TrueNAS Core uses a custom-built "plugin" system sitting on top of the venerable FreeBSD jail capabilities. Those jails are similar in concept to things like Docker containers, and work very similarly in practice to the Linux Containers system I experienced with Proxmox.

TrueNAS Scale, for its part, uses Kubernetes, specifically by way of K3s, and provides its own convenient UI on top of it. Good thing it does provide this UI, too, since Kubernetes is a whole freaking thing, and I've up until this point stayed away from learning it. I guess my time has come, though. Kubernetes is distinct from Docker - while older versions used Docker as a runtime of sorts, this was always an implementation detail, and the system in use in current TrueNAS Scale is containerd.

Setting aside the conceptual complexity of Kubernetes, this distinction from Core is handy: while not being Docker, Kubernetes can consume Docker-compatible images and run them, and that ecosystem is huge. Additionally, while TrueNAS ships with a set of common app "charts" (Plex included), there's a community project named TrueCharts that adds definitions for tons and tons more.

Domino

That brings me to our beloved Domino. I had actually kind of gotten Domino running in a jail on TrueNAS Core, but it was much more an exercise in seeing if I could do it than anything useful: the installer didn't run, so I had to copy an installation from elsewhere, and the JVM wouldn't even load up without crashing. Neat to see, but I didn't keep it around.

The prospect on Scale is better, though. For one, it's actually Linux and thus doesn't need a binary-compatibility shim like FreeBSD has, and the container runtime meant I could presumably just use the normal image-building process. I could also run it in a VM, since the Linux hypervisor works on this machine while bhyve did not, but I figured I'd give the container path a shot.

Before I go any further, I'll give a huge caveat: while this works better than running it on FreeBSD, I wouldn't recommend actually doing what I've done for production. While it'll presumably do what I want it to do here (be a local replica of all of my DBs without requiring a distinct VM), it's not ideal. For one, Domino plus Kubernetes is a weird mix: Kubernetes is all about building up and tearing down a swarm of containers dynamically, while Domino is much more of a single-server sort of thing. It works, certainly, but Kubernetes is always there to tempt you into doing things weird. Also, I know almost nothing about Kubernetes anyway, so don't take anything I say here as advice. It's good fun, though.

That said, on to the specifics!

Deploying the Container

The way the TrueNAS app UI works, you can go to "Custom App" and configure your container by referencing a Docker image from a repository. I don't normally actually host a Docker registry, instead manually loading the image into the runtime. It might be possible to do that here, but I took the opportunity to set up a quick local-network-only one on my other machine, both because I figured it'd be neat to learn to do that and because I forgot about the Harbor-hosted option on that link.

Since the local registry used HTTP and there's nowhere in the TrueNAS UI to tell it to not use HTTPS, I followed this suggestion to configure K3s to explicitly map it. With that in place, I was able to start pulling images from my registry.
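
For reference, that K3s setting is a registries.yaml file; stock K3s reads it from /etc/rancher/k3s/registries.yaml, though TrueNAS may manage the location differently. The host and port here are placeholders for my local registry:

mirrors:
  "10.0.0.10:5000":
    endpoint:
      - "http://10.0.0.10:5000"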

The Domino Version

One quirk I quickly ran into was that I can't use Domino 14 on here. The reason for this isn't an OS problem, but rather a hardware limitation: the new glibc that Domino 14 uses requires the "x86-64-v2" microarchitecture level and the Xeon 5150 just doesn't have that by dint of pre-dating it by two years.

That's fine, though: I really just want this to house data, not app development, and 12.0.2 will do that with aplomb.

Volume Configuration

The way I usually set up a Domino container when using e.g. Docker Compose is that I define a handful of volumes to go with it: for the normal data dir, for DAOS, for the Transaction Log, and so forth. This is a bit of an affectation, I suppose, since I could also just define one volume for everything and it's not like I actually host these volumes elsewhere, but... I don't know, I like it. It keeps things disciplined.

Anyway, I originally set this up equivalently in the Custom App UI in TrueNAS, creating a "Volume" entry for each of these. However, I found that, for some reason, Domino didn't have write access to the newly-created volumes. Maybe this is due to the uid the container is built to use or something, but I worked around it by using Host Path Volumes instead. The net effect is the same, since they're in the same ZFS pool, and this actually makes it easier to peek at the data anyway, since it can be in the SMB share.

Once I did that and made sure the container user could modify the data, all was well. Mostly, anyway.

Transaction Logs, ZFS, and Sector Size

Once Domino got going, I noticed a minor problem: it would crash quickly, every time. Specifically, it crashed when it started preparing the transaction log directory. I eventually remembered running into the same problem on Proxmox at one point, and it brought me back to this blog post by Ted Hardenburgh. Long story short, my ZFS pool uses 4K sectors and Domino's transaction logs can't deal with that, at least in 12.0.2 and below.

This put me in a bit of a sticky spot, since the way to change this is to re-create the entire pool and I really didn't want to do that.

I came up with a workaround, though, in the form of making a little disk image and formatting it ext4. You can use a loop device to mount a file like a disk, so the process looks like this:

dd if=/dev/zero of=tlog.img bs=1G count=1
sudo /sbin/losetup --find --show tlog.img
sudo mkfs.ext4 /dev/loop0
sudo mount /dev/loop0 /mnt/tlog

That makes a 1GB disk image, formats it ext4, and mounts it as "/mnt/tlog". This process defaults to 512-byte sectors, so I made a directory within it writable by the container user (more on this shortly), configured the Domino container to map the transaction log directory to that path, and all was well.

Normally, to get this mounted at boot, you'd likely put an entry in fstab. However, TrueNAS assumes control over system configuration files like that, and you shouldn't edit them directly. Instead, what I did was write a small script that does the losetup and mount lines above and added an entry in "System Settings" - "Advanced" - "Init/Shutdown Scripts" to run this at pre-init.
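
The script itself is nothing fancy - a minimal version, assuming the image lives at a path like the below on the data pool, would be something like:

#!/bin/sh
# Attach the disk image to the first free loop device and mount it
LOOPDEV=$(/sbin/losetup --find --show /mnt/pool/tlog.img)
mkdir -p /mnt/tlog
mount "${LOOPDEV}" /mnt/tlog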

Networking

The next hurdle I wanted to get over was the networking side. You can map ports in apps in a similar way to what you'd do with Docker, but you have to map them to a port 9000 or above. That would be an annoying issue in general, but especially for NRPC. Fortunately, the app configuration allows you to give the container its own IP address in the "Add external Interfaces" (sic) configuration section. Since the virtual MAC address changes each time the container is deployed, I gave it a static IP address matching a reservation I carved out on my DHCP server, pointed it to the DNS server, and all was well. All of Domino's open ports are available on that IP, and it's basically like a VM in that way.

Container User

Normally, containers in TrueNAS's app system run as the "apps" user, though this is configurable per-app. The way the Domino container launches, though, it runs as UID 1000, which is notes inside the container. Outside the container, on my setup, that ID maps to... my user account jesse.

Administration-wise, that's not exactly the best! In a less "for fun" situation, I'd change the container user or look into UID mapping as I've done with Docker in the past, but honestly it's fine here. This means it's easy for me to access and edit Domino data/config files over the share, and it made the volume mapping above work without incident. As long as no admins find out about this, it can be my secret shame.

Future Uses

So, at this point, the server is doing the jobs it was doing previously, plus acting as a nice extra replica server for Domino. It's positioned well now for me to do a lot of other tinkering.

For one, it'll be a good opportunity for me to finally learn about Kubernetes, which I've been dragging my feet on. I installed the Portainer chart from TrueCharts to give me a look into the K8s layer in a way that's less abstracted than the TrueNAS UI but, for now at least, more familiar and comfortable than the kubectl tool.

Additionally, since the hypervisor works on here, it'll be another good location for me to store utility VMs when I need them, rather than putting everything on the Windows machine (which has half as much RAM).

I could possibly use it to host servers for various games like Terraria, though I'm a bit wary of throwing such ancient processors at the task. We'll see about that.

In general, I want to try hosting more things from home when they're non-critical, and this will definitely give me the opportunity. It's also quite fun to tinker with, and that's the most important thing.

XPages JEE 2.15.0 and Plans for JEE 10 and 11

Fri Feb 16 15:30:40 EST 2024

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

Today, I released version 2.15.0 of the XPages Jakarta EE project. As is often the case lately, this version contains bug fixes but also a few notable features:

  • You can now specify Servlets in WEB-INF/web.xml (as opposed to just via the @WebServlet annotation). This is helpful for defining a Servlet when the actual implementation is in a JAR or when following non-annotation-based examples
  • You can now specify context-param values in WEB-INF/web.xml in the NSF and META-INF/web-fragment.xml in JAR design elements, which will be available to JSP, JSF, JAX-RS, @WebServlet-annotated Servlets, and web.xml-defined Servlets
  • Added @BooleanStorage annotation for NoSQL entities to define how boolean values are converted to note items
  • Added CRUD operations for calendar events to NoSQL, around a few new methods on Repository. This exposes some of the capabilities of NotesCalendar and can be used for, for example, providing an iCalendar feed based on a mail database. To go with that, XPages JEE also re-exports iCal4J as included in the Domino stack for NSF use, though this API is... not smooth

The first two here are focused around bringing NSFs more in line with "normal" Jakarta EE applications, while the latter two are nice improvements for the NoSQL driver. I hope to put the last one in particular to good use - for example, OpenNTF's site will be able to provide a calendar of webinars and other events that we can manage internally using a normal Notes calendar, and that sounds nice to me.
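
To illustrate the first two features, a WEB-INF/web.xml in an NSF can now look like a standard one - the class and parameter names here are hypothetical:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="https://jakarta.ee/xml/ns/jakartaee" version="5.0">
	<context-param>
		<param-name>com.example.someSetting</param-name>
		<param-value>some value</param-value>
	</context-param>
	
	<servlet>
		<servlet-name>ExampleServlet</servlet-name>
		<servlet-class>com.example.ExampleServlet</servlet-class>
	</servlet>
	<servlet-mapping>
		<servlet-name>ExampleServlet</servlet-name>
		<url-pattern>/example</url-pattern>
	</servlet-mapping>
</web-app>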

Next Versions

I still have the 3.x branch of the project chugging along, and I think it'll be ready for a real release before too long. Since it'll be a breaking-changes release thanks to upstream changes, I'm using it as an opportunity to consolidate the sprawl of features and XPages Libraries. Currently, my plan is:

  • One for "core", covering most things in the Jakarta EE Core Profile, plus the other utility specs I've implemented: Transactions, Bean Validation (which really should be in Core in my estimation), Concurrency, Servlet, and so forth, plus Data and NoSQL
  • One for "UI", covering Jakarta Pages, Jakarta Faces, and MVC - basically, the stuff you could use to replace XPages to make an HTML-generating app in your NSF
  • One for MicroProfile, or at least the specs I've implemented so far. I'm a little tempted to wrap this in to Core, since things like OpenAPI are useful almost all the time, but it's a clean-enough separation that it'll be fine

This will require Domino 14, since Jakarta EE 10 requires at least Java 11.

That brings me to some unexpected good news: though Jakarta EE 11 was long planned to use Java 21 as its minimum version (since 21 is the current LTS), it looks like they've switched to making Java 17 the baseline. For me, this is a little sad in an idealistic sense, since it pushes things like Virtual Threads out of the realm of being a core part of JEE, but I'm very happy that I'll be able to use all JEE 11 specs in Domino 14. Even if Domino 15 used Java 21, it'd still be a long while before that came, and we'd lag behind the standard for at least a year. Instead, this puts the project back in line with upstream, and allows me personally to potentially resume committing to Jakarta NoSQL - I'd been out of the loop for a very long time when it moved to 11 and then 17 as its required version.

I don't know right now whether JEE 11 will be the same sort of breaking change for the project (which would mean a 4.x release) or if I'll be able to make it a 3.x one - the specs aren't out yet, so time will tell. The big focus of 11 will be further centralization on CDI instead of EJB, and I'm all for it.

My plan is to get 3.x out for Domino 14, based on JEE 10, as soon as time allows, and then I'll start looking into bumping to JEE 11 when it releases in the summer.

Notes/Domino 14 Fallout

Fri Dec 15 11:53:44 EST 2023

  1. AbstractCompiledPage, Missing Plugins, and MANIFEST.MF in FP10 and V10
  2. Domino 11's Java Switch Fallout
  3. fontconfig, Java, and Domino 11
  4. Notes/Domino 12.0.2 Fallout
  5. Notes/Domino 14 Fallout

Notes and Domino 14 are out now and, as I discussed back in June, the big deal for me is the move to Java 17. This also came with a refresh of the Eclipse innards, from Neon (circa 2016) to 2021-12 (circa, uh, 2021). The Eclipse update is welcome, but so far it's been less impactful than the Java update - at some point, I'll want to see if some of the current-era Eclipse plugins work here, but that's for the future.

In the mean time, there's a bunch to know, so let's get to it! I've broken this one down into "critical" and "less critical" sections, since this post will likely have a similar life to my target platform one.

Critical To Know

ndext

I mentioned this in June, but an important thing to know about Java 17 is that "jvm/lib/ext" no longer exists. Fortunately, Notes and Domino have long had a secondary location for this sort of thing: "ndext" in the program directory. Any JARs in this folder are available at runtime on both platforms, and so the quick fix is to move anything you had in "jvm/lib/ext" to "ndext".

...but.

When it comes to Notes, there's a distinct difference between "available at runtime" and "on the build path". While Java agents here are fine - they modified the editor to include "ndext" in the build path - there's no such accommodation for XPages. So far, the best workaround I've thought of is to add any such JARs manually to the JRE definition in Eclipse's preferences:

Screenshot of Designer's JRE preferences, showing the addition of an ndext JAR

When doing this, there's a huge caveat: do not just add everything from this directory to the JRE. Most of it will be redundant, like the SWT stuff, but some will be actively harmful, in particular "jsdk.jar". That's an even-older Servlet version than comes with XPages, and including that will cause Designer to think that methods added in Servlet 2 aren't present. Only add the JARs you're extending it with.

Ideally, this will be remedied one way or another in the future, but for now we'll have to do this to account for it. The silver lining may be that it's a good impetus to OSGi-ify your dependencies if possible.

Poi Remains

This isn't new, but it's worth mentioning while we're on the topic of "ndext". Since Notes 11, the client ships with an old version of Apache Poi, but this is painfully distributed right in the JVM and not at the OSGi layer. Newer versions of Poi 4 XPages deal with this, but it's important to know that, while Poi is in Notes, it's not in Domino. If you write agents using these classes, or add them to your JRE and use them in XPages, you'll also need to deploy them to Domino.

Java Policy Location

This one's more of a note, and it's reiterating something from June: the location of "java.policy" and (if you add it) "java.pol" changed since Java 8. They used to be in "jvm/lib/security" but they're in "jvm/conf/security" now. They work the same as before, as does putting a file named ".java.policy" in the Domino user's home dir, so the other characteristics haven't changed.
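
As a reminder of the format (unchanged from Java 8), the blanket grant that tends to show up in Domino examples looks like:

grant {
	permission java.security.AllPermission;
};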

sun.misc.BASE64Decoder

Back in Domino 11, the move from IBM J9 to OpenJ9 meant that some IBM internal forks of Sun internal classes were no longer available. The most notable of these were the BASE64 classes com.ibm.misc.BASE64Encoder and BASE64Decoder. These were very popular to use in the pre-Java-8 days, before java.util.Base64 existed, and so it was worth noting that they were gone then.

Well, sun.misc.BASE64Encoder has now met a similar fate - it's probably still in there somewhere, but it's no longer accessible by user code. If you haven't made the switch to java.util.Base64, do so now.
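
The replacement is a one-liner in each direction. Note that the old classes wrapped their output in MIME fashion, so Base64.getMimeEncoder() is the closest drop-in match if that matters to you:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

byte[] data = "hello".getBytes(StandardCharsets.UTF_8);
String encoded = Base64.getEncoder().encodeToString(data);
byte[] decoded = Base64.getDecoder().decode(encoded);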

Target Platform Bug

This is another note, but this classic bug that's been with us since 9.0.1FP10 is... fixed, I think! I'd thought at first that it remained, since the Target Platform config still just lists the Eclipse home dir, but installing and using an XPages library seemed to work properly without further change. Good!

Java Compiler Level

This is a fiddly one. Though Designer uses Java 17 by default now, it has the ability to compile down to older versions for past compatibility, such as running on a pre-14 server. Java agents have had their own way of dealing with this for a while, though it seems like maybe they default to Java 8 now, which is good. With XPages, it seems like it's a little less obvious.

From my experience, this is what I've found:

  • Existing projects may have a specific compiler setting specified (similar to agents in the above-linked post) and so will remain as Java 1.8
  • New projects seem to use the active workspace setting, which is where the next two points come into play
  • Upgrades of Designer in place will keep the default compiler level for the workspace at 1.8
  • A fresh installation sets the compiler level to 17

This is all... fine. It'd be nice if it was a little more obvious to the developer (like if it was controlled by the xsp.properties setting for minimum version), but this is more or less the Eclipse way of doing things. We'll just have to be aware for a while of these settings and their interactions when developing for pre-14 deployment. If you're targeting an older server, you should make sure that you go to Project Properties for your NSF and set the Java Compiler level to fit:

Screenshot of the Java Compiler settings for an NSF

Less Critical

Alright, that covers the big-ticket stuff, but there are a few more things to know.

Java Deprecation Warning On HTTP Start

This one is documented, though oddly in the "no longer included" page: when you start HTTP on Domino, you'll see a message like this:

WARNING: A terminally deprecated method in java.lang.System has been called
WARNING: System::setSecurityManager has been called by lotus.notes.AgentSecurityManager (file:/C:/Domino/ndext/Notes.jar)
WARNING: Please consider reporting this to the maintainers of lotus.notes.AgentSecurityManager
WARNING: System::setSecurityManager will be removed in a future release

This is actually totally fine. As it says, the whole SecurityManager apparatus is gone in future versions, but Notes and Domino still use it for agents (and, unfortunately, XPages). While I long for the day when I never have to think about it again, this is a reasonable-enough compromise for 14 as a "transitional" version in its Java journey. So... you can ignore this and not worry.

"Java Main Sources" and "Java Test Sources" Working Sets

If you use Working Sets in Designer, you may notice two entries not of your creation:

Screenshot of the 'Select Working Set' dialog in Designer 14

These showed up in Eclipse somewhere along the line, presumably to make it easier for people to select those types of projects without manually managing their working sets. Designer inherits this and HCL didn't remove them, but you're free to delete them if you want.

"AbstractCompiledPage cannot be resolved"

This is another one that came up back in 9.0.1FP10 and it's still here, but it's less of an issue: every once in a while, when building a project, Designer will complain about the AbstractCompiledPage class not being found. In my experience, this only shows up temporarily during a build, but it's worth noting that, if it does stick around for you, a Project - Clean should fix it.

JAX-B and CORBA

After Java 8, a few Java EE components were removed from the normal JRE distribution in favor of developing them in Java EE, then Jakarta EE. JAX-B is one of them (now Jakarta XML Binding) - we don't normally use it directly, but it comes up sometimes, either in your own code (likely as one of the other old-timey BASE64 workarounds) or as a transitive dependency. It shouldn't matter in Designer, but, if you're writing Domino-targeting code outside Designer, you may need to be aware. One way or another, you should add this as a dependency - in a basic case, you can add "jakarta.xml.bind-api.jar" and "jaxb-impl.jar" from "ndext" to your build path.
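
If you're building with Maven, the equivalent is to declare it as a dependency rather than pulling JARs from "ndext" - something like this, with versions adjusted to taste:

<dependency>
	<groupId>jakarta.xml.bind</groupId>
	<artifactId>jakarta.xml.bind-api</artifactId>
	<version>3.0.1</version>
</dependency>
<dependency>
	<groupId>com.sun.xml.bind</groupId>
	<artifactId>jaxb-impl</artifactId>
	<version>3.0.2</version>
	<scope>runtime</scope>
</dependency>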

Similarly, CORBA support classes ("org.omg" stuff, usually) used to come with the JRE and now don't. To work around this, HCL did what I've done in the past and included "glassfish-corba-omgapi.jar", in their case putting it in "ndext". If you're using Notes.jar with Java > 8 outside Designer, you'll need this one too.

Conclusion

I think that covers it for now. Considering how much changed with Java from 8 to 17, this could have been a lot rougher, though I fear that some of these workarounds will plague us for a long time.

In the mean time, I'd appreciate it if you vote for this Aha idea to keep Domino on a good Java update cadence. It'd be a shame if we sit on 17 right up until the very end of its life, as we did with 6 and 8, in part since these multi-version moves are more painful than single-LTS updates.

XPages JEE 2.14.0

Fri Oct 27 11:47:02 EDT 2023

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

Today, I released version 2.14.0 of the XPages Jakarta EE Support project. As with the last few releases, this is primarily about bug fixes and compatibility as I prepare for the big switch in 3.0, but there are some notable, if small, feature additions.

To begin with, I improved handling of reading JSON in NoSQL entities when reading from a view. This applies to the @ItemStorage(type=ItemStorage.Type.JSON) annotation on entity properties, which causes the value to be loaded and stored as JSON, useful for storing custom class types in a document. Now, such values can be read from view entries - previously, this processing was skipped for those. Of note when using this: normally, storing as JSON will automatically set the item's summary flag to false, to avoid overflowing the summary limit. However, you can add @ItemFlags(summary=true) to the property to override this behavior so that the values can show up in views.
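
As a hypothetical example of such a property (the entity and its custom class are invented for illustration, and the package names are from memory - check the project docs):

import jakarta.nosql.mapping.Column;
import jakarta.nosql.mapping.Entity;

import org.openntf.xsp.nosql.mapping.extension.ItemFlags;
import org.openntf.xsp.nosql.mapping.extension.ItemStorage;

@Entity
public class Person {
	// Serialized to JSON in the backing document; summary=true keeps it readable in view entries
	@Column("Prefs")
	@ItemStorage(type=ItemStorage.Type.JSON)
	@ItemFlags(summary=true)
	private UserPreferences prefs; // UserPreferences is a hypothetical custom class
	
	// (ID field, getters, and setters omitted)
}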

Additionally, I added the ability to use JAXRSClassContributors inside the NSF. These were originally an internal mechanism for the project to dynamically add REST endpoints and extensions, like those used by MVC and OpenAPI. Now, though, I've made it so that such classes can be registered via a file named "META-INF/services/org.openntf.xsp.jaxrs.JAXRSClassContributor" in the NSF, and also added the ability to specify configuration properties. The latter is important because, though all xsp.properties values were already inserted into the JAX-RS configuration, there was no way to provide non-string values. This came up in the context of MVC, which has a CSRF configuration property that must be an enum value.
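
Registration works like any ServiceLoader-style extension: the file just lists the qualified names of your implementation classes, one per line. The class name here is hypothetical:

# Contents of META-INF/services/org.openntf.xsp.jaxrs.JAXRSClassContributor
com.example.app.AppClassContributor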

For the final feature, I improved support for JAX-RS's status-indicating exceptions, such as NotSupportedException and BadRequestException. Previously, the project supported translating NotFoundException to a 404, but now it will also translate these other standard exceptions to their corresponding HTTP statuses.

The release is otherwise rounded out by a number of bug fixes to fix problems encountered in the wild. Additionally, I added a workaround for some classpath pollution in the latest Domino 14 beta - I hope that the trouble will be gone for GA, but this project should handle it either way.

Overdue CollabSphere Followup

Wed Oct 11 16:59:29 EDT 2023

Though it's been over a month since CollabSphere 2023, I've somehow not yet gotten around to talking about it here. Time to remedy that!

Webinar and Workshop

I presented a couple sessions at CollabSphere, but the meatiest was my workshop on the XPages Jakarta EE project. A bit before the show, I wrote a post discussing modes of development with it and that formed the structure of the workshop.

It also formed the structure of August's OpenNTF webinar. Fortunately, unlike the workshop, the webinar was recorded, and that is up on OpenNTF's YouTube channel. Though the workshop was longer and had some refinements and audience discussion, they both worked from the same original slide deck and so the webinar did a pretty good job of covering the same material.

As part of the presentations, I created four versions of the same basic to-do tracker app, written in each of the four modes I talked about. I put them up in the project repository with a quick README introducing each of them. With the project installed (they were written to 2.13.0 and above), you should be able to sync the ODPs with NSFs in Designer and poke around yourself. They're all intentionally-similarly structured and basic, with all of their elements in either Java classes, file resources, or stylesheets. They're also all intentionally under-developed, so you're not allowed to make fun of them.

JNX

This one will be a doozy! In fact, it's such a doozy that I'm going to keep kicking the can down the road as far as properly talking about it.

In short, Domino JNX is a more-modern API for Domino access - it was initiated and is used heavily by the Domino REST API (DRAPI), but also stands on its own. It started out essentially as another swing at Karsten Lehmann's Domino JNA and shares a lot of the same ideas and approaches (and code - the committer info in the GitHub repo is deceptive, as I got to paste a whole mountain of existing code from another repo into this one and thus got credit for it).

Now that it's open source, there's some significant work to do as far as documentation, examples, integration, and distribution go. I plan to find time to improve a lot of these aspects, including blogging here, though it's the sort of thing that always ends up below other priorities.

In any event, it's quite neat and has a lot of good capabilities. I plan to write a driver for JNoSQL for it, either to be alongside or to wholly supplant the Notes.jar-based one currently used by the project. The neat thing there is that users of XPages JEE won't have to care much: it'll just get a bit faster in parts but should largely work the same, except maybe with a change to the way you point at other databases.

New Tiny Project: Wink Chattiness Patch

Mon Sep 18 17:24:06 EDT 2023

Tags: domino

I've been using the Domino 14 betas for development for a while now, and one of the things that has driven me a little nuts is the way Wink spews a bunch of INFO-level logs to the server console when the XPages runtime initializes. You've probably seen it - this stuff:

Screenshot of a Domino console on Windows displaying Wink INFO logs from Verse

It goes on for a while like that.

This isn't new with 14 as such - it's just that 14 now ships with Verse by default, and Verse uses the Wink distribution that came along with the Extension Library, and so now everyone sees this.

I had encountered this before, back when one of my client projects used Wink before transitioning to what became the XPages JEE project. Back then, I wrote a shim in the project's Activator to reflectively insert replacements for each logger object. After seeing these messages from Verse for the millionth time, I decided to dust that off and turn it into a little project.

Thus was born the Wink Chattiness Patch, a project with an expected single release that has a simple purpose: when installed, it makes Wink less annoying.

This version is actually a bit more clever than the original from my client project. Part of that was born out of necessity: originally, the shim involved writing to final fields in classes, but that's no longer possible (normally) since Java 12, so that path was out. Instead, now I pre-populate the internal Logger cache with my shim objects. I also made them a bit better: rather than just lessening the threshold for logging from INFO to WARN, they redirect to java.util.logging, which then will log to error-log-*.xml as appropriate, like other parts of the XPages stack.

Ideally, HCL will improve this themselves (I recommend they look at replacing slf4j-simple in the Wink bundle with slf4j-jdk14, which is probably the most-expedient path), but, failing that, this patch should make your Domino console just a bit less hairy.

Homelab Rework: Phase 2

Fri Sep 15 11:39:42 EDT 2023

Tags: homelab linux
  1. Planning a Homelab Rework
  2. Homelab Rework: Phase 1
  3. Homelab Rework: Phase 2
  4. Homelab Rework: Phase 3 - TrueNAS Core to Scale

CollabSphere 2023 came and went the other week, and I have some followup to do from that for sure, not the least of which being the open-sourcing of JNX, but that post will have to wait a bit longer. For now, I'm here to talk about my home servers.

When last I left the topic, I had installed Proxmox as my VM host of choice to replace Windows Server 2019, migrated my existing Hyper-V VMs, and set up a Windows 11 VM with PCIe passthrough for the video card. There were some hoops to jump through, but I got everything working.

Now, though, I've gone back on all of that, or close to it. Why?

Why Did I Go Back On All Of That, Or Close To It?

The core trouble that has dogged me for the last few months is performance. While the host I'm using isn't a top-of-the-line powerhouse (namely, it's using an i7-8700K and generally related-era consumer parts), things were running worse than I was sure they should. My backup-runner Linux VM, which should have been happy as a clam with a Linux host, suffered to the extent that it never actually successfully ran a backup. My Windows dev VMs worked fine, but would periodically just drag when trying to redraw window widgets in a way they hadn't previously. And, most importantly of all, Baldur's Gate 3 exhibited bizarre load-speed problems: the actual graphical performance was great even on the highest settings, but I'd get lags of 10 seconds or so loading assets, much worse than the initial performance trouble reported by others in the release version of the game.

Some of this I chalked up to lack of optimized settings, like how the migrated VMs were using "compatibility" settings instead of all the finest-tuned VirtIO stuff. However, my gaming VM was decked out fully: VirtIO network and disk, highest-capability UEFI BIOS, and so forth. They were all sitting on ZFS across purely-NVMe drives, so they shouldn't have been lacking for disk speed. I tried a bunch of things, like dedicating an SATA SSD to the VM, or passing through a USB 3 SSD, but the result was always the same. Between game updates and re-making the VM in a "lesser" way with Windows 10, I ended up getting okay performance, but the speed of the other VMs bothered me.

Now, I don't want to throw Proxmox specifically or KVM generally under the bus here. It's possible that I could have improved this situation - perhaps, despite my investigations and little tweaks, I had things configured poorly. And, again, this hardware isn't built for the purpose, but instead I was cramming server-type behavior into "prosumer"-at-best hardware. Still, Hyper-V didn't have this trouble, so it nagged at me.

But Also Containers

As I mentioned towards the bottom of the previous post, Proxmox natively uses Linux Containers and not Docker, but I wanted to see what I could do about that. I tried a few things, installing Docker inside an LXC container as well as on the main host OS, but ran into odd filesystem-related problems within Dockerfiles. I found ways to work around those by doing things like deleting just files instead of directory trees, but I didn't want to go and change all my project Dockerfiles just to account for an odd local system. I had previously used my backup-manager VM for Docker, but that VM's performance trouble made me make a new secondary one. That ended up expanding the overhead and RAM consumption, which defeated some of the potential benefits.

Little Things

Beyond that, there were little things that got to me. Though Proxmox is free, it still gives a little nag screen about being unlicensed the first time you visit the web UI each reboot, which is a mild annoyance. Additionally, it doesn't have built-in support for suspending/resuming active VMs when you reboot the machine, as Hyper-V does - I found some people recommending systemd scripts for this, but that would introduce little timing problems that wouldn't arise if it was a standard capability.

There also ended up being a lot that was done solely via CLI and not the GUI. To an extent, that's fine - I'm good with using the CLI for quite a bit - but it did defeat some of the benefit of having a nice front-end app when I would regularly drop down to the CLI anyway for disk import/export, some device assignments, and so forth. That's not a bug or anything, but it made the experience feel a bit rickety.

The New Setup

So, in the end, I went crawling back to Windows and Hyper-V. I installed Windows 11 Pro and set up the NVMe drives in Storage Spaces... I was a little peeved that I couldn't use ReFS, since apparently "Pro" and "Pro for Workstations" are two separate versions of Windows somehow, but NTFS should still technically do the job (I'll just have to make sure my backup routine to my TrueNAS server is good). After I bashed at it for a little while to remove all the weird stupid ads that festoon Windows nowadays, I got things into good shape.

Hyper-V remains a champ here. I loaded up my re-converted VMs and their performance is great: my backup manager is back in business and my dev VMs are speedy like they used to be.

Among the reasons why I wanted to move away from Server 2019 in the first place is that the server-with-desktop-components versions of Windows always lagged behind the client version in a number of ways, and one of them was WSL2. Now that I'm back to a client version, I was able to install that with a little Debian environment, and then configure Docker Desktop to make use of it. With some network fiddling, I got the Docker daemon listening on a local network port and usable for my Testcontainers suites. Weirdly, this means that my Windows-based setup for Docker is actually a bit more efficient than the previous Linux-based one, but I won't let that bother me.

As for games, well... it's native Windows. For better or for worse, that's the best way to run them, and they run great. Baldur's Gate 3 is noticeably snappier with its load times already, and everything else still runs fine.

So, overall, it kind of stings that I went back to Windows as the primary host, but I can't deny that I'm already deriving a lot of benefits from it. I'll miss some things from Proxmox, like the smooth handling of automatic mounting of network shares as opposed to Windows's schizophrenic approach, but I'm otherwise pleased with how it's working again.

CollabSphere Workshop Schedule Update and OpenNTF Webinar

Tue Aug 22 14:06:55 EDT 2023

As I mentioned last month, I'll be participating in a couple presentations, including a workshop on the XPages Jakarta EE project.

This workshop is scheduled for Tuesday, but its time has shifted. Originally, it was scheduled for 1 PM - 3 PM local time, but it's moved up to 9 AM - 11 AM to help with some coordination. Looks like it's also in the Pullman room and not the Linnaeus room now. Same idea, but you'll probably want to bring a cup of coffee with you.

In addition, and particularly so if you won't be attending CollabSphere, I'll be doing this month's OpenNTF webinar this week, on Thursday. The plan for that is to be like a mini/less-interactive version of the workshop, but covering the same general idea of the various ways to develop JEE apps on Domino with the project. If you're interested in that, you can register here.

Modes Of App Development With XPages Jakarta EE

Fri Jul 28 11:46:50 EDT 2023

I've been working on my workshop for this year's CollabSphere, and one of the main decisions I have to make is what I'm going to focus on. The idea of the workshop is to give a bit more brass-tacks information about how to use the project: rather than just a list of features, it'll be about the specific business of building an app using it.

But how does one build an app in it? There's certainly no lack of tools available, but that leads to the opposite problem: what's the right one for your project? What's likely to be the most common path people take?

The Types

As I've been working on it, I've grouped things into four main categories, and I figured it'd be useful to enumerate them here to coordinate my thoughts and provide some general information. There aren't hard lines between these: you can use any mixture of some or all of the parts in an app, and do different mixes in different apps. These are just what I expect to be the main groupings:

  • "XPages Plus", using some new capabilities in existing or new apps with XPages-based UIs
  • REST services, focusing on providing REST endpoints for JavaScript-based apps or other servers
  • MVC and JSP, focusing on clean, lightweight UIs for document-based apps, but less ideal for complex business logic
  • JSF, building the same sorts of apps XPages is adept at, but using newer technology

"XPages Plus"

The first route is how the project got started: you keep building XPages apps but sprinkle in a few new capabilities to improve them.

For example, you could replace your managed beans defined in faces-config.xml with CDI beans, allowing you to get the quick benefit of annotation-based definitions and then the bigger benefits of @Inject, producer methods, and interceptors.
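
For example, a bean that would have been a <managed-bean> entry in faces-config.xml becomes - with hypothetical names - just:

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import jakarta.inject.Named;

@ApplicationScoped
@Named("appSettings") // available to EL as #{appSettings}
public class AppSettings {
	@Inject
	private SomeOtherBean other; // injected by CDI; SomeOtherBean is another hypothetical bean
	
	// ...
}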

You could also start using newer EL features, like the long-desired ability to pass parameters to methods.

This path wouldn't necessarily require a lot of reworking of your app or changing the way you think about XPages development, but would still be something of a minor development refresh and can set you up well for future improvements.

Your data access will likely still be through the traditional xp:dominoDocument and xp:dominoView components, but you could also write beans that access data with lotus.domino or ODA, or switch to using the NoSQL driver.

REST Services

Alternatively, you could decide you want to focus your apps around REST services with either a JavaScript app in, for example, React as the front end, or providing services to remote servers.

With this, you'd largely stop using XPages design elements entirely, instead defining your services in Java classes with JAX-RS annotations. This brings huge advantages over other ways to write REST services on Domino, with the JAX-RS annotations allowing for clear, logical definition of services, their parameters, and their output. Moreover, the ancillary tooling brings things like automatic OpenAPI definitions, which would be annoying to maintain using things like the XPages-side REST controls.
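
A hypothetical NSF-housed resource gives the flavor (the Todo classes are invented for the example):

import java.util.List;
import java.util.stream.Collectors;

import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/todos")
public class TodosResource {
	@Inject
	private TodoRepository repository; // e.g. a JNoSQL repository bean
	
	@GET
	@Produces(MediaType.APPLICATION_JSON)
	public List<Todo> list() {
		// Assuming a Stream-returning findAll() on the repository
		return repository.findAll().collect(Collectors.toList());
	}
}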

This path is good if you're specifically aiming to build a JavaScript-based app, either because you just like it, because your organization decided to go that route, or if you have a larger team that splits the duties of front-end and back-end developers. It can also naturally blend into the next one.

Your data access here won't be through the XPages components, but you could still use lotus.domino or ODA classes, or switch to the NoSQL driver. That actually goes for the next two, too, so we'll just count that as assumed.

MVC and JSP

I'll admit that part of the reason I want to consider this a top-tier route is because I just personally really like it. I've had a blast writing apps like this blog and the OpenNTF site using this path, with its much-cleaner code and back-to-basics approach to HTML.

Regardless of my personal enjoyment of it, though, this has some nice advantages. The fact that MVC builds on top of JAX-RS means that it melds well with the REST-services approach above. For example, you might primarily write REST services for a JS app, but then do a set of "admin" pages with MVC. Or you might use this as part of the prototype phase: structure your app the same way you will when you expand to a multi-tier team, but start out by doing a quick UI with MVC on top of the same or related endpoints.

With this path, your app will start with Java classes with JAX-RS annotations, and then you'd mix it with JSP files inside WebContent/WEB-INF. One down side to this approach is that Designer doesn't provide much help for writing JSP files. In the tooling, I bind .jsp and .tag files to the HTML editor, so you at least get normal HTML assistance, but that won't help you with specific JSP tags and EL. Fortunately, the set of tools you'll likely use in JSP is comparatively small, so you'll eventually memorize things like <c:forEach items="..." var="...">...</c:forEach> in much the same way that you could eventually write out an <xp:repeat/> in your sleep in XPages.
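
Tying it together, a minimal controller/JSP pair might look like this - again with hypothetical names, and note that an MVC controller is just a JAX-RS resource with an extra annotation:

import java.util.stream.Collectors;

import jakarta.inject.Inject;
import jakarta.mvc.Controller;
import jakarta.mvc.Models;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

@Path("/todos")
@Controller
public class TodosController {
	@Inject
	private Models models; // the bag of values exposed to EL in the view
	
	@Inject
	private TodoRepository repository; // hypothetical, as in the REST example
	
	@GET
	public String list() {
		models.put("todos", repository.findAll().collect(Collectors.toList()));
		return "todos.jsp"; // resolved from WEB-INF/views by default
	}
}

<%-- WebContent/WEB-INF/views/todos.jsp --%>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<ul>
	<c:forEach items="${todos}" var="todo">
		<li>${todo.title}</li>
	</c:forEach>
</ul>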

JSF

This one, technically tricky though it may be, is conceptually straightforward: write the same sort of apps you do with XPages, but do it with modern JSF instead. This makes a lot of sense, since JSF shares XPages's acumen with complicated forms with partial refreshes and changing state data, but has benefited from some development that didn't happen on the XPages side.

It's not a direct replacement: in particular, JSF has no knowledge of Domino data sources, so there's no xp:dominoDocument or xp:dominoView. You'd still need to do your data access via beans, as in the previous two options, likely using either lotus.domino/ODA or the NoSQL driver. Additionally, Designer really doesn't help you here - again, I map .xhtml and .jsf files to the HTML editor, but JSF components have a lot of properties to set, and so you'll be spending a lot of time referencing documentation.

Still, it's clear why this is proving to be a popular path. The development model is the same as in XPages, while the JSF stack (especially including PrimeFaces) brings a lot of amenities that aren't in XPages and are also more portable to other environments.

Conclusion

So, for now, I'm thinking of splitting up the workshop to cover each of these paths a bit. That runs the risk of feeling like too much of a grab bag, but I don't want to give the opposite impression, that the project only allows for some specific path. It's a broad platform update, accommodating many development approaches, and I want to keep that clear. Fortunately, each path has a pretty-clean pitch, and the shared components (CDI, bean validation, the REST client, etc.) build on each other well, so the idea that it's a pool of features that you can swim in is, I think, compelling.