XPages JEE 3.0 Beta 4

Wed May 22 13:48:19 EDT 2024

Earlier today, I uploaded beta 4 of XPages JEE 3.0 to GitHub. I've been taking a slow approach to this release due to its "breaking changes" nature, but I think it's just about ready for release.

Domino 14

Like previous betas, this release requires Domino 14 (and Notes 14 for development), since it moves to a baseline of Jakarta EE 10, which in turn requires Java 11. Doing this let me get rid of some extra shim code that was needed to support both Domino 14 and previous versions, and also let me move to some newer language constructs. If you're interested in the sorts of things that the new versions of Java brought, check out the OpenNTF webinar from April, where I talked about just that.

Library Reorganization

Beyond the Java version requirement, the big breaking change I made was to finally shrink the number of XPages libraries and p2 features in the project. As the project grew, I kept adding new distinct XPages libraries, on the principle of keeping each spec distinct, as they often technically are. A few things made me want to fix this, though:

  • Checking all the boxes in the Xsp Properties editor for each library was annoying
  • Checking "Yes, install this plug-in" for every single component, plus its source version, when installing in Designer was very annoying
  • I had to do weird tricks to add features that touched multiple specs. For example, the project tree had a bunch of cross-spec fragments like "jaxrs.cdi" and "json.cdi" to contribute parts for when CDI was present but not break things when it wasn't. This added an extra layer of indirection and maintenance hassle
  • The specs themselves have been converging, particularly in the sense that more and more they assume the "backbone" of CDI is present. For example, Faces removed its original @ManagedBean and related support in favor of going all-in on CDI. Jakarta REST is moving towards the same
  • It was hard to think of realistic scenarios where it would be important to split up the specs like this, using, say, REST but not CDI or Validation

Now, there are just three: "org.openntf.xsp.jakartaee.core", "org.openntf.xsp.jakartaee.ui", and "org.openntf.xsp.microprofile". I was tempted to roll MicroProfile into "core", but they're conceptually (and administratively) distinct enough that it was worth separating them. With this change, it's not only less annoying to install, but it lets me make a lot more assumptions about what is present across specs, simplifying a lot of little things.

Deep-Dive Sidebar: Class Loading

One interesting aspect I ran into when making this change was that I had to readjust my mental model for how class loading is done from an NSF-based application and the libraries it uses. The way it works mostly aligns, conceptually, with what you see in Designer:

  • Select a library to depend on
  • The XspLibrary has a "getPluginId" method, which Designer then uses to add the OSGi bundle to the classpath
  • Any Require-Bundle dependencies in that plugin marked as "visibility:=reexport" are also included on the classpath

So, in this way, you'd previously select the "org.openntf.xsp.cdi" library, which would then add a dependency on the bundle of the same name, which would in turn re-export the things the NSF should see, such as the CDI API classes.

When I consolidated the libraries, I did it in the straightforward way: I made new "*.library" bundles for them and then added the existing spec-specific bundles as re-exported dependencies. As far as Designer was concerned, all was well, and there was just another little layer in between.

However, that's not quite the whole story when it comes to the runtime on the server. Though Designer presents the NSF as a pseudo-OSGi bundle using the Plug-in Development Environment, Domino doesn't do the same thing. What Domino does is use a class called ModuleClassLoader (not to be confused with Equinox's ModuleClassLoader, which is entirely different and IS an OSGi loader) to handle loading classes from the NSF and its dependencies. The way it gets to its dependencies isn't really a "true" OSGi way, though: it keeps track of a collection of ClassLoader objects as extraDepends, consulting each in turn as needed. Those ClassLoader objects, at least in post-8.5.2-era Domino, are the internal class loaders from the library OSGi bundles. This is cheating, and I imagine it was done for pragmatic transitional reasons when OSGi came into the picture.
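To illustrate the idea, here's a rough sketch of that delegation model - hypothetical code, since the real class is internal to Domino, but it captures the conceptual shape:

import java.util.ArrayList;
import java.util.List;

public class ModuleClassLoader extends ClassLoader {
	// Internal class loaders from the contributing libraries' OSGi bundles
	private final List<ClassLoader> extraDepends = new ArrayList<>();

	@Override
	protected Class<?> findClass(String name) throws ClassNotFoundException {
		// (Loading from the NSF's own Java design elements elided)
		// Consult each library loader in turn - delegation, not OSGi resolution
		for(ClassLoader depends : extraDepends) {
			try {
				return depends.loadClass(name);
			} catch(ClassNotFoundException e) {
				// Keep trying the next one
			}
		}
		throw new ClassNotFoundException(name);
	}
}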

The old layout conceptually looks like this:

Diagram of NSF to old library dependency

At first blush, this seems like a "six of one, half a dozen of the other" sort of situation, but it's not quite. What this setup does that normal OSGi doesn't is that it exposes META-INF/services files inside the direct dependencies to the application's ClassLoader, whereas these are normally encapsulated in OSGi. The effect was that a bunch of things that used to work started to fail - REST couldn't find all its output-writing classes, Validation couldn't find its implementation, and so forth. This is because they would all internally ask the thread-context ClassLoader (i.e. the NSF's loader) for resources within META-INF/services, and the extraDepends list used to be able to find them. Now that there was a layer of indirection, this no longer worked: the extraDepends loaders could see their own stuff but would not traverse the OSGi barrier to peek inside their further dependencies for these. Conceptually, now we have this:

Diagram of NSF to new library dependency

A direct ClassLoader dependency allows reading of resources, but a true OSGi-type dependency does not. So the result is that I had to "promote" a bunch of META-INF/services files from the now-downstream plugins into the "*.library" ones. It all makes sense once you see how the gears are moving, but it sure threw me for a loop for a while.
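The failing pattern boils down to lookups like this (simplified, and with a hypothetical service name):

import java.io.IOException;
import java.net.URL;
import java.util.Enumeration;

public class ServiceLookupExample {
	public static void main(String[] args) throws IOException {
		// Spec implementations commonly do this sort of lookup internally
		ClassLoader tccl = Thread.currentThread().getContextClassLoader(); // the NSF's loader at runtime
		Enumeration<URL> services = tccl.getResources("META-INF/services/some.spec.ServiceProvider");
		// Old layout: the extraDepends loaders surfaced these files directly.
		// New layout: the files sat behind the OSGi barrier in re-exported
		// dependencies, so this came back empty until they were "promoted".
		while(services.hasMoreElements()) {
			System.out.println(services.nextElement());
		}
	}
}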

Bundle and Package Renaming

Okay, now back to the changes.

Since I was already breaking things anyway, I decided this was a good opportunity to fix the names of the bundles and packages in the project's source. For example, some names were antiquated: what was once "JSF" is "Jakarta Faces", but my bundle was "org.openntf.xsp.jsf". Additionally, I was inconsistent in my hierarchy: while Transaction was in "org.openntf.xsp.jakarta.transaction", others (like Faces there) skipped the "jakarta" level of the hierarchy. These don't normally matter to developers consuming the library, but they annoyed me. Now, all of the bundles and their contained packages are within either "org.openntf.xsp.jakarta", "org.openntf.xsp.jakartaee" (for platform-wide capabilities), or "org.openntf.xsp.microprofile".

Along with this come a couple of potential breaking changes for app-level code, such as moving org.openntf.xsp.beanvalidation.XPagesValidationUtil to org.openntf.xsp.jakarta.validation.XPagesValidationUtil, but there won't be TOO many due to this change.

Jakarta Data and NoSQL Changes

This one isn't from my latest round of changes and has been the case since early in the 3.x stream, but it's worth mentioning again here. The Repository concept from Jakarta NoSQL moved from that spec to the new "Jakarta Data" spec, and so related packages changed from jakarta.nosql.mapping to jakarta.data. Additionally, since the NoSQL spec shrunk to accommodate, things like @Column changed from jakarta.nosql.mapping.Column to jakarta.nosql.Column. It makes sense as NoSQL has been an evolving spec all along, but I suspect that this will be the biggest app-code-breaking change it experiences for a good while.
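In concrete terms, an entity/repository pair now looks something like this - a minimal sketch, where "Person" and its properties are placeholders and the base interface name depends on the exact Jakarta Data version in use:

// Person.java
import jakarta.nosql.Column;
import jakarta.nosql.Entity;
import jakarta.nosql.Id;

@Entity
public class Person {
	@Id
	private String id;

	@Column // previously jakarta.nosql.mapping.Column
	private String name;

	// getters/setters elided
}

// PersonRepository.java
import jakarta.data.repository.BasicRepository;
import jakarta.data.repository.Repository;

@Repository // the Repository concept now lives in Jakarta Data
public interface PersonRepository extends BasicRepository<Person, String> {
}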

Release and Future Versions

My next steps are to put this through its paces now that all the issues are closed. Though I've ported everything to the JEE 10 versions, I haven't yet tested most of the new features to make sure they work. While JEE 10 was largely a "cleanup" release, there are a bunch of new features, particularly in Faces, which is in turn always the jankiest part of the stack on Domino.

Post-3.0, I expect that my focus will start to shift to Jakarta EE 11. For a time, I was going to be SOL with it: though Domino 14 bumped Java to 17, JEE 11 was slated to target Java 21 at a minimum. In the meantime, however, that target shifted down to 17, putting it back on the table for Domino. JEE 11 was originally slated for Q1 of this year, but it slipped to some time around summer. That fits reasonably well with my cadence here. JEE 11 is technically also a breaking release, but I suspect that it won't break features that XPages JEE users use, at least not after this hurdle here.

Simplifying the Maven Build of the NSF File Server Project

Wed Apr 10 17:02:09 EDT 2024

When working on the NSF File Server project that I talked about the other day, I took a slightly different tack for building it than I had in the past, and I think it's worth going over some of that in case it's useful for others.

Initial Version

The first version of this project was a non-OSGi WAR file meant to be deployed to an app server like Liberty, not to Domino's OSGi stack, and so it's never involved Tycho. This made things mostly simpler, since its various dependencies are normal Maven dependencies and I didn't have to jump through any of the annoying hoops.

However, it did have some native Domino dependencies: Notes.jar and the NAPI. These would need to be included as Maven dependencies and brought into the final WAR. The way I handled this was using the generate-domino-update-site project, which lets you first generate a p2 site in the style of the painfully outdated IBM-provided update site and then, if desired, turn that p2 site into more-normal Maven artifacts.

When I eventually switched from targeting a WAR file to having it run on Domino, I used the same dependency structure. The Domino version runs as an HttpService implementation, and so I pointed at the Mavenized version of the com.ibm.xsp.bootstrap and com.ibm.domino.xsp.adapter bundles.

Then, I used the maven-bundle-plugin, which fits the job of taking an otherwise-normal Maven project and making it work in OSGi environments (mostly). The way that plugin works is that you specify a lot of your MANIFEST.MF rules in the pom.xml:

<plugin>
	<groupId>org.apache.felix</groupId>
	<artifactId>maven-bundle-plugin</artifactId>
	<extensions>true</extensions>
	<configuration>
		<instructions>
			<Bundle-SymbolicName>org.openntf.nsffile.httpservice;singleton:=true</Bundle-SymbolicName>
			<Bundle-RequiredExecutionEnvironment>JavaSE-1.8</Bundle-RequiredExecutionEnvironment>
			<Export-Package/>
			<Require-Bundle>
				com.ibm.domino.xsp.adapter,
				com.ibm.commons,
				com.ibm.commons.xml
			</Require-Bundle>
			<Import-Package>
				javax.servlet,
				javax.servlet.http
			</Import-Package>
			<Embed-Dependency>*;scope=compile</Embed-Dependency>
			<Embed-Transitive>true</Embed-Transitive>
			<Embed-Directory>lib</Embed-Directory>
			
			<_removeheaders>Require-Capability</_removeheaders>
			
			<_snapshot>${osgi.qualifier}</_snapshot>
		</instructions>
	</configuration>
</plugin>

The first couple are one-for-one matches to what you'd have in the MANIFEST.MF, but things get weird once you get to the "Embed-*" ones.

The Embed-Dependency instruction is a potent one: you give it a description of what dependencies you want embedded in your OSGi bundle (in this case, all my non-provided dependencies), and then it does the job of copying them into your final bundle JAR. You can do this other ways - copying them manually, using the Maven Dependency Plugin, or others - but this handles all your transitive stuff nicely for you, thanks to Embed-Transitive. I use Embed-Directory here just for cleanliness - the result is functionally the same without it.

The final bits are just for cleanliness: I remove Require-Capability to avoid some trouble I had with older Domino versions, and then I set what the snapshot value will be, which ends up being the current build time.

With this, I end up with a single OSGi bundle with everything in it. This works well for this sort of project - for something to be used in Designer, I prefer a big pool of distinct OSGi bundles so that you can look up the source properly, but something server-only like this doesn't need that.

2.0 Version

In this new version, the switch to JNX meant that I was tantalizingly close to not having to do any weird dependency stuff: JNX is distributed in Maven Central, so I didn't need to have the weird locally-built stuff like I did for Notes.jar and the NAPI.

However, that wasn't everything: there are still the "bootstrap" bundles containing the HttpService superclass and related classes. While I don't need to distribute those anywhere, they're still required to compile the classes - no amount of non-verified text files or the like will get around that.

I came up with another way, though. Java only needs classes that look like those to compile, and then the compiled class of mine will be the same regardless of anything else. This is critical: I don't actually need any implementation code, and that's the part I can't redistribute. So I made two little Maven modules: com.ibm.xsp.bootstrap.shim and com.ibm.domino.xsp.adapter.shim. These modules contain just a handful of classes, and of those classes only the methods actually referenced by my code. Since these modules will then be marked as "provided", they won't be bundled into the final JAR.
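As an example of the shape of such a shim - illustrative only, since the real signatures live in HCL's bundles and a shim should mirror exactly what your code touches:

// A compile-time stand-in for the real HttpService. No implementation
// logic: just enough surface for javac to resolve the references.
package com.ibm.designer.runtime.domino.adapter;

public abstract class HttpService {
	public HttpService(LCDEnvironment env) {
		// Intentionally empty - this class never runs; the real one does
	}

	// ...only the methods actually referenced by the consuming code...
}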

This squared the circle nicely: now I can compile the Java side without any weird prerequisites. Admittedly, the two NSFs in the module set still require an NSF ODP Tooling environment, which is a whole other ball of wax, but it's a step in the right direction.

Other Uses

This technique can be used in other similar projects when you only need a few classes from the XPages stack. For example, if your goal is to just wrap a third-party library and provide it to XPages in Designer, you could probably do this by making a stub implementation of XspLibrary and related classes, and skip the whole generate-domino-update-site step. The more you use from the stack, the less practical this is - for example, the XPages Jakarta EE project reaches into all sorts of crap, and so I can't really do this there. For this, though, it works nicely.

NSF File Server 2.0

Sun Apr 07 13:59:18 EDT 2024

A few years ago, I made a little project that hosts an SFTP server that stores documents in an NSF. I've used it here and there since then - as in the original post, I stashed some company docs in it to have them nicely synced among our Domino servers, and I've also had cases where clients use it to, for example, provide a way for their vendors to upload files in a standard way.

The other week, I decided to dive back into it to add some capabilities I'd wanted for a while, and the result is version 2.0.0. This version is a significant revamp that adds quite a bit.

Multiple Mounts

The first limitation I wanted to improve was the fact that the first version was restricted to a single NSF. That works fine in the basic case, but I wanted to start doing things like storing server config backups in there, and wouldn't necessarily want them in the same NSF as, say, company contracts, credentials, and secrets.

The way I went about this was to make it so that the new configuration NSF has a "Mounts" view that lets you specify a path in a conceptual top level of the filesystem that would then point to a target NSF. This allows the admin to do things like have separate ACLs for different mounts - since the client will act as an authenticated Domino user, these will be properly obeyed, and a user won't be able to access documents they don't have rights to.

Additionally, I could configure it so that not all mounts are present on all servers, which will come into play particularly with the next feature.

Screenshot of the Mounts view in the file server config NSF

New Filesystem Types

Once I had a composite top-level filesystem, I realized that it wouldn't be terribly difficult to allow more filesystem types than the original NSF file store I made. That filesystem is built using the NIO File System Provider framework added in Java 7, and that system is designed to be pretty extensible. By default, Java comes with a few providers: the normal local filesystem as well as one that can treat a ZIP or JAR as a contained filesystem itself. These are accessed in a generic way, where you specify a URI and a map of "env" properties that are provider-specific.

For example, the ZIP filesystem takes a URI in the form of "jar:file:/some/path/to/file.zip" and an environment map configuring whether the filesystem should be created if it doesn't already exist in memory and what encoding to use for filenames (very important if you have Unicode characters in there).
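In code, that looks along these lines - a minimal sketch using the JDK's built-in ZIP provider:

import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;

public class ZipFsExample {
	public static void main(String[] args) throws Exception {
		URI uri = URI.create("jar:file:/some/path/to/file.zip");
		// Provider-specific "env" properties: create the ZIP if missing, use UTF-8 names
		Map<String, String> env = Map.of("create", "true", "encoding", "UTF-8");
		try(FileSystem fs = FileSystems.newFileSystem(uri, env)) {
			Path root = fs.getPath("/");
			try(var entries = Files.list(root)) {
				entries.forEach(System.out::println);
			}
		}
	}
}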

So I added ways to configure a mount to the local server filesystem (similar to what the Mindoo FTP Server does) and then a generic configuration for any installed provider. It's probably uncommon that you will have a custom File System Provider implementation in your Java classpath, but hey, maybe you do, and I want to allow that to work.

I also added an extension point to the project itself that allows adding new providers via plugin.xml files in OSGi, and I can think of a couple other projects that may use this, like the NSF ODP Tooling.

WebContent Filesystem

Beyond adding the JVM-provided systems, I wrote another new filesystem type, one that provides access to the conceptual "WebContent" directory presented in Package Explorer in Designer:

Screenshot showing Designer and Transmit looking at the same WebContent in an NSF

The idea here is that this could be used to deploy, say, a JavaScript client application to an NSF without the developer or build server having to know anything about Domino. Pretty much everything can work with SFTP, so this makes accessing those files a lot easier. This is similar to the WebDAV capabilities Domino has had for a very long time, but with a different protocol.

Server Keypairs

In the first version, the app would generate and store the server's SSH keypair on the filesystem, in the data directory. This is fine, but part of the point of this whole project is that I like to get away from non-replicating stuff, and so I moved these keys to the configuration NSF. Now, on first connection, the server will look for a keypair document in the NSF and, if it doesn't exist, will generate a new one and store it there. Since I've been working with encrypted fields a lot for client work lately, I also realized that this was a good use for it: the public key is a normal text item (so you could distribute and verify it as needed), but the private key is encrypted with the generating server's ID file. Since only the server itself ever needs to know its private key, this works swimmingly.

JNX

This isn't a new app feature per se, but this was a good situation for me to put JNX to work in an open-source project. I had originally written this using the lotus.domino classes for most work and the IBM NAPI for things like generating sessions for a given username, but switching to JNX let me ditch both of those.

Admittedly, this is a case where switching to JNX didn't grant me significant new capabilities, but it DID let me do a couple things better. Some things are distinct feature improvements, like improving password authentication (previously, I was doing a compare of hashes "manually", which is fragile), while others are just making the code smoother, like no longer having to do the read-convert-recycle dance with DateTimes in LSXBE. It's just pleasant, and let me find a few places where the JNX API could be improved.
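For reference, the old dance with the lotus.domino classes went something like this (with a hypothetical item name):

import java.util.Date;

import lotus.domino.DateTime;
import lotus.domino.Document;
import lotus.domino.Item;
import lotus.domino.NotesException;

public class DateTimeDance {
	// The LSXBE way: read, convert, then manually recycle the handle-backed objects
	public static Date readCreated(Document doc) throws NotesException {
		Item item = doc.getFirstItem("Created");
		try {
			DateTime dt = item.getDateTimeValue();
			try {
				return dt.toJavaDate();
			} finally {
				dt.recycle();
			}
		} finally {
			item.recycle();
		}
	}
}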

Future Additions

When I pick this project back up, there are certainly a couple things I'd like to add.

One would be to look into rsync support: rsync is tremendously useful for things like synchronizing filesystem-bound configs, but it's its own protocol tunneled over SSH, and so just having SFTP isn't enough. The underlying Apache Mina SSHD project is a general SSH server and not just SFTP, so it may be possible to do it by intercepting the commands sent over to initialize rsync, but it will be non-trivial. There's a library in Java that provides an rsync server, but it's GPL-licensed, and so I have to keep away for license-safety's sake.

Beyond that, it's mostly that I'd like to implement more filesystem types. Presenting data as a filesystem can be a very powerful tool: you could imagine providing access to documents in a DB as DXL or YAML, or listing files from a Document Library NSF, or (as I'd like to do some day) having the NSF ODP Tooling project replicate the ODP layout over SFTP.

For now, I'm looking forward to putting it to more use as a coordinating point. If I keep messing around with apps on TrueNAS, it'll give me a good feeling of security to have more info stashed in Domino, less prone to destruction if one server happens to blow up.

Realmz

Sun Mar 31 11:35:14 EDT 2024

For a while now, I've wanted to just kind of gush about an old Mac game I played when I was a teenager, and the last day of Marchintosh for the year is as good a time as any.

Overview

Realmz is a game that ran on the classic Mac OS and, in later versions, Windows. It was shareware at the time - one of the few shareware games I ended up cobbling together the money for - but has long been made fully available for free, with my go-to source being the Macintosh Garden. If you have SheepShaver around, it works nicely there.

The game itself is quickly identified as a party-based fantasy RPG. I didn't really realize it at the time, but it's a full-on CRPG in the nerdiest sense. I mean, look at this freaking character sheet:

Screenshot of the Realmz character creation sheet

While it's not strictly D&D rules, it basically is. Older versions (which are also available on the Macintosh Garden) even used THAC0 before switching to an "Armor Rating" system.

CRPG

Looking back, I'm glad I had an experience with such a true-blood CRPG at the time. I didn't play D&D growing up, didn't play the Gold Box games, and was too busy playing pretty much exclusively Blizzard games to play the Infinity Engine games or Neverwinter Nights when they came out. It wasn't really until Dragon Age: Origins and then (especially) Pillars of Eternity that I realized the glory of the genre. But looking at Realmz, it's obvious that it's right in the same lineage.

Combat is strictly turn-based, takes place on a grid, and is suitably technical:

Screenshot of a combat situation in Realmz

It even does some of the weird stuff: for example, martial characters won't just get multiple attacks per round, but will also get "partial" steps like my rogue Hebs there, who gets three attacks every two rounds, as a stepping stone to 2 / 1.

Realmz also has its own mechanics-heavy take on the thing CRPGs try to do where they want to emulate an open-ended experience a DM might oversee beyond just combat. For example, early on, you meet a kid who wants you to help his dog, which is stuck in a well. When you get there, you're presented with the "encounter" screen, where you can try all sorts of things:

Screenshot of the Realmz encounter screen

There are a lot of ways to deal with these encounters. In this case, I might have Galba there do an Acrobatic Act, which has about even odds. My sorcerer Fenton there might use a Spider Climb (might not be the name) spell to make scaling the well effortless. Or, if I stocked up, I might just use a rope. You can easily fail this - if you do, the kid runs off crying and you have to wait for the guards to show up to help you, with no experience gain. Realmz has a bunch of these scenarios and they're pretty neat. Admittedly, they fall short in the ways that all non-DM-run games eventually do, where your actual options aren't truly limitless. The "Speak" option is available in other situations, but it's only ever really practical if you have, say, a magic word to open a door or something. It's not a true tabletop experience, but it's trying, bless its heart.

Mac-ness

One thing I really enjoy about games from the heyday of Mac shareware (by the way, read The Secret History of Mac Gaming if you haven't - it's great) is how thoroughly Mac-like they are. For both practical and cultural reasons, a lot of Mac games didn't take over the whole screen with their own interface the way DOS and Windows games usually did. While some Windows games use the native Windows UI, like another small classic, Castle of the Winds, the approach is much more common in Mac games. For example, there's Scarab of Ra:

Screenshot of Scarab of Ra from Macintosh Garden

As it happens, Scarab of Ra is another game where I didn't appreciate its lineage at the time: it's a true roguelike, albeit with a first-person perspective.

Realmz doesn't go quite as hard in using native widgets for everything, but you can see the menu bar in earlier screenshots - you use the normal Mac menu to access game commands, the bestiary, your ally list, your collected notes, and so forth. It's just neat. Also, like a lot of Mac software at the time, Realmz's program directory is just a delight to look at:

Screenshot of the Realmz 2.5 installation folder

My use of the "Drawing Board" Appearance Manager theme helps it too, but just check out those icons. That sort of thing wasn't strictly necessary, but it was the Mac way, and it was wonderful.

Versions

And this isn't exactly a Mac-like attribute, but I like that Realmz wasn't afraid of using version numbers. It went from version 1.x all the way up through 8.x, with minor and patch versions along the way. It was updated all the time, and it was always exciting to see a new major version and find out what the big changes were.

Mostly, the changes were things like adding classes: the old versions have the same sort of handful you'd find in basic D&D, while the later ones have so many that you can pick between "Archer" and "Marksman" or "Bard" and "Minstrel". Some of the changes were less like finding a D&D source book and more like the game gradually morphing into its own sequel, though.

For example, the original versions didn't have music of any kind, as was the style at the time. Somewhere along the line (version 5, I think), it gained music, and... boy, it's a doozy. Here's, for example, the camping music:

What I assume happened is that the developer wanted to add some music and then found some free or cheap module files and slapped them in there where they kind of work. The tone is absolutely bizarre, and it's kind of great for it.

It was just neat seeing the game progress, with the changes in systems and new features, even the "eh, not the best idea" stuff like parts of dungeons that switch to a first-person mode.

Scenarios

I have to admit that, though I played a ton of Realmz, I never even got that far into it. A big part of that was that the scenarios beyond the starting City of Bywater also cost money above the core game, and it's a tall order for a cash-strapped teenager to cough up money at all, let alone add-on costs. So my characters probably always plateaued around level 10, as there's just not that much to do in the base scenario. That's good news for future me, though: there's a ton of stuff waiting for me whenever I want to get back into it.

If you have a taste for older games or CRPGs in general, I definitely suggest you give it a try. The Windows version may be easier to run than the Mac one, though it loses some of the appeal. Depending on your temperament, Realmz may be easier to get into from scratch than games like, say, the original Baldur's Gate, with the latter's janky RTwP combat and constant fourth-wall-breaking dweebiness. Definitely keep it in mind for a cozy-day game, I say.

Adding Code Coverage Reports To Domino-Container-Run Tests

Mon Mar 11 15:33:02 EDT 2024

Tags: docker testing

When you're writing test suites for your code, it can be very useful to use a tool to analyze the code coverage of your tests. While people can get a little obsessive about coverage percentages, there's certainly no denying that it's helpful to know how much of your code is actually run when testing, and to be able to drill down into the specifics of what is covered.

With Java, one of the preeminent tools for this is JaCoCo, a venerable open-source library that you can integrate with your test suites to give reports of your coverage. In a normal project, such as a build run via Maven, you can use the Maven plugin in tandem with the Maven Surefire and Failsafe plugins. However, things get more complicated if the code you're actually testing isn't in the Surefire JVM, but rather inside a container.

That's exactly the situation I have with the integration-test suite of the XPages Jakarta EE project, where it creates a Docker container with the current build of the project deployed as OSGi plugins, and then executes HTTP calls against OSGi bundles and NSFs. I figured this was a solvable problem, so I set out to solve it.

I first came across this blog post, which describes the general idea well, but unfortunately references Gists that seem to no longer exist. Still, it gave me a good starting point.

Installing JaCoCo in Domino

The first thing I had to do was to get the JaCoCo Java agent into the container. I added it as a Maven dependency to the IT suite project:

<dependency>
	<groupId>org.jacoco</groupId>
	<artifactId>org.jacoco.agent</artifactId>
	<version>0.8.11</version>
	<scope>test</scope>
</dependency>

Conveniently, this dependency is itself a wrapper for the agent JAR and comes with a method for accessing the JAR's data. I used that to read it into memory and send it to the Docker runtime during the container build:

// org.jacoco.agent.AgentJar provides the agent JAR's contents as a stream
byte[] agentData;
try(InputStream is = AgentJar.getResourceAsStream()) {
	agentData = IOUtils.toByteArray(is);
}
// Within the Testcontainers image definition: stage the JAR so the Dockerfile can copy it
withFileFromTransferable("staging/jacoco.jar", Transferable.of(agentData)); //$NON-NLS-1$

The use of Transferable here allows me to keep the process independent of whether Docker is running locally or remote - I run remotely almost all the time nowadays, due to Domino's continued lack of an ARM port.

With the file in place, I modified my Dockerfile to copy it to a known location in the container:

COPY --chown=notes:notes staging/jacoco.jar /local/
COPY --chown=notes:notes staging/JavaOptionsFile.txt /local/

The JavaOptionsFile.txt was already there for another ARM-related reason, but it's important to note for the next step. This sort of file is how you enable JaCoCo in the Domino JVM: I set JavaUserOptionsFile=/local/JavaOptionsFile.txt in the server's notes.ini, and the JVM reads its extra options from there. Following the instructions, I added -javaagent:/local/jacoco.jar=output=file,destfile=/tmp/jacoco.exec on its own line in this file. This causes JaCoCo to be automatically loaded with the HTTP JVM and to store its report in the named file on shutdown.

Reading the Data

That said, this didn't work immediately. The file "/tmp/jacoco.exec" was created properly inside the container, so the agent was running, but the file content was always zero bytes. I realized that this was due to the merciless way in which the container is killed by my test suite: there's no proper shutdown step, and so JaCoCo's shutdown hook never fires.

Fortunately, writing to a file isn't the only way JaCoCo can do its reporting - you can also have it open up a TCP port to connect to and read. So I changed the Java option line to:

-javaagent:/local/jacoco.jar=output=tcpserver,address=*,port=6300

I modified the withExposedPorts(...) call inside the class that builds my Testcontainers container to also include 6300, and then used getMappedPort(6300) to identify the actual randomized port mapped by Docker.
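In Testcontainers terms, that amounts to something like this - a sketch with a hypothetical image name, since the real suite builds a custom image, but the port logic is the same:

import org.testcontainers.containers.GenericContainer;

public class DominoContainerExample {
	public static void main(String[] args) {
		try(GenericContainer<?> domino = new GenericContainer<>("domino-jee-it:latest")
				.withExposedPorts(80, 6300)) {
			domino.start();
			// Docker maps 6300 to a random host port; ask for the real value
			int jacocoPort = domino.getMappedPort(6300);
			String jacocoHost = domino.getHost();
			System.out.println("JaCoCo TCP endpoint: " + jacocoHost + ":" + jacocoPort);
		}
	}
}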

The remaining task was to figure out the little protocol used by JaCoCo to signal that it should collect and return its data. I get the impression that it's not too complicated, but I still figured it'd be best to use an existing implementation. I found jacocotogo, a Maven plugin that reads the data, and it looked promising. However, it had two problems: being a Maven plugin, it came with a bunch of transitive dependencies I didn't want, and it's also 11 years old and thus a bit out of date.

I ended up forking the main utility class, trimming out the parts I didn't need (like JMX), switching it to NIO, and going from there.
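For reference, JaCoCo's own org.jacoco.core artifact also ships a small client for this protocol; a sketch of that route, with a hypothetical host and mapped port, looks like this:

import java.io.File;
import java.io.IOException;

import org.jacoco.core.tools.ExecDumpClient;
import org.jacoco.core.tools.ExecFileLoader;

public class JacocoDumpExample {
	public static void main(String[] args) throws IOException {
		ExecDumpClient client = new ExecDumpClient();
		client.setDump(true); // ask the agent to dump its current execution data

		// Host and port here would come from the Testcontainers mapping above
		ExecFileLoader loader = client.dump("localhost", 32768);
		loader.save(new File("target/jacoco.exec"), false);
	}
}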

Using the Data

With that all in place, a test run will end up with a file named "jacoco.exec" inside the "target" directory. Using this file varies by IDE, but, in Eclipse, you can install the EclEmma tool, open the "Coverage" view, right-click in the table area, and choose "Import Session...". That will let you locate the file and then choose the projects from your workspace that you're looking to analyze.

When I did that, I got my results:

Screenshot of Eclipse's Coverage tool detailing my test suite's coverage of somewhere around 50-65%

This is surprisingly good for the project, especially when you consider that large chunks of the red bars are things like the servlet wrapper package, which includes a lot of delegating code that exists to satisfy the interface but is unlikely to actually be used in practice.

While this is currently the only project where I've needed to do this, it'll certainly be good to keep these techniques in mind. The TCP port thing in particular should be handy in future edge cases even without the Docker part.

Homelab Rework: Phase 3 - TrueNAS Core to Scale

Sat Mar 02 14:00:08 EST 2024

  1. Planning a Homelab Rework
  2. Homelab Rework: Phase 1
  3. Homelab Rework: Phase 2
  4. Homelab Rework: Phase 3 - TrueNAS Core to Scale

When I last talked about the ragtag fleet of computers I generously call a "homelab" now, I had converted my gaming/VM machine back from Proxmox to Windows, where it remains (successfully) to this day.

For a while, though, I've been eyeing converting my NAS from TrueNAS Core to Scale. While I really like FreeBSD technically and philosophically, running Linux was very appealing for a number of reasons. Still, it was a high-risk operation, even though the actual process of migration looked almost impossibly easy. For some reason, I decided to take the plunge this week.

The Setup

Before going into the actual process, I'll describe the setup a bit. The machine in question is a Mac Pro 1,1: two Xeon 5150s, four traditional HDDs for storage, and a handful of M.2 drives. The machine itself is far, far too old to have NVMe on the motherboard, but it does have PCIe, so I got a couple adapter cards. The boot volume is a SATA M.2 disk on one of them, while I have some actual NVMe ones serving as cache/log devices in the ZFS pool. Also, though everything says that the maximum RAM capacity is 32 GB, I actually have 64 in there and it's worked perfectly.

It's a bit of a weird beast this way, but those old Mac Pros were built to last, and it's holding up.

Also, if you're not familiar with TrueNAS and its different variants, it's worth a bit of explanation. TrueNAS Core (née FreeNAS) is a FreeBSD-based NAS-focused OS. You primarily interact with it via a web-based GUI and its various features heavily revolve around the use of ZFS, while its app system uses FreeBSD jails and its VM system uses Bhyve. TrueNAS Scale is a related system, but based on Debian Linux instead of FreeBSD. It still uses ZFS, and its GUI is similar to Core, but it implements its apps and VMs differently (more on this in a bit). For NAS/file-share uses, there's actually less of a difference than you might think based on their different underlying OSes, but the distinctions come into play once you go beyond the basics.

The Conversion

If anything, the above-linked documentation overstates the complexity of the operation. I didn't even need to go the "manual update" route: I went to the Update panel, switched from the TrueNAS Core train to the current non-beta TrueNAS Scale one, hit Update, and let it go. It took a long time, presumably due to the age of the machine, but it did its job and came back up on its own.

Well, mostly: for some reason, the actual data ZFS pool was sort of half-detached. The OS knew it was supposed to have a pool by its name, but didn't match it up to the existing disks. To fix this, I deleted the configuration for the pool (but did not delete the connected service configuration) and then went to Import Pool, where the real one existed. Once it was imported, everything lined back up without further issue.

Since Scale is basically a completely different OS, there are a number of features that Core supports but Scale doesn't. Of that list, the only one I was using was the plugin/jail system, but I had whittled my use down to just Postgres (containing only discardable dev data) and Plex. These are both readily available in Scale's app system, and it was quick enough to get Plex set back up with the same library data.

Apps

As I mentioned, TrueNAS Core uses a custom-built "plugin" system sitting on top of the venerable FreeBSD jail capabilities. Those jails are similar in concept to things like Docker containers, and work very similarly in practice to the Linux Containers system I experienced with Proxmox.

TrueNAS Scale, for its part, uses Kubernetes, specifically by way of K3s, and provides its own convenient UI on top of it. Good thing it does provide this UI, too, since Kubernetes is a whole freaking thing, and I've up until this point stayed away from learning it. I guess my time has come, though. Kubernetes is distinct from Docker - while older versions used Docker as a runtime of sorts, this was always an implementation detail, and the system in use in current TrueNAS Scale is containerd.

Setting aside the conceptual complexity of Kubernetes, this distinction from Core is handy: while not being Docker, Kubernetes can consume Docker-compatible images and run them, and that ecosystem is huge. Additionally, while TrueNAS ships with a set of common app "charts" (Plex included), there's a community project named TrueCharts that adds definitions for tons and tons more.

Domino

That brings me to our beloved Domino. I had actually kind of gotten Domino running in a jail on TrueNAS Core, but it was much more an exercise in seeing if I could do it than anything useful: the installer didn't run, so I had to copy an installation from elsewhere, and the JVM wouldn't even load up without crashing. Neat to see, but I didn't keep it around.

The prospect on Scale is better, though. For one, it's actually Linux and thus doesn't need a binary-compatibility shim like FreeBSD has, and the container runtime meant I could presumably just use the normal image-building process. I could also run it in a VM, since the Linux hypervisor works on this machine while bhyve did not, but I figured I'd give the container path a shot.

Before I go any further, I'll give a huge caveat: while this works better than running it on FreeBSD, I wouldn't recommend actually doing what I've done for production. While it'll presumably do what I want it to do here (be a local replica of all of my DBs without requiring a distinct VM), it's not ideal. For one, Domino plus Kubernetes is a weird mix: Kubernetes is all about building up and tearing down a swarm of containers dynamically, while Domino is much more of a single-server sort of thing. It works, certainly, but Kubernetes is always there to tempt you into doing things weird. Also, I know almost nothing about Kubernetes anyway, so don't take anything I say here as advice. It's good fun, though.

That said, on to the specifics!

Deploying the Container

The way the TrueNAS app UI works, you can go to "Custom App" and configure your container by referencing a Docker image from a repository. I don't normally actually host a Docker registry, instead manually loading the image into the runtime. It might be possible to do that here, but I took the opportunity to set up a quick local-network-only one on my other machine, both because I figured it'd be neat to learn to do that and because I forgot about the Harbor-hosted option on that link.

Since the local registry used HTTP and there's nowhere in the TrueNAS UI to tell it to not use HTTPS, I followed this suggestion to configure K3s to explicitly map it. With that in place, I was able to start pulling images from my registry.

The Domino Version

One quirk I quickly ran into was that I can't use Domino 14 on here. The reason for this isn't an OS problem, but rather a hardware limitation: the new glibc that Domino 14 uses requires the "x86-64-v2" microarchitecture level and the Xeon 5150 just doesn't have that by dint of pre-dating it by two years.

That's fine, though: I really just want this to house data, not app development, and 12.0.2 will do that with aplomb.

Volume Configuration

The way I usually set up a Domino container when using e.g. Docker Compose is that I define a handful of volumes to go with it: for the normal data dir, for DAOS, for the Transaction Log, and so forth. This is a bit of an affectation, I suppose, since I could also just define one volume for everything and it's not like I actually host these volumes elsewhere, but... I don't know, I like it. It keeps things disciplined.

Anyway, I originally set this up equivalently in the Custom App UI in TrueNAS, creating a "Volume" entry for each of these. However, I found that, for some reason, Domino didn't have write access to the newly-created volumes. Maybe this is due to the uid the container is built to use or something, but I worked around it by using Host Path Volumes instead. The net effect is the same, since they're in the same ZFS pool, and this actually makes it easier to peek at the data anyway, since it can be in the SMB share.

Once I did that and made sure the container user could modify the data, all was well. Mostly, anyway.

Transaction Logs, ZFS, and Sector Size

Once Domino got going, I noticed a minor problem: it would crash quickly, every time. Specifically, it crashed when it started preparing the transaction log directory. I eventually remembered running into the same problem on Proxmox at one point, and it brought me back to this blog post by Ted Hardenburgh. Long story short, my ZFS pool uses 4K sectors and Domino's transaction logs can't deal with that, at least in 12.0.2 and below.

This put me in a bit of a sticky spot, since the way to change this is to re-create the entire pool and I really didn't want to do that.

I came up with a workaround, though, in the form of making a little disk image and formatting it ext4. You can use a loop device to mount a file like a disk, so the process looks like this:

# Create a 1 GB disk image and attach it to the first free loop device
dd if=/dev/zero of=tlog.img bs=1G count=1
sudo /sbin/losetup --find --show tlog.img
# Format it ext4 (defaults to 512-byte sectors) and mount it
sudo mkfs.ext4 /dev/loop0
sudo mkdir -p /mnt/tlog
sudo mount /dev/loop0 /mnt/tlog

That makes a 1GB disk image, formats it ext4, and mounts it as "/mnt/tlog". This process defaults to 512-byte sectors, so I made a directory within it writable by the container user (more on this shortly), configured the Domino container to map the transaction log directory to that path, and all was well.

Normally, to get this mounted at boot, you'd likely put an entry in fstab. However, TrueNAS assumes control over system configuration files like that, and you shouldn't edit them directly. Instead, what I did was write a small script that does the losetup and mount lines above and added an entry in "System Settings" - "Advanced" - "Init/Shutdown Scripts" to run this at pre-init.

Networking

The next hurdle I wanted to get over was the networking side. You can map ports in apps in a similar way to what you'd do with Docker, but you have to map them to a port 9000 or above. That would be an annoying issue in general, but especially for NRPC. Fortunately, the app configuration allows you to give the container its own IP address in the "Add external Interfaces" (sic) configuration section. Since the virtual MAC address changes each time the container is deployed, I gave it a static IP address matching a reservation I carved out on my DHCP server, pointed it to the DNS server, and all was well. All of Domino's open ports are available on that IP, and it's basically like a VM in that way.

Container User

Normally, containers in TrueNAS's app system run as the "apps" user, though this is configurable per-app. The way the Domino container launches, though, it runs as UID 1000, which is notes inside the container. Outside the container, on my setup, that ID maps to... my user account jesse.

Administration-wise, that's not exactly the best! In a less "for fun" situation, I'd change the container user or look into UID mapping as I've done with Docker in the past, but honestly it's fine here. This means it's easy for me to access and edit Domino data/config files over the share, and it made the volume mapping above work without incident. As long as no admins find out about this, it can be my secret shame.

Future Uses

So, at this point, the server is doing the jobs it was doing previously, plus acting as a nice extra replica server for Domino. It's positioned well now for me to do a lot of other tinkering.

For one, it'll be a good opportunity for me to finally learn about Kubernetes, which I've been dragging my feet on. I installed the Portainer chart from TrueCharts to give me a look into the K8s layer in a way that's less abstracted than the TrueNAS UI but more familiar and comfortable than the kubectl tool, at least for now.

Additionally, since the hypervisor works on here, it'll be another good location for me to store utility VMs when I need them, rather than putting everything on the Windows machine (which has half as much RAM).

I could possibly use it to host servers for various games like Terraria, though I'm a bit wary of throwing such ancient processors at the task. We'll see about that.

In general, I want to try hosting more things from home when they're non-critical, and this will definitely give me the opportunity. It's also quite fun to tinker with, and that's the most important thing.

XPages JEE 2.15.0 and Plans for JEE 10 and 11

Fri Feb 16 15:30:40 EST 2024

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

Today, I released version 2.15.0 of the XPages Jakarta EE project. As is often the case lately, this version contains bug fixes but also a few notable features:

  • You can now specify Servlets in WEB-INF/web.xml (as opposed to just via the @WebServlet annotation). This is helpful for defining a Servlet when the actual implementation is in a JAR or when following non-annotation-based examples (see the sketch below)
  • You can now specify context-param values in WEB-INF/web.xml in the NSF and META-INF/web-fragment.xml in JAR design elements, which will be available to JSP, JSF, JAX-RS, @WebServlet-annotated Servlets, and web.xml-defined Servlets
  • Added @BooleanStorage annotation for NoSQL entities to define how boolean values are converted to note items
  • Added CRUD operations for calendar events to NoSQL, around a few new methods on Repository. This exposes some of the capabilities of NotesCalendar and can be used for, for example, providing an iCalendar feed based on a mail database. To go with that, XPages JEE also re-exports iCal4J as included in the Domino stack for NSF use, though this API is... not smooth

The first two here are focused around bringing NSFs more in line with "normal" Jakarta EE applications, while the latter are some nice improvements for the NoSQL driver. I hope to put the last one in particular to good use - for example, OpenNTF's site will be able to provide a calendar of webinars and other events that we can manage internally using a normal Notes calendar, and that sounds nice to me.
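For context on that first point, the annotation-based flavor that the web.xml support now complements looks like this minimal sketch:

import java.io.IOException;

import jakarta.servlet.ServletException;
import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
	@Override
	protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
		resp.setContentType("text/plain");
		resp.getWriter().println("Hello from the NSF");
	}
}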

Next Versions

I still have the 3.x branch of the project chugging along, and I think it'll be ready for a real release before too long. Since it'll be a breaking-changes release thanks to upstream changes, I'm using it as an opportunity to consolidate the sprawl of features and XPages Libraries. Currently, my plan is:

  • One for "core", covering most things in the Jakarta EE Core Profile, plus the other utility specs I've implemented: Transactions, Bean Validation (which really should be in Core in my estimation), Concurrency, Servlet, and so forth, plus Data and NoSQL
  • One for "UI", covering Jakarta Pages, Jakarta Faces, and MVC - basically, the stuff you could use to replace XPages to make an HTML-generating app in your NSF
  • One for MicroProfile, or at least the specs I've implemented so far. I'm a little tempted to wrap this in to Core, since things like OpenAPI are useful almost all the time, but it's a clean-enough separation that it'll be fine

This will require Domino 14, since Jakarta EE 10 requires at least Java 11.

That brings me to some unexpected good news: though Jakarta EE 11 was long planned to use Java 21 as its minimum version (since 21 is the current LTS), it looks like they've switched to making Java 17 the baseline. For me, this is a little sad in an idealistic sense, since it pushes things like Virtual Threads out of the realm of being a core part of JEE, but I'm very happy that I'll be able to use all JEE 11 specs in Domino 14. Even if Domino 15 used Java 21, it'd still be a long while before that came, and we'd lag behind the standard for at least a year. Instead, this puts the project back in line with upstream, and allows me personally to potentially resume committing to Jakarta NoSQL - I'd been out of the loop for a very long time when it moved to 11 and then 17 as its required version.

I don't know right now whether JEE 11 will be the same sort of breaking change for the project (which would mean a 4.x release) or if I'll be able to make it a 3.x one - the specs aren't out yet, so time will tell. The big focus of 11 will be further centralization on CDI instead of EJB, and I'm all for it.

My plan is to get 3.x out for Domino 14, based on JEE 10, as soon as time allows, and then I'll start looking into bumping to JEE 11 when it releases in the summer.

Notes/Domino 14 Fallout

Fri Dec 15 11:53:44 EST 2023

  1. AbstractCompiledPage, Missing Plugins, and MANIFEST.MF in FP10 and V10
  2. Domino 11's Java Switch Fallout
  3. fontconfig, Java, and Domino 11
  4. Notes/Domino 12.0.2 Fallout
  5. Notes/Domino 14 Fallout
  6. PSA: ndext JARs on Designer 14 FP1 and FP2
  7. PSA: XPages Breaking Changes in 14.0 FP3

Notes and Domino 14 are out now and, as I discussed back in June, the big deal for me is the move to Java 17. This also came with a refresh of the Eclipse innards, from Neon (circa 2016) to 2021-12 (circa, uh, 2021). The Eclipse update is welcome, but so far it's been less impactful than the Java update - at some point, I'll want to see if some of the current-era Eclipse plugins work here, but that's for the future.

In the meantime, there's a bunch to know, so let's get to it! I've broken this one down into "critical" and "less critical" sections, since this post will likely have a similar life to my target platform one.

Critical To Know

ndext

I mentioned this in June, but an important thing to know about Java 17 is that "jvm/lib/ext" no longer exists. Fortunately, Notes and Domino have long had a secondary location for this sort of thing: "ndext" in the program directory. Any JARs in this folder are available at runtime on both platforms, and so the quick fix is to move anything you had in "jvm/lib/ext" to "ndext".

...but.

When it comes to Notes, there's a distinct difference between "available at runtime" and "on the build path". While Java agents here are fine - they modified the editor to include "ndext" in the build path - there's no such accommodation for XPages. So far, the best workaround I've thought of is to add any such JARs manually to the JRE definition in Eclipse's preferences:

Screenshot of Designer's JRE preferences, showing the addition of an ndext JAR

When doing this, there's a huge caveat: do not just add everything from this directory to the JRE. Most of it will be redundant, like the SWT stuff, but some will be actively harmful, in particular "jsdk.jar". That's an even-older Servlet version than comes with XPages, and including that will cause Designer to think that methods added in Servlet 2 aren't present. Only add the JARs you're extending it with.

Ideally, this will be remedied one way or another in the future, but for now we'll have to do this to account for it. The silver lining may be that it's a good impetus to OSGi-ify your dependencies if possible.

Poi Remains

This isn't new, but it's worth mentioning while we're on the topic of "ndext". Since Notes 11, the client ships with an old version of Apache Poi, but this is painfully distributed right in the JVM and not at the OSGi layer. Newer versions of Poi 4 XPages deal with this, but it's important to know that, while Poi is in Notes, it's not in Domino. If you write agents using these classes, or add them to your JRE and use them in XPages, you'll also need to deploy them to Domino.

Java Policy Location

This one's more of a note, and it's reiterating something from June: the location of "java.policy" and (if you add it) "java.pol" changed since Java 8. They used to be in "jvm/lib/security" but they're in "jvm/conf/security" now. They work the same as before, as does putting a file named ".java.policy" in the Domino user's home dir, so the other characteristics haven't changed.

sun.misc.BASE64Decoder

Back in Domino 11, the move from IBM J9 to OpenJ9 meant that some IBM internal forks of Sun internal classes were no longer available. The most notable of these were the BASE64 classes com.ibm.misc.BASE64Encoder and BASE64Decoder. These were very popular to use in the pre-Java-8 days, before java.util.Base64 existed, and so it was worth noting that they were gone then.

Well, sun.misc.BASE64Encoder has now met a similar fate - it's probably still in there somewhere, but it's no longer accessible by user code. If you haven't made the switch to java.util.Base64, do so now.
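If you still have code on the old classes, the java.util.Base64 replacement is a direct swap:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Example {
	public static void main(String[] args) {
		// Encoder/decoder pair replacing the old BASE64Encoder/BASE64Decoder
		String encoded = Base64.getEncoder().encodeToString("hello".getBytes(StandardCharsets.UTF_8));
		byte[] decoded = Base64.getDecoder().decode(encoded);
		System.out.println(encoded + " -> " + new String(decoded, StandardCharsets.UTF_8));
	}
}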

Target Platform Bug

This is another note, but this classic bug that's been with us since 9.0.1FP10 is... fixed, I think! I'd thought at first that it remained, since the Target Platform config still just lists the Eclipse home dir, but installing and using an XPages library seemed to work properly without further change. Good!

Java Compiler Level

This is a fiddly one. Though Designer uses Java 17 by default now, it has the ability to compile down to older versions for past compatibility, such as running on a pre-14 server. Java agents have had their own way of dealing with this for a while, though it seems like maybe they default to Java 8 now, which is good. With XPages, it seems like it's a little less obvious.

From my experience, this is what I've found:

  • Existing projects may have a specific compiler setting specified (similar to agents in the above-linked post) and so will remain as Java 1.8
  • New projects seem to use the active workspace setting, which will matter
  • Upgrades of Designer in place will keep the default compiler level for the workspace at 1.8
  • A fresh installation sets the compiler level to 17

This is all... fine. It'd be nice if it was a little more obvious to the developer (like if it was controlled by the xsp.properties setting for minimum version), but this is more or less the Eclipse way of doing things. We'll just have to be aware for a while of these settings and their interactions when developing for pre-14 deployment. If you're targeting an older server, you should make sure that you go to Project Properties for your NSF and set the Java Compiler level to fit:

Screenshot of the Java Compiler settings for an NSF

Less Critical

Alright, that covers the big-ticket stuff, but there are a few more things to know.

Java Deprecation Warning On HTTP Start

This one is documented, though oddly on the "no longer included" page: when you start HTTP on Domino, you'll see a message like this:

WARNING: A terminally deprecated method in java.lang.System has been called
WARNING: System::setSecurityManager has been called by lotus.notes.AgentSecurityManager (file:/C:/Domino/ndext/Notes.jar)
WARNING: Please consider reporting this to the maintainers of lotus.notes.AgentSecurityManager
WARNING: System::setSecurityManager will be removed in a future release

This is actually totally fine. As it says, the whole SecurityManager apparatus is going away in future Java versions, but Notes and Domino still use it for agents (and, unfortunately, XPages). While I long for the day when I never have to think about it again, this is a reasonable-enough compromise for 14 as a "transitional" version in its Java journey. So... you can ignore this and not worry.

"Java Main Sources" and "Java Test Sources" Working Sets

If you use Working Sets in Designer, you may notice two entries not of your creation:

Screenshot of the 'Select Working Set' dialog in Designer 14

These showed up in Eclipse somewhere along the line, presumably to make it easier to select those types of projects without manually managing working sets. Designer inherits them, and HCL didn't remove them, but you're free to delete them if you want.

"AbstractCompiledPage cannot be resolved"

This is another one that came up back in 9.0.1FP10 and it's still here, but it's less of an issue: every once in a while, when building a project, Designer will complain about the AbstractCompiledPage class not being found. In my experience, this only shows up temporarily during a build, but it's worth noting that, if it does stick around for you, a Project - Clean should fix it.

JAX-B and CORBA

After Java 8, several Java EE components were removed from the standard JRE distribution in favor of being developed in Java EE and then Jakarta EE. JAX-B (now Jakarta XML Binding) is one of them - we don't normally use it directly, but it comes up sometimes, either in application code (often as another of the old-timey BASE64 workarounds) or as a dependency. It shouldn't matter in Designer but, if you're writing Domino-targeting code outside Designer, you may need to be aware of it. One way or another, you should add it as a dependency - in a basic case, you can add "jakarta.xml.bind-api.jar" and "jaxb-impl.jar" from "ndext" to your build path.

Similarly, CORBA support classes ("org.omg" stuff, usually) used to come with the JRE and now don't. To work around this, HCL did what I've done in the past and included "glassfish-corba-omgapi.jar", in their case putting it in "ndext". If you're using Notes.jar with Java > 8 outside Designer, you'll need this one too.
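
As an illustrative sketch only (the paths and class name here are hypothetical, and Windows would use ";" as the path separator), compiling standalone code against Notes.jar on a modern JDK might look like:

javac -cp "ndext/Notes.jar:ndext/glassfish-corba-omgapi.jar:ndext/jakarta.xml.bind-api.jar:ndext/jaxb-impl.jar" MyDominoTool.java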

Conclusion

I think that covers it for now. Considering how much changed with Java from 8 to 17, this could have been a lot rougher, though I fear that some of these workarounds will plague us for a long time.

In the meantime, I'd appreciate it if you vote for this Aha idea to keep Domino on a good Java update cadence. It'd be a shame if we sat on 17 right up until the very end of its life, as we did with 6 and 8, in part because these multi-version moves are more painful than single-LTS updates.

XPages JEE 2.14.0

Fri Oct 27 11:47:02 EDT 2023

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

Today, I released version 2.14.0 of the XPages Jakarta EE Support project. As with the last few releases, this is primarily about bug fixes and compatibility as I prepare for the big switch in 3.0, but there are some notable, if small, feature additions.

To begin with, I improved handling of reading JSON in NoSQL entities when reading from a view. This applies to the @ItemStorage(type=ItemStorage.Type.JSON) annotation on entity properties, which causes the value to be loaded and stored as JSON, useful for storing custom class types in a document. Now, such values can be read from view entries - previously, this processing was skipped for those. Of note when using this: normally, storing as JSON will automatically set the item's summary flag to false, to avoid overflowing the summary limit. However, you can add @ItemFlags(summary=true) to the property to override this behavior so that the values can show up in views.
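
As a hedged sketch of what that looks like in practice (the entity, item names, and annotation import packages here are illustrative rather than checked against the project), such a property might be declared like this:

import java.util.Map;

import jakarta.nosql.mapping.Column;
import jakarta.nosql.mapping.Entity;
import jakarta.nosql.mapping.Id;

import org.openntf.xsp.nosql.mapping.extension.ItemFlags;
import org.openntf.xsp.nosql.mapping.extension.ItemStorage;

@Entity("Person")
public class Person {
    @Id
    private String documentId;

    // Serialized to and from JSON in the document item; as of 2.14.0, also readable from view entries
    @Column("Prefs")
    @ItemStorage(type = ItemStorage.Type.JSON)
    @ItemFlags(summary = true) // retain the summary flag so the value can appear in views
    private Map<String, String> prefs;

    // Getters and setters omitted for brevity
}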

Additionally, I added the ability to use JAXRSClassContributors inside the NSF. These were originally an internal mechanism for the project to dynamically add REST endpoints and extensions, like those used by MVC and OpenAPI. Now, though, I've made it so that such classes can be registered via a file named "META-INF/services/org.openntf.xsp.jaxrs.JAXRSClassContributor" in the NSF, and also added the ability to specify configuration properties. The latter is important because, though all xsp.properties values were already inserted into the JAX-RS configuration, there was no way to provide non-string values. This came up in the context of MVC, which has a CSRF configuration property that must be an enum value.
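
The registration itself follows the usual ServiceLoader-style convention: a plain-text file at that path in the NSF listing one implementation class per line. For example (the class name here is hypothetical):

com.example.app.AppClassContributor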

For the final feature, I improved support for JAX-RS's status-indicating exceptions, such as NotSupportedException and BadRequestException. Previously, the project supported translating NotFoundException to a 404, but now it will also translate these other standard exceptions to their corresponding HTTP statuses.
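
As a quick illustration (this resource class is hypothetical), throwing one of these from a resource method now produces the matching status code:

import jakarta.ws.rs.BadRequestException;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.QueryParam;

@Path("greeting")
public class GreetingResource {
    @GET
    public String greet(@QueryParam("name") String name) {
        if(name == null || name.isEmpty()) {
            // Translated to an HTTP 400 response as of 2.14.0
            throw new BadRequestException("name is required");
        }
        return "Hello, " + name;
    }
}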

The release is otherwise rounded out by a number of fixes for problems encountered in the wild. Additionally, I added a workaround for some classpath pollution in the latest Domino 14 beta - I hope that the trouble will be gone for GA, but this project should handle it either way.

Overdue CollabSphere Followup

Wed Oct 11 16:59:29 EDT 2023

Though it's been over a month since CollabSphere 2023, I've somehow not yet gotten around to talking about it here. Time to remedy that!

Webinar and Workshop

I presented a couple sessions at CollabSphere, but the meatiest was my workshop on the XPages Jakarta EE project. A bit before the show, I wrote a post discussing modes of development with it and that formed the structure of the workshop.

It also formed the structure of August's OpenNTF webinar. Fortunately, unlike the workshop, the webinar was recorded, and that is up on OpenNTF's YouTube channel. Though the workshop was longer and had some refinements and audience discussion, they both worked from the same original slide deck and so the webinar did a pretty good job of covering the same material.

As part of the presentations, I created four versions of the same basic to-do tracker app, written in each of the four modes I talked about. I put them up in the project repository with a quick README introducing each of them. With the project installed (they were written for 2.13.0 and above), you should be able to sync the ODPs with NSFs in Designer and poke around yourself. They're all intentionally structured similarly and kept basic, with all of their elements in either Java classes, file resources, or stylesheets. They're also all intentionally under-developed, so you're not allowed to make fun of them.

JNX

This one will be a doozy! In fact, it's such a doozy that I'm going to keep kicking the can down the road as far as properly talking about it goes.

In short, Domino JNX is a more-modern API for Domino access - it was initiated and is used heavily by the Domino REST API (DRAPI), but also stands on its own. It started out essentially as another swing at Karsten Lehmann's Domino JNA and shares a lot of the same ideas and approaches (and code - the committer info in the GitHub repo is deceptive, as I got to paste a whole mountain of existing code from another repo into this one and thus got credit for it).

Now that it's open source, there's some significant work to do as far as documentation, examples, integration, and distribution go. I plan to find time to improve a lot of these aspects, including blogging here, though it's the sort of thing that always ends up below other priorities.

In any event, it's quite neat and has a lot of good capabilities. I plan to write a JNoSQL driver for it, either to sit alongside or to wholly supplant the Notes.jar-based one currently used by the project. The neat thing there is that users of XPages JEE won't have to care much: it'll just get a bit faster in parts but should largely work the same, except maybe for a change to the way you point at other databases.