PSA: ndext JARs on Designer 14 FP1 and FP2

Thu Sep 12 11:02:18 EDT 2024

Tags: java xpages
  1. Oct 19 2018 - AbstractCompiledPage, Missing Plugins, and MANIFEST.MF in FP10 and V10
  2. Jan 07 2020 - Domino 11's Java Switch Fallout
  3. Jan 29 2021 - fontconfig, Java, and Domino 11
  4. Nov 17 2022 - Notes/Domino 12.0.2 Fallout
  5. Dec 15 2023 - Notes/Domino 14 Fallout
  6. Sep 12 2024 - PSA: ndext JARs on Designer 14 FP1 and FP2
  7. Dec 16 2024 - PSA: XPages Breaking Changes in 14.0 FP3
  8. Jun 17 2025 - Notes/Domino 14.5 Fallout

Back when Notes/Domino 14 came out, I made a post describing some of its fallout. One of the entries was about the upstream removal of the "jvm/lib/ext" directory and the move of all common extension JARs to the "ndext" directory. The upshot was that any JARs you add to the filesystem in Designer to match a deployment on the server also have to be added to Designer's active JRE definition in order to be recognized.

HCL presumably noticed this problem and altered the installation to accommodate it in FP1 and FP2. However, the approach they took was to add every JAR from ndext to the JVM definition. Thus, a fresh install of Notes 14 upgraded to FP2 (or 14.5 EAP1) has a JVM that looks like this:

Screenshot of the 'Edit JRE' screen in Designer 14 FP2

This is a problem in a couple ways, but the most immediate is that it includes the toxic "jsdk.jar" I warned about in the earlier post. This JAR contains primordial Servlet classes from the very first addition of Servlet support to Domino, predating XPages, and that version lacks even the convenience methods added in the ancient-but-less-so version that XPages uses. To demonstrate this, you can write this code:

HttpServletRequest req = null; /* pretend this is assigned to something */
Map param = req.getParameterMap();

This will work in a clean Designer 14 installation but will break on upgrade to 14 FP2, with Designer complaining that the getParameterMap method does not exist. There are others like this too, but basically any "The method foo() is undefined..." error for Servlet classes is a sign of this.

The fix is to go into your JVM definition (Preferences - "Java" - "Installed JREs" - "jvm (default)" - "Edit...") and remove jsdk.jar. While you're in there, I recommend also removing POI and its related JARs (poi-*, xmlbeans, ooxml-schemas, fr.opensagres.poi.*, commons-*), unless you happen to have deployed them to the server, since they're not normally present on Domino and are thus mostly there to lead you astray. Honestly, almost none of the JARs present in there by default are useful for the XPages JVM definition, since the critical ones are contributed via OSGi plugins. I guess guava.jar is important just because it's going to contaminate the server's JVM too, so you want to account for that. Otherwise, it's probably best to treat it like a clean 14 install and only include the JARs you've explicitly added and deployed to the server.

Recent Open-Source Project Updates

Fri Sep 06 14:25:34 EDT 2024

I've released a spate of open-source project updates recently, and I figured it'd be good to round up what's new. Most of them are utilitarian in nature - fixes for things that crop up with Domino 14 and Java > 8 - but the first one is larger.

XPages Jakarta EE

Today, I released version 3.1.0 of the XPages JEE project. This is mostly about fixing up some edge-case and sporadic bugs that cropped up in 3.0, but it also includes some performance updates and contributions from new contributors. Additionally, it should work on the newly-launched Domino 14.5 EAP1. The use of Java 21 in that version of Domino won't necessarily affect XPages JEE for a while, since JEE 11 targets Java 17, but there's some neat stuff in there for general use.

p2-layout-resolver

The p2-layout-resolver is a plugin that allows the use of p2 (Eclipse-style) repositories as Maven dependencies in non-Tycho projects. I use this in a lot of cases where I move a project from Tycho to maven-bundle-plugin for simplicity in dependency management.

Version 1.9.0 includes a very-useful contribution that fixes dependencies in cases where a bundle has a Bundle-ClassPath entry that references an embedded JAR that doesn't exist. In the Domino world, this cropped up in Domino 14, so it's useful if you're building anything that targets that version of the runtime or above.

p2-maven-plugin

For various Domino-related needs, I maintain a fork of the p2-maven-plugin, which is useful for its additions of things like generating site.xml files (still important for importing into an NSF update site, after all these years) and the <transform>jakarta</transform> option to run JARs through Eclipse Transformer when bundling them, allowing use of pre-Jakarta JEE artifacts in a smooth way.

The 3.1.x versions focused on fixing problems when running on Java > 8 (namely no longer using IBM Commons XML) and improving handling of some other hiccups.

Pretty-Printing JSON in the (Desktop) Notes Client and Domino

Fri Jul 26 10:30:35 EDT 2024

In the OpenNTF Discord (join if you haven't!), Daniel Nashed brought up a task he was facing: in the Notes client, writing pretty-printed JSON. LotusScript has its NotesJSON* classes that can process JSON in their stark way, but the stringify output is meant for machine reading and doesn't include whitespace and line breaks, making it ill-suited for things like configuration files or other things a human might read or edit.

Since the goal is to get it working in the full Notes client and not Nomad, Java is on the table, but Java - for dumb historical reasons - has no proper built-in JSON library. However, as of 12.something HCL shunted IBM Commons down to the global classpath in order to support the "share Java design elements between XPages and agents" feature. Among many other things, IBM Commons includes a JSON library that can suit.

I wrote a post almost a decade ago talking about this library and its limited nature, but it's nonetheless less limited than the LotusScript classes, and it's up to the task. There are a couple ways to go about this, depending on your needs, but for now I'll just cover the basic case here of "I have a string of JSON and want to format it".

To do this, you can make a Script Library of the Java type named, say, "JsonPrettyPrint" and create a new class named "JsonPrettyPrint" in the "com.example" package with these contents:

package com.example;

import java.io.IOException;

import com.ibm.commons.util.io.json.JsonException;
import com.ibm.commons.util.io.json.JsonGenerator;
import com.ibm.commons.util.io.json.JsonJavaFactory;
import com.ibm.commons.util.io.json.JsonParser;

public class JsonPrettyPrint {
	public String prettyPrint(String json) throws JsonException, IOException {
		// Parse the incoming JSON string into IBM Commons's generic object model
		Object jsonObj = JsonParser.fromJson(JsonJavaFactory.instanceEx, json);
		// Passing false for the final argument produces non-compact output, i.e. formatted with whitespace
		return JsonGenerator.toJson(JsonJavaFactory.instanceEx, jsonObj, false);
	}
}

Then, you can instantiate this object with LS2J and pass it a JSON string, such as one from NotesJSONNavigator:

UseLSX "*javacon"
Use "JsonPrettyPrint"

Sub Initialize
	Dim jsession As New JavaSession, jsonPrinter As JavaObject
	Set jsonPrinter = jsession.GetClass("com.example.JsonPrettyPrint").CreateObject()
	MsgBox jsonPrinter.prettyPrint(|{"foo":{"bar":"baz"}}|)
End Sub

That'll get you something presentable:

Screenshot of a message box showing pretty-printed JSON

While the stringify-parse-stringify process you'd do if you generated your JSON with the NotesJSON* classes is inefficient, it's not too bad, especially for the size of content you're likely to want to emit here. You could alternatively do more work with JsonJavaObject and friends from IBM Commons directly to save a little overhead, but this path is a good way to do the vast majority of work in "normal" LotusScript and then only dip in to Java for the last step.
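
You could sketch that JsonJavaObject route like so - a rough example assuming you build the structure directly with IBM Commons's JsonJavaObject (which is a Map implementation), rather than parsing a string first:

package com.example;

import java.io.IOException;

import com.ibm.commons.util.io.json.JsonException;
import com.ibm.commons.util.io.json.JsonGenerator;
import com.ibm.commons.util.io.json.JsonJavaFactory;
import com.ibm.commons.util.io.json.JsonJavaObject;

public class JsonBuildExample {
	public String buildPretty() throws JsonException, IOException {
		// Build the structure directly instead of round-tripping a string
		JsonJavaObject bar = new JsonJavaObject();
		bar.put("bar", "baz");
		JsonJavaObject root = new JsonJavaObject();
		root.put("foo", bar);
		// As above, false here produces the formatted, human-readable output
		return JsonGenerator.toJson(JsonJavaFactory.instanceEx, root, false);
	}
}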

As mentioned at the start, the presence of Java means this won't work for Nomad, unfortunately. There may be a way to wrangle your way to this result using the primordial JavaScript runtime present in that, but that may not be worth the trouble unless you really need it. Better would be to vote for the Aha idea to add pretty printing to LS.

XPages JEE 3.0

Sun Jun 09 14:45:14 EDT 2024

Today, I uploaded the release version of 3.0.0 of the XPages Jakarta EE Support project. It's been proving stable in my use since the last beta, and so I think this is as good a time as any to release it properly.

Changes

The big-ticket change remains the move to Jakarta EE 10 as the baseline, which brings a handful of new features as well as a new Java version requirement. That means that this release also requires at least Domino 14. Domino 12.x served us well, but its time has passed.

Jakarta EE 10, for its part, is mostly about solving a lot of old business in the JEE community: it continues the gradual deprecation of EJB in favor of CDI, it removes some old stuff like applet requirements, and then also brings in a couple "scratch an itch" features.

Of particular note is the addition of the EntityPart type for REST services. Though it's a small feature, it's a real "finally" one, in that there hadn't been a proper way to deal with multipart/form-data MIME body parts individually, and so each implementation of Jakarta REST would bring in its own, or you'd have to fall back to taking an InputStream and parsing the MIME body yourself. Now, you can do so in a spec-based way:

@POST
@Consumes(MediaType.MULTIPART_FORM_DATA)
public String post(@FormParam("part") EntityPart part) throws IOException {
	String mediaType = part.getMediaType();
	String name = part.getName();
	String fileName = part.getFileName();
	MultivaluedMap<String, String> headers = part.getHeaders();
	byte[] data = part.getContent().readAllBytes();
		
	// ...
}
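
The same type works in the other direction too: as I understand the builder API, you can assemble parts programmatically, for example when calling a multipart service with the REST client. A quick sketch (treat the exact builder calls as an assumption on my part):

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import jakarta.ws.rs.core.EntityPart;
import jakarta.ws.rs.core.MediaType;

public class PartBuilderExample {
	public static EntityPart textPart() throws IOException {
		// Build a named part with a file name, media type, and content stream
		return EntityPart.withName("part")
			.fileName("example.txt")
			.mediaType(MediaType.TEXT_PLAIN_TYPE)
			.content(new ByteArrayInputStream("hello".getBytes(StandardCharsets.UTF_8)))
			.build();
	}
}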

There's also the split of Jakarta NoSQL into that spec plus the new Jakarta Data. In this release of XPages JEE, I mostly aimed to keep the same level of functionality while accounting for the renaming of packages and types, but I'll be interested in building on this in the future.

Finally, there's the project-specific change of condensing the many, many XPages libraries just down to Core, UI, and MicroProfile. That didn't impact functionality as such, but it sure is nice only having three (or six, with the source features) features to say "yes, install this" to when updating it in Designer and only three to check in Xsp Properties. It also allowed me to delete a lot of weird shim and conditional code, and it'll make maintenance of it much easier in the future without having to worry about every permutation of what libraries you have enabled in an NSF.

The Future

Speaking of which, that brings me to some of the next things on the docket. I imagine that the immediate work will be cleaning up any loose ends from the move. For example, Jakarta Concurrency 3.0 brought a bunch of new features, but I haven't actually checked to see if they work or if I need to do more adapting.

Additionally, Jakarta Data is intended to go beyond just NoSQL, and can also layer on top of Jakarta Persistence (née JPA, the API for working with relational DBs) and arbitrary services. I don't know yet if there's a usable implementation beyond the one in Open Liberty, so that may have to wait, but it'll be interesting to tinker with.

There are also a bunch of features I'd like to get cracking on now that this hurdle is done. For example, I'd like to move the NoSQL driver to use JNX, which would let me do a couple things that the Notes.jar classes just can't. Along with that, I'd like to add an option to publish MP Metrics to the Domino statistics store, which adopting JNX would allow easily.

Fortunately, I don't expect that there will be any other breaking-change discontinuities in the future. Jakarta EE 11 has some deprecations and removals, but it's mostly similar to JEE 10 in that they're about classes and idioms that are much, much older than any code built using this project. That should give this 3.x series a good, long period as a comfortable baseline, even through the next major version of JEE.

XPages JEE 3.0 Beta 4

Wed May 22 13:48:19 EDT 2024

Earlier today, I uploaded beta 4 of XPages JEE 3.0 to GitHub. I've been taking a slow approach to this release due to its "breaking changes" nature, but I think it's just about ready for release.

Domino 14

Like previous betas, this release requires Domino 14 (and Notes 14 for development), since it moves to a baseline of Jakarta EE 10, which in turn requires Java 11. Doing this let me get rid of some extra shim code that was needed to support both Domino 14 and previous versions, and also let me move to some newer language constructs. If you're interested in the sorts of things that the new versions of Java brought, check out the OpenNTF webinar from April, where I talked about just that.

Library Reorganization

Beyond the Java version requirement, the big breaking change I made was to finally shrink the number of XPages libraries and p2 features in the project. As the project grew, I kept adding new distinct XPages libraries, for the principle of keeping each spec distinct, as they often technically are. A few things made me want to fix this, though:

  • Checking all the boxes in the Xsp Properties editor for each library was annoying
  • Checking "Yes, install this plug-in" for every single component, plus its source version, when installing in Designer was very annoying
  • I had to do weird tricks to add features that touched multiple specs. For example, the project tree had a bunch of cross-spec fragments like "jaxrs.cdi" and "json.cdi" to contribute parts for when CDI was present but not break things when it wasn't. This added an extra layer of indirection and maintenance hassle
  • The specs themselves have been converging, particularly in the sense that more and more they assume the "backbone" of CDI is present. For example, Faces removed its original @ManagedBean and related support in favor of going all-in on CDI. Jakarta REST is moving towards the same
  • It was hard to think of realistic scenarios where it would be important to split up the specs like this, using, say, REST but not CDI or Validation

Now, there are just three: "org.openntf.xsp.jakartaee.core", "org.openntf.xsp.jakartaee.ui", and "org.openntf.xsp.microprofile". I was tempted to roll MicroProfile into "core", but they're conceptually (and administratively) distinct enough that it was worth separating them. With this change, it's not only less annoying to install, but it lets me make a lot more assumptions about what is present across specs, simplifying a lot of little things.

Deep-Dive Sidebar: Class Loading

One interesting aspect I ran into when making this change was that I had to readjust my mental model for how class loading is done from an NSF-based application and the libraries it uses. The way it works mostly aligns conceptually with what you see in Designer:

  • Select a library to depend on
  • The XspLibrary has a "getPluginId" method, which Designer then uses to add the OSGi bundle to the classpath
  • Any Require-Bundle dependencies in that plugin marked as "visibility:=reexport" are also included on the classpath

So, in this way, you'd previously select the "org.openntf.xsp.cdi" library, which would then add a dependency on the bundle of the same name, which would in turn re-export the things the NSF should see, such as the CDI API classes.

When I consolidated the libraries, I did it in the straightforward way: I made new "*.library" bundles for them and then added the existing spec-specific bundles as re-exported dependencies. As far as Designer was concerned, all was well, and there was just another little layer in between.

However, that's not quite the whole story when it comes to the runtime on the server. Though Designer presents the NSF as a pseudo-OSGi bundle using the Plug-in Development Environment, Domino doesn't do the same thing. What Domino does is use a class called ModuleClassLoader (not to be confused with Equinox's ModuleClassLoader, which is entirely different and IS an OSGi loader) to handle loading classes from the NSF and its dependencies. The way it gets to its dependencies isn't really a "true" OSGi way, though: it keeps track of a collection of ClassLoader objects as extraDepends, which it consults each in turn as needed. Those ClassLoader objects, at least in post-8.5.2-era Domino, are the internal class loaders from the library OSGi bundles. This is cheating, and I imagine it was made for pragmatic transitional reasons when OSGi came into the picture.
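
In code terms, the behavior is conceptually something like this - a simplified sketch for illustration, not Domino's actual implementation:

import java.util.List;

public class ConceptualModuleClassLoader extends ClassLoader {
	// The library bundles' internal loaders, consulted in order after the NSF's own classes
	private final List<ClassLoader> extraDepends;

	public ConceptualModuleClassLoader(ClassLoader parent, List<ClassLoader> extraDepends) {
		super(parent);
		this.extraDepends = extraDepends;
	}

	@Override
	protected Class<?> findClass(String name) throws ClassNotFoundException {
		for (ClassLoader delegate : extraDepends) {
			try {
				return delegate.loadClass(name);
			} catch (ClassNotFoundException e) {
				// Not in this library bundle; try the next one
			}
		}
		throw new ClassNotFoundException(name);
	}
}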

The old layout conceptually looks like this:

Diagram of NSF to old library dependency

At first blush, this seems like a "six of one, half a dozen of the other" sort of situation, but it's not quite. What this setup does that normal OSGi doesn't is that it exposes META-INF/services files inside the direct dependencies to the application's ClassLoader, whereas these are normally encapsulated in OSGi. The effect was that a bunch of things that used to work started to fail - REST couldn't find all its output-writing classes, Validation couldn't find its implementation, and so forth. This is because they would all internally ask the thread-context ClassLoader (i.e. the NSF's loader) for resources within META-INF/services, and the extraDepends list used to be able to find them. Now that there was a layer of indirection, this no longer worked: the extraDepends loaders could see their own stuff but would not traverse the OSGi barrier to peek inside their further dependencies for these. Conceptually, now we have this:

Diagram of NSF to new library dependency

A direct ClassLoader dependency allows reading of resources, but a true OSGi-type dependency does not. So the result is that I had to "promote" a bunch of META-INF/services files from the now-downstream plugins into the "*.library" ones. It all makes sense once you see how the gears are moving, but it sure threw me for a loop for a while.
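
The sort of lookup that broke is the standard services pattern, where an implementation asks the thread-context ClassLoader - the NSF's loader - for provider files. Roughly, and with an illustrative provider name rather than any specific one:

import java.io.IOException;
import java.net.URL;
import java.util.Enumeration;

public class ServiceLookupExample {
	public static void listProviders() throws IOException {
		ClassLoader cl = Thread.currentThread().getContextClassLoader();
		// Before the reorganization, the extraDepends loaders could see these files directly;
		// afterwards, they sat a layer deeper and had to be "promoted" into the library bundles
		Enumeration<URL> providers = cl.getResources("META-INF/services/jakarta.ws.rs.ext.MessageBodyWriter");
		while (providers.hasMoreElements()) {
			System.out.println(providers.nextElement());
		}
	}
}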

Bundle and Package Renaming

Okay, now back to the changes.

Since I was already breaking things anyway, I decided this was a good opportunity to fix the names of the bundles and packages in the project's source. For example, some names were antiquated: what was once "JSF" is "Jakarta Faces", but my bundle was "org.openntf.xsp.jsf". Additionally, I was inconsistent in my hierarchy: while Transaction was in "org.openntf.xsp.jakarta.transaction", others (like Faces there) skipped the "jakarta" level of the hierarchy. These don't normally matter to developers consuming the library, but they annoyed me. Now, all of the bundles and their contained packages are within either "org.openntf.xsp.jakarta", "org.openntf.xsp.jakartaee" (for platform-wide capabilities), or "org.openntf.xsp.microprofile".

Along with this come a couple potential breaking changes for app-level code, such as moving org.openntf.xsp.beanvalidation.XPagesValidationUtil to org.openntf.xsp.jakarta.validation.XPagesValidationUtil, but there won't be TOO many due to this change.

Jakarta Data and NoSQL Changes

This one isn't from my latest round of changes and has been the case since early in the 3.x stream, but it's worth mentioning again here. The Repository concept from Jakarta NoSQL moved from that spec to the new "Jakarta Data" spec, and so related packages changed from jakarta.nosql.mapping to jakarta.data. Additionally, since the NoSQL spec shrank to accommodate this, things like @Column changed from jakarta.nosql.mapping.Column to jakarta.nosql.Column. It makes sense, as NoSQL has been an evolving spec all along, but I suspect that this will be the biggest app-code-breaking change it experiences for a good while.
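
For a rough feel of the change - a generic sketch, since the XPages JEE driver layers its own repository types on top - an entity now imports from the trimmed-down jakarta.nosql package, and repository interfaces come from jakarta.data.repository:

import jakarta.data.repository.DataRepository;
import jakarta.data.repository.Repository;
import jakarta.nosql.Column;
import jakarta.nosql.Entity;
import jakarta.nosql.Id;

@Entity
public class Person {
	@Id
	private String id;

	// Previously jakarta.nosql.mapping.Column
	@Column("name")
	private String name;

	public String getId() { return id; }
	public void setId(String id) { this.id = id; }
	public String getName() { return name; }
	public void setName(String name) { this.name = name; }
}

// Repository types now come from Jakarta Data rather than jakarta.nosql.mapping
@Repository
interface People extends DataRepository<Person, String> {
}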

Release and Future Versions

My next steps are to put this through its paces now that all the issues are closed. Though I've ported everything to the JEE 10 versions, I haven't yet tested most of the new features to make sure they work. While JEE 10 was largely a "cleanup" release, there are a bunch of new features, particularly in Faces, which is in turn always the jankiest part of the stack on Domino.

Post-3.0, I expect that my focus will start to shift to Jakarta EE 11. For a time, I was going to be SOL with it: though Domino 14 bumped Java to 17, JEE 11 was slated to target Java 21 at a minimum. In the meantime, however, that target shifted down to 17, putting it back on the table for Domino. JEE 11 was originally slated for Q1 of this year, but it slipped to some time around summer. That fits reasonably well with my cadence here. JEE 11 is technically also a breaking release, but I suspect that it won't break features that XPages JEE users use, at least not after this hurdle here.

Simplifying the Maven Build of the NSF File Server Project

Wed Apr 10 17:02:09 EDT 2024

When working on the NSF File Server project that I talked about the other day, I took a slightly different tack on building it than I had in the past, and I think it's worth going over some of that in case it's useful for others.

Initial Version

The first version of this project was a non-OSGi WAR file meant to be deployed to an app server like Liberty, not to Domino's OSGi stack, and so it's never involved Tycho. This made it mostly simpler, since its various dependencies are normal Maven dependencies and so I didn't have to worry about any of the annoying hoops.

However, it did have some native Domino dependencies: Notes.jar and the NAPI. These would need to be included as Maven dependencies and brought into the final WAR. The way I handled this was using the generate-domino-update-site project, which lets you first generate a p2 site in the style of the painfully outdated IBM-provided update site and then, if desired, turn that p2 site into more-normal Maven artifacts.

When I eventually switched from targeting a WAR file to having it run on Domino, I used the same dependency structure. The Domino version runs as an HttpService implementation, and so I pointed at the Mavenized version of the com.ibm.xsp.bootstrap and com.ibm.domino.xsp.adapter bundles.

Then, I used the maven-bundle-plugin, which fits the job of taking an otherwise-normal Maven project and making it work in OSGi environments (mostly). The way that plugin works is that you specify a lot of your MANIFEST.MF rules in the pom.xml:

<plugin>
	<groupId>org.apache.felix</groupId>
	<artifactId>maven-bundle-plugin</artifactId>
	<extensions>true</extensions>
	<configuration>
		<instructions>
			<Bundle-SymbolicName>org.openntf.nsffile.httpservice;singleton:=true</Bundle-SymbolicName>
			<Bundle-RequiredExecutionEnvironment>JavaSE-1.8</Bundle-RequiredExecutionEnvironment>
			<Export-Package/>
			<Require-Bundle>
				com.ibm.domino.xsp.adapter,
				com.ibm.commons,
				com.ibm.commons.xml
			</Require-Bundle>
			<Import-Package>
				javax.servlet,
				javax.servlet.http
			</Import-Package>
			<Embed-Dependency>*;scope=compile</Embed-Dependency>
			<Embed-Transitive>true</Embed-Transitive>
			<Embed-Directory>lib</Embed-Directory>
			
			<_removeheaders>Require-Capability</_removeheaders>
			
			<_snapshot>${osgi.qualifier}</_snapshot>
		</instructions>
	</configuration>
</plugin>

The first couple are one-for-one matches to what you'd have in the MANIFEST.MF, but things get weird once you get to the "Embed-*" ones.

The Embed-Dependency instruction is a potent one: you give it a description of what dependencies you want embedded in your OSGi bundle (in this case, all my non-provided dependencies), and then it does the job of copying them into your final bundle JAR. You can do this other ways - copying them manually, using the Maven Dependency Plugin, or others - but this handles all your transitive stuff nicely for you, thanks to Embed-Transitive. I use Embed-Directory here just for cleanliness - the result is functionally the same without it.

The final bits are just for cleanliness: I remove Require-Capability to avoid some trouble I had with older Domino versions, and then I set what the snapshot value will be, which ends up being the current build time.

With this, I end up with a single OSGi bundle with everything in it. This works well for this sort of project - with something to be used in Designer, I prefer to make a big pool of distinct OSGi bundles to make it so that you can look up the source properly, but something server-only like this doesn't need that.

2.0 Version

In this new version, the switch to JNX meant that I was tantalizingly close to not having to do any weird dependency stuff: JNX is distributed in Maven Central, so I didn't need to have the weird locally-built stuff like I did for Notes.jar and the NAPI.

However, that wasn't everything: there are still the "bootstrap" bundles containing the HttpService superclass and related classes. While I don't need to distribute those anywhere, they're still required to compile the classes - no amount of non-verified text files or the like will get around that.

I came up with another way, though. Java only needs classes that look like the real ones in order to compile, and my compiled classes will be the same regardless of what was on the compile-time classpath. This is critical: I don't actually need any implementation code, and that's the part I can't redistribute. So I made two little Maven modules: com.ibm.xsp.bootstrap.shim and com.ibm.domino.xsp.adapter.shim. These modules contain just a handful of classes, and of those classes only the methods actually referenced by my code. Since these modules are marked as "provided", they won't be bundled into the final JAR.
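
As an illustration of the idea - a hypothetical stub, not the real signatures from those bundles - such a class only declares the members my code references, with no working implementation at all:

package com.example.stub; // hypothetical package; the real shims mirror the bundles they stand in for

/**
 * Compile-only stand-in. Only the members actually referenced by the project are
 * declared, and none of them do anything, since the genuine classes from the Domino
 * runtime are what get loaded on the server.
 */
public class ExampleServiceStub {
	public ExampleServiceStub(Object environment) {
		// Never executed at runtime; the provided-scope module is absent from the final JAR
	}

	public boolean handles(String path) {
		throw new UnsupportedOperationException("compile-time stub");
	}
}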

This squared the circle nicely, where now I can compile the Java side without any weird pre-requisites. Admittedly, the two NSFs in the module set still require an NSF ODP Tooling environment, which is a whole other ball of wax, but it's a step in the right direction.

Other Uses

This technique can be used in other similar projects when you only need a few classes from the XPages stack. For example, if your goal is to just wrap a third-party library and provide it to XPages in Designer, you could probably do this by making a stub implementation of XspLibrary and related classes, and skip the whole generate-domino-update-site step. The more you use from the stack, the less practical this is - for example, the XPages Jakarta EE project reaches into all sorts of crap, and so I can't really do this there. For this, though, it works nicely.

NSF File Server 2.0

Sun Apr 07 13:59:18 EDT 2024

A few years ago, I made a little project that hosts an SFTP server that stores documents in an NSF. I've used it here and there since then - as in the original post, I stashed some company docs in it to have them nicely synced among our Domino servers, and I've also had cases where clients use it to, for example, provide a way for their vendors to upload files in a standard way.

The other week, I decided to dive back into it to add some capabilities I'd wanted for a while, and the result is version 2.0.0. This version is a significant revamp that adds quite a bit.

Multiple Mounts

The first limitation I wanted to improve was the fact that the first version was restricted to a single NSF. That works fine in the basic case, but I wanted to start doing things like storing server config backups in there, and wouldn't necessarily want them in the same NSF as, say, company contracts, credentials, and secrets.

The way I went about this was to make it so that the new configuration NSF has a "Mounts" view that lets you specify a path in a conceptual top level of the filesystem that would then point to a target NSF. This allows the admin to do things like have separate ACLs for different mounts - since the client will act as an authenticated Domino user, these will be properly obeyed, and a user won't be able to access documents they don't have rights to.

Additionally, I could configure it so that not all mounts are present on all servers, which will come into play particularly with the next feature.

Screenshot of the Mounts view in the file server config NSF

New Filesystem Types

Once I had a composite top-level filesystem, I realized that it wouldn't be terribly difficult to allow more filesystem types than the original NSF file store I made. That filesystem is built using the NIO File System Provider framework added in Java 7, and that system is designed to be pretty extensible. By default, Java comes with a few providers: the normal local filesystem as well as one that can treat a ZIP or JAR as a contained filesystem itself. These are accessed in a generic way, where you specify a URI and a map of "env" properties that are provider-specific.

For example, the ZIP filesystem takes a URI in the form of "jar:file:/some/path/to/file.zip" and an environment map configuring whether the filesystem should be created if it doesn't already exist in memory and what encoding to use for filenames (very important if you have Unicode characters in there).
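
Concretely, that generic usage looks something like this with the stock ZIP provider (a standalone sketch, not code from the project):

import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class ZipFsExample {
	public static void main(String[] args) throws Exception {
		URI uri = URI.create("jar:file:/some/path/to/file.zip");
		Map<String, String> env = new HashMap<>();
		env.put("create", "true"); // create the archive if it doesn't already exist
		try (FileSystem zipFs = FileSystems.newFileSystem(uri, env)) {
			Path readme = zipFs.getPath("/readme.txt");
			Files.write(readme, "hello from NIO".getBytes(StandardCharsets.UTF_8));
		}
	}
}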

So I added ways to configure a mount to the local server filesystem (similar to what the Mindoo FTP Server does) and then a generic configuration for any installed provider. It's probably uncommon that you will have a custom File System Provider implementation in your Java classpath, but hey, maybe you do, and I want to allow that to work.

I also added an extension point to the project itself that allows adding new providers via plugin.xml files in OSGi, and I can think of a couple other projects that may use this, like the NSF ODP Tooling.

WebContent Filesystem

Beyond adding the JVM-provided systems, I wrote another new filesystem type, one that provides access to the conceptual "WebContent" directory presented in Package Explorer in Designer:

Screenshot showing Designer and Transmit looking at the same WebContent in an NSF

The idea here is that this could be used to deploy, say, a JavaScript client application to an NSF without the developer or build server having to know anything about Domino. Pretty much everything can work with SFTP, so this makes accessing those files a lot easier. This is similar to the WebDAV capabilities Domino has had for a very long time, but with a different protocol.

Server Keypairs

In the first version, the app would generate and store the server's SSH keypair on the filesystem, in the data directory. This is fine, but part of the point of this whole project is that I like to get away from non-replicating stuff, and so I moved these keys to the configuration NSF. Now, on first connection, the server will look for a keypair document in the NSF and, if it doesn't exist, will generate a new one and store it there. Since I've been working with encrypted fields a lot for client work lately, I also realized that this was a good use for it: the public key is a normal text item (so you could distribute and verify it as needed), but the private key is encrypted with the generating server's ID file. Since only the server itself ever needs to know its private key, this works swimmingly.
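
The key handling itself is ordinary Java, for what it's worth. Generating a pair and text-encoding it for storage in document items looks roughly like this - purely illustrative, not the project's actual code:

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Base64;

public class KeypairExample {
	public static void main(String[] args) throws Exception {
		KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
		gen.initialize(2048);
		KeyPair pair = gen.generateKeyPair();
		// The public key can live in a normal text item; the private key item would be encrypted with the server ID
		String publicText = Base64.getEncoder().encodeToString(pair.getPublic().getEncoded());
		String privateText = Base64.getEncoder().encodeToString(pair.getPrivate().getEncoded());
		System.out.println("Public key item: " + publicText);
		System.out.println("Private key item (stored encrypted): " + privateText.substring(0, 16) + "...");
	}
}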

JNX

This isn't a new app feature per se, but this was a good situation for me to put JNX to work in an open-source project. I had originally written this using the lotus.domino classes for most work and the IBM NAPI for things like generating sessions for a given username, but switching to JNX let me ditch both of those.

Admittedly, this is a case where switching to JNX didn't grant me significant new capabilities, but it DID let me do a couple things better. Some things are distinct feature improvements, like improving password authentication (previously, I was doing a compare of hashes "manually", which is fragile), while others are just making the code smoother, like no longer having to do the read-convert-recycle dance with DateTimes in LSXBE. It's just pleasant, and let me find a few places where the JNX API could be improved.
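
For anyone who hasn't had the pleasure, that dance with the lotus.domino classes looks roughly like this for every date value you read:

import java.util.Date;

import lotus.domino.DateTime;
import lotus.domino.Document;
import lotus.domino.NotesException;

public class DateReadExample {
	public static Date readCreated(Document doc) throws NotesException {
		// Read the handle, convert it, and recycle it - every time, for every value
		DateTime dt = doc.getCreated();
		try {
			return dt.toJavaDate();
		} finally {
			dt.recycle();
		}
	}
}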

Future Additions

When I pick this project back up, there are certainly a couple things I'd like to add.

One would be to look into rsync support: rsync is tremendously useful for things like synchronizing filesystem-bound configs, but it's its own protocol tunneled over SSH, and so just having SFTP isn't enough. The underlying Apache Mina SSHD project is a general SSH server and not just SFTP, so it may be possible to do it by intercepting the commands sent over to initialize rsync, but it will be non-trivial. There's a library in Java that provides an rsync server, but it's GPL-licensed, and so I have to keep away for license-safety's sake.

Beyond that, it's mostly that I'd like to implement more filesystem types. Presenting data as a filesystem can be a very powerful tool: you could imagine providing access to documents in a DB as DXL or YAML, or listing files from a Document Library NSF, or (as I'd like to do some day) having the NSF ODP Tooling project replicate the ODP layout over SFTP.

For now, I'm looking forward to putting it to more use as a coordinating point. If I keep messing around with apps on TrueNAS, it'll give me a good feeling of security to have more info stashed in Domino and less prone to destruction if one server happens to blow up.

Realmz

Sun Mar 31 11:35:14 EDT 2024

For a while now, I've wanted to just kind of gush about an old Mac game I played when I was a teenager, and the last day of Marchintosh for the year is as good a time as any.

Overview

Realmz is a game that ran on the classic Mac OS and, in later versions, Windows. It was shareware at the time - one of the few shareware games I ended up cobbling together the money for - but has long been made fully available for free, with my go-to source being the Macintosh Garden. If you have SheepShaver around, it works nicely there.

The game itself is quickly identified as a party-based fantasy RPG. I didn't really realize it at the time, but it's a full-on CRPG in the nerdiest sense. I mean, look at this freaking character sheet:

Screenshot of the Realmz character creation sheet

While it's not strictly D&D rules, it basically is. Older versions (which are also available on the Macintosh Garden) even used THAC0 before switching to an "Armor Rating" system.

CRPG

Looking back, I'm glad I had an experience with such a true-blood CRPG at the time. I didn't play D&D growing up, didn't play the Gold Box games, and was too busy playing pretty much exclusively Blizzard games to play the Infinity Engine games or Neverwinter Nights when they came out. It wasn't really until Dragon Age: Origins and then (especially) Pillars of Eternity that I realized the glory of the genre. But looking at Realmz, it's obvious that it's right in the same lineage.

Combat is strictly turn-based, takes place on a grid, and is suitably technical:

Screenshot of a combat situation in Realmz

It even does some of the weird stuff: for example, martial characters won't just get multiple attacks per round, but will also get "partial" steps like my rogue Hebs there, who gets three attacks every two rounds, as a stepping stone to 2 / 1.

Realmz also has its own mechanics-heavy take on the thing CRPGs try to do where they want to emulate an open-ended experience a DM might oversee beyond just combat. For example, early on, you meet a kid who wants you to help his dog, which is stuck in a well. When you get there, you're presented with the "encounter" screen, where you can try all sorts of things:

Screenshot of the Realmz encounter screen

There are a lot of ways to deal with these encounters. In this case, I might have Galba there do an Acrobatic Act, which has about even odds. My sorcerer Fenton there might use a Spider Climb (might not be the name) spell to make scaling the well effortless. Or, if I stocked up, I might just use a rope. You can easily fail this - if you do, the kid runs off crying and you have to wait for the guards to show up to help you, with no experience gain. Realmz has a bunch of these scenarios and they're pretty neat. Admittedly, they fall short in the ways that all non-DM-run games eventually do, where your actual options aren't truly limitless. The "Speak" option is available in other situations, but it's only ever really practical if you have, say, a magic word to open a door or something. It's not a true tabletop experience, but it's trying, bless its heart.

Mac-ness

One thing I really enjoy about games in the heyday of Mac shareware games (by the way, read The Secret History of Mac Gaming if you haven't - it's great) is how thoroughly Mac-like they are. For both practical and cultural reasons, a lot of Mac games didn't necessarily take over the whole screen with their own interface like DOS and Windows games usually do. While there are some Windows games that use the Windows UI, like another small classic Castle of the Winds, it's very common in Mac games. For example, there's Scarab of Ra:

Screenshot of Scarab of Ra from Macintosh Garden

As it happens, Scarab of Ra is another game where I didn't appreciate its lineage at the time: it's a true roguelike, albeit with a first-person perspective.

Realmz doesn't go quite as hard in using native widgets for everything, but you can see the menu bar in earlier screenshots - you use the normal Mac menu to access game commands, the bestiary, your ally list, your collected notes, and so forth. It's just neat. Also, like a lot of Mac software at the time, Realmz's program directory is just a delight to look at:

Screenshot of the Realmz 2.5 installation folder

My use of the "Drawing Board" Appearance Manager theme helps it too, but just check out those icons. That sort of thing wasn't strictly necessary, but it was the Mac way, and it was wonderful.

Versions

And this isn't exactly a Mac-like attribute, but I like that Realmz wasn't afraid of using version numbers. It went from version 1.x all the way up through 8.x, with minor and patch versions along the way. It was updated all the time, and it was always exciting to see a new major version and find out what the big changes were.

Mostly, the changes were things like adding classes: the old versions have the same sort of handful you'd find in basic D&D, while the later ones have so many that you can pick between "Archer" and "Marksman" or "Bard" and "Minstrel". Some of the changes were less like finding a D&D source book and more like the game gradually morphing into its own sequel, though.

For example, the original versions didn't have music of any kind, as was the style at the time. Somewhere along the line (version 5, I think), it gained music, and... boy, it's a doozy. Here's, for example, the camping music:

What I assume happened is that the developer wanted to add some music and then found some free or cheap module files and slapped them in there where they kind of work. The tone is absolutely bizarre, and it's kind of great for it.

It was just neat seeing the game progress, with the changes in systems and new features, even the "eh, not the best idea" stuff like parts of dungeons that switch to a first-person mode.

Scenarios

I have to admit that, though I played a ton of Realmz, I never even got that far into it. A big part of that was that the scenarios beyond the starting City of Bywater also cost money above the core game, and it's a tall order for a cash-strapped teenager to cough up money at all, let alone add-on costs. So my characters probably always plateaued around level 10, as there's just not that much to do in the base scenario. That's good news for future me, though: there's a ton of stuff waiting for me whenever I want to get back into it.

If you have a taste for older games or CRPGs in general, I definitely suggest you give it a try. The Windows version may be easier to run than the Mac one, though it loses some of the appeal. Depending on your temperament, Realmz may be easier to get into from scratch than games like, say, the original Baldur's Gate, with the latter's janky RTwP combat and constant fourth-wall-breaking dweebiness. Definitely keep it in mind for a cozy-day game, I say.

Adding Code Coverage Reports To Domino-Container-Run Tests

Mon Mar 11 15:33:02 EDT 2024

Tags: docker testing

When you're writing test suites for your code, it can be very useful to use a tool to analyze the code coverage of your tests. While people can get a little obsessive about coverage percents, there's certainly no denying that it's helpful to know how much of your code is actually run when testing, and also being able to look down into the specifics of what is covered.

With Java, one of the preeminent tools for this is JaCoCo, a venerable open-source library that you can integrate with your test suites to give reports of your coverage. In a normal project, such as a build run via Maven, you can use the Maven plugin in tandem with the Maven Surefire and Failsafe plugins. However, things get more complicated if the code you're actually testing isn't in the Surefire JVM, but rather inside a container.

That's exactly the situation I have with the integration-test suite of the XPages Jakarta EE project, where it creates a Docker container with the current build of the project deployed as OSGi plugins, and then executes HTTP calls against OSGi bundles and NSFs. I figured this was a solvable problem, so I set out doing so.

I first came across this blog post, which describes the general idea well, but unfortunately references Gists that seem to no longer exist. Still, it gave me a good starting point.

Installing JaCoCo in Domino

The first thing I had to do was to get the JaCoCo Java agent into the container. I added it as a Maven dependency to the IT suite project:

<dependency>
	<groupId>org.jacoco</groupId>
	<artifactId>org.jacoco.agent</artifactId>
	<version>0.8.11</version>
	<scope>test</scope>
</dependency>

Conveniently, this dependency is itself a wrapper for the agent JAR and comes with a convenience method for accessing the JAR data. I used that to read it into memory and send it to the Docker runtime during the container build:

byte[] agentData;
try(InputStream is = AgentJar.getResourceAsStream()) {
	agentData = IOUtils.toByteArray(is);
}
withFileFromTransferable("staging/jacoco.jar", Transferable.of(agentData)); //$NON-NLS-1$

The use of Transferable here allows me to keep the process independent of whether Docker is running locally or remote - I run remotely almost all the time nowadays, due to Domino's continued lack of an ARM port.

With the file in place, I modified my Dockerfile to copy it to a known location in the container:

COPY --chown=notes:notes staging/jacoco.jar /local/
COPY --chown=notes:notes staging/JavaOptionsFile.txt /local/

The JavaOptionsFile.txt was already there for another ARM-related reason, but it's important to note for the next step. This sort of file is how you enable JaCoCo in the Domino JVM: I set JavaUserOptionsFile=/local/JavaOptionsFile.txt and it'll read its rules from there. Following the instructions, I added -javaagent:/local/jacoco.jar=output=file,destfile=/tmp/jacoco.exec on its own line in this file. This causes JaCoCo to be automatically loaded with the HTTP JVM and to store its report in the named file on shutdown.

Reading the Data

That said, this didn't work immediately. The file "/tmp/jacoco.exec" was created properly inside the container, so the agent was running, but the file content was always zero bytes. I realized that this was due to the merciless way in which the container is killed by my test suite: there's no proper shutdown step, and so JaCoCo's shutdown hook never fires.

Fortunately, writing to a file isn't the only way JaCoCo can do its reporting - you can also have it open up a TCP port to connect to and read. So I changed the Java option line to:

-javaagent:/local/jacoco.jar=output=tcpserver,address=*,port=6300

I modified the withExposedPorts(...) call inside the class that builds my Testcontainers container to also include 6300, and then used getMappedPort(6300) to identify the actual randomized port mapped by Docker.

The remaining task was to figure out the little protocol used by JaCoCo to signal that it should collect and return its data. I get the impression that it's not too complicated, but I still figured it'd be best to use an existing implementation. I found jacocotogo, a Maven plugin that reads the data, and it looked promising. However, it had two problems: being a Maven plugin, it came with a bunch of transitive dependencies I didn't want, and it's also 11 years old and thus a bit out of date.

I ended up forking the main utility class, trimming out the parts I didn't need (like JMX), switching it to NIO, and going from there.
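
If you'd rather not carry a fork, I believe JaCoCo's own org.jacoco.core artifact also ships a small client for the tcpserver protocol. Assuming its ExecDumpClient API works the way I remember, pulling the data down would look something like this:

import java.io.File;

import org.jacoco.core.tools.ExecDumpClient;
import org.jacoco.core.tools.ExecFileLoader;

public class CoverageDownload {
	public static void saveCoverage(String dockerHost, int mappedPort) throws Exception {
		// Connect to the agent's TCP port (via the Docker-mapped port) and request a dump
		ExecDumpClient client = new ExecDumpClient();
		ExecFileLoader loader = client.dump(dockerHost, mappedPort);
		// Write the execution data where the build expects it
		loader.save(new File("target/jacoco.exec"), false);
	}
}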

Using the Data

With that all in place, a test run will end up with a file named "jacoco.exec" inside the "target" directory. Using this file varies by IDE, but, in Eclipse, you can install the EclEmma tool, open the "Coverage" view, right-click in the table area, and choose "Import Session...". That will let you locate the file and then choose the projects from your workspace that you're looking to analyze.

When I did that, I got my results:

Screenshot of Eclipse's Coverage tool detailing my test suite's coverage of somewhere around 50-65%

This is surprisingly good for the project, especially when you consider how large chunks of the red bars are things like the servlet wrapper package, which includes a lot of delegating code that is obligatory to match the interface but is not likely to be actually used in practice.

While this is currently the only project where I've needed to do this, it'll certainly be good to keep these techniques in mind. The TCP port thing in particular should be handy in future edge cases even without the Docker part.

Homelab Rework: Phase 3 - TrueNAS Core to Scale

Sat Mar 02 14:00:08 EST 2024

  1. Jun 25 2023 - Planning a Homelab Rework
  2. Jul 10 2023 - Homelab Rework: Phase 1
  3. Sep 15 2023 - Homelab Rework: Phase 2
  4. Mar 02 2024 - Homelab Rework: Phase 3 - TrueNAS Core to Scale

When I last talked about the ragtag fleet of computers I generously call a "homelab" now, I had converted my gaming/VM machine back from Proxmox to Windows, where it remains (successfully) to this day.

For a while, though, I've been eyeing converting my NAS from TrueNAS Core to Scale. While I really like FreeBSD technically and philosophically, running Linux was very appealing for a number of reasons. Still, it was a high-risk operation, even though the actual process of migration looked almost impossibly easy. For some reason, I decided to take the plunge this week.

The Setup

Before going into the actual process, I'll describe the setup a bit. The machine in question is a Mac Pro 1,1: two Xeon 5150s, four traditional HDDs for storage, and a handful of M.2 drives. The machine itself is far, far too old to have NVMe on the motherboard, but it does have PCIe, so I got a couple adapter cards. The boot volume is a SATA M.2 disk on one of them, while I have some actual NVMe ones serving as cache/log devices in the ZFS pool. Also, though everything says that the maximum RAM capacity is 32 GB, I actually have 64 in there and it's worked perfectly.

It's a bit of a weird beast this way, but those old Mac Pros were built to last, and it's holding up.

Also, if you're not familiar with TrueNAS and its different variants, it's worth a bit of explanation. TrueNAS Core (née FreeNAS) is a FreeBSD-based NAS-focused OS. You primarily interact with it via a web-based GUI and its various features heavily revolve around the use of ZFS, while its app system uses FreeBSD jails and its VM system uses Bhyve. TrueNAS Scale is a related system, but based on Debian Linux instead of FreeBSD. It still uses ZFS, and its GUI is similar to Core, but it implements its apps and VMs differently (more on this in a bit). For NAS/file-share uses, there's actually less of a difference than you might think based on their different underlying OSes, but the distinctions come into play once you go beyond the basics.

The Conversion

If anything, the above-linked documentation overstates the complexity of the operation. I didn't even need to go the "manual update" route: I went to the Update panel, switched from the TrueNAS Core train to the current non-beta TrueNAS Scale one, hit Update, and let it go. It took a long time, presumably due to the age of the machine, but it did its job and came back up on its own.

Well, mostly: for some reason, the actual data ZFS pool was sort of half-detached. The OS knew it was supposed to have a pool by its name, but didn't match it up to the existing disks. To fix this, I deleted the configuration for the pool (but did not delete the connected service configuration) and then went to Import Pool, where the real one existed. Once it was imported, everything lined back up without further issue.

Being basically a completely-different OS, there are a number of features that Core supports but Scale doesn't. Of that list, the only one I was using was the plugin/jail system, but I had whittled my use down to just Postgres (containing only discardable dev data) and Plex. These are both readily available in Scale's app system, and it was quick enough to get Plex re-set-up with the same library data.

Apps

As I mentioned, TrueNAS Core uses a custom-built "plugin" system sitting on top of the venerable FreeBSD jail capabilities. Those jails are similar in concept to things like Docker containers, and work very similarly in practice to the Linux Containers system I experienced with Proxmox.

TrueNAS Scale, for its part, uses Kubernetes, specifically by way of K3s, and provides its own convenient UI on top of it. Good thing it does provide this UI, too, since Kubernetes is a whole freaking thing, and I've up until this point stayed away from learning it. I guess my time has come, though. Kubernetes is distinct from Docker - while older versions used Docker as a runtime of sorts, this was always an implementation detail, and the system in use in current TrueNAS Scale is containerd.

Setting aside the conceptual complexity of Kubernetes, this distinction from Core is handy: while not being Docker, Kubernetes can consume Docker-compatible images and run them, and that ecosystem is huge. Additionally, while TrueNAS ships with a set of common app "charts" (Plex included), there's a community project named TrueCharts that adds definitions for tons and tons more.

Domino

That brings me to our beloved Domino. I had actually kind of gotten Domino running in a jail on TrueNAS Core, but it was much more an exercise in seeing if I could do it than anything useful: the installer didn't run, so I had to copy an installation from elsewhere, and the JVM wouldn't even load up without crashing. Neat to see, but I didn't keep it around.

The prospect on Scale is better, though. For one, it's actually Linux and thus doesn't need a binary-compatibility shim like FreeBSD has, and the container runtime meant I could presumably just use the normal image-building process. I could also run it in a VM, since the Linux hypervisor works on this machine while bhyve did not, but I figured I'd give the container path a shot.

Before I go any further, I'll give a huge caveat: while this works better than running it on FreeBSD, I wouldn't recommend actually doing what I've done here for production. While it'll presumably do what I want it to do (be a local replica of all of my DBs without requiring a distinct VM), it's not ideal. For one, Domino plus Kubernetes is a weird mix: Kubernetes is all about building up and tearing down a swarm of containers dynamically, while Domino is much more of a single-server sort of thing. It works, certainly, but Kubernetes is always there to tempt you into doing things weird. Also, I know almost nothing about Kubernetes anyway, so don't take anything I say here as advice. It's good fun, though.

That said, on to the specifics!

Deploying the Container

The way the TrueNAS app UI works, you can go to "Custom App" and configure your container by referencing a Docker image from a repository. I don't normally actually host a Docker registry, instead manually loading the image into the runtime. It might be possible to do that here, but I took the opportunity to set up a quick local-network-only one on my other machine, both because I figured it'd be neat to learn to do that and because I forgot about the Harbor-hosted option on that link.

Since the local registry used HTTP and there's nowhere in the TrueNAS UI to tell it to not use HTTPS, I followed this suggestion to configure K3s to explicitly map it. With that in place, I was able to start pulling images from my registry.

The Domino Version

One quirk I quickly ran into was that I can't use Domino 14 on here. The reason for this isn't an OS problem, but rather a hardware limitation: the new glibc that Domino 14 uses requires the "x86-64-v2" microarchitecture level and the Xeon 5150 just doesn't have that by dint of pre-dating it by two years.

That's fine, though: I really just want this to house data, not app development, and 12.0.2 will do that with aplomb.

Volume Configuration

The way I usually set up a Domino container when using e.g. Docker Compose is that I define a handful of volumes to go with it: for the normal data dir, for DAOS, for the Transaction Log, and so forth. This is a bit of an affectation, I suppose, since I could also just define one volume for everything and it's not like I actually host these volumes elsewhere, but... I don't know, I like it. It keeps things disciplined.

Anyway, I originally set this up equivalently in the Custom App UI in TrueNAS, creating a "Volume" entry for each of these. However, I found that, for some reason, Domino didn't have write access to the newly-created volumes. Maybe this is due to the uid the container is built to use or something, but I worked around it by using Host Path Volumes instead. The net effect is the same, since they're in the same ZFS pool, and this actually makes it easier to peek at the data anyway, since it can be in the SMB share.

Once I did that and made sure the container user could modify the data, all was well. Mostly, anyway.

Transaction Logs, ZFS, and Sector Size

Once Domino got going, I noticed a minor problem: it would crash quickly, every time. Specifically, it crashed when it started preparing the transaction log directory. I eventually remembered running into the same problem on Proxmox at one point, and it brought me back to this blog post by Ted Hardenburgh. Long story short, my ZFS pool uses 4K sectors and Domino's transaction logs can't deal with that, at least in 12.0.2 and below.

This put me in a bit of a sticky spot, since the way to change this is to re-create the entire pool and I really didn't want to do that.

I came up with a workaround, though, in the form of making a little disk image and formatting it ext4. You can use a loop device to mount a file like a disk, so the process looks like this:

dd if=/dev/zero of=tlog.img bs=1G count=1
sudo /sbin/losetup --find --show tlog.img
sudo mkfs.ext4 /dev/loop0
sudo mount /dev/loop0 /mnt/tlog

That makes a 1GB disk image, formats it ext4, and mounts it as "/mnt/tlog". This process defaults to 512-byte sectors, so I made a directory within it writable by the container user (more on this shortly), configured the Domino container to map the transaction log directory to that path, and all was well.

Normally, to get this mounted at boot, you'd likely put an entry in fstab. However, TrueNAS assumes control over system configuration files like that, and you shouldn't edit them directly. Instead, what I did was write a small script that does the losetup and mount lines above and added an entry in "System Settings" - "Advanced" - "Init/Shutdown Scripts" to run this at pre-init.

Networking

The next hurdle I wanted to get over was the networking side. You can map ports in apps in a similar way to what you'd do with Docker, but you have to map them to a port 9000 or above. That would be an annoying issue in general, but especially for NRPC. Fortunately, the app configuration allows you to give the container its own IP address in the "Add external Interfaces" (sic) configuration section. Since the virtual MAC address changes each time the container is deployed, I gave it a static IP address matching a reservation I carved out on my DHCP server, pointed it to the DNS server, and all was well. All of Domino's open ports are available on that IP, and it's basically like a VM in that way.

Container User

Normally, containers in TrueNAS's app system run as the "apps" user, though this is configurable per-app. The way the Domino container launches, though, it runs as UID 1000, which is notes inside the container. Outside the container, on my setup, that ID maps to... my user account jesse.

Administration-wise, that's not exactly the best! In a less "for fun" situation, I'd change the container user or look into UID mapping as I've done with Docker in the past, but honestly it's fine here. This means it's easy for me to access and edit Domino data/config files over the share, and it made the volume mapping above work without incident. As long as no admins find out about this, it can be my secret shame.

Future Uses

So, at this point, the server is doing the jobs it was doing previously, plus acting as a nice extra replica server for Domino. It's positioned well now for me to do a lot of other tinkering.

For one, it'll be a good opportunity for me to finally learn about Kubernetes, which I've been dragging my feet on. I installed the Portainer chart from TrueCharts to give me a look into the K8s layer in a way that's less abstracted than the TrueNAS UI but more familiar and comfortable than the kubectl tool for me for now.

Additionally, since the hypervisor works on here, it'll be another good location for me to store utility VMs when I need them, rather than putting everything on the Windows machine (which has half as much RAM).

I could possibly use it to host servers for various games like Terraria, though I'm a bit wary of throwing such ancient processors at the task. We'll see about that.

In general, I want to try hosting more things from home when they're non-critical, and this will definitely give me the opportunity. It's also quite fun to tinker with, and that's the most important thing.