Using Custom DNS Configurations With CertMgr

Thu Oct 24 18:51:20 EDT 2024

The most common way I expect people to use Domino's CertMgr/certstore.nsf is with Let's Encrypt and the default HTTP-based validation. This is common in other products too and usually works great, but there are cases where it's not what you want. I hit two recently:

  • My Domino servers sit behind Traefik reverse proxies to blend them with other Docker-based services, and by default the HTTP challenge doesn't play well with an existing HTTP->HTTPS redirect
  • I also have dev servers at home that aren't publicly-visible at all, and so can't participate in the HTTP flow

The first hasn't been trouble until recently, since the reverse proxy handles its own certificate fine, but now I want a proper certificate for other services like DRAPI. For the second, I've had a semi-manual process: my pfSense-based router uses its ACME certificates package to do the dns-01 challenge (since it knows how to talk to my DNS provider) and then, every three months, I would export that certificate and import it into certstore.nsf.

Re-Enter CertMgr

Domino's CertMgr can handle those DNS challenges just fine, though, and the HCL-TECH-SOFTWARE/domino-cert-manager project on GitHub contains configuration documents for several common providers/protocols.

For historical reasons (namely: I didn't like Network Solutions in 2000), I use joker.com as my registrar, and they're not in the default list. Indeed, it seems like their support for this process is very much a "oh geez, everyone's asking us for this, so let's hack something together" sort of thing. Fortunately, the configuration docs are adaptable with formula (and other methods) - I'll spare you the troubleshooting details and get to the specifics.

DNS Provider Configuration

In certstore.nsf, the "DNS Configuration" view lets you create configuration documents for custom providers. Before I go further, I'll mention that I put the DXL of mine in OpenNTF's Snippets collection - certstore.nsf has a generic "Import DXL" action that lets you point to a file and import it, made for exactly this sort of situation.

Anyway, the meat of the config document happens on the "Operations" tab. This tab has a bunch of options for various lookup/query actions that different providers may need (say, for pre-request authorization flows), but we won't be using most of those here.

Operations

Our type here is "HTTP Request" - there are options to shell out to a command or run an agent if you need even more flexibility, but that type should handle most cases.

The "Status formula" field controls what Domino considers the success/failure state of the request. It contains a formula that will be run in the context of a consistent document used across each phase. If your provider responds with JSON, the JSON will be broken down into JSONPath-ish item names, as you can see in the HCL-provided examples. You can then use that to determine success or failure. Joker replies in a sparse human-readable text format, but does set the HTTP status code nicely, so I set this to ret_AddStatus.

The "DNS provider delay" field indicates how long the challenge check will wait after the "Add" operation, and that latency will depend on your specific provider. I did 20 seconds to be safe, and it's proven fine.

During development, setting "HTTP request tracing" to "Enabled" is helpful for learning how things go, and then "Write trace on error" is likely the best choice once things look good.

HTTP Lookup Request

For Joker, you can leave this whole section blank - its "API" doesn't support this and it's optional, so ignore it.

HTTP Add Request

This section is broken up into two parts, "Query request" and "Request". Set/leave the "Query request type" to "None" or blank, since it's not useful here.

Now we're back into the meat of the configuration. For "Request type", set it to "POST".

"URL formula" should be cfg_URL, which represents the URL configured above. Other providers may have URL permutations for different operations, but Joker has only the one.

Joker is very picky about the Content-Type header, so set the "Header formula" field to "Content-Type: application/x-www-form-urlencoded", which will include that constant string in the upload.

Things get a bit more complicated when it comes to the "Post data formula". For this, we want to match the format from Joker's documentation, but we also need to do a bit of processing based on the specific name you're asking for. Some DNS providers want you to send a full DNS key value like _acme-challenge.foo.example.com, while others (like Joker) want just the label relative to the zone, like _acme-challenge.foo. So we do a check here:

txtName := @If(@Ends(param_DnsTxtName; "."+cfg_DnsZone); @Left(param_DnsTxtName; "."+cfg_DnsZone); param_DnsTxtName);

"username=" + @UrlEncode("Domino"; cfg_UserName) + "&password=" + @UrlEncode("Domino"; cfg_Password) + "&zone=" + @UrlEncode("Domino"; cfg_DnsZone) + "&label=" + @UrlEncode("Domino";txtName) + "&type=TXT&value=" + @UrlEncode("Domino"; param_DnsTxtValue)

In my experience, this covers both single-host certificates and wildcard certificates.

HTTP Delete Request

This is for the cleanup step, so your DNS isn't littered with a bunch of useless TXT challenge records.

As before, make sure "Query request type" is "None" or blank.

Similarly, "Request type", "URL formula", and "Header formula" should all be the same as in the "Add" section.

Finally, the "Post data formula" is almost the same, but sets the value to nothing:

txtName := @If(@Ends(param_DnsTxtName; "."+cfg_DnsZone); @Left(param_DnsTxtName; "."+cfg_DnsZone); param_DnsTxtName);

"username=" + @UrlEncode("Domino"; cfg_UserName) + "&password=" + @UrlEncode("Domino"; cfg_Password) + "&zone=" + @UrlEncode("Domino"; cfg_DnsZone) + "&label=" + @UrlEncode("Domino";txtName) + "&type=TXT&value="

Putting It To Use

Once you have your generic provider configured, you can create a new Account document in the "DNS Providers" view.

In this document, set your "Registered domain" to your, uh, registered domain - in my case, "frostillic.us". This remains the case even if you want to register wildcard certificates for subdomains, like if I wanted "*.foo.frostillic.us". CertMgr uses this value to match your certificate requests, and it matches subdomain wildcards too.

There's a lot of room for special tokens and keys here, but Joker only needs three fields:

"DNS zone" is again your domain name.

"User name" is the user name you get when you go to your DNS configuration and enable Dynamic DNS - it's not your normal account name. This is a good separation, and a lot of other providers will likely have similar "don't use your normal account" stuff.

Similarly, "Password" is the Dynamic-DNS-specific password.

Joker account configuration

TLS Credentials

Your last step is to actually put in the certificate request. This stage is pretty much identical to the normal process, with the extra capability that you can now make use of wildcard certificates.

On this step, you can fill in your host name, servers with access, and then your ACME account. Even more than with the normal process, it's safest to start with "LetsEncryptStaging" instead of "LetsEncryptProduction", to avoid Let's Encrypt temporarily blocking you if you make too many requests.

With a custom provider, I recommend opening up a server console for your CertMgr server before you hit "Submit Request", so that you can see its progress as it goes. You can potentially get more info if you launch CertMgr as load certmgr -d for debug output. Anyway, with that open, you can click "Submit Request" and let it rip.

As it goes, you'll see a couple lines reading "CertMgr: Error parsing JSON Result" and so forth. This is normal and non-fatal - it comes from the default behavior of trying to parse the response as JSON and failing, but it should still put the unparsed response in the document. What you want is something at the end starting "CertMgr: Successfully processed ACME request", and for the document in certstore.nsf to get its nice little lock icon. If it fails, check the error message in the cert document as well as the document in the "DNS Trace Logs" view - that will contain logs of each step, and all of the contextual information written into the doc used by your formulas.

Wrapping Up

This process is, unfortunately, necessarily complicated - since each DNS provider does their own thing, there's not a one-config-fits-all option. But the nice thing is that, once it's configured, it should be good for a long while. You'll be able to generate certificates for non-public servers and wildcard certificates at will, and that makes a lot of things a lot more flexible.

PSA: ndext JARs on Designer 14 FP1 and FP2

Thu Sep 12 11:02:18 EDT 2024

Tags: java xpages
  1. Oct 19 2018 - AbstractCompiledPage, Missing Plugins, and MANIFEST.MF in FP10 and V10
  2. Jan 07 2020 - Domino 11's Java Switch Fallout
  3. Jan 29 2021 - fontconfig, Java, and Domino 11
  4. Nov 17 2022 - Notes/Domino 12.0.2 Fallout
  5. Dec 15 2023 - Notes/Domino 14 Fallout
  6. Sep 12 2024 - PSA: ndext JARs on Designer 14 FP1 and FP2
  7. Dec 16 2024 - PSA: XPages Breaking Changes in 14.0 FP3
  8. Jun 17 2025 - Notes/Domino 14.5 Fallout

Back when Notes/Domino 14 came out, I made a post where I described some of the fallout of it. One of the entries was about the upstream removal of the "jvm/lib/ext" directory and the moving of all common extension JARs to the "ndext" directory. The upshot there was that any JARs that you want to add to the filesystem in Designer to match deployment on the server would have to be added to the active JRE in Designer in order to be recognized.

HCL presumably noticed this problem and altered the installation to accommodate it in FP1 and FP2. However, the approach they took was to add all of the JARs from ndext to the JVM definition. Thus, a fresh install+upgrade of Notes 14 to FP2 (or 14.5 EAP1) has a JVM that looks like this:

Screenshot of the 'Edit JRE' screen in Designer 14 FP2

This is a problem in a couple ways, but the most immediate is that it includes the toxic "jsdk.jar" I warned about in the earlier post. This JAR contains primordial Servlet classes from the very first addition of Servlet to Domino, predating XPages, and that version lacks even the convenience methods added in the ancient-but-less-so version in XPages. To demonstrate this, you can write this code:

HttpServletRequest req = null; /* pretend this is assigned to something */
Map param = req.getParameterMap();

This will work in a clean Designer 14 installation but will break on upgrade to 14 FP2, with Designer complaining that the getParameterMap method does not exist. There are others like this too, but basically any "The method foo() is undefined..." error for Servlet classes is a sign of this.

The fix is to go into your JVM definition (Preferences - "Java" - "Installed JREs" - "jvm (default)" - "Edit...") and remove jsdk.jar. While you're in there, I recommend also removing POI and its related JARs (poi-*, xmlbeans, ooxml-schemas, fr.opensagres.poi.*, commons-*) too, unless you also happen to have deployed them to the server, since they're not normally present on Domino and are thus mostly there to lead you astray. Honestly, almost none of the JARs present in there by default are useful for the XPages JVM definition, since the critical ones are contributed via OSGi plugins. I guess guava.jar is important just because it's going to contaminate the server's JVM too, so you want to account for that. Otherwise, it's probably best to treat it like a 14 install and only add the new JARs you've explicitly added and deployed to the server.

Recent Open-Source Project Updates

Fri Sep 06 14:25:34 EDT 2024

I've released a spate of open-source project updates recently, and I figured it'd be good to round up what's new. Most of them are utilitarian in nature - mostly fixes for things that crop up with Domino 14 and Java > 8 - but the first one is larger.

XPages Jakarta EE

Today, I released version 3.1.0 of the XPages JEE project. This is mostly about fixing up some edge-case and sporadic bugs that cropped up in 3.0, but it also includes some performance updates and contributions from new contributors. Additionally, it should work on the newly-launched Domino 14.5 EAP1. The use of Java 21 in that version of Domino won't directly affect XPages JEE for a while, since JEE 11 targets Java 17, but there's some neat stuff in there for general use.

p2-layout-resolver

The p2-layout-resolver is a plugin that allows the use of p2 (Eclipse-style) repositories as Maven dependencies in non-Tycho projects. I use this in a lot of cases where I move a project from Tycho to maven-bundle-plugin for simplicity in dependency management.

Version 1.9.0 includes a very useful contribution that fixes dependency resolution in cases where a bundle has a Bundle-ClassPath entry referencing an embedded JAR that doesn't exist. In the Domino world, this cropped up in Domino 14, so it's useful if you're building anything that targets that version of the runtime or above.

p2-maven-plugin

For various Domino-related needs, I maintain a fork of the p2-maven-plugin, which is useful for its additions of things like generating site.xml files (still important for importing into an NSF update site, after all these years) and the <transform>jakarta</transform> option to run JARs through Eclipse Transformer when bundling them, allowing use of pre-Jakarta JEE artifacts in a smooth way.

The 3.1.x versions focused on fixing problems when running on Java > 8 (namely no longer using IBM Commons XML) and improving handling of some other hiccups.

Pretty-Printing JSON in the (Desktop) Notes Client and Domino

Fri Jul 26 10:30:35 EDT 2024

In the OpenNTF Discord (join if you haven't!), Daniel Nashed brought up a task he was facing: in the Notes client, writing pretty-printed JSON. LotusScript has its NotesJSON* classes that can process JSON in their stark way, but the stringify output is meant for machine reading and doesn't include whitespace and line breaks, making it ill-suited for things like configuration files or other things a human might read or edit.

Since the goal is to get it working in the full Notes client and not Nomad, Java is on the table, but Java - for dumb historical reasons - has no proper built-in JSON library. However, as of 12.something HCL shunted IBM Commons down to the global classpath in order to support the "share Java design elements between XPages and agents" feature. Among many other things, IBM Commons includes a JSON library that can suit.

I wrote a post almost a decade ago talking about this library and its limited nature, but it's nonetheless less limited than the LotusScript classes, and it's up to the task. There are a couple ways to go about this, depending on your needs, but for now I'll just cover the basic case here of "I have a string of JSON and want to format it".

To do this, you can make a Script Library of the Java type named, say, "JsonPrettyPrint" and make a new class in the "com.example" package named "JsonPrettyPrint" with these contents:

package com.example;

import java.io.IOException;

import com.ibm.commons.util.io.json.JsonException;
import com.ibm.commons.util.io.json.JsonGenerator;
import com.ibm.commons.util.io.json.JsonJavaFactory;
import com.ibm.commons.util.io.json.JsonParser;

public class JsonPrettyPrint {
	public String prettyPrint(String json) throws JsonException, IOException {
		Object jsonObj = JsonParser.fromJson(JsonJavaFactory.instanceEx, json);
		return JsonGenerator.toJson(JsonJavaFactory.instanceEx, jsonObj, false); // false = not compact, i.e. pretty-printed
	}
}

Then, you can instantiate this object with LS2J and pass it a JSON string, such as one from NotesJSONNavigator:

UseLSX "*javacon"
Use "JsonPrettyPrint"

Sub Initialize
	Dim jsession As New JavaSession, jsonPrinter As JavaObject
	Set jsonPrinter = jsession.GetClass("com.example.JsonPrettyPrint").CreateObject()
	MsgBox jsonPrinter.prettyPrint(|{"foo":{"bar":"baz"}}|)
End Sub

That'll get you something presentable:

Screenshot of a message box showing pretty-printed JSON

While the stringify-parse-stringify process you'd do if you generated your JSON with the NotesJSON* classes is inefficient, it's not too bad, especially for the size of content you're likely to want to emit here. You could alternatively do more work with JsonJavaObject and friends from IBM Commons directly to save a little overhead, but this path is a good way to do the vast majority of work in "normal" LotusScript and then only dip in to Java for the last step.
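
If you do want to go that more-direct route, a minimal sketch might look like the following - this assumes JsonJavaObject's Map-style put() methods, so double-check it against the IBM Commons version in your client:

import com.ibm.commons.util.io.json.JsonGenerator;
import com.ibm.commons.util.io.json.JsonJavaFactory;
import com.ibm.commons.util.io.json.JsonJavaObject;

// Build the structure directly instead of round-tripping through a string.
// Note that toJson can throw JsonException/IOException, so handle or declare those.
JsonJavaObject root = new JsonJavaObject();
JsonJavaObject foo = new JsonJavaObject();
foo.put("bar", "baz");
root.put("foo", foo);

// false = not compact, so the output includes whitespace and line breaks
String pretty = JsonGenerator.toJson(JsonJavaFactory.instanceEx, root, false);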

As mentioned at the start, the presence of Java means this won't work for Nomad, unfortunately. There may be a way to wrangle your way to this result using the primordial JavaScript runtime present in that, but that may not be worth the trouble unless you really need it. Better would be to vote for the Aha idea to add pretty printing to LS.

XPages JEE 3.0

Sun Jun 09 14:45:14 EDT 2024

Today, I uploaded the release version of 3.0.0 of the XPages Jakarta EE Support project. It's been proving stable in my use since the last beta, and so I think this is as good a time as any to release it properly.

Changes

The big-ticket change remains the move to Jakarta EE 10 as the baseline, which brings a handful of new features as well as a new Java version requirement. That means that this release also requires at least Domino 14. Domino 12.x served us well, but its time has passed.

Jakarta EE 10, for its part, is mostly about solving a lot of old business in the JEE community: it continues the gradual deprecation of EJB in favor of CDI, it removes some old stuff like applet requirements, and then also brings in a couple "scratch an itch" features.

Of particular note is the addition of the EntityPart type for REST services. Though it's a small feature, it's a real "finally" one, in that there hadn't been a proper way to deal with multipart/form-data MIME body parts individually, and so each implementation of Jakarta REST would bring in its own, or you'd have to fall back to taking an InputStream and parsing the MIME body yourself. Now, you can do so in a spec-based way:

@POST
@Consumes(MediaType.MULTIPART_FORM_DATA)
public String post(@FormParam("part") EntityPart part) throws IOException {
	MediaType mediaType = part.getMediaType();
	String name = part.getName();
	Optional<String> fileName = part.getFileName();
	MultivaluedMap<String, String> headers = part.getHeaders();
	byte[] data = part.getContent().readAllBytes();
	
	// ...
}

There's also the split of Jakarta NoSQL into that spec plus the new Jakarta Data. In this release of XPages JEE, I mostly aimed to keep the same level of functionality while accounting for the renaming of packages and types, but I'll be interested in building on this in the future.

Finally, there's the project-specific change of condensing the many, many XPages libraries just down to Core, UI, and MicroProfile. That didn't impact functionality as such, but it sure is nice only having three (or six, with the source features) features to say "yes, install this" to when updating it in Designer and only three to check in Xsp Properties. It also allowed me to delete a lot of weird shim and conditional code, and it'll make maintenance of it much easier in the future without having to worry about every permutation of what libraries you have enabled in an NSF.

The Future

Speaking of which, that brings me to some of the next things on the docket. I imagine that the immediate work will be cleaning up any loose ends from the move. For example, Jakarta Concurrency 3.0 brought a bunch of new features, but I haven't actually checked to see whether they work or whether I need to do more adapting.

Additionally, Jakarta Data is intended to go beyond just NoSQL, and can also layer on top of Jakarta Persistence (née JPA, the API for working with relational DBs) and arbitrary services. I don't know yet if there's a usable implementation beyond the one in Open Liberty, so that may have to wait, but it'll be interesting to tinker with.

There are also a bunch of features I'd like to get cracking on now that this hurdle is done. For example, I'd like to move the NoSQL driver to use JNX, which would let me do a couple things that the Notes.jar classes just can't. Along with that, I'd like to add an option to publish MP Metrics to the Domino statistics store, which adopting JNX would allow easily.

Fortunately, I don't expect that there will be any other breaking-change discontinuities in the future. Jakarta EE 11 has some deprecations and removals, but it's mostly similar to JEE 10 in that it's about classes and idioms that are much, much older than any code built using this project. That should give this 3.x series a good, long period to be a comfortable baseline, even through the next major version of JEE.

XPages JEE 3.0 Beta 4

Wed May 22 13:48:19 EDT 2024

Earlier today, I uploaded beta 4 of XPages JEE 3.0 to GitHub. I've been taking a slow approach to this release due to its "breaking changes" nature, but I think it's just about ready for release.

Domino 14

Like previous betas, this release requires Domino 14 (and Notes 14 for development), since it moves to a baseline of Jakarta EE 10, which in turn requires Java 11. Doing this let me get rid of some extra shim code that was needed to support both Domino 14 and previous versions, and also let me move to some newer language constructs. If you're interested in the sorts of things that the new versions of Java brought, check out the OpenNTF webinar from April, where I talked about just that.

Library Reorganization

Beyond the Java version requirement, the big breaking change I made was to finally shrink the number of XPages libraries and p2 features in the project. As the project grew, I kept adding new distinct XPages libraries, for the principle of keeping each spec distinct, as they often technically are. A few things made me want to fix this, though:

  • Checking all the boxes in the Xsp Properties editor for each library was annoying
  • Checking "Yes, install this plug-in" for every single component, plus its source version, when installing in Designer was very annoying
  • I had to do weird tricks to add features that touched multiple specs. For example, the project tree had a bunch of cross-spec fragments like "jaxrs.cdi" and "json.cdi" to contribute parts for when CDI was present but not break things when it wasn't. This added an extra layer of indirection and maintenance hassle
  • The specs themselves have been converging, particularly in the sense that more and more they assume the "backbone" of CDI is present. For example, Faces removed its original @ManagedBean and related support in favor of going all-in on CDI. Jakarta REST is moving towards the same
  • It was hard to think of realistic scenarios where it would be important to split up the specs like this, using, say, REST but not CDI or Validation

Now, there are just three: "org.openntf.xsp.jakartaee.core", "org.openntf.xsp.jakartaee.ui", and "org.openntf.xsp.microprofile". I was tempted to roll MicroProfile into "core", but they're conceptually (and administratively) distinct enough that it was worth separating them. With this change, it's not only less annoying to install, but it lets me make a lot more assumptions about what is present across specs, simplifying a lot of little things.

Deep-Dive Sidebar: Class Loading

One interesting aspect I ran into when making this change was that I had to readjust my mental model for how class loading is done from an NSF-based application and the libraries it uses. The way it mostly works conceptually aligns with what you see in Designer:

  • Select a library to depend on
  • The XspLibrary has a "getPluginId" method, which Designer then uses to add the OSGi bundle to the classpath
  • Any Require-Bundle dependencies in that plugin marked as "visibility:=reexport" are also included on the classpath

So, in this way, you'd previously select the "org.openntf.xsp.cdi" library, which would then add a dependency on the bundle of the same name, which would in turn re-export the things the NSF should see, such as the CDI API classes.

When I consolidated the libraries, I did it in the straightforward way: I made new "*.library" bundles for them and then added the existing spec-specific bundles as re-exported dependencies. As far as Designer was concerned, all was well, and there was just another little layer in between.

However, that's not quite the whole story when it comes to the runtime on the server. Though Designer presents the NSF as a pseudo-OSGi bundle using the Plug-in Development Environment, Domino doesn't do the same thing. What Domino does is use a class called ModuleClassLoader (not to be confused with Equinox's ModuleClassLoader, which is entirely different and IS an OSGi loader) to handle loading classes from the NSF and its dependencies. The way it gets to its dependencies isn't really a "true" OSGi way, though: it keeps track of a collection of ClassLoader objects as extraDepends, which it consults each in turn as needed. Those ClassLoader objects, at least in post-8.5.2-era Domino, are the internal class loaders from the library OSGi bundles. This is cheating, and I imagine it was made for pragmatic transitional reasons when OSGi came into the picture.

The old layout conceptually looks like this:

Diagram of NSF to old library dependency

At first blush, this seems like a "six of one, half a dozen of the other" sort of situation, but it's not quite. What this setup does that normal OSGi doesn't is that it exposes META-INF/services files inside the direct dependencies to the application's ClassLoader, whereas these are normally encapsulated in OSGi. The effect was that a bunch of things that used to work started to fail - REST couldn't find all its output-writing classes, Validation couldn't find its implementation, and so forth. This is because they would all internally ask the thread-context ClassLoader (i.e. the NSF's loader) for resources within META-INF/services, and the extraDepends list used to be able to find them. Now that there was a layer of indirection, this no longer worked: the extraDepends loaders could see their own stuff but would not traverse the OSGi barrier to peek inside their further dependencies for these. Conceptually, now we have this:

Diagram of NSF to new library dependency

A direct ClassLoader dependency allows reading of resources, but a true OSGi-type dependency does not. So the result is that I had to "promote" a bunch of META-INF/services files from the now-downstream plugins into the "*.library" ones. It all makes sense once you see how the gears are moving, but it sure threw me for a loop for a while.
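
To illustrate the mechanism, the failing pattern is essentially the standard ServiceLoader/resource lookup - something like this sketch, which is not the project's literal code (each spec implementation has its own variant of the lookup, and the service interface here is just an example):

import java.util.ServiceLoader;
import jakarta.ws.rs.ext.MessageBodyWriter;

// The spec implementations ask the thread-context (NSF) ClassLoader for providers
ClassLoader nsfLoader = Thread.currentThread().getContextClassLoader();

// This only finds implementations whose META-INF/services files are visible to
// nsfLoader - which is why those files had to be "promoted" into the
// directly-depended-upon "*.library" bundles
ServiceLoader<MessageBodyWriter> writers = ServiceLoader.load(MessageBodyWriter.class, nsfLoader);
writers.forEach(w -> System.out.println(w.getClass().getName()));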

Bundle and Package Renaming

Okay, now back to the changes.

Since I was already breaking things anyway, I decided this was a good opportunity to fix the names of the bundles and packages in the project's source. For example, some names were antiquated: what was once "JSF" is "Jakarta Faces", but my bundle was "org.openntf.xsp.jsf". Additionally, I was inconsistent in my hierarchy: while Transaction was in "org.openntf.xsp.jakarta.transaction", others (like Faces there) skipped the "jakarta" level of the hierarchy. These don't normally matter to developers consuming the library, but they annoyed me. Now, all of the bundles and their contained packages are within either "org.openntf.xsp.jakarta", "org.openntf.xsp.jakartaee" (for platform-wide capabilities), or "org.openntf.xsp.microprofile".

Along with this will be a couple potential breaking changes for app-level code, such as moving org.openntf.xsp.beanvalidation.XPagesValidationUtil to org.openntf.xsp.jakarta.validation.XPagesValidationUtil, but there won't be TOO many due to this change.

Jakarta Data and NoSQL Changes

This one isn't from my latest round of changes and has been the case since early in the 3.x stream, but it's worth mentioning again here. The Repository concept from Jakarta NoSQL moved from that spec to the new "Jakarta Data" spec, and so related packages changed from jakarta.nosql.mapping to jakarta.data. Additionally, since the NoSQL spec shrunk to accommodate, things like @Column changed from jakarta.nosql.mapping.Column to jakarta.nosql.Column. It makes sense as NoSQL has been an evolving spec all along, but I suspect that this will be the biggest app-code-breaking change it experiences for a good while.
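
As a rough before/after sketch of what that means for app code (the entity here is hypothetical, and real entities will carry whatever project-specific annotations they already use):

// Before (Jakarta NoSQL 1.0.0-b-era packages):
//   import jakarta.nosql.mapping.Column;
//   import jakarta.nosql.mapping.Entity;
//   import jakarta.nosql.mapping.Id;

// After (Jakarta NoSQL + Jakarta Data split):
import jakarta.nosql.Column;
import jakarta.nosql.Entity;
import jakarta.nosql.Id;

@Entity
public class Person {
	@Id
	private String id;

	@Column("FirstName")
	private String firstName;

	// getters/setters as usual...
}

On the repository side, the types your interfaces extend now live under jakarta.data.repository rather than jakarta.nosql.mapping.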

Release and Future Versions

My next steps are to put this through its paces now that all the issues are closed. Though I've ported everything to the JEE 10 versions, I haven't yet tested most of the new features to make sure they work. While JEE 10 was largely a "cleanup" release, there are a bunch of new features, particularly in Faces, which is in turn always the jankiest part of the stack on Domino.

Post-3.0, I expect that my focus will start to shift to Jakarta EE 11. For a time, I was going to be SOL with it: though Domino 14 bumped Java to 17, JEE 11 was slated to target Java 21 at a minimum. Since then, however, that target shifted down to 17, putting it back on the table for Domino. JEE 11 was originally slated for Q1 of this year, but it slipped to some time around summer. That fits reasonably well with my cadence here. JEE 11 is technically also a breaking release, but I suspect that it won't break features that XPages JEE users use, at least not after this hurdle here.

Simplifying the Maven Build of the NSF File Server Project

Wed Apr 10 17:02:09 EDT 2024

When working on the NSF File Server project that I talked about the other day, I took a slightly different tack in building it than I have in the past, and I think it's worth going over some of that in case it's useful for others.

Initial Version

The first version of this project was a non-OSGi WAR file meant to be deployed to an app server like Liberty, not to Domino's OSGi stack, and so it's never involved Tycho. This made it mostly simpler, since its various dependencies are normal Maven dependencies and so I didn't have to worry about any of the annoying hoops.

However, it did have some native Domino dependencies: Notes.jar and the NAPI. These would need to be included as Maven dependencies and brought into the final WAR. The way I handled this was using the generate-domino-update-site project, which lets you first generate a p2 site in the style of the painfully outdated IBM-provided update site and then, if desired, turn that p2 site into more-normal Maven artifacts.

When I eventually switched from targeting a WAR file to having it run on Domino, I used the same dependency structure. The Domino version runs as an HttpService implementation, and so I pointed at the Mavenized version of the com.ibm.xsp.bootstrap and com.ibm.domino.xsp.adapter bundles.

Then, I used the maven-bundle-plugin, which fits the job of taking an otherwise-normal Maven project and making it work in OSGi environments (mostly). The way that plugin works is that you specify a lot of your MANIFEST.MF rules in the pom.xml:

<plugin>
	<groupId>org.apache.felix</groupId>
	<artifactId>maven-bundle-plugin</artifactId>
	<extensions>true</extensions>
	<configuration>
		<instructions>
			<Bundle-SymbolicName>org.openntf.nsffile.httpservice;singleton:=true</Bundle-SymbolicName>
			<Bundle-RequiredExecutionEnvironment>JavaSE-1.8</Bundle-RequiredExecutionEnvironment>
			<Export-Package/>
			<Require-Bundle>
				com.ibm.domino.xsp.adapter,
				com.ibm.commons,
				com.ibm.commons.xml
			</Require-Bundle>
			<Import-Package>
				javax.servlet,
				javax.servlet.http
			</Import-Package>
			<Embed-Dependency>*;scope=compile</Embed-Dependency>
			<Embed-Transitive>true</Embed-Transitive>
			<Embed-Directory>lib</Embed-Directory>
			
			<_removeheaders>Require-Capability</_removeheaders>
			
			<_snapshot>${osgi.qualifier}</_snapshot>
		</instructions>
	</configuration>
</plugin>

The first couple are one-for-one matches to what you'd have in the MANIFEST.MF, but things get weird once you get to the "Embed-*" ones.

The Embed-Dependency instruction is a potent one: you give it a description of what dependencies you want embedded in your OSGi bundle (in this case, all my non-provided dependencies), and then it does the job of copying them into your final bundle JAR. You can do this other ways - copying them manually, using the Maven Dependency Plugin, or others - but this handles all your transitive stuff nicely for you, thanks to Embed-Transitive. I use Embed-Directory here just for cleanliness - the result is functionally the same without it.

The final bits are just for cleanliness: I remove Require-Capability to avoid some trouble I had with older Domino versions, and then I set what the snapshot value will be, which ends up being the current build time.

With this, I end up with a single OSGi bundle with everything in it. This works well for this sort of project - with something to be used in Designer, I prefer to make a big pool of distinct OSGi bundles to make it so that you can look up the source properly, but something server-only like this doesn't need that.

2.0 Version

In this new version, the switch to JNX meant that I was tantalizingly close to not having to do any weird dependency stuff: JNX is distributed in Maven Central, so I didn't need to have the weird locally-built stuff like I did for Notes.jar and the NAPI.

However, that wasn't everything: there are still the "bootstrap" bundles containing the HttpService superclass and related classes. While I don't need to distribute those anywhere, they're still required to compile the classes - no amount of non-verified text files or the like will get around that.

I came up with another way, though. Java only needs classes that look like those to compile, and then the compiled class of mine will be the same regardless of anything else. This is critical: I don't actually need any implementation code, and that's the part I can't redistribute. So I made two little Maven modules: com.ibm.xsp.bootstrap.shim and com.ibm.domino.xsp.adapter.shim. These modules contain just a handful of classes, and of those classes only the methods actually referenced by my code. Since these modules will then be marked as "provided", they won't be bundled into the final JAR.
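
The idea, shown here with an entirely hypothetical class rather than the real bootstrap API, is just to declare the bare minimum the compiler needs to see - same package, same class name, and only the members your code references:

// In the "shim" module, compiled against but never shipped (Maven scope: provided).
// This is a hypothetical stand-in - the real shims mirror whichever
// com.ibm.xsp.bootstrap / com.ibm.domino.xsp.adapter members the code actually uses.
package com.example.bootstrap;

public abstract class SomeHttpServiceSuperclass {
	public SomeHttpServiceSuperclass(Object environment) {
		// No implementation needed: only the signature matters at compile time
	}

	public abstract boolean doService(String path);
}

At runtime, the real classes from the Domino-provided bundles are the ones that get loaded, so the shim's contents never actually execute.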

This squared the circle nicely, where now I can compile the Java side without any weird pre-requisites. Admittedly, the two NSFs in the module set still require an NSF ODP Tooling environment, which is a whole other ball of wax, but it's a step in the right direction.

Other Uses

This technique can be used in other similar projects when you only need a few classes from the XPages stack. For example, if your goal is to just wrap a third-party library and provide it to XPages in Designer, you could probably do this by making a stub implementation of XspLibrary and related classes, and skip the whole generate-domino-update-site step. The more you use from the stack, the less practical this is - for example, the XPages Jakarta EE project reaches into all sorts of crap, and so I can't really do this there. For this, though, it works nicely.

NSF File Server 2.0

Sun Apr 07 13:59:18 EDT 2024

A few years ago, I made a little project that hosts an SFTP server that stores documents in an NSF. I've used it here and there since then - as in the original post, I stashed some company docs in it to have them nicely synced among our Domino servers, and I've also had cases where clients use it to, for example, provide a way for their vendors to upload files in a standard way.

The other week, I decided to dive back into it to add some capabilities I'd wanted for a while, and the result is version 2.0.0. This version is a significant revamp that adds quite a bit.

Multiple Mounts

The first limitation I wanted to improve was the fact that the first version was restricted to a single NSF. That works fine in the basic case, but I wanted to start doing things like storing server config backups in there, and wouldn't necessarily want them in the same NSF as, say, company contracts, credentials, and secrets.

The way I went about this was to make it so that the new configuration NSF has a "Mounts" view that lets you specify a path in a conceptual top level of the filesystem that would then point to a target NSF. This allows the admin to do things like have separate ACLs for different mounts - since the client will act as an authenticated Domino user, these will be properly obeyed, and a user won't be able to access documents they don't have rights to.

Additionally, I could configure it so that not all mounts are present on all servers, which will come into play particularly with the next feature.

Screenshot of the Mounts view in the file server config NSF

New Filesystem Types

Once I had a composite top-level filesystem, I realized that it wouldn't be terribly difficult to allow more filesystem types than the original NSF file store I made. That filesystem is built using the NIO File System Provider framework added in Java 7, and that system is designed to be pretty extensible. By default, Java comes with a few providers: the normal local filesystem as well as one that can treat a ZIP or JAR as a contained filesystem itself. These are accessed in a generic way, where you specify a URI and a map of "env" properties that are provider-specific.

For example, the ZIP filesystem takes a URI in the form of "jar:file:/some/path/to/file.zip" and an environment map configuring whether the filesystem should be created if it doesn't already exist in memory and what encoding to use for filenames (very important if you have Unicode characters in there).
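
As a quick illustration of that URI-plus-env pattern with the stock JDK ZIP provider (the paths here are made up):

import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;

URI uri = URI.create("jar:file:/some/path/to/file.zip");
Map<String, String> env = Map.of(
	"create", "true",      // create the ZIP if it doesn't exist yet
	"encoding", "UTF-8"    // filename encoding inside the archive
);
try (FileSystem zipFs = FileSystems.newFileSystem(uri, env)) {
	Path readme = zipFs.getPath("/readme.txt");
	Files.writeString(readme, "Hello from a ZIP filesystem");
}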

So I added ways to configure a mount to the local server filesystem (similar to what the Mindoo FTP Server does) and then a generic configuration for any installed provider. It's probably uncommon that you will have a custom File System Provider implementation in your Java classpath, but hey, maybe you do, and I want to allow that to work.

I also added an extension point to the project itself that allows adding new providers via plugin.xml files in OSGi, and I can think of a couple other projects that may use this, like the NSF ODP Tooling.

WebContent Filesystem

Beyond adding the JVM-provided systems, I wrote another new filesystem type, one that provides access to the conceptual "WebContent" directory presented in Package Explorer in Designer:

Screenshot showing Designer and Transmit looking at the same WebContent in an NSF

The idea here is that this could be used to deploy, say, a JavaScript client application to an NSF without the developer or build server having to know anything about Domino. Pretty much everything can work with SFTP, so this makes accessing those files a lot easier. This is similar to the WebDAV capabilities Domino has had for a very long time, but with a different protocol.

Server Keypairs

In the first version, the app would generate and store the server's SSH keypair on the filesystem, in the data directory. This is fine, but part of the point of this whole project is that I like to get away from non-replicating stuff, and so I moved these keys to the configuration NSF. Now, on first connection, the server will look for a keypair document in the NSF and, if it doesn't exist, will generate a new one and store it there. Since I've been working with encrypted fields a lot for client work lately, I also realized that this was a good use for it: the public key is a normal text item (so you could distribute and verify it as needed), but the private key is encrypted with the generating server's ID file. Since only the server itself ever needs to know its private key, this works swimmingly.

JNX

This isn't a new app feature per se, but this was a good situation for me to put JNX to work in an open-source project. I had originally written this using the lotus.domino classes for most work and the IBM NAPI for things like generating sessions for a given username, but switching to JNX let me ditch both of those.

Admittedly, this is a case where switching to JNX didn't grant me significant new capabilities, but it DID let me do a couple things better. Some things are distinct feature improvements, like improving password authentication (previously, I was doing a compare of hashes "manually", which is fragile), while others are just making the code smoother, like no longer having to do the read-convert-recycle dance with DateTimes in LSXBE. It's just pleasant, and let me find a few places where the JNX API could be improved.

Future Additions

When I pick this project back up, there are certainly a couple things I'd like to add.

One would be to look into rsync support: rsync is tremendously useful for things like synchronizing filesystem-bound configs, but it's its own protocol tunneled over SSH, and so just having SFTP isn't enough. The underlying Apache Mina SSHD project is a general SSH server and not just SFTP, so it may be possible to do it by intercepting the commands sent over to initialize rsync, but it will be non-trivial. There's a library in Java that provides an rsync server, but it's GPL-licensed, and so I have to keep away for license-safety's sake.

Beyond that, it's mostly that I'd like to implement more filesystem types. Presenting data as a filesystem can be a very powerful tool: you could imagine providing access to documents in a DB as DXL or YAML, or listing files from a Document Library NSF, or (as I'd like to do some day) having the NSF ODP Tooling project replicate the ODP layout over SFTP.

For now, I'm looking forward to putting it to more use as a coordinating point. If I keep messing around with apps on TrueNAS, it'll give me a good feeling of security to have more info stashed in Domino and less prone to destruction if one server happens to blow up.

Realmz

Sun Mar 31 11:35:14 EDT 2024

For a while now, I've wanted to just kind of gush about an old Mac game I played when I was a teenager, and the last day of Marchintosh for the year is as good a time as any.

Overview

Realmz is a game that ran on the classic Mac OS and, in later versions, Windows. It was shareware at the time - one of the few shareware games I ended up cobbling together the money for - but has long been made fully available for free, with my go-to source being the Macintosh Garden. If you have SheepShaver around, it works nicely there.

The game itself is quickly identified as a party-based fantasy RPG. I didn't really realize it at the time, but it's a full-on CRPG in the nerdiest sense. I mean, look at this freaking character sheet:

Screenshot of the Realmz character creation sheet

While it's not strictly D&D rules, it basically is. Older versions (which are also available on the Macintosh Garden) even used THAC0 before switching to an "Armor Rating" system.

CRPG

Looking back, I'm glad I had an experience with such a true-blood CRPG at the time. I didn't play D&D growing up, didn't play the Gold Box games, and was too busy playing pretty much exclusively Blizzard games to play the Infinity Engine games or Neverwinter Nights when they came out. It wasn't really until Dragon Age: Origins and then (especially) Pillars of Eternity that I realized the glory of the genre. But looking at Realmz, it's obvious that it's right in the same lineage.

Combat is strictly turn-based, takes place on a grid, and is suitably technical:

Screenshot of a combat situation in Realmz

It even does some of the weird stuff: for example, martial characters won't just get multiple attacks per round, but will also get "partial" steps like my rogue Hebs there, who gets three attacks every two rounds, as a stepping stone to 2 / 1.

Realmz also has its own mechanics-heavy take on the thing CRPGs try to do where they want to emulate an open-ended experience a DM might oversee beyond just combat. For example, early on, you meet a kid who wants you to help his dog, which is stuck in a well. When you get there, you're presented with the "encounter" screen, where you can try all sorts of things:

Screenshot of the Realmz encounter screen

There are a lot of ways to deal with these encounters. In this case, I might have Galba there do an Acrobatic Act, which has about even odds. My sorcerer Fenton there might use a Spider Climb (might not be the name) spell to make scaling the well effortless. Or, if I stocked up, I might just use a rope. You can easily fail this - if you do, the kid runs off crying and you have to wait for the guards to show up to help you, with no experience gain. Realmz has a bunch of these scenarios and they're pretty neat. Admittedly, they fall short in the ways that all non-DM-run games eventually do, where your actual options aren't truly limitless. The "Speak" option is available in other situations, but it's only ever really practical if you have, say, a magic word to open a door or something. It's not a true tabletop experience, but it's trying, bless its heart.

Mac-ness

One thing I really enjoy about games in the heyday of Mac shareware games (by the way, read The Secret History of Mac Gaming if you haven't - it's great) is how thoroughly Mac-like they are. For both practical and cultural reasons, a lot of Mac games didn't necessarily take over the whole screen with their own interface like DOS and Windows games usually do. While there are some Windows games that use the Windows UI, like another small classic Castle of the Winds, it's very common in Mac games. For example, there's Scarab of Ra:

Screenshot of Scarab of Ra from Macintosh Garden

As it happens, Scarab of Ra is another game where I didn't appreciate its lineage at the time: it's a true roguelike, albeit with a first-person perspective.

Realmz doesn't go quite as hard in using native widgets for everything, but you can see the menu bar in earlier screenshots - you use the normal Mac menu to access game commands, the bestiary, your ally list, your collected notes, and so forth. It's just neat. Also, like a lot of Mac software at the time, Realmz's program directory is just a delight to look at:

Screenshot of the Realmz 2.5 installation folder

My use of the "Drawing Board" Appearance Manager theme helps it too, but just check out those icons. That sort of thing wasn't strictly necessary, but it was the Mac way, and it was wonderful.

Versions

And this isn't exactly a Mac-like attribute, but I like that Realmz wasn't afraid of using version numbers. It went from version 1.x all the way up through 8.x, with minor and patch versions along the way. It was updated all the time, and it was always exciting to see a new major version to find what the big changes are.

Mostly, the changes were things like adding classes: the old versions have the same sort of handful you'd find in basic D&D, while the later ones have so many that you can pick between "Archer" and "Marksman" or "Bard" and "Minstrel". Some of the changes were less like finding a D&D source book and more like the game gradually morphing into its own sequel, though.

For example, the original versions didn't have music of any kind, as was the style at the time. Somewhere along the line (version 5, I think), it gained music, and... boy, it's a doozy. Here's, for example, the camping music:

What I assume happened is that the developer wanted to add some music and then found some free or cheap module files and slapped them in there where they kind of work. The tone is absolutely bizarre, and it's kind of great for it.

It was just neat seeing the game progress, with the changes in systems and new features, even the "eh, not the best idea" stuff like parts of dungeons that switch to a first-person mode.

Scenarios

I have to admit that, though I played a ton of Realmz, I never even got that far into it. A big part of that was that the scenarios beyond the starting City of Bywater also cost money above the core game, and it's a tall order for a cash-strapped teenager to cough up money at all, let alone add-on costs. So my characters probably always plateaued around level 10, as there's just not that much to do in the base scenario. That's good news for future me, though: there's a ton of stuff waiting for me whenever I want to get back into it.

If you have a taste for older games or CRPGs in general, I definitely suggest you give it a try. The Windows version may be easier to run than the Mac one, though it loses some of the appeal. Depending on your temperament, Realmz may be easier to get into from scratch than games like, say, the original Baldur's Gate, with the latter's janky RTwP combat and constant fourth-wall-breaking dweebiness. Definitely keep it in mind for a cozy-day game, I say.

Adding Code Coverage Reports To Domino-Container-Run Tests

Mon Mar 11 15:33:02 EDT 2024

Tags: docker testing

When you're writing test suites for your code, it can be very useful to use a tool to analyze the code coverage of your tests. While people can get a little obsessive about coverage percents, there's certainly no denying that it's helpful to know how much of your code is actually run when testing, and also being able to look down into the specifics of what is covered.

With Java, one of the preeminent tools for this is JaCoCo, a venerable open-source library that you can integrate with your test suites to give reports of your coverage. In a normal project, such as a build run via Maven, you can use the Maven plugin in tandem with the Maven Surefire and Failsafe plugins. However, things get more complicated if the code you're actually testing isn't in the Surefire JVM, but rather inside a container.

That's exactly the situation I have with the integration-test suite of the XPages Jakarta EE project, where it creates a Docker container with the current build of the project deployed as OSGi plugins, and then executes HTTP calls against OSGi bundles and NSFs. I figured this was a solvable problem, so I set out doing so.

I first came across this blog post, which describes the general idea well, but unfortunately references Gists that seem to no longer exist. Still, it gave me a good starting point.

Installing JaCoCo in Domino

The first thing I had to do was to get the JaCoCo Java agent into the container. I added it as a Maven dependency to the IT suite project:

<dependency>
	<groupId>org.jacoco</groupId>
	<artifactId>org.jacoco.agent</artifactId>
	<version>0.8.11</version>
	<scope>test</scope>
</dependency>

Conveniently, this dependency is itself a wrapper for the agent JAR and comes with a helper method for accessing the JAR data. I used that to read it into memory and send it to the Docker runtime during the container build:

byte[] agentData;
try(InputStream is = AgentJar.getResourceAsStream()) {
	agentData = IOUtils.toByteArray(is);
}
withFileFromTransferable("staging/jacoco.jar", Transferable.of(agentData)); //$NON-NLS-1$

The use of Transferable here allows me to keep the process independent of whether Docker is running locally or remote - I run remotely almost all the time nowadays, due to Domino's continued lack of an ARM port.

With the file in place, I modified my Dockerfile to copy it to a known location in the container:

COPY --chown=notes:notes staging/jacoco.jar /local/
COPY --chown=notes:notes staging/JavaOptionsFile.txt /local/

The JavaOptionsFile.txt was already there for another ARM-related reason, but it's important to note for the next step. This sort of file is how you enable JaCoCo in the Domino JVM: I set JavaUserOptionsFile=/local/JavaOptionsFile.txt and it'll read its rules from there. Following the instructions, I added -javaagent:/local/jacoco.jar=output=file,destfile=/tmp/jacoco.exec on its own line in this file. This causes JaCoCo to be automatically loaded with the HTTP JVM and to store its report in the named file on shutdown.

Reading the Data

That said, this didn't work immediately. The file "/tmp/jacoco.exec" was created properly inside the container, so the agent was running, but the file content was always zero bytes. I realized that this was due to the merciless way in which the container is killed by my test suite: there's no proper shutdown step, and so JaCoCo's shutdown hook never fires.

Fortunately, writing to a file isn't the only way JaCoCo can do its reporting - you can also have it open up a TCP port to connect to and read. So I changed the Java option line to:

-javaagent:/local/jacoco.jar=output=tcpserver,address=*,port=6300

I modified the withExposedPorts(...) call inside the class that builds my Testcontainers container to also include 6300, and then used getMappedPort(6300) to identify the actual randomized port mapped by Docker.
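
In Testcontainers terms, that's roughly the following - a pared-down sketch with a hypothetical image name rather than the project's real container class:

import org.testcontainers.containers.GenericContainer;

// Hypothetical, minimal example of exposing and resolving the JaCoCo port
GenericContainer<?> domino = new GenericContainer<>("example/domino-it-image:latest")
	.withExposedPorts(80, 6300); // 6300 is the JaCoCo tcpserver port from the agent options
domino.start();

String host = domino.getHost();
int jacocoPort = domino.getMappedPort(6300); // the randomized host port Docker actually bound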

The remaining task was to figure out the little protocol used by JaCoCo to signal that it should collect and return its data. I get the impression that it's not too complicated, but I still figured it'd be best to use an existing implementation. I found jacocotogo, a Maven plugin that reads the data, and it looked promising. However, it had two problems: being a Maven plugin, it came with a bunch of transitive dependencies I didn't want, and it's also 11 years old and thus a bit out of date.

I ended up forking the main utility class, trimming out the parts I didn't need (like JMX), switching it to NIO, and going from there.
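
The flow of that utility boils down to the same shape as JaCoCo's documented ExecutionDataClient example - roughly this, using the org.jacoco.core classes, and treating it as the general idea rather than my exact trimmed-down code:

import java.io.FileOutputStream;
import java.net.Socket;
import org.jacoco.core.data.ExecutionDataWriter;
import org.jacoco.core.runtime.RemoteControlReader;
import org.jacoco.core.runtime.RemoteControlWriter;

try (
	FileOutputStream localFile = new FileOutputStream("target/jacoco.exec");
	Socket socket = new Socket(host, jacocoPort) // host/port as resolved from the container above
) {
	ExecutionDataWriter localWriter = new ExecutionDataWriter(localFile);
	RemoteControlWriter writer = new RemoteControlWriter(socket.getOutputStream());
	RemoteControlReader reader = new RemoteControlReader(socket.getInputStream());
	reader.setSessionInfoVisitor(localWriter);
	reader.setExecutionDataVisitor(localWriter);

	// Ask the agent to dump its current data (without resetting it) and read the reply
	writer.visitDumpCommand(true, false);
	if (!reader.read()) {
		throw new java.io.IOException("Socket closed unexpectedly while reading JaCoCo data");
	}
}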

Using the Data

With that all in place, a test run will end up with a file named "jacoco.exec" inside the "target" directory. Using this file varies by IDE, but, in Eclipse, you can install the EclEmma tool, open the "Coverage" view, right-click in the table area, and choose "Import Session...". That will let you locate the file and then choose the projects from your workspace that you're looking to analyze.

When I did that, I got my results:

Screenshot of Eclipse's Coverage tool detailing my test suite's coverage of somewhere around 50-65%

This is surprisingly good for the project, especially when you consider how large chunks of the red bars are things like the servlet wrapper package, which includes a lot of delegating code that is obligatory to match the interface but is not likely to be actually used in practice.

While this is currently the only project where I've needed to do this, it'll certainly be good to keep these techniques in mind. The TCP port thing in particular should be handy in future edge cases even without the Docker part.