In Development: Containerized Builds in NSF ODP

Apr 30, 2023, 11:46 AM

Most of my active development happens macOS-side - I'll periodically use Designer in Windows when necessary, but otherwise I'll jump through a tremendous number of hoops to keep things in the Mac realm. The biggest example of this is the NSF ODP Tooling, born from my annoyance with syncing ODPs in Designer and expanded to add some pleasantries for working with ODPs directly in normal Eclipse.

Over the last few years, though, the process of compiling NSFs on macOS has gotten kind of... melty. Apple's progressive locking-down of traditional native-loading mechanisms and the general weirdness of the Notes package and its embedded non-JDK JVM have made the whole thing increasingly precarious. I always end up with a configuration that can work, but it's rough going for sure.

Switching to Remote

The switch to ARM for my workspace and the lack of an ARM-native macOS Notes client threw another wrench into the works, and I decided it'd be simpler to switch to remote compilation. Remote operations were actually the first mechanism I added in, since it was a lot easier to have a pre-made Domino+OSGi environment than spinning one up locally, and I've kept things up since.

My first pass at this was to install the NSF ODP bundles on my main dev server whenever I needed them. This worked, but it was annoying: I'd frequently need to uninstall whatever other bundles I was using for normal work, install NSF ODP, do my compilation/export, and then swap back. Not the best.

Basic Container

Since I had already gotten in the habit of using a remote x64 Docker host, I decided it'd make sense to make a container specifically to handle NSF ODP operations. Since I would just be feeding it ODPs and NSFs, it could be almost entirely faceless, listening only via HTTP and using an auto-generated server ID.

The tack I took for this was to piggyback on the work I had already done to make an IT-suite container for the XPages JEE project. I start with the baseline Domino container from the community script, feed it some basic auto-configure params to relax the HTTP upload-size limits, and add a current build of the NSF ODP OSGi plugins to the Domino server via the filesystem. Leaving out the specifics of the auto-config script, the Dockerfile looks like:

FROM hclcom/domino:12.0.2

ENV SetupAutoConfigure="1"
ENV SetupAutoConfigureParams="/local/runner/domino-config.json"

RUN mkdir -p /local/runner && mkdir -p /local/eclipse/eclipse/plugins

COPY --chown=notes:notes domino-config.json /local/runner/
# Link file pointing the OSGi runtime at /local/eclipse (the filename here is illustrative)
COPY --chown=notes:notes container.link /opt/hcl/domino/notes/latest/linux/osgi/rcp/eclipse/links/
COPY --chown=notes:notes staging/plugins/* /local/eclipse/eclipse/plugins/

The runner script copies the current NSF ODP build to "staging/plugins" and it all holds together nicely. Technically, I could skip the link-file bit - that's mostly an affectation because I prefer to take as light a touch as possible when modifying the Domino program directory in a container image.

Automating The Process

While this server has worked splendidly for me, it got me thinking about an idea I've been kicking around for a little while. Since the needs of NSF ODP are very predictable, there's no reason that I couldn't automate the whole process in a Maven build, adding a third option beyond local and remote operations where the plugin spins up a temporary container to do the work. That would dramatically lower the requirements on the local environment, making it so that you just need a Docker-compatible environment with a Domino image.

And, as above, my experience writing integration tests with Testcontainers paid off. In fact, it paid off directly: though Testcontainers is clearly meant for testing, the work it does is exactly what I need, so I'm re-using it here. It has exactly the sort of API I want for this: I can specify that I want a container from a Dockerfile, I can add in resources from the current project and generate them on the fly, and the library's scaffolding will ensure that the container is removed when the process is complete.
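To make the shape of this concrete, here's a minimal sketch of driving Testcontainers outside of a test suite, assuming a Dockerfile like the one above sits in a local "container" directory. The class and path names are mine for illustration, not the actual NSF ODP plugin code.

```java
import java.nio.file.Path;
import java.time.Duration;

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.images.builder.ImageFromDockerfile;

public class CompilerContainerRunner {
	public static void main(String[] args) {
		// Build an image on the fly from project resources
		ImageFromDockerfile image = new ImageFromDockerfile()
			.withFileFromPath("Dockerfile", Path.of("container", "Dockerfile"))
			.withFileFromPath("domino-config.json", Path.of("container", "domino-config.json"))
			.withFileFromPath("staging/plugins", Path.of("target", "plugins"));

		// try-with-resources ensures the container is stopped and removed
		// even if the build blows up partway through
		try(GenericContainer<?> domino = new GenericContainer<>(image)
				.withExposedPorts(80)
				.waitingFor(Wait.forListeningPort())
				.withStartupTimeout(Duration.ofMinutes(5))) {
			domino.start();
			String baseUrl = "http://" + domino.getHost() + ":" + domino.getMappedPort(80);
			// ...POST the ODP to the server here and stream back the compiled NSF...
		}
	}
}
```

The `try`-with-resources block is what buys the cleanup guarantee mentioned above: `GenericContainer` is `AutoCloseable`, and Testcontainers' Ryuk sidecar reaps the container even if the JVM dies.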

The path I've taken so far is to start up a true Domino server and communicate with it via HTTP, piggybacking on the weird little line-delimited-JSON format I already made. This is working really well, and it's successfully building even my most-complex NSFs. I'm not fully comfortable with the HTTP approach, though, since it requires that you can contact the Docker host on an arbitrary port. That's fine for a local Docker runtime or, in my case, a VM on the same local network, where you don't have to worry about firewalls blocking off the random port it opens. I think I could avoid this by executing CLI commands in the container and copying a file out, which would happen entirely via the Docker socket, but that'll take some work to make sure I can reliably monitor the status. I have some ideas for that, but I may just ship the first version using HTTP so I can have a solid baseline.

Overall, I'm pleased with the experience, and continue to be very happy with Testcontainers even when I'm using it outside its remit. My plan for the short term is to clean the experience up a bit and ship it as part of 3.11.0.

XPages JEE 2.11.0 and the Javadoc Provider

Apr 20, 2023, 9:47 AM

Yesterday, I put two releases up on OpenNTF, and I figure it'd be worth mentioning them here.

XPages Jakarta EE Support

The first is a new version of the XPages Jakarta EE Support project. As with the last few, this one is mostly iterative, focusing on consolidation and bug fixes, but it added a couple neat features.

The largest of those is the JPA support I blogged about the other week, where you can build on the JDBC support in XPages to add JPA entities. This is probably a limited-need thing, but it'd be pretty cool if put into practice. This will also pay off all the more down the line if I'm able to add in Jakarta Data support in future versions, which expands the Repository idiom currently found in the NoSQL support to cover both NoSQL and RDBMS databases.

I also added the ability to specify a custom JsonbConfig object via CDI to customize the output of JSON in REST services. That is, if you have a service like this:

@GET
public SomeCustomObject get() {
	return findSomeObject();
}

In this case, the REST framework uses JSON-B to turn SomeCustomObject into JSON. The defaults are usually fine, but sometimes (either for personal preference or for migration needs) you'll want to customize it, particularly changing the behavior from using bean getters for properties to instead use object fields directly as Gson does.
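As a sketch of what that customization might look like, a CDI bean could produce the `JsonbConfig` with a visibility strategy that reads fields directly. The class name here is mine; the `JsonbConfig` and `PropertyVisibilityStrategy` types are standard JSON-B API.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.inject.Produces;
import jakarta.json.bind.JsonbConfig;
import jakarta.json.bind.config.PropertyVisibilityStrategy;

@ApplicationScoped
public class JsonConfigProvider {
	@Produces
	public JsonbConfig getJsonbConfig() {
		return new JsonbConfig()
			// Serialize object fields directly, Gson-style, instead of
			// going through bean getters
			.withPropertyVisibilityStrategy(new PropertyVisibilityStrategy() {
				@Override
				public boolean isVisible(Field field) { return true; }

				@Override
				public boolean isVisible(Method method) { return false; }
			});
	}
}
```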

I also expanded view support in NoSQL by adding a mechanism for querying views with full-text searches. This is done via the ViewQuery object that you can pass to a repository method. For example, you could have a repository like this:

public interface EmployeeRepository extends DominoRepository<Employee, String> {
	Stream<Employee> listFromSomeView(Sorts sorts, ViewQuery query);
}

Then, you could perform a full-text query and retrieve only the matching entries:

Stream<Employee> result = repo.listFromSomeView(
		null, // no explicit sorting needed here
		ViewQuery.query()
			.ftSearch("Department = 'HR'", Collections.singleton(FTSearchOption.EXACT))
);

Down the line, I plan to add this capability for whole-DB queries, but (kind of counter-intuitively) that would get a bit fiddlier than doing it for views.

XPages Javadoc Provider

The second one is a new project, the XPages Javadoc Provider. This is a teeny-tiny project, not even containing any Java code. It's a plugin for either Designer or normal Eclipse, and it provides Javadoc for some standard XPages classes - specifically, those covered in the official Javadoc for Designer and the XPages Extensibility APIs. That covers the core XPages framework and extensibility classes, but doesn't cover things like javax.faces.* or lotus.domino.

The way this works is that it uses Eclipse's Javadoc extension point to tell Designer/Eclipse that it can find Javadoc for a couple bundles via the hosted version - really just linking the IDE to the public HTML. I went this route (as opposed to embedding the Javadoc in the plugin) because the docs don't explicitly say they're redistributable, so I have to treat them as not. Interestingly, the docs are actually still hosted on IBM's old site. If HCL publishes them on their own site or makes them officially redistributable, I'll be able to update the project, but for now it's relying on nobody at IBM remembering that they're up there.

In any event, it's not a huge deal, but it's actually kind of nice. Being able to have Javadoc for things like XspLibrary removes a bit of the guesswork in using the API and makes the experience feel just a bit better.

Dipping My Feet Into DKIM and DMARC

Apr 10, 2023, 10:56 AM

Tags: admin

For a very long time now, I've had my mail set up in a grandfathered-in free Google Whatever-It's-Called-Now account, which, despite its creepiness, serves me well. It's readily supported by everything and it takes almost all of the mail-hosting hassle out of my hands.

Not all of the hassle, though, and over the past couple weeks I decided that I should look into configuring DKIM and DMARC, first for my personal mail and (if it doesn't blow up) for my company mail. I had set up SPF a couple years back, and I figured it was high time to finish the rest.
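For reference, the SPF piece is just a DNS TXT record listing the servers allowed to send for the domain. A typical Google-hosted setup looks something like this (the domain and policy here are illustrative, not my actual record):

```
example.com.  IN  TXT  "v=spf1 include:_spf.google.com ~all"
```

The `~all` softfail at the end tells receivers to treat mail from unlisted servers with suspicion rather than rejecting it outright.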

As with any admin-related post, keep in mind that I'm just tinkering with this stuff. I Am Not A Lawyer, and so forth.

The Standards

DKIM is a neat little standard. It's sort of like S/MIME's mail-signing capabilities, except less hierarchical and more commonly enforced on the server than on the client. That "sort of" does some heavy lifting, but it should suffice to think of it like that. What you do is have your server generate a keypair (Google has a system for this), take the public key from that, and stick it in your DNS configuration. The sending server will then add a header to outgoing messages with a signature and a lookup key - in turn, the receiving server can choose to look up the key in the claimed DNS to verify it. If the key exists in DNS and the signature is valid, then the receiver can at least be confident that the sender is who they say they are (in the sense of having control of a sending server and DNS, anyway). Since this signing is server-based, it requires a lot less setup than S/MIME or GPG for mail users, though it also doesn't confer all the same benefits. Neat, though.
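The DNS side of that is a TXT record at a selector-based name under `_domainkey`. A hypothetical record (selector, domain, and key all invented/shortened here) looks like:

```
google._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0B...AQAB"
```

The selector ("google" here) is what shows up in the message's DKIM-Signature header, so a domain can have several active keys at once and rotate them independently.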

DMARC is an interesting thing. It kind of sits on top of SPF and DKIM and allows an admin to define some requested handling of mail for their domain. You can explicitly state that you expect your SPF and DKIM records to be enforced and provide some guidance for recipient servers to do so. For example, you might go whole-hog for your domain: declare that your definitions are complete and that remote servers should outright reject 100% of email claiming to be from it that either didn't come from a server enumerated in your SPF record or lacks a valid DKIM signature. Most likely, at least when rolling it out, you'll start softer, maybe saying to not reject anything, or to quarantine some percentage of failing messages. It's a whole process, but it's good that gradual adoption is built in.
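That softer rollout is also just a TXT record, at `_dmarc` under the domain. An example of a cautious intermediate policy (values invented) might be:

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; pct=25; rua=mailto:dmarc-reports@example.com"
```

Here `p=quarantine` with `pct=25` asks receivers to quarantine only a quarter of failing messages, and `rua` is where the aggregate reports described below get sent.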

Interestingly, DMARC also lets you request that servers that received mail from "you" email you summaries from time to time. These generally (always?) take the form of a ZIP attachment containing an XML file. In there, you'll get a list of servers that contacted them claiming to be you and a summary of the pass/fail state of SPF and DKIM for them. This has been useful - I found that I had to do a little tweaking to SPF for known-good servers. This is vital for a slow roll-out, since it's very difficult to be completely sure you got everything when you first start setting this stuff up, and you don't want to too-eagerly poison your outgoing mail.
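The per-server entries in those reports follow the DMARC aggregate-report XML schema; a trimmed-down record (with invented values) looks something like:

```xml
<record>
  <row>
    <source_ip>203.0.113.10</source_ip>
    <count>3</count>
    <policy_evaluated>
      <disposition>none</disposition>
      <dkim>pass</dkim>
      <spf>fail</spf>
    </policy_evaluated>
  </row>
  <identifiers>
    <header_from>example.com</header_from>
  </identifiers>
</record>
```

An entry like this one - DKIM passing but SPF failing for a known IP - is exactly the sort of thing that tipped me off to the SPF tweaks I needed.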


Really, configuring this stuff wasn't bad. I mostly followed Google's guides for DKIM and DMARC, which are pretty clear and give you a good plan for a slow rollout.

Though Google is my main sender, I still have some older agents on Domino that might send out mail for my old ID from time to time, so I wanted to make sure that was covered too. Fortunately, Domino supports DKIM as well. Admittedly, the process is a little more "raw" than with Google's admin site, but it's not like I'm uncomfortable with a CLI-based approach, and it's in line with other recent-era security additions using the keymgmt tool, like shared DAOS encryption.

It just came down to following the instructions in HCL's docs and it worked swimmingly. If you have a document in your cred store that matches an INI-configured "domain to ID" value for outgoing mail, Domino will use it. Like DMARC's built-in slow-roll-out system, Domino lets you choose between signing mail just when available or being harsher about it and refusing to send out any mail it doesn't know how to sign. I'll probably switch to the second option eventually, since it sounds like a good way to ensure that your server is being a good citizen across the board.
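If memory serves from HCL's docs, that choice between modes is a notes.ini setting along these lines - check the current documentation for the exact name and values before relying on this:

```
RouterDKIMSigning=1
```

Here "1" would be the lenient sign-when-possible mode, with a stricter value for refusing to send unsignable mail.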


In any event, this is all pretty neat. It's outside my bailiwick, but it's good to know about it, and it also helps reinforce a pub-key mental model similar to things like OIDC. It also, as always, just feels good to check a couple more boxes for being a good modern server.