In Development: Containerized Builds in NSF ODP

Sun Apr 30 11:46:46 EDT 2023

Most of my active development happens macOS-side - I'll periodically use Designer in Windows when necessary, but otherwise I'll jump through a tremendous number of hoops to keep things in the Mac realm. The biggest example of this is the NSF ODP Tooling, born from my annoyance with syncing ODPs in Designer and expanded to add some pleasantries for working with ODPs directly in normal Eclipse.

Over the last few years, though, the process of compiling NSFs on macOS has gotten kind of... melty. Apple's progressive locking-down of traditional native-library loading and the general weirdness of the Notes package and its embedded non-JDK JVM have made the setup steadily more fragile. I always end up with a configuration that works, but it's rough going for sure.

Switching to Remote

The switch to ARM for my workstation and the lack of an ARM-native macOS Notes client threw another wrench into the works, and I decided it'd be simpler to switch to remote compilation. Remote operations were actually the first mechanism I added, since it was a lot easier to point at a pre-made Domino+OSGi environment than to spin one up locally, and I've kept that support current since.

My first pass at this was to install the NSF ODP bundles on my main dev server whenever I needed them. This worked, but it was annoying: I'd frequently need to uninstall whatever other bundles I was using for normal work, install NSF ODP, do my compilation/export, and then swap back. Not the best.

Basic Container

Since I had already gotten in the habit of using a remote x64 Docker host, I decided it'd make sense to make a container specifically to handle NSF ODP operations. Since I would just be feeding it ODPs and NSFs, it could be almost entirely faceless, listening only via HTTP and using an auto-generated server ID.

The tack I took for this was to piggyback on the work I had already done to make an IT-suite container for the XPages JEE project. I start with the baseline Domino container from the community script, feed it some basic auto-configure params to relax the HTTP upload-size limits, and add a current build of the NSF ODP OSGi plugins to the Domino server via the filesystem. Leaving out the specifics of the auto-config script, the Dockerfile looks like:

FROM hclcom/domino:12.0.2

ENV LANG="en_US.UTF-8"
ENV SetupAutoConfigure="1"
ENV SetupAutoConfigureParams="/local/runner/domino-config.json"
ENV DOMINO_DOCKER_STDOUT="yes"

RUN mkdir -p /local/runner && mkdir -p /local/eclipse/eclipse/plugins

COPY --chown=notes:notes domino-config.json /local/runner/
COPY --chown=notes:notes container.link /opt/hcl/domino/notes/latest/linux/osgi/rcp/eclipse/links/container.link
COPY --chown=notes:notes staging/plugins/* /local/eclipse/eclipse/plugins/

The runner script copies the current NSF ODP build to "staging/plugins" and it all holds together nicely. Technically, I could skip the container.link bit - that's mostly an affectation because I prefer to take as light a touch as possible when modifying the Domino program directory in a container image.
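
For reference, an Eclipse-style link file is just a one-line pointer at an external plugin directory. Given the paths in the Dockerfile above, the contents of container.link would presumably be the single line:

path=/local/eclipse

Domino's OSGi runtime then picks up anything in /local/eclipse/eclipse/plugins without the image having to write into the program directory itself.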

Automating The Process

While this server has worked splendidly for me, it got me thinking about an idea I've been kicking around for a little while. Since the needs of NSF ODP are very predictable, there's no reason I couldn't automate the whole process in a Maven build, adding a third option beyond local and remote operations where the plugin spins up a temporary container to do the work. That would dramatically lower the requirements on the local environment: you'd just need a Docker-compatible runtime and a Domino image.
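
The Maven-facing surface for this could be as small as a single switch on the existing plugin configuration - something like this sketch, where the useContainerCompiler option is hypothetical and just stands in for whatever the real setting ends up being:

<plugin>
    <groupId>org.openntf.maven</groupId>
    <artifactId>nsfodp-maven-plugin</artifactId>
    <configuration>
        <!-- Hypothetical option name: spin up a temporary Domino container for this build -->
        <useContainerCompiler>true</useContainerCompiler>
    </configuration>
</plugin>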

And, as above, my experience writing integration tests with Testcontainers paid off - in fact, it paid off directly: though Testcontainers is clearly meant for testing, the work it does is exactly what I need here, so I'm re-using it. It has just the sort of API I want: I can build a container from a Dockerfile, I can add resources from the current project or generate them on the fly, and the library's scaffolding ensures that the container is removed when the process completes.
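
As a rough sketch of the shape of that (the class name, build-context path, and compiler endpoint are illustrative, not the plugin's actual code), building and running an image from a Dockerfile like the one above looks something like:

import java.nio.file.Paths;

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.images.builder.ImageFromDockerfile;

public class NsfOdpContainerSketch {
    public static void main(String[] args) throws Exception {
        // Build a throwaway image from a directory containing the Dockerfile;
        // "docker" here is a hypothetical build-context path for this sketch
        ImageFromDockerfile image = new ImageFromDockerfile()
            .withFileFromPath(".", Paths.get("docker"));

        try(GenericContainer<?> domino = new GenericContainer<>(image)
                .withExposedPorts(80)
                // Block until Domino's HTTP task reports itself started on the console
                .waitingFor(Wait.forLogMessage(".*HTTP Server: Started.*\\n", 1))) {
            domino.start();

            // Testcontainers maps container port 80 to a random free host port
            String baseUrl = "http://" + domino.getHost() + ":" + domino.getMappedPort(80);
            // ...POST the packaged ODP to the compiler endpoint at baseUrl here...
        } // the scaffolding stops and removes the container when this block exits
    }
}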

The path I've taken so far is to start up a true Domino server and communicate with it via HTTP, piggybacking on the weird little line-delimited-JSON format I already made for remote operations. This is working really well, and it successfully builds even my most-complex NSFs. I'm not fully comfortable with the HTTP approach, though, since it requires that you can reach the Docker host on an arbitrary port. That's fine for a local Docker runtime or, in my case, a VM on the same local network, where you don't have to worry about firewalls blocking the random port it opens. I think I could avoid that requirement by executing CLI commands in the container and copying the result file out, which would all happen via the Docker socket, but that'll take some work to make sure I can reliably monitor the build status. I have some ideas for that, but I may just ship the HTTP version first so I can have a solid baseline.
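
For comparison, the socket-only variant would lean on Testcontainers' exec and file-copy helpers instead of an open port. Everything here - the status file, its contents, and the output paths - is an assumption for illustration:

import java.io.IOException;

import org.testcontainers.containers.Container.ExecResult;
import org.testcontainers.containers.GenericContainer;

public class SocketOnlySketch {
    // Hypothetical socket-only flow: poll a status file inside the container,
    // then copy the built NSF out, all without any exposed ports
    static void fetchBuildResult(GenericContainer<?> domino) throws IOException, InterruptedException {
        ExecResult status = domino.execInContainer("/bin/sh", "-c", "cat /tmp/nsfodp-status.txt");
        if(status.getExitCode() == 0 && "done".equals(status.getStdout().trim())) {
            // copyFileFromContainer streams the file over the Docker socket
            domino.copyFileFromContainer("/tmp/output.nsf", "target/output.nsf");
        }
    }
}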

Overall, I'm pleased with the experience, and continue to be very happy with Testcontainers even when I'm using it outside its remit. My plan for the short term is to clean the experience up a bit and ship it as part of 3.11.0.
