Showing posts for tag "docker"

Adding Code Coverage Reports To Domino-Container-Run Tests

Mon Mar 11 15:33:02 EDT 2024

Tags: docker testing

When you're writing test suites for your code, it can be very useful to use a tool to analyze the code coverage of your tests. While people can get a little obsessive about coverage percentages, there's certainly no denying that it's helpful to know how much of your code is actually run when testing, and to be able to drill down into the specifics of what is covered.

With Java, one of the preeminent tools for this is JaCoCo, a venerable open-source library that you can integrate with your test suites to give reports of your coverage. In a normal project, such as a build run via Maven, you can use the Maven plugin in tandem with the Maven Surefire and Failsafe plugins. However, things get more complicated if the code you're actually testing isn't in the Surefire JVM, but rather inside a container.

That's exactly the situation I have with the integration-test suite of the XPages Jakarta EE project, which creates a Docker container with the current build of the project deployed as OSGi plugins and then executes HTTP calls against OSGi bundles and NSFs. I figured this was a solvable problem, so I set out to solve it.

I first came across this blog post, which describes the general idea well, but unfortunately references Gists that seem to no longer exist. Still, it gave me a good starting point.

Installing JaCoCo in Domino

The first thing I had to do was to get the JaCoCo Java agent into the container. I added it as a Maven dependency to the IT suite project:

<dependency>
	<groupId>org.jacoco</groupId>
	<artifactId>org.jacoco.agent</artifactId>
	<version>0.8.11</version>
	<scope>test</scope>
</dependency>

Conveniently, this dependency is itself a wrapper for the agent JAR and comes with a convenience method for accessing the JAR data. I used that to read it into memory and send it to the Docker runtime during the container build:

byte[] agentData;
try(InputStream is = AgentJar.getResourceAsStream()) {
	agentData = IOUtils.toByteArray(is);
}
withFileFromTransferable("staging/jacoco.jar", Transferable.of(agentData)); //$NON-NLS-1$

The use of Transferable here allows me to keep the process independent of whether Docker is running locally or remotely - I run remotely almost all the time nowadays, due to Domino's continued lack of an ARM port.

With the file in place, I modified my Dockerfile to copy it to a known location in the container:

COPY --chown=notes:notes staging/jacoco.jar /local/
COPY --chown=notes:notes staging/JavaOptionsFile.txt /local/

The JavaOptionsFile.txt was already there for another ARM-related reason, but it's important to note for the next step. This sort of file is how you enable JaCoCo in the Domino JVM: I set JavaUserOptionsFile=/local/JavaOptionsFile.txt in the notes.ini and the JVM will read its extra options from there. Following the instructions, I added -javaagent:/local/jacoco.jar=output=file,destfile=/tmp/jacoco.exec on its own line in this file. This causes JaCoCo to be automatically loaded with the HTTP JVM and to store its report in the named file on shutdown.

Reading the Data

That said, this didn't work immediately. The file "/tmp/jacoco.exec" was created properly inside the container, so the agent was running, but the file content was always zero bytes. I realized that this was due to the merciless way in which the container is killed by my test suite: there's no proper shutdown step, and so JaCoCo's shutdown hook never fires.

Fortunately, writing to a file isn't the only way JaCoCo can do its reporting - you can also have it open up a TCP port to connect to and read. So I changed the Java option line to:

-javaagent:/local/jacoco.jar=output=tcpserver,address=*,port=6300

I modified the withExposedPorts(...) call inside the class that builds my Testcontainers container to also include 6300, and then used getMappedPort(6300) to identify the actual randomized port mapped by Docker.
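
In Testcontainers terms, that's roughly this shape - the port 80 entry here just stands in for whatever HTTP port the container already exposes:

// In the class that builds the container
withExposedPorts(80, 6300);

// In the code that retrieves the coverage data after the tests run
String host = container.getHost();
int jacocoPort = container.getMappedPort(6300);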

The remaining task was to figure out the little protocol used by JaCoCo to signal that it should collect and return its data. I get the impression that it's not too complicated, but I still figured it'd be best to use an existing implementation. I found jacocotogo, a Maven plugin that reads the data, and it looked promising. However, it had two problems: being a Maven plugin, it came with a bunch of transitive dependencies I didn't want, and it's also 11 years old and thus a bit out of date.

I ended up forking the main utility class, trimming out the parts I didn't need (like JMX), switching it to NIO, and going from there.
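
For reference, JaCoCo's org.jacoco.core artifact also ships a small client for this protocol, so an alternative to forking would look something like the sketch below. Since I went the fork route, treat this as an untested outline; host and jacocoPort are the values from the Testcontainers calls above.

// Ask the agent to dump its current execution data over TCP and save it locally
ExecDumpClient client = new ExecDumpClient();
client.setDump(true);   // request the collected data
client.setReset(false); // leave the agent's counters alone
ExecFileLoader loader = client.dump(host, jacocoPort);
loader.save(new File("target/jacoco.exec"), false);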

Using the Data

With that all in place, a test run will end up with a file named "jacoco.exec" inside the "target" directory. Using this file varies by IDE, but, in Eclipse, you can install the EclEmma tool, open the "Coverage" view, right-click in the table area, and choose "Import Session...". That will let you locate the file and then choose the projects from your workspace that you're looking to analyze.

When I did that, I got my results:

Screenshot of Eclipse's Coverage tool detailing my test suite's coverage of somewhere around 50-65%

This is surprisingly good for the project, especially when you consider that large chunks of the red bars are things like the servlet wrapper package, which contains a lot of delegating code that is obligatory to match the interface but is unlikely to actually be used in practice.

While this is currently the only project where I've needed to do this, it'll certainly be good to keep these techniques in mind. The TCP port thing in particular should be handy in future edge cases even without the Docker part.

Homelab Rework: Phase 3 - TrueNAS Core to Scale

Sat Mar 02 14:00:08 EST 2024

  1. Planning a Homelab Rework
  2. Homelab Rework: Phase 1
  3. Homelab Rework: Phase 2
  4. Homelab Rework: Phase 3 - TrueNAS Core to Scale

When I last talked about the ragtag fleet of computers I now generously call a "homelab", I had converted my gaming/VM machine back from Proxmox to Windows, where it remains (successfully) to this day.

For a while, though, I've been eyeing converting my NAS from TrueNAS Core to Scale. While I really like FreeBSD technically and philosophically, running Linux was very appealing for a number of reasons. Still, it was a high-risk operation, even though the actual process of migration looked almost impossibly easy. For some reason, I decided to take the plunge this week.

The Setup

Before going into the actual process, I'll describe the setup a bit. The machine in question is a Mac Pro 1,1: two Xeon 5150s, four traditional HDDs for storage, and a handful of M.2 drives. The machine itself is far, far too old to have NVMe on the motherboard, but it does have PCIe, so I got a couple adapter cards. The boot volume is a SATA M.2 disk on one of them, while I have some actual NVMe ones serving as cache/log devices in the ZFS pool. Also, though everything says that the maximum RAM capacity is 32 GB, I actually have 64 in there and it's worked perfectly.

It's a bit of a weird beast this way, but those old Mac Pros were built to last, and it's holding up.

Also, if you're not familiar with TrueNAS and its different variants, it's worth a bit of explanation. TrueNAS Core (née FreeNAS) is a FreeBSD-based NAS-focused OS. You primarily interact with it via a web-based GUI and its various features heavily revolve around the use of ZFS, while its app system uses FreeBSD jails and its VM system uses Bhyve. TrueNAS Scale is a related system, but based on Debian Linux instead of FreeBSD. It still uses ZFS, and its GUI is similar to Core, but it implements its apps and VMs differently (more on this in a bit). For NAS/file-share uses, there's actually less of a difference than you might think based on their different underlying OSes, but the distinctions come into play once you go beyond the basics.

The Conversion

If anything, the above-linked documentation overstates the complexity of the operation. I didn't even need to go the "manual update" route: I went to the Update panel, switched from the TrueNAS Core train to the current non-beta TrueNAS Scale one, hit Update, and let it go. It took a long time, presumably due to the age of the machine, but it did its job and came back up on its own.

Well, mostly: for some reason, the actual data ZFS pool was sort of half-detached. The OS knew it was supposed to have a pool by its name, but didn't match it up to the existing disks. To fix this, I deleted the configuration for the pool (but did not delete the connected service configuration) and then went to Import Pool, where the real one existed. Once it was imported, everything lined back up without further issue.

Since Scale is basically a completely different OS, there are a number of features that Core supports but Scale doesn't. Of that list, the only one I was using was the plugin/jail system, but I had whittled my use down to just Postgres (containing only discardable dev data) and Plex. These are both readily available in Scale's app system, and it was quick enough to get Plex set back up with the same library data.

Apps

As I mentioned, TrueNAS Core uses a custom-built "plugin" system sitting on top of the venerable FreeBSD jail capabilities. Those jails are similar in concept to things like Docker containers, and work very similarly in practice to the Linux Containers system I experienced with Proxmox.

TrueNAS Scale, for its part, uses Kubernetes, specifically by way of K3s, and provides its own convenient UI on top of it. Good thing it does provide this UI, too, since Kubernetes is a whole freaking thing, and I've up until this point stayed away from learning it. I guess my time has come, though. Kubernetes is distinct from Docker - while older versions used Docker as a runtime of sorts, this was always an implementation detail, and the system in use in current TrueNAS Scale is containerd.

Setting aside the conceptual complexity of Kubernetes, this distinction from Core is handy: while not being Docker, Kubernetes can consume Docker-compatible images and run them, and that ecosystem is huge. Additionally, while TrueNAS ships with a set of common app "charts" (Plex included), there's a community project named TrueCharts that adds definitions for tons and tons more.

Domino

That brings me to our beloved Domino. I had actually kind of gotten Domino running in a jail on TrueNAS Core, but it was much more an exercise in seeing if I could do it than anything useful: the installer didn't run, so I had to copy an installation from elsewhere, and the JVM wouldn't even load up without crashing. Neat to see, but I didn't keep it around.

The prospect on Scale is better, though. For one, it's actually Linux and thus doesn't need a binary-compatibility shim like FreeBSD has, and the container runtime meant I could presumably just use the normal image-building process. I could also run it in a VM, since the Linux hypervisor works on this machine while bhyve did not, but I figured I'd give the container path a shot.

Before I go any further, I'll give a huge caveat: while this works better than running it on FreeBSD, I wouldn't recommend actually doing what I've done for production. While it'll presumably do what I want it to do here (be a local replica of all of my DBs without requiring a distinct VM), it's not ideal. For one, Domino plus Kubernetes is a weird mix: Kubernetes is all about building up and tearing down a swarm of containers dynamically, while Domino is much more of a single-server sort of thing. It works, certainly, but Kubernetes is always there to tempt you into doing things in weird ways. Also, I know almost nothing about Kubernetes anyway, so don't take anything I say here as advice. It's good fun, though.

That said, on to the specifics!

Deploying the Container

The way the TrueNAS app UI works, you can go to "Custom App" and configure your container by referencing a Docker image from a repository. I don't normally actually host a Docker registry, instead manually loading the image into the runtime. It might be possible to do that here, but I took the opportunity to set up a quick local-network-only one on my other machine, both because I figured it'd be neat to learn to do that and because I forgot about the Harbor-hosted option on that link.

Since the local registry used HTTP and there's nowhere in the TrueNAS UI to tell it to not use HTTPS, I followed this suggestion to configure K3s to explicitly map it. With that in place, I was able to start pulling images from my registry.

The Domino Version

One quirk I quickly ran into was that I can't use Domino 14 on here. The reason for this isn't an OS problem, but rather a hardware limitation: the new glibc that Domino 14 uses requires the "x86-64-v2" microarchitecture level, and the Xeon 5150 just doesn't have that, since it pre-dates that level by two years.

That's fine, though: I really just want this to house data, not app development, and 12.0.2 will do that with aplomb.

Volume Configuration

The way I usually set up a Domino container when using e.g. Docker Compose is that I define a handful of volumes to go with it: for the normal data dir, for DAOS, for the Transaction Log, and so forth. This is a bit of an affectation, I suppose, since I could also just define one volume for everything and it's not like I actually host these volumes elsewhere, but... I don't know, I like it. It keeps things disciplined.

Anyway, I originally set this up equivalently in the Custom App UI in TrueNAS, creating a "Volume" entry for each of these. However, I found that, for some reason, Domino didn't have write access to the newly-created volumes. Maybe this is due to the uid the container is built to use or something, but I worked around it by using Host Path Volumes instead. The net effect is the same, since they're in the same ZFS pool, and this actually makes it easier to peek at the data anyway, since it can be in the SMB share.

Once I did that and made sure the container user could modify the data, all was well. Mostly, anyway.

Transaction Logs, ZFS, and Sector Size

Once Domino got going, I noticed a minor problem: it would crash quickly, every time. Specifically, it crashed when it started preparing the transaction log directory. I eventually remembered running into the same problem on Proxmox at one point, and it brought me back to this blog post by Ted Hardenburgh. Long story short, my ZFS pool uses 4K sectors and Domino's transaction logs can't deal with that, at least in 12.0.2 and below.

This put me in a bit of a sticky spot, since the way to change this is to re-create the entire pool and I really didn't want to do that.

I came up with a workaround, though, in the form of making a little disk image and formatting it ext4. You can use a loop device to mount a file like a disk, so the process looks like this:

dd if=/dev/zero of=tlog.img bs=1G count=1
sudo /sbin/losetup --find --show tlog.img
sudo mkfs.ext4 /dev/loop0
sudo mount /dev/loop0 /mnt/tlog

That makes a 1GB disk image, formats it ext4, and mounts it as "/mnt/tlog". This process defaults to 512-byte sectors, so I made a directory within it writable by the container user (more on this shortly), configured the Domino container to map the transaction log directory to that path, and all was well.

Normally, to get this mounted at boot, you'd likely put an entry in fstab. However, TrueNAS assumes control over system configuration files like that, and you shouldn't edit them directly. Instead, what I did was write a small script that does the losetup and mount lines above and added an entry in "System Settings" - "Advanced" - "Init/Shutdown Scripts" to run this at pre-init.

Networking

The next hurdle I wanted to get over was the networking side. You can map ports in apps in a similar way to what you'd do with Docker, but you have to map them to a port 9000 or above. That would be an annoying issue in general, but especially for NRPC. Fortunately, the app configuration allows you to give the container its own IP address in the "Add external Interfaces" (sic) configuration section. Since the virtual MAC address changes each time the container is deployed, I gave it a static IP address matching a reservation I carved out on my DHCP server, pointed it to the DNS server, and all was well. All of Domino's open ports are available on that IP, and it's basically like a VM in that way.

Container User

Normally, containers in TrueNAS's app system run as the "apps" user, though this is configurable per-app. The way the Domino container launches, though, it runs as UID 1000, which is notes inside the container. Outside the container, on my setup, that ID maps to... my user account jesse.

Administration-wise, that's not exactly the best! In a less "for fun" situation, I'd change the container user or look into UID mapping as I've done with Docker in the past, but honestly it's fine here. This means it's easy for me to access and edit Domino data/config files over the share, and it made the volume mapping above work without incident. As long as no admins find out about this, it can be my secret shame.

Future Uses

So, at this point, the server is doing the jobs it was doing previously, plus acting as a nice extra replica server for Domino. It's positioned well now for me to do a lot of other tinkering.

For one, it'll be a good opportunity for me to finally learn about Kubernetes, which I've been dragging my feet on. I installed the Portainer chart from TrueCharts to give me a look into the K8s layer in a way that's less abstracted than the TrueNAS UI but, for now, more familiar and comfortable to me than the kubectl tool.

Additionally, since the hypervisor works on here, it'll be another good location for me to store utility VMs when I need them, rather than putting everything on the Windows machine (which has half as much RAM).

I could possibly use it to host servers for various games like Terraria, though I'm a bit wary of throwing such ancient processors at the task. We'll see about that.

In general, I want to try hosting more things from home when they're non-critical, and this will definitely give me the opportunity. It's also quite fun to tinker with, and that's the most important thing.

Planning a Homelab Rework

Sun Jun 25 20:39:37 EDT 2023

  1. Planning a Homelab Rework
  2. Homelab Rework: Phase 1
  3. Homelab Rework: Phase 2
  4. Homelab Rework: Phase 3 - TrueNAS Core to Scale

Over the years, I've shifted how I use my various computers here, gaining new ones and shifting roles around. My current setup is a mix of considered choices and coincidence, but it works reasonably well. "Homelab" may be a big term for it, but I have something approaching a reasonable array:

  • nemesis, a little Netgate-made device running pfSense and serving as my router and firewall
  • trantor, a first-edition Mac Pro running TrueNAS Core
  • terminus, my gaming PC that does double duty as my Hyper-V VM host for the Windows and Linux VMs I use for work
  • nephele, a 2013 MacBook Air I recently conscripted to be a Docker host running Debian
  • tethys, my current desktop workhorse, an M2 Mac mini running macOS
  • ione, an M1 MacBook Air with macOS that does miscellaneous and travel duty

While these things work pretty well for me, there are some rough edges. tethys and ione need no changes, but others are up in the air.

The NAS

trantor's Xeon is too old to run the bhyve hypervisor used by TrueNAS Core, so I can't use it for VM duty. It can run jails, which I've historically used for some apps like Plex and Postgres, and those are good. Still, as much as I like FreeBSD, I have to admit that running Linux there would get me a bit more, such as another Domino server (I got Domino running under Linux binary compatibility, but only in a very janky and unreliable way). I'm considering side-grading it to TrueNAS Scale, but the extremely-high likelihood of trouble with that switch on such an odd, old machine gives me pause. As a NAS, it's doing just fine as-is, and I may leave it like that.

The Router

nemesis is actually perfectly fine as it is, but I'm pondering giving it more work. While Linode is still doing well for me, its recent price hike left a bad taste in my mouth, and I wouldn't mind moving some less-essential services to my own computers here. If I do that, I could either map a port through the firewall to an internal host (as I've done previously) or, as I'm considering, run HAProxy directly on it and have it serve as a reverse proxy for all machines in the house. On the one hand, it seems incorrect and maybe unwise to have a router do double-duty like this. On the other, since I'm not going to have a full-fledged DMZ or anything, it's perfectly-positioned to do the job. Assuming its little ARM processor is up to the task, anyway.

Fortunately, this one is kind of speculative anyway. While I used to have some outwardly-available services I ran from in the house, those have fallen by the wayside, so this would be for a potential future case where I do something different.

The VM Host/Gaming PC

terminus, though, is where most of my thoughts have been spent lately. It's running Windows Server 2019, an artifact of a time when I thought I'd be doing more Server-specific stuff with it. Instead, though, the only server task I use it for is Hyper-V, and that runs just the same on client Windows. The straightforward thing to do with it would be to replace Windows Server with Windows 11 - that'd put me back on a normal release train, avoid some edge cases of programs complaining about OS compatibility, and generally be a bit more straightforward. However, the only reason I have Windows as the top-level OS on there at all is for gaming needs. Moreover, it just sits in the basement, and I don't even play games directly at its console anymore - it's all done through Parsec.

That has me thinking: why don't I demote Windows to a VM itself? The processor has a built-in video chipset for basic needs, and I could use PCIe passthrough to give full control of my video card to a Windows VM. I could pick a Linux variant to be the true OS for the machine, run VMs and containers as top-level peers, and still get full speed for games inside the Windows VM.

I have a few main contenders in mind for this job.

The frontrunner is Proxmox, a purpose-built Debian variant geared towards running VMs and Linux Containers as its bread-and-butter. Though I don't need a lot of the good features here, it's tough to argue with an OS that's specifically meant for the task I'm thinking of. However, it's a little sad that the containers I generally run aren't LXC-type containers, and so I'd have to make a VM or container to then run Docker. In the latter form, there wouldn't be too much overhead to it, but it'd be kind of ugly, and it'd feel weird to do a full revamp to then immediately do one of the main tasks only in a roundabout way.

That leads to my dark-horse candidate of TrueNAS Scale. This isn't as obvious a choice, since this machine isn't intended to be a NAS, but Scale bumps Kubernetes and Docker containers to the top level, and that's exactly how I'd plan to use it. I assume that the VM management in TrueNAS isn't as fleshed out as Proxmox, but it's on a known foundation and looks plenty suitable for my needs.

Finally, I could just skip the "utility" distributions entirely and just install normal Debian. It'd certainly be up to the task, though I'd miss out on some GUI conveniences that I'd otherwise use heavily. On the other hand, that'd be a good way to force myself to learn some more fundamentals of KVM, which is the sort of thing that usually pays off down the line.

For Now

Or, of course, I could leave everything as-is! It does generally work - Hyper-V is fine as a hypervisor, and it's only sort of annoying having an extra tier in there. The main thing that will force me to do something is that I'll want to ditch Windows Server regardless, even if I just do an in-place switch to Windows 11. We'll see!

Weekend Tinkering With Traefik

Mon May 29 11:57:34 EDT 2023

Tags: docker

For my D&D group, we've been using the venerable Roll20 for a good long time. It's served us okay, but it's barely improved for our needs over the years and our eyes have been wandering. Specifically, our eyes wandered over to Foundry VTT. Foundry has a lot going for it: it's sharp-looking, it has tons of mods, and you can host it yourself.

So, a bit ago, I set up just such an instance, making a Docker container out of it on one of my Linode servers and configuring my nginx reverse proxy on another Linode to point to it. There was a little fiddling to be done to my usual setup to make sure it passes along the WebSocket stuff, but it worked.

However, when we put it to the test, the DM side seemed slow, in a way that could be readily attributable to the fact that there's an extra network hop between the reverse proxy and the WebSocket destination. To lessen that as a possibility, I decided I should point the DNS directly to the host running it, eliminating the hop.

My first plan was to do the same thing I had with the larger setup, but just locally: spin up nginx and pair it with certbot on a cron job to handle the HTTPS certificates. However, it's been a long time since I had developed my current standard setup and I figured there's probably a nicer way to do it, since this is a very normal case.

Traefik

And so my eyes turned to Traefik, a purpose-built tool for this sort of thing. It has a lot of nice fiddly options, but one of its cleanest uses is to deploy it as a Docker container and have it use the Docker socket for picking up configuration to route to other containers.

I ended up with a Compose configuration that's more-or-less right out of any tutorial you'd find for this:

version: "3.3"
services:
  traefik:
    image: "traefik:v2.10"
    privileged: true
    userns_mode: host
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.leresolver.acme.email=<my email>"
      - "--certificatesresolvers.leresolver.acme.httpchallenge.entrypoint=web"
      - "--certificatesresolvers.leresolver.acme.storage=/letsencrypt/acme.json"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "letsencrypt:/letsencrypt"
    networks:
      - traefiknet
networks:
  traefiknet:
    name: traefiknet
    external: true
volumes:
  letsencrypt: {}

You can configure Traefik with configuration files as well, but the route I'm taking is to pass the config I need in the command parameters, so the entire thing is specified in the Compose file. I have it configured here to use Docker for its configuration discovery, to listen on ports 80 and 443, and to enable a Let's Encrypt resolver. On that last point, it really handles basically everything automatically: if you have an app that declares itself as "app.foo.com" on the HTTPS endpoint, Traefik will pick up on that and automatically do the dance with Let's Encrypt to present the certificate.

I created a Docker network named "traefiknet" for this and all participating apps to sit in. You can also do this by using host networking, but I kind of like this way.

Foundry

With that set up, my next step was to configure Foundry to participate in this. I tweaked the Foundry Compose config to remove the published port, join the common network, and to include Traefik information in its labels:

version: "3.8"

services:
  foundry:
    image: felddy/foundryvtt:release
    init: true
    restart: always
    volumes:
      - foundry_data:/data
    networks:
      - traefiknet
    environment:
      - "FOUNDRY_USERNAME=<my username>"
      - "FOUNDRY_PASSWORD=<my password>"
      - "FOUNDRY_ADMIN_KEY=secret-admin-key"
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=traefiknet"
      
      - "traefik.http.routers.vtt-example.rule=Host(`my.vtt.host`)"
      - "traefik.http.routers.vtt-example.entrypoints=websecure"
      - "traefik.http.services.vtt-example.loadbalancer.server.port=30000"
      - "traefik.http.routers.vtt-example.tls=true"
      - "traefik.http.routers.vtt-example.tls.certresolver=leresolver"
      - "traefik.http.routers.vtt-example.tls.domains[0].main=my.vtt.host"
volumes:
  foundry_data: {}
networks:
  traefiknet:
    name: traefiknet
    external: true

The labels are the meat of it here. I declare that the container participates in the Traefik configuration and will be accessible via the "traefiknet" network I created. Then, I have bits to describe the specific routing. Here, "vtt-example" is an arbitrary name that I picked for this routing config - mostly, it's important that it's distinct from other routing configurations, but otherwise you can pick whatever.

The .rule=Host(my.vtt.host) bit is enough to map all requests beneath that host name to this container. There are other ways to do this - by path, by headers, and so on, or a combination thereof - but this suffices for my needs. This handles the normal sensible defaults for such a thing, including passing WebSockets through nicely. With .entrypoints=websecure, I have it opt in to the HTTPS port (left out of this is that I have another container that configures blanket HTTP -> HTTPS redirection for all hosts). With .loadbalancer.server.port (under "services" instead of "routers"), I can declare that the Foundry app is listening on port 30000 within the container.

The .tls bits declare that this should get a TLS certificate, that it should use the Let's Encrypt resolver (by the name I chose, "leresolver"), and that it should use the domain I specified for it. In theory, I think it should pick up on that domain from the Host rule, but in my setup that didn't work for me - it's possible that that was just due to teething problems in my config, though.

Conclusion

I haven't yet had the opportunity to see if this fixed the sluggishness problem, but I'm glad it gave me the impetus to tinker with this. While I'll probably keep using nginx for most of my configuration (some of my configs are a lot more fiddly than this), I really like this as a default for on-host routing. If you combine that with my gradual shift toward the view that all server software should be deployed in a container unless you have a good reason to do otherwise, this slots in very nicely. I really like how the configuration is distributed away from the reverse proxy and to the apps that are actually being proxied to. With that, you can see everything you need in one place: you know the proxy is out there somewhere, and now the app's Compose file has everything important right in it. So, if you have a need, I'd say give it a look - it's quite neat.

In Development: Containerized Builds in NSF ODP

Sun Apr 30 11:46:46 EDT 2023

Most of my active development happens macOS-side - I'll periodically use Designer in Windows when necessary, but otherwise I'll jump through a tremendous number of hoops to keep things in the Mac realm. The biggest example of this is the NSF ODP Tooling, born from my annoyance with syncing ODPs in Designer and expanded to add some pleasantries for working with ODPs directly in normal Eclipse.

Over the last few years, though, the process of compiling NSFs on macOS has gotten kind of... melty. Apple's progressive locking-down of traditional native loading mechanisms and the general weirdness of the Notes package and its embedded non-JDK JVM have made things get a little weird. I always end up with a configuration that can work, but it's rough going for sure.

Switching to Remote

The switch to ARM for my workspace and the lack of an ARM-native macOS Notes client threw another wrench into the works, and I decided it'd be simpler to switch to remote compilation. Remote operations were actually the first mechanism I added in, since it was a lot easier to have a pre-made Domino+OSGi environment than spinning one up locally, and I've kept things up since.

My first pass at this was to install the NSF ODP bundles on my main dev server whenever I needed them. This worked, but it was annoying: I'd frequently need to uninstall whatever other bundles I was using for normal work, install NSF ODP, do my compilation/export, and then swap back. Not the best.

Basic Container

Since I had already gotten in the habit of using a remote x64 Docker host, I decided it'd make sense to make a container specifically to handle NSF ODP operations. Since I would just be feeding it ODPs and NSFs, it could be almost entirely faceless, listening only via HTTP and using an auto-generated server ID.

The tack I took for this was to piggyback on the work I had already done to make an IT-suite container for the XPages JEE project. I start with the baseline Domino container from the community script, feed it some basic auto-configure params to relax the HTTP upload-size limits, and add a current build of the NSF ODP OSGi plugins to the Domino server via the filesystem. Leaving out the specifics of the auto-config script, the Dockerfile looks like:

FROM hclcom/domino:12.0.2

ENV LANG="en_US.UTF-8"
ENV SetupAutoConfigure="1"
ENV SetupAutoConfigureParams="/local/runner/domino-config.json"
ENV DOMINO_DOCKER_STDOUT="yes"

RUN mkdir -p /local/runner && mkdir -p /local/eclipse/eclipse/plugins

COPY --chown=notes:notes domino-config.json /local/runner/
COPY --chown=notes:notes container.link /opt/hcl/domino/notes/latest/linux/osgi/rcp/eclipse/links/container.link
COPY --chown=notes:notes staging/plugins/* /local/eclipse/eclipse/plugins/

The runner script copies the current NSF ODP build to "staging/plugins" and it all holds together nicely. Technically, I could skip the container.link bit - that's mostly an affectation because I prefer to take as light a touch as possible when modifying the Domino program directory in a container image.

Automating The Process

While this server has worked splendidly for me, it got me thinking about an idea I've been kicking around for a little while. Since the needs of NSF ODP are very predictable, there's no reason I couldn't automate the whole process in a Maven build, adding a third option beyond local and remote operations where the plugin spins up a temporary container to do the work. That would dramatically lower the requirements on the local environment, making it so that you just need a Docker-compatible environment with a Domino image.

And, as above, my experience writing integration tests with Testcontainers paid off. In fact, it paid off directly: though Testcontainers is clearly meant for testing, the work it does is exactly what I need, so I'm re-using it here. It has exactly the sort of API I want for this: I can specify that I want a container from a Dockerfile, I can add in resources from the current project and generate them on the fly, and the library's scaffolding will ensure that the container is removed when the process is complete.
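
In rough terms, that usage looks like the sketch below - the names and paths are illustrative rather than the plugin's actual code:

// Build a one-off image from a Dockerfile plus resources generated on the fly...
ImageFromDockerfile image = new ImageFromDockerfile()
	.withFileFromClasspath("Dockerfile", "/docker/Dockerfile")
	.withFileFromTransferable("staging/plugins/nsfodp.jar", Transferable.of(pluginData));

// ...then run it like any other container, cleaned up automatically when done
try(GenericContainer<?> domino = new GenericContainer<>(image).withExposedPorts(80)) {
	domino.start();
	// send the ODP over HTTP and retrieve the compiled NSF
}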

The path I've taken so far is to start up a true Domino server and communicate with it via HTTP, piggybacking on the existing weird little line-delimited-JSON format I made. This is working really well, and I have it successfully building my most-complex NSFs nicely. I'm not fully comfortable with the HTTP approach, though, since it requires that you can contact the Docker host on an arbitrary port. That's fine for a local Docker runtime or, in my case, a VM on the same local network, where you don't have to worry about firewalls blocking off the random port it opens. I think I could do this by executing CLI commands in the container and copying a file out, which would happen all via the Docker socket, but that'll take some work to make sure I can reliably monitor the status. I have some ideas for that, but I may just ship it using HTTP for the first version so I can have a solid baseline.
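
For the record, that socket-only route would be shaped something like this sketch, where the script and file paths are hypothetical:

// Run the compilation inside the container and copy the result back out,
// all via the Docker socket rather than an arbitrary mapped port
Container.ExecResult result = container.execInContainer("/local/runner/compile.sh", "/tmp/odp");
if(result.getExitCode() == 0) {
	container.copyFileFromContainer("/tmp/output.nsf", "target/output.nsf");
}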

Overall, I'm pleased with the experience, and continue to be very happy with Testcontainers even when I'm using it outside its remit. My plan for the short term is to clean the experience up a bit and ship it as part of 3.11.0.

Quick Tip: Stashing Log Files From Domino Testcontainers

Tue Mar 28 11:36:53 EDT 2023

Tags: docker

I've been doing a little future-proofing in the XPages Jakarta EE project lately and bumped against a common pitfall in my test setup: since I create a fresh Domino Testcontainer with each run, diagnostic information like the XPages log files are destroyed at the end of each test-suite execution.

Historically, I've combatted this manually: if I make sure to not close the container and I kill the Ryuk watcher container the framework spawns before testing is over, then the Domino container will linger around. That's fine and all, but it's obviously a bit crude. Plus, other than when I want to make subsequent HTTP calls against it, I generally want the same stuff: IBM_TECHNICAL_SUPPORT and the Equinox logs dir.

Building on a hint from a GitHub issue reply, I modified my test container to add a hook to its close event to copy the log files into the IT module's target directory.

In my DominoContainer class, which builds up the container from my settings, I added an implementation of containerIsStopping:

@SuppressWarnings("nls")
@Override
protected void containerIsStopping(InspectContainerResponse containerInfo) {
	super.containerIsStopping(containerInfo);
		
	try {
		// If we can see the target dir, copy log files
		Path target = Paths.get(".").resolve("target"); //$NON-NLS-1$ //$NON-NLS-2$
		if(Files.isDirectory(target)) {
			this.execInContainer("tar", "-czvf", "/tmp/IBM_TECHNICAL_SUPPORT.tar.gz", "/local/notesdata/IBM_TECHNICAL_SUPPORT");
			this.copyFileFromContainer("/tmp/IBM_TECHNICAL_SUPPORT.tar.gz", target.resolve("IBM_TECHNICAL_SUPPORT.tar.gz").toString());
				
			this.execInContainer("tar", "-czvf", "/tmp/workspace-logs.tar.gz", "/local/notesdata/domino/workspace/logs");
			this.copyFileFromContainer("/tmp/workspace-logs.tar.gz", target.resolve("workspace-logs.tar.gz").toString());
		}
	} catch(IOException | UnsupportedOperationException | InterruptedException e) {
		e.printStackTrace();
	}
}

This will tar/gzip up the logs en masse and drop them in my project's output:

Screenshot of the target directory with logs copied

Having this happen automatically should save me a ton of hassle in the cases where I need this, and I figured it was worth sharing in case it's useful to others.

JPA in the XPages Jakarta EE Project

Sat Mar 18 11:55:36 EDT 2023

For a little while now, I'd had an issue open to implement Jakarta Persistence (JPA) in the project.

JPA is the long-standing API for working with relational-database data in JEE and is one of the bedrocks of the platform, used by presumably most normal apps. That said, it's been a pretty low priority here, since the desire to write applications based on a SQL database but running on Domino could be charitably described as "specialized". Still, the spec has been staring me in the face, maybe it'd be useful, and I could pull a neat trick with it.

The Neat Trick

When possible, I like to make the XPages JEE project act as a friendly participant in the underlying stack, building on good use of the ComponentModule system, the existing app lifecycle, and so forth. This is another one of those areas: XPages (re-)gained support for relational data over a decade ago and I could use this.

Tucked away in the slide deck that ships with the old ExtLib is this tidbit:

Screenshot of a slide, highlighting 'Available using JNDI'

JNDI is a common, albeit creaky, mechanism used by app servers to provide resources to apps running on them. If you've done LDAP from Java, you've probably run into it via InitialContext and whatnot, but it's used for all sorts of things, DB connections included. What this meant is that I could piggyback on the existing mechanism, including its connection pooling. Given its age and lack of attention, I imagine that it's not necessarily the absolute best option, but it has the advantage of being built in to the platform, limiting the work I'd need to do and the scope of bugs I'd be responsible for.
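
As a sketch of what that piggybacking looks like from code, it's an ordinary JNDI lookup - the "jdbc/postgresql" name here is an assumption standing in for whatever the JDBC connection is actually named in your configuration:

// Look up the pooled DataSource that the relational support exposes via JNDI
Context jndi = new InitialContext();
DataSource dataSource = (DataSource)jndi.lookup("java:comp/env/jdbc/postgresql");
try(Connection conn = dataSource.getConnection()) {
	// hand the connection off to whatever needs it
}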

Implementation

With one piece of the puzzle taken care of for me, my next step was to actually get a JPA implementation working. The big, go-to name in this area is Hibernate (which, incidentally, I remember Toby Samples getting running in XPages long ago). However, it looks like Hibernate kind of skipped over the Jakarta EE 9 target with its official releases: the 5.x series uses the javax.persistence namespace, while the 6.x series uses jakarta.persistence but requires Java 11, matching Jakarta EE 10. Until Domino updates its creaky JVM, I can't use that.

Fortunately, while I might be able to transform it, Hibernate isn't the only game in town. There's also EclipseLink, another well-established implementation that has the benefits of having an official release series targeting JEE 9 and also using a preferable license.

And actually, there's not much more to add on that front. Other than writing a library to provide it to the NSF and a resolver to account for OSGi's separation, I didn't have to write a lot of code.

Most of what I did write was the necessary code and configuration for normal JPA use. There's a persistence.xml file in the normal format (referencing the source made by the XPages JDBC config file), a model class, and then access using the normal API.
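
For illustration, that side of things is stock JPA - something in the vein of this sketch, with the entity and persistence-unit names as placeholders rather than the project's actual code:

@Entity
@Table(name="companies")
public class Company {
	@Id
	@GeneratedValue(strategy=GenerationType.IDENTITY)
	private long id;

	@Column(name="name")
	private String name;

	// getters and setters omitted
}

// ...and then, in the REST resource or a bean:
EntityManagerFactory emf = Persistence.createEntityManagerFactory("somePersistenceUnit");
EntityManager em = emf.createEntityManager();
try {
	List<Company> companies = em.createQuery("SELECT c FROM Company c", Company.class)
		.getResultList();
} finally {
	em.close();
}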

In a normal full app server, the container would take care of some of the dirty work done by the REST resource there, and that's something I'm considering for the future, but this will do for now.

Writing Tests

One of the neat side effects is that, when I went to write the test case for this, I got to make better use of Testcontainers. I'm a huge fan of Testcontainers and I've used it for a good while for my IT suites, but I've always lost a bit by not getting to use the scaffolding it provides for common open-source projects. Now, though, I could add a PostgreSQL container alongside the Domino one:

postgres = new PostgreSQLContainer<>("postgres:15.2")
	.withUsername("postgres")
	.withPassword("postgres")
	.withDatabaseName("jakarta")
	.withNetwork(network)
	.withNetworkAliases("postgresql");

Here, I configure a basic Postgres container, and the wrapper class provides methods to specify the extremely-secure username and password to use, as well as the default database name. Here, I pass it a network object that lets it share the same container network space as the Domino server, which will then be able to refer to it via TCP/IP as the bare name "postgresql".
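
The network object itself comes from Testcontainers' Network API and is shared by both container definitions - roughly this, where dominoContainer stands in for however the Domino side is constructed:

// Created once and handed to both containers so they can resolve each other by alias
Network network = Network.newNetwork();

// The Domino container joins the same network as the Postgres one above
dominoContainer.withNetwork(network);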

The remaining task was to write a method in the test suite to make sure the table exists. You can do this in other ways - Testcontainers lets you run init scripts via URL, for example - but for one table this suits me well. In the test class where I want to access the REST service I wrote, I made a @BeforeAll method to create the table:

@BeforeAll
public static void createTable() throws SQLException {
	PostgreSQLContainer<?> container = JakartaTestContainers.instance.postgres;
		
	try(Connection conn = container.createConnection(""); Statement stmt = conn.createStatement()) {
		stmt.executeUpdate("CREATE TABLE IF NOT EXISTS public.companies (\n"
				+ "	id BIGSERIAL PRIMARY KEY,\n"
				+ "	name character varying(255) NOT NULL\n"
				+ ");");
	}
}

Testcontainers takes care of some of the dirty work of figuring out and initializing the JDBC connection for me. That's not particularly-onerous work, but it's one of the small benefits you get when you're doing the same sort of thing other users of the tool are doing.
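
For completeness, an init-script route would be roughly this instead, using the withInitScript hook and a hypothetical SQL file on the test classpath:

postgres = new PostgreSQLContainer<>("postgres:15.2")
	.withInitScript("init/create-companies.sql");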

With that, everything went swimmingly. Domino saw the Postgres container (thanks to copying the JDBC driver to the classpath) and the JPA access worked just the same as it does in my real environment.

Like with the implementation, there's not much there beyond "yep, do the things the docs say and it works". Though there were the usual hurdles that I've gotten used to with adding things like this to Domino, this all went pleasantly smoothly. I may build on this in the future - such as the aforementioned server-managed JPA bits - but that will depend on whether I or others have need. Regardless, I'm glad it's in there.

The Big Apple Silicon Switch, Followup

Thu Feb 16 13:17:49 EST 2023

Tags: docker java

Yesterday, I wrote a post detailing my recent switch to Apple Silicon, and in it I noted that the main remaining problem I was butting heads with was running Domino in Docker containers, specifically in the context of working with Java.

While my general solution of connecting to Docker running remotely is good, it's all the better to have this stuff working locally. I gave it another shot today and found that the solution was already hinted at in that post: disable the JIT.

Java uses the JIT - its just-in-time compiler - to speed up execution dynamically when running a program, and it's part of what makes Java surprisingly fast in practice. It's a whole rabbit hole of performance implications and comparisons, but it will suffice to say "JIT = good". However, as I found when running test cases that load libnotes.dylib directly, J9's JIT doesn't play well with the x64-to-ARM JIT that these Macs use, either Rosetta's or qemu's.

So, long story short, the solution was to modify my testing container to disable the JIT when in such an environment (the referenced commit has since been followed up a few times).

First off, I added a bit in my Domino One-Touch Config file to point to a Java options file in the notes.ini:

{
	/* ... */
	"notesINI": {
		/* ... */
		"JavaUserOptionsFile": "/local/JavaOptionsFile.txt"
	}
	/* ... */
}

I modified the Dockerfile to bring this in from the build environment:

# ...
COPY --chown=notes:notes staging/JavaOptionsFile.txt /local/

Finally, I modified the class I use to build the container to optionally populate this file with the flag to disable the JIT:

String arch = DockerClientFactory.instance().getInfo().getArchitecture();
if(!"x86_64".equals(arch)) {
	withFileFromTransferable("staging/JavaOptionsFile.txt", Transferable.of("-Djava.compiler=NONE"));
} else {
	withFileFromTransferable("staging/JavaOptionsFile.txt", Transferable.of(""));
}

(Transferable is a Testcontainers-ism for any available source of a file to put into the build environment. Often, this will be something from the project classpath, but it can also be arbitrary data like this.)

With that in place, the test suite works! Eventually!

The thing to know about this is that it's slow. It's going to be naturally slow due to emulation, and I imagine the lack of Java's JIT doesn't help anything. Running the XPages JEE test suite takes about 520 seconds in this setup, as compared to 180 seconds when run via my remote native VM. Such a dramatic drop in speed is enough that I'm likely to continue using the remote VM for most things, and only use local emulation for things that significantly benefit from filesystem binds. For what it's worth, it seems like multithreading is what really kills it: most tests are slower by roughly 50%, while a big multithread test balloons from 20s to 160s. That's consistent with my experience with local unit tests, where the multithread ones were the ones that crashed the process.

For the record, I found that this currently only works at all with Docker Desktop's default (qemu-based) emulation. When I switch the emulator to Rosetta, I hit a StackOverflowError from the HTTP JVM during init without any useful other information. That's fair enough, since that emulation type is still flagged as Experimental in Docker Desktop anyway.

Anyway, it's good to know that there's a way to make it work. It's still no replacement for a native container, but at least it works at all. Similar techniques should work with other tasks, like setting the above option in the JAVA_OPTS environment variable.

The Big Apple Silicon Switch

Wed Feb 15 12:01:19 EST 2023

Tags: docker java

Almost exactly two years ago now, I wrote a post describing using Apple Silicon for work while my iMac was in the shop. That was an interesting experience, but too thorny for me to want to really stick with. Some of that was due to the limitations of the hardware - it was a pretty low-end MacBook Air - but most of that was due to the very-much-in-progress world of Java tooling for ARM Macs.

Since then, I'd been itching to find a machine to replace the iMac Pro and the recently-released Mac mini finally fit the bill. It, now dubbed Tethys, arrived on Friday, and I've been going through the process of re-carving-out my working environment, this time starting clean.

Eclipse

Compared to my first dive into ARM Macs, things have gotten a lot smoother. The biggest one for me is the stability of Java generally and Eclipse specifically. Last time, I was on the bleeding edge of that, and I was fortunate enough to at least help diagnose and report some of the things barring the port from being complete. The experience was very janky but the sheer snappiness of Eclipse on ARM really stuck with me, and it's still there in spades.

And, really, there's not even a lot of special things to know. If you download the aarch64 build of Eclipse, it just works like you'd want it to. I had to make sure to download some x64 builds of Java for when I'm running legacy native code, but that wasn't really different from my current collection of different Java versions anyway.

This is all a huge distinction from before, where I was going to weird lengths to try to run Eclipse via X11 remotely just to get something responsive.

Notes and libnotes

For my needs, Notes works fine. It's a shame that it's not native, but, as long as I run my test suites with an x64 JVM, things work about as well as they do on x64.

One immediate problem I ran into, though, was that some test cases that push multithreading would hard crash with a native error to do with the x64-to-ARM translation and JIT. Running with an OpenJ9-based JVM, I found that specifying -Djava.compiler=NONE in the launch args avoided this trouble. I'm sure that the tests run a bit slower now, but they were already emulated anyway, so it's not noticeable.

Domino and Docker

One of the sticking points with my half-transition years ago was the need to run my Domino Docker containers in emulation. Though HCL will have to port Domino to ARM eventually, they haven't yet released such a thing, and so that situation remains the same.

And, unfortunately, the segfault I encounter during my use remains. I had hoped that changes in qemu would fix it, but it appears to be unchanged. Recent versions of Docker can also take advantage of Apple's support for Rosetta in Linux, but for my needs that just creates a different segfault earlier in the process.

So, on this front, I'm doing the same thing I started doing before: running Docker on a VM on actual x64 hardware and using the DOCKER_HOST environment variable to point to it. Fortunately, I've taken steps in the intervening years to make this more practical. The main thing to work around with such a setup is the inability to use filesystem binds, and so I've changed a lot of my Dockerfiles and Testcontainers setups to instead copy more into the image at build. I still haven't solved every problem there, in particular a huge in-container build I'd like to do that points at a giant pool of dependencies that would be onerous to put into the image, but I'm working on that.
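
On the Testcontainers side, that mostly means swapping filesystem binds for copies - a change roughly like this, with illustrative paths:

// Before: a bind mount, which doesn't work against a remote Docker host
// container.withFileSystemBind("/local/project/data", "/local/notesdata", BindMode.READ_WRITE);

// After: copy the files into the container (or bake them into the image at build)
container.withCopyFileToContainer(MountableFile.forHostPath("/local/project/data"), "/local/notesdata");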

Conclusion

Overall, I'm pleased as punch with this thing. It's zippy all around, I haven't once heard the fan even when really hitting the CPU and GPU, and the state of the software ecosystem is such that I only use a very few x64-compiled processes during my daily work. I'd expected to have more of a checklist to work down to clean them all up, but it's really just the for-now-required bits. Not bad at all.

Building a Full Domino Image for JUnit Tests

Sun Jan 23 15:57:51 EST 2022

Tags: docker domino
  1. Tinkering With Testcontainers for Domino-based Web Apps
  2. Adding Selenium Browser Tests to My Testcontainers Setup
  3. Building a Full Domino Image for JUnit Tests

Last year, I wrote about how I built images to use Testcontainers to run tests against a Liberty app that uses a Domino runtime. In that situation, I used the Domino Docker image from Flexnet but then pulled out the program files and stock data, mixed with pre-configured server support files from the repository.

Recently, though, I've had a need for a similar setup, but altered in four ways:

  1. It should fully run Domino, not just use the data and program files for the runtime in Liberty
  2. It should not require pre-populating a server ID, notes.ini, and names.nsf from outside - that is, it should be self-configuring
  3. It should also have an extra component installed, one that must be installed after Domino is configured but before the image is fully built
  4. On the next launch, I need a post-install agent to run for final configuration, and the tests need to wait on that

Additionally, it should still be runnable with the same basic tools - the image should be built and the container started and destroyed by automated tests. This stricture comes into play in weird ways.

One-Touch Setup

The second requirement - that the container be self-configuring - is handled adroitly by One-Touch Setup, the feature in Domino 12 and above that lets you specify a configuration JSON file. I used this here in basically the same way as I described in that post: the script sets up and certifies a throwaway domain with a known admin username + password, and also deploys a few NTFs on first proper launch. Since this server intentionally doesn't communicate with the outside world, I don't need to provide any external files like a certificate authority or server ID.

Switching the Baseline

Initially, I continued to use the official image from Flexnet. However, I had run into some trouble with multi-stage builds using this earlier. I lack enough Docker knowledge to be sure, but my suspicion is that it declares /local/notesdata as an expected externally-provided volume. This on its own is fine, but it interacts oddly with automated container building. The trouble comes in when you do something like this:

FROM domino-docker:V1201_11222021prod
ENV SetupAutoConfigure "1"
ENV SetupAutoConfigureParams "/local/domino-config.json"
COPY --chown=notes:wheel domino-config.json /local/
RUN /local/start.sh

RUN /some/script/that/uses/notesdata

By the time it hits that second RUN line using automated build mechanisms, /local/notesdata is depopulated. I'm not sure why this is different from a command-line docker build, but it is what it is.

Fortunately, the "community" version of the image builder from https://github.com/IBM/domino-docker doesn't exhibit this behavior. I had been considering switching over already, so this made the decision all the easier.

Breaking Up Setup Into Stages

With this change in hand, I was able to more-properly break up the setup into multiple stages. This is important both for requirement #3 above and because it allows Docker to cache intermediate results. Though I want the server to auto-configure itself when building, I don't need the results of that to be different every run, and thus I can save tons of time in subsequent launches if I handle these caches well.

So my Dockerfile started to look something like this:

FROM hclcom/domino:12.0.1
# Relay Domino output to the container logs
ENV DOMINO_DOCKER_STDOUT "yes"
ENV SetupAutoConfigure "1"
ENV SetupAutoConfigureParams "/local/DominoAutoConfig.json"

COPY --chown=notes:wheel domino-config.json /local/DominoAutoConfig.json
COPY --chown=notes:wheel notesdata/* /local/notesdatatemp/

RUN /domino_docker_entrypoint.sh

# Copy the app executable and support scripts
USER root
COPY appinstall /local/appinstall
RUN chmod +x /local/appinstall/install.sh && /local/appinstall/install.sh

# Back to notes user for the Domino entrypoint
USER notes

But here I hit another distinction between docker build and the automated mechanisms: that RUN /domino_docker_entrypoint.sh line would execute and get to the point where it emits "Application configuration completed successfully", but then would not actually exit properly. Again, I'm not sure why: the JSON file tells the server to not launch after configuration, and indeed it doesn't, but the script just doesn't relinquish control - but does when built from the command line.

So I rolled up my sleeves and wrote a wrapper script to kill the process when it's known to be done:

#!/usr/bin/env bash
# Run the one-touch setup in the background, capturing its output
/domino_docker_entrypoint.sh > /tmp/domsetup &
# Exit once the known completion message shows up in that output
until tail -f /tmp/domsetup | grep -q "Application configuration completed successfully"; do sleep 1; done

This runs the setup process in the background, redirecting its output to a temp file, then watches that file for the known completion text and exits once it appears. It's a little janky in multiple ways and for multiple reasons, but it works. That allows the image build to progress normally to the next step in all environments, caching the results of the initial server setup. I replaced the RUN /domino_docker_entrypoint.sh line above with copying in and executing this script, and all was well.

Post-Install Agent

After the "appinstall" step, I have the peculiar need to then run code in a Notes context to fiddle with some components that aren't configured earlier. For now, I've settled on writing an agent that runs on server start, then signing and enabling it in the domino-config.json file:

{
    "action": "create",
    "filePath": "postinstall.nsf",
    "title": "Post Install",
    "templatePath": "/local/notesdatatemp/postinstall.ntf",
    "signUsingAdminp": true,
    "agents": [
        {
            "name": "PostInstall",
            "action": "sign"
        },
        {
            "name": "PostInstall",
            "action": "enable"
        }
    ]
}

Originally, I had this agent emit the text "postinstall done" when it finished, so that the Testcontainers runtime could look for that to know when it's safe to execute tests. However, this added a good while to the launch stage: at this point, launching the container has to wait on final post-install tasks from Domino, then signing the DB with adminp, then actually executing the agent. This added about a minute to the test pre-run time, and thus was a prime target for further caching.
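
For reference, that original wait was along these lines - a hedged sketch using Testcontainers' stock log-message strategy, with an illustrative image name rather than the project's real one:

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.utility.DockerImageName;

// Illustrative image name; the real one comes from the project's Dockerfile build
GenericContainer<?> domino = new GenericContainer<>(DockerImageName.parse("domino-app-it:1.0.0"))
    .withExposedPorts(80)
    // Hold container startup until the agent's marker text appears once in the console log
    .waitingFor(Wait.forLogMessage(".*postinstall done.*", 1));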

So I altered the agent to check whether it actually needs to do its work and, if so, to shut down the server when it's done:

Session session = getSession();

if(needToDoWork()) {
    doWork();
    session.sendConsoleCommand(session.getServerName(), "q");
}

Then, I altered my Dockerfile to amend the end bit:

# ...snip
# Back to notes user for the Domino entrypoint
USER notes

# Run the server once to wait for postinstall to execute, then shut down
WORKDIR /local/notesdata
RUN /opt/hcl/domino/bin/server

# Now let the entrypoint launch again, without requiring further configuration

Again: janky, but it works. Now, all of the setup stages are cached on subsequent runs.

Building the Image in Testcontainers

In my original post, I showed using dockerfile-maven-plugin to build the image just before executing the tests. This works, but it complicated the pom.xml a bit and meant that running the tests in an IDE first required a Maven build. Not the end of the world, but not ideal.

Fortunately, Testcontainers can also build images. That meant that, rather than building the image in Maven and then re-using the same-named container in Java, I could do it all Java-side. To do this, I created a subclass of GenericContainer to centralize the configuration of the container:

package example;

import java.io.IOException;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.HttpWaitStrategy;
import org.testcontainers.images.builder.ImageFromDockerfile;

public class DominoAppContainer extends GenericContainer<DominoAppContainer> {
    public static class DominoAppImage extends ImageFromDockerfile {
        public DominoAppImage() {
            super("example-app-it-testcontainers:1.0.0", false);
            withFileFromClasspath("Dockerfile", "/docker/Dockerfile");
            withFileFromClasspath("domino-config.json", "/docker/domino-config.json");
            withFileFromClasspath("firstrun.sh", "/docker/firstrun.sh");
            withFileFromClasspath("notesdata/exampledata.ntf", "/docker/notesdata/exampledata.ntf");
            withFileFromClasspath("notesdata/postinstall.ntf", "/docker/notesdata/postinstall.ntf");
            withFileFromClasspath("appinstall/install.sh", "/docker/appinstall/install.sh");
            withFileFromClasspath("appinstall/appinstall.jar", "/docker/appinstall/appinstall.jar");
        }
    }

    public DominoAppContainer() {
        super(new DominoAppImage());

        withImagePullPolicy(imageName -> false);
        withExposedPorts(80);
        waitingFor(
            new HttpWaitStrategy()
            .forPort(80)
            .forStatusCodeMatching(code -> code >= 200 && code < 400)
        );
        withLogConsumer(frame -> {
            switch (frame.getType()) {
                case STDERR:
                    try {
                        System.err.write(frame.getBytes());
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                    break;
                case STDOUT:
                    try {
                        System.out.write(frame.getBytes());
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                    break;
                default:
                case END:
                    break;
            }
        });
    }
}

It's a little fiddlier in that now I need to enumerate each classpath resource to copy in, but that also makes it all the more portable. I removed the dockerfile-maven-plugin execution from the pom.xml and switched to this instead. Since I name the image and tell it to not auto-delete on completion, this retained the desired caching behavior.

Conclusion

Overall, this whole process brought the test pre-launch time (after the first run) down from 3-5 minutes to about 20 seconds while also reducing the split between the Maven config and the Java code. That's much more bearable when making small tweaks and re-running the suite, and it makes it a bit easier to follow what's going on.

New Adventures in Administration: Docker Compose and One-Touch Setup

Sat Dec 04 14:23:58 EST 2021

Tags: admin docker

As I do from time to time, this weekend I dipped a bit into the more server-admin-focused side of Domino development. This time, I had set out to improve the deployment experience for one of my client's apps. This is the sprawling multi-NSF-plus-OSGi one, and I've long been blessed to not have to worry about actually deploying anything. However, I knew in the back of my head the whole time that it must be fairly time-consuming between installing Domino, getting all the Java code in place, deploying the DBs, and configuring all the documents that associate them.

I had also had a chance this past week to give Docker Compose a swing. Though I'd certainly known of it for a good while, I hadn't actually used it for anything - all my Docker scripting involved really fiddly operations that I ended up wrapping in Bash scripts to launch a single container anyway, so Compose didn't bring much to the table. However, using it to tie together the process of launching a Postgres server with pre-populated user info and schema scripts whetted my appetite.

So today I set out to tinker with the Domino side of things.

Deploying To The Domino Data Directory

Some parts of this were the same as what I've done before: I wanted to deploy some JARs to jvm/lib/ext for legacy purposes and then drop ".java.policy" into the notes user's home directory. That was accomplished easily enough with some COPY operations in the Dockerfile:

COPY program/ /opt/hcl/domino/notes/latest/linux/
COPY --chown=notes java.policy /local/notes/.java.policy

What wouldn't be accomplished so easily, though, would be getting files into the data directory: the app's NTFs and the OSGi plugins. This is because of the way the Domino Docker image works, where it deploys the contents of a ZIP to /local/notesdata on launch, in order to let you work properly with mounted volumes. Because of this, I couldn't just copy the files there in the Dockerfile, since it would conflict with the volume mount; however, I still wanted to do this in an automated way.

This was my impetus to switch away from the official Docker images on Flexnet and over to the "community-ish" Domino-on-Docker build script maintained at https://github.com/IBM/domino-docker. This script is generally more feature-rich than the official one, and one feature in particular caught my eye: the ability to add your own ZIP file (or URL, I believe) to deploy to the data directory at first launch.

So I downloaded the repo, built the image, bundled the OSGi plugins and NTFs into a ZIP, and altered my Dockerfile:

FROM hclcom/domino:12.0.0

COPY program/ /opt/hcl/domino/notes/latest/linux/
COPY --chown=notes java.policy /local/notes/.java.policy
COPY --chown=notes data.zip /tmp/

Then, I set the environment variable in my "docker-compose.yaml" file: CustomNotesdataZip=/tmp/data.zip. That worked like a charm.

One-Touch Setup

Next up, I wanted to automate the initial server setup. I knew that Domino had been gaining some automated setup capabilities recently, and that they really came of age in V12. What I hadn't appreciated until today is how much capability this system has. I'd figured it would let you configure the server either as a new domain or as an additional server and create an admin user, but I hadn't noticed that it also has the ability to declaratively create and modify databases and documents. Looking over the sample file that Daniel Nashed put up, I realized that this would cover essentially all of my remaining needs.

The file there was most of what I needed: other than tweaking the server and user names, the main things I'd want to change in the basic config were to set HTTP_AllowAnonymous/HTTP_SSLAnonymous to "1" and also add a line to set OnBehalfOfInvokerLst to "LocalDomainAdmins" (which allows XPages to run properly).
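
In the one-touch JSON, those tweaks live in the notesINI block under serverSetup - something like this sketch:

"notesINI": {
    "HTTP_AllowAnonymous": "1",
    "HTTP_SSLAnonymous": "1",
    "OnBehalfOfInvokerLst": "LocalDomainAdmins"
}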

Then, I got to the meat of the app deployment. That's all done in the $.appConfiguration.databases object near the bottom, and I set about adding entries to deploy each of the NTFs I'd copied to the data directory, along with the required documents to tie them together. This also went smoothly.

The Final Scripts

The final form of the setup is pretty clean. The Dockerfile is very similar to the above, with just an added line to copy in the config file:

FROM hclcom/domino:12.0.0

COPY program/ /opt/hcl/domino/notes/latest/linux/
COPY --chown=notes java.policy /local/notes/.java.policy
COPY --chown=notes domino-config.json /tmp/
COPY --chown=notes data.zip /tmp/

The docker-compose.yaml file is longer, but I think pretty explicable. It maps the ports, sets up some volumes for the various persistent-data portions of Domino, and configures the environment variables for setup:

services:
  clientapp:
    build: .
    ports:
      - "1352:1352"
      - "80:80"
      - "443:443"
    volumes:
      - data:/local/notesdata
      - ft:/local/ft
      - nif:/local/nif
      - translog:/local/translog
      - daos:/local/daos
    restart: always
    environment:
      - LANG=en_US.UTF-8
      - CustomNotesdataZip=/tmp/data.zip
      - SetupAutoConfigure=1
      - SetupAutoConfigureParams=/tmp/domino-config.json
volumes:
  data: {}
  ft: {}
  nif: {}
  translog: {}
  daos: {}

Miscellaneous Notes

In doing this, I came across a few things that are worth noting for anyone diving into it clean as I was:

  • Docker Compose (intentionally) doesn't rebuild images on docker compose up when they already exist. Since I was changing all sorts of stuff, I switched to docker compose build && docker compose up.
  • Errors during server autoconfig don't show up in the console output from docker compose up: if the server doesn't come up like you expect, check in "/local/notesdata/IBM_TECHNICAL_SUPPORT/autoconfigure.log". It's easy for a problem to gum up the whole works, such as when using "computeWithForm": true on a created document throws an exception.
  • Daniel's example autoconf file above places the admin user ID in "/local/notesdata/domino/html/admin.id", so it will be accessible via http://servername/admin.id after the server comes up. Alternatively, you could snag it by copying it via Docker commands.
  • This really drives home the desperate need for a full web-based admin app for Domino.

All in all, this was a delight to work with. Next, I should be able to make a script that generates the config JSON for me based on all the app's NTFs, and then include that whole thing as part of the Maven build in a distribution ZIP. That will be pretty neat.

Adding Selenium Browser Tests to My Testcontainers Setup

Tue Jul 20 11:20:42 EDT 2021

  1. Tinkering With Testcontainers for Domino-based Web Apps
  2. Adding Selenium Browser Tests to My Testcontainers Setup
  3. Building a Full Domino Image for JUnit Tests

Yesterday, I talked about how I dove into Testcontainers for my app-testing needs. Today, I decided to use this to close another bit of long-open business: automated browser testing. I've been very much a dilettante when it comes to that, but we have a handful of browser-ish tests just to make sure the login page, the main page, and some utility pages load up and include expected content, and those can serve as a foundation for much more.

Background

In general, when you think "automated browser testing", that means Selenium. As a toolkit, Selenium has hooks for the browsers you want and has essentially universal support, working smoothly in Java with JUnit. However, the actual act of loading a real browser is miserable, mostly on account of needing to install the browser and point to it programmatically - doable, but yet another piece of system-specific configuration that I'd much, much rather avoid in my automated builds.

Accordingly, and because my needs have been simple, I've used HtmlUnit, which is a portable Java browser-like library that does the yeoman's work of letting you perform basic Selenium tests without having to configure actual native OS installations. It's neat, imposes basically no strictures on your workflow, and I recommend it for lots of uses. Still, it's not the same as real browsers, and I had to do things like disable JavaScript processing to avoid it tripping up on some funky JS that full-grown browsers can deal with.
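
For illustration, that sort of HtmlUnit configuration is just a constructor flag - this is a sketch, not the app's actual test bootstrap:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;
import com.gargoylesoftware.htmlunit.BrowserVersion;

// The second argument toggles JavaScript support; false keeps HtmlUnit from
// tripping over scripts that only full browsers handle well
WebDriver driver = new HtmlUnitDriver(BrowserVersion.FIREFOX, false);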

Enter Webdriver Containers

So, now that I had Testcontainers configured to run the web app, my eye turned to Webdriver Containers, an ancillary capability of Testcontainers that lets you run these full-fledged browsers via their Docker images, and even has cool abilities like letting you record the screen interactions over VNC. Portability and full production representation? Sign me up.
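
(Jumping ahead a bit: the recording part is a single call on the browser-container definition shown later - a hedged sketch, with a placeholder output directory:)

import java.io.File;
import org.openqa.selenium.chrome.ChromeOptions;
import org.testcontainers.containers.BrowserWebDriverContainer;
import org.testcontainers.containers.BrowserWebDriverContainer.VncRecordingMode;

// Record the VNC session for failing tests into a local directory (placeholder path)
BrowserWebDriverContainer<?> chrome = new BrowserWebDriverContainer<>()
    .withCapabilities(new ChromeOptions())
    .withRecordingMode(VncRecordingMode.RECORD_FAILING, new File("target/vnc-recordings"));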

The initial setup was pretty easy, just adding some dependencies for the Selenium remote driver (replacing my HtmlUnit driver) and the Testcontainers Selenium module:

<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-remote-driver</artifactId>
    <version>3.141.59</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>selenium</artifactId>
    <version>1.15.3</version>
    <scope>test</scope>
</dependency>

Programmatic Container Setup

After that, my next task was to configure the containers. I'll skip over some of my troubleshooting and just describe where I ended up. Basically, since both the webapp and browsers are in Docker containers, I had to coordinate how they communicate with each other. There seem to be a few ways to do this, but the route I went was to build a Docker network in my container orchestration class, bind all of the containers to it, and then reference the app via a network alias.

With that addition and some containers for Chrome and Firefox, the class looks more like this:

public enum AppTestContainers {
    instance;
    
    public final Network network = Network.builder()
        .driver("bridge") //$NON-NLS-1$
        .build();
    public final GenericContainer<?> webapp;
    public final BrowserWebDriverContainer<?> chrome;
    public final BrowserWebDriverContainer<?> firefox;
    
    @SuppressWarnings("resource")
    private AppTestContainers() {
        webapp = new GenericContainer<>(DockerImageName.parse("client-webapp-test:1.0.0-SNAPSHOT")) //$NON-NLS-1$
                .withExposedPorts(8080)
                .withNetwork(network)
                .withNetworkAliases("client-webapp-test"); //$NON-NLS-1$
        
        chrome = new BrowserWebDriverContainer<>()
            .withCapabilities(new ChromeOptions())
            .withNetwork(network);
        firefox = new BrowserWebDriverContainer<>()
            .withCapabilities(new FirefoxOptions())
            .withNetwork(network);

        webapp.start();
        chrome.start();
        firefox.start();
        
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            webapp.close();
            chrome.close();
            firefox.close();
            network.close();
        }));
    }
}

Now that they're all on the same Docker network, the browser containers are able to refer to the webapp like "http://client-webapp-test:8080".

Adding Parameterized Tests

The handful of UI tests I'd set up previously had lines like WebDriver driver = new HtmlUnitDriver(BrowserVersion.FIREFOX, true) to create their WebDriver instance, but now I want to run the tests with both real Firefox and real Chrome. Since I want to test that the app works consistently, I'll want the same tests across browsers - and that's a call for parameterized tests in JUnit.

The way parameterized tests work in JUnit is that you declare a test as being parameterized, and then feed it your parameters via one of a number of mechanisms - "all values of an enum", "this array of strings", and a handful of others. The one to use here is to make a class implementing ArgumentsProvider and configure that:

import java.util.stream.Stream;

import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.ArgumentsProvider;
import org.testcontainers.containers.BrowserWebDriverContainer;

public class BrowserArgumentsProvider implements ArgumentsProvider {
    @Override
    public Stream<? extends Arguments> provideArguments(ExtensionContext context) throws Exception {
        return Stream.of(
            AppTestContainers.instance.chrome,
            AppTestContainers.instance.firefox
        )
        .map(BrowserWebDriverContainer::getWebDriver)
        .map(Arguments::of);
    }
}

This class will take my configured browser containers, get the WebDriver instance for each, and provide that as parameters to a test method. In turn, the test method looks like this:

@ParameterizedTest
@ArgumentsSource(BrowserArgumentsProvider.class)
public void testDefaultLoginPage(WebDriver driver) {
    driver.get(getContainerRootUrl());
    assertEquals("Expected App Title", driver.getTitle());

    // Other tests follow
}

Now, JUnit will run the test twice, once for each browser, and I can add any other configurations I want smoothly.

Minor Gotcha: Container vs. Non-Container URLs

Though some of my tests were using Selenium already, most of them just use the JAX-RS REST client from the testing JVM directly, which is not containerized in this setup. That meant that I had to start worrying about the distinction between the URLs - the containers can't access "localhost:(some random port)", while the JUnit JVM can't access "client-webapp-test:8080".
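
Concretely, the distinction boils down to two base URLs. A hedged sketch of what such helpers could look like (the method names are mine, and the alias and ports mirror the container configuration above):

// Host-visible URL: Testcontainers maps the container's port 8080 to a random local port
public String getRootUrl() {
    GenericContainer<?> webapp = AppTestContainers.instance.webapp;
    return "http://" + webapp.getHost() + ":" + webapp.getMappedPort(8080);
}

// Container-visible URL: other containers on the shared network use the alias and fixed port
public String getContainerRootUrl() {
    return "http://client-webapp-test:8080";
}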

For the most part, that's not too tough: I added some more utility methods named to suit and changed the UI tests to use those. However, there was one tricky bit: one of the UI tests uses Selenium to fetch the page and process the HTML, but then uses the JAX-RS client to make sure that a bunch of references on the page resolve to non-404 resources properly. Stuff like this:

driver.findElements(By.xpath("//link[@rel='stylesheet']"))
    .stream()
    .map(link -> link.getAttribute("href"))
    .map(href -> rootUri.resolve(href))
    .forEach(uri -> checkUrlWorks(uri, jaxRsClient));

(It's highly likely that there's a better way to do this in Selenium, but hey, it's still a useful example.)

The trouble with the above was that the URLs coming out of Selenium included the full container URL, not the host-accessible one.

Fortunately, that's not too tricky - it's really just string substitution, since the host and container URLs are known at runtime and won't conflict with anything. So I added a "decontainerize" method and run my URLs through it in the stream:

public URI decontainerize(URI uri) {
    String url = uri.toString();
    if(url.startsWith(getContainerRootUrl())) {
        return URI.create(getRootUrl() + url.substring(getContainerRootUrl().length()));
    } else {
        return uri;
    }
}

// later

driver.findElements(By.xpath("//link[@rel='stylesheet']"))
    .stream()
    .map(link -> link.getAttribute("href"))
    .map(href -> rootUri.resolve(href))
    .map(this::decontainerize)
    .forEach(uri -> checkUrlWorks(uri, jaxRsClient));

With that, all the results came back green again.

Overall, this was a little fiddly, but mostly in a way that helped me learn a little bit more about how this sort of thing works, and now I'm prepped to do real, portable full test suites. Neat!

Tinkering With Testcontainers for Domino-based Web Apps

Mon Jul 19 12:46:48 EDT 2021

  1. Tinkering With Testcontainers for Domino-based Web Apps
  2. Adding Selenium Browser Tests to My Testcontainers Setup
  3. Building a Full Domino Image for JUnit Tests

(Fair warning: this post is not about testing, say, a normal XPages app via Testcontainers. One could get there on this path, but this has a lot of prerequisites that are almost specific to me alone.)

For a while now, I've seen the Testcontainers project hanging around in my periphery. The idea of the project is that it uses Docker to allow you to programmatically load services needed by your automated test suites, rather than having to have the servers running separately. This is a clean match for something like a WAR-based Java webapp that uses, say, Postgres as its backend database: with this, you can spin up a Postgres image from the public repository, fill it with test data, run the suite, and tear it down cleanly.

However, this is generally not a proper match for Domino. Since the code you're testing almost always directly uses Domino API calls (from Notes.jar or another source) and that means having a local Notes runtime initialized in the test code, it's no help to have a separate container somewhere. So, instead, I've been left watching from afar, seeing all the kids having fun in a playground I never got to go to.

The Change

This situation has shifted a bit for my needs, though, thanks to secondary effects of changes I've made in one of my client projects. This is the one where I do all the bells and whistles of my tinkering over the years: XPages outside Domino, building a bunch of NSFs with Jenkins, and so forth.

For a while, I had been building test suites run using tycho-surefire-plugin, but somewhat recently moved the project to maven-bundle-plugin to reap the benefits of that. One drawback, though, was that the test suites became much more difficult to run, in large part due to the restrictions on environment propagation in macOS.

Initially, I just let them wither, but eventually I started to rebuild the test suites. The app had REST services for a while, but they've grown in prominence since we've started gradually replacing XPages-based components with Angular apps. And REST services, fortunately, are best tested at a remove.

First Pass: liberty-maven-plugin

The first way I started writing test suites for the REST services was by using liberty-maven-plugin, which is a general Swiss army knife for working with Liberty during Maven builds, but has particular support for starting a server before tests and terminating it after them. So I set up a config that boots up a Liberty server that can then initialize using a configured Notes runtime, and I started writing tests against it using the Jakarta REST client API and a bit of HtmlUnit.

To its credit, this setup did its job swimmingly. It still has the down side that you have to balance teacups to get a Notes or Domino runtime configured, but, once you do, it'll work nicely.

Next Pass: Testcontainers

Still, it'd be all the better to avoid the need to have a local Notes or Domino setup to run these tests. There's still going to be some weirdness due to things like having to have the non-public Domino Docker image pre-loaded and having an ID file and notes.ini somewhere, but that can be overcome. Plus, I've already overcome those for the CI servers I have set up with each build: I have some dev IDs in the repository and, for each build, Jenkins constructs a Docker image housing the webapp and starts a container using a technique similar to what I described a few months back to run a Liberty app with Domino stuff brought in for support.

So I decided to try adapting that to work with Testcontainers. Instead of my Maven config constructing and launching a Liberty server, I would instead build a Docker image that would then be loaded in Java with the Testcontainers library. In the case of the CI server scripts, I used Bash to copy files into a scratch directory to avoid having to include the whole repo in the Docker build context (prohibitive on the Mac particularly), and so I sought to mirror that in Maven as well.

Building the App Image in Maven

To accomplish this goal, I used maven-resources-plugin to copy the app and support files to a scratch directory, and then com.spotify:dockerfile-maven-plugin to build the Docker image:

<!-- snip -->
    <!-- Copy Docker support resources into scratch space -->
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-resources-plugin</artifactId>
        <version>3.2.0</version>
        <executions>
            <execution>
                <?m2e ignore?>
                <id>prepare-docker-scratch</id>
                <goals>
                    <goal>copy-resources</goal>
                </goals>
                <phase>pre-integration-test</phase>
                <configuration>
                    <outputDirectory>${project.build.directory}/dockerscratch</outputDirectory>
                    <resources>
                        <!-- Dockerfile to build -->
                        <resource>
                            <directory>${project.basedir}</directory>
                            <includes>
                                <include>testcontainer.Dockerfile</include>
                            </includes>
                        </resource>
                        <!-- The just-built WAR -->
                        <resource>
                            <directory>${project.build.directory}</directory>
                            <includes>
                                <include>client-webapp.war</include>
                            </includes>
                        </resource>
                        <!-- Support files from the main repo Docker config -->
                        <resource>
                            <directory>${project.basedir}/../../../docker/support</directory>
                            <includes>
                                <!-- Contains Liberty server.xml, etc. -->
                                <include>liberty/client-app-a/**</include>
                                <!-- Contains a Domino server.id, names.nsf, and notes.ini -->
                                <include>notesdata-ciserver/**</include>
                            </includes>
                        </resource>
                    </resources>
                </configuration>
            </execution>
        </executions>
    </plugin>
    <!-- Build a Docker image to be used by Testcontainers -->
    <plugin>
        <groupId>com.spotify</groupId>
        <artifactId>dockerfile-maven-plugin</artifactId>
        <version>1.4.13</version>
        <executions>
            <execution>
                <?m2e ignore?>
                <id>build-webapp-image</id>
                <goals>
                    <goal>build</goal>
                </goals>
                <phase>pre-integration-test</phase>
                <configuration>
                    <repository>client-webapp-test</repository>
                    <tag>${project.version}</tag>
                    <dockerfile>${project.build.directory}/dockerscratch/testcontainer.Dockerfile</dockerfile>
                    <contextDirectory>${project.build.directory}/dockerscratch</contextDirectory>
                    <!-- Don't attempt to pull Domino images -->
                    <pullNewerImage>false</pullNewerImage>
                </configuration>
            </execution>
        </executions>
    </plugin>
<!-- snip -->

The Dockerfile itself is basically what I had in the afore-linked post, minus the special ENTRYPOINT stuff.

Of note in this config is <pullNewerImage>false</pullNewerImage> in the dockerfile-maven-plugin configuration. Without that set, the plugin would attempt to look for a Domino image on the public Dockerhub and then fail because it's unavailable. With that behavior disabled, it will just use the one locally loaded.

Configuring the Tests

Now that I had that configured, it was time to adjust the tests to suit. Previously, I had been using system properties passed from the Maven environment into the test runner to identify the Liberty server, but now the container initialization will happen in code. Since this app is pretty heavyweight, I didn't want to do what most of the Testcontainers examples show, which is to let the Testcontainers JUnit hooks spawn and terminate containers for each test. Instead, I set up a centralized class to launch the container once:

package it.com.example;

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

public enum AppTestContainers {
    instance;
    
    public final GenericContainer<?> webapp;
    
    @SuppressWarnings("resource")
    private AppTestContainers() {
        webapp = new GenericContainer<>(DockerImageName.parse("client-webapp-test:1.0.0-SNAPSHOT")) //$NON-NLS-1$
                .withExposedPorts(8080);
        webapp.start();
    }
}

With this setup, there will only be one instance of the container launched for the whole test suite, and then Testcontainers will shut it down for me at the end. I can also use the normal mechanisms from the Testcontainers docs to get the actual name and port it ended up mapped to:

    public String getServicesBaseUrl() {
        String host = AppTestContainers.instance.webapp.getHost();
        int port = AppTestContainers.instance.webapp.getFirstMappedPort();
        String context = "clientapp";
        return AppPathUtil.concat("http://" + host + ":" + port, context, ServicesUtil.DEFAULT_JAXRS_ROOT);
    }

Once I did that, all the tests that had previously been running against a liberty-maven-plugin-run server now worked against the Docker container, and I no longer have any dependency on the local environment actually having Notes or Domino fully installed. Neat!

A Catch: Running on my Jenkins Server

Since the whole point of Docker is to make things reproducible across environments, I was flush with confidence when I checked these changes in and pushed them up to the repo. I watched with bated breath as Jenkins picked up the change and started to build. My heart sank, though, when it got to the integration test suite and it failed with a bunch of:

Jul 19, 2021 11:02:10 AM org.testcontainers.utility.ResourceReaper lambda$null$1
WARNING: Can not connect to Ryuk at localhost:49158
java.net.ConnectException: Connection refused (Connection refused)
    at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
    at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
    at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
    at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.base/java.net.Socket.connect(Socket.java:609)
    at org.testcontainers.utility.ResourceReaper.lambda$null$1(ResourceReaper.java:163)
    at org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
    at org.testcontainers.utility.ResourceReaper.lambda$start$2(ResourceReaper.java:159)
    at java.base/java.lang.Thread.run(Thread.java:829)

What the heck? Well, I had noticed in my prep that "Ryuk" is the name of something Testcontainers uses in its orchestration work, and is what allowed me to spawn the container manually above without explicitly terminating it. I looked around for a while and saw that a lot of people had reported similar trouble over the years, but usually it was due to some quirk in a specific version of Docker on Windows or macOS, which was not the case here. I did, though, find that Bitbucket Pipelines tripped over this at one point, and it seemed to be due to their switch of using safer user namespaces. Though it sounds like newer versions of Testcontainers fixed that, I figured it's pretty likely that I was hitting a variant of it, as I do indeed use namespace remapping.

So I tweaked my maven-failsafe-plugin configuration to set the TESTCONTAINERS_RYUK_DISABLED environment variable to true and, to be safe, added a shutdown hook at the end of my AppTestContainers init method:

Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    webapp.close();
}));
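
For reference, the environment-variable side of that looked something like the following - just the relevant chunk, assuming the standard environmentVariables configuration block:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <configuration>
        <environmentVariables>
            <!-- Tell Testcontainers not to launch its Ryuk resource-reaper container -->
            <TESTCONTAINERS_RYUK_DISABLED>true</TESTCONTAINERS_RYUK_DISABLED>
        </environmentVariables>
    </configuration>
</plugin>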

Now, Testcontainers doesn't use its Ryuk container, but the actual app container loads up just fine and is destroyed at the end of the suite. Perfect! If all continues to go well, this will mean that it'll be one step easier for other devs to run the test suites regardless of their local setup, which is always a thorn in the side of Domino-realm testing.

Closing: What About Testing Domino Apps?

I mentioned in my disclaimer at the start that this is specifically about testing apps that use a Domino runtime, not apps on Domino. Still, I bet you could do this to test a Domino app that you deploy as an NSF and/or OSGi plugins, and I may do that myself down the line so that the test suite even-more-closely matches what is actually running in production. You could adjust the maven-resources-plugin config above (or use maven-dependency-plugin) to bring in NSFs built earlier in the build with NSF ODP Tooling as well as OSGi update sites and then have your Dockerfile copy those into the Domino data directory and the workspace/applications/eclipse directory. Similarly, if you had a Domino addin that you launch as a task and which then itself listens on a port, you could do the same there.

It's still not as convenient as being able to just easily run Domino API tests without all the scaffolding, and it implies a lot of structure that makes these more firmly "integration" than "unit" tests, but that's still a powerful capability to have.

Tinkering With Cross-Container Domino Addins

Sun May 16 13:35:54 EDT 2021

Tags: docker domino

A good chunk of my work lately involves running distinct processes with a Domino runtime, either run from Domino or standalone for development or CI use. Something that had been percolating in the back of my mind was another step in this: running these "addin-ish" programs in Docker in a separate container from Domino, but participating in that active Domino runtime.

Domino addins in general are really just separate processes and, while they gain some special properties when run via load foo on the console or OSLoadProgram in the C API, that's not a hard requirement to getting a lot of things working.

I figured I could get this working and, armed with basically no knowledge about how this would work, I set out to try it.

Scaffolding

My working project at hand is a webapp run with the standard open-liberty Docker images. Though I'm using that as a starting point, I had to bring in the Notes runtime. Whether you use the official Domino Docker images from Flexnet or build your own, the only true requirement is that it match the version used in the running server, since libnotes does a version check on init. My Dockerfile looks like this:

FROM --platform=linux/amd64 open-liberty:beta

USER root
RUN useradd -u 1000 notes

RUN chown -R notes /opt/ol
RUN chown -R notes /logs

# Bring in the Domino runtime
COPY --from=domino-docker:V1200_03252021prod /opt/hcl/domino/notes/latest/linux /opt/hcl/domino/notes/latest/linux
COPY --from=domino-docker:V1200_03252021prod /local/notesdata /local/notesdata

# Bring in the Liberty app and configuration
COPY --chown=notes:users /target/jnx-example-webapp.war /apps/
COPY --chown=notes:users config/* /config/
COPY --chown=notes:users exec.sh /opt/
RUN chmod +x /opt/exec.sh

USER notes

ENV LD_LIBRARY_PATH "/opt/hcl/domino/notes/latest/linux"
ENV NotesINI "/local/notesdata/notes.ini"
ENV Notes_ExecDirectory "/opt/hcl/domino/notes/latest/linux"
ENV Directory "/local/notesdata"
ENV PATH="${PATH}:/opt/hcl/domino/notes/latest/linux:/opt/hcl/domino/notes/latest/linux/res/C"

EXPOSE 8080 8443

ENTRYPOINT ["/opt/exec.sh"]

I'll get to the "exec.sh" business later, but the pertinent parts now are:

  • Adding a notes user (to avoid permissions trouble with the data dir, if it comes up)
  • Tweaking the Liberty container's ownership to account for this
  • Bringing in the Domino runtime
  • Copying in my WAR file from the project and associated config files (common for Liberty containers)
  • Setting environment variables to tell the app how to init

So far, that's largely the same as how I run standalone Notes-runtime-enabled apps that don't talk to Domino. The only main difference is that, instead of copying in an ID and notes.ini, I instead mount the data volume to this container as I do with the main Domino one.

Shared Memory

The big new hurdle here is getting the separate apps to participate in Domino's shared memory pool. Now, going in, I had a very vague notion of what shared memory is and an even vaguer one of how it works. Certainly, the name is straightforward, and I know it in Domino's case mostly as "the thing that stops Notes from launching after a crash sometimes", but I'd need to figure out some more to get this working. Is it entirely a filesystem thing, as the Notes problem implies? Is it an OS-level thing with true memory? Well, both, apparently.

Fortunately, Docker has this covered: the --ipc flag for docker run. It has two main modes: you can participate in the host's IPC pool (essentially like what a normal, non-contained process does) or join another container specifically. I opted for the latter, which involved changing the launch arguments for both the Domino container and the app container.

For Domino, I added --ipc=shareable to the argument list, basically registering it as an available host for other containers to glom on to.

For the separate app, I added --ipc=container:domino, where "domino" is the name of the Domino container.

With those in place, the "addin" process was able to see Domino and do addin-type stuff, like adding a status line and calling AddinLogMessageText to display a message on the server's console.

Great: this proved that it's possible. However, there were still a few show-stopping problems to overcome.

PIDs

From what I gather, Notes keeps track of processes sharing its memory by their reported process IDs. If you have a process that joins the pool and then exits (maybe only if it exits abruptly; I'm not sure) and then tries to rejoin with the same PID, it will fail on init with a complaint that the PID is already registered.

Normally, this isn't a problem, as the OS hands out distinct PIDs all the time. This is trouble with Docker, though: by default, in general, the direct process in a Docker container sees itself as PID 1, and will start as such each time. In the Domino container, its PID 1 is "start.sh", and that's still going, and it's not going to hear otherwise from some other process calling itself the same.

Fortunately, this was a quick fix: Docker's --pid option. Though the documentation for this is uncharacteristically slight, it turns out that the syntax for my needs is the same as the last option. Thus: --pid=container:domino. Once I set that, the running app got a distinct PID from the pool. That was pleasantly simple.

SIGTERM

And now we come to the toughest problem. As it turns out, dealing with SIGTERM - the signal sent by docker stop - is a whole big deal in the Java world. I banged my head at this for a while, with most of the posts I've found being not quite applicable, not working at all for me, or technically working but only in an unsustainable way.

For whatever reason, the Open Liberty Docker image doesn't handle this terribly well - when given a SIGTERM order, it doesn't stop the servlet context before dying, which means the contextDestroyed method in my ServletContextListener (such as this one) doesn't fire.
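
(For illustration, a minimal listener of that sort might look like the sketch below; the real one linked above does more, and the init/term calls here are stand-ins for whatever runtime bootstrapping the app actually performs.)

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Tie the Notes runtime to the webapp lifecycle, so a clean context shutdown
// also cleanly ends the process's shared-memory participation
@WebListener
public class NotesRuntimeListener implements ServletContextListener {
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Stand-in for the app's actual Notes initialization
        lotus.domino.NotesThread.sinitThread();
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // If this never fires, Domino treats the process as having died abruptly
        lotus.domino.NotesThread.stermThread();
    }
}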

In many webapp cases, this is fine, but Domino is extremely finicky when it comes to memory-sharing processes needing to exit cleanly. If a process calls NotesInit but doesn't properly call NotesTerm (and close all its Notes-enabled threads), the server panics and dies. This is... not great behavior, but it is what it is, and I needed to figure out how to work with it. Unfortunately, the Liberty Docker container wasn't doing me any favors.

One option is to use Runtime.getRuntime().addShutdownHook(...). This lets you specify a Thread to execute when a SIGTERM is received, and it can work in some cases. It's a little shaky sometimes, though, and it's bad form to riddle otherwise-normal webapps with such things: ideally, even webapps that you intend to run in a container should be written such that they can participate in a normal multi-app environment.

What I ended up settling on was based on this blog post, which (like a number of others) uses a shell script as the main entrypoint. That's a common idiom in general, and Open Liberty's image does it, but its script doesn't account for this, apparently. I tweaked that post's shell script to use the Liberty start/stop commands and ended up with this:

#!/usr/bin/env bash
set -x

term_handler() {
  /opt/ol/helpers/runtime/docker-server.sh /opt/ol/wlp/bin/server stop
  exit 143; # 128 + 15 -- SIGTERM
}

trap 'kill ${!}; term_handler' SIGTERM

/opt/ol/helpers/runtime/docker-server.sh /opt/ol/wlp/bin/server start defaultServer

# echo the Liberty console
tail -f /logs/console.log &

while true
do
  tail -f /dev/null & wait ${!}
done

Now, when I issue a docker stop to the container, the script issues an orderly shutdown of the Liberty instance, which properly calls the contextDestroyed method and allows my code to close down its ExecutorService and call NotesTerm. Better still, Domino keeps running without crashing!

Conclusion

My final docker run scripts ended up being:

Domino

docker run --name domino \
	-d \
	-p 1352:1352 \
	-v notesdata:/local/notesdata \
	-v notesmisc:/local/notesmisc \
	--cap-add=SYS_PTRACE \
	--ipc=shareable \
	--restart=always \
	iksg-domino-12beta3

Webapp

docker build . -t example-webapp
docker run --name example-webapp \
	-it \
	--rm \
	-p 8080:8080 \
	-v notesdata:/local/notesdata \
	-v notesmisc:/local/notesmisc \
	--ipc=container:domino \
	--pid=container:domino \
	example-webapp

(Here, the webapp is run to be temporary and tied to the console, hence -it, --rm, and no -d)

One nice thing to note is that there's nothing webapp- or Java-specific here. One of the nice things about Docker is that it removes a lot of the hurdles to running whatever-the-heck type of program you want, so long as there's a Linux Docker image for it. I just happen to default to Java webapps for basically everything nowadays. The above script could be tweaked to work with most anything: the original post had it working with a Node app.

Now, considering that I was starting from nearly scratch here, I certainly can't say whether this is a bulletproof setup or even a reasonable idea in general. Still, it seems to work, and that's good enough for me for now.

Carving Out A Workspace On Apple Silicon

Wed Feb 17 11:24:19 EST 2021

Last month, I mentioned my particular computer trouble, in that my trusty iMac Pro has been afflicted by an ever-worsening fan noise problem. I'd just been toughing it out, since there's never a good time to lose your main machine for a week or two, and my traveler MacBook Escape wasn't up to the task of being a full replacement.

After about a month's delay, my fresh new M1 MacBook Air arrived a few weeks ago and I've been putting it through its paces.

The Basics

As pretty much anyone who has one of these computers has said, the performance is outstanding. For the most part, even with emulation, most of the tasks I do during the day feel the same as they did on my wildly-more-expensive iMac Pro. On top of that, the fact that this thing doesn't even have a fan is both a technical marvel and a godsend as far as ambient room noise is concerned.

For continuity's sake, I used Migration Assistant to bring over my iMac's environment, and everything there went swimmingly. The good-citizen apps I use like MarsEdit and Tower were already ported to ARM, while the laggards (unsurprisingly, the ones made by larger companies with more resources) remain Intel-only but run just fine in emulation.

Hardware

For a good while now, I've had the iMac screen flanked by a pair of similarly-sized but far-inferior Asus screens. With the iMac's lovely screen out of the setup for now, I've switched to using those two Asus screens as my primary ones, with the pretty-but-tiny laptop screen sitting beneath them. It works well enough, though I do miss the retina resolution and general brightness of the iMac.

The second external screen itself was a bit of an issue. On their own, these M1 Macs, either for good reason or to mark them as low end, support only two screens total, the laptop screen included. So I ended up ordering one of the StarTech DisplayLink adapters. I expected it to be a crappy experience overall, with noticeable lag, but it actually works much more smoothly than I'd have expected. Other than the fact that it doesn't support Night Shift and some wake-from-sleep slowness that I attribute to it, it actually feels just like a normally-attached monitor.

In order to regain my precious Ethernet connection and (sort of) clean up the dongle situation, I also got one of these Anker USB-C docks. I've only had it for a day, but it seems to be working like you'd want so far. So that's nice.

Eclipse and Java

Here's where I've hit my first bout of jankiness, though it's not too surprising. In general, Eclipse and Java work just fine through emulation, and I can even keep running tests and web servers using the libnotes.dylib from the Notes client as I want.

I've found times where tests lag or fail now when they didn't before, though, and that's a little ominous. Compiling locally with NSF ODP, which spawns a sub-process that loads the Notes libraries, usually works, though now I've set up another Domino server on my network to handle that reliably.

I've also noticed some trouble in one of my Eclipse workspaces where it periodically spends a long time (10+ minutes) "Building" without explaining what exactly it's doing, and this is new behavior since the switch. I can't say what the core trouble is there. It's my largest active workspace, so it could be that file polling or other system-call-intensive work is just slower, or it could be an artifact of moving it from machine to machine. I'll probably scrap it and make a new workspace with the same projects to see if it alleviates it.

This all should improve in time, though, when Eclipse, AdoptOpenJDK, and HCL all release macOS ARM ports. IntelliJ has an experimental ARM port out, and I'm curious how that does its thing. I'll probably spend some time kicking the tires on that, though I still find Eclipse's UI much more conducive to the "lots of semi-related projects" working style I have. Visual Studio Code is in a similar boat, so that'll be good for the JavaScript development I do (under protest).

In the mean time, I've done some tinkering with how I could get a fully-native Eclipse environment running and showing up on my Mac, including firing up the venerable XQuartz to run Eclipse as an X client from a Linux VM in the basement. While that technically works, the experience is... well, I'll charitably call it "not Mac-like". Still, it's kind of neat and would in theory push aside any number of concerns.

Docker

Here's the real trouble I'm butting my head against. I've taken to using Docker more and more for various reasons: running app servers with a Domino runtime, running Domino outright, and (where my trouble is now) performing cross-compilation and other native-specific compilation tasks. For example, for one of my clients, I have a script that mounts the project directory to a Docker container to perform a full Maven build with NSF compilation and compile-time tests, without having to worry about the user's particular Notes or Domino installation.

However, while Docker is doing Herculean work to smooth the process, most of the work I do ends up hitting one of the crashing snags in poor qemu, which crop up particularly with Java compilation tasks. Since compiling Java is basically all I do all day, that leaves me hoping either for improvements in future versions or a Linux/aarch64 port of Domino (or at least libnotes.so).

In the mean time, I'm making use of Docker's network transparency to run Docker on an x64 VM and set DOCKER_HOST locally to point to it. For about half of what I need, this works great: I can run Domino servers and Notes-enabled webapps this way, and I just change which address I'm pointing to to interact with them. However, it naturally removes the possibility of connecting with the local filesystem, at least without pairing it with some file-share jankiness, so it's not a replacement all around. It also topples quickly into the bizarre inner Docker world: for example, I wanted to set up Codewind to work remotely, but the instructions I found for getting started with your own server were not helpful.
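
(For what it's worth, the pointing itself is just an environment variable - the host name below is a placeholder:)

# Point the local docker CLI at the remote x64 VM's daemon over SSH
# (placeholder host name; requires a reasonably recent Docker on both ends)
export DOCKER_HOST=ssh://me@x64-docker-vm
docker info   # now reports on the remote daemon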

Future Use

Still, despite the warts, I'd say this laptop is performing admirably, and better than one would normally expect. Plus, it's a useful exercise in finding more ways to make my workflow less machine-specific. Though I still bristle at the thought of going full Eclipse Che and working out of a web browser, at least moving some more aspects of my workspace to float above the rough seas is just good practice.

I'll probably go back to using the iMac Pro as my main machine once I get it fixed, even if only for the display, but this humble, low-end M1 has planted its flag more firmly than a MacBook Air normally has any right to.

Getting to Appreciate the Idioms of Docker

Mon Sep 14 09:28:53 EDT 2020

Tags: docker
  1. Weekend Domino-Apps-in-Docker Experimentation
  2. Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker
  3. Getting to Appreciate the Idioms of Docker

Now that I've been working with Docker more, I'm starting to get used to its way of doing things. As with any complicated tool - especially one as fond of making up its own syntax as Docker is - there's both the process of learning how to do things as well as learning why they're done that way. Since I'm on this journey myself, I figure it could be useful to share what I've learned so far.

What Is Docker?

To start with, it's useful to understand what Docker is both conceptually and technically, since a lot of discussion about it is buried under terms like "cloud native" that obscure the actual topic. That's even before you get to the giant pile of names like "Kubernetes" and "Rancher" that build on top of the core.

Before I get to the technical bits, the overall idea is that Docker is a way to run programs isolated from each other and in a consistent way across deployments. In a Domino context, it's kind of like how an NSF is still its own mostly-consistent app regardless of what OS Domino is on or what version it is - the NSF is its own little world on Domino-the-host. Technically, it diverges wildly from that, but it can be a loose point of reference.

Now, for the nuts and bolts.

Docker (the tool, not the company or service) is a Linux-born toolset for OS-level virtualization. It uses the term "containers", but other systems over time have used terms like "partitions" and "jails" to mean the same thing. In essence, what OS-level virtualization means is that a program or set of programs is put into a box that looks like the whole OS, but is really just a subset view provided by a host OS. This is distinct from virtualization in the sense of VMWare or Parallels in that the app still uses the code of the host OS, rather than loading up a whole additional OS.

Things admittedly get a little muddled on non-Linux systems. Other than Microsoft's peculiar variant of Docker that runs Windows-based apps, "a Docker container" generally means "a Linux container". To accomplish this, and to avoid having a massively-fragmented array of images (more on those in a bit), Docker Desktop on macOS and (usually) Windows uses hardware virtualization to launch a Linux system. In those cases, Docker is using both hardware virtualization and in-OS container virtualization, but the former is just a technical implementation detail. On a Linux host, though, no such second tier is needed.

Beyond making use of this OS service, Docker consists of a suite of tools for building and managing these images and containers, and then other tools (like Kubernetes) operate at a level above that. But all the stuff you deal with with Docker - Dockerfiles, Compose, all that - comes down to creating and managing these walled-off apps.

Docker Images

Docker images are the part that actually contains the programs and data to run and use, which are then loaded up into a container.

A Docker image is conceptually like a disk image used by a virtualization app or macOS - it's a bunch of files ready to be used in a filesystem. You can make your own or - very commonly - pull them from a centralized library like the main Docker Hub. These images are generally components of a larger system, but are sometimes full-on tools to run yourself. For example, the PostgreSQL image is ready to run in your Docker environment and can be used as essentially a quick-start way to set up a Postgres server.

The particular neat trick that Docker images pull is that they're layered. If you look at a Dockerfile (the script used to build these images), you can see that they tend to start with a FROM line, indicating the base image that they stack on top of. This can go many layers deep - for example, the Maven image builds on top of the OpenJDK image, which is based on the Alpine Linux image.

You can think of this as a usually-simple dependency line in something like Maven. Rather than including all of the third-party code needed, a Maven module will just reference dependencies, which are then brought in and woven together as needed in the final app. This is both useful for creating your images and is also an important efficiency gain down the line.

Dockerfiles

The main way to create a Docker image is to use a Dockerfile, which is a text file with a syntax that appears to have come from another dimension. Still, once you're used to the general form of one, they make sense. If you look at one of the example files, you can see that it's a sequential series of commands describing the steps to create the final image.

When writing these, you more-or-less can conceptualize them like a shell script, where you're copying around files, setting environment properties, and executing commands. Once the whole thing is run, you end up with an image either in your local registry or as a standalone file. That final image is what is loaded and used as the operating environment of the container.
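
For example, building an image with the standard CLI puts it into your local registry under a tag, and you can optionally export it to a standalone file afterward (the tag and filename here are just examples):

# Build the image described by the Dockerfile in the current directory
docker build -t my-app:latest .
# Optionally export the tagged image as a standalone tar file
docker save my-app:latest -o my-app.tar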

The neat trick that Dockerfiles pull, though, is that commands that modify the image actually create a new layer each, rather than changing the contents of a single image. For example, take these few lines from a Dockerfile I use for building a Domino-based project:

COPY docker/settings.xml /root/.m2/
RUN mkdir -p /root
COPY --from=domino-docker:V1101_03212020prod /opt/hcl/domino/notes/11000100/linux /opt/hcl/domino/notes/latest/linux

Each of these lines creates a new layer. The first two are tiny: one just contains the settings.xml file from my project, and the second just contains an empty /root directory. The third is more complicated, pulling in the whole Domino runtime from the official 11.0.1 image, but it's the same idea.

Each of these layers is given a SHA-256 hash identifier that uniquely identifies it as the result of an operation on a previous image state. This lets Docker cache these results and not have to perform the same operation each time. If it knows that, by the time it gets to the third line above, the starting image and the Domino image are both in the same state as they were the last time it ran, it doesn't actually need to copy the bits around: it can just reuse the same unchanged cached layer.

This is the reason why Maven-based Dockerfiles often include a dependency:go-offline step: because the project's dependencies rarely change, you can create a reusable layer containing the resolved Maven dependency repository and not have to re-download everything on every build.
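
As a sketch of that pattern (the project layout here is hypothetical), the POM is copied in and the dependencies resolved before the source, so the expensive resolution layer stays cached until the POM itself changes:

FROM maven:3.6.3-adoptopenjdk-8-openj9

WORKDIR /build
# Copy only the POM first; this layer and the resolved dependencies below
# stay cached as long as the POM is unchanged
COPY pom.xml .
RUN mvn dependency:go-offline

# Source changes only invalidate the layers from here down
COPY src ./src
RUN mvn package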

Wrap-Up

So that's the core of it: managing images and walled-off mini OS environments. Things get more complicated even before you get to other tooling, but I've found it useful to keep my perspective grounded in those basics while I learn about the other aspects.

In the future, I think I'll talk about how and why Docker has been particularly useful for me when it comes to building and running Domino-based apps, in particular helping somewhat to alleviate several of the long-standing impediments to working with Domino.

Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker

Thu Aug 13 14:42:58 EDT 2020

  1. Weekend Domino-Apps-in-Docker Experimentation
  2. Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker
  3. Getting to Appreciate the Idioms of Docker

The other month, I got my feet wet with Docker after only conceptually following it for a long time. With that, I focused on getting a basic Jakarta EE app up and running with an active Notes runtime by way of the official Domino-on-Docker image provided by HCL.

Since that time, I'd been mulling over another use for it: having it handle the build process of my client's sprawling app. This started to become a more pressing desire thanks to a few factors:

  1. Though I have the build working pretty well on Jenkins, it periodically blocks indefinitely when it tries to launch the NSF ODP Compiler, presumably due to some sort of contention. I can go in and kill the build, but that's only when I notice it.
  2. The project is focusing more on an Angular-based UI, with a distinct set of programmers working on it, and the process of keeping a consistent Domino-side development environment up and running for them is a real hassle.
  3. Setting up a new environment with a Notes runtime is a hassle even for in-the-weeds developers like me.

The Goal

So I set out to use Docker to solve this problem. My idea was to write a script that would compose a Docker image containing all the necessary base tools - Java, Maven, Make for some reason, and so forth - bring in the Domino runtime from HCL's image, and add in a standard Notes ID file, names.nsf, and notes.ini that would be safe to keep in the private repo. Then, I'd execute a script within that environment that would run the Maven build inside the container using my current project tree.

The Dockerfile

Since I'm still not fully adept at Docker, it's been a rocky process, but I've managed to concoct something that works. I have a Dockerfile that looks like this (kindly ignore all cargo-culting for now):

FROM maven:3.6.3-adoptopenjdk-8-openj9
USER root

# Install toolchain files for the NPM native components
RUN apt update
RUN apt install -y python make gcc g++ openssh-client git

# Configure the Maven environment and permissive root home directory
COPY settings.xml /root/.m2/
COPY build-app.sh /
RUN mkdir -p /root/.m2/repository
RUN chmod -R 777 /root

# Bring in the Domino runtime
COPY --from=domino-docker:V1101_03212020prod /opt/hcl/domino/notes/11000100/linux /opt/hcl/domino/notes/latest/linux
COPY --from=domino-docker:V1101_03212020prod /local/notesdata /local/notesdata

# Some LotusScript libraries use an all-caps name for lsconst.lss
RUN ln -s lsconst.lss /opt/hcl/domino/notes/latest/linux/LSCONST.LSS

# Copy in our stock Notes ID and configuration files
COPY notesdata/* /local/notesdata/

# Prepare a permissive data environment
RUN chmod -R 777 /local/notesdata

The gist here is similar to my previous example, where it starts from the baseline Maven image. One notable difference is that I switched away from the -alpine variant I had inherited from my original Codewind example: I found that I would encounter "npm: not found" errors during the frontend build process, and discovered that this had to do with the starting Linux distribution.

The rest of it brings in the core Domino runtime and data directory from the official image, plus my pre-prepared Maven configuration. It also does the fun job of symlinking "lsconst.lss" to "LSCONST.LSS" to account for the fact that some of the LotusScript in the NSFs was written to assume Windows and refers to the include file by that name, which doesn't fly on a case-sensitive filesystem. That was a fun one to track down.

The build-app.sh script is just a shell script that runs several Maven commands specific to this project.
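
For context, it's nothing exotic - something along these lines, though the specific goals here are placeholders rather than the real project's commands:

#!/bin/sh
# Hypothetical sketch of build-app.sh: the real script runs project-specific
# Maven goals against the project tree mounted at /build
set -e
cd /build
mvn clean install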

The Executor Script

The other main component is a Bash script, ./build.sh:

#!/usr/bin/env bash

set -e

mkdir -p ~/.m2/repository
mkdir -p ~/.ssh

# Clean any existing NPM builds
rm -rf ../app-ui/*/node_modules
rm -rf ../app-ui/*/dist

# Set up the Docker workspace
rm -rf scratch
mkdir -p scratch/builder
cp maven/* scratch/builder/
cp -r notesdata-server scratch/builder/notesdata

# Build the image and execute a Maven install
docker build scratch/builder -f build.Dockerfile -t app-build
docker run \
    --mount type=bind,source="$(pwd)/..",target=/build \
    --mount type=bind,source="$HOME/.m2/repository",target=/root/.m2/repository \
    --mount type=bind,source="$HOME/.ssh",target=/root/.ssh \
    --rm \
    --user $(id -u):$(id -g) \
    app-build \
    sh /build-app.sh

This script ensures that some common directories exist for the user, clears out any built Node results (useful for a local dev environment), copies configuration files into an image-building directory, and builds the image using the aforementioned Dockerfile. Then, it executes a command to spawn a temporary container using that image, run the build, and delete the container when done. Some of the operative bits and notes are:

  • I'm using --mount here rather than --volume mostly because it's the more explicit syntax; for bind mounts like these, the two end up doing the same thing. It works, anyway, even if bind-mount performance on macOS is godawful currently
  • I bring in the current user's Maven repository so that it doesn't have to regenerate the entire world on each build. I'm going to investigate a way to pre-package the dependencies in a cacheable Maven RUN command as my previous example did, but the sheer size of the project and OSGi dependencies tree makes that prohibitive at the moment
  • I bring in the current user's ~/.ssh directory because one of the NPM dependencies references its dependency via a GitHub SSH URL, which is insane and bad but I have to account for it. Looking at it now, I should really mark that one read-only (see the tweaked run command after this list)
  • The --rm is the part that discards the container after completing, which is convenient
  • I use --user to specify a non-root user ID to run the build, since otherwise Docker on Linux ends up making the target results root-owned and un-deletable by Jenkins. This is also the cause of all those chmod -R 777 ... calls in the Dockerfile. There are gotchas to keep in mind when doing this
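
For reference, that read-only tweak is just an extra option on the mount - a lightly modified version of the docker run call from the script above:

# Same invocation as in build.sh, but with the .ssh bind mount marked read-only
docker run \
    --mount type=bind,source="$(pwd)/..",target=/build \
    --mount type=bind,source="$HOME/.m2/repository",target=/root/.m2/repository \
    --mount type=bind,source="$HOME/.ssh",target=/root/.ssh,readonly \
    --rm \
    --user "$(id -u):$(id -g)" \
    app-build \
    sh /build-app.sh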

Miscellaneous Other Configuration

To get ODP → NSF compilation working, I had to make sure that Maven knew about the Domino runtime. Fortunately, since it'll now be consistent, I'm able to make a stock settings.xml file and copy that in:

<?xml version="1.0"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
	<profiles>
		<profile>
			<id>notes-program</id>
			<properties>
				<notes-program>/opt/hcl/domino/notes/latest/linux</notes-program>
				<notes-data>/local/notesdata</notes-data>
				<notes-ini>/local/notesdata/notes.ini</notes-ini>
			</properties>
		</profile>
	</profiles>
	<activeProfiles>
		<activeProfile>notes-program</activeProfile>
	</activeProfiles>
</settings>

Those three are the by-convention properties I use in the NSF ODP Tooling and my Tycho-run test suites to pass information along to initialize the Notes process.
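
As an illustration of how such properties can be consumed downstream (this particular plugin configuration is hypothetical, not lifted from the real build), they can be forwarded to a test JVM as system properties:

<!-- Hypothetical illustration: forwarding the convention properties to tests -->
<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-surefire-plugin</artifactId>
	<configuration>
		<systemPropertyVariables>
			<notes-program>${notes-program}</notes-program>
			<notes-data>${notes-data}</notes-data>
			<notes-ini>${notes-ini}</notes-ini>
		</systemPropertyVariables>
	</configuration>
</plugin>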

Future Improvements

The main thing I want to improve in the future is getting the dependencies loaded into the image ahead of time. Currently, in addition to sharing the local Maven repository, the command brings in not only the full project structure but also the app-dependencies submodule we use to store giant blobs of p2 sites needed by the build. The "Docker way" would be to compose these in as layers of the image, so that I could skip the --mount bit for them but have Docker's cache avoid the need to regenerate a large dependencies image each time.
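
The rough shape of that would be something like the following hypothetical addition to the build Dockerfile - the module name and paths here are assumptions, not the real layout:

# Hypothetical: bake the dependency blobs into their own cached layer so
# Docker reuses it until the dependencies actually change
COPY app-dependencies /dependencies/app-dependencies
RUN mvn -f /dependencies/app-dependencies/pom.xml install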

I'd also like to pair this with app-runner Dockerfiles to launch the webapp variants of the XPages and JAX-RS projects in Liberty-based containers. Once I get that clean enough, I'll be able to hand that off to the frontend developers so that they can build the full app and have a local development environment with the latest changes from the repo, and no longer have to wonder whether one of the server-side developers has updated the Domino server with some change. Especially when that server-side developer is me, and it's Friday afternoon, and I just want to go play Baba Is You in peace.

In the meantime, though, it works, and works in a repeatable way. Once I figure out how to get Jenkins to read the test results of a freestyle project after the build, I hope to replace the Jenkins build process with this script, which should both make the process more reliable and allow me to run multiple simultaneous builds per node without worrying about deadlocking contention.

Weekend Domino-Apps-in-Docker Experimentation

Sun Jun 28 18:37:19 EDT 2020

  1. Weekend Domino-Apps-in-Docker Experimentation
  2. Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker
  3. Getting to Appreciate the Idioms of Docker

For a couple of years now, first IBM and then HCL have worked on and adapted community work to get Domino running in Docker. I've observed this for a while, but haven't had a particular need: while it's nice and all to be able to spin up a Domino server in Docker, it's primarily an "admin" thing. I have my suite of development Domino servers in VMs, and they're chugging along fine.

However, a thought has always gnawed at the back of my mind: a big pitch of Docker is that it makes not just deployment consistent, but also development, taking away a chunk of the hassle of setting up all sorts of associated tools around development. It's never been difficult, per se, to install a Postgres server, but it's all the better to be able to just say that your app expects to have one around and let the tooling handle the specifics for you. Domino isn't quite as Docker-friendly as Postgres or other tools, but the work done to get the official image going with 11.0.1 brought it closer to practicality. This weekend, I figured I'd give it a shot.

The Problem

It's worth taking a moment to explain why it'd be worth bothering with this sort of setup at all. The core trouble is that running an app with a Notes runtime is extremely annoying. You have to make sure that you're pointing at the right libraries, they're all in the right place to be available in their internal dependency tree, you have to set a bunch of environment variables, and you have to make sure that you provide specialized contextual info, like an ID file. You actually have the easiest time on Windows, though it's still a bit of a hurdle. Linux and macOS have their own impediments, though, some of which can be showstoppers for certain tasks. They're impediments worth overcoming to avoid having to use Windows, but they're impediments nonetheless.
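
For a sense of what that juggling looks like on Linux, here's a hedged sketch of the sort of environment setup involved - the paths follow the HCL image layout, and the exact set of variables varies by tool and platform:

# Point the tooling at the Notes runtime libraries (paths per the HCL image layout)
export Notes_ExecDirectory=/opt/hcl/domino/notes/latest/linux
export LD_LIBRARY_PATH=$Notes_ExecDirectory:$LD_LIBRARY_PATH
export PATH=$Notes_ExecDirectory:$PATH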

The Setup

But back to Docker.

For a little while now, the Eclipse Marketplace has had a prominent spot for Codewind, an IBM-led Eclipse Foundation project to improve the experience of development with Docker containers. The project supplies plugins for Eclipse, IntelliJ, and VS Code / Eclipse Che, but I still spend most of my time in Eclipse, so I went with the former.

To begin with, I started with the default "Open Liberty" project you get when you create a new project with the tooling. As I looked at it, I realized with a bit of relief that there's not too much special about the project itself: it's a normal Maven project with war packaging that brings in some common dependencies. There's no Maven build step that expects Docker at all. The specialized behavior comes (unsurprisingly, if you use Docker already) in the Dockerfile, which goes through the process of building the app, extracting the important build results into a container based on the open-liberty runtime image, bringing in support files from the project, and launching Liberty. Nothing crazy, and the vast majority of the code more shows off MicroProfile features than anything about Docker specifically.

Bringing in Domino

The Docker image that HCL provides is a fully-fledged server, but I don't really care about that: all I really need is the sweet, sweet libnotes.so and associated support libraries. Still, the easiest way to accomplish that is to just copy in the whole /opt/hcl/domino/notes/11000100/linux directory. It's a little wasteful, and I plan to find just what's needed later, but it works to do that.

Once you have that, you need to do the "user side" of it: the ID file and configuration. With a fully-installed Domino server, the data directory balloons in size rapidly, but you don't actually need the vast majority of it if you just want to use the runtime. In fact, all you really need is an ID file, a notes.ini, and a names.nsf - and the latter two can even be massively trimmed down. They do need to be custom for your environment, unfortunately, but at least it's much easier to provide just a few files than spin up and maintain a whole server or run the Notes client locally.

Then, after you've extracted the juicy innards of the Domino image and provided your local resources, you can call NotesInitExtended pointing to your data directory (/local/notesdata in the HCL Docker image convention) and the notes.ini, and voila: you have a running app that can make local and remote Notes native API calls.
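
As a rough illustration of that call, it looks something like the following, where NotesRuntime.initExtended is a hypothetical stand-in for whichever binding's wrapper you use around the C-level call (Darwino NAPI, Domino JNA, and so on):

// Hedged sketch only: "NotesRuntime.initExtended" is a hypothetical wrapper
// around the C-level NotesInitExtended call
String[] initArgs = {
	"",                            // argv[0]: program name placeholder
	"/local/notesdata",            // data directory, per the HCL image convention
	"=/local/notesdata/notes.ini"  // the "=" prefix points the runtime at a specific notes.ini
};
NotesRuntime.initExtended(initArgs);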

Example Project

I uploaded a tiny project to demonstrate this to GitHub: https://github.com/jesse-gallagher/domino-docker-war-example. All it does is provide one JAX-RS resource that emits the server ID, but that shows the Notes API working. In this case, I used the Darwino Domino NAPI (which I really need to refresh from upstream), but Domino JNA would also work. Notes.jar would too, but I think you'll need one of those projects to do the NotesInitExtended call with arguments.

The Dockerfile for the project goes through the steps enumerated above, based on how the original example image does it, and was tweaked to bring in the Domino runtime and support files. I stripped the Liberty-specific stuff out of the pom.xml - I think that the original route the example did of packaging up the whole server and then pulling it apart in Docker image creation has its uses, but isn't needed here.

Much like the pom.xml, the code itself is slim and doesn't explicitly refer to Docker at all. I have a ServletContextListener to init and term the Notes runtime, as well as a Filter implementation to init/term the request thread, but otherwise it just calls the Notes API with no fuss.
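
For anyone curious what the per-request portion of that looks like, here's a hedged sketch using the NotesThread helpers from Notes.jar - the real project uses the Darwino NAPI equivalents, and this assumes Servlet 4.0+, where Filter's init/destroy have default implementations:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;

// Illustrative per-thread init/term pattern; not the project's actual class
@WebFilter("/*")
public class NotesThreadFilter implements Filter {
	@Override
	public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
			throws IOException, ServletException {
		lotus.domino.NotesThread.sinitThread();     // attach the Notes runtime to this thread
		try {
			chain.doFilter(request, response);
		} finally {
			lotus.domino.NotesThread.stermThread(); // detach when the request completes
		}
	}
}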

Larger Projects

I haven't yet tried this with larger projects, but there's no reason it shouldn't work. The build-deploy-run cycle takes a bit more time with Docker than with just a Liberty server embedded in Eclipse normally, but the consistency may be worth it. I've gotten used to running a killall -KILL java whenever an errant process gloms on to my Notes ID file and causes the server to stop being able to init the runtime, but I'd be glad to be done with that forever. And, for my largest project - the one with the hundreds of XPages and CCs - I don't see why that wouldn't work here too.

Normal Domino Projects

Another route that I've considered for Domino in Docker is to use it to deploy NSFs and OSGi projects. This would involve using the Domino image for its intended purpose of running a full server, but configuring the INI to just serve HTTP, and having the Dockerfile place the built OSGi plugins and NSFs in their right places. This would certainly be much faster than the build-deploy-run cycle of replacing NSF designs and deploying the plugins to an Update Site NSF, though there would be a few hurdles to get over. Not impossible, though.
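
A rough sketch of what that could look like, with target paths that are assumptions based on the usual data-directory conventions rather than a tested recipe:

# Hypothetical deployment-style image: start from the official server image
# and lay the built artifacts into place at image-build time
FROM domino-docker:V1101_03212020prod
COPY --chown=notes:notes target/plugins/*.jar /local/notesdata/domino/workspace/applications/eclipse/plugins/
COPY --chown=notes:notes target/nsfs/*.nsf /local/notesdata/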


I figure I'll kick the tires on this some more this week - maybe try deploying the aforementioned giant XPages .war project to it - to see if it will fit into my workflow. There's a chance that the increased deployment times won't be worth it, and I won't really gain the "consistent with production" advantages of Docker when the way I'm developing the app is already a wildly-unsupported configuration. It might be worth it if I try the remote mode of Codewind, though: I have some Liberty servers that Jenkins deploys to, but it'd be even better to be able to show my running app to co-developers right away while working on something, instead of waiting for the full build. It's worth some investigation, anyway.