- Tinkering With Testcontainers for Domino-based Web Apps
- Adding Selenium Browser Tests to My Testcontainers Setup
- Building a Full Domino Image for JUnit Tests
Last year, I wrote about how I built images to use Testcontainers to run tests against a Liberty app that uses a Domino runtime. In that situation, I used the Domino Docker image from Flexnet but then pulled out the program files and stock data, mixed with pre-configured server support files from the repository.
Recently, though, I've had a need to have a similar setup, but altered in three ways:
- It should fully run Domino, not just use the data and program files for the runtime in Liberty
- It should not require pre-populating a server ID, notes.ini, and names.nsf from outside - that is, it should be self-configuring
- It should also have an extra component installed, one that must be installed after Domino is configured but before the image is fully built
- On the next launch, I need a post-install agent to run for final configuration, and the tests need to wait on that
Additionally, it should still be runnable with the same basic tools - the image should be built and the container started and destroyed by automated tests. This stricture comes into play in weird ways.
The second requirement - that the container be self-configuring - is handled adroitly by One-Touch Setup, the feature in Domino 12 and above that lets you specify a configuration JSON file. I used this here in basically the same way as I described in that post: the script sets up and certifies a throwaway domain with a known admin username + password, and also deploys a few NTFs on first proper launch. Since this server intentionally doesn't communicate with the outside world, I don't need to provide any external files like a certificate authority or server ID.
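For reference, the core `serverSetup` section of such a one-touch JSON file might look like this minimal sketch — all of the names and passwords here are illustrative placeholders, not the actual values from my setup:

```json
{
  "serverSetup": {
    "server": {
      "type": "first",
      "name": "testserver/TestOrg",
      "domainName": "TestDomain"
    },
    "network": {
      "hostName": "testserver.local"
    },
    "org": {
      "orgName": "TestOrg",
      "certifierPassword": "certifierpw"
    },
    "admin": {
      "firstName": "Test",
      "lastName": "Admin",
      "password": "testadminpw"
    }
  }
}
```

Since everything is throwaway, hard-coding credentials like this is fine: the domain only ever exists inside the test container.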
Switching the Baseline
Initially, I continued to use the official image from Flexnet. However, I had run into some trouble with multi-stage builds using it earlier. I lack enough Docker knowledge to be sure, but my suspicion is that it declares `/local/notesdata` as an expected externally-provided volume. That on its own is fine, but it interacts oddly with automated container building. The trouble comes in when you do something like this:
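The original listing isn't reproduced here, but the shape of the problem is a Dockerfile along these lines (the base image tag, paths, and script names are placeholders):

```dockerfile
FROM domino-docker:V1201_03252022prod

ENV SetupAutoConfigure "1"
ENV SetupAutoConfigureParams "/local/runner/domino-config.json"
COPY --chown=notes:notes domino-config.json /local/runner/

# First RUN: one-touch setup populates /local/notesdata
RUN /domino_docker_entrypoint.sh

# Second RUN: expects the data directory populated by the step above
RUN /local/runner/install-extra-component.sh
```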
By the time it hits that second `RUN` line under the automated build mechanisms, `/local/notesdata` is depopulated. I'm not sure why this differs from a command-line `docker build`, but it is what it is.
Fortunately, the "community" version of the image builder from https://github.com/IBM/domino-docker doesn't exhibit this behavior. I had been considering switching over already, so this made the decision all the easier.
Breaking Up Setup Into Stages
With this change in hand, I was able to more properly break the setup up into multiple stages. This is important both for requirement #3 above and because it allows Docker to cache intermediate results. Though I want the server to auto-configure itself during the image build, the results don't need to differ from run to run, and so I can save a lot of time on subsequent launches if I handle these caches well.
So my Dockerfile started to look something like this:
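In lieu of the original listing, here's a sketch of the shape it took — again, the base image tag, paths, and script names are placeholders, not the real values:

```dockerfile
FROM hclcom/domino:12.0.1

# Stage 1: one-touch configuration, cached once it succeeds
ENV SetupAutoConfigure "1"
ENV SetupAutoConfigureParams "/local/runner/domino-config.json"
COPY --chown=notes:notes runner/* /local/runner/
RUN /domino_docker_entrypoint.sh

# Stage 2: "appinstall" - the component that must be installed after
# Domino is configured but before the image is fully built
COPY --chown=notes:notes installer/* /local/install/
RUN /local/install/install-app.sh
```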
But here I hit another distinction between `docker build` and the automated mechanisms: that `RUN /domino_docker_entrypoint.sh` line would execute and get to the point where it emits "Application configuration completed successfully", but then would not actually exit properly. Again, I'm not sure why: the JSON file tells the server not to launch after configuration, and indeed it doesn't, but the script just doesn't relinquish control - though it does when built from the command line.
So I rolled up my sleeves and wrote a wrapper script to kill the process when it's known to be done:
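The script isn't reproduced above, so here's a sketch of the approach as described - the function name, polling interval, and bail-out handling are my own additions:

```shell
#!/usr/bin/env bash
# Run the setup entrypoint in the background, redirecting its output to a
# temp file, then watch that file for the known completion text and kill
# the setup process once it appears.
wait_for_setup() {
  local setup_cmd="$1"
  local done_text="Application configuration completed successfully"
  local log
  log=$(mktemp)

  # Launch setup in the background, capturing all of its output
  "$setup_cmd" > "$log" 2>&1 &
  local pid=$!

  # Poll the log until the completion text shows up, then kill setup
  while kill -0 "$pid" 2>/dev/null; do
    if grep -q "$done_text" "$log"; then
      kill "$pid" 2>/dev/null
      wait "$pid" 2>/dev/null
      return 0
    fi
    sleep 1
  done

  # Setup exited on its own; succeed only if it logged the marker text
  grep -q "$done_text" "$log"
}

# In the image, this would be invoked as something like:
# wait_for_setup /domino_docker_entrypoint.sh
```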
This runs the setup process, redirecting the output to a temp file, in the background. Then, it watches that file for the known ending text and exits when observed. A little janky in multiple ways for multiple reasons, but it works. That allows the image build to progress normally to the next step in all environments, caching the results of the initial server setup. I replaced the `RUN /domino_docker_entrypoint.sh` line above with copying in and executing this script, and all was well.
After the "appinstall" step, I have the peculiar need to then run code in a Notes context to fiddle with some components that aren't configured earlier. For now, I've settled on writing an agent that runs on server start, then signing and enabling it as part of the image build.
Originally, I had this agent emit the text "postinstall done" when it finished, so that the Testcontainers runtime could look for that to know when it's safe to execute tests. However, this added a good while to the launch stage: at this point, launching the container has to wait on final post-install tasks from Domino, then signing the DB with adminp, then actually executing the agent. This added about a minute to the test pre-run time, and thus was a prime target for further caching.
So I altered the agent to check whether it actually needs to do anything and, if so, to shut down the server when it's done:
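A sketch of what such an agent can look like as a Notes Java agent - the profile-document "done" flag, the marker text, and the console command are my assumptions about the mechanism, not the original code:

```java
import lotus.domino.AgentBase;
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.Session;

public class JavaAgent extends AgentBase {
    @Override
    public void NotesMain() {
        try {
            Session session = getSession();
            Database names = session.getDatabase("", "names.nsf");
            Document profile = names.getProfileDocument("PostInstall", "");

            // On any launch after the first, the work is already done:
            // just emit the marker the test harness watches for
            if ("1".equals(profile.getItemValueString("Done"))) {
                System.out.println("postinstall done");
                return;
            }

            // ... one-time post-install configuration goes here ...

            profile.replaceItemValue("Done", "1");
            profile.save();

            // First (image-build) launch: bring the server down so the
            // RUN step can complete and Docker can cache the result
            session.sendConsoleCommand("", "quit");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```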
Then, I altered my Dockerfile to amend the end bit:
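The amended end of the Dockerfile amounts to launching the server one more time during the build - the binary path here is the community image's layout and may need adjusting:

```dockerfile
# Launch the server once so the post-install agent runs; the agent shuts
# the server down when finished, letting this step complete and be cached
RUN /opt/hcl/domino/bin/server
```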
Again: janky, but it works. Now, all of the setup stages are cached on subsequent runs.
Building the Image in Testcontainers
In my original post, I showed using `dockerfile-maven-plugin` to build the image just before executing the tests. This works, but it complicated the pom.xml a bit and meant that running the tests in an IDE first required a Maven build. Not the end of the world, but not ideal.
Fortunately, Testcontainers can also build images. That meant that, rather than building the image in Maven and then re-using the same-named container in Java, I could do it all Java-side. To do this, I created a subclass of `GenericContainer` to centralize the configuration of the container:
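A sketch of the shape of such a subclass, using Testcontainers' `ImageFromDockerfile` - the image name, resource paths, port, and wait regex here are illustrative stand-ins:

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.images.builder.ImageFromDockerfile;

public class DominoAppContainer extends GenericContainer<DominoAppContainer> {
    public DominoAppContainer() {
        // Name the image and pass deleteOnExit=false so the built layers
        // survive the test run and are reused on subsequent launches
        super(new ImageFromDockerfile("example-domino-app", false)
            // Each classpath resource must be enumerated individually
            .withFileFromClasspath("Dockerfile", "docker/Dockerfile")
            .withFileFromClasspath("domino-config.json", "docker/domino-config.json")
            .withFileFromClasspath("postinstall.sh", "docker/postinstall.sh"));
        withExposedPorts(80);
        // Don't consider the container started until the post-install
        // agent has reported in
        waitingFor(Wait.forLogMessage(".*postinstall done.*", 1));
    }
}
```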
It's a little fiddlier in that now I need to enumerate each classpath resource to copy in, but that also makes it all the more portable. I removed the `dockerfile-maven-plugin` execution from the pom.xml and switched to this instead. Since I name the image and tell it not to auto-delete on completion, this retained the desired caching behavior.
Overall, this whole process brought the test pre-launch time (after the first run) down from 3-5 minutes to about 20 seconds while also reducing the split between the Maven config and Java. Much more bearable when making small tweaks and re-running the suite, and it makes it a bit more explicable what's going on.