The other month, I got my feet wet with Docker after only conceptually following it for a long time. With that, I focused on getting a basic Jakarta EE app up and running with an active Notes runtime by way of the official Domino-on-Docker image provided by HCL.
Since that time, I'd been mulling over another use for it: having it handle the build process of my client's sprawling app. This started to become a more-pressing desire thanks to a couple factors:
- Though I have the build working pretty well on Jenkins, it periodically blocks indefinitely when it tries to launch the NSF ODP Compiler, presumably due to some sort of contention. I can go in and kill the build, but that's only when I notice it.
- The project is focusing more on an Angular-based UI, with a distinct set of programmers working on it, and the process of keeping a consistent Domino-side development environment up and running for them is a real hassle.
- Setting up a new environment with a Notes runtime is a hassle even for in-the-weeds developers like me.
So I set out to use Docker to solve this problem. My idea was to write a script that would compose a Docker image containing all the necessary base tools - Java, Maven, Make for some reason, and so forth - bring in the Domino runtime from HCL's image, and add in a standard Notes ID file and notes.ini that would be safe to keep in the private repo. Then, I'd execute a script within that environment that would run the Maven build inside the container using my current project tree.
Since I'm still not fully adept at Docker, it's been a rocky process, but I've managed to concoct something that works. I have a `Dockerfile` that looks like this (kindly ignore all cargo-culting for now):
The gist here is similar to my previous example, where it starts from the baseline Maven package. One notable difference is that I switched away from the `-alpine` variant I had inherited from my original Codewind example: I found that I would encounter `npm: not found` during the frontend build process, and discovered that this had to do with the starting Linux distribution.
The rest of it brings in the core Domino runtime and data directory from the official image, plus my pre-prepared Maven configuration. It also does the fun job of symlinking "lsconst.lss" to "LSCONST.LSS" to account for the fact that some of the LotusScript in the NSFs was written to assume Windows and refers to the include file by that name, which doesn't fly on a case-sensitive filesystem. That was a fun one to track down.
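Since the original Dockerfile isn't reproduced here, the following is a hypothetical sketch of the pieces described above - the image tags, file names, and paths are assumptions for illustration, not the project's actual configuration:

```dockerfile
# Hypothetical sketch only; image tags and paths are assumptions.

# Debian-based Maven image; the -alpine variant broke the npm-based frontend build
FROM maven:3.6-jdk-8

# Base tooling for the frontend and native parts of the build
RUN apt-get update && \
    apt-get install -y --no-install-recommends make nodejs npm && \
    rm -rf /var/lib/apt/lists/*

# Bring in the Domino runtime and data directory from HCL's official image
# ("domino-docker:latest" is an assumed local tag for that image)
COPY --from=domino-docker:latest /opt/hcl/domino/notes/latest/linux /opt/hcl/domino/notes/latest/linux
COPY --from=domino-docker:latest /local/notesdata /local/notesdata

# Pre-prepared Maven configuration plus a repo-safe Notes ID and notes.ini
COPY settings.xml /root/.m2/settings.xml
COPY build.id notes.ini /local/notesdata/

# Account for LotusScript that includes "LSCONST.LSS" by its Windows-cased name
RUN ln -s /opt/hcl/domino/notes/latest/linux/lsconst.lss \
          /opt/hcl/domino/notes/latest/linux/LSCONST.LSS

# Loosen permissions so a non-root build user (passed via --user) can write here
RUN chmod -R 777 /local/notesdata
```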
The `build-app.sh` script is just a shell script that runs several Maven commands specific to this project.
The Executor Script
The other main component is a Bash script:
This script ensures that some common directories exist for the user, clears out any built Node results (useful for a local dev environment), copies configuration files into an image-building directory, and builds the image using the aforementioned `Dockerfile`. Then, it executes a command to spawn a temporary container using that image, run the build, and delete the container when done. Some of the operative bits and notes are:
- I'm using `--mount` here, maybe as opposed to `--volume`, because I don't know that much about Docker. Or maybe it's the right one for my needs? It works, anyway, even if performance on macOS is godawful currently.
- I bring in the current user's Maven repository so that it doesn't have to regenerate the entire world on each build. I'm going to investigate a way to pre-package the dependencies in a cacheable Maven `RUN` command as my previous example did, but the sheer size of the project and OSGi dependency tree makes that prohibitive at the moment.
- I bring in the current user's `~/.ssh` directory because one of the NPM dependencies references its dependency via a GitHub SSH URL, which is insane and bad, but I have to account for it. Looking at it now, I should really mark that one read-only.
- `--rm` is the part that discards the container after completing, which is convenient.
- I use `--user` to specify a non-root user ID to run the build, since otherwise Docker on Linux ends up making the `target` results root-owned and un-deletable by Jenkins. This is also the cause of all those `chmod -R 777 ...` calls in the `Dockerfile`. There are gotchas to keep in mind when doing this.
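Put together, the container-spawning step could look something like this sketch - the image tag, mount targets, and script path are illustrative stand-ins, not the real script's names:

```shell
#!/bin/bash
# Hypothetical sketch of the executor's "docker run" invocation.
# IMAGE_NAME and the /build* paths are assumptions for illustration.

IMAGE_NAME="client-app-build"

run_build() {
  # --rm discards the container when the build finishes;
  # --user keeps target/ from ending up root-owned on Linux;
  # the .ssh mount is read-only since the build only needs to fetch from it.
  docker run \
    --rm \
    --user "$(id -u)" \
    --mount "type=bind,source=$(pwd),target=/build" \
    --mount "type=bind,source=$HOME/.m2/repository,target=/build-m2/repository" \
    --mount "type=bind,source=$HOME/.ssh,target=/build-ssh,readonly" \
    "$IMAGE_NAME" \
    /build/build-app.sh
}
```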
Miscellaneous Other Configuration
To get ODP → NSF compilation working, I had to make sure that Maven knew about the Domino runtime. Fortunately, since it'll now be consistent, I'm able to make a stock settings.xml file and copy that in:
Those three are the by-convention properties I use in the NSF ODP Tooling and my Tycho-run test suites to pass information along to initialize the Notes process.
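As a sketch of what that stock file might contain - the property names follow the NSF ODP Tooling convention, and the paths assume the layout of the official Domino image, so treat both as assumptions:

```xml
<!-- Hypothetical sketch; property names and paths are assumptions -->
<settings>
  <profiles>
    <profile>
      <id>notes-runtime</id>
      <properties>
        <notes-program>/opt/hcl/domino/notes/latest/linux</notes-program>
        <notes-data>/local/notesdata</notes-data>
        <notes-ini>/local/notesdata/notes.ini</notes-ini>
      </properties>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>notes-runtime</activeProfile>
  </activeProfiles>
</settings>
```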
The main thing I want to improve in the future is getting the dependencies loaded into the image ahead of time. Currently, in addition to sharing the local Maven repository, the command brings in not only the full project structure but also the `app-dependencies` submodule we use to store giant blobs of p2 sites needed by the build. The "Docker way" would be to compose these in as layers of the image, so that I could skip the `--mount` bit for them but have Docker's cache avoid the need to regenerate a large dependencies image each time.
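That layering idea could be sketched like this - a hypothetical Dockerfile fragment, with paths and the install command as illustrative assumptions:

```dockerfile
# Hypothetical sketch: bake the slow-changing dependency blobs into an early
# layer so Docker's cache skips rebuilding them on every run.
FROM maven:3.6-jdk-8

# The p2-site blobs change rarely; copying and installing them first means
# this layer stays cached until the dependencies actually change.
COPY app-dependencies/ /build/app-dependencies/
RUN mvn -f /build/app-dependencies/pom.xml install

# The frequently-changing project sources come last, so day-to-day edits
# only invalidate the layers from this point down.
COPY . /build/
```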
I'd also like to pair this with app-runner `Dockerfile`s to launch the webapp variants of the XPages and JAX-RS projects in Liberty-based containers. Once I get that clean enough, I'll be able to hand that off to the frontend developers so that they can build the full app and have a local development environment with the latest changes from the repo, and no longer have to wonder whether one of the server-side developers has updated the Domino server with some change. Especially when that server-side developer is me, and it's Friday afternoon, and I just want to go play Baba Is You in peace.
In the meantime, though, it works, and works in a repeatable way. Once I figure out how to get Jenkins to read the test results of a freestyle project after the build, I hope to replace the Jenkins build process with this script, which should both make the process more reliable and allow me to run multiple simultaneous builds per node without worrying about deadlocking contention.