Showing posts for tag "docker"

Adding Selenium Browser Tests to My Testcontainers Setup

Jul 20, 2021, 11:20 AM

Yesterday, I talked about how I dove into Testcontainers for my app-testing needs. Today, I decided to use this to close another bit of long-open business: automated browser testing. I've been very much a dilettante when it comes to that, but we have a handful of browser-ish tests just to make sure the login page, the main page, and some utility pages load up and include expected content, and those can serve as a foundation for much more.

Background

In general, "automated browser testing" means Selenium. As a toolkit, Selenium has hooks for the browsers you want and essentially universal support, working smoothly in Java with JUnit. However, the actual act of loading a real browser is miserable, mostly because it requires you to install the browser and point to it programmatically, which is doable but is another potential system-specific configuration that I'd much, much rather avoid in my automated builds.

Accordingly, and because my needs have been simple, I've used HtmlUnit, which is a portable Java browser-like library that does the yeoman's work of letting you perform basic Selenium tests without having to configure actual native OS installations. It's neat, imposes basically no strictures on your workflow, and I recommend it for lots of uses. Still, it's not the same as real browsers, and I had to do things like disable JavaScript processing to avoid it tripping up on some funky JS that full-grown browsers can deal with.

Enter Webdriver Containers

So, now that I had Testcontainers configured to run the web app, my eye turned to Webdriver Containers, an ancillary capability of Testcontainers that lets you run these full-fledged browsers via their Docker images, and even has cool abilities like letting you record the screen interactions over VNC. Portability and full production representation? Sign me up.

The initial setup was pretty easy, just adding some dependencies for the Selenium remote driver (replacing my HtmlUnit driver) and the Testcontainers Selenium module:

<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-remote-driver</artifactId>
    <version>3.141.59</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>selenium</artifactId>
    <version>1.15.3</version>
    <scope>test</scope>
</dependency>

Programmatic Container Setup

After that, my next task was to configure the containers. I'll skip over some of my troubleshooting and just describe where I ended up. Basically, since both the webapp and browsers are in Docker containers, I had to coordinate how they communicate with each other. There seem to be a few ways to do this, but the route I went was to build a Docker network in my container orchestration class, bind all of the containers to it, and then reference the app via a network alias.

With that addition and some containers for Chrome and Firefox, the class looks more like this:

public enum AppTestContainers {
    instance;
    
    public final Network network = Network.builder()
        .driver("bridge") //$NON-NLS-1$
        .build();
    public final GenericContainer<?> webapp;
    public final BrowserWebDriverContainer<?> chrome;
    public final BrowserWebDriverContainer<?> firefox;
    
    @SuppressWarnings("resource")
    private AppTestContainers() {
        webapp = new GenericContainer<>(DockerImageName.parse("client-webapp-test:1.0.0-SNAPSHOT")) //$NON-NLS-1$
                .withExposedPorts(8080)
                .withNetwork(network)
                .withNetworkAliases("client-webapp-test"); //$NON-NLS-1$
        
        chrome = new BrowserWebDriverContainer<>()
            .withCapabilities(new ChromeOptions())
            .withNetwork(network);
        firefox = new BrowserWebDriverContainer<>()
            .withCapabilities(new FirefoxOptions())
            .withNetwork(network);

        webapp.start();
        chrome.start();
        firefox.start();
        
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            webapp.close();
            chrome.close();
            firefox.close();
            network.close();
        }));
    }
}

Now that they're all on the same Docker network, the browser containers are able to refer to the webapp like "http://client-webapp-test:8080".

Adding Parameterized Tests

The handful of UI tests I'd set up previously had lines like WebDriver driver = new HtmlUnitDriver(BrowserVersion.FIREFOX, true) to create their WebDriver instance, but now I want to run the tests with both real Firefox and real Chrome. Since I want to test that the app works consistently, I'll want the same tests across browsers - and that's a call for parameterized tests in JUnit.

The way parameterized tests work in JUnit is that you declare a test as being parameterized, and then feed it your parameters via one of a number of mechanisms - "all values of an enum", "this array of strings", and a handful of others. The one to use here is to make a class implementing ArgumentsProvider and configure that:

import java.util.stream.Stream;

import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.ArgumentsProvider;
import org.testcontainers.containers.BrowserWebDriverContainer;

public class BrowserArgumentsProvider implements ArgumentsProvider {
    @Override
    public Stream<? extends Arguments> provideArguments(ExtensionContext context) throws Exception {
        return Stream.of(
            AppTestContainers.instance.chrome,
            AppTestContainers.instance.firefox
        )
        .map(BrowserWebDriverContainer::getWebDriver)
        .map(Arguments::of);
    }
}

This class will take my configured browser containers, get the WebDriver instance for each, and provide that as parameters to a test method. In turn, the test method looks like this:

@ParameterizedTest
@ArgumentsSource(BrowserArgumentsProvider.class)
public void testDefaultLoginPage(WebDriver driver) {
    driver.get(getContainerRootUrl());
    assertEquals("Expected App Title", driver.getTitle());

    // Other tests follow
}

Now, JUnit will run the test twice, once for each browser, and I can add any other configurations I want smoothly.

Minor Gotcha: Container vs. Non-Container URLs

Though some of my tests were using Selenium already, most of them just use the JAX-RS REST client from the testing JVM directly, which is not containerized in this setup. That meant that I had to start worrying about the distinction between the URLs - the containers can't access "localhost:(some random port)", while the JUnit JVM can't access "client-webapp-test:8080".
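To keep that distinction straight, the two flavors of base URL can be wrapped in a small helper. This is just a sketch with hypothetical names mirroring the utility methods mentioned below: the alias and internal port come from the container setup above, while the host-side values would come from Testcontainers' port mapping.

```java
// Sketch: hypothetical helper for the two base-URL flavors.
// The container-side URL is fixed by the network alias and internal port;
// the host-side URL uses whatever port Testcontainers mapped 8080 to.
public class TestUrls {
    private final String hostName;   // e.g. webapp.getHost()
    private final int mappedPort;    // e.g. webapp.getFirstMappedPort()

    public TestUrls(String hostName, int mappedPort) {
        this.hostName = hostName;
        this.mappedPort = mappedPort;
    }

    /** URL usable from inside the Docker network (the browser containers). */
    public String getContainerRootUrl() {
        return "http://client-webapp-test:8080";
    }

    /** URL usable from the JUnit JVM on the host. */
    public String getRootUrl() {
        return "http://" + hostName + ":" + mappedPort;
    }

    public static void main(String[] args) {
        TestUrls urls = new TestUrls("localhost", 49321);
        System.out.println(urls.getContainerRootUrl());
        System.out.println(urls.getRootUrl());
    }
}
```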

For the most part, that's not too tough: I added some more utility methods named to suit and changed the UI tests to use those. However, there was one tricky bit: one of the UI tests uses Selenium to fetch the page and process the HTML, but then uses the JAX-RS client to make sure that a bunch of references on the page resolve to non-404 resources properly. Stuff like this:

driver.findElements(By.xpath("//link[@rel='stylesheet']"))
    .stream()
    .map(link -> link.getAttribute("href"))
    .map(href -> rootUri.resolve(href))
    .forEach(uri -> checkUrlWorks(uri, jaxRsClient));

(It's highly likely that there's a better way to do this in Selenium, but hey, it's still a useful example.)

The trouble with the above was that the URLs coming out of Selenium included the full container URL, not the host-accessible one.

Fortunately, that's not too tricky - it's really just string substitution, since the host and container URLs are known at runtime and won't conflict with anything. So I added a "decontainerize" method and run my URLs through it in the stream:

public URI decontainerize(URI uri) {
    String url = uri.toString();
    if(url.startsWith(getContainerRootUrl())) {
        return URI.create(getRootUrl() + url.substring(getContainerRootUrl().length()));
    } else {
        return uri;
    }
}

// later

driver.findElements(By.xpath("//link[@rel='stylesheet']"))
    .stream()
    .map(link -> link.getAttribute("href"))
    .map(href -> rootUri.resolve(href))
    .map(this::decontainerize)
    .forEach(uri -> checkUrlWorks(uri, jaxRsClient));

With that, all the results came back green again.

Overall, this was a little fiddly, but mostly in a way that helped me learn a little bit more about how this sort of thing works, and now I'm prepped to do real, portable full test suites. Neat!

Tinkering With Testcontainers for Domino-based Web Apps

Jul 19, 2021, 12:46 PM

(Fair warning: this post is not about testing, say, a normal XPages app via Testcontainers. One could get there on this path, but this has a lot of prerequisites that are almost specific to me alone.)

For a while now, I've seen the Testcontainers project hanging around in my periphery. The idea of the project is that it uses Docker to allow you to programmatically load services needed by your automated test suites, rather than having to have the servers running separately. This is a clean match for something like a WAR-based Java webapp that uses, say, Postgres as its backend database: with this, you can spin up a Postgres image from the public repository, fill it with test data, run the suite, and tear it down cleanly.

However, this is generally not a proper match for Domino. Since the code you're testing almost always directly uses Domino API calls (from Notes.jar or another source) and that means having a local Notes runtime initialized in the test code, it's no help to have a separate container somewhere. So, instead, I've been left watching from afar, seeing all the kids having fun in a playground I never got to go to.

The Change

This situation has shifted a bit for my needs, though, thanks to secondary effects of changes I've made in one of my client projects. This is the one where I do all the bells and whistles of my tinkering over the years: XPages outside Domino, building a bunch of NSFs with Jenkins, and so forth.

For a while, I had been building test suites run using tycho-surefire-plugin, but somewhat recently moved the project to maven-bundle-plugin to reap the benefits of that. One drawback, though, was that the test suites became much more difficult to run, in large part due to the restrictions on environment propagation in macOS.

Initially, I just let them wither, but eventually I started to rebuild the test suites. The app had REST services for a while, but they've grown in prominence since we've started gradually replacing XPages-based components with Angular apps. And REST services, fortunately, are best tested at a remove.

First Pass: liberty-maven-plugin

The first way I started writing test suites for the REST services was by using liberty-maven-plugin, which is a general Swiss army knife for working with Liberty during Maven builds, but has particular support for starting a server before tests and terminating it after them. So I set up a config that boots up a Liberty server that can then initialize using a configured Notes runtime, and I started writing tests against it using the Jakarta REST client API and a bit of HtmlUnit.

To its credit, this setup did its job swimmingly. It still has the down side that you have to balance teacups to get a Notes or Domino runtime configured, but, once you do, it'll work nicely.

Next Pass: Testcontainers

Still, it'd be all the better to avoid the need to have a local Notes or Domino setup to run these tests. There's still going to be some weirdness due to things like having to have the non-public Domino Docker image pre-loaded and having an ID file and notes.ini somewhere, but that can be overcome. Plus, I've already overcome those for the CI servers I have set up with each build: I have some dev IDs in the repository and, for each build, Jenkins constructs a Docker image housing the webapp and starts a container using a technique similar to what I described a few months back to run a Liberty app with Domino stuff brought in for support.

So I decided to try adapting that to work with Testcontainers. Instead of my Maven config constructing and launching a Liberty server, I would instead build a Docker image that would then be loaded in Java with the Testcontainers library. In the case of the CI server scripts, I used Bash to copy files into a scratch directory to avoid having to include the whole repo in the Docker build context (prohibitive on the Mac particularly), and so I sought to mirror that in Maven as well.

Building the App Image in Maven

To accomplish this goal, I used maven-resources-plugin to copy the app and support files to a scratch directory, and then com.spotify:dockerfile-maven-plugin to build the Docker image:

<!-- snip -->
    <!-- Copy Docker support resources into scratch space -->
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-resources-plugin</artifactId>
        <version>3.2.0</version>
        <executions>
            <execution>
                <?m2e ignore?>
                <id>prepare-docker-scratch</id>
                <goals>
                    <goal>copy-resources</goal>
                </goals>
                <phase>pre-integration-test</phase>
                <configuration>
                    <outputDirectory>${project.build.directory}/dockerscratch</outputDirectory>
                    <resources>
                        <!-- Dockerfile to build -->
                        <resource>
                            <directory>${project.basedir}</directory>
                            <includes>
                                <include>testcontainer.Dockerfile</include>
                            </includes>
                        </resource>
                        <!-- The just-built WAR -->
                        <resource>
                            <directory>${project.build.directory}</directory>
                            <includes>
                                <include>client-webapp.war</include>
                            </includes>
                        </resource>
                        <!-- Support files from the main repo Docker config -->
                        <resource>
                            <directory>${project.basedir}/../../../docker/support</directory>
                            <includes>
                                <!-- Contains Liberty server.xml, etc. -->
                                <include>liberty/client-app-a/**</include>
                                <!-- Contains a Domino server.id, names.nsf, and notes.ini -->
                                <include>notesdata-ciserver/**</include>
                            </includes>
                        </resource>
                    </resources>
                </configuration>
            </execution>
        </executions>
    </plugin>
    <!-- Build a Docker image to be used by Testcontainers -->
    <plugin>
        <groupId>com.spotify</groupId>
        <artifactId>dockerfile-maven-plugin</artifactId>
        <version>1.4.13</version>
        <executions>
            <execution>
                <?m2e ignore?>
                <id>build-webapp-image</id>
                <goals>
                    <goal>build</goal>
                </goals>
                <phase>pre-integration-test</phase>
                <configuration>
                    <repository>client-webapp-test</repository>
                    <tag>${project.version}</tag>
                    <dockerfile>${project.build.directory}/dockerscratch/testcontainer.Dockerfile</dockerfile>
                    <contextDirectory>${project.build.directory}/dockerscratch</contextDirectory>
                    <!-- Don't attempt to pull Domino images -->
                    <pullNewerImage>false</pullNewerImage>
                </configuration>
            </execution>
        </executions>
    </plugin>
<!-- snip -->

The Dockerfile itself is basically what I had in the afore-linked post, minus the special ENTRYPOINT stuff.

Of note in this config is <pullNewerImage>false</pullNewerImage> in the dockerfile-maven-plugin configuration. Without that set, the plugin would attempt to look for a Domino image on the public Dockerhub and then fail because it's unavailable. With that behavior disabled, it will just use the one locally loaded.

Configuring the Tests

Now that I had that configured, it was time to adjust the tests to suit. Previously, I had been using system properties passed from the Maven environment into the test runner to identify the Liberty server, but now the container initialization will happen in code. Since this app is pretty heavyweight, I didn't want to do what most of the Testcontainers examples show, which is to let the Testcontainers JUnit hooks spawn and terminate containers for each test. Instead, I set up a centralized class to launch the container once:

package it.com.example;

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

public enum AppTestContainers {
    instance;
    
    public final GenericContainer<?> webapp;
    
    @SuppressWarnings("resource")
    private AppTestContainers() {
        webapp = new GenericContainer<>(DockerImageName.parse("client-webapp-test:1.0.0-SNAPSHOT")) //$NON-NLS-1$
                .withExposedPorts(8080);
        webapp.start();
    }
}

With this setup, there will only be one instance of the container launched for the whole test suite, and then Testcontainers will shut it down for me at the end. I can also use the normal mechanisms from the Testcontainers docs to get the actual name and port it ended up mapped to:

    public String getServicesBaseUrl() {
        String host = AppTestContainers.instance.webapp.getHost();
        int port = AppTestContainers.instance.webapp.getFirstMappedPort();
        String context = "clientapp";
        return AppPathUtil.concat("http://" + host + ":" + port, context, ServicesUtil.DEFAULT_JAXRS_ROOT);
    }

Once I did that, all the tests that had previously been running against a liberty-maven-plugin-run server now worked against the Docker container, and I no longer have any dependency on the local environment actually having Notes or Domino fully installed. Neat!

A Catch: Running on my Jenkins Server

Since the whole point of Docker is to make things reproducible across environments, I was flush with confidence when I checked these changes in and pushed them up to the repo. I watched with bated breath as Jenkins picked up the change and started to build. My heart sank, though, when it got to the integration test suite and it failed with a bunch of:

Jul 19, 2021 11:02:10 AM org.testcontainers.utility.ResourceReaper lambda$null$1
WARNING: Can not connect to Ryuk at localhost:49158
java.net.ConnectException: Connection refused (Connection refused)
    at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
    at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
    at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
    at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.base/java.net.Socket.connect(Socket.java:609)
    at org.testcontainers.utility.ResourceReaper.lambda$null$1(ResourceReaper.java:163)
    at org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
    at org.testcontainers.utility.ResourceReaper.lambda$start$2(ResourceReaper.java:159)
    at java.base/java.lang.Thread.run(Thread.java:829)

What the heck? Well, I had noticed in my prep that "Ryuk" is the name of something Testcontainers uses in its orchestration work, and is what allowed me to spawn the container manually above without explicitly terminating it. I looked around for a while and saw that a lot of people had reported similar trouble over the years, but usually it was due to some quirk in a specific version of Docker on Windows or macOS, which was not the case here. I did, though, find that Bitbucket Pipelines tripped over this at one point, and it seemed to be due to their switch to safer user namespaces. Though it sounds like newer versions of Testcontainers fixed that, I figured it's pretty likely that I was hitting a variant of it, as I do indeed use namespace remapping.

So I tweaked my failsafe-maven-plugin configuration to set the TESTCONTAINERS_RYUK_DISABLED environment variable to true and, to be safe, added a shutdown hook at the end of my AppTestContainers init method:

Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    webapp.close();
}));
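
For reference, the failsafe tweak can look something like this sketch. The `<environmentVariables>` block is standard surefire/failsafe configuration, and setting the variable to `true` is what disables Ryuk; the version and surrounding config will vary by project:

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <configuration>
        <environmentVariables>
            <!-- Skip the Ryuk helper container, which can't start under
                 user-namespace remapping -->
            <TESTCONTAINERS_RYUK_DISABLED>true</TESTCONTAINERS_RYUK_DISABLED>
        </environmentVariables>
    </configuration>
</plugin>
```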

Now, Testcontainers doesn't use its Ryuk container, but the actual app container loads up just fine and is destroyed at the end of the suite. Perfect! If all continues to go well, this will mean that it'll be one step easier for other devs to run the test suites regardless of their local setup, which is always a thorn in the side of Domino-realm testing.

Closing: What About Testing Domino Apps?

I mentioned in my disclaimer at the start that this is specifically about testing apps that use a Domino runtime, not apps on Domino. Still, I bet you could do this to test a Domino app that you deploy as an NSF and/or OSGi plugins, and I may do that myself down the line so that the test suite even-more-closely matches what is actually running in production. You could adjust the maven-resources-plugin config above (or use maven-dependency-plugin) to bring in NSFs built earlier in the build with NSF ODP Tooling as well as OSGi update sites and then have your Dockerfile copy those into the Domino data directory and the workspace/applications/eclipse directory. Similarly, if you had a Domino addin that you launch as a task and which then itself listens on a port, you could do the same there.
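As a sketch of that idea, maven-dependency-plugin could copy a previously-built NSF into the same Docker scratch directory for the Dockerfile to pick up. The coordinates and artifact type here are hypothetical and depend entirely on how your NSF projects are packaged; the `copy` goal and `artifactItems` shape are standard plugin usage:

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <executions>
        <execution>
            <id>copy-app-nsfs</id>
            <phase>pre-integration-test</phase>
            <goals>
                <goal>copy</goal>
            </goals>
            <configuration>
                <artifactItems>
                    <!-- Hypothetical NSF artifact built earlier in the
                         reactor; adjust coordinates and type to match
                         your NSF ODP Tooling setup -->
                    <artifactItem>
                        <groupId>com.example</groupId>
                        <artifactId>app-database</artifactId>
                        <version>${project.version}</version>
                        <type>nsf</type>
                    </artifactItem>
                </artifactItems>
                <outputDirectory>${project.build.directory}/dockerscratch/nsfs</outputDirectory>
            </configuration>
        </execution>
    </executions>
</plugin>
```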

It's still not as convenient as being able to just easily run Domino API tests without all the scaffolding, and it implies a lot of structure that makes these more firmly "integration" than "unit" tests, but that's still a powerful capability to have.

Tinkering With Cross-Container Domino Addins

May 16, 2021, 1:35 PM

Tags: docker domino

A good chunk of my work lately involves running distinct processes with a Domino runtime, either run from Domino or standalone for development or CI use. Something that had been percolating in the back of my mind was another step in this: running these "addin-ish" programs in Docker in a separate container from Domino, but participating in that active Domino runtime.

Domino addins in general are really just separate processes and, while they gain some special properties when run via load foo on the console or OSLoadProgram in the C API, that's not a hard requirement to getting a lot of things working.

I figured I could get this working and, armed with basically no knowledge about how this would work, I set out to try it.

Scaffolding

My working project at hand is a webapp run with the standard open-liberty Docker images. Though I'm using that as a starting point, I had to bring in the Notes runtime. Whether you use the official Domino Docker images from Flexnet or build your own, the only true requirement is that it match the version used in the running server, since libnotes does a version check on init. My Dockerfile looks like this:

FROM --platform=linux/amd64 open-liberty:beta

USER root
RUN useradd -u 1000 notes

RUN chown -R notes /opt/ol
RUN chown -R notes /logs

# Bring in the Domino runtime
COPY --from=domino-docker:V1200_03252021prod /opt/hcl/domino/notes/latest/linux /opt/hcl/domino/notes/latest/linux
COPY --from=domino-docker:V1200_03252021prod /local/notesdata /local/notesdata

# Bring in the Liberty app and configuration
COPY --chown=notes:users /target/jnx-example-webapp.war /apps/
COPY --chown=notes:users config/* /config/
COPY --chown=notes:users exec.sh /opt/
RUN chmod +x /opt/exec.sh

USER notes

ENV LD_LIBRARY_PATH "/opt/hcl/domino/notes/latest/linux"
ENV NotesINI "/local/notesdata/notes.ini"
ENV Notes_ExecDirectory "/opt/hcl/domino/notes/latest/linux"
ENV Directory "/local/notesdata"
ENV PATH="${PATH}:/opt/hcl/domino/notes/latest/linux:/opt/hcl/domino/notes/latest/linux/res/C"

EXPOSE 8080 8443

ENTRYPOINT ["/opt/exec.sh"]

I'll get to the "exec.sh" business later, but the pertinent parts now are:

  • Adding a notes user (to avoid permissions trouble with the data dir, if it comes up)
  • Tweaking the Liberty container's ownership to account for this
  • Bringing in the Domino runtime
  • Copying in my WAR file from the project and associated config files (common for Liberty containers)
  • Setting environment variables to tell the app how to init

So far, that's largely the same as how I run standalone Notes-runtime-enabled apps that don't talk to Domino. The only main difference is that, instead of copying in an ID and notes.ini, I instead mount the data volume to this container as I do with the main Domino one.

Shared Memory

The big new hurdle here is getting the separate apps to participate in Domino's shared memory pool. Now, going in, I had a very vague notion of what shared memory is and an even vaguer one of how it works. Certainly, the name is straightforward, and I know it in Domino's case mostly as "the thing that stops Notes from launching after a crash sometimes", but I'd need to figure out some more to get this working. Is it entirely a filesystem thing, as the Notes problem implies? Is it an OS-level thing with true memory? Well, both, apparently.

Fortunately, Docker has this covered: the --ipc flag for docker run. It has two main modes: you can participate in the host's IPC pool (essentially what a normal, non-containerized process does) or join another container specifically. I opted for the latter, which involved changing the launch arguments for both the Domino container and the webapp container.

For Domino, I added --ipc=shareable to the argument list, basically registering it as an available host for other containers to glom on to.

For the separate app, I added --ipc=container:domino, where "domino" is the name of the Domino container.

With those in place, the "addin" process was able to see Domino and do addin-type stuff, like adding a status line and calling AddinLogMessageText to display a message on the server's console.

Great: this proved that it's possible. However, there were still a few show-stopping problems to overcome.

PIDs

From what I gather, Notes keeps track of processes sharing its memory by their reported process IDs. If you have a process that joins the pool and then exits (maybe only if it exits abruptly; I'm not sure) and then tries to rejoin with the same PID, it will fail on init with a complaint that the PID is already registered.

Normally, this isn't a problem, as the OS hands out distinct PIDs all the time. It's trouble with Docker, though: by default, the direct process in a Docker container sees itself as PID 1, and will start as such each time. In the Domino container, its PID 1 is "start.sh", and that's still going, and it's not going to hear otherwise from some other process calling itself the same.

Fortunately, this was a quick fix: Docker's --pid option. Though the documentation for this is uncharacteristically slight, it turns out that the syntax for my needs is the same as the --ipc option above. Thus: --pid=container:domino. Once I set that, the running app got a distinct PID from the pool. That was pleasantly simple.

SIGTERM

And now we come to the toughest problem. As it turns out, dealing with SIGTERM - the signal sent by docker stop - is a whole big deal in the Java world. I banged my head against this for a while, with most of the posts I found being not quite applicable, not working at all for me, or technically working but only in an unsustainable way.

For whatever reason, the Open Liberty Docker image doesn't handle this terribly well - when given a SIGTERM order, it doesn't stop the servlet context before dying, which means the contextDestroyed method in my ServletContextListener (such as this one) doesn't fire.

In many webapp cases, this is fine, but Domino is extremely finicky when it comes to memory-sharing processes needing to exit cleanly. If a process calls NotesInit but doesn't properly call NotesTerm (and close all its Notes-enabled threads), the server panics and dies. This is... not great behavior, but it is what it is, and I needed to figure out how to work with it. Unfortunately, the Liberty Docker container wasn't doing me any favors.

One option is to use Runtime.getRuntime().addShutdownHook(...). This lets you specify a Thread to execute when a SIGTERM is received, and it can work in some cases. It's a little shaky sometimes, though, and it's bad form to riddle otherwise-normal webapps with such things: ideally, even webapps that you intend to run in a container should be written such that they can participate in a normal multi-app environment.

What I ended up settling on was based on this blog post, which (like a number of others) uses a shell script as the main entrypoint. That's a common idiom in general, and Open Liberty's image does it, but its script doesn't account for this, apparently. I tweaked that post's shell script to use the Liberty start/stop commands and ended up with this:

#!/usr/bin/env bash
set -x

term_handler() {
  /opt/ol/helpers/runtime/docker-server.sh /opt/ol/wlp/bin/server stop
  exit 143; # 128 + 15 -- SIGTERM
}

trap 'kill ${!}; term_handler' SIGTERM

/opt/ol/helpers/runtime/docker-server.sh /opt/ol/wlp/bin/server start defaultServer

# echo the Liberty console
tail -f /logs/console.log &

while true
do
  tail -f /dev/null & wait ${!}
done

Now, when I issue a docker stop to the container, the script issues an orderly shutdown of the Liberty instance, which properly calls the contextDestroyed method and allows my code to close down its ExecutorService and call NotesTerm. Better still, Domino keeps running without crashing!

Conclusion

My final docker run scripts ended up being:

Domino

docker run --name domino \
	-d \
	-p 1352:1352 \
	-v notesdata:/local/notesdata \
	-v notesmisc:/local/notesmisc \
	--cap-add=SYS_PTRACE \
	--ipc=shareable \
	--restart=always \
	iksg-domino-12beta3

Webapp

docker build . -t example-webapp
docker run --name example-webapp \
	-it \
	--rm \
	-p 8080:8080 \
	-v notesdata:/local/notesdata \
	-v notesmisc:/local/notesmisc \
	--ipc=container:domino \
	--pid=container:domino \
	example-webapp

(Here, the webapp is run to be temporary and tied to the console, hence -it, --rm, and no -d)

One nice thing to note is that there's nothing webapp- or Java-specific here. One of the nice things about Docker is that it removes a lot of the hurdles to running whatever-the-heck type of program you want, so long as there's a Linux Docker image for it. I just happen to default to Java webapps for basically everything nowadays. The above script could be tweaked to work with most anything: the original post had it working with a Node app.

Now, considering that I was starting from nearly scratch here, I certainly can't say whether this is a bulletproof setup or even a reasonable idea in general. Still, it seems to work, and that's good enough for me for now.

Carving Out A Workspace On Apple Silicon

Feb 17, 2021, 11:24 AM

Last month, I mentioned my particular computer trouble, in that my trusty iMac Pro has been afflicted by an ever-worsening fan noise problem. I'd just been toughing it out, since there's never a good time to lose your main machine for a week or two, and my traveler MacBook Escape wasn't up to the task of being a full replacement.

After about a month's delay, my fresh new M1 MacBook Air arrived a few weeks ago and I've been putting it through its paces.

The Basics

As pretty much anyone who has one of these computers has said, the performance is outstanding. Even with emulation, most of the tasks I do during the day feel the same as they did on my wildly-more-expensive iMac Pro. On top of that, the fact that this thing doesn't even have a fan is both a technical marvel and a godsend as far as ambient room noise is concerned.

For continuity's sake, I used Migration Assistant to bring over my iMac's environment, and everything there went swimmingly. The good-citizen apps I use like MarsEdit and Tower were already ported to ARM, while the laggards (unsurprisingly, the ones made by larger companies with more resources) remain Intel-only but run just fine in emulation.

Hardware

For a good while now, I've had the iMac screen flanked by a pair of similarly-sized but far-inferior Asus screens. With the iMac's lovely screen out of the setup for now, I've switched to using those two Asus screens as my primary ones, with the pretty-but-tiny laptop screen sitting beneath them. It works well enough, though I do miss the retina resolution and general brightness of the iMac.

The second external screen itself was a bit of an issue. On their own, these M1 Macs, whether for good technical reasons or to mark them as low-end, support only two screens total, the laptop screen included. So I ended up ordering one of the StarTech DisplayLink adapters. I expected it to be a crappy experience overall, with noticeable lag, but it actually works much more smoothly than I'd have guessed. Other than the fact that it doesn't support Night Shift, and some wake-from-sleep slowness that I attribute to it, it feels just like a normally-attached monitor.

I also got one of these Anker USB-C docks, both to regain my precious Ethernet connection and to (sort of) clean up the dongle situation. I've only had it for a day, but so far it seems to be working like you'd want. So that's nice.

Eclipse and Java

Here's where I've hit my first bout of jankiness, though it's not too surprising. In general, Eclipse and Java work just fine through emulation, and I can even keep running tests and web servers using the libnotes.dylib from the Notes client as I want.

I've found times where tests lag or fail now when they didn't before, though, and that's a little ominous. Compiling locally with NSF ODP, which spawns a sub-process that loads the Notes libraries, usually works, though now I've set up another Domino server on my network to handle that reliably.

I've also noticed some trouble in one of my Eclipse workspaces, where it periodically spends a long time (10+ minutes) "Building" without explaining what exactly it's doing; this is new behavior since the switch. I can't say what the core trouble is there. It's my largest active workspace, so it could be that file polling or other system-call-intensive work is just slower, or it could be an artifact of moving it from machine to machine. I'll probably scrap it and make a new workspace with the same projects to see whether that alleviates the problem.

This all should improve in time, though, when Eclipse, AdoptOpenJDK, and HCL all release macOS ARM ports. IntelliJ has an experimental ARM port out, and I'm curious how that does its thing. I'll probably spend some time kicking the tires on that, though I still find Eclipse's UI much more conducive to the "lots of semi-related projects" working style I have. Visual Studio Code is in a similar boat, so that'll be good for the JavaScript development I do (under protest).

In the mean time, I've done some tinkering with how I could get a fully-native Eclipse environment running and showing up on my Mac, including firing up the venerable XQuartz to run Eclipse as an X client from a Linux VM in the basement. While that technically works, the experience is... well, I'll charitably call it "not Mac-like". Still, it's kind of neat and would in theory push aside any number of concerns.

Docker

Here's the real trouble I'm butting my head against. I've taken to using Docker more and more for various reasons: running app servers with a Domino runtime, running Domino outright, and (where my trouble is now) performing cross-compilation and other native-specific compilation tasks. For example, for one of my clients, I have a script that mounts the project directory to a Docker container to perform a full Maven build with NSF compilation and compile-time tests, without having to worry about the user's particular Notes or Domino installation.

However, while Docker is doing Herculean work to smooth the process, most of the work I do ends up hitting one of the crashing snags in poor qemu, which crop up particularly with Java compilation tasks. Since compiling Java is basically all I do all day, that leaves me hoping either for improvements in future versions or for a Linux/aarch64 port of Domino (or at least libnotes.so).

In the mean time, I'm making use of Docker's network transparency by running Docker on an x64 VM and setting DOCKER_HOST locally to point to it. For about half of what I need, this works great: I can run Domino servers and Notes-enabled webapps this way, and I just change which address I point at to interact with them. However, it naturally removes the possibility of connecting with the local filesystem, at least without pairing it with some file-share jankiness, so it's not a replacement all around. It also topples quickly into the bizarre inner Docker world: for example, I wanted to set up Codewind to work remotely, but the instructions I found for getting started with your own server were not helpful.
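
The remote-daemon arrangement is really just environment configuration. A sketch of the idea, with a hypothetical VM hostname standing in for my actual one:

```shell
# Point the local docker CLI at the daemon on the x64 VM.
# (The host is a placeholder; recent Docker versions support ssh://,
# or you can expose the daemon over TCP on a trusted network.)
export DOCKER_HOST=ssh://user@x64-vm.local

# From here, ordinary commands execute against the remote daemon,
# so published ports open on the VM's address, not localhost
docker ps
docker run -d -p 8080:8080 example-webapp
```

The catch mentioned above follows directly from this: bind mounts in `docker run` resolve against the remote machine's filesystem, not the one where you typed the command.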

Future Use

Still, despite the warts, I'd say this laptop is performing admirably, and better than one would normally expect. Plus, it's a useful exercise in finding more ways to make my workflow less machine-specific. Though I still bristle at the thought of going full Eclipse Che and working out of a web browser, at least moving some more aspects of my workspace to float above the rough seas is just good practice.

I'll probably go back to using the iMac Pro as my main machine once I get it fixed, even if only for the display, but this humble, low-end M1 has planted its flag more firmly than a MacBook Air normally has any right to.

Getting to Appreciate the Idioms of Docker

Sep 14, 2020, 9:28 AM

Tags: docker
  1. Weekend Domino-Apps-in-Docker Experimentation
  2. Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker
  3. Getting to Appreciate the Idioms of Docker

Now that I've been working with Docker more, I'm starting to get used to its way of doing things. As with any complicated tool - especially one as fond of making up its own syntax as Docker is - there's both the process of learning how to do things as well as learning why they're done that way. Since I'm on this journey myself, I figure it could be useful to share what I've learned so far.

What Is Docker?

To start with, it's useful to understand what Docker is both conceptually and technically, since a lot of discussion about it is buried under terms like "cloud native" that obscure the actual topic. That's even before you get to the giant pile of names like "Kubernetes" and "Rancher" that build on top of the core.

Before I get to the technical bits, the overall idea is that Docker is a way to run programs isolated from each other and in a consistent way across deployments. In a Domino context, it's kind of like how an NSF is still its own mostly-consistent app regardless of what OS Domino is on or what version it is - the NSF is its own little world on Domino-the-host. Technically, it diverges wildly from that, but it can be a loose point of reference.

Now, for the nuts and bolts.

Docker (the tool, not the company or service) is a Linux-born toolset for OS-level virtualization. It uses the term "containers", but other systems over time have used terms like "partitions" and "jails" to mean the same thing. In essence, what OS-level virtualization means is that a program or set of programs is put into a box that looks like the whole OS, but is really just a subset view provided by a host OS. This is distinct from virtualization in the sense of VMWare or Parallels in that the app still uses the code of the host OS, rather than loading up a whole additional OS.
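
One quick way to see that distinction for yourself, assuming a Linux host with a working Docker daemon and the stock Alpine image:

```shell
# The kernel version as the host sees it
uname -r

# The kernel version as seen from inside a container: on a Linux host
# these two match, because the container reuses the host's running
# kernel rather than booting its own
docker run --rm alpine uname -r
```

A VMWare-style guest, by contrast, would report whatever kernel its own installed OS ships.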

Things admittedly get a little muddled on non-Linux systems. Other than Microsoft's peculiar variant of Docker that runs Windows-based apps, "a Docker container" generally means "a Linux container". To accomplish this, and to avoid having a massively-fragmented array of images (more on those in a bit), Docker Desktop on macOS and (usually) Windows uses hardware virtualization to launch a Linux system. In those cases, Docker is using both hardware virtualization and in-OS container virtualization, but the former is just a technical implementation detail. On a Linux host, though, no such second tier is needed.

Beyond making use of this OS service, Docker consists of a suite of tools for building and managing these images and containers, and then other tools (like Kubernetes) operate at a level above that. But all the stuff you deal with with Docker - Dockerfiles, Compose, all that - comes down to creating and managing these walled-off apps.

Docker Images

Docker images are the part that actually contains the programs and data to run and use, which are then loaded up into a container.

A Docker image is conceptually like a disk image used by a virtualization app or macOS - it's a bunch of files ready to be used in a filesystem. You can make your own or - very commonly - pull them from a centralized library like the main Docker Hub. These images are generally components of a larger system, but are sometimes full-on tools to run yourself. For example, the PostgreSQL image is ready to run in your Docker environment and can be used as essentially a quick-start way to set up a Postgres server.
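
For instance, that Postgres quick-start boils down to a single command; this is essentially the invocation from the official image's documentation, with an illustrative password and the standard port published:

```shell
# Launch a disposable Postgres server from the official image.
# POSTGRES_PASSWORD is the one environment variable the image requires.
docker run -d --name some-postgres \
    -e POSTGRES_PASSWORD=mysecretpassword \
    -p 5432:5432 \
    postgres
```

No installer, no init scripts: the image carries everything the server needs, and the container is a running instance of it.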

The particular neat trick that Docker images pull is that they're layered. If you look at a Dockerfile (the script used to build these images), you can see that they tend to start with a FROM line, indicating the base image that they stack on top of. This can go many layers deep - for example, the Maven image builds on top of the OpenJDK image, which is based on the Alpine Linux image.

You can think of this as akin to a (usually simple) dependency chain in something like Maven. Rather than including all of the third-party code needed, a Maven module will just reference dependencies, which are then brought in and woven together as needed in the final app. This is both useful for creating your images and is also an important efficiency gain down the line.

Dockerfiles

The main way to create a Docker image is to use a Dockerfile, which is a text file with a syntax that appears to have come from another dimension. Still, once you're used to the general form of one, they make sense. If you look at one of the example files, you can see that it's a sequential series of commands describing the steps to create the final image.

When writing these, you more-or-less can conceptualize them like a shell script, where you're copying around files, setting environment properties, and executing commands. Once the whole thing is run, you end up with an image either in your local registry or as a standalone file. That final image is what is loaded and used as the operating environment of the container.

The neat trick that Dockerfiles pull, though, is that commands that modify the image actually create a new layer each, rather than changing the contents of a single image. For example, take these few lines from a Dockerfile I use for building a Domino-based project:

COPY docker/settings.xml /root/.m2/
RUN mkdir -p /root
COPY --from=domino-docker:V1101_03212020prod /opt/hcl/domino/notes/11000100/linux /opt/hcl/domino/notes/latest/linux

Each of these lines creates a new layer. The first two are tiny: one just contains the settings.xml file from my project and then the second just contains an empty /root directory. The third is more complicated, pulling in the whole Domino runtime from the official 11.0.1 image, but it's the same idea.

Each of these images is given a SHA-256 hash identifier that will uniquely identify it as a result of an operation on a previous base image state. This lets Docker cache these results and not have to perform the same operation each time. If it knows that, by the time it gets to the third line above, the starting image and the Domino image are both in the same state as they were the last time it ran, it doesn't actually need to copy the bits around: it can just reuse the same unchanged cached layer.
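
The underlying principle is plain content-addressing. This little sketch isn't Docker's exact layer-hashing scheme, but it shows the property the cache relies on: identical inputs always hash identically, so a step whose inputs haven't changed can be recognized and skipped.

```shell
#!/usr/bin/env bash
# Hash a "build step" (here, just its text) the way a content-addressed
# cache would: same bytes in, same identifier out
step_hash() {
  printf '%s' "$1" | sha256sum | cut -d' ' -f1
}

before=$(step_hash 'COPY docker/settings.xml /root/.m2/')
again=$(step_hash 'COPY docker/settings.xml /root/.m2/')
changed=$(step_hash 'COPY docker/settings.xml /root/.m2/other/')

# Unchanged step: identifier matches, so the cached layer can be reused
[ "$before" = "$again" ] && echo "unchanged step: cache hit"
# Modified step: identifier differs, so this layer and everything
# stacked on top of it must be rebuilt
[ "$before" != "$changed" ] && echo "modified step: rebuild from here down"
```

In real Docker, the inputs to that hash include the instruction, the files it touches, and the identifier of the layer beneath it, which is why a change early in a Dockerfile invalidates everything after it.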

This is the reason why Maven-driven Dockerfiles often include a dependency:go-offline step: because the project's dependencies rarely change, you can capture the populated Maven repository in a cached layer and not have to re-resolve everything on every build.
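
A minimal sketch of that pattern, assuming a conventional single-module Maven layout (pom.xml at the root, sources under src/):

```dockerfile
FROM maven:3.6.3-adoptopenjdk-8-openj9
WORKDIR /build

# Copy only the POM first: the dependency-download layer below stays
# cached for as long as the POM is byte-for-byte unchanged
COPY pom.xml .
RUN mvn -B dependency:go-offline

# Source changes invalidate only the layers from this point down,
# so day-to-day builds skip the download entirely
COPY src ./src
RUN mvn -B package
```

One caveat worth knowing: go-offline doesn't always catch every plugin artifact, so the final build may still fetch a few stragglers the first time through.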

Wrap-Up

So that's the core of it: managing images and walled-off mini OS environments. Things get even more complicated in there even before you get to other tooling, but I've found it useful to keep my perspective grounded in those basics while I learn about the other aspects.

In the future, I think I'll talk about how and why Docker has been particularly useful for me when it comes to building and running Domino-based apps, in particularly helping somewhat to alleviate several of the long-standing impediments to working with Domino.

Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker

Aug 13, 2020, 2:42 PM

  1. Weekend Domino-Apps-in-Docker Experimentation
  2. Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker
  3. Getting to Appreciate the Idioms of Docker

The other month, I got my feet wet with Docker after only conceptually following it for a long time. With that, I focused on getting a basic Jakarta EE app up and running with an active Notes runtime by way of the official Domino-on-Docker image provided by HCL.

Since that time, I'd been mulling over another use for it: having it handle the build process of my client's sprawling app. This started to become a more-pressing desire thanks to a couple factors:

  1. Though I have the build working pretty well on Jenkins, it periodically blocks indefinitely when it tries to launch the NSF ODP Compiler, presumably due to some sort of contention. I can go in and kill the build, but that's only when I notice it.
  2. The project is focusing more on an Angular-based UI, with a distinct set of programmers working on it, and the process of keeping a consistent Domino-side development environment up and running for them is a real hassle.
  3. Setting up a new environment with a Notes runtime is a hassle even for in-the-weeds developers like me.

The Goal

So I set out to use Docker to solve this problem. My idea was to write a script that would compose a Docker image containing all the necessary base tools - Java, Maven, Make for some reason, and so forth - bring in the Domino runtime from HCL's image, and add in a standard Notes ID file, names.nsf, and notes.ini that would be safe to keep in the private repo. Then, I'd execute a script within that environment that would run the Maven build inside the container using my current project tree.

The Dockerfile

Since I'm still not fully adept at Docker, it's been a rocky process, but I've managed to concoct something that works. I have a Dockerfile that looks like this (kindly ignore all cargo-culting for now):

FROM maven:3.6.3-adoptopenjdk-8-openj9
USER root

# Install toolchain files for the NPM native components
RUN apt update
RUN apt install -y python make gcc g++ openssh-client git

# Configure the Maven environment and permissive root home directory
COPY settings.xml /root/.m2/
COPY build-app.sh /
RUN mkdir -p /root/.m2/repository
RUN chmod -R 777 /root

# Bring in the Domino runtime
COPY --from=domino-docker:V1101_03212020prod /opt/hcl/domino/notes/11000100/linux /opt/hcl/domino/notes/latest/linux
COPY --from=domino-docker:V1101_03212020prod /local/notesdata /local/notesdata

# Some LotusScript libraries use an all-caps name for lsconst.lss
RUN ln -s lsconst.lss /opt/hcl/domino/notes/latest/linux/LSCONST.LSS

# Copy in our stock Notes ID and configuration files
COPY notesdata/* /local/notesdata/

# Prepare a permissive data environment
RUN chmod -R 777 /local/notesdata

The gist here is similar to my previous example, where it starts from the baseline Maven package. One notable difference is that I switched away from the -alpine variant I had inherited from my original Codewind example: I found that I would encounter npm: not found during the frontend build process, and discovered that this had to do with the starting Linux distribution.

The rest of it brings in the core Domino runtime and data directory from the official image, plus my pre-prepared Maven configuration. It also does the fun job of symlinking "lsconst.lss" to "LSCONST.LSS" to account for the fact that some of the LotusScript in the NSFs was written to assume Windows and refers to the include file by that name, which doesn't fly on a case-sensitive filesystem. That was a fun one to track down.

The build-app.sh script is just a shell script that runs several Maven commands specific to this project.

The Executor Script

The other main component is a Bash script, ./build.sh:

#!/usr/bin/env bash

set -e

mkdir -p ~/.m2/repository
mkdir -p ~/.ssh

# Clean any existing NPM builds
rm -rf ../app-ui/*/node_modules
rm -rf ../app-ui/*/dist

# Set up the Docker workspace
rm -rf scratch
mkdir -p scratch/builder
cp maven/* scratch/builder/
cp -r notesdata-server scratch/builder/notesdata

# Build the image and execute a Maven install
docker build scratch/builder -f build.Dockerfile -t app-build
docker run \
    --mount type=bind,source="$(pwd)/..",target=/build \
    --mount type=bind,source="$HOME/.m2/repository",target=/root/.m2/repository \
    --mount type=bind,source="$HOME/.ssh",target=/root/.ssh \
    --rm \
    --user $(id -u):$(id -g) \
    app-build \
    sh /build-app.sh

This script ensures that some common directories exist for the user, clears out any built Node results (useful for a local dev environment), copies configuration files into an image-building directory, and builds the image using the aforementioned Dockerfile. Then, it executes a command to spawn a temporary container using that image, run the build, and delete the container when done. Some of the operative bits and notes are:

  • I'm using --mount here as opposed to --volume mostly because I don't know that much about Docker, though --mount is the newer and more-explicit syntax, so it may well be the right one for my needs anyway. It works, in any event, even if bind-mount performance on macOS is godawful currently
  • I bring in the current user's Maven repository so that it doesn't have to regenerate the entire world on each build. I'm going to investigate a way to pre-package the dependencies in a cacheable Maven RUN command as my previous example did, but the sheer size of the project and OSGi dependencies tree makes that prohibitive at the moment
  • I bring in the current user's ~/.ssh directory because one of the NPM dependencies references its dependency via a GitHub SSH URL, which is insane and bad but I have to account for it. Looking at it now, I should really mark that one read-only
  • The --rm is the part that discards the container after completing, which is convenient
  • I use --user to specify a non-root user ID to run the build, since otherwise Docker on Linux ends up making the target results root-owned and un-deletable by Jenkins. This is also the cause of all those chmod -R 777 ... calls in the Dockerfile. There are gotchas to keep in mind when doing this

Miscellaneous Other Configuration

To get ODP → NSF compilation working, I had to make sure that Maven knew about the Domino runtime. Fortunately, since it'll now be consistent, I'm able to make a stock settings.xml file and copy that in:

<?xml version="1.0"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
	<profiles>
		<profile>
			<id>notes-program</id>
			<properties>
				<notes-program>/opt/hcl/domino/notes/latest/linux</notes-program>
				<notes-data>/local/notesdata</notes-data>
				<notes-ini>/local/notesdata/notes.ini</notes-ini>
			</properties>
		</profile>
	</profiles>
	<activeProfiles>
		<activeProfile>notes-program</activeProfile>
	</activeProfiles>
</settings>

Those three are the by-convention properties I use in the NSF ODP Tooling and my Tycho-run test suites to pass information along to initialize the Notes process.
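
For a one-off run outside the image, the same three properties could presumably be passed on the command line instead of via the settings profile; a hypothetical invocation using the container paths from above:

```shell
mvn install \
    -Dnotes-program=/opt/hcl/domino/notes/latest/linux \
    -Dnotes-data=/local/notesdata \
    -Dnotes-ini=/local/notesdata/notes.ini
```

Baking them into settings.xml just means nobody (including CI) has to remember to do that.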

Future Improvements

The main thing I want to improve in the future is getting the dependencies loaded into the image ahead of time. Currently, in addition to sharing the local Maven repository, the command brings in not only the full project structure but also the app-dependencies submodule we use to store giant blobs of p2 sites needed by the build. The "Docker way" would be to compose these in as layers of the image, so that I could skip the --mount bit for them but have Docker's cache avoid the need to regenerate a large dependencies image each time.

I'd also like to pair this with app-runner Dockerfiles to launch the webapp variants of the XPages and JAX-RS projects in Liberty-based containers. Once I get that clean enough, I'll be able to hand that off to the frontend developers so that they can build the full app and have a local development environment with the latest changes from the repo, and no longer have to wonder whether one of the server-side developers has updated the Domino server with some change. Especially when that server-side developer is me, and it's Friday afternoon, and I just want to go play Baba Is You in peace.

In the mean time, though, it works, and works in a repeatable way. Once I figure out how to get Jenkins to read the test results of a freestyle project after the build, I hope to replace the Jenkins build process with this script, which should both make the process more reliable and allow me to run multiple simultaneous builds per node without worrying about deadlocking contention.

Weekend Domino-Apps-in-Docker Experimentation

Jun 28, 2020, 6:37 PM

  1. Weekend Domino-Apps-in-Docker Experimentation
  2. Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker
  3. Getting to Appreciate the Idioms of Docker

For a couple of years now, first IBM and then HCL have worked on and adapted community work to get Domino running in Docker. I've observed this for a while, but haven't had a particular need: while it's nice and all to be able to spin up a Domino server in Docker, it's primarily an "admin" thing. I have my suite of development Domino servers in VMs, and they're chugging along fine.

However, a thought has always gnawed at the back of my mind: a big pitch of Docker is that it makes not just deployment consistent, but also development, taking away a chunk of the hassle of setting up all sorts of associated tools around development. It's never been difficult, per se, to install a Postgres server, but it's all the better to be able to just say that your app expects to have one around and let the tooling handle the specifics for you. Domino isn't quite as Docker-friendly as Postgres or other tools, but the work done to get the official image going with 11.0.1 brought it closer to practicality. This weekend, I figured I'd give it a shot.

The Problem

It's worth taking a moment to explain why it'd be worth bothering with this sort of setup at all. The core trouble is that running an app with a Notes runtime is extremely annoying. You have to make sure that you're pointing at the right libraries, they're all in the right place to be available in their internal dependency tree, you have to set a bunch of environment variables, and you have to make sure that you provide specialized contextual info, like an ID file. You actually have the easiest time on Windows, though it's still a bit of a hurdle. Linux and macOS have their own impediments, though, some of which can be showstoppers for certain tasks. They're impediments worth overcoming to avoid having to use Windows, but they're impediments nonetheless.

The Setup

But back to Docker.

For a little while now, the Eclipse Marketplace has had a prominent spot for Codewind, an IBM-led Eclipse Foundation project to improve the experience of development with Docker containers. The project supplies plugins for Eclipse, IntelliJ, and VS Code / Eclipse Che, but I still spend most of my time in Eclipse, so I went with the former.

To begin with, I started with the default "Open Liberty" project you get when you create a new project with the tooling. As I looked at it, I realized with a bit of relief that there's not too much special about the project itself: it's a normal Maven project with war packaging that brings in some common dependencies. There's no Maven build step that expects Docker at all. The specialized behavior comes (unsurprisingly, if you use Docker already) in the Dockerfile, which goes through the process of building the app, extracting the important build results into a container based on the open-liberty runtime image, bringing in support files from the project, and launching Liberty. Nothing crazy, and the vast majority of the code more shows off MicroProfile features than anything about Docker specifically.

Bringing in Domino

The Docker image that HCL provides is a fully-fledged server, but I don't really care about that: all I really need is the sweet, sweet libnotes.so and associated support libraries. Still, the easiest way to accomplish that is to just copy in the whole /opt/hcl/domino/notes/11000100/linux directory. It's a little wasteful, and I plan to find just what's needed later, but it works to do that.

Once you have that, you need to do the "user side" of it: the ID file and configuration. With a fully-installed Domino server, the data directory balloons in size rapidly, but you don't actually need the vast majority of it if you just want to use the runtime. In fact, all you really need is an ID file, a notes.ini, and a names.nsf - and the latter two can even be massively trimmed down. They do need to be custom for your environment, unfortunately, but at least it's much easier to provide a few files than to spin up and maintain a whole server or run the Notes client locally.

Then, after you've extracted the juicy innards of the Domino image and provided your local resources, you can call NotesInitExtended pointing to your data directory (/local/notesdata in the HCL Docker image convention) and the notes.ini, and voila: you have a running app that can make local and remote Notes native API calls.

Example Project

I uploaded a tiny project to demonstrate this to GitHub: https://github.com/jesse-gallagher/domino-docker-war-example. All it does is provide one JAX-RS resource that emits the server ID, but that shows the Notes API working. In this case, I used the Darwino Domino NAPI (which I really need to refresh from upstream), but Domino JNA would also work. Notes.jar would too, but I think you'll need one of those projects to do the NotesInitExtended call with arguments.

The Dockerfile for the project goes through the steps enumerated above, based on how the original example image does it, and was tweaked to bring in the Domino runtime and support files. I stripped the Liberty-specific stuff out of the pom.xml - I think that the original route the example did of packaging up the whole server and then pulling it apart in Docker image creation has its uses, but isn't needed here.

Much like the pom.xml, the code itself is slim and doesn't explicitly refer to Docker at all. I have a ServletContextListener to init and term the Notes runtime, as well as a Filter implementation to init/term the request thread, but otherwise it just calls the Notes API with no fuss.

Larger Projects

I haven't yet tried this with larger projects, but there's no reason it shouldn't work. The build-deploy-run cycle takes a bit more time with Docker than with just a Liberty server embedded in Eclipse normally, but the consistency may be worth it. I've gotten used to running a killall -KILL java whenever an errant process gloms on to my Notes ID file and causes the server to stop being able to init the runtime, but I'd be glad to be done with that forever. And, for my largest project - the one with the hundreds of XPages and CCs - I don't see why that wouldn't work here too.

Normal Domino Projects

Another route that I've considered for Domino in Docker is to use it to deploy NSFs and OSGi projects. This would involve using the Domino image for its intended purpose of running a full server, but configuring the INI to just serve HTTP, and having the Dockerfile place the built OSGi plugins and NSFs in their right places. This would certainly be much faster than the build-deploy-run cycle of replacing NSF designs and deploying the plugins to an Update Site NSF, though there would be a few hurdles to get over. Not impossible, though.


I figure I'll kick the tires on this some more this week - maybe try deploying the aforementioned giant XPages .war project to it - to see if it will fit into my workflow. There's a chance that the increased deployment times won't be worth it, and I won't really gain the "consistent with production" advantages of Docker when the way I'm developing the app is already a wildly-unsupported configuration. It might be worth it if I try the remote mode of Codewind, though: I have some Liberty servers that Jenkins deploys to, but it'd be even-better to be able to show my running app to co-developers to work on something immediately, instead of waiting for the full build. It's worth some investigation, anyway.