A Notes-Client-Friendly Way To Access JWT-Protected Resources

Fri Oct 02 15:32:19 EDT 2020

Tags: lotusscript

I recently had call to access the Zoom REST API in a Notes client app that will be maintained by other Notes programmers, so I figured it'd be as good an opportunity as any to use the HTTP and JSON classes added in V10 and 11.

The basics there are fine enough - though those classes aren't featureful, they can get the job done. However, the Zoom API needs specialized authentication, beyond the username/password type that you can kind of work your way to in LotusScript alone. Since my needs will be administrative as opposed to multiple users acting as themselves, I decided to go the JWT route instead of OAuth.

JWT

JWT stands for "JSON Web Token", and it's one of the now-common ways to do secure authorization without passing passwords around. It's simple at its core - just some JSON objects to indicate the type of token and the payload of app-specific claims you're going to make, then a cryptographic signature.
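
Concretely, for the HS256 flavor that Zoom uses, a token is just three dot-separated base64url segments - roughly base64url(header) + "." + base64url(payload) + "." + base64url(HMAC-SHA256(the first two segments joined by ".", secret)) - with the last segment being the signature over the first two. That's all the code below produces.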

It's that last part that moves it out of the realm of LotusScript (barring some way to wrangle the SEC* functions in the C API to do it), so I went to Java and LS2J to bridge the gap.

The Java Side

I lucked out in that the Zoom API uses a pretty simple path for generating the signature - my previous experience with JWT involved public/private key pairs, which is still doable but is more annoying. Additionally, the payload is pretty simple, just asserting that you're logging in, with nothing like the specialized user ID lookups I had to do with SharePoint. This meant I could get away with writing out the token "manually" rather than going through the onerous process of creating script libraries out of one of the available libraries and its dependency tree.

One gotcha is that the JDK doesn't actually ship with JSON support. Fortunately, in this case, the only values going in were JSON-friendly and didn't need escaping, but I'd suggest using even a basic library like the agent-friendly JSON-java for normal uses.
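
If you do bring in JSON-java, a short sketch of building the payload that way (assuming the org.json classes are available to the script library; the class name here is just for illustration) might look like:

import org.json.JSONObject;

public class PayloadExample {
    // Build the JWT payload with proper escaping instead of string concatenation
    public static String buildPayload(String apiKey, long exp) {
        return new JSONObject()
            .put("iss", apiKey)
            .put("exp", exp)
            .toString();
    }
}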

I ended up making a static method in a single-class Java script library:

package us.iksg;

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.concurrent.TimeUnit;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JWTGenerator {
    public static final long TIMEOUT = TimeUnit.HOURS.toSeconds(1);
    
    public static String generateJWT(String apiKey, String apiSecret) {
        try {
            // JWT's "exp" claim is a NumericDate: seconds since the epoch, not milliseconds
            long now = System.currentTimeMillis() / 1000;
            long exp = now + TIMEOUT;
            
            // Note to the future: I apologize for writing JSON via string concatenation, but it
            //   _should_ be safe here.
            
            // JWS segments are base64url-encoded without padding
            Base64.Encoder encoder = Base64.getUrlEncoder().withoutPadding();
            
            // Header: alg: HS256, typ: JWT
            String headerJson = "{\"alg\": \"HS256\", \"typ\": \"JWT\"}";
            String headerB64 = encoder.encodeToString(headerJson.getBytes(StandardCharsets.UTF_8));
            
            // Payload: iss: API_KEY, exp: exp (as a number)
            String payloadJson = "{" +
                    "\"iss\": \"" + apiKey + "\"," +
                    "\"exp\": " + exp +
                "}";
            String payloadB64 = encoder.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
            
            // Signature: HMAC SHA-256 (HS256) over "header.payload"
            Mac mac = Mac.getInstance("HmacSHA256");
            SecretKeySpec spec = new SecretKeySpec(apiSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256");
            mac.init(spec);
            byte[] signature = mac.doFinal((headerB64 + "." + payloadB64).getBytes(StandardCharsets.UTF_8));
            String signatureB64 = encoder.encodeToString(signature);
            
            return headerB64 + '.' + payloadB64 + '.' + signatureB64;
        } catch(Throwable t) {
            throw new RuntimeException(t);
        }
    }
}

All of those classes come with the JDK, so it's nice and self-contained.
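
As a quick sanity check outside of Notes, you can decode the first two segments of a generated token and eyeball the JSON - something like this little hypothetical test harness with placeholder credentials:

package us.iksg;

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JWTGeneratorTest {
    public static void main(String[] args) {
        // Placeholder credentials - substitute a real Zoom API key/secret pair
        String token = JWTGenerator.generateJWT("example-api-key", "example-api-secret");
        String[] parts = token.split("\\.");
        
        // The header and payload are just base64url-encoded JSON, so they can be inspected directly
        Base64.Decoder decoder = Base64.getUrlDecoder();
        System.out.println(new String(decoder.decode(parts[0]), StandardCharsets.UTF_8));
        System.out.println(new String(decoder.decode(parts[1]), StandardCharsets.UTF_8));
    }
}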

The LotusScript Side

Back on the LotusScript side, I brought out my trusty old friend LS2J:

Uselsx "*javacon"
Use "JWT Generator"

Sub Click(Source As Button)
    On Error Goto errorHandler
    
    Dim session As New NotesSession, ws As New NotesUIWorkspace, doc As NotesDocument
    Set doc = ws.CurrentDocument.Document
    
    Dim jsession As New JAVASESSION, jwtGenerator As JavaClass
    Set jwtGenerator = jsession.GetClass("us.iksg.JWTGenerator")
    
    Dim apiKey As String, apiSecret As String
    apiKey = doc.ZoomAPIKey(0)
    apiSecret = doc.ZoomAPISecret(0)
    
    Dim generate As JavaMethod
    Set generate = jwtGenerator.GetMethod("generateJWT", "(Ljava/lang/String;Ljava/lang/String;)Ljava/lang/String;")
    
    Dim token As String
    token = generate.Invoke(Empty, apiKey, apiSecret)
    ' In this case, it's in a "developer playground" form I made for testing.
    ' Do not store JWT tokens long-term - they should be generated for each script.
    doc.ZoomJWTToken = token
    
    Exit Sub
errorHandler:
    Msgbox Erl & ": " & Error
    End
End Sub

The only unusual bit here is that, since I used a static method, I pass Empty as the first parameter to Invoke. I tend to use the reflection-based approach like this out of habit after consistently running into trouble with LS2J's mapping of methods to their Java counterparts, but it'd probably be a little cleaner if I made it an instance method and just called it directly.
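
For reference, the instance-method variant would just mean adding a thin non-static wrapper to the Java class - something like this hypothetical addition inside JWTGenerator:

    // Hypothetical instance wrapper, callable directly from LS2J without the reflection step
    public String generate(String apiKey, String apiSecret) {
        return generateJWT(apiKey, apiSecret);
    }

Then, on the LotusScript side, something like jwtGenerator.CreateObject().generate(apiKey, apiSecret) would do the trick without the GetMethod/Invoke dance.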

Once I had the generated token, I was able to include it in my HTTP requests:

Dim req As NotesHTTPRequest
Set req = session.CreateHTTPRequest()
Call req.SetHeaderField("Authorization", "Bearer " & token)
' Since we just want to plunk this into the field, request a string back
req.PreferStrings = True

Dim result As String
result = req.Get("https://api.zoom.us/v2/users")
doc.Users = result

Not too shabby overall, for the Notes client. I may end up putting all these calls into run-on-server agents regardless, just to avoid trouble should the client end up having their users use the WebAssembly or mobile Notes clients, but even then this still ends up very Notes-client-developer-friendly.

Writing Domino Server Addins With GraalVM Native Image

Sun Sep 27 15:35:56 EDT 2020

Tags: graalvm domino

I was thinking the other day about the task of writing a Domino server addin, the kind that you run by typing load foo on the server console. The way this is generally done is via C or the like: you write a program using your dusty old copy of the C API Toolkit and have an AddinMain function as the entrypoint. That's fine enough if you want to write in C, but, even beyond the language, it carries the tremendous overhead of a fiddly compilation chain that differs per-platform.

I got to thinking, then, about GraalVM, and specifically its Native Image capability. Before I get into what I did, I figure this warrants some background.

What is GraalVM?

GraalVM is a project from Oracle that is, roughly, an alternative core Java Virtual Machine. It's designed to serve a number of goals, but the main way I've seen it used is to improve the speed and efficiency of Java-based programs. It also has some neat-looking capabilities for running multiple languages in one app space, but I have yet to look into that.

The Native Image capability is a way to compile Java applications to native executables for a given platform. So, instead of having a JAR file that you then run with an installed JVM, you'd have an executable that you run directly, and which effectively acts as its own "VM". This means you end up with just "some executable" on your system, and the lack of bootstrapping needed to run it opens up some possibilities.

Domino Server Addins

Though Domino server addins have their own set of functions within the Notes C API, they're really just an executable that Domino launches as a sub-process. If you have a basic executable named foo in your Domino program directory, you can type load foo and it'll run it, whether or not the executable does anything with the Notes API at all. It won't necessarily be useful if it doesn't use the Notes API, but it'll run.

It's this "just an executable" bit, though, that was a contributing factor to making Java not a practical language for this. That's also where RunJava fit in: the runjava executable just initialized a JVM and loaded the named class, which was afterward responsible for everything - but that was nonetheless obligatory work to get a Java app loaded this way.

The Combination

Once I realized these things, it wasn't a far reach to try implementing an addin this way. One of my initial concerns was the way addins use AddinMain as a C-type entrypoint - my knowledge of how that sort of thing works is limited enough that I wasn't sure if GraalVM's annotations would suffice. However, the C API documentation relieved my worry: using that function name is just a convenience that handles some of the bootstrapping for you. If you just use a normal main(...) entrypoint, the only difference is that you're on the hook for managing your status line more (the thing that shows up when you do show tasks).

Fortunately, the addin-related methods in the lotus.notes.addins.JavaServerAddin class in Notes.jar are extremely thin wrappers around native calls and aren't actually specific to RunJava in any way. You can subclass it and use it in essentially the same way as in a RunJava addin:

package frostillicus.graalvm;

import lotus.domino.NotesException;
import lotus.notes.addins.JavaServerAddin;

public class Main extends JavaServerAddin {
	static {
		System.setProperty("java.library.path", "/opt/hcl/domino/notes/11000100/linux"); //$NON-NLS-1$ //$NON-NLS-2$
		System.loadLibrary("notes"); //$NON-NLS-1$
		System.loadLibrary("lsxbe"); //$NON-NLS-1$
	}
	
	public static void main(String[] args) {
		new Main().start();
	}
	
	public Main() {
		setName("GraalVM Test");
	}
	
	@Override
	public void runNotes() throws NotesException {
		AddInLogMessageText("GraalVM Test initialized");
		int taskId = AddInCreateStatusLine(getName());
		try {

			// Do your work here

		} catch(Throwable t) {
			t.printStackTrace();
		} finally {
			AddInDeleteStatusLine(taskId);
		}
	}

}
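
To give a sense of what goes in the "Do your work here" section, a minimal sketch might just loop and sleep, updating the status line as it goes. This assumes JavaServerAddin's AddInSetStatusLine wrapper; a real addin would more likely sit on its message queue waiting for tell commands, as the example project below does:

			// Minimal stand-in for the work loop: update the status line and nap.
			// Thread.sleep's InterruptedException is swallowed by the catch(Throwable) above.
			while(!Thread.currentThread().isInterrupted()) {
				AddInSetStatusLine(taskId, "Idle");
				Thread.sleep(30 * 1000L);
			}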

GraalVM-specific configuration

The GraalVM project provides a Maven plugin to do native compilation for you, and I make use of that in the project's pom.xml:

<plugin>
	<groupId>org.graalvm.nativeimage</groupId>
	<artifactId>native-image-maven-plugin</artifactId>
	<version>20.2.0</version>
	<configuration>
		<imageName>${project.name}</imageName>
		<mainClass>frostillicus.graalvm.Main</mainClass>
		<!-- snip <buildArgs> -->
	</configuration>
	<executions>
		<execution>
			<goals>
				<goal>native-image</goal>
			</goals>
			<phase>package</phase>
		</execution>
	</executions>
</plugin>

Including that in your project will produce a native executable for your current platform in the target folder, alongside the normal JAR file.

The bit I snipped out, though, ends up being important. In a similar way to what happens during Android "Java" compilation, the GraalVM native compiler builds a map of all of the code used in your project to create its native representation. Additionally, it doesn't support reflection as casually as a normal JVM does, and doing a compilation like this shows just how common reflection is in Java.

Reflection and JNI Configuration

What reflection (and JNI) in Java generally needs is a mapping table of class/method/field names to their class representations, and GraalVM doesn't build this for everything by default. Instead, it makes its best guess based on your actual code, but then it's up to you to explicitly specify the parts you'll be accessing dynamically.

For the normal case, Oracle provides a tracing agent (native-image-agent) that monitors an actively-running Java app for such calls. You build your app and run it non-natively with this agent attached, and then it will spit out configuration files based on the reflective methods that are actually called.

However, as with everything else to do with Domino, it's not the normal case: since what I'm running only reasonably exists when launched explicitly from a server, I had to do it the "hard" way. Fortunately, it's actually just mostly tedious: build the app, launch the Domino Docker container, watch for a NoClassDefFoundError or related problem, add the offending class to the config file, and repeat until it stops yelling. Some cases are a little fiddlier, like how JNA's native component misrepresents the class name it was trying to find, but overall it's just time-consuming.

Practicality

So, this is possible, but is it worth doing? Depending on what you want to do, maybe. It's mildly less unsupported than RunJava, and has the huge advantage of not polluting the server's classpath with all of your application code. Additionally, it should be pretty zippy, as GraalVM boasts some impressive performance numbers. And, at least for Java developers, it's much, much easier to use the native-image-maven-plugin than it is to set up cmake or manual makefiles for a C/etc. project.

However, it can also be a real PITA to get working, especially for a reflection-heavy project. Additionally, though you're technically using Addin* functions with a native executable, it's not like HCL would take your call if you run into trouble with a monstrosity like this (I assume). Most importantly, it's restricted to the sort of thing that would make sense as a server addin to begin with - for example, this wouldn't help with building web apps unless you were planning to use it to (again, just as an example) run a web server that's written in Java.

Future Tinkering

I think that this warrants some more investigation. I'd be curious if this process would work for writing other native components, such as DSAPI filters and ExtMgr addins. In those cases, it absolutely would be important to have the right entrypoints, so it wouldn't be quite so easy. Still, it'd be neat if that worked.

And GraalVM and the Native Image component are definitely worth some time even aside from anything Domino-related. I'm curious about what you can do with the "polyglot" features, for example.

Example Project

I've put an example project up on GitHub, which is a basic example that just accepts strings via tell graalvm-test foo and echoes them back. It also includes a Dockerfile for running via HCL's official Domino 11.0.1 image. I haven't actually tested it any other way, so that's the best way to give it a shot.

Getting to Appreciate the Idioms of Docker

Mon Sep 14 09:28:53 EDT 2020

Tags: docker
  1. Jun 28 2020 - Weekend Domino-Apps-in-Docker Experimentation
  2. Aug 13 2020 - Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker
  3. Sep 14 2020 - Getting to Appreciate the Idioms of Docker

Now that I've been working with Docker more, I'm starting to get used to its way of doing things. As with any complicated tool - especially one as fond of making up its own syntax as Docker is - there's both the process of learning how to do things as well as learning why they're done that way. Since I'm on this journey myself, I figure it could be useful to share what I've learned so far.

What Is Docker?

To start with, it's useful to understand what Docker is both conceptually and technically, since a lot of discussion about it is buried under terms like "cloud native" that obscure the actual topic. That's even before you get to the giant pile of names like "Kubernetes" and "Rancher" that build on top of the core.

Before I get to the technical bits, the overall idea is that Docker is a way to run programs isolated from each other and in a consistent way across deployments. In a Domino context, it's kind of like how an NSF is still its own mostly-consistent app regardless of what OS Domino is on or what version it is - the NSF is its own little world on Domino-the-host. Technically, it diverges wildly from that, but it can be a loose point of reference.

Now, for the nuts and bolts.

Docker (the tool, not the company or service) is a Linux-born toolset for OS-level virtualization. It uses the term "containers", but other systems over time have used terms like "partitions" and "jails" to mean the same thing. In essence, what OS-level virtualization means is that a program or set of programs is put into a box that looks like the whole OS, but is really just a subset view provided by a host OS. This is distinct from virtualization in the sense of VMWare or Parallels in that the app still uses the code of the host OS, rather than loading up a whole additional OS.

Things admittedly get a little muddled on non-Linux systems. Other than Microsoft's peculiar variant of Docker that runs Windows-based apps, "a Docker container" generally means "a Linux container". To accomplish this, and to avoid having a massively-fragmented array of images (more on those in a bit), Docker Desktop on macOS and (usually) Windows uses hardware virtualization to launch a Linux system. In those cases, Docker is using both hardware virtualization and in-OS container virtualization, but the former is just a technical implementation detail. On a Linux host, though, no such second tier is needed.

Beyond making use of this OS service, Docker consists of a suite of tools for building and managing these images and containers, and then other tools (like Kubernetes) operate at a level above that. But all the stuff you deal with in Docker - Dockerfiles, Compose, all that - comes down to creating and managing these walled-off apps.

Docker Images

Docker images are the part that actually contains the programs and data to run and use, which are then loaded up into a container.

A Docker image is conceptually like a disk image used by a virtualization app or macOS - it's a bunch of files ready to be used in a filesystem. You can make your own or - very commonly - pull them from a centralized library like the main Docker Hub. These images are generally components of a larger system, but are sometimes full-on tools to run yourself. For example, the PostgreSQL image is ready to run in your Docker environment and can be used as essentially a quick-start way to set up a Postgres server.

The particular neat trick that Docker images pull is that they're layered. If you look at a Dockerfile (the script used to build these images), you can see that they tend to start with a FROM line, indicating the base image that they stack on top of. This can go many layers deep - for example, the Maven image builds on top of the OpenJDK image, which is based on the Alpine Linux image.

You can think of this like a (usually simple) dependency chain in something like Maven. Rather than including all of the third-party code it needs, a Maven module just references dependencies, which are then brought in and woven together as needed in the final app. This is both useful for creating your images and an important efficiency gain down the line.

Dockerfiles

The main way to create a Docker image is to use a Dockerfile, which is a text file with a syntax that appears to have come from another dimension. Still, once you're used to the general form of one, they make sense. If you look at one of the example files, you can see that it's a sequential series of commands describing the steps to create the final image.

When writing these, you more-or-less can conceptualize them like a shell script, where you're copying around files, setting environment properties, and executing commands. Once the whole thing is run, you end up with an image either in your local registry or as a standalone file. That final image is what is loaded and used as the operating environment of the container.

The neat trick that Dockerfiles pull, though, is that commands that modify the image actually create a new layer each, rather than changing the contents of a single image. For example, take these few lines from a Dockerfile I use for building a Domino-based project:

COPY docker/settings.xml /root/.m2/
RUN mkdir -p /root
COPY --from=domino-docker:V1101_03212020prod /opt/hcl/domino/notes/11000100/linux /opt/hcl/domino/notes/latest/linux

Each of these lines creates a new layer. The first two are tiny: one just contains the settings.xml file from my project and then the second just contains an empty /root directory. The third is more complicated, pulling in the whole Domino runtime from the official 11.0.1 image, but it's the same idea.

Each of these layers is given a SHA-256 hash identifier that uniquely identifies it as the result of an operation on a previous base image state. This lets Docker cache these results and not have to perform the same operation each time. If it knows that, by the time it gets to the third line above, the starting image and the Domino image are both in the same state as they were the last time it ran, it doesn't actually need to copy the bits around: it can just reuse the same unchanged cached layer.

This is the reason why Dockerfiles that run Maven builds often include a dependency:go-offline line: because the project's dependencies rarely change, you can create a reusable layer containing the resolved Maven dependency repository and not have to re-resolve everything on each build.

Wrap-Up

So that's the core of it: managing images and walled-off mini OS environments. Things get even more complicated in there even before you get to other tooling, but I've found it useful to keep my perspective grounded in those basics while I learn about the other aspects.

In the future, I think I'll talk about how and why Docker has been particularly useful for me when it comes to building and running Domino-based apps, in particular helping to alleviate several of the long-standing impediments to working with Domino.

NSF ODP Tooling: Setting Up Jenkins Builds

Thu Aug 27 10:50:43 EDT 2020

Tags: nsfodp
  1. Aug 26 2020 - Getting Started with the NSF ODP Tooling
  2. Aug 27 2020 - NSF ODP Tooling: Setting Up Jenkins Builds

In my last post, I talked about the process of setting up a basic NSF ODP project from an NSF without worrying about OSGi plugins or other complicated aspects.

In this post, I'll go over one of the main reasons why you might want to do this: automated builds via Jenkins or other CI server. This process assumes that you're keeping your project in source control of some sort, most likely a Git repository.

Jenkins Setup

The specifics for installing Jenkins are a bit outside the bailiwick of my blog, but they have some good instructions on their site. Those instructions currently start out heavily with Docker, which would work well, but I've found it pretty easy to set up with a Linux VM. That usually involves adding the Jenkins package source and letting the package manager do its thing. You should also install git while you're here.

Once it's configured, the Maven configuration is the same as in the previous post: find the home directory for the user running Jenkins (generally jenkins with those Linux installs or your current user in a simpler local setup) and configure the .m2/settings.xml file the same way.

Beyond the normal Jenkins setup with your default user, there are a few things to configure.

To start out with, we'll add support for Maven projects. Jenkins is trending towards doing everything via "Pipeline" projects, which is a fine idea, but the older Maven support will suit our needs better for now. Go to "Manage Jenkins" and then "Manage Plugins". On the "Available" tab, search for "maven". You should find the "Maven integration plugin" - in my case, it's under "Installed" since I already have it:

Maven Jenkins plugin

Then, make your way back to "Manage Jenkins" and to "Global Tool Configuration". In there, add a JDK if one doesn't already exist. You can either point to an existing Java installation or install one automatically:

JDK Setup

Do similarly for Git. If you installed it in Linux or are running on macOS, you can just write "git" in for the executable path. On Windows, you should install it first.

Git Setup

Finally, do the same for Maven. Like Java, this is one that you can configure automatically. 3.6.3 is a good choice:

Maven Setup

Project Setup

Now that that's all set up, go back to the main Jenkins page and click on "New Item". Here, you should be able to select "Maven project". In general, I like to give my Jenkins projects names without too many special characters, in particular without spaces - there's always the chance that an odd tool here or there will cause trouble with complicated path names.

Maven item

When you create the item, you'll be presented with an intimidating tower of options, but fortunately only a few are important at the moment.

Our first stop is the "Source Code Management" section, where you should configure the location of your source repository. In my case here, I'm building one of the examples in the public NSF ODP Tooling repository, but you may have to add credentials if you're using a private repository.

Source Code Management

The next important step is the "Build" section. In here, pick your Maven version if you have multiple ones, fill in the path to your root POM file (most likely "pom.xml" if your project is in the root of the repo, but it's within a subdirectory here), and set the goals to be "clean install":

Build config

Finally, go to "Post-build Actions" and add an "Archive the artifacts" action. Set the "Files to archive" to "**/target/*.nsf":

Post-build Actions

Then, hit "Save".

Back on the project page, click "Build Now" on the left:

Build Now

If all goes well, you should see the build churn for a bit below the actions and eventually go blue. Unfortunately, there's also plenty of room here for things to go awry. If they do, your best bet is to hover over the build, click the disclosure triangle next to the timestamp, and click "Console Output". That should hopefully illuminate the trouble.

Console Output

Assuming it went well, though, you should be able to refresh the page and see your NSF in the "Last Successful Artifacts" section.

Last Successful Artifacts

And that's one of the key benefits to the CI/CD process: you can have the server run a repeatable build on command, on a schedule, or on triggers (like when you push a change) and have the result ready for you when it's done.

More In Practice

Once you have these basics working, you can get more complicated from there. The most common next step will be to set up either push notifications from your repository host (if your Jenkins server is visible to your repo) or scheduled polling for changes. That way, this will start to happen automatically without the need to manually trigger it.

You can also set up email notifications on failure, which is handy even when you're the only developer - that can help remove some "works on my machine" trouble.

There are a few more things that I think will be worth covering. In particular, I'll want to demonstrate a multi-NSF build that creates a deployment ZIP - something that's present in the complicated OSGi example, but which can be done just as well in a less-complex project.

Getting Started with the NSF ODP Tooling

Wed Aug 26 10:57:53 EDT 2020

Tags: maven nsfodp
  1. Aug 26 2020 - Getting Started with the NSF ODP Tooling
  2. Aug 27 2020 - NSF ODP Tooling: Setting Up Jenkins Builds

I've mentioned the NSF ODP Tooling project quite a bit here, and a lot of that is just a reflection of how much use I've gotten out of it and how much time it's been saving me in my regular work.

Part of it is also, though, that I think it should see wider use. I realized that the project can seem off-putting, or reserved only for the lost-in-the-weeds sort of work I do. Generally, when I mention it, it's in the context of a massive project with a bunch of OSGi plugins, or describing the intricate work that went into implementing it.

So I figured this was as good a time as any to describe the simplest-case scenario to get use out of the project: wrapping a normal ODP, without plugins, and then building it into an NSF outside of Designer.

Environment Setup

Domino Installation

To get started, you'll first need either a local Notes/Domino installation or a remote Domino server. Since it involves slightly-less local configuration, we'll go with the remote Domino path for now. Download the latest distribution ZIP [from the project on OpenNTF](https://openntf.org/main.nsf/project.xsp?r=project/NSF%20ODP%20Tooling/releases) and install the update site from the "Domino" directory on your server in the same way you would the OpenNTF Domino API or other XPages library, and restart HTTP.

Maven and Java

The second thing you'll need is a Maven installation locally. If you're running on macOS or Linux, the easiest way to install this is with a package manager, such as Homebrew or apt. On any platform, you can also follow the download and installation instructions from the official Maven site. You'll also need Java installed - nowadays, I use AdoptOpenJDK.

You'll also need a Maven "settings.xml" file to point to your server. If you don't have such a file already, create an ".m2" directory (with the leading dot) in your home directory. This is the same process as in my original Maven setup guide, but with different contents. Configure the contents to look like this:

<?xml version="1.0"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
    <profiles>
        <profile>
            <id>nsfodp</id>
            <properties>
                <!-- the server name can be anything as long as it matches below -->
                <nsfodp.compiler.server>some-server-name</nsfodp.compiler.server>
                <!-- specify the HTTP/HTTPS URL for your Domino server -->
                <nsfodp.compiler.serverUrl>https://some.server/</nsfodp.compiler.serverUrl>
                
                <!-- set to true if you use a self-signed SSL certificate -->
                <nsfodp.compiler.serverTrustSelfSignedSsl>true</nsfodp.compiler.serverTrustSelfSignedSsl>
            </properties>
        </profile>
    </profiles>
    <activeProfiles>
        <activeProfile>nsfodp</activeProfile>
    </activeProfiles>
    
    <servers>
        <server>
            <id>some-server-name</id>
            <!-- Use a Domino HTTP username and password -->
            <username>builduser</username>
            <password>buildpassword</password>
        </server>
    </servers>
</settings>

NSF Project Setup

The core On-Disk Project you create for your NSF is done using the normal Designer source-control. This process hasn't changed over the years; if you're unfamiliar with creating ODPs and working with source control, resources like the NotesIn9 episode remain very useful (though using Mercurial is an odd choice nowadays).

For this example, I just created a new NSF, but you can start with any simple-to-moderate NSF. For now, avoid anything that uses external XPages libraries or platform-specific things like ODBC in LotusScript. Right-click the NSF and go to "Team Development" → "Set Up Source Control for this Application":

Set up source control in Designer

In the following wizard, give it a name (your choice) and uncheck "Use default location". Pick a destination for your created project, but make sure to put it within an "odp" subfolder of your main project folder - that'll be important later.

Source control wizard

I also uncheck "Go to Navigator view after project is created" because I use Package Explorer for this. It wouldn't hurt to use the Navigator view, though - it's basically the same idea.

At this point, you can close out of Designer if you want - it won't be needed for the rest of this.

Maven Project Setup

Create a new text file called "pom.xml" and put it in the project folder, next to the "odp" directory.

pom.xml placement

Set its contents to this:

<?xml version="1.0"?>
<project
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"
    xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>nsfodp-example</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <packaging>domino-nsf</packaging>

    <pluginRepositories>
        <pluginRepository>
            <id>artifactory.openntf.org</id>
            <name>artifactory.openntf.org</name>
            <url>https://artifactory.openntf.org/openntf</url>
        </pluginRepository>
    </pluginRepositories>

    <build>
        <plugins>
            <plugin>
                <groupId>org.openntf.maven</groupId>
                <artifactId>nsfodp-maven-plugin</artifactId>
                <version>3.1.0</version>
                <extensions>true</extensions>
            </plugin>
        </plugins>
    </build>
</project>

In a terminal window, go to the project directory (the one containing this "pom.xml") and run mvn install. After a bit of churning, you should see some output ending like this:

[INFO] --- nsfodp-maven-plugin:3.1.0:compile (default-compile) @ nsfodp-example ---
[INFO] Compiling ODP
[INFO] Installing bundles
[INFO] - Installed no bundles
[INFO] Creating destination NSF
[INFO] Importing DB properties
[INFO] Importing basic design elements
[INFO] Importing file resources
[INFO] Importing LotusScript libraries
[INFO] Uninstalling bundles
[INFO] org.openntf.nsfodp.compiler.equinox.CompilerApplication#end
[INFO] Generated NSF: /Users/jesse/Projects/nsfodp-example/target/nsfodp-example-1.0.0-SNAPSHOT.nsf
[INFO]
[INFO] --- maven-install-plugin:3.0.0-M1:install (default-install) @ nsfodp-example ---
[INFO] Installing /Users/jesse/Projects/nsfodp-example/target/nsfodp-example-1.0.0-SNAPSHOT.nsf to /Users/jesse/.m2/repository/com/example/nsfodp-example/1.0.0-SNAPSHOT/nsfodp-example-1.0.0-SNAPSHOT.nsf
[INFO] Installing /Users/jesse/Projects/nsfodp-example/pom.xml to /Users/jesse/.m2/repository/com/example/nsfodp-example/1.0.0-SNAPSHOT/nsfodp-example-1.0.0-SNAPSHOT.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  9.346 s
[INFO] Finished at: 2020-08-26T10:29:10-04:00
[INFO] ------------------------------------------------------------------------

The specifics will change a bit based on your system, but the main things are to see those "Compiling" and "Importing" lines followed by the "BUILD SUCCESS" banner at the end. If you look in your project directory, you'll see some generated support files and, within the "target" directory, the built NSF:

Build results

Conclusion

And that's it! Probably, at least. You can use this with most classic Notes apps and with XPages apps that just use the built-in components and JARs inside the NSF. Things can get more complex from there, and the repository contains an example of an XPages application that uses an OSGi-based library.

I plan to go into some of those details in future posts. In addition, I will demonstrate how to do this compilation in Jenkins, which allows you to have the NSF built automatically whenever you or someone else on your team commits a change to source control.

Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker

Thu Aug 13 14:42:58 EDT 2020

  1. Jun 28 2020 - Weekend Domino-Apps-in-Docker Experimentation
  2. Aug 13 2020 - Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker
  3. Sep 14 2020 - Getting to Appreciate the Idioms of Docker

The other month, I got my feet wet with Docker after only conceptually following it for a long time. With that, I focused on getting a basic Jakarta EE app up and running with an active Notes runtime by way of the official Domino-on-Docker image provided by HCL.

Since that time, I'd been mulling over another use for it: having it handle the build process of my client's sprawling app. This started to become a more pressing desire thanks to a couple of factors:

  1. Though I have the build working pretty well on Jenkins, it periodically blocks indefinitely when it tries to launch the NSF ODP Compiler, presumably due to some sort of contention. I can go in and kill the build, but that's only when I notice it.
  2. The project is focusing more on an Angular-based UI, with a distinct set of programmers working on it, and the process of keeping a consistent Domino-side development environment up and running for them is a real hassle.
  3. Setting up a new environment with a Notes runtime is a hassle even for in-the-weeds developers like me.

The Goal

So I set out to use Docker to solve this problem. My idea was to write a script that would compose a Docker image containing all the necessary base tools - Java, Maven, Make for some reason, and so forth - bring in the Domino runtime from HCL's image, and add in a standard Notes ID file, names.nsf, and notes.ini that would be safe to keep in the private repo. Then, I'd execute a script within that environment that would run the Maven build inside the container using my current project tree.

The Dockerfile

Since I'm still not fully adept at Docker, it's been a rocky process, but I've managed to concoct something that works. I have a Dockerfile that looks like this (kindly ignore all cargo-culting for now):

FROM maven:3.6.3-adoptopenjdk-8-openj9
USER root

# Install toolchain files for the NPM native components
RUN apt update
RUN apt install -y python make gcc g++ openssh-client git

# Configure the Maven environment and permissive root home directory
COPY settings.xml /root/.m2/
COPY build-app.sh /
RUN mkdir -p /root/.m2/repository
RUN chmod -R 777 /root

# Bring in the Domino runtime
COPY --from=domino-docker:V1101_03212020prod /opt/hcl/domino/notes/11000100/linux /opt/hcl/domino/notes/latest/linux
COPY --from=domino-docker:V1101_03212020prod /local/notesdata /local/notesdata

# Some LotusScript libraries use an all-caps name for lsconst.lss
RUN ln -s lsconst.lss /opt/hcl/domino/notes/latest/linux/LSCONST.LSS

# Copy in our stock Notes ID and configuration files
COPY notesdata/* /local/notesdata/

# Prepare a permissive data environment
RUN chmod -R 777 /local/notesdata

The gist here is similar to my previous example, where it starts from the baseline Maven package. One notable difference is that I switched away from the -alpine variant I had inherited from my original Codewind example: I found that I would encounter npm: not found during the frontend build process, and discovered that this had to do with the starting Linux distribution.

The rest of it brings in the core Domino runtime and data directory from the official image, plus my pre-prepared Maven configuration. It also does the fun job of symlinking "lsconst.lss" to "LSCONST.LSS" to account for the fact that some of the LotusScript in the NSFs was written to assume Windows and refers to the include file by that name, which doesn't fly on a case-sensitive filesystem. That was a fun one to track down.

The build-app.sh script is just a shell script that runs several Maven commands specific to this project.

The Executor Script

The other main component is a Bash script, ./build.sh:

#!/usr/bin/env bash

set -e

mkdir -p ~/.m2/repository
mkdir -p ~/.ssh

# Clean any existing NPM builds
rm -rf ../app-ui/*/node_modules
rm -rf ../app-ui/*/dist

# Set up the Docker workspace
rm -rf scratch
mkdir -p scratch/builder
cp maven/* scratch/builder/
cp -r notesdata-server scratch/builder/notesdata

# Build the image and execute a Maven install
docker build scratch/builder -f build.Dockerfile -t app-build
docker run \
    --mount type=bind,source="$(pwd)/..",target=/build \
    --mount type=bind,source="$HOME/.m2/repository",target=/root/.m2/repository \
    --mount type=bind,source="$HOME/.ssh",target=/root/.ssh \
    --rm \
    --user $(id -u):$(id -g) \
    app-build \
    sh /build-app.sh

This script ensures that some common directories exist for the user, clears out any built Node results (useful for a local dev environment), copies configuration files into an image-building directory, and builds the image using the aforementioned Dockerfile. Then, it executes a command to spawn a temporary container using that image, run the build, and delete the container when done. Some of the operative bits and notes are:

  • I'm using --mount here maybe as opposed to --volume because I don't know that much about Docker. Or maybe it's the right one for my needs? It works, anyway, even if performance on macOS is godawful currently
  • I bring in the current user's Maven repository so that it doesn't have to regenerate the entire world on each build. I'm going to investigate a way to pre-package the dependencies in a cacheable Maven RUN command as my previous example did, but the sheer size of the project and OSGi dependencies tree makes that prohibitive at the moment
  • I bring in the current user's ~/.ssh directory because one of the NPM dependencies references its dependency via a GitHub SSH URL, which is insane and bad but I have to account for it. Looking at it now, I should really mark that one read-only
  • The --rm is the part that discards the container after completing, which is convenient
  • I use --user to specify a non-root user ID to run the build, since otherwise Docker on Linux ends up making the target results root-owned and un-deletable by Jenkins. This is also the cause of all those chmod -R 777 ... calls in the Dockerfile. There are gotchas to keep in mind when doing this

Miscellaneous Other Configuration

To get ODP → NSF compilation working, I had to make sure that Maven knew about the Domino runtime. Fortunately, since it'll now be consistent, I'm able to make a stock settings.xml file and copy that in:

<?xml version="1.0"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
	<profiles>
		<profile>
			<id>notes-program</id>
			<properties>
				<notes-program>/opt/hcl/domino/notes/latest/linux</notes-program>
				<notes-data>/local/notesdata</notes-data>
				<notes-ini>/local/notesdata/notes.ini</notes-ini>
			</properties>
		</profile>
	</profiles>
	<activeProfiles>
		<activeProfile>notes-program</activeProfile>
	</activeProfiles>
</settings>

Those three are the by-convention properties I use in the NSF ODP Tooling and my Tycho-run test suites to pass information along to initialize the Notes process.

Future Improvements

The main thing I want to improve in the future is getting the dependencies loaded into the image ahead of time. Currently, in addition to sharing the local Maven repository, the command brings in not only the full project structure but also the app-dependencies submodule we use to store giant blobs of p2 sites needed by the build. The "Docker way" would be to compose these in as layers of the image, so that I could skip the --mount bit for them but have Docker's cache avoid the need to regenerate a large dependencies image each time.

I'd also like to pair this with app-runner Dockerfiles to launch the webapp variants of the XPages and JAX-RS projects in Liberty-based containers. Once I get that clean enough, I'll be able to hand that off to the frontend developers so that they can build the full app and have a local development environment with the latest changes from the repo, and no longer have to wonder whether one of the server-side developers has updated the Domino server with some change. Especially when that server-side developer is me, and it's Friday afternoon, and I just want to go play Baba Is You in peace.

In the meantime, though, it works, and works in a repeatable way. Once I figure out how to get Jenkins to read the test results of a freestyle project after the build, I hope to replace the Jenkins build process with this script, which should both make the process more reliable and allow me to run multiple simultaneous builds per node without worrying about deadlocking contention.

NSF ODP Tooling 3.1.0: Dynamically Including Web Resources

Fri Jul 17 14:10:24 EDT 2020

  1. Jun 17 2020 - XPages: The UI Toolkit and the App Framework
  2. Jun 18 2020 - The RuntimeEnvironment Idiom
  3. Jul 17 2020 - NSF ODP Tooling 3.1.0: Dynamically Including Web Resources

I just released version 3.1.0 of the NSF ODP Tooling project and, while I entirely forgot to make a blog post about 3.0 the other week, I think that one of the additions in this one deserves some special mention.

In one of my client projects, we're replacing an old XPages-based UI with an Angular UI backed by our set of JAX-RS resources. This is part of the same sprawling client app I've mentioned a few times so far, but this is a new module within it and doesn't face the same "convert from XPages mid-flight" remit. Since the UI itself is just going to be a bunch of static resource files, that freed up our options for presenting it to the user. In order to keep the benefits of using Domino ACLs, I figured that wrapping it up in an NSF would be the way to go.

The way to do this is to bring your (potentially-transpiled) HTML/JS/CSS files into the WebContent folder in the NSF's Package Explorer representation, either manually or by coaxing Designer to sync it in for you.

My purpose in life is to eliminate Designer from existence, though, so I certainly couldn't be content with that. Instead, I adapted a Maven-based technique for building WAR-packaged JS apps to emit an NSF.

The Project Structure

From that "Targeting Domino for Webapps Incidentally" post, the pertinent part is the use of maven-frontend-plugin to kick off an NPM build of the web app. In that post, I put the JavaScript project files inside a Maven project of their own, but that's optional. In my client's case, the JS team is separate from the Java team, so I didn't want to force them to have to dig through the Maven project tree to get to their files, and the JS apps are in a separate top-level folder in the repository. The simplified structure looks like this:

  • Repository Root
    • ui-projects
      • someuiproject
    • nsfodp-project

My goal is to be able to kick off a Maven build, have it run the NPM build of the JS project in its separate directory, and then pull in the results for the final NSF, all automatically.

The Maven Configuration

By combining frontend-maven-plugin and the NSF ODP Tooling, that's exactly what I get. Here's the <build> section of the ODP project's pom:

<build>
  <plugins>
    <plugin>
      <groupId>com.github.eirslett</groupId>
      <artifactId>frontend-maven-plugin</artifactId>
      <version>1.10.0</version>
  
      <configuration>
        <nodeVersion>v14.3.0</nodeVersion>
        <npmVersion>6.14.4</npmVersion>
        <installDirectory>target</installDirectory>
      </configuration>
        
      <executions>
        <execution>
          <?m2e ignore?>
          <id>install node and npm</id>
          <goals>
            <goal>install-node-and-npm</goal>
          </goals>
          <phase>generate-resources</phase>
        </execution>
        
        <execution>
          <?m2e ignore?>
          <id>jsapp install</id>
          <goals>
            <goal>npm</goal>
          </goals>
          <phase>generate-resources</phase>
          <configuration>
            <workingDirectory>${project.basedir}/../ui-projects/someuiproject</workingDirectory>
          </configuration>
        </execution>
        <execution>
          <?m2e ignore?>
          <id>jsapp build</id>
          <goals>
            <goal>npm</goal>
          </goals>
          <phase>generate-resources</phase>
          <configuration>
            <workingDirectory>${project.basedir}/../ui-projects/someuiproject</workingDirectory>
            <arguments>run build</arguments>
          </configuration>
        </execution>
      </executions>
    </plugin>
  
    <plugin>
      <groupId>org.openntf.maven</groupId>
      <artifactId>nsfodp-maven-plugin</artifactId>
      <version>3.1.0</version>
      
      <configuration>
        <webContentResources>
          <webContentResource>
            <directory>${project.basedir}/../ui-projects/someuiproject/dist/app</directory>
          </webContentResource>
        </webContentResources>
      </configuration>
    </plugin>
  </plugins>
</build>

Now, the final result will be an NSF with whatever other design elements are needed, ready to be deployed with a design replace/refresh. In my client's case, that ends up also getting bundled up into the distribution ZIP, but in a basic case the NSF would be enough.

Writing the XSP Transpiler Maven Plugin

Thu Jul 09 10:33:44 EDT 2020

Tags: maven xpages

When I was first getting my XPages webapp support project into workable shape, I was faced with the immediate problem of translating XSP source into a usable form. Though the XPages core contains both the code for translating XSP source to Java and the loader that executes the compiled Java classes, they're best thought of as two disjoint components in a larger toolchain. Designer uses the translator to create Java source, which it then compiles into .class files like any other Java source. At runtime, Java uses the CompiledPageDriver implementation of the FacesPageDriver to look for these compiled classes based on translating page names like Foo.xsp to class names like xsp.Foo, loading them with the active classloader, and calling their methods to emit the UIComponent tree.

The fact that XSP is transformed to Java and then bytecode is incidental, though: the FacesPageDriver interface only requires outputting some object that can build page trees. I've tinkered a bit with building on the Bazaar's existing dynamic-interpretation code to go directly from XSP to the tree of UIComponents, but there are a lot of fiddly details. Onerous as it may be, the translation+compilation process covers all of the edge cases that may show up.

The translation process requires a classpath populated with both the XPages core code and any libraries you have, since libraries are defined as dynamic Java classes and not, for example, statically-readable XML configuration files (there are XML files in there, but they're only identified by the Java class). Designer deals with this by making you install XPages libraries into your runtime: the classes have to be present in the Eclipse environment for Designer to be able to identify and load them. That works, but it's onerous and not practical for my uses.

Runtime Compilation

The tack I took initially with the webapp support was to write a FacesPageDriver implementation that translates XSP to Java and then compiles those classes on the fly. This has the distinct advantage of having the entire running app going, so all libraries and control definitions are available. There's overhead on first load for each page, especially for complicated ones, but subsequent loads are as speedy as the precompiled route.

Incidentally, this is basically how JSPs work in normal app servers: the JSP source is included in the .war file, and then it's translated into a Servlet implementation Java class and compiled on the fly.

Maven Compilation

Still, I really wanted to avoid having the app have to translate and compile on the fly. While it works, it's wasteful and adds noticeably to the initial load time of a freshly-deployed instance.

My goal was to do this compilation process during Maven compilation - independent of any particular IDE. The trouble there is that there's still a hard requirement on having the actual app class environment available so that library classes can be resolved. It's not enough to just solve the problem of including XPages artifacts as Maven dependencies, since that wouldn't account for using e.g. ODA in an app.

My original tack for this was to do what I do in the NSF ODP Tooling: create an Equinox environment containing the app and its dependencies, and then execute the transpilation in there. I even went so far as to implement it, though it's essentially an undocumented feature of 3.0 and above. This didn't sit quite right with me, though. For one, it's kind of outside of the Tooling's bailiwick: while it certainly does XSP compilation as part of the overall NSF assembly, it's really a distinct activity. Moreover, though, loading a whole Equinox environment is fiddly and unnecessarily requires a Notes runtime to be configured along with it.

So I took a second pass at it in the xpages-runtime project, and this has been working out well. I realized that I didn't need to have all of the app's classes available to the Maven plugin, nor did I need to spawn a whole second process. I could instead construct something of a jail ClassLoader to house the process. I build a ClassLoader based on the project's dependency tree (which inherently includes the required XSP core classes), copy in a transpiler implementation, and execute the process reflectively. This means that the whole thing can happen in-process and without a special Notes runtime, just like a normal Maven plugin.
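
In rough terms - and with hypothetical names, not the plugin's literal code - the "jail" amounts to building a URLClassLoader from the project's resolved dependencies and driving a transpiler implementation in it reflectively:

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

import org.apache.maven.artifact.Artifact;
import org.apache.maven.project.MavenProject;

public class XspTranspileSketch {
    // Build an isolated ClassLoader from the project's dependency tree (which already
    // includes the XSP core and any libraries), then run a transpiler implementation in it
    public void transpile(MavenProject project, File xspFile, File outputDir) throws Exception {
        List<URL> urls = new ArrayList<>();
        for(Artifact artifact : project.getArtifacts()) {
            urls.add(artifact.getFile().toURI().toURL());
        }
        try(URLClassLoader jail = new URLClassLoader(urls.toArray(new URL[0]), ClassLoader.getSystemClassLoader().getParent())) {
            // "example.TranspilerRunner" stands in for whatever implementation gets copied into the jail
            Class<?> runner = jail.loadClass("example.TranspilerRunner");
            Object instance = runner.getConstructor().newInstance();
            runner.getMethod("transpile", File.class, File.class).invoke(instance, xspFile, outputDir);
        }
    }
}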

Better still, I use the BuildContext apparatus to identify changes, and Eclipse's m2e hooks into this. In this way, I can do essentially incremental compilation that fires off whenever you modify a .xsp file inside Eclipse, giving the same kind of experience that you get in Designer (with less crashing). In both Maven and Eclipse, the actual Java → bytecode compilation happens with the normal compiler: I just drop class files in the right place, tell the project about the generated source folder, and let it do its thing.

All in all, I'm pretty pleased with how this turned out. It's still primarily useful for the type of development workflow I've set up personally, but it's definitely had a noticeable impact on the modify-deploy-run cycle. For a moribund stack that I'm actively working away from, I've built myself a pretty-respectable toolshed.

Weekend Domino-Apps-in-Docker Experimentation

Sun Jun 28 18:37:19 EDT 2020

  1. Jun 28 2020 - Weekend Domino-Apps-in-Docker Experimentation
  2. Aug 13 2020 - Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker
  3. Sep 14 2020 - Getting to Appreciate the Idioms of Docker

For a couple of years now, first IBM and then HCL have worked on and adapted community work to get Domino running in Docker. I've observed this for a while, but haven't had a particular need: while it's nice and all to be able to spin up a Domino server in Docker, it's primarily an "admin" thing. I have my suite of development Domino servers in VMs, and they're chugging along fine.

However, a thought has always gnawed at the back of my mind: a big pitch of Docker is that it makes not just deployment consistent, but also development, taking away a chunk of the hassle of setting up all sorts of associated tools around development. It's never been difficult, per se, to install a Postgres server, but it's all the better to be able to just say that your app expects to have one around and let the tooling handle the specifics for you. Domino isn't quite as Docker-friendly as Postgres or other tools, but the work done to get the official image going with 11.0.1 brought it closer to practicality. This weekend, I figured I'd give it a shot.

The Problem

It's worth taking a moment to explain why it'd be worth bothering with this sort of setup at all. The core trouble is that running an app with a Notes runtime is extremely annoying. You have to make sure that you're pointing at the right libraries, they're all in the right place to be available in their internal dependency tree, you have to set a bunch of environment variables, and you have to make sure that you provide specialized contextual info, like an ID file. You actually have the easiest time on Windows, though it's still a bit of a hurdle. Linux and macOS have their own impediments, though, some of which can be showstoppers for certain tasks. They're impediments worth overcoming to avoid having to use Windows, but they're impediments nonetheless.

The Setup

But back to Docker.

For a little while now, the Eclipse Marketplace has had a prominent spot for Codewind, an IBM-led Eclipse Foundation project to improve the experience of development with Docker containers. The project supplies plugins for Eclipse, IntelliJ, and VS Code / Eclipse Che, but I still spend most of my time in Eclipse, so I went with the Eclipse plugin.

To begin with, I started with the default "Open Liberty" project you get when you create a new project with the tooling. As I looked at it, I realized with a bit of relief that there's not too much special about the project itself: it's a normal Maven project with war packaging that brings in some common dependencies. There's no Maven build step that expects Docker at all. The specialized behavior comes (unsurprisingly, if you use Docker already) in the Dockerfile, which goes through the process of building the app, extracting the important build results into a container based on the open-liberty runtime image, bringing in support files from the project, and launching Liberty. Nothing crazy, and the vast majority of the code more shows off MicroProfile features than anything about Docker specifically.

Bringing in Domino

The Docker image that HCL provides is a fully-fledged server, but I don't really care about that: all I really need is the sweet, sweet libnotes.so and associated support libraries. Still, the easiest way to accomplish that is to just copy in the whole /opt/hcl/domino/notes/11000100/linux directory. It's a little wasteful, and I plan to find just what's needed later, but it works to do that.

Once you have that, you need to do the "user side" of it: the ID file and configuration. With a fully-installed Domino server, the data directory balloons in size rapidly, but you don't actually need the vast majority of it if you just want to use the runtime. In fact, all you really need is an ID file, a notes.ini, and a names.nsf - and the latter two can even be massively trimmed down. They do need to be custom for your environment, unfortunately, but at least it's much easier to provide just a few files than to spin up and maintain a whole server or run the Notes client locally.

Then, after you've extracted the juicy innards of the Domino image and provided your local resources, you can call NotesInitExtended pointing to your data directory (/local/notesdata in the HCL Docker image convention) and the notes.ini, and voila: you have a running app that can make local and remote Notes native API calls.
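
To show the shape of that call, here's a minimal sketch using plain JNA, assuming libnotes is on the library path. The Darwino NAPI and Domino JNA wrap this sort of thing properly, so the mapping below is just illustrative:

import com.sun.jna.Library;
import com.sun.jna.Native;

public class NotesRuntime {
    // Illustrative JNA mapping of the handful of C API entry points used here
    public interface NotesCAPI extends Library {
        NotesCAPI INSTANCE = Native.load("notes", NotesCAPI.class);

        short NotesInitExtended(int argc, String[] argv);
        void NotesTerm();
        short NotesInitThread();
        void NotesTermThread();
    }

    public static void init() {
        // argv[0] is a nominal program name; the "=" argument points the runtime
        // at a specific notes.ini, which in turn names the data directory
        String[] argv = { "app", "=/local/notesdata/notes.ini" };
        short status = NotesCAPI.INSTANCE.NotesInitExtended(argv.length, argv);
        if(status != 0) {
            throw new IllegalStateException("NotesInitExtended failed with status " + status);
        }
    }

    public static void term() {
        NotesCAPI.INSTANCE.NotesTerm();
    }
}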

Example Project

I uploaded a tiny project to demonstrate this to GitHub: https://github.com/jesse-gallagher/domino-docker-war-example. All it does is provide one JAX-RS resource that emits the server ID, but that shows the Notes API working. In this case, I used the Darwino Domino NAPI (which I really need to refresh from upstream), but Domino JNA would also work. Notes.jar would too, but I think you'll need one of those projects to do the NotesInitExtended call with arguments.

The Dockerfile for the project goes through the steps enumerated above, based on how the original example image does it, and was tweaked to bring in the Domino runtime and support files. I stripped the Liberty-specific stuff out of the pom.xml - I think that the example's original route of packaging up the whole server and then pulling it apart during Docker image creation has its uses, but it isn't needed here.

Much like the pom.xml, the code itself is slim and doesn't explicitly refer to Docker at all. I have a ServletContextListener to init and term the Notes runtime, as well as a Filter implementation to init/term the request thread, but otherwise it just calls the Notes API with no fuss.
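
The shape of that wiring looks something like the sketch below, building on the hypothetical NotesRuntime mapping above - not the example project's literal code, but the same idea:

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.annotation.WebListener;

// Init and term the Notes runtime once for the whole app
@WebListener
public class NotesRuntimeListener implements ServletContextListener {
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        NotesRuntime.init();
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        NotesRuntime.term();
    }
}

// In its own file: init and term the Notes thread state around each request
@WebFilter("/*")
public class NotesThreadFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain) throws IOException, ServletException {
        NotesRuntime.NotesCAPI.INSTANCE.NotesInitThread();
        try {
            chain.doFilter(req, resp);
        } finally {
            NotesRuntime.NotesCAPI.INSTANCE.NotesTermThread();
        }
    }
}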

Larger Projects

I haven't yet tried this with larger projects, but there's no reason it shouldn't work. The build-deploy-run cycle takes a bit more time with Docker than with just a Liberty server embedded in Eclipse normally, but the consistency may be worth it. I've gotten used to running a killall -KILL java whenever an errant process gloms on to my Notes ID file and causes the server to stop being able to init the runtime, but I'd be glad to be done with that forever. And, for my largest project - the one with the hundreds of XPages and CCs - I don't see why that wouldn't work here too.

Normal Domino Projects

Another route that I've considered for Domino in Docker is to use it to deploy NSFs and OSGi projects. This would involve using the Domino image for its intended purpose of running a full server, but configuring the INI to just serve HTTP, and having the Dockerfile place the built OSGi plugins and NSFs in their right places. This would certainly be much faster than the build-deploy-run cycle of replacing NSF designs and deploying the plugins to an Update Site NSF, though there would be a few hurdles to get over. Not impossible, though.


I figure I'll kick the tires on this some more this week - maybe try deploying the aforementioned giant XPages .war project to it - to see if it will fit into my workflow. There's a chance that the increased deployment times won't be worth it, and I won't really gain the "consistent with production" advantages of Docker when the way I'm developing the app is already a wildly-unsupported configuration. It might be worth it if I try the remote mode of Codewind, though: I have some Liberty servers that Jenkins deploys to, but it'd be even better to be able to show my running app to co-developers immediately so we can work on something together, instead of waiting for the full build. It's worth some investigation, anyway.

Managed Beans to CDI

Fri Jun 19 13:50:44 EDT 2020

  1. Jun 04 2020 - Java Services (Not the RESTful Kind)
  2. Jun 05 2020 - Java ClassLoaders
  3. Jun 19 2020 - Managed Beans to CDI
  4. Oct 18 2022 - The Myriad Idioms For Finding Implementations In Java

When I was getting familiar with modern Java server development, one of the biggest conceptual stumbling blocks by far was CDI. Part of the trouble was that I kind of jumped in the deep end, by way of JNoSQL's examples. JNoSQL is a CDI citizen through and through, and so the docs would just toss out things like how you "create a repository" by just making an interface with no implementation.

Moreover, CDI has a bit of the "Maven" problem, where, once you do the work of getting familiar with it, the parts that are completely baffling to newcomers become more and more difficult to remember as being unusual.

Fortunately, like how coming to Maven by way of Tycho OSGi projects is "hard mode", coming to CDI by way of a toolkit that uses auto-created proxy objects is a more difficult path than necessary. Even better, XPages developers have a clean segue into it: managed beans.

JSF Managed Beans

XPages inherited the original JSF concept of managed beans, where you put definitions for your beans in faces-config.xml like so:

<managed-bean>
	<managed-bean-name>someBean</managed-bean-name>
	<managed-bean-class>com.example.SomeBeanClass</managed-bean-class>
	<managed-bean-scope>application</managed-bean-scope>
	<managed-property>
		<property-name>database</property-name>
		<value>#{database}</value>
	</managed-property>
</managed-bean>

Though the syntax isn't Faces-specific, the fact that it is defined in faces-config.xml demonstrates what a JSF-ism it is. Newer versions of JSF (not XPages) let you declare your beans inline in the class, skipping the XML part:

package com.example;
// ...
@ManagedBean(name="someBean")
@ApplicationScoped
public class SomeBeanClass {
	@ManagedProperty(value="#{database}")
	private Database someProp;
}

These annotations were initially within the javax.faces package, highlighting that, while they're a new developer convenience, it's still basically the same JSF-specific thing.

While all this was going on (and before it, really), the Enterprise JavaBeans (EJB) spec was chugging along, covering some similar concepts, but it's really kind of its own, all-consuming beast. I won't talk about it much here, in large part because I've never used it, but it has an important part in this history, especially when we get to the "dependency injection" parts.

Move to CDI

Since it turns out that managed beans are a terrifically-useful concept beyond just JSF, Java EE siphoned concepts from JSF and EJB to make the obtusely named Contexts and Dependency Injection spec, or CDI. CDI is paired with some associated specs like Common Annotations and Inject to make a new bean system. With a switch to CDI, the bean above can be tweaked to something like:

package com.example;
// ...
@Named("someBean")
@ApplicationScoped
public class SomeBeanClass {
	@Inject @Named("database")
	private Database someProp;
}

Not wildly different - some same-named annotations in a different package, and some semantic switches, but the same basic idea. The difference here is that this is entirely divorced from JSF, and indeed from web apps in general. CDI specifically has a mode that works outside of a JEE/Servlet container and could work in e.g. a command-line program.
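
For example, the CDI SE API can boot a container in a plain main method - this sketch assumes a CDI implementation like Weld SE on the classpath and the SomeBeanClass bean from above:

import javax.enterprise.inject.se.SeContainer;
import javax.enterprise.inject.se.SeContainerInitializer;

public class StandaloneExample {
    public static void main(String[] args) {
        // Boot a CDI container with no servlet or JEE server in sight
        try(SeContainer container = SeContainerInitializer.newInstance().initialize()) {
            // Resolve the same bean that @Inject would hand you in a web app
            SomeBeanClass bean = container.select(SomeBeanClass.class).get();
            System.out.println(bean);
        }
    }
}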

Newer versions of JSF (and other UI engines) deprecated their own version of this to allow for CDI to be the consistent pool of variable resolution and creation for the UI and for the business logic.

The Conceptual Leap

One of the things blocking me from properly grasping CDI at first was that @Inject annotation on a property. If it's just some Java object, how would that property ever be set? Certainly, CDI couldn't be so magical that I could just do new SomeBeanClass() and have someProp populated, right? Well, yes, that's right. No matter how gussied up your class definition is with CDI annotations, constructing an instance with new will pay no attention to any of it.

What got me over the hurdle is realizing that, in a modern web app in particular, almost everything you do runs through CDI. JSP request? That can resolve CDI. JAX-RS resource? That's managed by CDI. Filters? CDI. And, because those objects are all being instantiated by CDI, the CDI runtime can do whatever the heck it wants with them. That's why the managed property in the original example is so critical: it's the same idea, just managed by the JSF runtime instead of CDI.

That's how you can get to a class like the controller that manages the posts in this blog. It's annotated with all sorts of stuff: the JAX-RS @Path, the MVC spec @Controller, the CDI @RequestScoped, and, importantly, the @Inject'ed properties. Because the JAX-RS environment instantiates its resource classes through CDI in a JEE container, those will be populated from various sources. HttpServletRequest comes from the servlet environment itself, CommentRepository comes from JNoSQL as based on an interface in my non-JEE project (more on that in a bit), and UserInfoBean is a by-the-numbers managed bean in the CDI style.
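
Approximated from memory rather than copied verbatim, the skeleton of that kind of controller looks something like this (CommentRepository and UserInfoBean being the app classes mentioned above):

import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.mvc.annotation.Controller; // package name varies by MVC spec version
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("/posts")
@Controller
@RequestScoped
public class PostController {
    @Inject
    HttpServletRequest request;     // provided by the servlet environment

    @Inject
    CommentRepository comments;     // the JNoSQL-generated proxy mentioned above

    @Inject
    UserInfoBean userInfo;          // an ordinary CDI managed bean

    @GET
    public String home() {
        // Because JAX-RS instantiated this class through CDI, all of the
        // injected fields above were populated before this method ran
        return "posts.jsp";
    }
}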

There's certainly more indirect "magic" going on here than in the faces-config.xml starting point, but it's a clear line from there to here.

The Weird Stuff

CDI covers more ground, though, and this is the sort of thing that tripped me up when I saw the JNoSQL examples. Among CDI's toolset is the creation of "proxy" objects, which are dynamic objects that intercept normal method calls with new behavior. This is a language-level Java feature that I didn't even know existed in this form, but it's been there since 1.3.

Dynamic scripting languages do this sort of thing as their bread and butter. In Ruby, you can define method_missing to be called when code calls a method that wasn't already defined, and that can respond however you'd like. Years ago, I used this to let you do doc.foo to get a document item value, for example. In Java, you get a mildly-less-loosey-goosey version of this kind of behavior with a proxy's InvocationHandler.
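
If you haven't run into it, the JDK mechanism itself is tiny: java.lang.reflect.Proxy routes every method call on a generated object to an InvocationHandler you supply. A minimal example:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    public interface Greeter {
        String greet(String name);
    }

    public static void main(String[] args) {
        // Every call on the proxy lands in this handler, with the Method and its arguments
        InvocationHandler handler = (proxy, method, methodArgs) ->
            "handled " + method.getName() + "(" + methodArgs[0] + ")";

        Greeter greeter = (Greeter)Proxy.newProxyInstance(
            ProxyDemo.class.getClassLoader(),
            new Class<?>[] { Greeter.class },
            handler);
        System.out.println(greeter.greet("world")); // prints "handled greet(world)"
    }
}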

CDI does this extensively, even when you might think it's not. With CDI, the bean instances it hands you are generally dynamic proxy objects, which allows it not only to inject field values, but also to add wrapper code around method calls. This allows tools like MicroProfile Metrics to do things like count invocations, measure timings, and so forth without requiring explicit code beyond the annotations.
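
For example, with MicroProfile Metrics, something like this sketch is about all the code you write - the counting and timing happen in proxy wrappers the runtime adds around the call:

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.metrics.annotation.Counted;
import org.eclipse.microprofile.metrics.annotation.Timed;

@ApplicationScoped
public class PostService {
    // The CDI proxy counts and times each call; the method body knows nothing about metrics
    @Counted(name = "postsLoaded")
    @Timed(name = "postsLoadTime")
    public String loadPosts() {
        return "...";
    }
}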

And then there are the whole-cloth new objects, like the JNoSQL repositories. To take one of the examples from jnosql.org, here's a full definition of a JNoSQL repository as far as the app developer is concerned:

public interface PersonRepository extends Repository<Person, Long> {

  List<Person> findByName(String name);

  Stream<Person> findByPhones(String phone);
}

Without knowledge of CDI, this is absolute madness. How could it possibly work? There's no code! The trick to it is that CDI ends up creating a dynamic proxy implementation of the interface, which is in turn backed by an InvocationHandler instance. That instance receives each incoming method call as a Method reference and an array of parameters, parses the method name to look for a concept it handles, and either generates a result or throws an exception. Once you see the capabilities the stack has, the process to get from a JAX-RS class using @Inject PersonRepository foo to having that actually work makes more sense:

  • The JAX-RS servlet receives a request for the resource
  • It asks the CDI environment to create a new instance of the resource class
  • CDI runs through the fields and methods of the class to look for annotations it can handle, where it finds @Inject
  • It looks through its contributed extensions and finds JNoSQL's ServiceLoader-provided extension
  • One of the beans from that extension can handle creating Repository instances
  • That bean creates a proxy object, which handles method calls via invoke

Still pretty weird, but at least there's a path to understanding.
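
To make that a bit more concrete, here's a toy version of the kind of handler that could sit behind such a proxy - vastly simplified compared to what JNoSQL actually does, with an in-memory list standing in for the database:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.List;
import java.util.stream.Collectors;

public class ToyRepositoryFactory {
    // Toy stand-ins; JNoSQL derives the entity and its properties from annotations
    public static class Person {
        public final String name;
        public Person(String name) { this.name = name; }
    }
    public interface PersonRepository {
        List<Person> findByName(String name);
    }

    public static PersonRepository create(List<Person> database) {
        InvocationHandler handler = (proxy, method, args) -> {
            // Parse the incoming method for a concept the handler understands
            if("findByName".equals(method.getName())) {
                return database.stream()
                    .filter(p -> args[0].equals(p.name))
                    .collect(Collectors.toList());
            }
            throw new UnsupportedOperationException(method.getName());
        };
        return (PersonRepository)Proxy.newProxyInstance(
            ToyRepositoryFactory.class.getClassLoader(),
            new Class<?>[] { PersonRepository.class },
            handler);
    }
}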

The Overall Importance

The more I use modern JEE, the more I see CDI as the backbone of the whole development experience. It's even to the point where it feels unsafe to not have it present and managing objects, like everything is held together with shoestring. And its importance is further driven home by just how many specs depend on it. In addition to many existing technologies either switching to it or otherwise supporting it, like JSF above, pretty much any new Jakarta EE or MicroProfile technology at least has it as the primary mechanism of interaction. Its importance can't be overstated, and it's worth taking some time to either build an app with it or at least watch some tutorials of it in action.