
Generating Archive Files On The Fly In Java

Thu Dec 09 10:30:36 EST 2021

Tags: java liberty

When working on version 3.0 of the Domino Open Liberty Runtime, I had occasion to do something I've done in other situations, but it occurred to me that it'd make a good post on its own. Specifically, part of one of the new features involved creating archive data on the fly, purely in-memory, and that's something that comes in handy quite a bit.

Background: The Task

The task at hand in that project involved the way the runtime will deploy custom extension features for Liberty when creating the server. There are a few of these, all centered around adding integration with Domino in one way or another. For the previously-existing Liberty features, this was done in three parts:

  • The actual Liberty extension code, which is a Java project that produces a Liberty-compatible OSGi bundle.
  • A "subsystem" module, which is a code-less Maven project that uses esa-maven-plugin to embed the above bundle and generate a "SUBSYSTEM.MF" file to describe it. This ESA/subsystem bit is a mechanism for distributing packaged features from the OSGi spec.
  • A "deployment" module, which is a small Java project that provides an extension for the Domino-side runtime to house and deploy the above ESA file.

For 3.0, I wanted to make a feature that would provide Notes.jar and the NAPI to applications. Since those files are proprietary and non-distributable, I couldn't include them in the actual runtime distribution and would instead have to look them up from Domino's environment at runtime. Additionally, since all I wanted to do was provide the existing API and not add any new code, there was no particular need to make a code project like the first one above.

More Background: Java Streams

The way these extensions are registered to Domino is by classes that provide some metadata about the feature and then a method called getEsaData() that returns an InputStream. Though InputStreams aren't the only way to represent arbitrary binary blobs like this, they're used everywhere by virtue of having been around since Java 1, and they're extremely adaptable.
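
To make that concrete, a registered extension looks something like this rough sketch - the interface and method names other than getEsaData() are illustrative, not the project's exact API:

public class ExampleFeatureExtension implements LibertyExtensionDeployment { // hypothetical interface name
  @Override
  public String getFeatureId() {
    return "exampleFeature-1.0"; // illustrative metadata
  }

  @Override
  public InputStream getEsaData() {
    // The runtime just reads the bytes and deploys them as an ESA;
    // where they come from is entirely up to this class
    return getClass().getResourceAsStream("/example-feature.esa");
  }
}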

Basically, the idea of an InputStream is that it's just a mechanism to read a sequence of bytes from somewhere. In Domino terms, they're like NotesStream, but good.

Their utility comes from their simplicity and adaptability. Because the abstract class only deals with reading bytes and a few operations for skipping around, they can be used for all sorts of things. The prototypical use is for reading a file. For example:

Path someFile = Paths.get("/foo/bar/baz.txt");
try(InputStream is = Files.newInputStream(someFile)) {
  // read file data from the stream
}

They're not limited to that, though: the JDK comes with all sorts of InputStream variants like ByteArrayInputStream, which lets you read from a byte[] in memory.
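
For example, this sketch reads entirely from memory, with no file involved:

byte[] data = "hello there".getBytes(StandardCharsets.UTF_8);
try(InputStream is = new ByteArrayInputStream(data)) {
  // read the bytes back exactly as if they had come from a file
}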

In addition to being agnostic about where the bytes are coming from, streams are also very composable. Many types of streams either must or may wrap an existing stream to alter it in some way. One of the more-common cases where you'd do this is when reading ZIP file data. Taking something similar to above:

Path someZipFile = Paths.get("/foo/bar/baz.zip");
try(
  InputStream is = Files.newInputStream(someZipFile);
  ZipInputStream zis = new ZipInputStream(is, StandardCharsets.UTF_8)
) {
  ZipEntry entry = zis.getNextEntry();
  // work with the ZIP entries
}

The thing to note here is that, while this happens to be coming from a ZIP file on disk, it doesn't have to be: that first is could just as easily be a stream coming from HttpURLConnection or a ByteArrayInputStream.

Along with InputStream, Java also has OutputStream. Luckily, OutputStream has a similarly simple design, and its implementations are a direct mirror of everything above: there exist ByteArrayOutputStream, ZipOutputStream, and all sorts of others.
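
As a quick demonstration of that symmetry - and of the fully-in-memory case - this sketch writes a small ZIP to a byte array and reads it right back, never touching disk:

ByteArrayOutputStream baos = new ByteArrayOutputStream();
try(ZipOutputStream zos = new ZipOutputStream(baos, StandardCharsets.UTF_8)) {
  zos.putNextEntry(new ZipEntry("greeting.txt"));
  zos.write("hello".getBytes(StandardCharsets.UTF_8));
}
try(ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(baos.toByteArray()), StandardCharsets.UTF_8)) {
  ZipEntry entry = zis.getNextEntry(); // "greeting.txt"
}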

Putting It Together

Back to the original goal, my task was to create a class that would provide an InputStream containing ESA data - that is to say, a ZIP file - to the runtime, which could then deploy it as a Liberty feature. The previous extensions did this by embedding the ESA in their JAR and then returning an InputStream to that. Now, though, I wanted to do it all dynamically.

Now, I talked a big game above about how streams didn't have to have anything to do with files, and it could all be done in memory. That's still all true, but technically here I ended up using files for caching purposes. The above is still good to know, though!

So anyway, my goal was to deliver an InputStream to the runtime that represented an ESA that looks like this:

Contents of a generated ESA file

Of those entries, "corba.jar" is the CORBA API from Maven Central to make Notes.jar work on Java 9+, while "Notes.jar" comes from jvm/lib/ext and "lwpd.commons.jar" and "lwpd.domino.napi.jar" come from the OSGi framework in the running Domino server. The remaining entries - the two "MF" files and the embedded JAR - are composed on the fly.

The starting point here is that I identify a cache location within my working directory based on the current Domino build number, and I store that path in a variable named "out". Then, I open it up as a ZIP to fill with contents, like above:

try(
  OutputStream os = Files.newOutputStream(out, StandardOpenOption.CREATE);
  ZipOutputStream zos = new ZipOutputStream(os, StandardCharsets.UTF_8)
) {
  // work happens here
}

SUBSYSTEM.MF

The next part is to build the "SUBSYSTEM.MF" file. As implied by the extension, this file has the same syntax as "MANIFEST.MF" files, and so I can use the java.util.jar.Manifest class to handle encoding and formatting. I start out by loading a template from the current bundle's resources:

Manifest subsystem;
try(InputStream is = getClass().getResourceAsStream("/subsystem-template.mf")) { //$NON-NLS-1$
  subsystem = new Manifest(is);
}

There, I'm using the constructor from Manifest that reads from an existing stream. Often, that would be reading a "MANIFEST.MF" from an existing JAR, but it'll work with any stream.

Then, I fill it in with some details, with lines like:

Attributes attrs = subsystem.getMainAttributes();
attrs.putValue("Subsystem-Name", getShortName()); //$NON-NLS-1$
String featureName = getShortName() + "-" + getFeatureVersion(); //$NON-NLS-1$
attrs.putValue("IBM-ShortName", featureName); //$NON-NLS-1$
// etc.

Finally, I create an entry in the ZipOutputStream and write the contents. The way ZipOutputStream works, anything written to the stream goes into whatever the most-recently-added entry is.

zos.putNextEntry(new ZipEntry("OSGI-INF/SUBSYSTEM.MF")); //$NON-NLS-1$
subsystem.write(zos);

Embedded Bundle

Alright, so far, so good. Up until now, this is the "normal" case for working with ZIP files, where you make a new entry and pour in some text data. What's neat, though, is that the encapsulation capabilities of these streams can be stacked, which is what comes up next.

Specifically, I wanted to put a ZIP file (the .jar) within this surrounding ZIP (the .esa). The way this is done is by just composing the same tools we've been working with again. Here, esa is what zos was above: the outermost package ZIP contents. I just renamed it in this method for clarity inside the code itself.

// This will be a shell bundle that in turn embeds the API JARs
esa.putNextEntry(new ZipEntry(BUNDLE_NAME + "_" + dominoVersion + ".0.0.jar")); //$NON-NLS-1$ //$NON-NLS-2$
		
// Build the embedded JAR contents
try(ZipOutputStream zos = new ZipOutputStream(esa, StandardCharsets.UTF_8)) {
  Manifest manifest;
  try(InputStream is = getClass().getResourceAsStream("/manifest-template.mf")) { //$NON-NLS-1$
    manifest = new Manifest(is);
  }
  
  // Finish manifest
  // More work here
}

So there, I'm doing basically the same thing as I did originally to make a ZipOutputStream. Since a ZipOutputStream really doesn't care what the stream it's writing to is, it works just as well when writing to a file as when writing to another ZIP stream - the cascading streams handle their own encoding and it works out in the end.

Once I write the manifest, I can make use of the Files utility class to embed each of the JARs from the filesystem:

for(Path jar : embeds) {
  zos.putNextEntry(new ZipEntry(jar.getFileName().toString()));
  Files.copy(jar, zos);
}

Finally, I download the CORBA JAR on the fly, so for that one I use a utility function to download from the remote URL:

OpenLibertyUtil.download(new URL(URL_CORBA), is -> {
  zos.putNextEntry(new ZipEntry("corba.jar")); //$NON-NLS-1$
  IOUtils.copy(is, zos);
  return null;
});

Here, I use IOUtils from Apache Commons IO because it's not copying from a filesystem path, but the idea is basically the same, and exactly the same as far as the destination ZIP is concerned.
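
Incidentally, on Java 9 and above the JDK can do this itself via InputStream#transferTo, which would drop the Commons IO dependency for this step:

OpenLibertyUtil.download(new URL(URL_CORBA), is -> {
  zos.putNextEntry(new ZipEntry("corba.jar")); //$NON-NLS-1$
  is.transferTo(zos); // equivalent to IOUtils.copy on Java 9+
  return null;
});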

The Final Result

Once this is all written to the cached file on the filesystem, the final result is just to return a stream from it:

return Files.newInputStream(out);

Since the job of this extension class is only to return an InputStream, the consuming code doesn't care that the extension did all this work, as opposed to the other ones that just return a stream of an embedded resource: everything else is the same.

So, all in all, this isn't a groundbreaking new technique, but that's the point: the way these lower-level JDK components work, you get a tremendous amount of flexibility from just a few common parts.

Version 3.0 of the Domino Open Liberty Runtime

Thu Dec 02 16:46:10 EST 2021

When I last talked about my Domino Open Liberty Runtime project, I mentioned the various main tasks I'd been doing in gearing up for a 3.0 release.

Well, it's been a while since then, but so it goes with open-source projects sometimes. Fortunately, I've had some time here and there to dust it off some more and take care of a lot of lingering tasks I had assigned to the 3.0 release, enough to make it to the finish line. Some of these I found interesting to note, and so I'll note them here.

Java Runtime Shifts

One of the neat tricks the project does is to auto-download a specified Java runtime for you: you can pick your version (e.g. 8, 11, or 17) and your flavor (HotSpot, OpenJ9, or GraalVM CE) and the tooling will do the work of downloading it on your behalf.

Originally, this used just AdoptOpenJDK, the previous go-to home of open-source Java builds. When I added GraalVM, I generalized the code to handle those downloads as well. Like AdoptOpenJDK's binaries, they're hosted on GitHub, but under a different organization and using a different naming scheme.

That refactoring paid off now: AdoptOpenJDK moved to the Eclipse Foundation and became the Adoptium project: much the same idea, but a new home. During that move, though, IBM stopped pushing OpenJ9 builds to that organization and instead now hosts them itself, under the name Semeru. I assume this was all due to some wrangling over the use of "JDK" and other political scuffling, but either way I had to deal with it.

Fortunately, though the Semeru home page is standard IBM fare, the binaries themselves are still hosted on GitHub, and the organization still mirrors the original AdoptOpenJDK/Adoptium style. So I made another class for that, but the logic is largely the same.

Notes API Feature

Since it's extremely likely that apps deployed adjacent to Domino this way will want to access Domino data, I originally set up deployment to copy the server's Notes.jar into the global "extra JARs" directory for deployed Liberty instances. This would allow apps to make use of lotus.domino classes without having to package Notes.jar inside the WAR.

This works well enough, but it always felt a little wrong: I don't like polluting the server's classpath that way, and it got all the worse because I needed to also bring in an open-source copy of CORBA classes for cases where the JVM in use is newer than 8.

So I decided to take another swing at it, this time packaging it up as a feature to be loaded as desired in Liberty. The project already has a few of these, generally consisting of the Liberty-specific OSGi bundle, the ESA feature file to define it in Liberty, and then an extension on Domino to deploy that to new servers.

In this case, I'd want to do it a little differently: the other ones included custom code on my part, but this one will just want to grab JARs from the Domino installation and make them available to apps. So, for this one, I didn't need the first two components - instead, I'd build the feature JARs on the fly. I went about doing that, and the result is a class that creates a notesApi-1.0 feature automatically. It makes use of the way streams work in Java to automatically build a feature container based on the current Domino build version (in case the API changes), creating the necessary manifests programmatically and then locating Notes.jar, the NAPI, and their CORBA and IBM Commons dependencies.

This new route is nicer all around: it's explicitly opt-in, which I like, it now also provides the NAPI, and it hides the non-API dependencies from the apps.

Generic Tooling

For a while now, I've been toying with the idea of making this less Open-Liberty-specific and adding the ability to deploy different kinds of apps. After all, Liberty here is just a forked process, and that could be anything: a different app server, a single JAR file, or something not Java-related at all. Though the tooling will still only support Open Liberty in 3.0, I've been gradually reorganizing the structure to make such a change practical to implement in, say, 4.0.

Rather than the core runtime assuming it's using Liberty, now it uses generic interfaces to represent any kind of server, and then that further drills down into servers that use a JVM, and from there down to a Liberty server.

This sort of genericizing has made the runtime much more service-based internally - it's more indirect and it takes more discipline, but I feel good about it. I picture it now as a central core coordinator (still named OpenLibertyRuntime internally) receiving notifications of changes in the admin NSF and then sending out appropriate messages to the various deployment services. The fact that there's only one such deployment service currently doesn't diminish the splendidness of my mental model.

Reverse Proxy

The reverse proxy implementations that I'd mentioned before haven't really changed much since I introduced them, but that's, I think, a good sign: they do what they're supposed to and don't necessarily need dramatic changes. The Domino-side reverse proxy - the one that lets your Liberty apps look like they're running inside Domino's normal HTTP server - has risen in my estimation since I made it, since it makes for a very-reasonable story. It's essentially like if you use the Equinox extension points to deploy a servlet or web application, except you trade the ability to run within a database context for a much-more-modern development environment.

Release

The full list of closed issues for this release is available on GitHub, and I've uploaded the distribution to OpenNTF.

Journeys Debugging Open Liberty and MVC

Tue Nov 30 16:40:24 EST 2021

I mentioned in my last post that I've been tinkering with a modern structure for OpenNTF's web site as a side project. In that, I talked about how I've been going with Jakarta MVC for the front end, but ran into an odd problem with the latest versions in Open Liberty, and that was the impetus for tinkering with ERB.

Well, I decided to go back and take a swing at trying to make JSP work in this case, since it's (still) a good engine for this purpose, and it could be a fun experiment. I was indeed able to do it, and I think the path I took is worth chronicling here.

Context

In previous projects, such as this blog, I've used older versions of the software stack involved - basically, Jakarta 8, which is before the "big bang" switch from the javax.* to jakarta.* package namespace. Since this is a clean new app, I really want to lean toward the newest versions across the board, so I pegged my plans to that.

Though Jakarta EE 9 and 9.1 (the Java 11 official version) have been out for a bit, the switchover comes with the sort of turbulence one would expect. Until just this past week, Open Liberty supported JEE 9 only in beta releases - these have historically been plenty stable for me, but it's always asking for trouble. Even with that non-beta version out, I found myself still on the beta track: I'm addicted to using MicroProfile Config, and MicroProfile's move to JEE 9 support is still itself in the RC stage.

So, okay, betas it is.

The Problem

Once I set everything up on JEE 9 and MP 5, I hit this exception when trying to render a JSP via an MVC Controller object:

java.lang.RuntimeException: SRV.8.2: RequestWrapper objects must extend ServletRequestWrapper or HttpServletRequestWrapper
  at com.ibm.wsspi.webcontainer.util.ServletUtil.unwrapRequest(ServletUtil.java:89)
  at [internal classes]
  at org.eclipse.krazo.engine.ServletViewEngine.forwardRequest(ServletViewEngine.java:135)
  at org.eclipse.krazo.engine.JspViewEngine.processView(JspViewEngine.java:58)
  at org.eclipse.krazo.core.ViewableWriter.writeTo(ViewableWriter.java:159)
  at org.eclipse.krazo.core.ViewableWriter.writeTo(ViewableWriter.java:1)
  at org.jboss.resteasy.core.interception.jaxrs.ServerWriterInterceptorContext.lambda$writeTo$1(ServerWriterInterceptorContext.java:79)
  ... 4 more

The short of it is that Krazo (the MVC implementation) passes JSP rendering along to the app container, rather than doing its own JSP work, which makes perfect sense. This, however, hits trouble within Liberty's ServletUtil class, which attempts to "unwrap" the incoming HttpServletRequest object to find the core Liberty-specific object to use extended methods on.

Normally, this sort of thing would work fine: every app server has its own variant of HttpServletRequest for its own uses, and it's perfectly reasonable to do this kind of unwrapping. However, for some reason, this was going awry.

The specific code from Krazo that calls down into Liberty code does do new HttpServletRequestWrapper(request), but that's also legal: the unwrapRequest method is intended specifically to unwrap spec-standard HttpServletRequestWrapper objects like that. So that's not our culprit, and I had to dig deeper.

Investigation

To start my investigation, I knew I'd need to work with the Krazo layer. Fortunately, though many of the moving parts here are baked into the Liberty server, Krazo is not - I include it as a Maven dependency in my app. So I cloned the Krazo source, added it to my workspace, and set my dependency on the SNAPSHOT version, allowing me to do my work inside Krazo's classes.


So what was going on? At first, I thought that maybe something had snuck in a javax.* class somewhere - old code that wasn't fully migrated to JEE 9. That would certainly cause the trouble: javax.servlet.ServletRequestWrapper and jakarta.servlet.ServletRequestWrapper are, to the JVM's eyes, entirely-unrelated classes with no compatibility whatsoever. And, indeed, looking at the source of ServletUtil could give one that impression right away, since the code uses javax.*.

That's not the trouble, however. Though the class is written to javax.*, I gather that it's run through Eclipse Transformer during packaging of the app server, and the actual class that's involved uses jakarta.*. Okay, that's good to know and makes sense, but it also doesn't get us any closer to the root problem.

For my next step, I wanted to figure out what, specifically, it was looking for. The unwrapRequest method takes a Class<?> parameter to find the needed request type, but the stack trace above hid the path it took to get there. By attaching a debugger to the server, I gleaned that it was being called by the unwrapRequest variant above it that looks specifically for a com.ibm.wsspi.webcontainer.servlet.IExtendedRequest.

Okay, so I have the name of the interface it's looking for - I can work with this. My next step was to try to get a programmatic handle on it. The basic approach, when you don't have the class as part of your project, is to look it up via:

Class<?> requestClass = Class.forName("com.ibm.wsspi.webcontainer.servlet.IExtendedRequest");

That doesn't work here, though: though the app server definitely has that class, it (properly) shields the running application from accessing the class directly, so that the app runtime isn't contaminated by the surrounding server code.

What I needed to do next was find a ClassLoader that does know about it, and the best way to do that is to find a class provided from outside the running app and ask that. Fortunately, the incoming request is exactly that. So:

Class<?> requestClass = Class.forName("com.ibm.wsspi.webcontainer.servlet.IExtendedRequest", false, request.getClass().getClassLoader());

What that does is ask whatever ClassLoader the request object comes from - that is to say, the container's loader - to find the class. And it worked! Now I could test to verify whether the core request matches the type needed:

Class<?> requestClass = Class.forName("com.ibm.wsspi.webcontainer.servlet.IExtendedRequest", false, request.getClass().getClassLoader());
System.out.println("does it match? " + requestClass.isInstance(request));

As expected, that resolved to false. In this case, that's good, since it'd have been a much-worse problem if it hadn't. But what is the request object, anyway? Well:

System.out.println(request.getClass());
// output: jdk.proxy15.$Proxy68

Right, okay, that makes sense: all sorts of stuff uses proxy objects in Java, not the least of which being the CDI environment running the whole show.

Cracking Open Proxies

So I had some good information at this point: the HttpServletRequest that Krazo is handed is a proxy object, but Liberty has a hard requirement that its dispatcher is given an instance of IExtendedRequest, which this is not. That means that something in the stack is taking the original Liberty request object and making a proxy for it - fair enough, but inconvenient for me.

My next thought was that maybe I could track down the type of proxy object it is and, with that knowledge, get the underlying delegate request. That's a common-enough pattern: have an instance property in your proxy class that contains the delegate, and (if I'm lucky) have it accessible via a getter. Java's java.lang.reflect.Proxy class has a static method for determining the object that actually handles called methods:

System.out.println(Proxy.getInvocationHandler(request));
// output: org.jboss.resteasy.core.ContextParameterInjector$GenericDelegatingProxy@fc04422d

This was starting to come together all the more. Liberty recently switched from Apache CXF to RESTEasy for its JAX-RS implementation, and that could explain why this is trouble now when it wasn't before. More importantly for my immediate needs, that also gave me a lead to track down the proxy class.

However, though it was easy enough to find, my heart sank a bit at what I found: rather than having an easy instance property to get the real request, it uses an object from its container class and in turn asks that for the request. Maybe I'd be able to get to that via reflection, but the prospect of figuring out how to work with nested class contexts caused me to try to look around elsewhere instead.

CDI

Another potential answer came to me in a flash: CDI! Access to the CDI environment is standardized, and maybe I could fetch the original request from there. It'd be extremely likely that it'd just hand me back a similar proxy, but it'd be worth a shot. So here we go:

HttpServletRequest cdiReq = CDI.current().select(HttpServletRequest.class).get();
System.out.println(cdiReq);
// output: com.ibm.ws.webcontainer40.srt.SRTServletRequest40@1a22712c

Oh! Good! That's one of Liberty's internal types! Is it the object I need, though? Well... no. Crap. requestClass.isInstance(cdiReq) is false, so this didn't really get me very far. That's a shame, too, since that solution could have involved no implementation-specific code at all.

Internal Liberty Classes

My next thought was that I should try to find another way to get around to finding the true request object. I looked back through the debug stack to find where it was originally calling unwrapRequest to get a bit more context:

WebContainerRequestState reqState = WebContainerRequestState.getInstance(false);

wasReq = (IExtendedRequest) ServletUtil.unwrapRequest(request);

Okay, so what's this with WebContainerRequestState? That sure smells like an object that's meant to be a request-wide object to get to all sorts of state. If I were to write such a class, I'd use it to stash the incoming request, as well as any other incidental data that I wouldn't want to store in a way that might leak into the app. I was a little wary that WebAppRequestDispatcher didn't use it to get the IExtendedRequest, but maybe I'd luck out.

And boy, did I! Looking down the source of the file, I found my mark: public IExtendedRequest getCurrentThreadsIExtendedRequest().

The (Provisional) Solution

Now, I had all the tools I needed. Towards the bottom of Krazo's ServletViewEngine class, I conjured up this reflective incantation:

try {
  Class<?> requestStateClass = Class.forName("com.ibm.wsspi.webcontainer.WebContainerRequestState", false, request.getClass().getClassLoader());
  Method getInstance = requestStateClass.getDeclaredMethod("getInstance", boolean.class);
  Object requestState = getInstance.invoke(null, false);
  Method getCurrentThreadsIExtendedRequest = requestStateClass.getDeclaredMethod("getCurrentThreadsIExtendedRequest");
  request = (HttpServletRequest)getCurrentThreadsIExtendedRequest.invoke(requestState);
} catch (Throwable e1) {
  // Not on WAS
}
rd.forward(new HttpServletRequestWrapper(request), new HttpServletResponseWrapper(response));

So what I'm doing here is breaking into the container's ClassLoader in order to get a handle on WebContainerRequestState. From there, I'm able to call the method to get the current instance, and then in turn call the method to get the IExtendedRequest. I overwrite the request variable we're working with, and then pass that along to the dispatcher. If any of that were to fail, I just throw up my hands, assume I'm not on Liberty, and continue on as before.

And... it works! It actually works! The pages now render properly, with all the niceness of modern JSP at my fingertips! It was fun to toy with the idea of ERB, but I like this better for an otherwise-pure-Java app for sure.

Next Steps

So I have a solution that works for me, but it's so ugly and implementation-specific that I can't exactly be comfortable with it. Still, the trouble comes from an implementation-specific source, so that may be required. Maybe I'll have to leave it like this.

More responsibly, though, what I should do is narrow this down into a reproducible case without all the other moving parts in the app to make sure that it's actually a bug/incompatibility, and thus something that I can report. This is all open-source software, after all, and it'd do nobody any good for me to let a potential actual problem linger. I'll just have to properly identify where the true culprit is first. Is it because Krazo uses RequestDispatcher in a somewhat-unusual way? Is it because RESTEasy is too aggressive about wrapping requests with no proper way to get to the delegate? Is it that Liberty should handle this case better internally? Or maybe it's just some side effect of the other things I have going on. Research is warranted.

In the meantime, that was a fun one. I don't know that I'll have a need for this specific solution again, but it was good to find, and it's always good to get some troubleshooting practice like this for sure.

Implementing Custom Token-Based Auth on Liberty With Domino

Sat Apr 24 12:31:00 EDT 2021

This weekend, I decided to embark on a small personal side project: implementing an RSS sync server I can use with NetNewsWire. It's the delightful sort of side project where the stakes are low and so I feel no pressure to actually complete it (I already have what I want with iCloud-based syncing), but it's a great learning exercise.

Fair warning: this post is essentially a travelogue of not-currently-public code for an incomplete side app of mine, and not necessarily useful as a tutorial. I may make a proper example project out of these ideas one day, but for the moment I'm just excited about how smoothly this process has gone.

The Idea

NetNewsWire syncs with a number of services, and one of them is FreshRSS, a self-hosted sync tool that uses PHP backed by an RDBMS. The implementation doesn't matter, though: what matters is that NNW can point at any server at an arbitrary URL implementing the same protocol.

As for the protocol itself, it turns out it's just the old Google Reader protocol. Like Rome, Reader rose, transformed the entire RSS ecosystem, and then crumbled, leaving its monuments across the landscape like scars. Many RSS sync services have stuck with that language ever since - it's a bit gangly, but it does the job fine, and it lowers the implementation toll on the clients.

So I figured I could find some adequate documentation and make a little webapp implementing it.

Authentication

My starting point (and all I've done so far) was to get authentication working. These servers mimic the (I assume antiquated) Google ClientLogin endpoint, where you POST "Email" and "Passwd" and get back a token in a weird little properties-ish format:

POST /accounts/ClientLogin HTTP/1.1
Content-Type: application/x-www-form-urlencoded

Email=ffooson&Passwd=secretpassword

Followed by:

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8

SID=null
LSID=null
Auth=somename/8e6845e089457af25303abc6f53356eb60bdb5f8

The format of the "Auth" token doesn't matter, I gather. I originally saw it in that "name/token" pattern, but other cases are just a token. That makes sense, since there's no need for the client to parse it - it just needs to send it back. In practice, it shouldn't have any "=" in it, since NNW parses the format expecting only one "=", but otherwise it should be up to you. Specifically, it will send it along in future requests as the Authorization header:

GET /reader/api/0/stream/items/ids?n=1000&output=json&s=user/-/state/com.google/starred HTTP/1.1
Authorization: GoogleLogin auth=somename/8e6845e089457af25303abc6f53356eb60bdb5f8

This is pretty standard stuff for any number of authentication schemes: often it'll start with "Bearer" instead of "GoogleLogin", but the idea is the same.

Implementing This

So how would one go about implementing this? Well, fortunately, the Jakarta EE spec includes a Security API that allows you to abstract the specifics of how the container authenticates a user, providing custom user identity stores and authentication mechanisms instead of or in addition to the ones provided by the container itself. This is as distinct from a container like Domino, where the HTTP stack handles authentication for all apps, and the only way to extend how that works is by writing a native library with the C-based DSAPI. Possible, but cumbersome.

Identity Store

We'll start with the identity store. Often, a container will be configured with its own concept of what the pool of users is and how they can be authenticated. On Domino, that's generally the names.nsf plus anything configured in a Directory Assistance database. On Liberty or another JEE container, that might be a static user list, an LDAP server, or any number of other options. With the Security API, you can implement your own. I've been ferrying around classes that look like this for a couple of years now:

/* snip */

import javax.security.enterprise.credential.Credential;
import javax.security.enterprise.credential.UsernamePasswordCredential;
import javax.security.enterprise.identitystore.CredentialValidationResult;
import javax.security.enterprise.identitystore.IdentityStore;

@ApplicationScoped
public class NotesDirectoryIdentityStore implements IdentityStore {
    @Inject AppConfig appConfig;

    @Override public int priority() { return 70; }
    @Override public Set<ValidationType> validationTypes() { return DEFAULT_VALIDATION_TYPES; }

    public CredentialValidationResult validate(UsernamePasswordCredential credential) {
        try {
            try(DominoClient client = DominoClientBuilder.newDominoClient().build()) {
                String dn = client.validateCredentials(appConfig.getAuthServer(), credential.getCaller(), credential.getPasswordAsString());
                return new CredentialValidationResult(null, dn, dn, dn, getGroups(dn));
            }
        } catch (NameNotFoundException e) {
            return CredentialValidationResult.NOT_VALIDATED_RESULT;
        } catch (AuthenticationException | AuthenticationNotSupportedException e) {
            return CredentialValidationResult.INVALID_RESULT;
        }
    }

    @Override
    public Set<String> getCallerGroups(CredentialValidationResult validationResult) {
        String dn = validationResult.getCallerDn();
        return getGroups(dn);
    }

    /* snip */
}

There's a lot going on here. To start with, the Security API goes hand-in-hand with CDI. That @ApplicationScoped annotation on the class means that this IdentityStore is an app-wide bean - Liberty picks up on that and registers it as a provider for authentication. The AppConfig is another CDI bean, this one housing the Domino server I want to authenticate against if not the local runtime (handy for development).

The IdentityStore interface definition does a little magic for identifying how to authenticate. The way it works is that the system uses objects that implement Credential, an extremely-generic interface to represent any sort of credential. When the default implementation is called, it looks through your implementation class for any methods that can handle the specific credential class that came in. You can see above that validate(UsernamePasswordCredential credential) isn't tagged with @Override - that's because it's not implementing an existing method. Instead, the core validate looks for other methods named validate to take the incoming class. UsernamePasswordCredential is one of the few stock ones that comes with the API and is how the container will likely ask for authentication if using e.g. HTTP Basic auth.

Here, I use some Domino API to check the username+password combination against the Domino directory and inform the caller whether the credentials match and, if so, what the user's distinguished name and group memberships are (with some implementation removed for clarity).

Token Authentication

That's all well and good, and will allow a user to log into the app with HTTP Basic authentication with a Domino username and password, but I'd also like the aforementioned GoogleLogin tokens to count as "real" users in the system.

To start doing that, I created a JAX-RS resource for the expected login URL:

@Path("accounts")
public class AccountsResource {
    @Inject TokenBean tokens;
    @Inject IdentityStore identityStore;

    @PermitAll
    @Path("ClientLogin")
    @POST
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    @Produces(MediaType.TEXT_HTML)
    public String post(@FormParam("Email") @NotEmpty String email, @FormParam("Passwd") String password) {
        CredentialValidationResult result = identityStore.validate(new UsernamePasswordCredential(email, password));
        switch(result.getStatus()) {
        case VALID:
            Token token = tokens.createToken(result.getCallerDn());
            String mangledDn = result.getCallerDn().replace('=', '_').replace('/', '_');
            return MessageFormat.format("SID=null\nLSID=null\nAuth={0}\n", mangledDn + "/" + token.token()); //$NON-NLS-1$ //$NON-NLS-2$
        default:
            // TODO find a better exception
            throw new RuntimeException("Invalid credentials");
        }
    }

}

Here, I make use of the IdentityStore implementation above to check the incoming username/password pair. Since I can @Inject it based on just the interface, the fact that it's authenticating against Domino isn't relevant, and this class can remain blissfully unaware of the actual user directory. All it needs to know is whether the credentials are good. In any event, if they are, it returns the weird little format in the response and the RSS client can then use it in the future.

The TokenBean class there is another custom CDI bean, and its job is to create and look up tokens in the storage NSF. The pertinent part is:

@ApplicationScoped
public class TokenBean {
    @Inject @AdminUser
    Database adminDatabase;

    public Token createToken(String userName) {
        Token token = new Token(UUID.randomUUID().toString().replace("-", ""), userName); //$NON-NLS-1$ //$NON-NLS-2$
        adminDatabase.createDocument()
            .replaceItemValue("Form", "Token") //$NON-NLS-1$ //$NON-NLS-2$
            .replaceItemValue("Token", token.token()) //$NON-NLS-1$
            .replaceItemValue("User", token.user()) //$NON-NLS-1$
            .save();
        return token;
    }

    /* snip */
}

Nothing too special there: it just creates a random token string value and saves it in a document. The token could be anything; I could have easily gone with the document's UNID, since it's basically the same sort of value.

I'll save the @Inject @AdminUser bit for another day, since we're already far enough into the CDI weeds here. Suffice it to say, it injects a Database object for the backing data DB for the designated admin user - basically, like opening the current DB with sessionAsSigner in XPages. The @AdminUser is a custom annotation in the app to convey this meaning.

Okay, so great, now we have a way for a client to log in with a username and password and get a token to then use in the future. That leaves the next step: having the app accept the token as an equivalent authentication for the user.

Intercepting the incoming request and analyzing the token is done via another Jakarta Security API interface: HttpAuthenticationMechanism. Creating a bean of this type allows you to look at an incoming request, see if it's part of your custom authentication, and handle it any way you want. In mine, I look for the "GoogleLogin" authorization header:

@ApplicationScoped
public class TokenAuthentication implements HttpAuthenticationMechanism {
    @Inject IdentityStore identityStore;
    
    @Override
    public AuthenticationStatus validateRequest(HttpServletRequest request, HttpServletResponse response,
            HttpMessageContext httpMessageContext) throws AuthenticationException {
        
        String authHeader = request.getHeader("Authorization"); //$NON-NLS-1$
        if(StringUtil.isNotEmpty(authHeader) && authHeader.startsWith(GoogleAccountTokenHandler.AUTH_PREFIX)) {
            CredentialValidationResult result = identityStore.validate(new GoogleAccountTokenHeaderCredential(authHeader));
            switch(result.getStatus()) {
            case VALID:
                httpMessageContext.notifyContainerAboutLogin(result);
                return AuthenticationStatus.SUCCESS;
            default:
                return AuthenticationStatus.SEND_FAILURE;
            }
        }
        
        return AuthenticationStatus.NOT_DONE;
    }

}

Here, I look for the "Authorization" header and, if it starts with "GoogleLogin auth=", then I parse it for the token, create an instance of an app-custom GoogleAccountTokenHeaderCredential object (implementing Credential) and ask the app's IdentityStore to authorize it.

Returning to the IdentityStore implementation, that meant adding another validate override:

@ApplicationScoped
public class NotesDirectoryIdentityStore implements IdentityStore {
    /* snip */

    public CredentialValidationResult validate(GoogleAccountTokenHeaderCredential credential) {
        try {
            try(DominoClient client = DominoClientBuilder.newDominoClient().build()) {
                String dn = client.validateCredentialsWithToken(appConfig.getAuthServer(), credential.headerValue());
                return new CredentialValidationResult(null, dn, dn, dn, getGroups(dn));
            }
        } catch (NameNotFoundException e) {
            return CredentialValidationResult.NOT_VALIDATED_RESULT;
        } catch (AuthenticationException | AuthenticationNotSupportedException e) {
            return CredentialValidationResult.INVALID_RESULT;
        }
    }
}

This one looks similar to the UsernamePasswordCredential one above, but takes instances of my custom Credential class - automatically picked up by the default implementation. I decided to be a little extra-fancy here: the particular Domino API in question supports custom token-based authentication to look up a distinguished name, and I made use of that here. That takes us one level deeper:

public class GoogleAccountTokenHandler implements CredentialValidationTokenHandler<String> {
    public static final String AUTH_PREFIX = "GoogleLogin auth="; //$NON-NLS-1$
    
    @Override
    public boolean canProcess(Object token) {
        if(token instanceof String authHeader) {
            return authHeader.startsWith(AUTH_PREFIX);
        }
        return false;
    }

    @Override
    public String getUserDn(String token, String serverName) throws NameNotFoundException, AuthenticationException, AuthenticationNotSupportedException {
        String userTokenPair = token.substring(AUTH_PREFIX.length());
        int slashIndex = userTokenPair.indexOf('/');
        if(slashIndex >= 0) {
            String tokenVal = userTokenPair.substring(slashIndex+1);
            Token authToken = CDI.current().select(TokenBean.class).get().getToken(tokenVal)
                .orElseThrow(() -> new AuthenticationException(MessageFormat.format("Unable to find token \"{0}\"", token)));
            return authToken.user();
        }
        throw new AuthenticationNotSupportedException("Malformed token");
    }

}

This is the Domino-specific one, inspired by the Jakarta Security API. I could also have done this lookup in the previous class, but this way allows me to reuse this same custom authentication in any API use.

Anyway, this class uses another method on TokenBean:

@ApplicationScoped
public class TokenBean {    
    @Inject @AdminUser
    Database adminDatabase;

    /* snip */

    public Optional<Token> getToken(String tokenValue) {
        return adminDatabase.openCollection("Tokens") //$NON-NLS-1$
            .orElseThrow(() -> new IllegalStateException("Unable to open view \"Tokens\""))
            .query()
            .readColumnValues()
            .selectByKey(tokenValue, true)
            .firstEntry()
            .map(entry -> new Token(entry.get("Token", String.class, ""), entry.get("User", String.class, ""))); //$NON-NLS-1$ //$NON-NLS-2$ //$NON-NLS-3$ //$NON-NLS-4$
    }
}

There, it looks up the requested token in the "Tokens" view and, if present, returns a record indicating that token and the user it was created for. The latter is then returned by the above Domino-custom GoogleAccountTokenHandler as the authoritative validated user. In turn, the JEE NotesDirectoryIdentityStore considers the credential validation successful and returns it back to the auth mechanism. Finally, the TokenAuthentication up there sees the successful validation and notifies the container about the user that the token mapped to.

Summary

So that turned into something of a long walk at the end there, but the result is really neat: as far as my app is concerned, the "GoogleLogin" tokens - as looked up in an NSF - are just as good as username/password authentication. Anything that calls httpServletRequest.getUserPrincipal() will see the username from the token, and I also use this result to spawn the Domino session object for each request.
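
For example, a downstream JAX-RS resource can remain blissfully unaware of the mechanism - a minimal sketch, not code from the actual app:

@Path("whoami")
public class WhoAmIResource {
  @Context
  HttpServletRequest request;

  @GET
  public String get() {
    // Resolves to the same name whether auth came in via Basic auth or a GoogleLogin token
    return request.getUserPrincipal().getName();
  }
}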

Once all these pieces are in place, none of the rest of the app has to have any knowledge of it at all. When I implement the API to return the actual RSS feed entries, I'll be able to just use the current user, knowing that it's guaranteed to be properly handled by the rest of the system beforehand.

Bonus: Java 16

This last bit isn't really related to the above, but I just want to gush a bit about newer techs. My plan is to deploy this app using my Open Liberty Runtime, which means I can use any Open Liberty and Java version I want. Java 16 came out recently, so I figured I'd give that a shot. Though I don't think Liberty is officially supported on it yet, it's worked out just fine for my needs so far.

This lets me use the features that have come into Java in the last few years, a couple of which moved from experimental/incubating into finalized forms in 16 specifically. For example, I can use records, a specialized type of Java class intended for immutable data. Token is a perfect case for this:

public record Token(String token, String user) {
}

That's the entirety of the class. Because it's a record, it gets a constructor with those two properties, plus accessor methods named after the properties (as used in the examples above). Neat!
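
For instance, with hypothetical values:

Token token = new Token("8e6845e089457af25303abc6f53356eb60bdb5f8", "CN=Foo Fooson/O=SomeOrg");
String tokenValue = token.token(); // generated accessor - no handwritten getters needed
String userName = token.user();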

Another handy new feature is pattern matching for instanceof. This allows you to simplify the common idiom where you check if an object is a particular type, then cast it to that type afterwards to do something. With this new syntax, you can compress that into the actual test, as seen above:

@Override
public boolean canProcess(Object token) {
    if(token instanceof String authHeader) {
        return authHeader.startsWith(AUTH_PREFIX);
    }
    return false;
}

Using this allows me to check the incoming value's type while also immediately creating a variable to treat it as such. It's essentially the same thing you could do before, but cleaner and more explicit now. There's more of this kind of thing on the way, and I'm looking forward to the future additions eagerly.

Rapid Progress in Open-Liberty-Runtime Land

Tue Mar 16 18:07:30 EDT 2021

Tags: liberty
  1. Options for the Future of the Domino Open Liberty Runtime
  2. Next Steps With the Open Liberty Runtime
  3. Rapid Progress in Open-Liberty-Runtime Land

After my work implementing a reverse proxy the other day, my mental gears kept churning, and I've made some great progress on some new ideas and some ones I had had kicking around for a while.

Domino-Hosted Reverse Proxy

In my last post, I described the new auto-configuring reverse proxy I added, which uses Undertow on a separate port, supporting HTTP/2 and WebSocket. This gives you a unified layout that points to your configured webapps first and then, for all other URLs, points to Domino.

After that, though, I realized that there'd be some convenience value in doing that kind of thing in Domino's HTTP stack itself. The HttpService classes that hook into the XspCmdManager class are designed for just this sort of purpose: listen for designated URLs and handle them in a custom way. I realized that I could watch for incoming requests in the webapps' context roots and direct to them from Domino itself. So that's just what I did. When enabled, you can go to a URL for a configured webapp path (say, "/exampleapp") right on Domino's HTTP/HTTPS port like normal and it'll proxy transparently to the backing app. Better still, it picks up on the mechanisms that Liberty provides to work with X-Forwarded-* headers and $WS* headers to pass along incoming request information and authenticated-user context.
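
In skeletal form, such a service looks roughly like the sketch below. The signatures here are my recollection of the XPages adapter API, the context root is a placeholder, and the actual proxying work is elided to comments:

public class ProxyHttpService extends HttpService {
  public ProxyHttpService(LCDEnvironment env) {
    super(env);
  }

  @Override
  public boolean doService(String contextPath, String path, HttpSessionAdapter session,
      HttpServletRequestAdapter req, HttpServletResponseAdapter resp) throws ServletException, IOException {
    if(path != null && path.startsWith("/exampleapp")) { // a configured webapp context root
      // Open a connection to the backing Liberty server, stream the request through,
      // copy the response back, and add X-Forwarded-*/$WS* headers for user context
      return true; // handled - nHTTP stops here
    }
    return false; // not ours - Domino continues as normal
  }

  @Override
  public void getModules(List<ComponentModule> modules) {
    // No ComponentModules to contribute
  }
}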

The way I'm describing this may sound a bit dry and abstract, but I think this has a lot of potential, at least when you don't need HTTP/2. With this setup, you can attach fully-modern WAR files in an NSF, configure a server with the very latest Java server technologies and any Java version of your choosing, and have it appear like any other web app on Domino. /foo.nsf goes to your NSF, /fancyapp goes to a modern Java app. Proper webapps, no OSGi dependency nightmare, no Domino-toolchain miasma (well, less of one), deployed seamlessly via NSF - I think it's pretty neat.

Mix-and-Match Runtimes and Java Versions

Historically, the project has had a single configuration document where you specify the version of Open Liberty and your Java version and flavor of choice for all configured apps. Now, though, I've added the ability to pick those on a per-server basis. This can come in handy if you want to use Java 11 (the current LTS version) for complicated apps, but try out the just-released Java 16 for a new app.

Progress on Genericizing the Tooling

Though the project is named after Open Liberty, there's not really anything about the concept that's specific to Liberty as such. Liberty is extremely good and it's particularly well-suited to this purpose, but there's no reason I couldn't adapt this to run any app server, or really any generic process.

Actually supporting anything else is a big task - every server or task would have its own concept of what an "app" is, how configuration is done, how to monitor logs, how to identify open ports, etc. - but the first step is to at least lay the groundwork. So that I did: I've embarked on the path of separating the core runtime loop (start/stop/restart/refresh/etc.) from the specifics of Liberty.

There's still tons of work to do there, and I'm not fully convinced that it'd be worth it (since you should really be writing Java webapps anyway), but that potential future path is smoother now.

Overall

I think this is getting close to the point where it'll be a proper 3.0 release, and it's also getting to a point where the "why is this good?" pitch should be an easier sell for people who aren't already me. I still have vague plans to do a video or webcast on this, and this should make for a less-arcane time of that. So, we'll see! In the mean time, this should all make my own uses all the better.

Next Steps With the Open Liberty Runtime

Fri Mar 12 11:37:39 EST 2021

Tags: liberty
  1. Options for the Future of the Domino Open Liberty Runtime
  2. Next Steps With the Open Liberty Runtime
  3. Rapid Progress in Open-Liberty-Runtime Land

About a year and a half ago, I wrote a post musing about my options with the future of the Domino Open Liberty Runtime project. It's been serving me well - I still use it here and with a client CI setup - but it hasn't quite hit its full potential yet.

Its short-term goal was easy enough to accomplish: I wanted a good way to run modern Servlet apps using an active Domino runtime, and that works great. Its long-term goal takes more work, though: becoming the clear best way to do "web stuff" on Domino. There are a lot of definitions for what that might be, and that "on Domino" bit may not even be the way one would want to go about doing it. Still, I think there's potential there.

So, this week, I decided to go back in and see if I could spruce it up a bit.

The Core of Domino Java HTTP

This started with me musing a bit on Twitter about what the true lowest-level entrypoint in Domino's HTTP stack is, and where the border between native and Java lies. After overthinking it a bit, I found that the answer was obvious from any stack trace: XspCmdManager.

From what I gather, the native HTTP task (which is much more opaque than the Java part) loads up its JVM, uses the code in xsp.http.bootstrap.jar to initialize the OSGi environment, asks that environment for the com.ibm.domino.xsp.bridge.http bundle, and uses XspCmdManager in there to handle the layering.

That class has a couple public methods, but two are of immediate interest: isXspUrl and service. The isXspUrl is called for each incoming HTTP request. If that returns false, then nHTTP goes about its normal business like it always did; if it returns true, then nHTTP calls service with a bunch of handle parameters and lets the Java runtime take it from there.

That got me to tinkering. Since that class is in an OSGi bundle, you can readily "outrank" it by having another bundle with the same name and a higher version available. Then, since the class and its methods are just called by strings (more or less), you can have other classes with the same names and APIs in place to do whatever you want. And, such as it is, that works well: you can pretty readily inject whatever code you want into the isXspUrl and service methods and have it take over.
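
For illustration, the impostor bundle's MANIFEST.MF needs little more than the same symbolic name and an implausibly-high version (the version here is arbitrary):

Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.ibm.domino.xsp.bridge.http
Bundle-Version: 99.0.0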

However, that doesn't actually buy you much. What I'd really want to do would be to improve on the actual HTTP server - HTTP/2 support, web sockets, all that - and the Java layer only comes into play after nHTTP has received and started interpreting the connection as an HTTP request. You're not given the raw incoming stream. Additionally, there's not actually any real need to override things at this low level: the HttpService classes you can register via the com.ibm.xsp.adapter.serviceFactory extension point can choose to handle any incoming URL directly at essentially the same low level as XspCmdManager.

So, while that was fun to poke around with, I don't think there's anything really to be gained there.

Reverse Proxy Improvements

So I went back to an older idea I had kicking around for spawning an all-encompassing reverse proxy. The project has had a lesser version of this for a good while, originally as a WAR file you could add to a Liberty server and then later as a lower-level Liberty feature. However, the way that worked was limited: it would allow you to proxy non-matched requests to a Liberty server to Domino, but didn't do anything to coordinate multiple servers beyond that. Additionally, being a Liberty feature, it limited my future options, such as genericizing the project to work with other app servers.

For my next swing at the problem, I went with Undertow, which is an embeddable Java web server in many ways similar to Jetty, and which is (I gather) the core HTTP part of Wildfly. What made Undertow appealing to me was its modern standards compliance, its relatively-low dependency footprint, and its built-in reverse proxy handler. Additionally, since it's Java, that meant I could embed it in the running JVM without spawning yet another process, hopefully making things all the more reliable.

To go with this, the config DB sprouted some more configuration options:

Reverse proxy config

Along with configuration you explicitly set there and in the individual Liberty server configurations, I have the proxy pick up Domino connection information from names.nsf, allowing it to avoid inconvenient extra environment variables or flags.

And, so far, this has been working splendidly. Undertow's configuration is pretty straightforward, and it wasn't too bad to configure it with prefix matching for the context roots of opted-in apps.
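
As a sketch of the shape this takes with Undertow 2.x - hosts, ports, and paths here are placeholders, not the project's actual wiring:

// io.undertow.* classes
LoadBalancingProxyClient liberty = new LoadBalancingProxyClient()
    .addHost(URI.create("http://localhost:9080")); // an opted-in Liberty app
LoadBalancingProxyClient domino = new LoadBalancingProxyClient()
    .addHost(URI.create("http://localhost:80"));   // everything else falls through to Domino

PathHandler paths = Handlers.path(ProxyHandler.builder().setProxyClient(domino).build())
    .addPrefixPath("/exampleapp", ProxyHandler.builder().setProxyClient(liberty).build());

Undertow server = Undertow.builder()
    .addHttpListener(8080, "0.0.0.0")
    .setHandler(paths)
    .build();
server.start();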

The Next Overall Goal

There's more work to do, beyond just finishing the basic implementation here. I'd really like to get it to a point where you can use this to deploy (at least) WAR-based apps "to Domino" without having to think too much about it, like how you don't have to think about deploying an NSF-based app. It should be thoroughly doable to have the reverse proxy pick up its certificate chain from Domino if desired (especially with the revamped capabilities coming in V12), and some recent changes I made make app deployment noticeably smoother than previously.

Certainly, this sort of project has some inherent limitations compared to nHTTP, but this feels like it's getting a lot closer to a direct upgrade and less like a janky proof-of-concept.

Weekend Domino-Apps-in-Docker Experimentation

Sun Jun 28 18:37:19 EDT 2020

  1. Weekend Domino-Apps-in-Docker Experimentation
  2. Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker
  3. Getting to Appreciate the Idioms of Docker

For a couple of years now, first IBM and then HCL have worked on and adapted community work to get Domino running in Docker. I've observed this for a while, but haven't had a particular need: while it's nice and all to be able to spin up a Domino server in Docker, it's primarily an "admin" thing. I have my suite of development Domino servers in VMs, and they're chugging along fine.

However, a thought has always gnawed at the back of my mind: a big pitch of Docker is that it makes not just deployment consistent, but also development, taking away a chunk of the hassle of setting up all sorts of associated tools around development. It's never been difficult, per se, to install a Postgres server, but it's all the better to be able to just say that your app expects to have one around and let the tooling handle the specifics for you. Domino isn't quite as Docker-friendly as Postgres or other tools, but the work done to get the official image going with 11.0.1 brought it closer to practicality. This weekend, I figured I'd give it a shot.

The Problem

It's worth taking a moment to explain why it'd be worth bothering with this sort of setup at all. The core trouble is that running an app with a Notes runtime is extremely annoying. You have to make sure that you're pointing at the right libraries, that they're all in the right place to be available in their internal dependency tree, that you set a bunch of environment variables, and that you provide specialized contextual info, like an ID file. You actually have the easiest time on Windows, though it's still a bit of a hurdle. Linux and macOS have their own impediments, though, some of which can be showstoppers for certain tasks. They're impediments worth overcoming to avoid having to use Windows, but they're impediments nonetheless.

The Setup

But back to Docker.

For a little while now, the Eclipse Marketplace has had a prominent spot for Codewind, an IBM-led Eclipse Foundation project to improve the experience of development with Docker containers. The project supplies plugins for Eclipse, IntelliJ, and VS Code / Eclipse Che, but I still spend most of my time in Eclipse, so I went with the Eclipse plugin.

To begin with, I started with the default "Open Liberty" project you get when you create a new project with the tooling. As I looked at it, I realized with a bit of relief that there's not too much special about the project itself: it's a normal Maven project with war packaging that brings in some common dependencies. There's no Maven build step that expects Docker at all. The specialized behavior comes (unsurprisingly, if you use Docker already) in the Dockerfile, which goes through the process of building the app, extracting the important build results into a container based on the open-liberty runtime image, bringing in support files from the project, and launching Liberty. Nothing crazy, and the vast majority of the code more shows off MicroProfile features than anything about Docker specifically.

Bringing in Domino

The Docker image that HCL provides is a fully-fledged server, but I don't really care about that: all I really need is the sweet, sweet libnotes.so and associated support libraries. Still, the easiest way to accomplish that is to just copy in the whole /opt/hcl/domino/notes/11000100/linux directory. It's a little wasteful, and I plan to pare it down to just what's needed later, but it works for now.

Once you have that, you need to do the "user side" of it: the ID file and configuration. With a fully-installed Domino server, the data directory balloons in size rapidly, but you don't actually need the vast majority of it if you just want to use the runtime. In fact, all you really need is an ID file, a notes.ini, and a names.nsf - and the latter two can even be massively trimmed down. They do need to be custom for your environment, unfortunately, but at least it's much easier to provide just a few files than spin up and maintain a whole server or run the Notes client locally.

Then, after you've extracted the juicy innards of the Domino image and provided your local resources, you can call NotesInitExtended pointing to your data directory (/local/notesdata in the HCL Docker image convention) and the notes.ini, and voila: you have a running app that can make local and remote Notes native API calls.
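As a sketch of what that init call looks like in Java - here using Domino JNA's standalone init helper, since the mechanics are the same whichever wrapper you use; treat the exact class and method names as an assumption and check the project you pick for its real entrypoint:

// Hypothetical sketch: NotesInitUtils is Domino JNA's standalone-init helper
import com.mindoo.domino.jna.utils.NotesInitUtils;

String notesProgramDir = "/opt/hcl/domino/notes/11000100/linux";
String notesDataDir = "/local/notesdata";

// The "=" prefix is the C API convention for pointing at a specific notes.ini
NotesInitUtils.notesInitExtended(new String[] {
    notesProgramDir,
    "=" + notesDataDir + "/notes.ini"
});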

Example Project

I uploaded a tiny project to demonstrate this to GitHub: https://github.com/jesse-gallagher/domino-docker-war-example. All it does is provide one JAX-RS resource that emits the server ID, but that shows the Notes API working. In this case, I used the Darwino Domino NAPI (which I really need to refresh from upstream), but Domino JNA would also work. Notes.jar would too, but I think you'll need one of those projects to do the NotesInitExtended call with arguments.

The Dockerfile for the project goes through the steps enumerated above, based on how the original example image does it, and was tweaked to bring in the Domino runtime and support files. I stripped the Liberty-specific stuff out of the pom.xml - I think that the original route the example did of packaging up the whole server and then pulling it apart in Docker image creation has its uses, but isn't needed here.

Much like the pom.xml, the code itself is slim and doesn't explicitly refer to Docker at all. I have a ServletContextListener to init and term the Notes runtime, as well as a Filter implementation to init/term the request thread, but otherwise it just calls the Notes API with no fuss.
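The filter half of that is only a few lines - a hedged sketch, with an illustrative class name:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import lotus.domino.NotesThread;

@WebFilter("/*")
public class NotesThreadFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        NotesThread.sinitThread(); // attach the Notes runtime to this request thread
        try {
            chain.doFilter(req, res);
        } finally {
            NotesThread.stermThread(); // always detach, even when the request throws
        }
    }
}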

Larger Projects

I haven't yet tried this with larger projects, but there's no reason it shouldn't work. The build-deploy-run cycle takes a bit more time with Docker than with just a Liberty server embedded in Eclipse normally, but the consistency may be worth it. I've gotten used to running a killall -KILL java whenever an errant process gloms on to my Notes ID file and causes the server to stop being able to init the runtime, but I'd be glad to be done with that forever. And, for my largest project - the one with the hundreds of XPages and CCs - I don't see why that wouldn't work here too.

Normal Domino Projects

Another route that I've considered for Domino in Docker is to use it to deploy NSFs and OSGi projects. This would involve using the Domino image for its intended purpose of running a full server, but configuring the INI to just serve HTTP, and having the Dockerfile place the built OSGi plugins and NSFs in their right places. This would certainly be much faster than the build-deploy-run cycle of replacing NSF designs and deploying the plugins to an Update Site NSF, though there would be a few hurdles to get over. Not impossible, though.


I figure I'll kick the tires on this some more this week - maybe try deploying the aforementioned giant XPages .war project to it - to see if it will fit into my workflow. There's a chance that the increased deployment times won't be worth it, and I won't really gain the "consistent with production" advantages of Docker when the way I'm developing the app is already a wildly-unsupported configuration. It might be worth it if I try the remote mode of Codewind, though: I have some Liberty servers that Jenkins deploys to, but it'd be even better to be able to show a running app to co-developers immediately while working on something, instead of waiting for the full build. It's worth some investigation, anyway.

Options for the Future of the Domino Open Liberty Runtime

Mon Nov 18 10:57:24 EST 2019

Tags: java liberty
  1. Options for the Future of the Domino Open Liberty Runtime
  2. Next Steps With the Open Liberty Runtime
  3. Rapid Progress in Open-Liberty-Runtime Land

I've been thinking lately about what I can do with the Open Liberty Runtime project. If you're not familiar with it, the gist of it is that it gloms the Open Liberty Java/Jakarta EE app server onto Domino to allow deploying up-to-date WAR-based apps "to Domino", sharing the server's access to NSFs.

In its current form, it's serving me reasonably well: this blog is running on it, for example, and writing the SFTP server to target it has been a delight. I've been tracking a set of issues about it, mostly relating to improving the administration/deployment side in ways that would benefit how I currently use it. There are some more-out-there ideas like running apps from NSFs, but for the most part it's been focused on "a JEE server attached to Domino".

Slightly-Tweaked Model

But what I've been thinking about lately is tweaking the model a bit to be less like "a new HTTP runtime" and more like a series of related servers, which would then be proxied to. This is in line with what Liberty has been trending towards anyway. Though it started out as essentially a cleaned-up WebSphere monolith, the fact that it's been targeted at cloud-service use has meant that it's been aggressively tuned to allow you to include only the specific features you need, with the idea that there will be many instances of Liberty running, one per app. This is also seen in their relentless push for faster startup times, something that only matters if you plan to start up instances of the server very frequently.

My Runtime project technically supports this model currently. Though it uses a single Liberty runtime download, the main Servers list is geared towards creating multiple running servers with their own app sets, which end up being distinct processes:

Open Liberty Admin Servers view

So one could hypothetically put dozens of servers in here and use it as I've been describing. Other than providing that capability, though, the tooling doesn't really help you much: in particular, it would do nothing to alleviate the problems of conflicting port mappings or unifying all of these apps into a single front. I did a little work making a reverse-proxy webapp, but it's really meant for the case of having one Liberty server with all of your apps, and wouldn't have any meaning in a many-server setup.

One option I've been considering here is giving the server side of this some knowledge of and control over the HTTP ports used by the individual Liberty servers, so that it could assign random ones automatically. Then, I'd also be able to add on a dynamically-configured reverse proxy that would know about these different running servers and could route requests automatically.
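For the port-assignment piece, the simplest trick is to let the OS hand out a free port at config-generation time - an illustrative sketch, with the usual caveat that there's a small race between probing and the server actually binding:

import java.io.IOException;
import java.net.ServerSocket;

public static int pickFreePort() throws IOException {
    try (ServerSocket probe = new ServerSocket(0)) {
        return probe.getLocalPort(); // OS-assigned and free at probe time
    }
}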

Other Runtimes

On top of that, there's nothing about the project that's really inherently tied to Open Liberty as such. It currently assumes that that's the runtime, but the only way it interacts with it is by pouring files out to the filesystem and then running some known shell scripts. There's nothing stopping such a setup from also gaining a bit of knowledge about, say, Node, Swift, or any number of other runtimes. Things would get progressively weirder the further you get from Java when you want to access the Domino runtime, but hey, that's what the C API is for!

Reinventing the Wheel

While mulling this over, though, I did realize one thing: I'm essentially describing a jankier version of Kubernetes. For the most part, the problem of running disparate apps with their own runtimes and needs managed by a central server is solved by that and Docker. This project would be a bit different in that the apps would automatically inherit a Domino runtime and would also (usefully) maintain clustered/replicated configuration by way of the Domino server backbone, so it's not exactly the same thing. And, while HCL is pushing for a Dockerized future, the pieces aren't there yet and won't fully be for a while: while Domino-on-Docker is a thing technically, "many Notes-runtime apps with many server IDs" for NRPC use is a licensing minefield, and the gRPC bindings are currently painfully limited in both capability and language support.

So I likely would be best off just letting time (which is to say, HCL) solve the fancier problems and just focusing on the Runtime's original job of being a better Java app server for Domino. I do think it would be useful to better support the one-server-per-app approach, especially for a hypothetical case of wanting to deploy an XPages app in one Liberty server but then a JSP or JSF app in another, and that'll take a better reverse proxy.

I think it's good to mull over, though. This already provides a good path to better app dev with Domino, and smoothing that out more will make my life all the better and will hopefully be useful to others.

Writing Domino JEE Apps Outside Domino

Fri Nov 08 13:47:13 EST 2019

In my last post, I mentioned that I decided to write the File Store app as a Jakarta EE application running in a web server alongside Domino, instead of as an app running on the server itself. In the comments, Fredrik Norling asked the natural question of whether it'd have been difficult to run this on the server, which in turn implies a lot of questions about deployment, toolkits, and other aspects.

Why Not

My immediate answer is that it would certainly be doable to run this on Domino, but the targeted nature of the app meant that I had some leeway in how I structured it. There are a couple reasons why I went this route, but most of them just boil down to not having to deal with all the gotchas, big and small, of doing development on top of Domino.

As a minor example, I wanted to add some configuration parameters to the app, and for this I used MicroProfile Config. MicroProfile Config is a small spec that standardizes the process of doing key-value-based configuration, allowing me to just say that I want a named value and let the runtime pick it up from the environment, system properties, or a config file as necessary. It wouldn't be difficult to write a configuration checker, but why bother reinventing the wheel when it's right there?
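For a flavor of what that looks like in practice - the property name here is illustrative, not one from the app:

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class FileStoreConfig {
    // Resolved from environment variables, system properties, or microprofile-config.properties
    @Inject
    @ConfigProperty(name = "filestore.maxFileSize", defaultValue = "10485760")
    long maxFileSize;
}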

Same goes for having the SFTP server load at startup and be running consistently. If I did this on Domino, I could either put it in an NSF-based XPages app with an ApplicationListener and depend on the preload notes.ini parameter, or I could go my usual route and use an HttpService that manages the lifecycle. Neither route is difficult either, but they're both weirder and more fiddly than using servlet context listeners in a normal JEE app.
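In a JEE app, that startup/shutdown hook is just a few lines - a sketch with an illustrative class name:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
import lotus.domino.NotesThread;

@WebListener
public class SftpServerListener implements ServletContextListener {
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        NotesThread.sinitThread(); // bring up the Notes runtime
        // start the long-running SFTP service here
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // stop services, then tear the runtime back down
        NotesThread.stermThread();
    }
}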

And then there's just the death by a thousand cuts: needing to use Tycho to build, having to deal with Eclipse Target Platforms, making sure anything using reflection is wrapped in an AccessController block to avoid Java policy issues, the nightmare of dependency management, the requirement to use either Designer or the okay-but-still-cumbersome Domino HTTP restart development cycle, and on and on. With a JEE app, all of those problems disappear into thin air.

The Main Hurdle

All that said, there's still the hurdle of actually implementing Notes native API access outside of Domino. At its core, what I'm doing is the same as what has been possible for years and years, initializing a Notes/Domino runtime in a secondary process. The setup for this varies platform by platform but generally involves either setting a couple environment variables for your process or (as is the case with the Domino Open Liberty Runtime) spawning the process from Domino itself.

Beyond setting up your process's environment, there's also the matter of initializing each thread of your app. On Domino, all threads are inherently NotesThreads, but outside of that there's some specific management to be done. You can call NotesThread.sinitThread() and NotesThread.stermThread() manually or run your code in specifically-spawned NotesThreads. I largely do the latter, making use of an ExecutorService to handle maintaining a thread pool for me. I then added some supporting code on top of that to let me run blocks of code as an arbitrary named user while retaining a cached set of sessions. That part wouldn't really be necessary if you're writing an app that doesn't need to act as different user names, but it's handy for something like this, where incoming connections must run as a user to enforce security fields properly.
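The thread-pool part of that is pleasantly terse, since NotesThread has a Runnable-taking constructor and so can act as a drop-in ThreadFactory - every pool thread inits the runtime when it starts and terms it when it dies:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;

ExecutorService notesExecutor = Executors.newCachedThreadPool(NotesThread::new);

// Tasks submitted here can use the Notes API freely
String userName = notesExecutor.submit(() -> {
    Session session = NotesFactory.createSession();
    try {
        return session.getEffectiveUserName();
    } finally {
        session.recycle();
    }
}).get();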

Philosophical Advantages

Beyond my desire to avoid hassles and get to use modern Jakarta EE goodies, I think there are also some more philosophical advantages to writing applications this way, and specifically advantages that line up with some of HCL's stated long-term goals as well. Domino has long been a monolith, and that has largely served it well, but keeping everything from the DB all the way up through to the app-dev stack in the same bag means you're often constrained in your toolkits and deployment choices. By moving things just outside of the main Domino tower, you're freed up to use different languages and techniques that don't have to be integrated and maintained in the core. This could be a much larger jump than I'm doing here, and that's just what HCL has been pushing with the AppDev Pack and the associated Node.js domino-db module.

I think it's beneficial to picture Domino more as a dominant central core with ancillary servers and apps running just adjacent to it - not a full-on Microservices architecture, but just a little decentralized to keep areas of concern nice and separate. Done well, this setup is a lot more flexible and fault-tolerant, while still being fairly straightforward and performant. It's also a perfect match for a project like this that's geared towards implementing a new protocol - it doesn't even have to worry about HTTP SSO or reverse proxies yet.

So I think that this is where things are heading anyway, and it's just a nice cherry on top that it also happens to be a much (much, much) more pleasant way to write Java apps than OSGi plugins.

Another Side Project: NSF SFTP File Store

Tue Nov 05 16:12:58 EST 2019

When I Know Some Guys kicked off, we bought a couple of Transporter devices to handle our file-syncing needs without having to rely on Dropbox or another hosted service. Unfortunately, Nexsan killed off Transporters a couple years back, and, though they still kind of work, it's been a back-burnered project for us to find a good replacement.

Ideally, we'd find something that would handle syncing data from our various locations transparently while also allowing for normal file access through some common protocol. Aside from the various hosted commercial services, there are various software packages you can run locally, like OwnCloud and NextCloud. I even got a Raspberry Pi with a USB hard drive to tinker with those, though I never got around to actually doing so.

Yesterday, though, I realized that we already have a fleet of privately-owned servers that replicate seamlessly in the form of our Domino domain. They also, conveniently, have nice capabilities for blob storage, shared user authentication, and fine-grained access control. What they didn't have, though, was any good form of file protocol. I'm pretty sure that Domino still has WebDAV built-in, but that's just for design elements. Years ago, Stephan Wissel created a project that works with file attachments, but that didn't cover all the bases I wanted and I didn't want to adopt the code base to extend it myself. There's also Karsten Lehmann's Mindoo FTP Server from around the same time, but that was non-SSH FTP and targeted at the local filesystem.

So that meant it was time for a new project!

The Plan

I initially looked at WebDAV, since it's commonly supported, but it's also very long in the tooth, and that has led to the projects implementing it being pretty old and cumbersome as well.

Then, I found the Apache Mina project, which implements a number of server protocols, SSH included, and is actively maintained. Looking into how its SFTP support works, I found that it's shockingly simple and well-designed. All of the filesystem access is based on the Java NIO packages added in Java 7, which provide a pluggable system for implementing arbitrary filesystems.
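Standing up the server side is only a handful of calls - a rough sketch, noting that the exact package names shift between SSHD versions and that the real project plugs in the custom NSF-backed filesystem rather than a local directory:

import java.nio.file.Paths;
import java.util.Collections;
import org.apache.sshd.common.file.virtualfs.VirtualFileSystemFactory;
import org.apache.sshd.server.SshServer;
import org.apache.sshd.server.keyprovider.SimpleGeneratorHostKeyProvider;
import org.apache.sshd.server.subsystem.sftp.SftpSubsystemFactory;

SshServer sshd = SshServer.setUpDefaultServer();
sshd.setPort(9022);
sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider(Paths.get("hostkey.ser")));
sshd.setSubsystemFactories(Collections.singletonList(new SftpSubsystemFactory()));
// the project swaps this for the NSF-based NIO filesystem implementation
sshd.setFileSystemFactory(new VirtualFileSystemFactory(Paths.get("/local/filestore")));
sshd.start();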

Using SFTP and SCP means that it'll work with common tools like Transmit and - critically - rsync. That means that, even in the absence of a custom app like Dropbox, mobile access and syncing with a local filesystem come "for free".

The Project

So out of this was born a new project, NSF File Server. Thanks to how good Mina is, I was able to get a NIO filesystem implementation and SFTP+SCP server up and running in very little time:

Screen shot of the SFTP server in Transmit

In its current form, there aren't a lot of tricks: the files are stored as attachments to normal documents in a "filestore.nsf" database with two views, which allow for directory-contents and individual-file lookup while also being pretty self-explanatory to a Notes client. I have some ideas about other ways to structure this, but there's an advantage to having it be pretty basic:

Screen shot of the File Store NSF

Similarly simple are the authentication mechanisms, which allow for both password and public-key authentication based on the HTTPPassword and sshPublicKey fields in a person document, respectively (and maybe via LDAP in directory assistance? I never remember the mechanics of @NameLookup).
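Wiring the password side into Mina is a one-liner on the server object - a hypothetical sketch, where lookupHttpPassword stands in for whatever check you do against the person document's HTTPPassword item:

// lookupHttpPassword is a hypothetical helper, not part of Mina or the project
sshd.setPasswordAuthenticator((username, password, session) ->
    lookupHttpPassword(username, password));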

The App

Because this is a scratch-our-itch project and I'm personally tired of dealing with Domino's OSGi environment, the app itself is implemented as a WAR file, expected to be deployed to a modern Jakarta EE server like my precious Open Liberty. Conveniently, I have just the project for that as well, making deploying NSF-accessing WARs to Domino a bit more reasonable.

For now, the app is faceless: the only "web app" bits are some listeners that initialize the Notes environment and then spawn the SSH server. I plan to add at least a basic web UI, though.

Future Plans

My immediate plan is to kick the tires on this enough to get it to a point where it can serve as its original goal of a syncing SFTP server. I do have other potential ideas in mind for the future, though, if I feel so inspired. Most of my current logged issues are optional enhancements like POSIX attribute support, more-efficient handling with the C API, and better security handling.

It's also a good foundation for any number of other interfaces. A normal web UI is the natural next step, but it could easily provide, for example, S3 API compatibility.

I haven't gotten around to uploading a build to OpenNTF yet, but in the mean time feel free to poke around the code and let me know if any ideas strike your fancy.

New Project: Domino Open Liberty Runtime

Thu Jan 03 17:23:50 EST 2019

Tags: liberty

The end of the year is often a good time to catch up on some side projects, and this past couple of weeks saw me return to focusing on what to do about our collective unfortunate situation. I started by expanding the org.openntf.xsp.jakartaee project to include several additional JEE standards, but then my efforts took a bit of a turn.

Specifically, I thought about Sven Hasselbach's series on dropping Domino's HTTP stack while still keeping API access to Domino data, and decided to take a slightly-different approach. For one, instead of the plucky-but-not-feature-rich Jetty, my eye turned to Open Liberty, the open-source variant of WebSphere Liberty, which in turn is the surprisingly-pleasant trimmed-down counterpart to WebSphere. Using Liberty instead of Jetty means getting a top-tier Java EE runtime, supporting the full Java EE 8 and MicroProfile 2.1 specs, developed by a team chomping at the bit to support all the latest goodies.

Additionally, I decided to try launching Liberty from a Domino plugin, and this bore fruit immediately: with this association, the Liberty runtime is able to fire up sessions and access databases as the Domino server without causing the panic halt that Sven ran into.

So, in short, what this project does is add a fully-capable Java EE server with all the fixings - the latest JEE spec, HTTP/2, Servlet 4, WebSockets, and so forth - running with native access to Domino data alongside a normal server, and with the ability to manage configuration and app deployment via NSFs. Essentially, it's like a second HTTP stack.

Why?

I made some good progress in bringing individual JEE technologies to XPages, but I was still constrained by the core capabilities of the XPages runtime, not the least of which was its use of Servlet 2.4, a standard that went obsolete in two-thousand-freaking-five. Every step of the project involves fighting against the whole underlying stack, just to get some niceties that come for free if you start with a modern web container.

Additionally, while Domino has the ability to run Java web applications, this support is similarly limited, providing very few of the standards that make up Java EE and even apparently lacking a JSP compiler set up on the server. It's also, by virtue of necessarily wrapping the app in an OSGi bundle, much fiddlier to develop than a normal WAR file.

And, in a general sense, I'm tired of waiting for this stack to get better. Maybe HCL has grand plans for Java development on Domino in the future - they haven't said. I still doubt it, in part because of the huge amount of work it would entail and in part because I'm not sure that improving XPages would even be strategically wise for them. And say they did improve XPages in a lot of ways people have been clamoring for - WebSockets and whatnot. Would they cover all of the desired features? What about newly-emerging technologies from outside? Their Node.JS strategy makes me think they've thought better of being the vendor of a full-stack web technology.

This approach, though, provides a route to making web apps with current standards regardless of what HCL does with XPages. This way, you can work with the entire Java web community at your back, rather than cloistered off with unknown technology. If you want to make an app with Spring, you can, following all of their examples. If you'd rather use PrimeFaces, or just JAX-RS, or JSP, you can do so just as easily. And if your chosen technologies go out of favor, you'll be in the same boat as countless others, and the new preferred choices will be open to you.

Finally, there's just the fact that Java EE 8 is really, really good. The platform made tremendous strides since the bad old days, and developing an app with it is a revitalizing experience.

How?

To set this up, I deliberately chose a very low-integration path: the task in Domino unzips a normal Open Liberty distribution and then runs it using Domino's JVM, just using the default bin/server script. No embedding, no shared runtime. This way, it doesn't have to fight against any constraints that Domino's environment imposes (such as the fact that both Domino and Liberty want to run an OSGi environment), and it doesn't lead to a situation where a crash in Liberty would bring down Domino's HTTP.
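Mechanically, the launch is about as plain as Java process management gets - a simplified sketch of the idea, with placeholder paths:

ProcessBuilder pb = new ProcessBuilder(
    "/local/openliberty/wlp/bin/server", "run", "defaultServer");
pb.environment().put("JAVA_HOME", System.getProperty("java.home")); // point Liberty at Domino's JVM
pb.inheritIO(); // surface Liberty's console output
Process liberty = pb.start();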

The rest kind of comes along for the ride. Since it's running with the Domino JVM, it already has the trappings needed to use Notes.jar, so it's really just a matter of using the classes and making sure you run inside a NotesThread or otherwise initialize and terminate your thread.
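That bracketing is the classic Notes.jar dance:

import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;

NotesThread.sinitThread(); // initialize the Notes runtime for this thread
try {
    Session session = NotesFactory.createSession();
    try {
        System.out.println("Running as " + session.getEffectiveUserName());
    } finally {
        session.recycle();
    }
} finally {
    NotesThread.stermThread(); // terminate the thread's runtime association
}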

Future

Assuming I keep with this project (and I think I have some for-work uses for it, which dramatically increases its odds), I have some ideas for future improvements.

I've added a basic HTTP reverse proxy servlet, and I plan to make it more integrated. The idea there is to allow Liberty to be the primary HTTP entrypoint for Domino, with anything not handled by a web app it's hosting passing through transparently to Domino.

In time, I aim to add some more integration, such as CrossWorlds and general utilities. I've started by adding in a basic user registry, allowing JEE-standard apps to authenticate against Domino without extra configuration (though it doesn't currently do groups). That could be expanded a good deal - Liberty could read SSO tokens using the C API (or share LTPA as WebSphere normally does), and it'd be nice to have a reasonable method for sharing non-SSO DomAuthSessId cookies.

The Project

I set up the project on GitHub: https://github.com/OpenNTF/openliberty-domino . I think there's some definite promise with this, especially once there are a couple example apps that could show off the possibilities.