My 2021 Open-Source Year

Fri Dec 31 16:34:27 EST 2021

For the last few weeks, I've had a minor flurry of work in a couple of the open-source projects I maintain, and I figured this would be as good a time as any to give an overview of my active work in these projects and how they relate.

Overview

I had a few minor contributions and picked-up projects through the year, but most of my currently-public work went towards four main projects:

  • Domino Open Liberty Runtime
  • XPages Jakarta EE Support
  • NSF ODP Tooling
  • XPages Runtime

I do find it interesting to consider how these relate. Some aspects are easy: they're all Domino-related for sure, and they all at one time or another have played a significant infrastructural role in my client work. Beyond that, though, they form a nebulous thesis: though I don't know for sure what to do with all the XSP markup we have, I know it can't remain the status quo, and I'm fairly confident that Jakarta EE is the best route forward.

Domino Open Liberty Runtime

This project allows you to run instances of Open Liberty as a spawned process from Domino, which in turn means both that you can readily(-ish) access Domino data and also that you can deploy these apps in an NSF-based way to your servers, without having to have particular mastery of Liberty administration as such.

The big-ticket news this year was my addition of a Domino-hosted reverse proxy and arbitrary JVM selection. With these additions, the project ended up being a particularly-compelling way to glom modern apps onto Domino without even necessarily worrying about pointing to a different port. I also added a standalone proxy in front of both the apps and Domino - which gains you Web Sockets and HTTP/2 - another nice way to get better app toolkits without having to bother an admin.

XPages Jakarta EE Support

This one saw a burst of activity in just this past month. For a while, it had sat receiving only minor tweaks: I use it for EL, CDI, and JAX-RS in my client project, and the changes I made were generally just to add features or fix bugs needed there.

This month saw the big switch from Java/Jakarta EE 8 (javax.* packages) to Jakarta EE 9 (jakarta.* packages). This was a very-interesting prospect: though on paper it just involved switching class names around, it necessitated adding some Servlet 5 shims around Domino's irresponsibly-old Servlet 2.4/2.5 hybrid layer. While this didn't bring full Servlet 5 features, it does mean I'm suddenly much less bound by the strictures of the older version: a lot of Servlet-based software casually depends on at least Servlet 3, even if just for convenience methods (like getting a ServletContext from a ServletRequest).

I also took the opportunity to go back and add some features I've long wanted - JSP and MVC - to the NSF side. These have less immediate call in my client work (which primarily involves additions on the OSGi servlet layer and less in the NSF), but suddenly created a surprisingly-compelling update to in-NSF development. It's stymied by, naturally, a lack of support in Designer, but the idea of writing something that approaches a true modern Jakarta app inside an NSF is intriguing indeed.

NSF ODP Tooling

The NSF ODP Tooling has proven to be my workhorse. The ODP-to-NSF compilation alone has saved me countless hours from the previous laborious task of syncing two dozen NSFs with their ODPs and the fault-prone process of trying to get clean NTF copies of them for each build. Now, the former is done with a single script I can run in the background and the latter happens automatically every single push to our Git repository. Delicious.

It also provides an invaluable part of my normal development process for this client. Alongside the next project, it lets me do my XPages development outside of Designer, meaning I only need to schlep my way back to that IDE to look at legacy elements in context or to troubleshoot something with the Notes or Domino OSGi view of the world.

The work in this project this year has primarily been around edge cases, bug fixes, and scrambling through the rocky shoals of the ever-changing macOS Notes client. It's been a tough time here and there: certain parts of the NSF that I use less frequently have their own edge-case needs (like SSJS sort of existing in two places and the CD storage being surprisingly difficult to work with). I also had some fun combat with filesystems and charsets, which was fortunately even-more enlightening than it was annoying.

XPages Runtime

The XPages Runtime project admittedly had a slow year, but it's nonetheless a critical component in my CI/CD workflow, and gets periodic fixes for trouble I run into. The good news there is that it generally does what it promises: I run XPages outside of Domino constantly with this thing. Though it still requires more coordination on the app side than I'd like, it's gradually approaching a state where it feels like a peer to other server-side toolkits that one can bring into a WAR file, and that's nice.

It will likely have some work coming up in the near future, though: if I'm to move my client's app over to the jakarta.* namespace, that will require at least some level of cooperation with this project. While I can't change the source of XPages to accept these coordinates itself, it should be doable to do much the same as I did with the XPages Jakarta EE Support project and use my shims to translate back and forth between old and new classes. The main difference here will be that the surrounding container will speak the new form natively, but that should be fine.

I expect a certain amount of annoying trouble with things like XPages-internal expectations about JAXB and JavaMail, and it's certainly possible that such dependencies will end up proving to be debilitating, but I'm optimistic. If I'm successful, it'll be one more way that I'm crafting a whole workflow where modern technologies are the primary target and XPages can remain a component in the lineup.

Miscellaneous Grab Bag

Beyond those big ones, I had a handful of other contributions here and there. I'm sure I'm forgetting some, but I'll close on two that I found pleasing.

The other week, I got a Pull Request merged into the Eclipse Krazo project - while not a huge deal, it does always give me a little thrill when my code goes into a project where I'm not the primary or sole contributor.

I also adopted POI4XPages, which was for more-practical reasons. I've used POI4XPages for a couple clients for a while, but it was certainly showing its age (sitting at POI 3.x since 2017). Moreover, Notes 11's corruption of its classpath with POI 4.x made working with it annoying, beyond it just being out-of-date and on the wrong side of breaking changes in POI in the meantime. Since I had moved one of my clients to POI 5.0 a bit ago, I decided to break that code out and adapt it into POI4XPages. Then, of course, along came Log4Shell, and I scrambled out three subsequent patch versions just to update log4j. So it goes.

JSP and MVC Support in the XPages JEE Project

Mon Dec 20 11:20:06 EST 2021

Tags: jakartaee java

Over the weekend, I wrapped up the transition to jakarta.* for my XPages JEE Support project and uploaded it to OpenNTF.

With that in the bag, I decided to investigate adding some other things that I had been itching to get working for a while now: JSP and MVC.

JSP? Isn't That, Like, A Billion Years Old?

Okay, first: shut up.

Expanding on that point, it is indeed pretty old - arriving in 1999 - and its early form was pretty bad. It was designed as an answer to things like PHP and ASP and bore all those scars: it used actual Java syntax on the page to control output, looping, conditionals, and the like. It even had special directives to import Java classes for the page! All that stuff is still in there, too, which isn't great.

However, JSP used judiciously - focusing on JSTL tags for control/looping and EL references to CDI beans for data access - is a splendid little thing, and it has the advantage that it remains part of the JEE spec.
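
To make that concrete, here's roughly the style I mean, with personList standing in for a CDI bean you'd supply (the taglib URI is the classic JSTL 2.x one from this era):

<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<html>
  <body>
    <ul>
      <c:forEach items="${personList.people}" var="person">
        <li><c:out value="${person.name}"/></li>
      </c:forEach>
    </ul>
  </body>
</html>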

Domino flirted with JSP for a long time. It's what Garnet was all about and was part of how OpenNTF got off the ground. IBM did eventually ship the custom tags, and they ship with Domino to this day, sitting in the data/domino/java directory, gathering dust. Domino also inherited JSP from WebSphere as part of XPages... kind of. It has hooks for using JSP files in Expeditor-container webapps, but the implementation is conspicuously missing - present only in Notes, presumably for some sort of Social nonsense reason.

For better or for worse, none of that matters now anyway: it's all crusty and old and, critically, uses javax.*. I had to go a different route.

JSP Implementation

From what I gather, there's basically only one real open-source JSP implementation: Jasper, which is a part of Tomcat. Basically everyone just uses that, and that works well enough. There are various re-bundlings of it to remove the Tomcat dependencies, and I went with the GlassFish one, since it was pretty clean.

Diving into it, there were a few potential and actual problems.

First, JSP files aren't evaluated directly. Instead, they're compiled into Servlet class implementations, either on the fly or ahead of time. This process is basically the same as how XPages work: the JSP is translated into a Java file, which is then compiled into a class, which is then reused by the runtime for subsequent requests. Jasper has a dependency on Eclipse JDT, which worried me: when I looked into this in the past, I found that JDT (at least how it was used for JSP) makes a lot of assumptions about working with the actual filesystem. I lucked out here, though: Jasper actually uses the JavaCompiler API, which is more flexible. The JDT dependency seems like either a vestige of an older version or a fallback option.

However, despite the fact that JavaCompiler can work purely in memory, Jasper does do a lot of filesystem-bound work when it comes to loading tag libraries, such as JSTL. I ended up having to deploy a bunch of stuff to the filesystem. Ideally, I'll find a better way around this.

Hooking It Up To Domino

Having a JSP interpreter is one thing, but having it respond to URLs like "http://example.com/foo.nsf/bar.jsp" is another, especially if that should also participate in the XPages class space of the NSF.

I originally considered an HttpService implementation that would accept incoming *.jsp URLs. This could work, but it would be less than ideal: the HttpService, while working in the XPages OSGi layer, wouldn't know about the internal layout of the NSF. I'd have to either reinvent it or wrangle my way over to the active NSFService (the one that runs XPages), find or load the NSF's module, and root around in there. Possible, but not ideal.

Fortunately, I lucked out tremendously: the NSFService class has an addHandledExtensions static method that I can just call to tell it that incoming ".jsp" requests should go to the XPages runtime. This looks like it was added for more Social-nonsense reasons, but I'm happy it's there regardless. Better still, when the runtime finds a URL it was told to handle, it queries IServletFactory implementations like those you can use in an NSF for custom servlets. I already had one in place for JAX-RS, so I made another one (refactored since that commit) for JSP.
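
For reference, such a factory is pleasantly small to write. This is a simplified sketch rather than the real one (IServletFactory and ServletMatch are from the com.ibm.designer.runtime.domino.adapter classes; the servlet-creation helper here is hypothetical):

public class ExampleJspServletFactory implements IServletFactory {
  private ComponentModule module;

  @Override
  public void init(ComponentModule module) {
    this.module = module;
  }

  @Override
  public ServletMatch getServletMatch(String contextPath, String path) throws ServletException {
    // Claim URLs like /foo.nsf/bar.jsp, leaving everything else to other factories
    int jspIndex = path.indexOf(".jsp"); //$NON-NLS-1$
    if(jspIndex < 0) {
      return null;
    }
    String servletPath = path.substring(0, jspIndex + 4);
    String pathInfo = path.substring(jspIndex + 4);
    // In the real implementation, the servlet is created once, cached, and
    // wired up with the module's class space
    Servlet servlet = createJspServlet(module); // hypothetical helper
    return new ServletMatch(servlet, servletPath, pathInfo);
  }
}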

Once that was in place (plus some other fiddly details), I got to what I wanted: writing JSPs inside an NSF and having them share the XPages class space:

Screenshot of Designer and a browser showing an in-NSF JSP

Next Up: MVC

Once I had JSP in place (and after some troublesome fiddling with JSF), I decided to take a swing at adding my beloved MVC to the stack.

This had its own complications, this time for the inverse problem from JSP. While Jasper is a creature of the early 2000s and uses older, less-flexible Java APIs that I had to write around, MVC is the opposite. It's a pure child of the modern, CDI-based world and thus does everything through CDI and ServiceLoaders. However, though I've had CDI support in the project for a long time, actually tying together anything to do with CDI or ServiceLoaders in OSGi is eternally difficult, especially on Domino.

Service Loading

I had to wrangle with this for a while, but I eventually came up with a functional-but-odd workaround: I made use of my own custom ServiceParticipant extension capability, which lets me have an object perform pre/post behavior around each JAX-RS request, in order to futz with the ClassLoader. I had trouble where the NSF ClassLoader didn't find classes from the MVC implementation, though it should have, so I ended up overriding the ClassLoader to first look explicitly there. It's not pretty, but it works, and at least it doesn't require filesystem stuff.
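
Boiled down, the trick looks something like this sketch, with SomeKrazoClass standing in for any class loaded from the MVC implementation bundle (the real ServiceParticipant has more ceremony around it):

ClassLoader original = Thread.currentThread().getContextClassLoader();
ClassLoader mvcFirst = new ClassLoader(original) {
  @Override
  public Class<?> loadClass(String name) throws ClassNotFoundException {
    try {
      // Check the MVC implementation's bundle first...
      return SomeKrazoClass.class.getClassLoader().loadClass(name);
    } catch(ClassNotFoundException e) {
      // ...falling back to the normal delegation for everything else
      return super.loadClass(name);
    }
  }
};
Thread.currentThread().setContextClassLoader(mvcFirst);
try {
  // Handle the JAX-RS request here
} finally {
  Thread.currentThread().setContextClassLoader(original);
}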

Servlets and Request Dispatchers

Another aspect of being a more-modern child than Jasper is that Krazo makes ready use of Servlet capabilities that have been there for a while but which don't exist on Domino.

For example, Krazo uses a ServletContainerInitializer instance to do pre-research in the app to find classes that should get MVC behavior. Without this scan, MVC won't be applied. This is a Servlet 3.0 feature dating to 2009 and Domino doesn't support it - or any kind of annotation-based classpath scanning, for that matter.

Fortunately, I didn't really need to fully support this concept - I really just needed to make sure this ran whenever the JAX-RS support was being loaded for an NSF. So I made it possible to contribute these via an extension point and added my own scanning implementation to gather the applicable types. Essentially, a backport of this feature to apply in an NSF. With that in place, I was able to register the initializer and have it do its work.
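
Conceptually, the backport just does what a real Servlet 3 container does at startup, scoped to the NSF. In sketch form (the lookup and scanning methods are hypothetical stand-ins for my extension point and scanner, and servletContext is the NSF's context):

for(ServletContainerInitializer initializer : findContributedInitializers()) { // hypothetical extension-point lookup
  // Gather the types the initializer declared interest in via @HandlesTypes
  Set<Class<?>> classes = scanModuleForHandledTypes(initializer); // hypothetical scanner
  initializer.onStartup(classes, servletContext);
}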

My next hurdle was to do with the way Krazo delegates to JSPs. Specifically, it queries the ServletContext (essentially, the app container) for Servlet registrations that can handle the desired extensions (".jsp" and ".jspx" here) and routes to that using a RequestDispatcher. Well, Domino supports none of this. Trying to get a RequestDispatcher is hard-coded to throw an exception saying "Domino doesn't support this" and the bit about getting ServletRegistrations was new in 3.0. Originally, I stubbed these out, but I decided to give a swing at backporting this as well.

While an NSF doesn't have "Servlet registrations" as such, it does have a list of the aforementioned IServletFactory instances, so I decided to write my own. I wrote a getRequestDispatcher implementation that queries the current module's Servlet factories for a match and, when one is found, returns a basic implementation. Then, I wrote a custom subtype of IServletFactory to provide additional information and made use of that to emulate the Servlet 3+ behavior, at least well enough to let Krazo do what it needs.
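
In sketch form, that dispatcher lookup is a loop over the module's factories (accessor names here are illustrative, and a fuller implementation also has to rewrite the request's servletPath and pathInfo):

@Override
public RequestDispatcher getRequestDispatcher(String path) {
  try {
    for(IServletFactory factory : getServletFactories()) { // hypothetical accessor for the module's factories
      ServletMatch match = factory.getServletMatch(getContextPath(), path);
      if(match != null) {
        return new RequestDispatcher() {
          @Override
          public void forward(ServletRequest req, ServletResponse res) throws ServletException, IOException {
            match.getServlet().service(req, res);
          }

          @Override
          public void include(ServletRequest req, ServletResponse res) throws ServletException, IOException {
            match.getServlet().service(req, res);
          }
        };
      }
    }
  } catch(ServletException e) {
    throw new RuntimeException(e);
  }
  return null;
}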

Seeing It Together

Once I figured out all these hurdles, I got to what I wanted: I can make a JAX-RS service in an NSF that acts as an MVC controller:

Screenshot of Designer and a terminal showing an MVC controller in an NSF

Neat! There are still some rough edges to clean, but it's great to see in action.

Conclusion and Next Steps

So why is this good? Well, there's a certain amount of box-checking going on: the more JEE specs I can get going, the better.

But beyond that, this is helping to crystallize some of my thinking about what Domino (web) developers are even supposed to freaking do nowadays. This remains an extremely-vexing problem, but I know the answer isn't XPages as it exists now. Maybe the answer is to move XPages to a better container or maybe it's to add a better container to Domino (or both of those, I guess). This is another option, one that preserves the "just fire up Designer and edit some code" niceties of the XPages experience while gaining better, more modern capabilities. I could see writing an app with this, doing all my work in CDI beans and using JSP as the front end - pure open-source solutions with active developers - all inside the NSF. Is it the real best answer? I don't know. Maybe. It's something, though, and specifically something worth trying.

Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue

Tue Dec 14 16:41:59 EST 2021


I think it's been a little while since I talked about the XPages Jakarta EE Support project of mine. The goal of that is sort of the inverse of the XPages Runtime project: rather than bringing XPages to a proper modern app server, the JEE Support project brings a handful of current Jakarta EE specs to XPages. It started out a few years ago as a sort of proof-of-concept, but I've since been using it for client work to do things like use newer Jakarta REST (née JAX-RS), CDI, and newer EL in XPages and OSGi bundles.

The Specification Move

Originally, I targeted a set of specifications from Java/Jakarta EE 8. Some of these were new to Domino outright, while some (such as JAX-RS) existed in the XPages stack already but in very old forms. I implemented those and for a good while just used the project as a source of parts for client work, tweaking it here and there as needed.

However, the long-prophesied package-name switch from javax.* to jakarta.* came to fruition in Jakarta EE 9, released a bit over a year ago. In the intervening year, most implementations of the specs made the switch, and the versions I was using started to show their age (for example, I was using RESTEasy 3, which was already old when I adopted it, and it's up to 6 now). Beyond just the philosophical sadness of my project being behind, I started to grow specific needs to upgrade components: we switched to JSON-B a while ago, but then some new bug fixes in Yasson were coming only to post-jakarta.* builds.

The Initial Work

I first gave this a shot in August, initially planning to move only JSON-P and JSON-B over to the new namespace. However, I quickly hit the limits of that, since a lot of these specs are interdependent: JAX-RS uses JSON-P and JSON-B to emit JSON content, Yasson has some ties to CDI, and so forth. I realized that it was going to have to be all-or-nothing.

So I rolled up my sleeves and assessed the task ahead of me. At a basic level, there was the job of updating my dependencies, which immediately had some good aspects and bad aspects:

  • Previously, I was using a hodgepodge of spec packages like the JBoss bundling of JAX-RS in order to get something that would work and be license-friendly. Now that it was all over at Eclipse, I could switch to the nice, clean official versions and have no license worries.
  • I also used to have all sorts of OSGi rule overrides to account for Domino-specific oddities like ancient versions of various specs being supplied by the default classpath or other, conflicting bundles, all with no versioning. Once I was looking for e.g. jakarta.annotation instead of javax.annotation, I was no longer bound to that particular nightmare.
  • Not all of my dependencies were ready. When I first started, RESTEasy (my JAX-RS provider of choice) had not yet uploaded a JEE-9-compatible version. My main choices were to try using Eclipse Transformer, which would add a whole new layer to the task, or to switch to another provider.

Then there's the elephant in the room: the freaking Servlet API, which much of this depends on. Since the Servlet API is the job of the web container, I can't realistically upgrade it. Fortunately, that's only half true: I can't give it new capabilities (like Web Sockets), but I can wrap the old stuff with the new. And, like the other specs, the switch of the package name was a tremendous blessing, allowing me to deploy the official Servlet 5 API unchanged. Then, I did the tedious work of writing a slew of adapter classes, half wrapping a javax.servlet component and pretending it's jakarta.servlet and half going the other direction. Since the methods are either direct analogs, optional features, or can be emulated, this was actually much easier than I thought it would be. And there: Servlet 5 on Domino! Kind of!
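
To give a flavor of those adapters, here's a trimmed sketch of the request wrapper going in one direction (the real classes implement the full interfaces both ways; wrappedContext is captured at construction because the old API has no accessor for it):

public class JakartaServletRequestWrapper implements jakarta.servlet.http.HttpServletRequest {
  private final javax.servlet.http.HttpServletRequest delegate;
  private final jakarta.servlet.ServletContext wrappedContext;

  public JakartaServletRequestWrapper(javax.servlet.http.HttpServletRequest delegate, jakarta.servlet.ServletContext wrappedContext) {
    this.delegate = delegate;
    this.wrappedContext = wrappedContext;
  }

  @Override
  public String getContentType() {
    // Direct analogs just delegate straight through
    return delegate.getContentType();
  }

  @Override
  public jakarta.servlet.ServletContext getServletContext() {
    // Servlet 3+ additions with no 2.4/2.5 equivalent get emulated - here,
    // by handing back the context captured at construction time
    return wrappedContext;
  }

  // ... and dozens more delegating methods elided
}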

The Showstopper

However, I soon hit what seemed to be a show-stopper: a LinkageError problem when using CDI that didn't show up previously. My search for the topic found only one hit: an issue in Open Liberty referencing almost exactly the same problem. My heart sank when I read that their fix was to upgrade the Equinox runtime - something that's outside my powers on Domino (probably).

So, disheartened, I set it aside for a couple months. I figured there was a small chance that Weld (the CDI implementation at the heart of the trouble) would put out an update that fixed it - after all, an older version worked.

Resuming Work

After setting it aside, it kept eating away at the back of my mind, and two things kept pushing me to go back to it:

  • I'll need to do it eventually. I (and my client projects) can't just be stuck at the old style forever.
  • I really didn't want to admit defeat and switch back to Gson for JSON processing.

So I went back to it. My initial hope - that a new version of Weld would magically fix the problem - had not come to fruition. Still, though, I wasn't sure that it was the exact same problem Liberty encountered. For one, my use of CDI studiously avoids actually telling it about OSGi, since I've had little luck making use of that with Domino's OSGi stack. That was enough cause to make me think I could work around it.

And work around it I did! The trouble turned out to be, unsurprisingly, a bit esoteric, but boiled down to the runtime re-registering proxy classes for the same core components. My guess is that, somewhere along the line, Weld changed some sort of internal cache in a way that would break when using a bunch of ephemeral per-NSF containers as I do. I implemented my own (since it's an intended extension point) and added a bit of a cache, and I was back to the races.
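
The shape of the workaround is plain memoization. The actual extension point is Weld's ProxyServices SPI; the interface and method below are simplified stand-ins rather than the true signatures:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CachingProxyServices implements ProxyServices { // stand-in for the real Weld SPI
  private final Map<String, Class<?>> proxyClasses = new ConcurrentHashMap<>();

  public Class<?> defineProxyClass(String className, byte[] bytecode, ClassLoader loader) {
    // The LinkageError came from re-defining proxies for the same core
    // components, so ensure each proxy class is only defined once
    return proxyClasses.computeIfAbsent(className, name -> doDefineClass(name, bytecode, loader));
  }

  private Class<?> doDefineClass(String name, byte[] bytecode, ClassLoader loader) {
    // Elided: the actual class definition against the provided loader
    throw new UnsupportedOperationException("elided in this sketch"); //$NON-NLS-1$
  }
}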

As a convenient blessing, RESTEasy released 6.0.0.Beta1 just days before I got back to it, a major release targeted at JEE 9. That meant that I could save a ton of work by not having to re-work everything for another JAX-RS implementation. I had been looking into Jersey, which I'm sure would have done the job, but it's fiddly work trying to put all these pieces together on Domino, and I was all the happier to not have to re-do it all.

JavaMail

But then I hit a new problem: the javax.mail API, now jakarta.mail. The first part of this is easy enough: bring in the new spec bundle and everything will point to it. Great! Except then I hit an immediate problem, one I had been dreading dealing with. Though the spec changed package names, the implementation didn't. That brought me face-to-face again with an old nemesis of mine, sitting there in Domino's classpath, corrupting it:

A screenshot of Domino's ndext directory

The way the Mail API works is that there's a file, called "mailcap", that lists implementations for common data types, like:

text/plain;;		x-java-content-handler=com.sun.mail.handlers.text_plain
text/html;;		x-java-content-handler=com.sun.mail.handlers.text_html
text/xml;;		x-java-content-handler=com.sun.mail.handlers.text_xml
multipart/*;;		x-java-content-handler=com.sun.mail.handlers.multipart_mixed; x-java-fallback-entry=true

So, while all the entrypoint classes are jakarta.mail.* now, the implementations remain com.sun.mail.*, all with the same class names. And, since this little jerk of a JAR is sitting in the system classpath, it has a way of showing up all the time, complaining that com.sun.mail.handlers.text_plain is incompatible with jakarta.activation.DataContentHandler.

This is extremely fiddly to deal with, potentially involving writing a special ClassLoader implementation that blocks calls down to the lower-level JAR. While maybe possible, I'm not sure it'd be possible in a way that would be practical for normal use in an app.

And so, with a heavy heart, I forked the thing and added an "org.openntf" in front of all the package names. And that... works! It works just fine. It means that I'm on the hook for manually integrating any upstream changes, but at least it works without having to worry about any conflicts.

That wasn't the end of my trouble with this spec, though. The spec class itself, jakarta.mail.Session, uses ServiceLoader to look for services, and it uses the form that looks them up with the current thread's ClassLoader. Because I'm working in OSGi, that ClassLoader - the XPages app's loader - won't know about the implementation classes directly, and this call fails. And, while there's a whole sub-spec in OSGi for dealing with this, I've never had success actually getting it working on Domino.

So I forked that freaking thing too and modified the calls to use its own ClassLoader, which could find the implementation by way of it being a fragment bundle attached to it.
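
The difference that bit me is visible in ServiceLoader's two load forms: the single-argument one consults the current thread's context ClassLoader, while the two-argument one lets you hand it a loader that can actually see the implementation:

import java.util.ServiceLoader;
import jakarta.mail.Provider;
import jakarta.mail.Session;

// Fails in this setup: the thread's loader is the XPages app's, which
// knows nothing of the implementation classes
ServiceLoader<Provider> fromThread = ServiceLoader.load(Provider.class);

// Works: the spec bundle's own loader can see the implementation,
// since it's attached as an OSGi fragment
ServiceLoader<Provider> fromSpec = ServiceLoader.load(Provider.class, Session.class.getClassLoader());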

And, with that, finally, I had Jakarta Mail properly hooked up and working without having to jump through too many hoops. I'd still prefer to not have forked the source, but it was the best of a bad lot of choices.

The Final Tally

That brings the specs updated/added in this project to:

  • Expression Language 4.0
  • Contexts and Dependency Injection 3.0
    • Annotations 2.0
    • Interceptors 2.0
    • Dependency Injection 2.0
  • RESTful Web Services (JAX-RS) 3.0
  • Bean Validation 3.0
  • JSON Processing 2.0
  • JSON Binding 2.0
  • XML Binding 3.0
  • Mail 2.1
    • Activation 2.1

Not too shabby, if I say so myself. Technically, Servlet 5.0 is in there, but it doesn't actually bring any newer-than-2.4 powers to the Servlet container, so it's really just infrastructural details.

Now I'll just have the work of updating my client project and finally getting to use whatever that Yasson bug fix was that prompted this in the first place.

Java's Shakier Old APIs

Fri Dec 10 11:24:25 EST 2021

Tags: java

In my last post, I sang the praises of InputStream and OutputStream: two classes from Java 1 that, while not perfect, remain tremendously useful and used everywhere.

Then, a tweet by John Curtis got me thinking about the opposite cases: APIs from the early Java days that are still with us, are still used relatively frequently, but which are best avoided or used very sparingly.

There are a handful of APIs from the early days that may or may not still exist, but which aren't regularly encountered in most of our work: the Applet API, for example, was only recently deprecated for removal, and it was clear for a long time that it wasn't something to use. Some other APIs are more insidious, though. They're right there alongside newer counterparts, and they're not marked as @Deprecated, so you just have to kind of magically know why you shouldn't use them.

Old Collections

One of these troublesome holdovers is a "freebie" for Domino developers: java.util.Vector. This is paired with other "first revision" collection classes like Hashtable, classes that predate the Collections Framework in 1.2 and which were retrofitted into it.

These classes aren't incorrect as such: they do what they're supposed to do and function as working implementations of List and Map. The trouble comes in that they're sub-optimal compared to other options. In particular, they're very-heavily synchronized in a way that hurts performance in the normal cases and isn't even really ideal in the complex multi-threaded case.
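
The practical upshot: unless an old API hands you a Vector (hi, Domino), reach for the modern counterparts, and ask for synchronization explicitly on the rare occasion you need it:

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

List<String> names = new ArrayList<>();     // instead of Vector
Map<String, String> byId = new HashMap<>(); // instead of Hashtable

// When you actually need thread safety, say so explicitly
List<String> sharedNames = Collections.synchronizedList(new ArrayList<>());
Map<String, String> sharedById = new ConcurrentHashMap<>();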

Unfortunately, since these classes aren't deprecated, an IDE would only warn you about it if it's using some stylistic validation above normal compilation. Such classes are identified best by looking for a warning paragraph like this at the bottom of their Javadoc:

Javadoc 'old class' warning for Hashtable

java.util.Date

The java.util.Date class has a simple concept: represent a point in time. However, it's a neverending font of limitations and caveats:

  • It's essentially a wrapper for a Unix timestamp with millisecond precision, and doesn't get more precise
  • It's not immutable even though it'd make sense to be. Effective Java includes repeated examples of why this is bad
  • Though it's called "Date", it's always a single timestamp, and can't represent a day in the abstract
  • In Java 1, it also was responsible for parsing date strings, and this functionality remains (though at least deprecated)
  • As mentioned in the prompting tweet, the DateFormat classes that go with this are not thread-safe, even though one could reasonably assume they would be based on their job
  • There's no concept of time zone, though the string representation would lead one to think there might be
  • The related Calendar class is a little more structured, but in a weird way and having a lot of the same limitations

Nonetheless, Date is the obvious go-to for date/time-related operations due to its age and alluring name. And, in fact, it wasn't even until Java 8 that there was a first-party better option. That's when Java basically adopted Joda Time outright and brought it into Java as the java.time package. This system has what's required: the notion of dates and times as separate entities, time zones both as named entities (like "America/New_York") and just as offsets (like UTC-5:00), and full immutability and thread-safety, and tons else.

Unfortunately, it will be a long time for old habits to die and longer for older code to fade away, so we're stuck with Date for a while, even if only to always call #toInstant on it.
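
When old code hands you a Date, the bridge over to sanity is mercifully short:

import java.time.LocalDate;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.Date;

Date legacy = new Date();

// Convert to the java.time world at the earliest opportunity
ZonedDateTime zoned = legacy.toInstant().atZone(ZoneId.of("America/New_York"));

// Now concepts like "the day, in the abstract" actually exist
LocalDate day = zoned.toLocalDate();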

java.io.File

The java.io.File class is kind of similar to Date: it was created in Java 1 as a basic way to work with files on the filesystem. It still does that, and (as far as I know) it's not as outright bad as the above, but it's limited and non-optimal.

In Java 7, the NIO Path API was added, which replaces File in a more-generic and -adaptable way. Whereas File refers specifically to the filesystem, the Path API is adaptable to whatever you'd like while sharing the same semantics. It can also participate in the NIO ecosystem properly.

Much like how Date has a #toInstant method, File has a #toPath method to work with the transition. I make a habit of doing this almost all the time when I'm working with existing code that still uses File. And there is... a lot of this code. Even APIs that can take Path arguments will potentially turn them into Files internally to keep working with their older implementation.
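
The conversion habit is a one-liner (someOldApi here being a hypothetical stand-in for whatever legacy library you're stuck with):

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

File legacyFile = someOldApi.getOutputFile(); // hypothetical old-style API
Path path = legacyFile.toPath();

// From here, the modern Files utilities are all available
List<String> lines = Files.readAllLines(path);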

There are also a bunch of related APIs where the replacements exist but aren't quite as straightforward. ZipFile is a perfect example of this: it (and its child class JarFile) has constructors that take either a File or a String representing a file path, and that's alarming. However, the ZIP File System Provider that works with Paths is neat, but it's not as clear of a replacement for ZipFile as Path is for File. That's actually one of the reasons I use ZipInputStream even in a case where ZipFile would also work.

Conclusion

I'm sure there are other similar traps around, but those are the main ones I can think of off the top of my head. It's a bit of a shame that Sun/Oracle have been so historically reluctant to mark classes wholesale as deprecated. While IDEs and toolchains have gotten better at providing "stylistic" recommendations like this, it's been slow going, and it's not universal. The best thing you can do for now is to just know about the newer alternatives and use them enough that the old kinds immediately read as "code smell" when you come across them.

Generating Archive Files On The Fly In Java

Thu Dec 09 10:30:36 EST 2021

Tags: java liberty

When working on version 3.0 of the Domino Open Liberty Runtime, I had occasion to do something I've done in other situations, but it occurred to me that it'd make a good post on its own. Specifically, part of one of the new features involved creating archive data on the fly, purely in-memory, and that's something that comes in handy quite a bit.

Background: The Task

The task at hand in that project involved the way the runtime will deploy custom extension features for Liberty when creating the server. There are a few of these, all centered around adding integration with Domino in one way or another. For the previously-existing Liberty features, this was done in three parts:

  • The actual Liberty extension code, which is a Java project that produces a Liberty-compatible OSGi bundle.
  • A "subsystem" module, which is a code-less Maven project that uses esa-maven-plugin to embed the above bundle and generate a "SUBSYSTEM.MF" file to describe it. This ESA/subsystem bit is a mechanism for distributing packaged features from the OSGi spec.
  • A "deployment" module, which is a small Java project that provides an extension for the Domino-side runtime to house and deploy the above ESA file.

For 3.0, I wanted to make a feature that would provide Notes.jar and the NAPI to applications. Since those files are proprietary and non-distributable, I couldn't include them in the actual runtime distribution and would instead have to look them up from Domino's environment at runtime. Additionally, since all I wanted to do was provide the existing API and not add any new code, there was no particular need to make a code project like the first one above.

More Background: Java Streams

The way these extensions are registered to Domino is by classes that provide some metadata about the feature and then a method called getEsaData() that returns an InputStream. Though InputStreams aren't the only way to represent arbitrary binary blobs like this, they're used everywhere by virtue of them arriving with Java 1, and they're extremely adaptable.

Basically, the idea of an InputStream is that it's just a mechanism to read a sequence of bytes from somewhere. In Domino terms, they're like NotesStream, but good.

Their utility comes from their simplicity and adaptability. Because the abstract class only deals with reading bytes and a few operations for skipping around, they can be used for all sorts of things. The prototypical use is for reading a file. For example:

Path someFile = Paths.get("/foo/bar/baz.txt");
try(InputStream is = Files.newInputStream(someFile)) {
  // read file data from the stream
}

They're not limited to that, though: the JDK comes with all sorts of InputStream variants like ByteArrayInputStream, which lets you read from a byte[] in memory.

In addition to being arbitrary as to where the byes are coming from, streams are also very composable. Many types of streams either must or may wrap an existing stream to alter it in some way. One of the more-common cases where you'd do this is when reading ZIP file data. Taking something similar to above:

Path someZipFile = Paths.get("/foo/bar/baz.zip");
try(
  InputStream is = Files.newInputStream(someZipFile);
  ZipInputStream zis = new ZipInputStream(is, StandardCharsets.UTF_8)
) {
  ZipEntry entry = zis.getNextEntry();
  // work with the ZIP entries
}

The thing to note here is that, while this happens to be coming from a ZIP file on disk, it doesn't have to be: that first is could just as easily be a stream coming from HttpURLConnection or a ByteArrayInputStream.

Along with InputStream, Java also has OutputStream. Luckily, OutputStream is similarly simply designed, and has uses that are a direct mirror for everything above: there exist ByteArrayOutputStream, ZipOutputStream, and all sorts of others.
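
As a tiny demonstration of that mirroring, here's ZIP data written and read back entirely in memory, no filesystem involved:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

ByteArrayOutputStream bytes = new ByteArrayOutputStream();
try(ZipOutputStream zos = new ZipOutputStream(bytes, StandardCharsets.UTF_8)) {
  zos.putNextEntry(new ZipEntry("hello.txt")); //$NON-NLS-1$
  zos.write("hi there".getBytes(StandardCharsets.UTF_8)); //$NON-NLS-1$
}
try(ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(bytes.toByteArray()), StandardCharsets.UTF_8)) {
  ZipEntry entry = zis.getNextEntry(); // "hello.txt"
}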

Putting It Together

Back to the original goal, my task was to create a class that would provide an InputStream containing ESA data - that is to say, a ZIP file - to the runtime, which could then deploy it as a Liberty feature. The previous extensions did this by embedding the ESA in their JAR and then returning an InputStream to that. Now, though, I wanted to do it all dynamically.

Now, I talked a big game above about how streams didn't have to have anything to do with files, and it could all be done in memory. That's still all true, but technically here I ended up using files for caching purposes. The above is still good to know, though!

So anyway, my goal was to deliver an InputStream to the runtime that represented an ESA that looks like this:

Contents of a generated ESA file

Of those entries, "corba.jar" is the CORBA API from Maven Central to make Notes.jar work on Java 9+, while "Notes.jar" comes from jvm/lib/ext and "lwpd.commons.jar" and "lwpd.domino.napi.jar" come from the OSGi framework in the running Domino server. The remaining entries - the two "MF" files and the embedded JAR - are composed on the fly.

The starting point here is that I identify a cache location within my working directory based on the current Domino build number; that path is the out variable referenced in the snippets below. Then, I open it up as a ZIP to fill with contents, like above:

try(
  OutputStream os = Files.newOutputStream(out, StandardOpenOption.CREATE);
  ZipOutputStream zos = new ZipOutputStream(os, StandardCharsets.UTF_8)
) {
  // work happens here
}

SUBSYSTEM.MF

The next part is to build the "SUBSYSTEM.MF" file. As implied by the extension, this file has the same syntax as "MANIFEST.MF" files, and so I can use the java.util.jar.Manifest class to handle encoding and formatting. I start out by loading a template from the current bundle's resources:

Manifest subsystem;
try(InputStream is = getClass().getResourceAsStream("/subsystem-template.mf")) { //$NON-NLS-1$
  subsystem = new Manifest(is);
}

There, I'm using the constructor from Manifest that reads from an existing stream. Often, that would be reading a "MANIFEST.MF" from an existing JAR, but it'll work with any stream.

Then, I fill it in with some details, with lines like:

Attributes attrs = subsystem.getMainAttributes();
attrs.putValue("Subsystem-Name", getShortName()); //$NON-NLS-1$
String featureName = getShortName() + "-" + getFeatureVersion(); //$NON-NLS-1$
attrs.putValue("IBM-ShortName", featureName); //$NON-NLS-1$
// etc.

Finally, I create an entry in the ZipOutputStream and write the contents. The way ZipOutputStream works is that its "stream-iness" counts towards whatever the most-recently-added entry is.

zos.putNextEntry(new ZipEntry("OSGI-INF/SUBSYSTEM.MF")); //$NON-NLS-1$
subsystem.write(zos);

Embedded Bundle

Alright, so far, so good. Up until now, this is the "normal" case for working with ZIP files, where you make a new entry and pour in some text data. What's neat, though, is that the encapsulation capabilities of these streams can be stacked, which is what comes up next.

Specifically, I wanted to put a ZIP file (the .jar) within this surrounding ZIP (the .esa). The way this is done is by just composing the same tools we've been working with again. Here, esa is what zos was above: the outermost package ZIP contents. I just renamed it in this method for clarity inside the code itself.

// This will be a shell bundle that in turn embeds the API JARs
esa.putNextEntry(new ZipEntry(BUNDLE_NAME + "_" + dominoVersion + ".0.0.jar")); //$NON-NLS-1$ //$NON-NLS-2$
		
// Build the embedded JAR contents
try(ZipOutputStream zos = new ZipOutputStream(esa, StandardCharsets.UTF_8)) {
  Manifest manifest;
  try(InputStream is = getClass().getResourceAsStream("/manifest-template.mf")) { //$NON-NLS-1$
    manifest = new Manifest(is);
  }
  
  // Finish manifest
  // More work here
}

So there, I'm doing basically the same thing as I did originally to make a ZipOutputStream. Since the ZipOutputStream really doesn't care what the stream it's writing to is, it works just as well when writing to another ZIP stream as when writing to a file - the cascading streams handle their own encoding and it works out in the end.

Once I write the manifest, I can make use of the Files utility class to embed each of the JARs from the filesystem:

for(Path jar : embeds) {
  zos.putNextEntry(new ZipEntry(jar.getFileName().toString()));
  Files.copy(jar, zos);
}

Finally, I download the CORBA JAR on the fly, so for that one I use a utility function to download from the remote URL:

OpenLibertyUtil.download(new URL(URL_CORBA), is -> {
  zos.putNextEntry(new ZipEntry("corba.jar")); //$NON-NLS-1$
  IOUtils.copy(is, zos);
  return null;
});

Here, I use IOUtils from Apache Commons IO because it's not copying from a filesystem path, but the idea is basically the same, and exactly the same as far as the destination ZIP is concerned.

The Final Result

Once this is all written to the cached file on the filesystem, the final result is just to return a stream from it:

return Files.newInputStream(out);

Since the job of this extension class is only to return an InputStream, the consuming code doesn't care that the extension did all this work, as opposed to the other ones that just return a stream of an embedded resource: everything else is the same.

So, all in all, this isn't a groundbreaking new technique, but that's the point: the way these lower-level JDK components work, you get a tremendous amount of flexibility from just a few common parts.

New Adventures in Administration: Docker Compose and One-Touch Setup

Sat Dec 04 14:23:58 EST 2021

Tags: admin docker

As I do from time to time, this weekend I dipped a bit into the more server-admin-focused side of Domino development. This time, I had set out to improve the deployment experience for one of my client's apps. This is the sprawling multi-NSF-plus-OSGi one, and I've long been blessed to not have to worry about actually deploying anything. However, I knew in the back of my head the whole time that it must be fairly time-consuming between installing Domino, getting all the Java code in place, deploying the DBs, and configuring all the documents that associate them.

I had also had a chance this past week to give Docker Compose a swing. Though I'd certainly known of it for a good while, I hadn't actually used it for anything - all my Docker scripting involved really fiddly operations that I ended up using Bash scripts to launch a single container for anyway, so Compose didn't bring so much to the table. However, using it to tie together the process of launching a Postgres server with pre-populated user info and schema scripts whetted my appetite.

So today I set out to tinker with the Domino side of things.

Deploying To The Domino Data Directory

Some parts of this were the same as what I've done before: I wanted to deploy some JARs to jvm/lib/ext for legacy purposes and then drop ".java.policy" into the notes user's home directory. That was accomplished easily enough with some COPY operations in the Dockerfile:

COPY program/ /opt/hcl/domino/notes/latest/linux/
COPY --chown=notes java.policy /local/notes/.java.policy

What wouldn't be accomplished so easily, though, would be getting files into the data directory: the app's NTFs and the OSGi plugins. This is because of the way the Domino Docker image works, where it deploys the contents of a ZIP to /local/notesdata on launch, in order to let you work properly with mounted volumes. Because of this, I couldn't just copy the files there in the Dockerfile, since it would conflict with the volume mount; however, I still wanted to do this in an automated way.

This was my impetus to switch away from the official Docker images on Flexnet and over to the "community-ish" Domino-on-Docker build script maintained at https://github.com/IBM/domino-docker. This script is generally more feature-rich than the official one, and one feature in particular caught my eye: the ability to add your own ZIP file (or URL, I believe) to deploy to the data directory at first launch.

So I downloaded the repo, built the image, bundled the OSGi plugins and NTFs into a ZIP, and altered my Dockerfile:

FROM hclcom/domino:12.0.0

COPY program/ /opt/hcl/domino/notes/latest/linux/
COPY --chown=notes java.policy /local/notes/.java.policy
COPY --chown=notes data.zip /tmp/

Then, I set the environment variable in my "docker-compose.yaml" file: CustomNotesdataZip=/tmp/data.zip. That worked like a charm.

One-Touch Setup

Next up, I wanted to automate the initial server setup. I knew that Domino had been gaining some automated setup capabilities recently, and that they really came of age in V12. What I hadn't appreciated until today is how much capability this system has. I'd figured it would let you configure the server either as a new domain or additional and to create an admin user, but I hadn't noticed that it also has the ability to declaratively create and modify databases and documents. Looking over the sample file that Daniel Nashed put up, I realized that this would cover essentially all of my remaining needs.

The file there was most of what I needed: other than tweaking the server and user names, the main things I'd want to change in the basic config were to set HTTP_AllowAnonymous/HTTP_SSLAnonymous to "1" and also add a line to set OnBehalfOfInvokerLst to "LocalDomainAdmins" (which allows XPages to run properly).

Then, I got to the meat of the app deployment. That's all done in the $.appConfiguration.databases object near the bottom, and I set about adding entries to deploy each of the NTFs I'd copied to the data directory, plus the required documents to tie them together. This also went smoothly.
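
For a sense of the shape of those entries, abbreviated and from memory here - Daniel's sample file and the official schema are the authoritative references, and the names below are placeholders:

{
  "appConfiguration": {
    "databases": [
      {
        "action": "create",
        "filePath": "clientapp.nsf",
        "title": "Client App",
        "templatePath": "clientapp.ntf",
        "documents": [
          {
            "action": "create",
            "computeWithForm": true,
            "items": {
              "Form": "Configuration"
            }
          }
        ]
      }
    ]
  }
}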

The Final Scripts

The final form of the setup is pretty clean. The Dockerfile is very similar to the above, with just an added line to copy in the config file:

FROM hclcom/domino:12.0.0

COPY program/ /opt/hcl/domino/notes/latest/linux/
COPY --chown=notes java.policy /local/notes/.java.policy
COPY --chown=notes domino-config.json /tmp/
COPY --chown=notes data.zip /tmp/

The docker-compose.yaml file is longer, but I think pretty explicable. It maps the ports, sets up some volumes for the various persistent-data portions of Domino, and configures the environment variables for setup:

services:
  clientapp:
    build: .
    ports:
      - "1352:1352"
      - "80:80"
      - "443:443"
    volumes:
      - data:/local/notesdata
      - ft:/local/ft
      - nif:/local/nif
      - translog:/local/translog
      - daos:/local/daos
    restart: always
    environment:
      - LANG=en_US.UTF-8
      - CustomNotesdataZip=/tmp/data.zip
      - SetupAutoConfigure=1
      - SetupAutoConfigureParams=/tmp/domino-config.json
volumes:
  data: {}
  ft: {}
  nif: {}
  translog: {}
  daos: {}

Miscellaneous Notes

In doing this, I came across a few things that are worth noting for anyone diving into it clean as I was:

  • Docker Compose (intentionally) doesn't rebuild images on docker compose up when they already exist. Since I was changing all sorts of stuff, I switched to docker compose build && docker compose up.
  • Errors during server autoconfig don't show up in the console output from docker compose up: if the server doesn't come up like you expect, check in "/local/notesdata/IBM_TECHNICAL_SUPPORT/autoconfigure.log". It's easy for a problem to gum up the whole works, such as when using "computeWithForm": true on a created document throws an exception.
  • Daniel's example autoconf file above places the admin user ID in "/local/notesdata/domino/html/admin.id", so it will be accessible via http://servername/admin.id after the server comes up. Alternatively, you could snag it by copying it via Docker commands.
  • This really drives home the desperate need for a full web-based admin app for Domino.

All in all, this was a delight to work with. Next, I should be able to make a script that generates the config JSON for me based on all the app's NTFs, and then include that whole thing as part of the Maven build in a distribution ZIP. That will be pretty neat.

Version 3.0 of the Domino Open Liberty Runtime

Thu Dec 02 16:46:10 EST 2021

When I last talked about my Domino Open Liberty Runtime project, I mentioned the various main tasks I'd been doing in gearing up for a 3.0 release.

Well, it's been a while since then, but so it goes with open-source projects sometimes. Fortunately, I've had some time here and there to dust it off some more and take care of a lot of lingering tasks I had assigned to the 3.0 release, enough to make it to the finish line. Some of these I found interesting to note, and so I'll note them here.

Java Runtime Shifts

One of the neat tricks the project does is to auto-download a specified Java runtime for you: you can pick your version (e.g. 8, 11, or 17) and your flavor (HotSpot, OpenJ9, or GraalVM CE) and the tooling will do the work of downloading it on your behalf.

Originally, this used just AdoptOpenJDK, the previous go-to home of open-source Java builds. When I added GraalVM, I generalized the code to handle those downloads as well. Like AdoptOpenJDK's binaries, they're hosted on GitHub, but under a different organization and using a different naming scheme.

That refactoring paid off now: AdoptOpenJDK moved to the Eclipse Foundation and became the Adoptium project - much the same idea, but a new home. During that move, though, IBM stopped pushing OpenJ9 builds to that organization and instead now hosts them itself, under the name Semeru. I assume this was all due to some wrangling over the use of "JDK" and other political scuffling, but either way I had to deal with it.

Fortunately, though the Semeru home page is standard IBM fare, the binaries themselves are still hosted on GitHub, and the organization still mirrors the original AdoptOpenJDK/Adoptium style. So I made another class for that, but the logic is largely the same.

Notes API Feature

Since it's extremely likely that apps deployed adjacent to Domino this way will want to access Domino data, I originally set up deployment to copy the server's Notes.jar into the global "extra JARs" directory for deployed Liberty instances. This would allow apps to make use of lotus.domino classes without having to package Notes.jar inside the WAR.

This works well enough, but it always felt a little wrong: I don't like polluting the server's classpath that way, and it got all the worse because I needed to also bring in an open-source copy of CORBA classes for cases where the JVM in use is newer than 8.

So I decided to take another swing at it, this time packaging it up as a feature to be loaded as desired in Liberty. The project already has a few of these, generally consisting of the Liberty-specific OSGi bundle, the ESA feature file to define it in Liberty, and then an extension on Domino to deploy that to new servers.

In this case, I'd want to do it a little differently: the other ones included custom code on my part, but this one will just want to grab JARs from the Domino installation and make them available to apps. So, for this one, I didn't need the first two components - instead, I'd build the feature JARs on the fly. I went about doing that, and the result is a class that creates a notesApi-1.0 feature automatically. It makes use of the way streams work in Java to automatically build a feature container based on the current Domino build version (in case the API changes), creating the necessary manifests programmatically and then locating Notes.jar, the NAPI, and their CORBA and IBM Commons dependencies.

This new route is nicer all around: it's explicitly opt-in, which I like, it now also provides the NAPI, and it hides the non-API dependencies from the apps.

Generic Tooling

For a while now, I've been toying with the idea of making this less Open-Liberty-specific and adding the ability to deploy different kinds of apps. After all, all Liberty is here is a forked process, and that could be anything: a different app server, a single JAR file, or something not Java-related at all. Though the tooling will still only support Open Liberty in 3.0, I've been gradually reorganizing the structure to make such a change practical to implement in, say, 4.0.

Rather than the core runtime assuming it's using Liberty, now it uses generic interfaces to represent any kind of server, and then that further drills down into servers that use a JVM, and from there down to a Liberty server.

This sort of genericizing has made the runtime much more service-based internally - it's more indirect and it takes more discipline, but I feel good about it. I picture it now as a central core coordinator (still named OpenLibertyRuntime internally) receiving notifications of changes in the admin NSF and then sending out appropriate messages to the various deployment services. The fact that there's only one such deployment service currently doesn't diminish the splendidness of my mental model.

Reverse Proxy

The reverse proxy implementations that I'd mentioned before haven't really changed much since I introduced them, but that's, I think, a good sign: they do what they're supposed to and don't necessarily need dramatic changes. The Domino-side reverse proxy - the one that lets your Liberty apps look like they're running inside Domino's normal HTTP server - has risen in my estimation since I made it, since it makes for a very-reasonable story. It's essentially like if you use the Equinox extension points to deploy a servlet or web application, except you trade the ability to run within a database context for a much-more-modern development environment.

Release

The full list of closed issues for this release is available on GitHub, and I've uploaded the distribution to OpenNTF.