Showing posts for tag "jakartaee"

New Release: XPages Jakarta EE 3.3.0

Fri Dec 20 16:12:24 EST 2024

Tags: jakartaee

As part of finishing my holiday gift shopping, I published version 3.3.0 of the XPages Jakarta EE project today.

This release contains a number of bug fixes to do with asynchronous and scheduled tasks, based on some edge-case and intermittent trouble I ran into while developing some apps with it. Additionally, it has some consistency fixes for the Jakarta NoSQL support - in particular, it improves mapping of object properties to columns, matching item names case-insensitively and mapping special fields like FIELD_CDATE to columns with formulas like @Created.

Additionally, I took the occasion to bump some dependencies. While Jakarta EE 11 was pushed to next year, MicroProfile 7.0 was released a few months back. This brings some version bumps to specs included in this project, including Rest Client, OpenAPI, and Fault Tolerance. While the changes aren't dramatic, there are some nice refinements in there. I was also able to drop the Apache HttpClient in favor of an implementation that uses URLConnection, and it's always nice to lessen the number of dependencies.

However, there will be some more work to do in future versions when it comes to MicroProfile. The Metrics spec was dropped from MP 7.0 in favor of Telemetry, which I glean hews more closely to common practices in other tools. In XPages JEE 3.3.0, Metrics remains and I have yet to add Telemetry. It may end up being a breaking change, but I'm not sure that alone would warrant a major-version bump for this project, in large part because I don't know how much use the existing Metrics implementation gets here. In any event, my plan is to at least add Telemetry to 3.4.0 or so.

After that, while waiting for JEE 11 for some large updates, I'm a little tempted to dive into the "Better NSF Webapps" idea from my post the other day. That's one I've wanted to do for a while, and it would be really nice to get rid of the "xsp/app" part of URLs for full-Jakarta apps. I kind of doubt that that feature, even if I start working on it, would make it into the next version in a proper way, but it'd at least be interesting to take a swing at.

Large Features I'd Like To Add To XPages JEE

Sun Dec 08 12:05:28 EST 2024

Tags: jakartaee

Lately, the XPages Jakarta EE project is in a very good place: the move to Jakarta EE 10 cleaned up a lot of the codebase, there aren't currently any more looming brick walls, and the app development I've been doing with it has remained exceedingly productive. The project has a long list of issues to work on - a few of them are difficult-to-reproduce bugs, but most are small-to-medium-sized features. There are a handful of things, though, that would be big projects on their own that I'd love to find the time to do, but don't currently have enough of an impetus to devote the time to.

In no particular order:

Jakarta Batch (Or Scheduled Jobs Generally)

The Jakarta Batch spec is a way to define data-processing tasks in a standardized way - think agents but more explicit. It's a bit of a staid spec - it came from IBM and sure looks like it was specifically designed to be a very mainframe-y way of doing things - but that may be fine.

The advantages of this over agents would be that it would run in the OSGi class space and could use app code fully and that the definition language is pretty flexible. Additionally, though the XPages JEE project already has programmatically-defined scheduled tasks by way of Jakarta Concurrency, those tasks are pretty opaque from outside, while Batch jobs could theoretically be described usefully in a server-wide way for administrative purposes.
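
For illustration, a task-style step under that spec might look like the following - a minimal sketch using the standard Jakarta Batch API, which is not something the project implements today:

package com.example.batch;

import jakarta.batch.api.AbstractBatchlet;
import jakarta.enterprise.context.Dependent;
import jakarta.inject.Named;

// A task-oriented batch step - conceptually similar to an agent's body
@Named("cleanupBatchlet")
@Dependent
public class CleanupBatchlet extends AbstractBatchlet {
	@Override
	public String process() throws Exception {
		// ... perform the data-processing work here ...
		return "COMPLETED";
	}
}

A job-definition XML file in META-INF/batch-jobs would then reference the batchlet by name, and that's the part that could be surfaced server-wide for administrative purposes.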

Jakarta Messaging

The Jakarta Messaging spec defines a consistent way to work with message queues and pub/sub systems, which helps when making an app as part of a larger system. There are a bunch of pretty enterprise-y implementations, but this looks like it could be a pretty good fit for local use on a Domino server, potentially with Domino's own message queues. Being able to have apps send messages to each other (or to tasks outside the HTTP JVM) could have a lot of uses, and having it baked in to the framework would make it worth considering much more than it's currently used.
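
For a taste of the API, here's a hypothetical sketch using the standard JMS classes, with the ConnectionFactory and Queue left as constructor parameters since there's no Domino-backed implementation today:

package com.example.messaging;

import jakarta.jms.ConnectionFactory;
import jakarta.jms.JMSContext;
import jakarta.jms.Queue;

public class ExportNotifier {
	private final ConnectionFactory connectionFactory;
	private final Queue exportQueue;

	public ExportNotifier(ConnectionFactory connectionFactory, Queue exportQueue) {
		this.connectionFactory = connectionFactory;
		this.exportQueue = exportQueue;
	}

	public void notifyExportReady(String documentId) {
		// JMSContext is AutoCloseable, so try-with-resources handles cleanup
		try(JMSContext context = this.connectionFactory.createContext()) {
			// Send a simple text message for whatever task consumes the queue
			context.createProducer().send(this.exportQueue, documentId);
		}
	}
}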

Code Generation For NoSQL

When using NoSQL entity classes, it's not strictly necessary to have an actual Form design element, but it's a very common case that you'd have one - either because you make a quick-and-dirty Notes UI first or you're building a JEE app on top of an existing Notes app. Accordingly, it'd be neat to have an option in Designer to automatically generate Java classes for existing forms, saving some tedium of manually defining each property.

There are a couple things that would make doing this sort of a PITA, though. For one, it'd involve writing a UI plugin for Designer, which sounds like it'd just be a miserable process. I could probably work primarily or entirely with the Eclipse VFS and project APIs instead of Designer-specific classes, but still. Beyond that, there'd be the matter of trying to correctly describe a form. Text and number fields would be easy enough, but things would get tricky quickly. Should a multi-option field be presented as an Enum if the options are compatible? Should it try to guess boolean-storage fields? What system-type fields - like SaveOptions - should be included in the model as opposed to handled via compute-with-form? Should the display options for date/time fields map exactly to java.time classes? Not impossible, especially since "good enough" would be significantly better than nothing, but still a deep well of potential work.
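
To make the idea concrete, a generator pointed at a hypothetical "Person" form with a couple of text fields might emit something like this, using the Jakarta NoSQL annotations the driver already consumes:

package com.example.model;

import jakarta.nosql.Column;
import jakarta.nosql.Entity;
import jakarta.nosql.Id;

@Entity("Person")
public class Person {
	@Id
	private String documentId;

	@Column("FirstName")
	private String firstName;

	@Column("LastName")
	private String lastName;

	// getters and setters as in any bean class
}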

Quality-of-Life Extension Projects

For the most part, the XPages JEE project focuses on implementing the specs as they are, with only a few things that are Domino-specific. Over time, I've been building up ideas for little features to add: a bean to handle Domino name formatting, supporting alternative HTML templating tools like Thymeleaf, packaged libraries for common tasks like Markdown formatting and RSS feeds, and so forth. These wouldn't really fit in the core project because I don't want it to bloat out of scope, but I could see them being their own additions or a general "extension library" for it.

Better OSGi Webapps

The JEE project has a couple capabilities in the direction of being used in OSGi-based webapps, but constantly bumps into ancient limitations with it. Something I've considered doing is making an alternative extension point to basically do this but better, presenting a Servlet 6 environment for OSGi-wrapped webapps, avoiding the need for wrappers and translation layers between old and new.

This would be a big undertaking, though. A chunk of it is handled by the existing HttpService/ComponentModule system, so it wouldn't be like writing a Java app server from scratch, but there'd be a lot to do as far as managing application lifecycles and so forth. Still, having control over the ComponentModule level would let me handle things that are finicky or impractical currently, like annotation-based Servlets, ServletContainerInitializers, and various listeners that just aren't implemented in the existing stack.
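
As a small example of the payoff, a webapp in such a container could declare a Servlet with nothing but the standard annotation - a minimal sketch with hypothetical names:

package com.example.servlet;

import java.io.IOException;

import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

// Picked up by annotation scanning in a Servlet 6 container - exactly
// the sort of thing the legacy stack can't do
@WebServlet(urlPatterns = "/hello")
public class HelloServlet extends HttpServlet {
	private static final long serialVersionUID = 1L;

	@Override
	protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
		resp.setContentType("text/plain");
		resp.getWriter().println("Hello from a Servlet 6 webapp");
	}
}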

To go along with this, I'd want to look into what I can do with the Jakarta Data/NoSQL support to make it practical to not use Domino-specific interfaces all the time. The idea there would be that you could write an app that uses NoSQL with one of the other supported databases but then, when running on Domino, would store in an NSF instead. This would make it possible to develop entirely outside of Domino (and thus not have to worry about Designer) in a way that's basically the same. There'd be some trouble there in that it's pretty easy to hit a point with Domino as a data store where you need to give hints for views or data storage to make it practical, so it wouldn't always be doable, but it's worth considering.

Better NSF Webapps

Which leads to the last big potential feature: doing a custom ComponentModule but for NSFs. The idea here would be that you would, one way or another, register your NSF as a webapp container, and then it would be handled by a new ComponentModule type that eschews the XPages and legacy parts in favor of exerting full control over events, listeners, and URLs. This would allow apps to skip the "xsp/app" stuff in the URL, make it easier to do things like Filters, and have proper hooks for all app lifecycle listeners.

Like OSGi webapps, this would be a real 80/20 sort of thing, where some of the early steps would be fairly straightforward but would quickly get into the weeds (for example, having to write a custom resource provider for stylesheets, files, etc.). Still, I keep running into limitations of the current container, and this would potentially be a way out. It's probably the one I'd want to do most, but would also be the most work. We'll see.

I may get to some or all of these on my own time anyway, and any of them may end up cropping up as a real client need to bump them up the priority list. It's also just kind of nice having a good stable of "rainy day" projects to tickle the mind.

Quick Tip: Records With MicroProfile Rest Client

Mon Nov 11 15:42:47 EST 2024

One of my favorite features of the XPages JEE Support project is the inclusion of the MicroProfile Rest Client. This framework makes consuming remote APIs (usually JSON, but not always) much, much better than in traditional XPages or even in most other frameworks.

MicroProfile Rest Client

If you're not familiar with it, the way it works is that you can use Jakarta REST annotations to describe the remote API. For example, say you want to connect to a vendor's service at "https://api.example.com/v1/users", which allows for GET requests that list existing users and POST requests to create new ones. With the MP Rest Client, you'd create an interface like this:

package com.example.api;

import java.util.List;

import com.example.api.model.User;
import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("v1")
public interface ExampleApi {
	@Path("users")
	@GET
	@Produces(MediaType.APPLICATION_JSON)
	List<User> getUsers();

	@Path("users")
	@POST
	@Consumes(MediaType.APPLICATION_JSON)
	@Produces(MediaType.APPLICATION_JSON)
	User createUser(User newUser);
}

Then, in your Java code, you can use it like this (among other potential ways):

// RestClientBuilder comes from org.eclipse.microprofile.rest.client
ExampleApi api = RestClientBuilder.newBuilder()
	.baseUri(URI.create("https://api.example.com"))
	.build(ExampleApi.class);

List<User> existingUsers = api.getUsers();

// ...

User foo = makeSomeUser();
api.createUser(foo);

The MP Rest Client code takes care of all the actual plumbing of turning that interface into a usable REST client, translating between JSON and custom classes (like User there), and other housekeeping work. Things can get much more complicated (authorization, interceptors, etc.), but that will do for now.

Records

What has been a nice addition with Domino 14 and recent releases of the XPages JEE project is that now I can also use Java records with these interfaces. Records were previewed in Java 14 and finalized in Java 16, and they're useful for a couple of reasons: for our needs, they cut down on boilerplate code, but their immutability also has beneficial implications for multithreading.

In our example above, we might define the User class as:

package com.example.api.model;

import jakarta.json.bind.annotation.JsonbProperty;

public class User {
	private String username;
	@JsonbProperty("first_name")
	private String firstName;
	@JsonbProperty("last_name")
	private String lastName;

	public String getUsername() {
		return this.username;
	}
	public void setUsername(String username) {
		this.username = username;
	}

	public String getFirstName() {
		return this.firstName;
	}
	public void setFirstName(String firstName) {
		this.firstName = firstName;
	}

	public String getLastName() {
		return this.lastName;
	}
	public void setLastName(String lastName) {
		this.lastName = lastName;
	}
}

This is a standard old Java bean class, and it's fine. Ideally, we'd add more to it - equals(...), hashCode(), and toString() in particular - and this version only has three fields. It'd keep growing dramatically the more we add: more fields, bean validation annotations, and so forth.

For cases like this, particularly where you likely won't be writing constructor code much (unless they add derivation syntax), records are perfect. In this case, we could translate the class to:

package com.example.api.model;

import jakarta.json.bind.annotation.JsonbProperty;

public record User(
	String username,
	@JsonbProperty("first_name")
	String firstName,
	@JsonbProperty("last_name")
	String lastName
) {
}

This is dramatically smaller and will scale much better, lines-of-code-wise. And, since the important parts of the stack support it, it's nearly a drop-in replacement: JSON conversion knows about it, so it works in the MP Rest Client and in your own REST services, and Expression Language can bind to it, so it's usable in Jakarta Pages and elsewhere.

Since records are read-only, they're not a universal replacement for bean-format classes (in particular, beans used in XPages forms), but they're extremely convenient when they fit and they're another good tool to have in your toolbelt.

New Releases: XPages JEE 3.2.0 and WebFinger for Domino 2.0.0

Fri Nov 08 12:05:54 EST 2024

This week and last, I uploaded a couple updates for some of my OpenNTF projects.

XPages Jakarta EE 3.2.0

First up is a new version of the XPages Jakarta EE project. This one is primarily about bug fixes and further shoring up the 3.x line of features. Of particular note, it fixed a regression that caused the use of Concurrency to break the application, which was a bit of a problem.

WebFinger for Domino 2.0.0

About two years ago, I created the WebFinger for Domino project, primarily as a way to publish profile information in a way that works with Mastodon, though the result would work with ActivityPub services generally.

I'm fond of the goals of open formats like WebFinger and the related IndieWeb world. While Domino's licensing has been hostile to this world of personal sites since basically the 90s, it's still a good conceptual fit, and so I like to glom stuff like this on top of it when I can.

Anyway, I was reminded of this project and started thinking about ways to expand it beyond just its original use. For now, I mostly wanted to dust it off and make it extensible. The result is version 2.0.0, which refactors the code to allow for more stuff in the output. Along those lines, I added the ability to specify PGP public keys and have them included in the list, along with a Servlet to host them on your server. I plan to look around for other things people like to include in their profiles and add them in. I don't actually ever do much with PGP, but it's nice to have it in there.

XPages JEE 3.0

Sun Jun 09 14:45:14 EDT 2024

Today, I uploaded the release version of 3.0.0 of the XPages Jakarta EE Support project. It's been proving stable in my use since the last beta, and so I think this is as good a time as any to release it properly.

Changes

The big-ticket change remains the move to Jakarta EE 10 as the baseline, which brings a handful of new features as well as a new Java version requirement. That means that this release also requires at least Domino 14. Domino 12.x served us well, but its time has passed.

Jakarta EE 10, for its part, is mostly about solving a lot of old business in the JEE community: it continues the gradual deprecation of EJB in favor of CDI, it removes some old stuff like applet requirements, and then also brings in a couple "scratch an itch" features.

Of particular note is the addition of the EntityPart type for REST services. Though it's a small feature, it's a real "finally" one, in that there hadn't been a proper way to deal with multipart/form-data MIME body parts individually, and so each implementation of Jakarta REST would bring in its own, or you'd have to fall back to taking an InputStream and parsing the MIME body yourself. Now, you can do so in a spec-based way:

@POST
@Consumes(MediaType.MULTIPART_FORM_DATA)
public String post(@FormParam("part") EntityPart part) throws IOException {
	MediaType mediaType = part.getMediaType();
	String name = part.getName();
	Optional<String> fileName = part.getFileName();
	MultivaluedMap<String, String> headers = part.getHeaders();
	byte[] data = part.getContent().readAllBytes();

	// ...
}

There's also the split of Jakarta NoSQL into that spec plus the new Jakarta Data. In this release of XPages JEE, I mostly aimed to keep the same level of functionality while accounting for the renaming of packages and types, but I'll be interested in building on this in the future.

Finally, there's the project-specific change of condensing the many, many XPages libraries just down to Core, UI, and MicroProfile. That didn't impact functionality as such, but it sure is nice only having three (or six, with the source features) features to say "yes, install this" to when updating it in Designer and only three to check in Xsp Properties. It also allowed me to delete a lot of weird shim and conditional code, and it'll make maintenance of it much easier in the future without having to worry about every permutation of what libraries you have enabled in an NSF.

The Future

Speaking of which, that brings me to some of the next things on the docket. I imagine that the immediate work will be cleaning up any loose ends from the move. For example, Jakarta Concurrency 3.0 brought a bunch of new features, but I haven't actually checked to see if they work or if I need to do more adapting.
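
For reference, one of those Concurrency 3.0 additions is the CDI-based @Asynchronous annotation. Here's a minimal sketch of how it's meant to work per the spec - a hypothetical bean, and explicitly untested in this project:

package com.example;

import java.util.concurrent.CompletableFuture;

import jakarta.enterprise.concurrent.Asynchronous;
import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class ReportGenerator {
	// The caller gets a CompletableFuture back immediately, while the
	// method body runs on a managed executor
	@Asynchronous
	public CompletableFuture<String> generateReport() {
		String report = buildReport(); // stand-in for expensive work
		return Asynchronous.Result.complete(report);
	}

	private String buildReport() {
		return "report contents";
	}
}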

Additionally, Jakarta Data is intended to go beyond just NoSQL, and can also layer on top of Jakarta Persistence (née JPA, the API for working with relational DBs) and arbitrary services. I don't know yet if there's a usable implementation beyond the one in Open Liberty, so that may have to wait, but it'll be interesting to tinker with.

There are also a bunch of features I'd like to get cracking on now that this hurdle is done. For example, I'd like to move the NoSQL driver to use JNX, which would let me do a couple things that the Notes.jar classes just can't. Along with that, I'd like to add an option to publish MP Metrics to the Domino statistics store, which adopting JNX would allow easily.

Fortunately, I don't expect that there will be any other breaking-change discontinuities in the future. Jakarta EE 11 has some deprecations and removals, but it's mostly similar to JEE 10 in that it's about classes and idioms that are much, much older than any code built using this project. That should give this 3.x series a good, long period to be a comfortable baseline, even through the release of the next major version of JEE.

XPages JEE 3.0 Beta 4

Wed May 22 13:48:19 EDT 2024

Earlier today, I uploaded beta 4 of XPages JEE 3.0 to GitHub. I've been taking a slow approach to this release due to its "breaking changes" nature, but I think it's just about ready for release.

Domino 14

Like previous betas, this release requires Domino 14 (and Notes 14 for development), since it moves to a baseline of Jakarta EE 10, which in turn requires Java 11. Doing this let me get rid of some extra shim code that was needed to support both Domino 14 and previous versions, and also let me move to some newer language constructs. If you're interested in the sorts of things that the new versions of Java brought, check out the OpenNTF webinar from April, where I talked about just that.

Library Reorganization

Beyond the Java version requirement, the big breaking change I made was to finally shrink the number of XPages libraries and p2 features in the project. As the project grew, I kept adding new distinct XPages libraries on the principle of keeping each spec distinct, as they often technically are. A few things made me want to fix this, though:

  • Checking all the boxes in the Xsp Properties editor for each library was annoying
  • Checking "Yes, install this plug-in" for every single component, plus its source version, when installing in Designer was very annoying
  • I had to do weird tricks to add features that touched multiple specs. For example, the project tree had a bunch of cross-spec fragments like "jaxrs.cdi" and "json.cdi" to contribute parts for when CDI was present but not break things when it wasn't. This added an extra layer of indirection and maintenance hassle
  • The specs themselves have been converging, particularly in the sense that more and more they assume the "backbone" of CDI is present. For example, Faces removed its original @ManagedBean and related support in favor of going all-in on CDI. Jakarta REST is moving towards the same
  • It was hard to think of realistic scenarios where it would be important to split up the specs like this, using, say, REST but not CDI or Validation

Now, there are just three: "org.openntf.xsp.jakartaee.core", "org.openntf.xsp.jakartaee.ui", and "org.openntf.xsp.microprofile". I was tempted to roll MicroProfile into "core", but they're conceptually (and administratively) distinct enough that it was worth separating them. With this change, it's not only less annoying to install, but it lets me make a lot more assumptions about what is present across specs, simplifying a lot of little things.
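
In practice, enabling the whole stack in an NSF's Xsp Properties now comes down to a single line (using the standard xsp.library.depends property):

xsp.library.depends=org.openntf.xsp.jakartaee.core,org.openntf.xsp.jakartaee.ui,org.openntf.xsp.microprofile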

Deep-Dive Sidebar: Class Loading

One interesting aspect I ran into when making this change was that I had to readjust my mental model for how class loading is done from an NSF-based application and the libraries it uses. The way it mostly works conceptually aligns with what you see in Designer:

  • Select a library to depend on
  • The XspLibrary has a "getPluginId" method, which Designer then uses to add the OSGi bundle to the classpath
  • Any Require-Bundle dependencies in that plugin marked as "visibility:=reexport" are also included on the classpath

So, in this way, you'd previously select the "org.openntf.xsp.cdi" library, which would then add a dependency on the bundle of the same name, which would in turn re-export the things the NSF should see, such as the CDI API classes.

When I consolidated the libraries, I did it in the straightforward way: I made new "*.library" bundles for them and then added the existing spec-specific bundles as re-exported dependencies. As far as Designer was concerned, all was well, and there was just another little layer in between.

However, that's not quite the whole story when it comes to the runtime on the server. Though Designer presents the NSF as a pseudo-OSGi bundle using the Plug-in Development Environment, Domino doesn't do the same thing. What Domino does is use a class called ModuleClassLoader (not to be confused with Equinox's ModuleClassLoader, which is entirely different and IS an OSGi loader) to handle loading classes from the NSF and its dependencies. The way it gets to its dependencies isn't really a "true" OSGi way, though: it keeps track of a collection of ClassLoader objects as extraDepends, each of which it consults in turn as needed. Those ClassLoader objects, at least in post-8.5.2-era Domino, are the internal class loaders from the library OSGi bundles. This is cheating, and I imagine it was done for pragmatic transitional reasons when OSGi came into the picture.

The old layout conceptually looks like this:

Diagram of NSF to old library dependency

At first blush, this seems like a "six of one, half a dozen of the other" sort of situation, but it's not quite. What this setup does that normal OSGi doesn't is that it exposes META-INF/services files inside the direct dependencies to the application's ClassLoader, whereas these are normally encapsulated in OSGi. The effect was that a bunch of things that used to work started to fail - REST couldn't find all its output-writing classes, Validation couldn't find its implementation, and so forth. This is because they would all internally ask the thread-context ClassLoader (i.e. the NSF's loader) for resources within META-INF/services, and the extraDepends list used to be able to find them. Now that there was a layer of indirection, this no longer worked: the extraDepends loaders could see their own stuff but would not traverse the OSGi barrier to peek inside their further dependencies for these. Conceptually, now we have this:

Diagram of NSF to new library dependency

A direct ClassLoader dependency allows reading of resources, but a true OSGi-type dependency does not. So the result is that I had to "promote" a bunch of META-INF/services files from the now-downstream plugins into the "*.library" ones. It all makes sense once you see how the gears are moving, but it sure threw me for a loop for a while.
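
To illustrate the mechanism that broke, this is roughly the lookup pattern such implementations use internally - the specific service name here is just an illustrative example:

import java.io.IOException;
import java.net.URL;
import java.util.Enumeration;

public class ServiceLookupExample {
	public static void dumpServices() throws IOException {
		// Implementations resolve their extensions via the thread-context
		// ClassLoader - on Domino, the NSF's ModuleClassLoader - which
		// formerly found these files through its extraDepends loaders
		ClassLoader cl = Thread.currentThread().getContextClassLoader();
		Enumeration<URL> services = cl.getResources(
			"META-INF/services/jakarta.ws.rs.ext.Providers");
		while(services.hasMoreElements()) {
			System.out.println(services.nextElement());
		}
	}
}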

Bundle and Package Renaming

Okay, now back to the changes.

Since I was already breaking things anyway, I decided this was a good opportunity to fix the names of the bundles and packages in the project's source. For example, some names were antiquated: what was once "JSF" is "Jakarta Faces", but my bundle was "org.openntf.xsp.jsf". Additionally, I was inconsistent in my hierarchy: while Transaction was in "org.openntf.xsp.jakarta.transaction", others (like Faces there) skipped the "jakarta" level of the hierarchy. These don't normally matter to developers consuming the library, but they annoyed me. Now, all of the bundles and their contained packages are within either "org.openntf.xsp.jakarta", "org.openntf.xsp.jakartaee" (for platform-wide capabilities), or "org.openntf.xsp.microprofile".

Along with this will be a couple potential breaking changes for app-level code, such as moving org.openntf.xsp.beanvalidation.XPagesValidationUtil to org.openntf.xsp.jakarta.validation.XPagesValidationUtil, but there won't be TOO many due to this change.

Jakarta Data and NoSQL Changes

This one isn't from my latest round of changes and has been the case since early in the 3.x stream, but it's worth mentioning again here. The Repository concept from Jakarta NoSQL moved from that spec to the new "Jakarta Data" spec, and so related packages changed from jakarta.nosql.mapping to jakarta.data. Additionally, since the NoSQL spec shrunk to accommodate, things like @Column changed from jakarta.nosql.mapping.Column to jakarta.nosql.Column. It makes sense as NoSQL has been an evolving spec all along, but I suspect that this will be the biggest app-code-breaking change it experiences for a good while.
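
As a quick sketch of the new shape of things - the entity and repository names here are hypothetical:

package com.example.model;

import jakarta.data.repository.DataRepository; // repository types now live in jakarta.data
import jakarta.data.repository.Repository;
import jakarta.nosql.Column;                   // was jakarta.nosql.mapping.Column
import jakarta.nosql.Entity;
import jakarta.nosql.Id;

@Entity("Person")
class Person {
	@Id
	String documentId;
	@Column("FirstName")
	String firstName;
}

@Repository
interface PersonRepository extends DataRepository<Person, String> {
}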

Release and Future Versions

My next steps are to put this through its paces now that all the issues are closed. Though I've ported everything to the JEE 10 versions, I haven't tested to make sure that most of the new features work. While JEE 10 was largely a "cleanup" release, there are a bunch of new features, particularly in Faces, which is in turn always the jankiest part of the stack on Domino.

Post-3.0, I expect that my focus will start to shift to Jakarta EE 11. For a time, I was going to be SOL with it: though Domino 14 bumped Java to 17, JEE 11 was slated to target Java 21 at a minimum. In the meantime, however, that target shifted down to 17, putting it back on the table for Domino. JEE 11 was originally slated for Q1 of this year, but it slipped to some time around summer. That fits reasonably well with my cadence here. JEE 11 is technically also a breaking release, but I suspect that it won't break features that XPages JEE users use, at least not after this hurdle here.

XPages JEE 2.15.0 and Plans for JEE 10 and 11

Fri Feb 16 15:30:40 EST 2024

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

Today, I released version 2.15.0 of the XPages Jakarta EE project. As is often the case lately, this version contains bug fixes but also a few notable features:

  • You can now specify Servlets in WEB-INF/web.xml (as opposed to just via the @WebServlet annotation). This is helpful for defining a Servlet when the actual implementation is in a JAR or when following non-annotation-based examples
  • You can now specify context-param values in WEB-INF/web.xml in the NSF and META-INF/web-fragment.xml in JAR design elements, which will be available to JSP, JSF, JAX-RS, @WebServlet-annotated Servlets, and web.xml-defined Servlets
  • Added @BooleanStorage annotation for NoSQL entities to define how boolean values are converted to note items
  • Added CRUD operations for calendar events to NoSQL, around a few new methods on Repository. This exposes some of the capabilities of NotesCalendar and can be used for, for example, providing an iCalendar feed based on a mail database. To go with that, XPages JEE also re-exports iCal4J as included in the Domino stack for NSF use, though this API is... not smooth

The first two here are focused around bringing NSFs more in line with "normal" Jakarta EE applications, while the latter two are some nice improvements for the NoSQL driver. I hope to put the last one in particular to good use - for example, OpenNTF's site will be able to provide a calendar of webinars and other events that we can manage internally using a normal Notes calendar, and that sounds nice to me.
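
For illustration, a minimal WEB-INF/web.xml of the sort those first two features support - the Servlet class and parameter names here are hypothetical:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="https://jakarta.ee/xml/ns/jakartaee" version="5.0">
	<context-param>
		<param-name>com.example.config</param-name>
		<param-value>some value</param-value>
	</context-param>

	<servlet>
		<servlet-name>ExampleServlet</servlet-name>
		<servlet-class>com.example.ExampleServlet</servlet-class>
	</servlet>
	<servlet-mapping>
		<servlet-name>ExampleServlet</servlet-name>
		<url-pattern>/exampleServlet/*</url-pattern>
	</servlet-mapping>
</web-app>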

Next Versions

I still have the 3.x branch of the project chugging along, and I think it'll be ready for a real release before too long. Since it'll be a breaking-changes release thanks to upstream changes, I'm using it as an opportunity to consolidate the sprawl of features and XPages Libraries. Currently, my plan is:

  • One for "core", covering most things in the Jakarta EE Core Profile, plus the other utility specs I've implemented: Transactions, Bean Validation (which really should be in Core in my estimation), Concurrency, Servlet, and so forth, plus Data and NoSQL
  • One for "UI", covering Jakarta Pages, Jakarta Faces, and MVC - basically, the stuff you could use to replace XPages to make an HTML-generating app in your NSF
  • One for MicroProfile, or at least the specs I've implemented so far. I'm a little tempted to wrap this in to Core, since things like OpenAPI are useful almost all the time, but it's a clean-enough separation that it'll be fine

This will require Domino 14, since Jakarta EE 10 requires at least Java 11.

That brings me to some unexpected good news: though Jakarta EE 11 was long planned to use Java 21 as its minimum version (since 21 is the current LTS), it looks like they've switched to making Java 17 the baseline. For me, this is a little sad in an idealistic sense, since it pushes things like Virtual Threads out of the realm of being a core part of JEE, but I'm very happy that I'll be able to use all JEE 11 specs in Domino 14. Even if Domino 15 used Java 21, it'd still be a long while before that came, and we'd lag behind the standard for at least a year. Instead, this puts the project back in line with upstream, and allows me personally to potentially resume committing to Jakarta NoSQL - I'd been out of the loop for a very long time when it moved to 11 and then 17 as its required version.

I don't know right now whether JEE 11 will be the same sort of breaking change for the project (which would mean a 4.x release) or if I'll be able to make it a 3.x one - the specs aren't out yet, so time will tell. The big focus of 11 will be further centralization on CDI instead of EJB, and I'm all for it.

My plan is to get 3.x out for Domino 14, based on JEE 10, as soon as time allows, and then I'll start looking into bumping to JEE 11 when it releases in the summer.

XPages JEE 2.14.0

Fri Oct 27 11:47:02 EDT 2023

Today, I released version 2.14.0 of the XPages Jakarta EE Support project. As with the last few releases, this is primarily about bug fixes and compatibility as I prepare for the big switch in 3.0, but there are some notable, if small, feature additions.

To begin with, I improved handling of reading JSON in NoSQL entities when reading from a view. This applies to the @ItemStorage(type=ItemStorage.Type.JSON) annotation on entity properties, which causes the value to be loaded and stored as JSON, useful for storing custom class types in a document. Now, such values can be read from view entries - previously, this processing was skipped for those. Of note when using this: normally, storing as JSON will automatically set the item's summary flag to false, to avoid overflowing the summary limit. However, you can add @ItemFlags(summary=true) to the property to override this behavior so that the values can show up in views.
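
As an illustration, a property stored this way might look like the following - the package for the project's extension annotations is my assumption here, so check the actual source:

package com.example.model;

// Extension annotations from this project; exact package assumed here
import org.openntf.xsp.nosql.mapping.extension.ItemFlags;
import org.openntf.xsp.nosql.mapping.extension.ItemStorage;

import jakarta.nosql.mapping.Column;
import jakarta.nosql.mapping.Entity;

@Entity("Person")
public class Person {
	public static class Preferences {
		public String theme;
		public boolean notifications;
	}

	// Serialized to a JSON item, but flagged summary so view columns can use it
	@Column("Preferences")
	@ItemStorage(type = ItemStorage.Type.JSON)
	@ItemFlags(summary = true)
	private Preferences preferences;

	// getters and setters omitted
}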

Additionally, I added the ability to use JAXRSClassContributors inside the NSF. These were originally an internal mechanism for the project to dynamically add REST endpoints and extensions, like those used by MVC and OpenAPI. Now, though, I've made it so that such classes can be registered via a file named "META-INF/services/org.openntf.xsp.jaxrs.JAXRSClassContributor" in the NSF, and also added the ability to specify configuration properties. The latter is important because, though all xsp.properties values were already inserted into the JAX-RS configuration, there was no way to provide non-string values. This came up in the context of MVC, which has a CSRF configuration property that must be an enum value.

For the final feature, I improved support for JAX-RS's status-indicating exceptions, such as NotSupportedException and BadRequestException. Previously, the project supported translating NotFoundException to a 404, but now it will also translate these other standard exceptions to their corresponding HTTP statuses.
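
For example, a hypothetical resource method using one of those standard exceptions:

package com.example.rest;

import jakarta.ws.rs.BadRequestException;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;

@Path("widgets")
public class WidgetResource {
	@GET
	@Path("{id}")
	public String getWidget(@PathParam("id") String id) {
		if(!id.matches("\\w+")) {
			// Now translated to a 400 Bad Request response
			throw new BadRequestException("Invalid widget ID: " + id);
		}
		return "widget " + id;
	}
}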

The release is otherwise rounded out by a number of bug fixes to fix problems encountered in the wild. Additionally, I added a workaround for some classpath pollution in the latest Domino 14 beta - I hope that the trouble will be gone for GA, but this project should handle it either way.

CollabSphere Workshop Schedule Update and OpenNTF Webinar

Tue Aug 22 14:06:55 EDT 2023

As I mentioned last month, I'll be participating in a couple presentations, including a workshop on the XPages Jakarta EE project.

This workshop is scheduled for Tuesday, but its time has shifted. Originally, it was scheduled for 1 PM - 3 PM local time, but it's moved up to 9 AM - 11 AM to help with some coordination. Looks like it's also in the Pullman room and not the Linnaeus room now. Same idea, but you'll probably want to bring a cup of coffee with you.

In addition, and particularly so if you won't be attending CollabSphere, I'll be doing this month's OpenNTF webinar this week, on Thursday. The plan for that is to be like a mini/less-interactive version of the workshop, but covering the same general idea of the various ways to develop JEE apps on Domino with the project. If you're interested in that, you can register here.

Modes Of App Development With XPages Jakarta EE

Fri Jul 28 11:46:50 EDT 2023

I've been working on my workshop for this year's CollabSphere, and one of the main decisions I have to make is what I'm going to focus on. The idea of the workshop is to give a bit more brass-tacks information about how to use the project: rather than just a list of features, it'll be about the specific business of building an app using it.

But how does one build an app in it? There's certainly no lack of tools available, but that leads to the opposite problem: what's the right one for your project? What's likely to be the most common path people take?

The Types

As I've been working on it, I've grouped things into four main categories, and I figured it'd be useful to enumerate them here to coordinate my thoughts and provide some general information. There aren't hard lines between these: you can use any mixture of some or all of the parts in an app, and do different mixes in different apps. These are just what I expect to be the main groupings:

  • "XPages Plus", using some new capabilities in existing or new apps with XPages-based UIs
  • REST services, focusing on providing REST endpoints for JavaScript-based apps or other servers
  • MVC and JSP, focusing on clean, lightweight UIs for document-based apps, but less ideal for complex business logic
  • JSF, building the same sorts of apps XPages is adept at, but using newer technology

"XPages Plus"

The first route is how the project got started: you keep building XPages apps but sprinkle in a few new capabilities to improve them.

For example, you could replace your managed beans defined in faces-config.xml with CDI beans, allowing you to get the quick benefit of annotation-based definitions and then the bigger benefits of @Inject, producer methods, and interceptors.

You could also start using newer EL features, like the long-desired ability to pass parameters to methods.
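
For example, a hypothetical bean using the standard CDI annotations, which EL can then call with that newer parameter syntax:

package com.example;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Named;

@ApplicationScoped
@Named("userService") // available as #{userService} in EL
public class UserService {
	public String greet(String name) {
		// Callable from a page as #{userService.greet('World')}
		return "Hello, " + name;
	}
}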

This path wouldn't necessarily require a lot of reworking of your app or changing the way you think about XPages development, but would still be something of a minor development refresh and can set you up well for future improvements.

Your data access will likely still be through the traditional xp:dominoDocument and xp:dominoView components, but you could also write beans that access data with lotus.domino or ODA, or switch to using the NoSQL driver.

REST Services

Alternatively, you could decide you want to focus your apps around REST services with either a JavaScript app in, for example, React as the front end, or providing services to remote servers.

With this, you'd largely stop using XPages design elements entirely, instead defining your services in Java classes with JAX-RS annotations. This brings huge advantages over other ways to write REST services on Domino, with the JAX-RS annotations allowing for clear, logical definition of services, their parameters, and their output. Moreover, the ancillary tooling brings things like automatic OpenAPI definitions, which would be annoying to maintain using things like the XPages-side REST controls.

This path is good if you're specifically aiming to build a JavaScript-based app, either because you just like it, because your organization decided to go that route, or if you have a larger team that splits the duties of front-end and back-end developers. It can also naturally blend into the next one.

Your data access here won't be through the XPages components, but you could still use lotus.domino or ODA classes, or switch to the NoSQL driver. That actually goes for the next two, too, so we'll just count that as assumed.

MVC and JSP

I'll admit that part of the reason I want to consider this a top-tier route is because I just personally really like it. I've had a blast writing apps like this blog and the OpenNTF site using this path, with its much-cleaner code and back-to-basics approach to HTML.

Regardless of my personal enjoyment of it, though, this has some nice advantages. The fact that MVC builds on top of JAX-RS means that it melds well with the REST-services approach above. For example, you might primarily write REST services for a JS app, but then do a set of "admin" pages with MVC. Or you might use this as part of the prototype phase: structure your app the same way you will when you expand to a multi-tier team, but start out by doing a quick UI with MVC on top of the same or related endpoints.

With this path, your app will start with Java classes with JAX-RS annotations, and then you'd mix in JSP files inside WebContent/WEB-INF. One downside to this approach is that Designer doesn't provide much help for writing JSP files. In the tooling, I bind .jsp and .tag files to the HTML editor, so you at least get normal HTML assistance, but that won't help you with specific JSP tags and EL. Fortunately, the set of tools you'll likely use in JSP is comparatively small, so you'll eventually memorize things like <c:forEach items="..." var="...">...</c:forEach> in much the same way that you could eventually write out an <xp:repeat/> in your sleep in XPages.

JSF

This one, technically tricky though it may be, is conceptually straightforward: write the same sort of apps you do with XPages, but do it with modern JSF instead. This makes a lot of sense, since JSF shares XPages's acumen for complicated forms with partial refreshes and changing state data, but has benefited from some development that didn't happen on the XPages side.

It's not a direct replacement: in particular, JSF has no knowledge of Domino data sources, so there's no xp:dominoDocument or xp:dominoView. You'd still need to do your data access via beans, as in the previous two options, likely using either lotus.domino/ODA or the NoSQL driver. Additionally, Designer really doesn't help you here - again, I map .xhtml and .jsf files to the HTML editor, but JSF components have a lot of properties to set, and so you'll be spending a lot of time referencing documentation.

Still, it's clear why this is proving to be a popular path. The development model is the same as in XPages, while the JSF stack (especially including PrimeFaces) brings a lot of amenities that aren't in XPages and are also more portable to other environments.

Conclusion

So, for now, I'm thinking of splitting up the workshop to cover each of these paths a bit. That runs the risk of feeling like too much of a grab bag, but I don't want to give the opposite impression, that the project only allows for some specific path. It's a broad platform update, accommodating many development approaches, and I want to keep that clear. Fortunately, each path has a pretty-clean pitch, and the shared components (CDI, bean validation, the REST client, etc.) build on each other well, so the idea that it's a pool of features that you can swim in is, I think, compelling.

XPages JEE 2.13.0

Fri Jul 21 11:51:52 EDT 2023

Today, I released version 2.13.0 of the XPages Jakarta EE Support project. Though there's not a single big banner feature, this one brings a number of good enhancements in a bunch of areas.

Domino 14

The first thing it brings is compatibility with Domino 14 EAP1. The goal here is to just bring the same features to that version - it doesn't bump the individual components to their Jakarta EE 10 versions yet, since that will come with breaking changes and prevent use on 12.0.2 and before.

There remains a caveat here, which is that EAP1 doesn't include a Java compiler, and so JSP doesn't work unless you shim parts of a JDK into a Domino installation. If you're not using JSP, though, you should be able to run your apps on 14 using this new build.

JSF

It turns out that Faces support is a popular feature, which makes sense: it's the most direct analogue to writing XPages, while bringing in a lot of new features. While Faces has always been tricky to keep working, this build includes some fixes for stability and usability. I'd still consider this route to be the least-proven way to do UIs with this project, but it's shaping up really nicely.

JavaSapi

Speaking of experimental features, this release comes with a new feature that builds on the JavaSapi bridge I added a bit ago: you can now specify extensions within an NSF that will participate in JavaSapi pre-processing of requests.

To do this, you can make a file named META-INF/services/org.openntf.xsp.jakartaee.jasapi.JavaSapiExtension in your NSF's Java classpath (e.g. the Code/Java folder) and have it name a JavaSapiExtension class. For example:

package javasapi;

import org.openntf.xsp.jakartaee.jasapi.JavaSapiContext;
import org.openntf.xsp.jakartaee.jasapi.JavaSapiExtension;

public class TestJavaSapiExtension implements JavaSapiExtension {
	@Override
	public Result rawRequest(JavaSapiContext context) {
		// Add a custom header to all responses
		context.getResponse().setHeader("X-InNSFCustomHeader", "Hello from NSF");
		return Result.SUCCESS;
	}
	
	@Override
	public Result authenticate(JavaSapiContext context) {
		// Custom authentication mechanism. If you use this, do it more securely!
		String overrideName = context.getRequest().getHeader("X-OverrideName");
		if(overrideName != null && !overrideName.isEmpty()) {
			context.getRequest().setAuthenticatedUserName(overrideName, "Basic");
			return Result.REQUEST_AUTHENTICATED;
		}
		return JavaSapiExtension.super.authenticate(context);
	}
}

As with any time I do anything with JavaSapi, I can't stress enough how unsupported this is. It's not even officially a feature of Domino, and I've found it fairly easy to crash the server by doing the wrong thing here. On the other hand, it's neat and fun, so... feel free to tinker with it.

NoSQL

Finally, I added some methods to DominoRepository instances to access profile and named notes:

SomeEntity profile = repository.findProfileDocument("SomeProfile", "Your Username")
	.orElseThrow(() -> new NotFoundException("Could not find profile for user"));
SomeEntity named = repository.findNamedDocument("Some Name", "Your Username")
	.orElseThrow(() -> new NotFoundException("Could not find named doc for user"));

I made them return Optional for safety's sake - I think they'll in general create the documents if they don't exist, but I wanted to leave room in the API for a future ability to only return them if they haven't previously been explicitly created.

Anyway, that's one more step in making the driver useful as a general-purpose Domino access mechanism. My goal is to make it so that you'd only need lotus.domino, ODA, or another Domino-specific API in specific edge cases. I can already do almost everything I need to, and now I'm just working down the list of less-critical features.

Next Steps

As I've been working on 2.13.0, I've also been working on the 3.0 branch, including an early beta last month. That's the branch that breaks pre-Domino-14 compatibility and bumps most components up to their Jakarta EE 10 versions. Since I can't realistically have a proper release of that until Domino 14 is out, my plan is to keep tinkering with the side branch and releasing betas from time to time.

In the meantime, I wouldn't be surprised if there's a 2.14.0. There are some tweaks and efficiency improvements I want to make, particularly for JSF, so I expect I'll have enough on my plate before Domino 14's release to get another current-line release out.

XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support

Thu May 25 15:08:31 EDT 2023

Tags: jakartaee

Last week, I put up version 2.12.0 of the XPages JEE Support project. Beyond the usual fit-and-finish bits here and there, there are two main improvements in this release.

Jakarta NoSQL Views

A while back, I caved to the necessity of explicit view use in Domino by adding the @ViewEntries and @ViewDocuments annotations that you can use in DominoRepository instances to point to a view to read. In the normal case, this works well: you generally know what the view you want to read from is, and these are made for that purpose.

However, you don't always know the view or folder you want to read from. The classic case here is a mail file: a user can make a bunch of custom views and folders, and so, if you were to make a web UI for this, you'll need some way to read these arbitrarily. So, to account for that, I added two new methods available on all DominoRepository instances:

Stream<T> readViewEntries(
	String viewName,
	int maxLevel,
	boolean documentsOnly,
	ViewQuery viewQuery,
	Sorts sorts,
	Pagination pagination
);

Stream<T> readViewDocuments(
	String viewName,
	int maxLevel,
	boolean distinct,
	ViewQuery viewQuery,
	Sorts sorts,
	Pagination pagination
);

These work similarly to using the annotations: the first three parameters in each correspond to the properties you can set on the annotations, while the last three are the implicitly-supported optional parameters on such a method. The results of calling them are the same as if you had called an annotated method - it's just that the calling code is a bit more detailed.

The other piece of this puzzle is that you'll need to know what views are available, say for a sidebar. To account for that, I added this method:

Stream<ViewInfo> getViewInfo();

This will return information about all the views and folders in the database referenced by the DominoRepository instance. It doesn't try to be too smart: it will query all views and folders, without trying to parse out selection formulas for references to the repository's form, since that would be error prone in the normal case and outright wrong in edge cases (like if you have "synthetic" entity types that don't reference a real form at all). The information you get here is what you'd likely expect: view name, whether it's a view or folder, selection formula, column info, and so forth.

Jakarta Faces and PrimeFaces

I'm calling this one "PrimeFaces" since that's the immediate goal of these changes, but it's really about allowing for third-party Faces (JSF) extensions and themes without having to jump through too many hoops.

The challenge with PrimeFaces and things like it is that, while the Java packages for JSF no longer conflict with XPages (javax.faces and jakarta.faces are clearly related, but Java considers them entirely distinct), not all of the implementation bits changed. The big one here is WEB-INF/faces-config.xml: that file goes by the same name for XPages and JSF, but any Faces lifecycle participants declared in there (ViewHandlers, PhaseListeners, etc.) are not at all compatible.

To account for this, I've carved out a subdirectory, WEB-INF/jakarta. Within that, you can put JARs in WEB-INF/jakarta/lib and make a file named WEB-INF/jakarta/faces-config.xml. When present, the new-JSF runtime will pick up on these libraries, while XPages won't, and the runtime will also redirect calls to WEB-INF/faces-config.xml from JSF to WEB-INF/jakarta/faces-config.xml. In this way, you're able to have advanced extensions for both frameworks in the same NSF.
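
Conceptually, the layout inside the NSF looks like this (the PrimeFaces JAR name is just an example):

WebContent/
  WEB-INF/
    faces-config.xml          <- seen by XPages
    jakarta/
      faces-config.xml        <- seen by the new-JSF runtime
      lib/
        primefaces-12.0.0.jar <- JSF-only libraries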

This isn't without its necessary workarounds, though. The big one comes in if you want to reference classes from these JSF-specific libraries in Java design elements. Since Designer's classpath won't know about them, your safest bet is to access them reflectively. For example, I ported a JSF example app from rieckpil.de to an NSF. In this, almost all of the code is identical - other than removing some EJB bits (which is not part of the XPages JEE project), the majority of the code was unchanged. However, one of the classes, IndexBean, directly referenced PrimeFaces model classes in order to build the bar chart. Think of that as similar to when you use com.ibm.xsp.model.DataObject in XPages code: it's a UI-specific class that can help bridge the difference between your stuff and the UI. However, since Designer doesn't know about those classes at build time, I had to change the calls to stuff like barChartModelClass.getMethod("setSeriesColors", String.class).invoke(model, "007ad9");. Not unworkable, but definitely ungainly. In a cruel twist of fate, this is exactly the sort of time when a JVM scripting language like SSJS shines. Alas.

As a final note, I waffled a bit (and am still waffling) on whether it'd be worth wrapping libraries like PrimeFaces in OSGi bundles, potentially as an optional add-on project. The way it's done here - including the JARs in your "webapp" - is more or less the standard way to do it, but real current projects would use a dependency mechanism like Maven instead of manually adding the JAR. On the other hand, there's a distinct benefit this way in that you can pick your version without having to do anything server-wide, and the use of a side directory means you don't suffer from Designer's poor performance when using JARs on a non-local server. Still, I may at least add an extension point for JSF classpath extensions at some point, since it could be useful.

Next Versions

As I mentioned earlier this month, this project is in some ways waiting for the Domino 14 beta cycle to properly begin, which will allow me to make some significant long-desired changes.

Still, there'll probably be at least another release before 3.x, which is currently named 2.13.0. Beyond ideally having no-app-changes support for Java 17, I've been doing some tinkering with JavaSapi, with the idea of being able to have your app code participate in filtering and authenticating requests. As with anything related to JavaSapi, it's sort of inherently-treacherous territory, considering it's not an official feature of Domino, but I've had some promising (if crash-prone) success so far. I'll probably also want to consolidate some of my handling of individual components and how they're configured in the NSF. There'll be a bigger push for that in 3.x, but for now there's still definitely room for me to go back and clean up some of the ways I've gone about things. The specs I added early (CDI, JAX-RS, etc.) are a bit more ad-hoc than some of the newer ones, with the newer ones coalescing more around the ComponentModule part (Domino's Java conception of a running app, NSF or otherwise) and less around the XPages ApplicationEx part. There's an inherent amount of necessary grime with this stack, but I have some ideas for at least some cleaning.

Otherwise, I'm mostly champing at the bit to do my big revamps in 3.x: lowering the count of individual XPages Libraries that separate the features, bumping specs and implementations to their next major versions, improving the code with Java 9 through 17 enhancements, and so forth. That should be fun.

The Loose Roadmap for XPages Jakarta EE Support

Thu May 04 10:29:44 EDT 2023

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

At Engage, HCL officially announced Java 17 in Domino 14 (I'm sure they announced other things too, but I have my priorities). This will allow me to do a lot in pretty much all of my projects, but it's particularly pertinent to XPages JEE.

Currently, the project generally targets Jakarta EE 9, which came out in late 2020 and was "just" a switch from javax.* to jakarta.*, with no official new features. However, Jakarta EE 10 came out a year ago - in addition to bringing a raft of new features, it also bumped the minimum Java version to Java 11, pushing it outside of Domino's realm. Accordingly, I've had to hold off on a lot of major- and minor-version bumps in the XPages JEE project as new releases started being compiled for Java 11. Once V14 is out, though, I'll be able to move to the current JEE platform... at least until JEE 11 comes out next year and requires Java 21, anyway.

So I've been working on how I'm going to approach this, and what I'm thinking is that I'll do it in two phases: first, a final 2.x release that provides Java 17/Domino 14 compatibility for existing components, and then a new 3.x breaking-changes release to bring in Jakarta EE 10 components.

The Final 2.x Release

I currently have this penciled in for the next release, 2.12.0, but that may change if I decide I want to get a real 2.12.0 release out before Domino 14 is at least in stable beta form. Let's call it "2.99.0" for now.

The idea here will be that I'll want to make sure all existing code in NSFs continues to work unchanged: upgrade your server to V14, install 2.99.0, and your apps keep working. In theory, this shouldn't be too complex. There's some shimming needed for Weld (the CDI implementation) to account for changes from Project Jigsaw in Java 9 and later, and there might be some stuff around AccessController, but in general I expect it'll just be some tweaks here and there. Time will tell, of course.

Once that's out, I plan to not look back (unless there's demand, I suppose). The switch to Java 17 is a huge deal, and I don't think it'll be worth spending much more time with Java 8 once it's no longer required. The 2.x branch is already, I feel, in a pretty good place, so I'll feel comfortable having a stable final version.

The Breaking 3.0 Release

Then, the plan will be to start down the path of 3.x with breaking changes - not everything, but some. For one, JEE 10 has a handful of backwards-incompatible changes. Those are mostly for legacy true-JEE code, though, and the main ones that XPages JEE code will likely want to be aware of will be the switch of XML namespaces to shorter representations. That will affect JSP and JSF code, but the old URIs (the jcp.org ones) will continue to work, at least for a while.

Most of the breaking changes will probably happen internally. I've talked for a long while now about my desire to do some reorganization of the project. The big one is wrangling the proliferation of Eclipse Features and XPages Libraries. Anyone who has installed the project in Designer is well aware of just how many times you have to click "yes, I want to install the thing I'm installing", and that alone is enough to warrant a reorganization. Beyond that, though, I've had to take care to try to make it so that the individual components don't depend on each other unnecessarily. There's a certain amount of good discipline that provides, but it eventually wears a bit thin.

I'm not quite sure what form the consolidation will take, but it'll probably be something like three features: "core", "extended", and "MicroProfile". "Core" would probably roughly map to the actual Jakarta Core Profile, plus things that I find essentially obligatory like Bean Validation. "Extended" would be all the things like JSP and JSF, the "leaves" on the dependency tree: they depend on core features, but nothing depends on them. Then "MicroProfile" would be, well, MicroProfile features. The only thing still giving me pause is that there's not too much case for not installing all of these all the time anyway - if you don't want to use, say, JSF, you don't have to; additionally, it's not like Domino is a svelte cloud-native mini server meant to be deployed a thousand times in a cluster, so having the extra bundles sitting there isn't really onerous. We'll see. I hem and haw a lot on this, but eventually I'll have to make a decision.

Regardless of what form that takes, I expect that the changes to in-NSF code will be either minimal or none - for users of the project, it'll mostly be a matter of making sure to fully uninstall the old plugins before an upgrade and then tweaking Xsp Properties to select whatever the new form of the XPages Libraries ends up being.

Side Note: Jakarta NoSQL and Data

One interesting aspect of this move will be the path Jakarta NoSQL has been on. Though I've included it in the XPages JEE project for a little while now (and continue to heavily expand on it), it's always been technically a beta release. It's clearly proven itself stable even in its beta form, but it's going through a shift in the run-up to Jakarta EE 11. Specifically, the higher abstraction levels - the Repository interface and friends - are moving to a new project, Jakarta Data. The idea of that project will be that it will be able to sit on top of Jakarta NoSQL and other storage types, namely JPA.

It's going to be very neat, but it's created a bit of a pickle for me. Since it's targeting Jakarta EE 11, the releases of it and NoSQL are going to require at least Java 21, and there's no word on when Domino will support that.

One option would be to stick with what I have now for the foreseeable future: a mildly-forked version of Jakarta NoSQL 1.0.0-b4. It's a workhorse and has been doing a good job, and it'd mean that app code wouldn't have to change. I'm not crazy about this for obvious reasons: I don't want to have one component stuck way behind while all the other parts get a nice jump forward, even if it works.

The other main option I'm considering is sliding forward to another beta release and landing there until Java 21 support shows up. The current development versions of the Data spec and JNoSQL with its implementation target Java 17, so I'll probably go with whatever the last beta is before the official switch to 21. Though it's tough to predict the future, that will probably end up being API-wise similar enough to the release forms of them that future jumps won't be difficult. We shall see, anyway.

Timeline

Anyway, the timeline for this is a little vague, and will mostly depend on when the Domino 14 betas come out and whether they contain anything show-stopping. My hope is to be able to have something that passes all the test cases ASAP with betas and then to have it continue to be stable through to the actual release.

I'm looking forward to leaving Java 8 behind for good, though, that much is certain.

Integrating External Java Apps With Keep And Keycloak

Wed May 03 09:43:59 EDT 2023

Last year, I wrote a post describing some early work on a Jakarta NoSQL driver for the Domino REST API (hereafter referred to as "Keep" to avoid ambiguity with the various other Domino REST APIs).

I've since picked back up on the project and similar aspects, and I figured it'd be useful to return to provide some more details.

OpenAPI

For starters, I mentioned in passing my configuration of the delightful openapi-generator tool, but didn't actually detail my configuration. It's changed a little since that first work, since I found the option to generate for the jakarta.* namespace.

I use a config.yaml file like:

additionalProperties:
  library: microprofile
  dateLibrary: java8
  apiPackage: org.openntf.xsp.nosql.communication.driver.keep.client.api
  invokerPackage: org.openntf.xsp.nosql.communication.driver.keep.client
  modelPackage: org.openntf.xsp.nosql.communication.driver.keep.client.model
  useBeanValidation: true
  useRuntimeException: true
  openApiNullable: false
  microprofileRestClientVersion: "3.0"
  useJakartaEe: true

That will generate client interfaces that will mostly compile in a plain Jakarta EE project. The files have some references to an implementation-specific MIME class to work around JAX-RS's historical lack of one, but those imports can be safely deleted.
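
For reference, generating the client is then a matter of pointing the CLI at the spec with that config. This is a rough sketch - the spec file name and output directory are placeholders for whatever your project uses:

$ openapi-generator-cli generate -g java -c config.yaml \
    -i keep-openapi.json -o generated-client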

Keycloak/OIDC in Keep

I also mentioned only in passing that you could configure Keep to trust the Keycloak server's public keys with a link to the documentation. Things on the Keep side have expanded since then, and you can now configure Keep to reference Keycloak using Vert.x's internal OIDC support, and also skip the step of creating special fields in your person docs to house the Notes-format DN. For example, in a Keep JSON config file:

{
	"oidc": {
		"my-keycloak": {
			"active": true,
			"providerUrl": "https://my.keycloak.server/auth/realms/myrealm",
			"clientId": "keep-app",
			"clientSecret": "<my secret>",
			"userIdentifierInLdapFormat": true
		}
	}
}

That will cause Keep to fetch much of the configuration information from the well-known endpoint Keycloak exposes, and also to map names from Keycloak from the LDAP-style format of "cn=Foo Fooson,o=SomeOrg" to Domino-style "CN=Foo Fooson/O=SomeOrg". This is useful even when using Domino as the Keycloak LDAP backend, since Domino does the translation in the other direction first.

Keycloak/OIDC in Jakarta EE

In the original post in the series, talking about configuring app authentication for the AppDev Pack, I talked about Open Liberty's openidConnectClient feature, which lets you configure OIDC at the server level. That's neat, and I remain partial to putting authentication at the server level when it makes sense, but it's no longer the only game in town. The version of Jakarta Security that comes with Jakarta EE 10 supports OIDC inside the app in a neat way, and so I've switched to using that.

To do that, you make a CDI bean that defines your OIDC configuration - this can actually be on a class that does other things as well, but I like putting it in its own place:

package config;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.security.enterprise.authentication.mechanism.http.OpenIdAuthenticationMechanismDefinition;
import jakarta.security.enterprise.authentication.mechanism.http.openid.ClaimsDefinition;

@ApplicationScoped
@OpenIdAuthenticationMechanismDefinition(
	clientId="${oidc.clientId}",
	clientSecret="${oidc.clientSecret}",
	redirectURI="${baseURL}/app/",
	providerURI="${oidc.domain}",
	claimsDefinition = @ClaimsDefinition(
		callerGroupsClaim = "groups"
	)
)
public class AppSecurity {
}

There are a couple EL references here. baseURL is provided for "free" by the framework, allowing you to say "wherever the app is hosted" without having to hard-code it. oidc here refers to a bean I made that's annotated with @Named("oidc") and has getters like getClientId() and so forth. You can make a class like that to pull in your OIDC config and secrets from outside, such as a resource file, environment variables, or so forth. providerURI should be the same base URL as Keep uses above.
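
As a minimal sketch, such a bean might look like the following, with the environment-variable names being arbitrary stand-ins for wherever you actually keep your configuration:

package config;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Named;

@ApplicationScoped
@Named("oidc")
public class OidcConfig {
	// These getters back the ${oidc.clientId}, ${oidc.clientSecret}, and ${oidc.domain} references above
	public String getClientId() {
		return System.getenv("OIDC_CLIENT_ID");
	}
	public String getClientSecret() {
		return System.getenv("OIDC_CLIENT_SECRET");
	}
	public String getDomain() {
		return System.getenv("OIDC_DOMAIN");
	}
}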

Once you do that, you can start putting @RolesAllowed annotations on resources you want protected. So far, I've been using @RolesAllowed("users"), since my Keycloak puts all authenticated users in that group, but you could mix it up with "admin" or other meaningful roles per endpoint. For example, inside a JAX-RS class:

@Path("superSecure")
@GET
@Produces(MediaType.TEXT_PLAIN)
@RolesAllowed("users")
public String getSuperSecure() {
	return "You're allowed in!";
}

When accessing that endpoint, the app will redirect the user to Keycloak (or your OIDC provider) automatically if they're not already logged in.

Accessing the Token

In my previous posts, I mentioned that I was able to access the OIDC token that the server used by setting accessTokenInLtpaCookie in the Liberty config, and then getting oidc_access_token from the Servlet request object's attributes, and that that only showed up on requests after the first.

The good news is that, with the latest Jakarta Security, there's a standardized way to do this. In a CDI bean, you can inject an OpenIdContext object to get the current user's token:

package bean;

import jakarta.enterprise.context.RequestScoped;
import jakarta.inject.Inject;
import jakarta.security.enterprise.identitystore.openid.OpenIdContext;

@RequestScoped
public class OidcContextBean {
  
	@Inject
	private OpenIdContext context;
  
	public String getToken() {
		// Note: if you don't restrict everything in your app, do a null check here
		return context.getAccessToken().getToken();
	}
}

There are other methods on that OpenIdContext object, providing access to specific claims and information from the token, which would be useful in other situations. Here, I only really care about the token as a string, since that's what I'll send to Keep.

With that token in hand, you can build a MicroProfile Rest Client using the generated API interfaces. For example:

import jakarta.inject.Inject;
import jakarta.ws.rs.client.ClientRequestFilter;
import jakarta.ws.rs.core.HttpHeaders;

import org.eclipse.microprofile.rest.client.RestClientBuilder;

public class SomeClass {
	/* snip */
	@Inject
	private OidcContextBean oidcContext;

	/* snip */

	private DataApi getDataApi() {
		return RestClientBuilder.newBuilder()
			.baseUri("http://your.keep.server:8880/api/v1/")
			.register((ClientRequestFilter) (ctx) -> {
				ctx.getHeaders().add(HttpHeaders.AUTHORIZATION, "Bearer " + oidcContext.getToken()); //$NON-NLS-1$
			})
			.build(DataApi.class);
	}
}

That will cascade the OIDC token used for your app login over to Keep, allowing your app to access data on behalf of the logged-in user smoothly.

I've been kicking the tires on some example apps and fleshing out the Jakarta NoSQL driver using this, and it's been going really smoothly so far. Eventually, my goal is to make it so that you can take code using the JNoSQL driver for Domino inside an NSF with the XPages JEE project and move it with minimal changes over to a "normal" JEE app using Keep for access. There'll be a bit of rockiness in that the upstream JNoSQL API is changing a bit to adapt to Jakarta Data, and will do so in time for JEE to require Java 21, but at least it shouldn't be too painful a transition.

XPages JEE 2.11.0 and the Javadoc Provider

Thu Apr 20 09:47:58 EDT 2023

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

Yesterday, I put two releases up on OpenNTF, and I figure it'd be worth mentioning them here.

XPages Jakarta EE Support

The first is a new version of the XPages Jakarta EE Support project. As with the last few, this one is mostly iterative, focusing on consolidation and bug fixes, but it added a couple neat features.

The largest of those is the JPA support I blogged about the other week, where you can build on the JDBC support in XPages to add JPA entities. This is probably a limited-need thing, but it'd be pretty cool if put into practice. This will also pay off all the more down the line if I'm able to add in Jakarta Data support in future versions, which expands the Repository idiom currently in the NoSQL build I use to cover both NoSQL and RDBMS databases.

I also added the ability to specify a custom JsonbConfig object via CDI to customize the output of JSON in REST services. That is, if you have a service like this:

@GET
@Produces(MediaType.APPLICATION_JSON)
public SomeCustomObject get() {
	return findSomeObject();
}

In this case, the REST framework uses JSON-B to turn SomeCustomObject into JSON. The defaults are usually fine, but sometimes (either for personal preference or for migration needs) you'll want to customize it - particularly to change the behavior from using bean getters for properties to reading object fields directly, as Gson does.
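
For example, here's a minimal sketch of such a producer - switching JSON-B to Gson-style field access, though the specific customization is up to you:

package config;

import java.lang.reflect.Field;
import java.lang.reflect.Method;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.inject.Produces;
import jakarta.json.bind.JsonbConfig;
import jakarta.json.bind.config.PropertyVisibilityStrategy;

@ApplicationScoped
public class JsonbConfigProvider {
	@Produces
	public JsonbConfig getJsonbConfig() {
		// Serialize object fields directly instead of going through bean getters
		return new JsonbConfig()
			.withPropertyVisibilityStrategy(new PropertyVisibilityStrategy() {
				@Override
				public boolean isVisible(Field field) {
					return true;
				}
				
				@Override
				public boolean isVisible(Method method) {
					return false;
				}
			});
	}
}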

I also expanded view support in NoSQL by adding a mechanism for querying views with full-text searches. This is done via the ViewQuery object that you can pass to a repository method. For example, you could have a repository like this:

public interface EmployeeRepository extends DominoRepository<Employee, String> {
	@ViewEntries("SomeView")
	Stream<Employee> listFromSomeView(Sorts sorts, ViewQuery query);
}

Then, you could perform a full-text query and retrieve only the matching entries:

Stream<Employee> result = repo.listFromSomeView(
	Sorts.sorts().asc("lastName"),
	ViewQuery.query()
		.ftSearch("Department = 'HR'", Collections.singleton(FTSearchOption.EXACT))
);

Down the line, I plan to add this capability for whole-DB queries, but (kind of counter-intuitively) that would get a bit fiddlier than doing it for views.

XPages Javadoc Provider

The second one is a new project, the XPages Javadoc Provider. This is a teeny-tiny project, though, not even containing any Java code. This is a plugin for either Designer or normal Eclipse and it provides Javadoc for some standard XPages classes - specifically, those covered in the official Javadoc for Designer and the XPages Extensibility APIs. This covers things like com.ibm.commons and the core stuff from com.ibm.xsp, but doesn't cover things like javax.faces.* or lotus.domino.

The way this works is that it uses Eclipse's Javadoc extension point to tell Designer/Eclipse that it can find Javadoc for a couple bundles via the hosted version, really just linking the IDE to the public HTML. I went this route (as opposed to embedding the Javadoc in the plugin) because the docs don't explicitly say they're redistributable, so I have to treat them as not. Interestingly, the docs are actually still hosted at public.dhe.ibm.com. If HCL publishes them on their site or makes them officially redistributable, I'll be able to update the project, but for now it's relying on nobody at IBM remembering that they're up there.

In any event, it's not a huge deal, but it's actually kind of nice. Being able to have Javadoc for things like XspLibrary removes a bit of the guesswork in using the API and makes the experience feel just a bit better.

JPA in the XPages Jakarta EE Project

Sat Mar 18 11:55:36 EDT 2023

For a little while now, I'd had an issue open to implement Jakarta Persistence (JPA) in the project.

JPA is the long-standing API for working with relational-database data in JEE and is one of the bedrocks of the platform, used by presumably most normal apps. That said, it's been a pretty low priority here, since the desire to write applications based on a SQL database but running on Domino could be charitably described as "specialized". Still, the spec has been staring me in the face, maybe it'd be useful, and I could pull a neat trick with it.

The Neat Trick

When possible, I like to make the XPages JEE project act as a friendly participant in the underlying stack, building on good use of the ComponentModule system, the existing app lifecycle, and so forth. This is another one of those areas: XPages (re-)gained support for relational data over a decade ago and I could use this.

Tucked away in the slide deck that ships with the old ExtLib is this tidbit:

Screenshot of a slide, highlighting 'Available using JNDI'

JNDI is a common, albeit creaky, mechanism used by app servers to provide resources to apps running on them. If you've done LDAP from Java, you've probably run into it via InitialContext and whatnot, but it's used for all sorts of things, DB connections included. What this meant is that I could piggyback on the existing mechanism, including its connection pooling. Given its age and lack of attention, I imagine that it's not necessarily the absolute best option, but it has the advantage of being built in to the platform, limiting the work I'd need to do and the scope of bugs I'd be responsible for.
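
The consuming side is the standard JNDI dance. As a rough sketch - the resource name here is a placeholder for whatever your XPages JDBC configuration actually exposes:

import java.sql.Connection;
import java.sql.SQLException;

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class ConnectionExample {
	public void doDatabaseWork() throws NamingException, SQLException {
		InitialContext ctx = new InitialContext();
		// "jdbc/postgresql" stands in for the name of your JDBC connection config
		DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/postgresql");
		try (Connection conn = ds.getConnection()) {
			// ... do JDBC work with the pooled connection
		}
	}
}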

Implementation

With one piece of the puzzle taken care for me, my next step was to actually get a JPA implementation working. The big, go-to name in this area is Hibernate (which, incidentally, I remember Toby Samples getting running in XPages long ago). However, it looks like Hibernate kind of skipped over the Jakarta EE 9 target with its official releases: the 5.x series uses the javax.persistence namespace, while the 6.x series uses jakarta.persistence but requires Java 11, matching Jakarta EE 10. Until Domino updates its creaky JVM, I can't use that.

Fortunately, while I might be able to transform it, Hibernate isn't the only game in town. There's also EclipseLink, another well-established implementation that has the benefits of having an official release series targeting JEE 9 and also using a preferable license.

And actually, there's not much more to add on that front. Other than writing a library to provide it to the NSF and a resolver to account for OSGi's separation, I didn't have to write a lot of code.

Most of what I did write was the necessary code and configuration for normal JPA use. There's a persistence.xml file in the normal format (referencing the source made by the XPages JDBC config file), a model class, and then access using the normal API.
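
As a sketch, that persistence.xml might look something like this - the unit name, data-source name, and model class are all placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="https://jakarta.ee/xml/ns/persistence" version="3.0">
	<persistence-unit name="relationaldb" transaction-type="RESOURCE_LOCAL">
		<non-jta-data-source>java:comp/env/jdbc/postgresql</non-jta-data-source>
		<class>model.Company</class>
	</persistence-unit>
</persistence>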

In a normal full app server, the container would take care of some of the dirty work done by the REST resource there, and that's something I'm considering for the future, but this will do for now.

Writing Tests

One of the neat side effects is that, when I went to write the test case for this, I got to make better use of Testcontainers. I'm a huge fan of Testcontainers and I've used it for a good while for my IT suites, but I've always lost a bit by not getting to use the scaffolding it provides for common open-source projects. Now, though, I could add a PostgreSQL container alongside the Domino one:

postgres = new PostgreSQLContainer<>("postgres:15.2")
	.withUsername("postgres")
	.withPassword("postgres")
	.withDatabaseName("jakarta")
	.withNetwork(network)
	.withNetworkAliases("postgresql");

Here, I configure a basic Postgres container, and the wrapper class provides methods to specify the extremely-secure username and password to use, as well as the default database name. Here, I pass it a network object that lets it share the same container network space as the Domino server, which will then be able to refer to it via TCP/IP as the bare name "postgresql".

The remaining task was to write a method in the test suite to make sure the table exists. You can do this in other ways - Testcontainers lets you run init scripts via URL, for example - but for one table this suits me well. In the test class where I want to access the REST service I wrote, I made a @BeforeAll method to create the table:

@BeforeAll
public static void createTable() throws SQLException {
	PostgreSQLContainer<?> container = JakartaTestContainers.instance.postgres;
		
	try(Connection conn = container.createConnection(""); Statement stmt = conn.createStatement()) {
		stmt.executeUpdate("CREATE TABLE IF NOT EXISTS public.companies (\n"
				+ "	id BIGSERIAL PRIMARY KEY,\n"
				+ "	name character varying(255) NOT NULL\n"
				+ ");");
	}
}

Testcontainers takes care of some of the dirty work of figuring out and initializing the JDBC connection for me. That's not particularly-onerous work, but it's one of the small benefits you get when you're doing the same sort of thing other users of the tool are doing.

With that, everything went swimmingly. Domino saw the Postgres container (thanks to copying the JDBC driver to the classpath) and the JPA access worked just the same as it does in my real environment.

Like with the implementation, there's not much there beyond "yep, do the things the docs say and it works". Though there were the usual hurdles that I've gotten used to with adding things like this to Domino, this all went pleasantly smoothly. I may build on this in the future - such as the aforementioned server-managed JPA bits - but that will depend on whether I or others have need. Regardless, I'm glad it's in there.

XAgents to Jakarta REST Services

Sun Feb 05 15:16:48 EST 2023

  1. Code-First REST APIs With XPages Jakarta EE Support
  2. Code-First REST APIs Followup: OpenAPI
  3. XAgents to Jakarta REST Services

For a good long time now, XAgents have been one of the common ways to do non-HTML output in an XPages environment - JSON, mostly. I think the technique was codified and the term coined by Stephan Wissel back in 2008 and the idea has been the same since.

Effectively, an XAgent lets you write a Servlet but with a bit more scaffolding. Though XPages has a path to use Servlets officially, that method is more out-of-the-way than XAgents and doesn't (without further hoop jumping) give you some niceties like sessionAsSigner.

However, though they're venerable and sort of convenient, XAgents are lacking in a number of ways. For one, stuffing them inside an XPage is ungainly: even if the XPage just calls out to some Java code, you're polluting your UI-element space with something kind of unrelated, forcing you to name your XPages stuff like "apiEmployees.xsp". More importantly, though, it doesn't provide a lot of affordances for the sort of work you'd actually want to do when writing a REST API. Though XPages has a couple mechanisms for generating JSON, little of that is exposed by the XPage editor environment, and I suspect that many or most XAgents writing JSON commit the cardinal sin of doing so through direct string concatenation, likely often without much escaping. Further, there's no built-in mechanism for picking apart path segments or multi-branched routing.

So I figured today is a good opportunity to talk a bit about one of the XPages Jakarta EE Support project's flagship features: writing REST services that consume and emit JSON. This post covers some of the same ground I've talked about before, but sometimes it's useful to re-contextualize this sort of thing.

The XAgent Way

To level-set the discussion, I'll give a starting example in an XAgent, and we'll keep it simple. The idea here will be that you have Person documents in a database and you want to write an API that will take a UNID and emit the corresponding person's first and last name. This is very similar to the Employees example in the "Code-First REST API" example project from the JEE repo, but without diving into more-complex parts or the Jakarta NoSQL data layer.

In an XAgent, you might do something like this:

<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core" viewState="nostate" rendered="false">
	<xp:this.afterRenderResponse><![CDATA[#{javascript:
		var resp = facesContext.getExternalContext().getResponse();
		resp.setContentType("application/json");
		
		var writer = facesContext.getResponseWriter();
		
		var doc = database.getDocumentByUNID(param.unid);
		var result = toJson({
			"firstName": doc.getItemValueString("FirstName"),
			"lastName": doc.getItemValueString("LastName")
		});
		writer.write(result);
		
		writer.endDocument();
		facesContext.responseComplete();
	}]]></xp:this.afterRenderResponse>
</xp:view>

You'd put that in an XPage and call it like:

$ curl http://your.server/someapp.nsf/apiPeople.xsp?unid=68D1CAA80F65780D8525894D006B1CE7
{"lastName":"Fooson","firstName":"Foo"}

That does the job well enough. However, this will get gangly very fast. If you want to expand on the types of data the Person document contains, add more operations to the API, add other query parameters, or so forth, you'll have to either have a giant blob of code here, spin it off to an SSJS script library, or move it to Java classes. And, all along the way, the dev environment won't be giving you any help with the process - it knows nothing about writing REST APIs, and so it's all "manual". Better than with a classic agent, but not great.

Jakarta Way, Take 1

So let's move this over to Jakarta EE. We'll start by doing basically a concept-for-concept move: just take the above and make a REST class out of it.

Such a class could look something like this:

package rest;

import com.ibm.xsp.extlib.util.ExtLibUtil;

import jakarta.json.Json;
import jakarta.json.JsonObject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.core.MediaType;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;

import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.NotesException;

@Path("v1/person")
public class PersonResourceV1 {
	
	@Path("{unid}")
	@GET
	@Produces(MediaType.APPLICATION_JSON)
	public JsonObject get(@PathParam("unid") String unid) throws NotesException {
		Database database = ExtLibUtil.getCurrentDatabase();
		Document doc = database.getDocumentByUNID(unid);
		
		return Json.createObjectBuilder()
			.add("firstName", doc.getItemValueString("FirstName"))
			.add("lastName", doc.getItemValueString("LastName"))
			.build();
	}
}

This one you'd call in a similar way, and get similar results:

$ curl http://your.server/someapp.nsf/xsp/app/v1/person/68D1CAA80F65780D8525894D006B1CE7
{"firstName":"Foo","lastName":"Fooson"}

There's a bit more boilerplate - it is Java - but you can already see some of the benefits. The URL is a little more REST-like and the code is much more task-focused. Instead of disabling all the normal features of the XPage and manually emitting text, a large amount of the code is explicitly describing what you intend to do in a REST service. The @Path annotations describe the components of the URL, and the use of @Path("{unid}") lets us name one of those parts without having to do a query string. Similarly, the @GET and @Produces(MediaType.APPLICATION_JSON) lines directly say what's going on: you can do a GET request to the URL and get JSON back.

These annotations pay off in a couple ways. First of all, the code will be a lot easier to understand when you come back later. Imagine if your API also handled new documents, deletions, and modifications - with an XAgent, you'd have branching paths and would have to rely on either good naming or commenting to mentally traverse it. With this, you would have other methods with similar annotations - @PUT, @POST, @DELETE - and would see exactly where requests are going to go.

And it's not just for you the programmer (or for the next human replacing you when you retire to a beach somewhere): these annotations mean something to tools that process it as well. One such tool comes with the XPages JEE project: the MicroProfile OpenAPI generator. With this class present, you automatically get a useful OpenAPI spec:

$ curl http://your.server/someapp.nsf/xsp/app/openapi.yaml
---
openapi: 3.0.3
info:
  title: XAgent Comparison
servers:
- url: http://your.server/someapp.nsf/xsp/app
paths:
  /v1/person/{unid}:
    get:
      parameters:
      - name: unid
        in: path
        required: true
        schema:
          type: string
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                type: object

This sort of thing pays off tremendously once you start working with a client JS app, especially with a larger development team.

Jakarta Way, Take 2

But we can do better than this. As it is, the direct translation from the XAgent still left the actual output pretty vague. We at least know it's emitting a JSON object, but that's the extent of it. Moreover, the app code wastes some conceptual time actually building the JSON object, which is okay but unnecessary. Let's bring JSON Binding into the mix. This will increase the size of the code, but will pay off in conceptual cleanliness and in future work.

That could look like:

package rest;

import com.ibm.xsp.extlib.util.ExtLibUtil;

import jakarta.validation.constraints.NotEmpty;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.core.MediaType;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;

import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.NotesException;

@Path("v2/person")
public class PersonResourceV2 {
	
	// Normally, this class would be in a separate file
	public static class Person {
		private String firstName;
		private @NotEmpty String lastName;
		
		public String getFirstName() {
			return firstName;
		}
		public void setFirstName(String firstName) {
			this.firstName = firstName;
		}
		public String getLastName() {
			return lastName;
		}
		public void setLastName(String lastName) {
			this.lastName = lastName;
		}
	}
	
	@Path("{unid}")
	@GET
	@Produces(MediaType.APPLICATION_JSON)
	public Person get(@PathParam("unid") String unid) throws NotesException {
		Database database = ExtLibUtil.getCurrentDatabase();
		Document doc = database.getDocumentByUNID(unid);
		
		Person result = new Person();
		result.setFirstName(doc.getItemValueString("FirstName"));
		result.setLastName(doc.getItemValueString("LastName"));
		return result;
	}
}

Calling this will work the same way as last time, but with a "v2" to indicate our new-and-improved back end:

$ curl http://your.server/someapp.nsf/xsp/app/v2/person/68D1CAA80F65780D8525894D006B1CE7
{"firstName":"Foo","lastName":"Fooson"}

What does this get us? Well, for one, we're no longer explicitly working with JSON, which is nice. We could change the media type to XML and the output type will automatically adapt, and the same will go for any future formats we write an adapter for. Additionally, this will work better with a more-structured database access layer. I'll leave that part out here, but the aforementioned "Code-First REST API" example shows that.

Some more of the payoff shows up when we check the generated OpenAPI spec. If we make the same call as above, the output will be more descriptive now:

---
openapi: 3.0.3
info:
  title: XAgent Comparison
servers:
- url: http://your.server/someapp.nsf/xsp/app
paths:
  /v2/person/{unid}:
    get:
      parameters:
      - name: unid
        in: path
        required: true
        schema:
          type: string
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Person'
components:
  schemas:
    Person:
      required:
      - lastName
      type: object
      properties:
        firstName:
          type: string
        lastName:
          minLength: 1
          type: string

For someone consuming this spec, this is starting to get really good. No longer does the output just say it's a generic JSON object: now it describes it as "Person", shows the two properties that will be output, and declares that lastName will always be at least one character both for input and output. As you add more operations and types, this spec will continue to grow, naturally piggybacking on the annotations and types you write.

Conclusion

Strictly speaking, you can do the same stuff with an XAgent that you can with the XPages JEE project. You could manage the internal complexity and you could manually write the OpenAPI spec, but it would be much, much harder to do, and I think it'd be a safe bet to say that very few XAgent-based APIs have specs to go with them anyway.

Especially as the scope of your app grows, the development and maintenance experience with the JEE approach is night-and-day compared to classical mechanisms like XAgents. If you're still writing APIs that way or similar, I definitely recommend you give this path a try.

XPages Jakarta EE 2.9.0 and Next Steps

Tue Nov 22 12:53:21 EST 2022

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

Keeping with my productive week off, today I released version 2.9.0 of the XPages Jakarta EE Support project. Similar to the previous release, this one contains new features primarily related to Jakarta NoSQL, but also has some improvements for JSF and a bunch of bug fixes and compatibility improvements.

Jakarta NoSQL

The improvements to the JNoSQL driver come from some needs I came across when moving older lotus.domino/ODA-based code to using JNoSQL repositories. In particular, I added the remaining applicable view entry properties as available fields to map, added better support for reading note IDs, and fetching documents by note ID.

JSF

While JSF support remains limited by not having a proper way to add in third-party component libraries like PrimeFaces, it's still a potentially-compelling tool in an NSF as an alternative to XPages in some cases. Accordingly, I fixed a few bugs I had run into when loading pages after modifying the NSF design. Additionally, I fixed up support for JSF as an MVC view engine. It now properly joins JSP as a mechanism for rendering your output with an MVC structure, and I think there's some real potential there.

Bug Fixes and Compatibility

Most of the other closed issues deal with a few bugs here and there, and in particular involve some improvements for running apps in XPiNC and on a server with Domino Leap also installed. I don't use XPiNC anymore and haven't tried Leap, so I greatly appreciate bug reports specific to these and the assistance in tracking down the trouble.

The Future and Next Steps

I'm pondering now what the next release of the project will focus on. I have no shortage of feature ideas, and there are a few potentially-disruptive changes I'd like to make.

Unfortunately, those changes will be largely confined to improving the support for the specs that are already present and not advancing to new versions. The predicted Java-version wall arrived: Jakarta EE 10 is out and requires Java 11 and above. Since Domino remains mired in Java 8, that means that new versions of the specs and implementations are hard-incompatible until that changes.

On the plus side, there's still a lot of improvements I can make with Jakarta EE 9 as the baseline.

Reorganization

One big one I've been thinking about is a reorganization of the individual libraries that make up the project. The way it's been designed, almost every spec has its own Equinox Feature and XPage Library to go with it. This was fine early on when it was just CDI, EL, and JAX-RS, but it's grown annoying: installing the project in Designer is a seemingly endless process of approving one plug-in at a time, and the list of libraries to check in Xsp Properties is interminable. More critically, being able to selectively turn on and off specs like this doesn't make sense anymore. CDI has grown so important to Jakarta EE in general and this spec in particular that it doesn't make sense to not have it present if you're going to use this project at all. It's a foundational component of so many other parts and is essentially The Way to do Jakarta-based development.

So I'm thinking I'm going to reorganize the projects into fewer features and libraries, which will be a breaking change that will necessitate a bump to 3.0 - fortunately, the numbers line up well for that. I have a few potential options here:

  1. Just lump them all into one. You'd have basically one big switch to say "this is a JEE project" and everything would be on. The virtue here is that this is how I already work and is essentially the recommended way to do things. Additionally, as far as I know, while having additional components may slow first load (though not as much as other parts), I don't think they have a significant impact if enabled but unused during runtime.
  2. Try to line the specs up with one of the existing Jakarta Profiles. Those profiles are meant to be curated selections of useful specs, and this project has enough to implement what in newer versions is deemed the Core Profile. The trouble with this, though, is that the Core Profile is very much geared to be the shared subset with MicroProfile and similar and is a bit thin for Domino's monolith-focused development style. The Web and Full profiles, on the other hand, require "traditional" APIs like EJB that are not present in this project.
  3. Break them apart into my own "core" and "optional" features. For example, it doesn't make sense to use this without JAX-RS, CDI, and Bean Validation enabled, but JSF is entirely independent of the other specs and is among the least likely to be used in practice for now. This would also allow me to establish a running flow where "experimental" features start out as optional add-ins and then eventually make their way to core.

I'm currently waffling between #1 and #3, with a slight lean towards #1. If I can be sure that either everything or nothing is present, I could get rid of some weird hedges and workarounds, like how the JAX-RS implementation doesn't "officially" know about the CDI library yet references CDI classes explicitly by name.

New Application Types

Currently, to use this project, you can either put your code into an NSF and use the automatic behavior of the libraries or you can put your code in OSGi-based webapps or Servlets and then manually manage integration with these specs.

Both of these are limited by their reliance on the many assumptions IBM built into how these apps should work. In-NSF apps require that all Jakarta code come from a request with "xsp" in the URL or one targeting a file ending in ".jsp", ".jsf", or ".xhtml". If you're writing, say, an MVC-based app, all of your URLs are going to have to start with something like "foo.nsf/xsp/app/...", which is okay but ugly. Additionally, the way these apps are implemented - NSFComponentModule - severely limits my hooks for listening for things like application and session expiration, which hampers CDI's lifecycle handling a bit.

For a good while, I've pondered the notion of adding another ComponentModule type to handle the case where you want to go all-in on Jakarta EE. With this idea, the new module implementation would have full control over incoming requests, allowing URLs without the xsp/app bit in there, and would have better handling of lifecycles. In this way, I could make it so that your code would look more like (or be identical to) a "normal" .war-based webapp, with fewer workarounds for the existing XPages stuff. This would also allow me to do things like lessen the amount of Servlet 2.5-to-5.0 bridging and could assist tremendously in improving JSF support.

Along similar lines, I've been considering doing something similar for OSGi-based webapps, and I've made some progress along those lines in a feature branch. The idea here would be to do something similar to how you can deploy web.xml-based webapps via OSGi now, but with built-in support for Jakarta EE 9 features (with web.xml then being optional). With this setup, you'd be able to write an app that does an Import-Package for the various jakarta.* packages you want and add a bit in your MANIFEST.MF to signal to this project that it should participate. This could either be a variant of the extension point used by the existing OSGi webapp support or using the Web-ContextPath directive from the OSGi spec. One of the goals here would be to make it so that you would be able to write a Jakarta EE 9 app using normal development tools - Eclipse/IntelliJ/VS Code, Maven, etc. - and then just use maven-bundle-plugin to add the OSGi info you need without having any specific dependencies on Domino bits, especially the nightmare of depending on the non-redistributable XPages OSGi artifacts.
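
As a rough sketch of that second idea, the manifest for such an app might look like this, with the symbolic name, context path, and package list all being placeholders:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.somejakartaapp
Bundle-Version: 1.0.0
Web-ContextPath: /someapp
Import-Package: jakarta.ws.rs,
 jakarta.ws.rs.core,
 jakarta.enterprise.context,
 jakarta.inject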

Other Options

And, in the mean time, I have a bunch of other tasks I could work on. Slowly converting my client project to Jakarta NoSQL instead of direct ODA use has turned up a whole slew of things that would be useful to add (for example, stampAll support), so I can slowly burn down that feature-request list.

There's also the notion of documentation! While a lot of the behavior of this project is in theory documented by virtue of the upstream specs and the general world of Jakarta blogs, videos, and courses, there's enough to know about the specifics of the interactions with Domino that more documentation is in order. Historically, I've just done this by expanding the README, but it's gotten pretty unwieldy at this point. It would probably make sense to break the specifics and examples out into at least wiki pages, if not a format that can be built into a PDF/etc. and included in the distribution.

So yep, I'll have my hands busy with this thing for a good while more, I figure.

The Myriad Idioms For Finding Implementations In Java

Tue Oct 18 10:25:06 EDT 2022

Tags: jakartaee java
  1. Java Services (Not the RESTful Kind)
  2. Java ClassLoaders
  3. Managed Beans to CDI
  4. The Myriad Idioms For Finding Implementations In Java

A few years ago, I wrote a post about Java service location, which covered things like META-INF/services and OSGI extensions. Today, I'd like to discuss a similar concept: code in a top-level API that finds a specific implementation. For reasons that will become clear shortly, I'll call this the "FactoryFinder pattern".

Background

Not all Java code uses this kind of thing and, while service loading is related, the overlap isn't complete. Where this does come up a lot is in a framework like Jakarta EE, which is very intentionally split between vendor-neutral specification classes/interfaces (the ones starting with jakarta.*) and specific implementations.

For example, the Jakarta REST (née JAX-RS) specification only defines various classes and interfaces within the jakarta.ws.rs package space, but doesn't include any actual implementation. That's left to various vendors. The number of implementations varies by spec, and JAX-RS is particularly prolific on this front. In the XPages Jakarta EE project, we use RESTEasy, whose classes are all in the org.jboss.resteasy package space.

There's a (usually) hard wall between these layers: the spec declares an API that programmers can use, and then the implementation has to allow itself to be called by those class names and obey the specification's rules. When writing JAX-RS resources in an NSF, the fact that it's using RESTEasy does not enter into your experience. That raises the question, though, of how this works. How does the vendor-neutral specification locate the implementation classes to hand off the work? Well, that question has a number of different answers.

Entrypoint Classes and Locating Implementations

In general, each spec accomplishes this using one or more entrypoint classes. For example, JAX-RS uses RuntimeDelegate and its static getInstance() method to locate server implementations and ClientBuilder and its newBuilder() method to load client implementations. Outwardly, these methods just promise that they'll find and provide an implementation, but the actual way that specs do this varies.

One of the most common ways to coordinate this loading is to have a class named FactoryFinder. This idiom and specific name proved very popular over at Sun as they built up the JEE specs:

Eclipse Open Type dialog for FactoryFinder

Despite their identical names, each of these classes is a different implementation, and they have different characteristics. There are routines in common, and each spec uses a subset of these. I'll go over the common ones here, in no particular order other than that I'll start with the ones found in the JAX-RS API first.

ServiceLoader

This one is used in basically every spec up until the latest era. This uses the java.util.ServiceLoader class to find implementations by way of text files in META-INF/services named after the spec class and containing implementation class names. For example, RESTEasy contains a file named META-INF/services/jakarta.ws.rs.ext.RuntimeDelegate that references the class org.jboss.resteasy.core.providerfactory.ResteasyProviderFactoryImpl. That looks like this:

Iterator<T> iterator = ServiceLoader.load(service, FactoryFinder.getContextClassLoader()).iterator();

if(iterator.hasNext()) {
	return iterator.next();
}

FactoryFinder.getContextClassLoader() there is a utility method that just uses an AccessController block to work with Java policy limitations like we see on Domino all the time.
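
Paraphrased, that utility amounts to something like this simplified sketch:

private static ClassLoader getContextClassLoader() {
	// The privileged block keeps a restrictive security policy (like Domino's)
	// from vetoing the lookup on behalf of less-trusted code up the stack
	return AccessController.doPrivileged((PrivilegedAction<ClassLoader>) () ->
		Thread.currentThread().getContextClassLoader());
}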

This is simple enough in the normal case, but can get a little tricky when you add in something like OSGi. By default, ServiceLoader will look in the thread-context class loader, which will usually be where your application code lives. Inside an app container, like an NSF, the implementation class may not actually be visible, though. Accordingly, many of these finders fall back to looking using the class loader of the spec class, which has a higher chance of seeing the implementation. That looks similar:

Iterator<T> iterator = ServiceLoader.load(service, FactoryFinder.class.getClassLoader()).iterator();

if(iterator.hasNext()) {
	return iterator.next();
}

In the XPages Jakarta EE project, neither of these calls will tend to work by default, since neither the app nor the API bundle can see the implementation bundle. In some cases, I deal with this via the methods below, but in others I do so by re-packaging the implementation as an OSGi fragment bundle. Fragment bundles attach themselves to their host's classloader fully, and this allows ServiceLoader to find the implementation.
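
A fragment like that doesn't need any code of its own - its manifest just names its host bundle, along these lines (the names here are illustrative):

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.example.resteasy.serviceloader.fragment
Bundle-Version: 1.0.0
Fragment-Host: org.jboss.resteasy.resteasy-core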

Configuration Properties

A handful of these specs, JAX-RS included, will also look for the name of an implementation class using an external properties file. The placement of this in the priority order - as a fallback after ServiceLoader - and the classes used in the implementation make me figure that these are quite often relics of earlier habits.

JAX-RS, for its part, will look within the java.home system property, which points to the JVM's installation directory. In there, it looks for a properties file named lib/jaxrs.properties:

String javah = System.getProperty("java.home");
configFile = javah + File.separator + "lib" + File.separator + "jaxrs.properties";
File f = new File(configFile);
if (f.exists()) {
	Properties props = new Properties();
	inputStream = new FileInputStream(f);
	props.load(inputStream);
	String factoryClassName = props.getProperty(factoryId);
	return newInstance(factoryClassName, classLoader);
}

This tries to use the thread-context class loader only, so it wouldn't work for a complex app server situation. Likely, it's meant for either an older type of application or a standalone special-purpose JAR.

System Properties

Similar to reading a designated properties file, these specs will often then fall back to looking for a Java system property of a given name. These properties may be dynamically set at runtime or may be set during the JVM launch. Often, this property will be the name of the interface/abstract class being looked up, like so:

String systemProp = System.getProperty(factoryId);
if (systemProp != null) {
	return newInstance(systemProp, classLoader);
}

This one can actually come in handy sometimes - though not ideal, I've run into similar cases where I set the name of an implementation or delegating class in a property before initializing the spec. It's best to avoid that when possible, but I'm often glad it's there.
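
For example, using the RESTEasy delegate mentioned earlier, you could pin the implementation before anything asks for it:

// Must run before the first call to RuntimeDelegate.getInstance()
System.setProperty("jakarta.ws.rs.ext.RuntimeDelegate",
	"org.jboss.resteasy.core.providerfactory.ResteasyProviderFactoryImpl");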

OSGi Escape

Next up is one that JAX-RS doesn't use, but shows up periodically. Though Jakarta EE isn't based around OSGi, a good number of the implementations historically have used (and still use) it, and OSGi always sits in a "not standard, but too popular to consistently ignore" limbo.

To account for this, there's a similarly semi-standard library called the OSGi resource locator. This library provides a class named org.glassfish.hk2.osgiresourcelocator.ServiceLoader that does its own search and loading for ServiceLoader-compatible META-INF/services files within OSGi bundles in the current platform. The idea is that, if you have an OSGi-based platform that you want to work with this type of loading, you will provide the Resource Locator class and let any loaders written to use it fall back to it.

Because this class is not normally present even when actually in OSGi, APIs that make use of it have to be careful and indirect about trying to load it at all. We'll use JAX-B as our example here. They'll generally try to load the bridge class reflectively, which avoids having OSGi-wrapping tools like bnd create a potentially-undesired dependency on the presence of the bridge. Then, they'll reflectively ask it to load service implementations. That tends to look like this:

// Use reflection to avoid having any dependency on ServiceLoader class
Class serviceClass = Class.forName(factoryId);
Class target = Class.forName(OSGI_SERVICE_LOADER_CLASS_NAME);
Method m = target.getMethod(OSGI_SERVICE_LOADER_METHOD_NAME, Class.class);
Iterator iter = ((Iterable) m.invoke(null, serviceClass)).iterator();
if (iter.hasNext()) {
	Object next = iter.next();
	logger.fine("Found implementation using OSGi facility; returning object [" +
		next.getClass().getName() + "].");
	return next;
} else {
	return null;
}

That's also generally wrapped in a big try/catch block to avoid gumming up the works if any pieces are missing.

The XPages Jakarta EE project actually contains a reimplementation of this that avoids some hurdle or other that I found with the stock version. I avoided doing something like that for a while, but it ended up being the most practical way to get some of these specs working.

Default Implementation

Back outside the realm of OSGi, a handful of these specifications will also include a hard-coded default provider class name. These are generally classes from what used to be dubbed the reference implementations, which are largely components of GlassFish by virtue of GlassFish having been Sun's implementation.

For example, the JSON-P API has a final fallback of trying to look for org.glassfish.json.JsonProviderImpl by name:

Class<?> clazz = Class.forName(DEFAULT_PROVIDER);
return (JsonProvider) clazz.getConstructor().newInstance();

Though these implementations generally also declare themselves via ServiceLoader files, this is presumably useful in historical or edge cases where there's still a decent chance that the RI will be available. It does have an unfortunate effect on error messages, though: the case of "I can't find any implementation at all" ends up being reported as e.g. "Provider org.glassfish.json.JsonProviderImpl not found". That's not really a problem with the approach as such, but rather just the way it shakes out in practice.

Manually-Set Implementation or Locator

The final mechanism I'm going to discuss is sort of a last-resort escape hatch. Sometimes, the provider class will have a method that lets you set an arbitrary implementation yourself, without having the API do any of these lookups at all. Some, like MicroProfile Config and CDI, even go one step further and provide a method that configures not just a specific implementation but rather an implementation locator. These APIs are my friends and I love them.

This mechanism works well for my needs in the XPages Jakarta EE project, where either it's easier to just set one implementation for the whole server or, like with CDI, there's complex logic that requires inspecting the active Servlet request to see what NSF I'm in.

APIs of this style will usually have a method named like setInstance or setProvider on either their core entrypoint class or on the provider locator. For example, MicroProfile Config provides the former on its ConfigProviderResolver class:

public static void setInstance(ConfigProviderResolver resolver) {
	instance = resolver;
}

instance here is a static property. Once it's set - either by this method or by a dynamic lookup - the main instance() method will use it:

if (instance == null) {
	synchronized (ConfigProviderResolver.class) {
		if (instance != null) {
			return instance;
		}
		instance = loadSpi(ConfigProviderResolver.class.getClassLoader());
	}
}

return instance;

The XPages JEE project makes use of this method at HTTP start, setting a provider resolver that includes some stock config sources as well as some classes that know how to read properties from the Notes environment and from the xsp.properties file.
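
In heavily-simplified form, that initialization looks something like this sketch, where MyDominoConfigProviderResolver is a stand-in name for the project's actual resolver class:

import org.eclipse.microprofile.config.spi.ConfigProviderResolver;

// Run once at HTTP start, before anything asks for a Config.
// MyDominoConfigProviderResolver is hypothetical here; the real one
// registers the stock sources plus the Notes-environment and
// xsp.properties ones
ConfigProviderResolver.setInstance(new MyDominoConfigProviderResolver());

From then on, any call to ConfigProvider.getConfig() resolves through that custom resolver without the API doing any of the dynamic lookups above.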

Though this mechanism seems like the crudest out of the bunch, I'm extremely happy whenever it's there.

Conclusion

That was a lot! And there's not really a lesson to be learned here, but rather more that it's often useful to know about all these different mechanisms. When working in the XPages JEE project, I've had to use almost all of them at one time or another, and I've had to familiarize myself with which APIs use which and adapt them individually. For some, I've altered the implementation to be a fragment bundle; for others, I've created my own fragment to provide services and implementations; and so forth. It's a bit of a shame that there's no grand unified system for this, but at least it can be interesting to see the messy path that these specs have taken as Java technologies and the ecosystem evolved.

Code-First REST APIs Followup: OpenAPI

Fri Aug 26 11:04:08 EDT 2022

Tags: jakartaee
  1. Code-First REST APIs With XPages Jakarta EE Support
  2. Code-First REST APIs Followup: OpenAPI
  3. XAgents to Jakarta REST Services

In yesterday's post, I gave a two-file example of writing a basic CRUD REST API for NSF documents. In that post, I casually mentioned that one of the side benefits of this approach would have to wait until I fixed an open bug.

Well, I fixed that bug not long after making that post, so now I can detail what that is.

But just before I do that, I should mention that I added an "examples" directory to the project repository, where I plan to put examples like this in on-disk-project form, without the baggage of the test-suite example NSFs in the main tree: https://github.com/OpenNTF/org.openntf.xsp.jakartaee/tree/develop/examples. Anyway, back to what I fixed up here.

One of the neat little side features that the framework brings in is MicroProfile OpenAPI, which automatically generates OpenAPI specifications for your REST services based on your code. Depending on your workflow, this can be tremendously convenient. OpenAPI, being a widely-supported spec, has tons of tools available, and you can use this when integrating other applications with yours, or when working in a multi-tiered development team. For example, if the UI portion of your app is being developed separately from the back end, you could hand off the OpenAPI file to the other developer(s) and they will have the information they need to write against your services. And, since it's generated from the code and not manually, it has the benefit of being inherently consistent with the current design of the app.

Default Output

By default, based on how the app worked when we left it yesterday, going to "foo.nsf/xsp/app/openapi.yaml" will get you this output:

---
openapi: 3.0.3
info:
  title: Jakarta Code-First REST
servers:
- url: http://some.server/foo.nsf/xsp/app
paths:
  /employees:
    get:
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Employee'
    post:
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Employee'
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Employee'
  /employees/{id}:
    get:
      parameters:
      - name: id
        in: path
        required: true
        schema:
          type: string
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Employee'
    put:
      parameters:
      - name: id
        in: path
        required: true
        schema:
          type: string
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Employee'
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Employee'
    delete:
      parameters:
      - name: id
        in: path
        required: true
        schema:
          type: string
      responses:
        "204":
          description: No Content
components:
  schemas:
    Employee:
      required:
      - name
      - title
      - department
      type: object
      properties:
        id:
          type: string
        name:
          minLength: 1
          type: string
          nullable: false
        title:
          minLength: 1
          type: string
          nullable: false
        department:
          minLength: 1
          type: string
          nullable: false
        age:
          format: int32
          minimum: 1
          type: integer

This includes all of the operations we defined in the rest.EmployeeResource class as well as the definition of the model.Employee entity class. Additionally, it picked up on our Bean Validation annotations, and so all of the @NotEmpty properties are marked as being non-null and non-empty strings, while the age has a minimum of 1, as coded.

Expanding the Definition

That, on its own, is pretty useful, and it will automatically adapt to any code changes you make. However, you can go further.

Versions

For example, while having the file as it is will work for development, you'll want to give it a version when it goes into production, so that any API consumers can know when the API is expected to have changed. There are two ways to do this with this project. If you have a $TemplateBuild shared field with a template version, then the code will pick up on that. Alternatively, you can specify configuration properties via MicroProfile Config. To do that, use the Package Explorer view in Designer to create a directory named "META-INF" in the project's "Code/Java" directory, then a new file within it named "microprofile-config.properties":

Creating a microprofile-config.properties file

If you don't have a Package Explorer pane, you can add it by going to Window and then either switching to the "XPages" perspective or going to "Show Eclipse Views" and picking it from there. To add the folder and the file, you can right-click the "Code/Java" folder there, go to "New" - "Other...", and pick each in turn.

Once you have that file open, add a line like this:

mp.openapi.extensions.smallrye.info.version=1.0.1

("SmallRye" is the name of several MicroProfile spec implementations)

Then save. Now, when you open the OpenAPI spec, it'll start like this:

---
openapi: 3.0.3
info:
  title: Jakarta Code-First REST
  version: 1.0.1

Now, as long as you update this for API changes or use a $TemplateBuild field, your OpenAPI will be nicely versioned. As a nice bonus, if you build your NSF using the NSF ODP Tooling project, it can add the Maven version to $TemplateBuild by default, so you don't have to worry about manual updates.

Endpoint Descriptions

Next, while all of our endpoints are listed, it'd be good to add some additional detail. While they're more-or-less clear now, they'll get less so as the app grows. This is generally done via annotations - there are a bunch of them, but we'll focus on just a few for now.

We'll start with a basic one: adding a description to the endpoint that lists all Employees. To do this, go back to rest.EmployeeResource and add an annotation of type org.eclipse.microprofile.openapi.annotations.Operation:

@GET
@Produces(MediaType.APPLICATION_JSON)
@Operation(description="Retrieves a list of all employee entities in the data store")
public List<Employee> get() {
	return employees.findAll(Sorts.sorts().asc("name")).collect(Collectors.toList());
}

Once you add that, then that part of the OpenAPI spec will read:

paths:
  /employees:
    get:
      description: Retrieves a list of all employee entities in the data store
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Employee'

Better by one step. The @Operation annotation itself has a couple more properties, which let you specify an operationId (very useful for code generated from the spec, so I advise setting it in a fully-fledged app) or mark the operation as "hidden", so it won't show up in the output at all.
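
For example, a version of the method above that also sets an operationId might look like this - the ID value here is arbitrary, but it should stay stable once clients generate code against it:

@GET
@Produces(MediaType.APPLICATION_JSON)
@Operation(
	operationId = "listEmployees",
	description = "Retrieves a list of all employee entities in the data store"
)
public List<Employee> get() {
	return employees.findAll(Sorts.sorts().asc("name")).collect(Collectors.toList());
}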

Model Annotations

Next up, we'll add some descriptive information to the Employee model itself. For this, we'll go back to model.Employee and start adding annotations of type org.eclipse.microprofile.openapi.annotations.media.Schema:

@Schema(description = "Represents an individual employee within the system")
@Entity
public class Employee {
	/* snip */
	
	private @Id String id;
	@Schema(description="The employee's full name", example="Foo Fooson")
	private @Column @NotEmpty String name;
	@Schema(description="The employee's job title", example="CTO")
	private @Column @NotEmpty String title;
	@Schema(description="The name of the employee's current department within the company", example="IT")
	private @Column @NotEmpty String department;
	@Schema(description="The employee's current age", example="80")
	private @Column @Min(1) int age;

	/* snip */
}

The @Schema annotation is usable in a lot of situations and has a lot of options, but these will suffice for now. Once we add these, our OpenAPI spec expands in the components section to this:

components:
  schemas:
    Employee:
      description: Represents an individual employee within the system
      required:
      - name
      - title
      - department
      type: object
      properties:
        id:
          type: string
        name:
          description: The employee's full name
          minLength: 1
          type: string
          example: Foo Fooson
          nullable: false
        title:
          description: The employee's job title
          minLength: 1
          type: string
          example: CTO
          nullable: false
        department:
          description: The name of the employee's current department within the company
          minLength: 1
          type: string
          example: IT
          nullable: false
        age:
          format: int32
          description: The employee's current age
          minimum: 1
          type: integer
          example: 80

Now, anyone reading this (or interpreting it with a tool) will have just a bit more information about it. In this case, the descriptions don't add much, but you can imagine expanding this to cover your specific business rules for formatting, internal codes, and the like.

Swagger

Though there's a lot more that you can do to expand the OpenAPI generation, I'll leave it there for now. I'll finish up here with one of the more straightforward benefits you get from this: using Swagger UI. Swagger UI is a tremendously-popular tool for visualizing (and, to an extent, working with) OpenAPI specifications. You can download Swagger UI yourself (to run locally or put in your NSF) or use the live demo, which runs in your browser.

If you want to use the live demo, you can enable CORS in the microprofile-config.properties file created earlier, setting rest.cors.enable to true and rest.cors.allowedOrigins to *.
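
Concretely, that amounts to two more lines in the properties file:

rest.cors.enable=true
rest.cors.allowedOrigins=*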

Once you have it accessible, you can point Swagger UI to your URL, like "http://some.server/foo.nsf/xsp/app/openapi.yaml", and it'll generate a nice summary:

Screenshot of Swagger UI pointing at our app

You can imagine either handing that off to your front-end developer or using it yourself when working on the client part of your system. As you expand your spec - say, adding @Tag to categorize your resources - the UI will expand to reflect it as well.

Conclusion

I'm quite fond of the MicroProfile OpenAPI spec here - it's easy to use and you don't have to worry about the fiddly work of actually generating the spec. Additionally, it's an excellent example of the kind of benefits you get from building on top of Jakarta and MP specifications: because they're built with the active involvement of many companies and with an eye towards interoperability, you automatically get to use tools like Swagger UI or OpenAPI Generator that have no knowledge of Domino. You're rowing in the same direction as lots of others.

In the short term, I plan to update the examples section of the project Git repo with the newer version of these classes, and then follow up by putting the "GitHub issues" client code I wrote for my recent OpenNTF presentation in as another example. Once I do the latter, I'll make sure to post about it as well.

Code-First REST APIs With XPages Jakarta EE Support

Thu Aug 25 11:43:50 EDT 2022

Tags: jakartaee
  1. Code-First REST APIs With XPages Jakarta EE Support
  2. Code-First REST APIs Followup: OpenAPI
  3. XAgents to Jakarta REST Services

Today, I'd like to do a bit of a demonstration post. Specifically, I'd like to demonstrate the basics of making a CRUD (Create, Read, Update, Delete) REST API using the XPages Jakarta EE Support project, storing data in the app's NSF. This will kind of act like a condensed version of the longer series on rewriting the OpenNTF site.

I think it will be a good example of how you can design an API starting from the data level up, with all of the pieces fitting together cohesively the whole time. There are some bugs I need to address before it can also serve as the source of an OpenAPI spec you could give to front-end developers, but that benefit will come along for the ride once I fix them.

In any event, the core here will be simple: it will be an NSF that will have one document type - "Employee" - and the ability to manipulate those documents in a type-safe way from a REST client. I won't be going over how to actually use this in a browser or remote app, just because that's essentially an infinite rabbit hole. As it is, the code involved is written entirely in an NSF using Designer. This assumes you have a recent build of the project installed and that your NSF has all of its libraries checked in the Xsp Properties.

The Data Model

We'll be starting with defining the data model. While you could make a Form design element for this too, you don't need to. Our model will be pretty bare-bones: an ID and four scalar properties, without worrying in this exercise about relationships with other model objects. The class (with getters/setters snipped) looks like this:

package model;

import java.util.stream.Stream;

import org.openntf.xsp.nosql.mapping.extension.DominoRepository;

import jakarta.nosql.mapping.Column;
import jakarta.nosql.mapping.Entity;
import jakarta.nosql.mapping.Id;
import jakarta.nosql.mapping.Sorts;
import jakarta.validation.constraints.Min;
import jakarta.validation.constraints.NotEmpty;

@Entity
public class Employee {
	public interface Repository extends DominoRepository<Employee, String> {
		Stream<Employee> findAll(Sorts sorts);
	}
	
	private @Id String id;
	private @Column @NotEmpty String name;
	private @Column @NotEmpty String title;
	private @Column @NotEmpty String department;
	private @Column @Min(1) int age;
	
	/* (snip) "Source" -> "Generate Getters and Setters..." */
}

This will cover all of our data-access needs. The Repository interface there is a Jakarta NoSQL repository that has built-in knowledge for CRUD and query operations that we'll need. In a larger app, you might also add some view-backed sources or other complexities, but we don't need it here.

Beyond the NoSQL annotations - @Entity, @Id, and @Column - this model also uses Jakarta Bean Validation annotations to ensure that the data being stored meets our requirements. The strings all have to be non-empty and the age has to be an integer greater than zero (labor laws are lax in this imagined country, apparently). Those annotations will be enforced by Jakarta NoSQL, and will also be used when we get to the REST services. Having this sort of thing is a huge relief: since this is the only way our app will deal with data storage, there's inherently no path in the codebase that can store invalid data.

REST Services

Next, we'll start on the REST services. For a basic CRUD app like this, we'll have a few to define: listing all of them, creating a new one, and then reading, updating, and deleting an individual Employee. We'll start with the "list all" operation. This is in a second class:

package rest;

import java.util.List;
import java.util.stream.Collectors;

import jakarta.inject.Inject;
import jakarta.nosql.mapping.Sorts;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import model.Employee;

@Path("employees")
public class EmployeeResource {
	
	@Inject
	private Employee.Repository employees;
	
	@GET
	@Produces(MediaType.APPLICATION_JSON)
	public List<Employee> get() {
		return employees.findAll(Sorts.sorts().asc("name")).collect(Collectors.toList());
	}
}

This class will listen at foo.nsf/xsp/app/employees for a GET and provide back a JSON array of Employee objects. Behold, in all its glory:

Call to the employees list with no entries returned

Okay, well, we haven't created anything, so the fact that the JSON result over on the right is empty is correct. It's returning JSON - that JSON just happens to be [].

Create

So we'd better add a method to actually create a new Employee document. REST-idiom-wise, this should be POST to the same path that gets the list of employees, to create a new entity:

@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Employee create(@Valid Employee employee) {
	employee.setId(null);
	return employees.save(employee);
}

Now we're getting somewhere. Compared to the previous method, this one sprouts a @Consumes annotation to indicate that it expects a valid JSON form of an Employee, which it then takes as a method parameter. That parameter is annotated with jakarta.validation.Valid, which tells JAX-RS that it should perform bean validation on the incoming object before even calling the method. This isn't strictly necessary, but it's nice to have: without it, the method call would still fail, but the failure would show up as a stack trace from Jakarta NoSQL's innards and would have a 500 status code. We'll see in a bit what it looks like instead with this.

But first, for the normal case:

Call to create a valid employee entity

Shown here, the REST client POSTs valid JSON to this new endpoint (which is the same URL as previously) and receives back the new state of the entity with a 200 OK response. Because we don't have any extra computation going on here, it's just the same value but with the UNID filled in from having saved the document to the NSF.

If I mangle the data - say, by removing a property or, in this case, making an invalid age - I'll instead get back a 400 Bad Request response with some descriptive text:

Trying to create an invalid Employee entity

There are two minor things of note here. The first is that the response isn't in JSON. While this isn't wholly wrong per se, it's not ideal. There's an open issue to improve this. The second is something that can be changed readily within the app: the method parameter name here is arg0 instead of employee. While this again isn't wrong, since the important information is still conveyed, it'd be nice to improve this. Fortunately, we can: in Package Explorer, right-click the NSF project and go to properties. There, you can enable custom compilation settings to store the method parameter names.

Setting custom project compiler settings

I don't know why this is disabled by default.

Once you set that, the message will say employee instead of arg0, which is a bit nicer, and the better name will come along in other content types when that improves in the project too.

Query (redux) and Get

Now that we've created a document, we can re-run the base GET request and see a single-entry array:

Call to the employees list with one entry returned

That's more like it. We'll also want the ability to retrieve an individual entry by UNID, though, so we'll go back and add the method to our EmployeeResource class:

@Path("{id}")
@GET
@Produces(MediaType.APPLICATION_JSON)
public Employee getEmployee(@PathParam("id") String id) {
	return employees.findById(id)
		.orElseThrow(() -> new NotFoundException(MessageFormat.format("Could not find employee for ID {0}", id)));
}

Compared to our previous method, this adds a few new tricks:

  • The @Path("{id}") bit specifies a next level of path below employees, and the brackets indicate that it's an arbitrary value that can be picked up as a parameter.
  • The @PathParam("id") annotation indicates that the id method argument will be populated with the variable part of the path.
  • The orElseThrow(() -> new NotFoundException(...)) bit uses the orElseThrow method of Optional to handle the case where no document can be found with that UNID, and then throws the JAX-RS-specific NotFoundException to trigger a proper 404 Not Found response to the client.

The results of calling this are what you might expect, returning a single JSON object representing the Employee:

Call to get a single entity

Modification

Next up is the "U" part of CRUD: updating an existing document. This method essentially composes the "create new" and "read single" methods above. In REST verbiage, this should be a PUT to the same URL as the individual GET:

@Path("{id}")
@PUT
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Employee update(@PathParam("id") String id, @Valid Employee employee) {
	employee.setId(id);
	return employees.save(employee);
}

The only new concept here is the @PUT annotation - the rest is a re-composition of earlier operations. Defining this allows the caller to send a new version of an Employee entity to replace the existing one:

Call to update an existing entity

With some more work, you could also make a PATCH method that would take an unvalidated Employee and update only changed fields, but that's out of scope for now. That'd be a good addition for a fully-fleshed-out REST endpoint, though - see the sketch below.
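
As a rough sketch of what that could look like - this isn't in the example app, and a real version would handle the properties more thoroughly - such a method might shape up like:

@Path("{id}")
@PATCH
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Employee patch(@PathParam("id") String id, Employee employee) {
	Employee existing = employees.findById(id)
		.orElseThrow(() -> new NotFoundException(MessageFormat.format("Could not find employee for ID {0}", id)));
	// Copy over only the fields present in the incoming entity; note the
	// lack of @Valid, since a partial entity wouldn't pass full validation.
	// The primitive age field would need similar handling, perhaps by
	// switching it to a boxed Integer
	if(employee.getName() != null) {
		existing.setName(employee.getName());
	}
	if(employee.getTitle() != null) {
		existing.setTitle(employee.getTitle());
	}
	if(employee.getDepartment() != null) {
		existing.setDepartment(employee.getDepartment());
	}
	return employees.save(existing);
}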

Deletion

Finally, we'll get to the last part of CRUD: deleting documents. This actually ends up being the simplest method of all:

@Path("{id}")
@DELETE
public void delete(@PathParam("id") String id) {
	employees.deleteById(id);
}

This listens at the same path as the last two, but for DELETE verbs. Then, all it does is delete the entity and return no content. If you wanted, you could return JSON like {"success":true} or something - it's always kind of arbitrary what you respond with on DELETE beyond the success status code.

Call to delete an entity

In that screenshot, you can see that it returns 204 No Content, which is the HTTP way to say "yep, that worked, and I don't have anything else to tell you".

Conclusion

This was two classes (and a nested interface) in total, and it allowed us to create a type- and validation-safe REST API for NSF documents. Beyond just the relatively-small amount of code, there are a few things that make this foundation important.

First of all, the code is (as long as you're comfortable with Java and some of the concepts) eminently readable. This is code that you could hand off to another team member or come back to in five years and very quickly comprehend. This is a critical distinction from less-declarative frameworks like traditional XPages.

Secondly, this is a capable basis for future development. You can come back in and expand this app - more entities, additional methods, etc. - and this original code will still hold strong. You can scale the app up to medium-sized (like the OpenNTF site) or all the way to monstrosity and your framework will be consistent the whole time. This contrasts with frameworks that are either too limited to scale up or (like XPages) turn into an unmaintainable mess above a basic level.

I could go on, but I'll leave it there for now. I continue to find this environment quite pleasant to develop for, and it's always satisfying to see how several of the specs tie together like this.

August OpenNTF Webinar - XPages Jakarta EE Support In Practice

Tue Aug 16 08:16:56 EDT 2022

This Thursday (two days from now), I'll be presenting for OpenNTF's webinar series on the topic of the XPages Jakarta EE Support project. From our summary:

The XPages Jakarta EE Support project on OpenNTF adds an array of modern capabilities to NSF-based Java development. These improvements can be used for wholly-new applications or added incrementally to existing ones.

In this webinar Jesse Gallagher will demonstrate how to use this project to perform common tasks in better ways, such as creating and consuming REST services, writing managed beans with CDI, and using new EL features in XPages. Though these examples will largely use Java, they do not require any knowledge of OSGi or extension library development, nor any tools other than Designer.

This webinar will take place on August 18, 2022, from 11:00 AM to 12:30 PM (New York time).

Register for this webinar at: https://register.gotowebinar.com/register/6878765070462193675

My intent for this is to show the most-common components used with some examples of how I'm using them in practice. I hope it will also be an opportunity for anyone who (reasonably) balks at the opaque monolith to ask questions and get a better idea for whether it'd be helpful for them.

Adding Transactions to the XPages Jakarta EE Support Project

Wed Jul 20 16:03:55 EDT 2022

Tags: jakartaee
  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

As my work of going down the list of JEE specs hits diminishing returns, I decided to take a shot at implementing the Jakarta Transactions spec.

Implementation Oddities

This one's a little spicy for a couple of reasons, one of which is that it's really a codified implementation of another spec, the X/Open XA standard, which is an old standard for transaction processing. As is often the case, "old" here also means "fiddly", but fortunately it's not too bad for this need.

Another reason is that, unlike with a lot of the specs I've implemented, all of the existing implementations seem a bit too heavyweight for me to adapt. I may look around again later: I could have missed one, and eventually GlassFish's implementation may spin off. In the meantime, I wrote a from-scratch implementation for the XPages JEE project: it doesn't cover everything in the spec, and in particular doesn't support suspend/resume for transactions, but it'll work for normal cases in an NSF.

The Spec In Use

Fortunately, while implementations are generally complex (as befits the problem space), the actual spec is delightfully simple for users. There are two main things for an app developer to know about: the jakarta.transaction.UserTransaction interface and the @jakarta.transaction.Transactional annotation. These can be used separately or combined to add transactional behavior. For example:

@Transactional
public void createExampleDocAndPerson() {
	Person person = new Person();
	/* do some business logic */
	person = personRepository.save(person);
	
	ExampleDoc exampleDoc = new ExampleDoc();
	/* do some business logic */
	exampleDoc = repository.save(exampleDoc);
}

The @Transactional annotation here means that the intent is that everything in this method either completes or none of it does. If something to do with ExampleDoc fails, then the creation of the Person doc shouldn't take effect, even though code was already executed to create and save it.

You can also use UserTransaction directly:

@Inject
private UserTransaction transaction;

public void createExampleDocAndPerson() throws Exception {
	transaction.begin();
	try {
		Person person = new Person();
		/* do some business logic */
		person = personRepository.save(person);
	
		ExampleDoc exampleDoc = new ExampleDoc();
		/* do some business logic */
		exampleDoc = repository.save(exampleDoc);

		transaction.commit();
	} catch(Exception e) {
		transaction.rollback();
	}
}

There's no realistic way to implement this in a general way, so the way the Transactions API takes shape is that specific resources - databases, namely - can opt in to transaction processing. While this won't necessarily save you from, say, sending out an errant email, it can make sure that related records are only ever updated together.

Domino Implementation

This support is fairly common among SQL databases and other established resources, and, fortunately for us, Domino's transaction support became official in 12.0.

So I modified the JNoSQL driver to hook into transactions when appropriate. When it's able to find an open transaction, it begins a native Domino transaction and then registers itself as a javax.transaction.xa.XAResource (a standard Java SE interface) to either commit or roll back that DB transaction as appropriate.
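
To give a sense of the shape of that, here's a heavily-simplified sketch of such an adapter, assuming the Domino 12 transaction methods on lotus.domino.Database - the real driver does considerably more lifecycle bookkeeping than this:

import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

import lotus.domino.Database;
import lotus.domino.NotesException;

public class DominoXAResource implements XAResource {
	private final Database database;

	public DominoXAResource(Database database) {
		this.database = database;
	}

	@Override
	public void start(Xid xid, int flags) throws XAException {
		try {
			database.transactionBegin(); // opens the native Domino transaction
		} catch (NotesException e) {
			throw new XAException(XAException.XAER_RMERR);
		}
	}

	@Override
	public void commit(Xid xid, boolean onePhase) throws XAException {
		try {
			database.transactionCommit();
		} catch (NotesException e) {
			throw new XAException(XAException.XAER_RMERR);
		}
	}

	@Override
	public void rollback(Xid xid) throws XAException {
		try {
			database.transactionRollback();
		} catch (NotesException e) {
			throw new XAException(XAException.XAER_RMERR);
		}
	}

	// The remaining XAResource methods can be close to no-ops for this use
	@Override public void end(Xid xid, int flags) { }
	@Override public void forget(Xid xid) { }
	@Override public int getTransactionTimeout() { return 0; }
	@Override public boolean isSameRM(XAResource xares) { return xares == this; }
	@Override public int prepare(Xid xid) { return XA_OK; }
	@Override public Xid[] recover(int flag) { return new Xid[0]; }
	@Override public boolean setTransactionTimeout(int seconds) { return false; }
}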

From what I can tell, Domino's transaction support is a bit limited compared to what the spec can do. Apparently, Domino's support is based around thread-specific stacks - while this has the nice attribute of supporting nested transactions, it doesn't look to support suspending transactions or propagating them across threads.

Fortunately, the normal case will likely not need those more-arcane capabilities. Considering that few if any Domino applications currently make use of transactions at all, this should be a significant boost.

Next Steps

As I mentioned above, I'll likely take another look around for svelte implementations of the spec, since I'd be more than happy to cast my partial implementation aside. While my version seems to check out in practice, I'd much sooner rely on a proven implementation from elsewhere.

Beyond that, any next steps may come in the form of supporting other databases via JPA. While JDBC drivers may hook into this as it is, I haven't tried that, and this could be a good impetus to adding JPA to the supported stack. That'll come post-2.7.0, though - this release has enough big-ticket items and should now be in a cool-off period before I call it solid and move to the next one.

Adding Concurrency to the XPages Jakarta EE Support Project

Mon Jul 11 13:37:46 EDT 2022

Tags: jakartaee
  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

For a little while, I've had a task open for me to investigate the Jakarta Concurrency and MP Context Propagation specs, and this weekend I decided to dive into that. While I've shelved the MicroProfile part for now, I was successful in implementing Concurrency, at least for the most part.

The Spec

The Jakarta Concurrency spec deals with extending Java's default multithreading services - Threads, ExecutorServices, and ScheduledExecutorServices - in a couple ways that make them more capable in Jakarta EE applications. The spec provides Managed variants of these executors, though they extend the base Java interfaces and can be treated the same way by user code.

While the extra methods here and there for task monitoring are nice, and I may work with them eventually, the big-ticket item for my needs is propagating context from the initializer to the thread. By "context" here I mean things like knowledge of the running NSF, its CDI environment, the user making the HTTP request, and so forth. As it shakes out, this is no small task, but the spec makes it workable.

Examples

In its basic form, an ExecutorService lets you submit a task and then either let it run on its own time or use get() to synchronously wait for its execution - sort of like async/await but less built-in. For example:

ExecutorService exec = /* get an ExecutorService */;
String basic = exec.submit(() -> "Hello from executor").get();

A ScheduledExecutorService extends this a bit to allow for future-scheduled and repeating tasks:

String[] val = new String[1];
ScheduledExecutorService exec = /* get a ScheduledExecutorService */;
exec.schedule(() -> { val[0] = "hello from scheduler"; }, 250, TimeUnit.MILLISECONDS);
Thread.sleep(300);
// Now val[0] is "hello from scheduler"

Those examples aren't exactly useful, but hopefully you can get some further ideas. With an ExecutorService, you can spin up multiple concurrent tasks and then wait for them all - in a client project, I do this to speed up large view reading by divvying it up into chunks, for example. Alternatively, you could accept an interactive request from a user and then kick off a thread to do hefty work while returning a response before it's done.
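
As a minimal sketch of that "divide and wait" pattern - assuming exec is the same sort of ExecutorService as above and readViewChunk is a hypothetical method that reads one slice of a view:

List<Callable<List<String>>> tasks = new ArrayList<>();
for (int i = 0; i < 4; i++) {
	int chunk = i;
	// Each task reads its own slice of the view concurrently
	tasks.add(() -> readViewChunk(chunk));
}
List<String> allEntries = new ArrayList<>();
for (Future<List<String>> future : exec.invokeAll(tasks)) {
	allEntries.addAll(future.get()); // blocks until that chunk is done
}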

There's an example of this sort of thing in XPages up on OpenNTF from about a decade ago. It uses Eclipse Jobs as its concurrency tool of choice, but the idea is largely the same.

The Basic Implementation

For the core implementation code, I grabbed the GlassFish implementation, which was the Reference Implementation back when JEE had Reference Implementations. With that as the baseline, I was responsible for just a few integration tasks - mainly registering the executors and ferrying over the environment-specific context, both covered below.

The devil was in the details, but the core lifecycle wasn't too bad.

JNDI

One intriguing and slightly vexing thing about this API is that the official way to access these executors is to use JNDI, the "Java Naming and Directory Interface", which is something of an old and weird spec. It's also one of the specs that remains in standard Java while being in the javax.* namespace, and those always feel weird nowadays.

Anyway, JNDI is used for a bunch of things (like ruining Thanksgiving), but one of them is to provide named objects from a container to an application - kind of like managed beans.

One common use for this is to allow an app container (such as Open Liberty) to manage a JDBC connection to a relational database, allowing the app to just reference it by name and not have to manage the driver and connection specifics. I use that for this blog, in fact.

XPages apps don't really do this, but Domino does include a com.ibm.pvc.jndi.provider.java OSGi bundle that handles JNDI basics. I'm sure there are some proper ways to go about registering services with this, but I couldn't be bothered: in practice, I just call context.rebind(...) and call it a day.
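
For the consuming side, the lookup uses the conventional JEE names - something like this sketch:

import javax.naming.InitialContext;
import jakarta.enterprise.concurrent.ManagedExecutorService;

// Look up the executor registered for the current application by the
// conventional name from the Concurrency spec
ManagedExecutorService exec = (ManagedExecutorService) new InitialContext()
	.lookup("java:comp/DefaultManagedExecutorService");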

Ferrying the Context

The core workhorse of this is the ContextSetupProvider implementation. It's the part that's responsible for being notified when context is going to be shunted around and then doing the work of grabbing what's needed in a portable way and setting it up for threaded code. For my needs, I set up an extension interface that can be used to register different participants, so that the Concurrency bundle doesn't have to retain knowledge about everything.

So far, there are a few of these.

Notes Context

The first of these is the NSFNotesContextParticipant, which takes on the job of identifying the current Notes/XPages environment and preparing an equivalent one in the worker thread. The "Threads and Jobs" project above does something like this using the ThreadSessionExecutor class provided with the runtime, but that didn't really suit my needs.

What this class does is grab the current com.ibm.domino.xsp.module.nsf.NotesContext, pulls the NSFComponentModule and HttpServletRequest from it, and then uses that information to set up the new thread before tearing it down when the task is done.

This process initializes the thread like a NotesThread and also sets appropriate Session and Database objects in the context, so code running in an NSF can use those in their threaded tasks without having to think about the threading:

String userName = exec.submit(() -> "Username is: " + NotesContext.getCurrent().getCurrentSession().getEffectiveUserName()).get();

This class does some checking to make sure it's in an NSF-specific request. I may end up writing an equivalent one for OSGi Servlet/Web Container requests as well.

CDI

Outside of Notes-runtime specifics, the most important context to retain is the CDI context. Fortunately, that one's not too difficult:

@Override
public void saveContext(ContextHandle contextHandle) {
	if(contextHandle instanceof AttributedContextHandle) {
		if(LibraryUtil.isLibraryActive(CDILibrary.LIBRARY_ID)) {
			((AttributedContextHandle)contextHandle).setAttribute(ATTR_CDI, CDI.current());
		}
	}
}

@Override
public void setup(ContextHandle contextHandle) throws IllegalStateException {
	if(contextHandle instanceof AttributedContextHandle) {
		CDI<Object> cdi = ((AttributedContextHandle)contextHandle).getAttribute(ATTR_CDI);
		ConcurrencyCDIContainerLocator.setCdi(cdi);
	}
}

@Override
public void reset(ContextHandle contextHandle) {
	if(contextHandle instanceof AttributedContextHandle) {
		ConcurrencyCDIContainerLocator.setCdi(null);
	}
}

This checks to make sure CDI is enabled for the current app and, if so, uses the standard CDI.current() method to find the container. This is the same method used everywhere and ends up falling to the NSFCDIProvider class to actually locate it. That bounces back to a locator service that returns the thread-set one, and thus the executor is able to find it. Then, threaded code is able to use CDI.current().select(...) to find beans from the application:

String databasePath = exec.submit(() -> "Database is: " + CDI.current().select(Database.class).get()).get();
String applicationGuy = exec.submit(() -> "applicationGuy is: " + CDI.current().select(ApplicationGuy.class).get().getMessage()).get();

Next Steps

To flesh this out, I have some other immediate work to do. For one, I'll want to see if I can ferry over the current JAX-RS application context - that will be needed for using the MicroProfile Rest Client, for example.

Beyond that, I'm considering implementing the MicroProfile Context Propagation spec, which provides some alternate capabilities to go along with this functionality. It may be a bit more work than it's worth for NSF use, but I like to check as many boxes as I can. Those concepts look to have made it into Concurrency 3.0, but, as that version is targeted for Jakarta EE 10, it's almost certain that final implementation builds will require Java 11.

Finally, though, and along similar lines, I'm pondering backporting the @Asynchronous annotation from Concurrency 3.0, which is a CDI extension that allows you to implicitly make a method asynchronous:

@Asynchronous
public CompletableFuture<Object> someExpensiveOperation() {
	Object result = /* do something expensive */;
	return Asynchronous.Result.complete(result);
}

With that, it's similar to submitting the task specifically, but CDI will do all the work of actually making it async when called. We'll see - I've avoided backporting much, but that one is tempting.
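
On the calling side, usage would look something like this hypothetical snippet, with the method returning its CompletableFuture immediately while the work happens on a managed thread:

// someBean is a CDI bean containing the @Asynchronous method above
someBean.someExpensiveOperation()
	.thenAccept(result -> System.out.println("Expensive result: " + result));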

Rewriting The OpenNTF Site With Jakarta EE: UI

Mon Jun 27 15:06:20 EDT 2022

  1. Rewriting The OpenNTF Site With Jakarta EE, Part 1
  2. Rewriting The OpenNTF Site With Jakarta EE: UI

In what may be the last in this series for a bit, I'll talk about the current approach I'm taking for the UI for the new OpenNTF web site. This post will also tread ground I've covered before, when talking about the Jakarta MVC framework and JSP, but it never hurts to reinforce the pertinent aspects.

MVC

The entrypoint for the UI is Jakarta MVC, which is a framework that sits on top of JAX-RS. Unlike JSF or XPages, it leaves most app-structure duties to other components. This is due both to its young age (JSF predates and often gave rise to several things we've discussed so far) and its intent. It's "action-based", where you define an endpoint that takes an incoming HTTP request and produces a response, and generally won't have any server-side UI state. This is as opposed to JSF/XPages, where the core concept is the page you're working with and the page state generally exists across multiple requests.

Your starting point with MVC is a JAX-RS REST service marked with @Controller:

package webapp.controller;

import java.text.MessageFormat;

import bean.EncoderBean;
import jakarta.inject.Inject;
import jakarta.mvc.Controller;
import jakarta.mvc.Models;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.NotFoundException;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import model.home.Page;

@Path("/pages")
public class PagesController {
    
    @Inject
    Models models;
    
    @Inject
    Page.Repository pageRepository;
    
    @Inject
    EncoderBean encoderBean;

    @Path("{pageId}")
    @GET
    @Produces(MediaType.TEXT_HTML)
    @Controller
    public String get(@PathParam("pageId") String pageId) {
        String key = encoderBean.cleanPageId(pageId);
        Page page = pageRepository.findBySubject(key)
            .orElseThrow(() -> new NotFoundException(MessageFormat.format("Unable to find page for ID: {0}", key)));
        models.put("page", page); //$NON-NLS-1$
        return "page.jsp"; //$NON-NLS-1$
    }
}

In the NSF, this will respond to requests like /foo.nsf/xsp/app/pages/Some_Page_Name. Most of what is going on here is the same sort of thing we saw with normal REST services: the @Path, @GET, @Produces, and @PathParam are all normal JAX-RS, while @Inject uses the same CDI scaffolding I talked about in the last post.

MVC adds two things here: @Inject Models models and @Controller.

The Models object is conceptually a Map that houses variables that you can populate to be accessible via EL on the rendered page. You can think of it like viewScope or requestScope in XPages, populated in something like the beforePageLoad phase. Here, I use the Models object to store the Page object I look up with JNoSQL.

The @Controller annotation marks a method or a class as participating in the MVC lifecycle. When placed on a class, it applies to all methods on the class, while placing it on a method specifically allows you to mix MVC and "normal" REST resources in the same class. Doing that would be useful if you want to, for example, provide HTML responses to browsers and JSON responses to API clients at the same resource URL.

When a resource method is marked for MVC use, it can return a string that represents either a page to render or a redirection in the form "redirect:some/resource". Here, it's hard-coded to use "page.jsp", but in another situation it could programmatically switch between different pages based on the content of the request or state of the app.

While this looks fairly clean on its own, it's important to bear in mind both the strengths and weaknesses of this approach. I think it will work here, as it does for my blog, because the OpenNTF site isn't heavy on interactive forms. When dealing with forms in MVC, you'll have to have another endpoint to listen for @POST (or other verbs with a shim), process that request from scratch, and return a new page. For example, from the XPages JEE example app:

@Path("create")
@POST
@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
@Controller
public String createPerson(
        @FormParam("firstName") @NotEmpty String firstName,
        @FormParam("lastName") String lastName,
        @FormParam("birthday") String birthday,
        @FormParam("favoriteTime") String favoriteTime,
        @FormParam("added") String added,
        @FormParam("customProperty") String customProperty
) {
    Person person = new Person();
    composePerson(person, firstName, lastName, birthday, favoriteTime, added, customProperty);
    
    personRepository.save(person);
    return "redirect:nosql/list";
}

That's already fiddlier than the XPages version, where you'd bind fields right to bean/document properties, and it gets potentially more complicated from there. In general, the more form-based your app is, the better a fit XPages/JSF is.

JSP

While MVC isn't intrinsically tied to JSP (it ships with several view engine hooks and you can write your own), JSP has the advantage of being built into all Java webapp servers and being well suited to the purpose. When writing JSPs for MVC, the default location for them is WEB-INF/views, which is beneath WebContent in an NSF project:

Screenshot of JSPs in an NSF

The "tags" there are the general equivalent of XPages Custom Controls, and their presence in WEB-INF/tags is convention. An example page (the one used above) will tend to look something like this:

<%@page contentType="text/html" pageEncoding="UTF-8" trimDirectiveWhitespaces="true" session="false" %>
<%@taglib prefix="t" tagdir="/WEB-INF/tags" %>
<%@taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<%@taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %>
<t:layout>
    <turbo-frame id="page-content-${page.linkId}">
        <div>
            ${page.html}
        </div>
        
        <c:if test="${not empty page.childPageIds}">
            <div class="tab-container">
                <c:forEach items="${page.cleanChildPageIds}" var="pageId" varStatus="pageLoop">
                    <input type="radio" id="tab${pageLoop.index}" name="tab-group" ${pageLoop.index == 0 ? 'checked="checked"' : ''} />
                    <label for="tab${pageLoop.index}">${fn:escapeXml(encoder.cleanPageId(pageId))}</label>
                </c:forEach>
                    
                <div class="tabs">
                    <c:forEach items="${page.cleanChildPageIds}" var="pageId">
                        <turbo-frame id="page-content-${pageId}" src="xsp/app/pages/${encoder.urlEncode(pageId)}" class="tab" loading="lazy">
                        </turbo-frame>
                    </c:forEach>
                </div>
            </div>
        </c:if>
    </turbo-frame>
</t:layout>

There are, by shared lineage and concept, a lot of similarities with an XPage here. The first four lines of preamble boilerplate are pretty similar to the kind of stuff you'd see in an <xp:view/> element to set up your namespaces and page options. The tag prefixing is the same idea, where <t:layout/> refers to the "layout" custom tag in the NSF and <c:forEach/> refers to a core control tag that ships with the standard tag library, JSTL. The <turbo-frame/> business isn't JSP - I'll deal with that later.

The bits of EL here - all wrapped in ${...} - are from Expression Language 4.0, which is the current version of XPages's aging EL. On this page, the expressions are able to resolve variables that we explicitly put in the Models object, such as page, as well as CDI beans with the @Named annotation, such as encoderBean. There are also a number of implicit objects like request, but they're not used here.

In general, this is safely thought of as an XPage where you make everything load-time-bound and set viewState="nostate". The same sorts of concepts are all there, but there's no concept of a persistent component that you interact with. Any links, buttons, and scripts will all go to the server as a fresh request, not modifying an existing page. You can work with application and session scopes, but there's no "view" scope.

Hotwired Turbo

Though this app doesn't have much need for a lot of XPages's capabilities, I do like a few components even for a mostly "read-only" app. In particular, the <xe:djContentPane/> and <xe:djTabContainer/> controls have the delightful capability of deferring evaluation of their contents to later requests. This is a powerful way to speed up initial page load and, in the case of the tab container, skip needing to render parts of the page the user never uses.

For this and a couple other uses, I'm a fan of Hotwired Turbo, which is a library that grew out of 37 Signals's Rails-based development. The goal of Turbo and the other Hotwired components is to keep the benefits of server-based HTML rendering while mixing in a lot of the niceties of JS-run apps. There are two things that Turbo is doing so far in this app.

The first capability is dubbed "Turbo Drive", and it's sort of a freebie: you enable it for your app, tell it what is considered the app's base URL, and then it will turn any in-app links into "partial refresh" links: it downloads the page in the background and replaces just the changed part on the page. Though this is technically doing more work than a normal browser navigation, it ends up being faster for the user interface. And, since it also updates the URL to match the destination page and doesn't require manual modification of links, it's a drop-in upgrade that will also degrade gracefully if JavaScript isn't enabled.

The second capability is <turbo-frame/> up there, and it takes a bit more buy-in to the JS framework in your app design. The way I'm using Turbo Frames here is to support the page structure of OpenNTF, which is geared around a "primary" page as well as zero or more referenced pages that show up in tabs. Here, I'm buying in to Turbo Frames by surrounding the whole page in a <turbo-frame/> element with an id using the page's key, and then I reference each "sub-page" in a tab with that same ID. When loading the frame, Turbo makes a call to the src page, finds the element with the matching id value, and drops it in place inside the main document. The loading="lazy" parameter means that it defers loading until the frame is visible in the browser, which is handy when using the HTML/CSS-based tabs I have here.

I've been using this library for a while now, and I've been quite pleased. Though it was created for use with Rails, the design is independent of the server implementation, and the idioms fit perfectly with this sort of Java app too.

Conclusion

I think that wraps it up for now. As things progress, I may have more to add to this series, but my hope is that the app doesn't have to get much more complicated than the sort of stuff seen in this series. There are certainly big parts to tackle (like creating and managing projects), but I plan to do that by composing these elements. I remain delighted with this mode of NSF-based app development, and look forward to writing more clean, semi-declarative code in this vein.

Rewriting The OpenNTF Site With Jakarta EE, Part 1

Sun Jun 19 10:13:32 EDT 2022

Tags: jakartaee java
  1. Rewriting The OpenNTF Site With Jakarta EE, Part 1
  2. Rewriting The OpenNTF Site With Jakarta EE: UI

The design for the OpenNTF home page has been with us for a little while now and has served us pretty well. It looks good and covers the bases it needs to. However, it's getting a little long in the tooth and, more importantly, doesn't cover some capabilities that we're thinking of adding.

While we could potentially expand the current one, this provides a good opportunity for a clean start. I had actually started taking a swing at this a year and a half ago, taking the tack that I'd make a webapp and deploy it using the Domino Open Liberty Runtime. While that approach would put all technologies on the table, it'd certainly be weirder to future maintainers than an app inside an NSF (at least for now).

So I decided in the past few weeks to pick the project back up and move it into an NSF via the XPages Jakarta EE Support project. I can't say for sure whether I'll actually complete the project, but it'll regardless be a good exercise and has proven to be an excellent way to find needed features to implement.

I figure it'll also be useful to keep something of a travelogue here as I go, making posts periodically about what I've implemented recently.

The UI Toolkit

The original form of this project used MVC and JSP for the UI layer. Now that I was working in an NSF, I could readily use XPages, but for now I've decided to stick with the MVC approach. While it will make me have to solve some problems I wouldn't necessarily have to solve otherwise (like file uploads), it remains an extremely-pleasant way to write applications. I am also not constrained to this: since the vast majority of the logic is in Java beans and controller classes, switching the UI front-end would not be onerous. Also, I could theoretically mix JSP, JSF, XPages, and static HTML together in the app if I end up so inclined.
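
For flavor, this is roughly what the MVC idiom looks like - a minimal sketch with invented names and paths, not code from the actual app:

import jakarta.enterprise.context.RequestScoped;
import jakarta.inject.Inject;
import jakarta.mvc.Controller;
import jakarta.mvc.Models;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

@Path("/home")
@Controller
@RequestScoped
public class HomeController {
    @Inject
    Models models;

    @GET
    public String get() {
        // Values put into Models are available to the view, e.g. as ${greeting}
        models.put("greeting", "Hello from MVC");
        // By MVC convention, this resolves to a JSP beneath WEB-INF/views
        return "home.jsp";
    }
}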

In the original app (as in this blog), I made use of WebJars to bring in JavaScript dependencies, namely Hotwire Turbo to speed up in-site navigation and use Turbo Frames. Since the NSF app in Designer doesn't have the Maven dependency mechanism the original app did, I just ended up copying the contents of the JAR into WebContent. That gave me a new itch to scratch, though: I'd love to be able to have META-INF/resources files in classpath JARs picked up by the runtime and made available, lowering the number of design elements present in the NSF.

The Data Backend

The primary benefit of this project so far has been forcing me to flesh out the Jakarta NoSQL driver in the JEE support project. I had kind of known hypothetically what features would be useful, but the best way to do this kind of thing is often to work with the tool until you hit a specific problem, and then solve that. So far, it's forced me to:

  • Implement the view support in my previous post
  • Add attachment support for documents, since we'll need to upload and download project releases
  • Improve handling of rich text and MIME, though this also has more room to grow
  • Switch the returned Streams from the driver to be lazily loaded, meaning that not all documents/entries have to be read if the calling code stops reading the results partway through (see the sketch after this list)
  • Add the ability to use custom property types with readers/writers defined in the NSF
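
As a quick illustration of that lazy-loading change, here's the sort of calling code that benefits; the repository and entity names are borrowed from later examples and used purely for illustration:

import java.util.List;
import java.util.stream.Collectors;

public class RecentEntries {
    public static List<BlogEntry> findRecent(BlogEntryRepository repository) {
        return repository.findAll()
            .limit(20)                  // with lazy Streams, reading stops here
            .collect(Collectors.toList());
    }
}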

Together, these improvements have let me have almost no lotus.domino code in the app. The only parts left are a bean for formatting Notes-style names (which I may want to make a framework service anyway) and a bean for providing access to the various associated databases used by the app. Not too shabby! The app is still tied to Domino by way of using the Domino-specific extensions to JNoSQL, but the programming model is significantly better and the amount of app code was reduced dramatically.

Next Steps

There's a bunch of work to be done. The bulk of it is just implementing things that the current XPages app does: actually uploading projects, all the stuff like discussion lists, and so forth. I'll also want to move the server-side component of the small "IP Tools" suite I use for IP management stuff in here. Currently, that's implemented as Wink-based JAX-RS resources inside an OSGi bundle, but it'll make sense to move it here to keep things consolidated and to make use of the much-better platform capabilities.

As I mentioned above, I can't guarantee that I'll actually finish this project - it's all side work, after all - but it's been useful so far, and it's a further demonstration of how thoroughly pleasant the programming model of the JEE support project is.

Working Domino Views Into Jakarta NoSQL

Sun Jun 12 15:33:47 EDT 2022

A few versions ago, I added Jakarta NoSQL support to the XPages Jakarta EE Support project. For that, I used DQL and QueryResultsProcessor exclusively, since it's a near-exact match for the way JNoSQL normally does things and QRP brought the setup into the realm of "good enough for the normal case".

However, as I've been working on a project that puts this to use, the limitations have started to hold me back.

The Limitations

The first trouble I ran into was the need to list, for example, the most recent 20 of an entity. This is something that QRP took some steps to handle, but it still has to build the pseudo-view anew the first time and then any time documents change. This gets prohibitively expensive quickly. In theory, QRP has enough flexibility to use existing views for sorting, but it doesn't appear to do so yet. Additionally, its "max entries" and "max documents" values are purely execution limits and not something to use to give a subset report: they throw an exception when that many entries have been processed, not just stop execution. For some of this, one can deal with it when manually writing the DQL query, but the driver doesn't have a path to do so.

The second trouble I ran into was the need to get a list composed of multiple kinds of documents. This one is a limitation of the default idiom that JNoSQL uses, where you do queries on named types of documents - and, in the Domino driver, that "type" corresponds to Form field values.

The Uncomfortable Solution

Thus, hat in hand, I returned to the design element I had hoped to skim past: views. Views are an important tool, but they are way, way overused in Domino, and I've been trying over time to intentionally limit my use of them to break the habit. Still, they're obviously the correct tool for both of these jobs.

So I made myself an issue to track this and set about tinkering with some ways to make use of them in a way that would do what I need, be flexible for future needs, and yet not break the core conceit of JNoSQL too much. My goal is to make almost no calls to an explicit Domino API, and so doing this will be a major step in that direction.

Jakarta NoSQL's Extensibility

Fortunately for me, Jakarta NoSQL is explicitly intended to be extensible per driver, since NoSQL databases diverge more wildly in the basics than SQL databases tend to. I made use of this in the Darwino driver to provide support for stored cursors, full-text search, and JSQL, though all of those had the bent of still returning full documents and not "view entries" in the Domino sense.

Still, the idea is very similar. Jakarta NoSQL encourages a driver author to write custom annotations for repository methods to provide hints to the driver to customize behavior. This generally happens at the "mapping" layer of the framework, which is largely CDI-based and gives you a lot of room to intercept and customize requests from the app-developer level.

Implementation

To start out with, I added two annotations you can add to your repository methods: @ViewEntries and @ViewDocuments. For example:

@RepositoryProvider("blogRepository")
public interface BlogEntryRepository extends DominoRepository<BlogEntry, String> {
    public static final String VIEW_BLOGS = "vw_Content_Blogs"; //$NON-NLS-1$
    
    @ViewDocuments(value=VIEW_BLOGS, maxLevel=0)
    Stream<BlogEntry> findRecent(Pagination pagination);
    
    @ViewEntries(value=VIEW_BLOGS, maxLevel=0)
    Stream<BlogEntry> findAll();
}

The distinction here is one of the ways I slightly break the main JNoSQL idioms. JNoSQL was born from the types of databases where it's just as easy to retrieve the entire document as it is to retrieve part - this is absolutely the case in JSON-based systems like Couchbase (setting aside attachments). However, Domino doesn't quite work that way: it can be significantly faster to fetch only a portion of a document than the data from all items, namely when some of those items are rich text or MIME.

The @ViewEntries annotation causes the driver to consider only the item values found in the entries of the view it's referencing. In a lot of cases, this is all you'll need. When you set a column in Designer to be just directly an item value from the documents, the column is by default named with the same name, and so a mapped entity pulled from this view can have the same fields filled in as from a document. This does have the weird characteristic where objects pulled from one method may have different instance values from the "same" objects from another method, but the tradeoff is worth it.

@ViewDocuments, fortunately, doesn't have this oddity. With that annotation, documents are processed in the same way as with a normal query; they just are retrieved according to the selection and order from the backing view.

Using these capabilities allowed me to slightly break the JNoSQL idiom in the other way I needed: reading unrelated document types in one go. For this, I cheated a bit and made a "document" type with a form name that doesn't correspond to anything, and then mapped its items to the view's column names. So I created this entity class:

@Entity("ProjectActivity")
public class ProjectActivity {
    @Column("$10")
    private String projectName;
    @Column("Entry_Date")
    private OffsetDateTime date;
    @Column("$12")
    private String createdBy;
    @Column("Form")
    private String form;
    @Column("subject")
    private String subject;

    /* snip */
}

As you might expect, no form has a field named $10, but that is the name of the view column, and so the mapping layer happily populates these objects from the view when configured like so:

@RepositoryProvider("projectsRepository")
public interface ProjectActivityRepository extends DominoRepository<ProjectActivity, String> {
    @ViewEntries("AllbyDate")
    Stream<ProjectActivity> findByProjectName(@ViewCategory String projectName);
}

These are a little weird in that you wouldn't want to save such entities lest you break your data, but, as long as you keep that in mind, it's not a bad way to solve the problem.

Future Changes

Since this implementation was based on fulfilling just my immediate needs and isn't the result of careful consideration, it's likely to be something that I'll revisit as I go. For example, that last example shows the third custom annotation I introduced: @ViewCategory. I wanted to restrict entries to a category that is specified programmatically as part of the query, and so annotating the method parameter was a great way to do that. However, there are all sorts of things one might want to do dynamically when querying a view: setting the max level programmatically, specifying expand/collapse behavior, and so forth. I don't know yet whether I'll want to handle those by having a growing number of parameter annotations like that or if it would make more sense to consolidate them into a single ViewQueryOptions parameter or something.

I also haven't done anything special with category or total rows. While they should just show up in the list like any other entry, there's currently nothing special signifying them, and I don't have a way to get to the note ID either (just the UNID). I'll probably want to create special pseudo-items like @total or @category to indicate their status.

There'll also no doubt be a massive wave of work to do when I turn this on something that's not just a little side project. While I've made great strides in my oft-mentioned large client project to get it to be more platform-independent, it's unsurprisingly still riven with Domino API references top to bottom. While I don't plan on moving it anywhere else, writing so much code using explicit database-specific API calls is just bad practice in general, and getting this driver to a point where it can serve that project's needs would be a major sign of its maturity.

XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall

Wed May 25 14:41:52 EDT 2022

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

Earlier today, I published version 2.5.0 of the XPages Jakarta EE Support project. It's mostly a consolidation and bug-fix release, but there are a few interesting features and notes about the implementation. Plus, as teased in the post title up there, there's a looming problem for the project.

New Features

There are two main new features in this version.

First, I added some configurable CORS support for REST services. Fortunately for me, RESTEasy comes with a CORS filter by default, and it just needs to be enabled. I wired it up using MicroProfile Config to read some values out of xsp.properties:

rest.cors.enable=true                   # required for CORS
rest.cors.allowCredentials=true         # defaults to true
rest.cors.allowedMethods=GET,HEAD       # defaults to all
rest.cors.allowedHeaders=Some-Header    # defaults to all
rest.cors.exposedHeaders=Some-Header    # optional
rest.cors.maxAge=600                    # optional
# allowedOrigins is required, and can be "*"
rest.cors.allowedOrigins=http://foo.com,http://bar.com

I also added support for using the long-standing @WebServlet annotation. Though REST services will generally do what you want, sometimes it's handy to use the lower-level Servlet capability, and now you can do so inline:

@WebServlet(urlPatterns = { "/someservlet", "/someservlet/*", "*.hello" })
public class ExampleServlet extends HttpServlet {
	private static final long serialVersionUID = 1L;
	
	@Inject
	ApplicationGuy applicationGuy;

	@Override
	protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
		resp.setContentType("text/plain");
		resp.getWriter().println("Hello from ExampleServlet. context=" + req.getContextPath() + ", path=" + req.getServletPath() + ", pathInfo=" + req.getPathInfo());
		resp.getWriter().println("ApplicationGuy: " + applicationGuy.getMessage());
		resp.getWriter().flush();
	}
}

Consolidation

There were a couple specs where I had previously either copied the source into the repository (CDI, Mail) or had maintained a local branch fork (NoSQL). Those were always uncomfortable concessions to reality, but I decided to look further into ways to handle that.

For NoSQL, part of it was what I talked about in my last post: using Eclipse Transformer to make use of javax.* compiled binaries and source converted to jakarta.* automatically. But beyond that, it had the same problem that I had forked Mail for. Namely, it hits the same trouble that lots of non-OSGi code does in an OSGi context, where it uses ServiceLoader in a non-extensible way. Though I have an open PR to make use of the pseudo-standard "HK2" ServiceLoader provider, waiting for that would mean continuing the local-build trouble.
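
To illustrate the pattern that causes the trouble (with a stand-in SPI interface, since the specifics vary by spec):

import java.util.ServiceLoader;

public class SpiLookup {
    // Stand-in for a spec's service-provider interface
    public interface SomeSpi { /* ... */ }

    public static Iterable<SomeSpi> findProviders() {
        // On a flat classpath, this picks up META-INF/services registrations.
        // Inside OSGi, though, the loader consulted here generally can't see
        // providers contributed by other bundles, so the lookup comes up
        // empty without extra plumbing like the HK2 provider mentioned above.
        return ServiceLoader.load(SomeSpi.class);
    }
}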

Instead, for all of these cases I made use of OSGi's Weaving capability to re-write those parts of the class files on the fly. While this is a bit unfortunate, it works well in practice. The only real down side for now is having to be a bit more careful when bumping the versions in the future, but this type of code changes very rarely.
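
For a sense of the shape of that, here's a minimal weaving hook - a sketch only, with a placeholder class check and rewrite step rather than the project's actual transformation code:

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.hooks.weaving.WeavingHook;
import org.osgi.framework.hooks.weaving.WovenClass;

public class ExampleWeavingActivator implements BundleActivator, WeavingHook {
    @Override
    public void start(BundleContext context) {
        // A weaving hook is just an OSGi service that the framework consults
        // whenever it defines a class
        context.registerService(WeavingHook.class, this, null);
    }

    @Override
    public void stop(BundleContext context) {
        // The registration is torn down automatically when the bundle stops
    }

    @Override
    public void weave(WovenClass wovenClass) {
        if ("com.example.SomeTroublesomeClass".equals(wovenClass.getClassName())) {
            // Swap in modified bytecode on the fly (e.g. generated with ASM)
            wovenClass.setBytes(rewriteServiceLoaderUse(wovenClass.getBytes()));
        }
    }

    private byte[] rewriteServiceLoaderUse(byte[] original) {
        // Placeholder for the actual bytecode transformation
        return original;
    }
}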

The Looming Wall

While this has been going swimmingly, I've started to hit some real impediments with Domino's Java version. The next release of Jakarta EE, version 10, requires Java 11 as a minimum. This is similar to the move Equinox (Domino's OSGi framework of choice) made just under two years ago, which has itself bitten me by blocking an upgrade of Tycho to version 2.0 and above. Java 11 is about four years old now, and is no longer even the latest LTS release, so this all makes sense.

I've known this was coming for a while, but incompatible versions of JEE specs and implementations started to trickle in over the past year, leading to me leaving notes for myself about maximum versions. JEE 10 itself is fairly imminent now, so I'll be capped at the ones released with JEE 9 a while ago.

So I've been pondering my options here.

In one sense, I solved this problem years ago. The Domino Open Liberty Runtime project has had the ability to download any version of open-source Java that you want, and I expanded it last year to let you pick from several common flavors. Liberty maintains a breathless pace of advancement, adding official support for Java 18 the month after it came out. If one wants to run JEE apps on Domino, that's the most complete way. However, though it does its job technologically well, it's not exactly a natural fit for Domino developers in its current state.

But I've been considering anew a notion I had years ago, which is to write an extension for Liberty so that it reads class files and resources out of an NSF directly. In some early investigation a bit ago, this started to appear quite doable. In theory, I could write an adapter that would take an incoming request for "foo.nsf" and then read files out of the NSF in the same way XPages does, but instead feeding them to Liberty's runtime. Doing this would essentially implement all remaining JEE and MicroProfile specs in one fell swoop on top of the "any Java version" support, but would add the fault-prone attribute of running a separate process and proxying requests to it. In practice, that setup has proven itself good, but it's certainly more complicated than the "single process on port 80" deal that Domino's HTTP is now.

That route also wouldn't inherently support XPages, which would be something of an impediment to the XPages JEE project's original remit. That's something I've also pondered, and in theory I could make an auto-vivifying version of the XPages Runtime project that grabs all the pertinent XPages bundles from the current server and patches them into the Liberty server as an extension feature, similar to how all the built-in Liberty features work. This could be done, but I'll admit that I balk a bit at the prospect. Though I run XPages outside Domino constantly, it's with full knowledge of the tradeoffs and special considerations. Getting a normal NSF-based XPages app to run in this way would take some additional work.

Anyway, those options could work, but none of them are great. The true fix would naturally be for HCL to move to a newer Java version in Domino's HTTP stack, but I don't control that, so I'll content myself with considering what to do in the meantime. Admittedly, pondering this sort of thing is enjoyable in its own right. Also fortunately, even without tackling this, there's still plenty of stuff in the pile for me to tackle as the fancy strikes me.

Putting Eclipse Transformer To Use In Dependency Wrangling

Tue May 24 15:46:15 EDT 2022

Tags: jakartaee java

Setting code aside, the backbone of the XPages Jakarta EE Support project is its dependency pool. In it, I use my fork of the p2-maven-plugin to wrangle all the spec and implementation dependencies. Aside from just collecting them, this file does a ton of work to create and reconfigure their OSGi bundle rules to get everything working on Domino.

There have been limitations, though, and some of them have to do with the Jakarta NoSQL project. Though there are side branches of that project using the jakarta.* namespace, the main master branch is still on javax.* for a couple Jakarta dependencies. Historically, I've dealt with this by running a build locally and deploying it to OpenNTF's Maven server. However, this adds a bit of randomness to the mix: if a snapshot build of NoSQL goes out to the main repository that happens to be newer, then building the dependency repository locally might pick up on that instead, since it's named the same thing.

Transformer

Fortunately, IBM wrote the solution for me: Eclipse Transformer. Transformer is a rules engine that translates files - namely Java classes and related resources - based on configuration. While it's generic, it's really designed for the transition from the javax.* to jakarta.* namespaces.

It allows you to do these transformations at runtime or (as I'll be doing here) ahead of time, even if you don't have access to the original source. Though I do have access to the source, it's more useful at the moment to act like I don't.

I'd known about the tool and had seen how heavily it's used by both app servers and implementation vendors to support old- and new-style uses alike, so I kept it in mind in case the need ever came up. It's a perfect fit for this.

p2-maven-plugin

I considered a couple ways to handle this, but realized the cleanest for now would be to integrate it into the dependency pool generator that I already have, since it fits right in with the OSGi transformations I'm doing.

So I went on over to the p2-maven-plugin fork and got to work. When defining Maven artifacts to bring in, the format looks like this:

<artifact>
    <id>jakarta.servlet:jakarta.servlet-api:4.0.4</id>
    <source>true</source>
</artifact>

Now, Servlet already has a jakarta.* version, but it'll be useful here as an example that avoids the other transformations I'm doing.

My addition is to add a transform configuration option here, with jakarta as the only value for now:

<artifact>
    <id>jakarta.servlet:jakarta.servlet-api:4.0.4</id>
    <source>true</source>
    <transform>jakarta</transform>
</artifact>

...and that'll be it! When that is specified, the code will now run the artifact and its source JAR transparently through Transformer and the version you get in your p2 repository will reflect the transition. And, well, it works perfectly in my case. The resultant NoSQL spec and dependencies are functionally equivalent to the ones in the jakarta.* source branch, but without having to actually change the source files yet. Neat.

Implementation

Though it took a bit to track down the best way to do it, it turned out that Transformer is quite easy to embed into a Java app like the Maven plugin. The majority of the code ends up being effectively Java boilerplate to provide the default values for Jakarta transformation. Truncated, it looks like this:

String inputFileName = t.getAbsolutePath(); // the artifact in ~/.m2/repository
File dest = File.createTempFile(t.getName(), ".jar"); //$NON-NLS-1$
String outputFileName = dest.getAbsolutePath();

Map<String, String> optionDefaults = JakartaTransform.getOptionDefaults();
Function<String, URL> ruleLoader = JakartaTransform.getRuleLoader();
TransformOptions options = /* build TransformOptions object that reads the above variables */

Transformer transformer = new Transformer(logger, options);
ResultCode result = transformer.run();
switch(result) {
case ARGS_ERROR_RC:
case FILE_TYPE_ERROR_RC:
case RULES_ERROR_RC:
case TRANSFORM_ERROR_RC:
	throw new IllegalStateException("Received unexpected result from transformer: " + result);
case SUCCESS_RC:
default:
	return dest;
}

There are plenty of options to specify, but that's really about it. Once given the Jakarta defaults, it will do the right thing in the normal case, both for the compiled class files as well as the source JAR.

I'm not sure if I'll need it in other cases (NoSQL will move over in the main branch eventually), but it's sure handy here and should be useful in a pinch. From time to time, I've run across dependencies that would be useful to include but use old JEE specs, and this could do the trick in those cases too.

So Why Jakarta?

Thu Apr 28 16:10:11 EDT 2022

Tags: jakartaee java
  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

I've spent a lot of time over the last while tinkering with the XPages Jakarta EE Support project in particular and Jakarta technologies in general, and I figured it'd be worth discussing a bit why I like this stack and why I think it's worth putting work into.

There are a couple facets to this, I think. Why is it good on its own? Why is it good as a complement or replacement for XPages? And why is it good compared to the other roads offered for Domino developers?

Quick Aside: Spring and Others

Before I get much further, I should mention early on that this isn't so much about Jakarta as opposed to technologies like Spring. Spring is good! It's similar in concept, both because it started from a JEE-aligned mindset and now because Jakarta and MicroProfile have been adopting a lot of the best concepts. It's kind of a "D&D and Pathfinder" situation. While there are some philosophical differences, and Jakarta is (now) run by an open-source organization as opposed to an individual company, the distinction for our purposes isn't important.

This also goes for some other technologies that could potentially be slotted in for server-based app dev, like Vert.x. Vert.x, for its part, often serves different purposes, and so that discussion is also separate.

Technical Reasons

Going into all the specific things that I think are good about JEE technologies would be quite an ordeal, so I'll stick to summarizing some overarching themes that I appreciate.

Presumably as a sign of my own ever-increasing age, I appreciate the staid nature of many aspects of it. While some of that comes from the near-stagnation the stack suffered from towards the end of Oracle's sole stewardship of it, it's good that things like Servlet have remained consistent in important ways since the very beginning. Some aspects have come and some will soon go, but the main aspects have remained pleasantly consistent because they were designed to be simple and largely adaptable. Servlet has its limitations, but they're limitations that don't generally show up for normal use.

I also quite appreciate how annotation-based most of the specs are. This was a good way of moving away from the original "pile of XML" configuration process of the early versions of Java EE while still retaining introspection abilities. What I mean by that is the ability of programs (like a server or an IDE) to look at a Jakarta app and glean important information without having to actually execute the code. As a point of comparison, take this hypothetical version of a REST server, where you declare endpoints programmatically:

public void initServer(ServerConfig config) {
    config.addHandler("/foo", new FooHandler());
    config.addHandler("/foo/bar", new BarHandler());
}

...and then compare that to the annotation-based way of doing it:

@Path("/foo")
public class FooHandler {
	/* snip */

	@GET
	@Path("bar")
	public Object getBar() { /* ... */ }
}

Both could be functionally the same at runtime, but the latter allows tools to inspect the classes statically to provide summaries and capabilities in the UI in a way that would be technically possible but much more difficult otherwise. This is certainly not unique to Jakarta, but it's an important feature of it nonetheless.

Moreover, I think that the stack is morphing itself nicely into a cleaner, modern form. It's been a rocky process, but a lot of the individual specs are either adapting themselves onto CDI or using it as the baseline. As much as I sang the praises of Servlet in the earlier paragraph, you can write a thoroughly-capable app using CDI and JAX-RS without ever caring about much else beyond a data layer.
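
As a small example of that CDI-plus-JAX-RS core (the classes here are invented for illustration and would live in separate files in practice):

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@ApplicationScoped
class GreetingService {
    String greet(String name) {
        return "Hello, " + name;
    }
}

@Path("greeting")
public class GreetingResource {
    @Inject
    GreetingService service;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String get() {
        return service.greet("world");
    }
}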

This adaptability is also paying off with newer-era work like Quarkus. Quarkus is an intriguing project that combines slices of Jakarta, MicroProfile, and others with the native-compilation capabilities of GraalVM to provide a toolchain that lets you write quite-efficient compiled apps, targeted primarily for Kubernetes deployments where the startup and response time of a single node is very important. This is really solving a lot of problems I don't have, but it's interesting to watch, and to see how these goals feed back into Jakarta with things like CDI Lite.

Jakarta As An XPages Extension Or Successor

XPages was (and is) a fork of a subset of Java EE, with the split happening somewhere just before 2006's Java EE 5. It's a small subset, but you can look down that list of technologies and see a few that remain to this day: Servlet 2.4/2.5, JSF 1.1/1.2 by way of the XPages fork, JavaMail, and a few other miscellaneous packages. DAS and the Extension Library brought in JAX-RS 1.1, so you can add a dash of 2009's Java EE 6 to the pot.

The XPages Jakarta EE Support project started as a mechanism for bringing in a newer JAX-RS version, followed by CDI to replace managed beans and EL 3 to replace XPages's primordial Expression Language support - essentially, as a slowly-growing platform update. In its current form, it brings in a wide slate of technologies, but the fact that it was starting as an extension of an existing ancient Java EE fork made it possible to do this gradually, piece by piece. Really, up until the move to the jakarta.* namespace, it was a process of just glomming compatible parts onto the existing Servlet baseline.

Even after that switch, the historical alignment with the older parts of the stack makes building on it comparatively straightforward. That applies both to me as the person adapting the spec implementations and (hopefully) to a developer actually using them. While XPages predated the annotation-heavy push in JEE as well as CDI entirely, a lot of the core concepts are in common, and I expect that it'd be an easier transition from XPages alone to Jakarta EE than, say, classic Domino web dev to XPages was. It certainly was for me, anyway.

Jakarta As A Cultural Match

This topic covers both my general appreciation for a thoroughly-open-source platform and why I specifically like it in relation to other roads open to Domino developers.

Java EE had for a long time been kind of open source: though Sun and then Oracle held the reins, the specs grew to be free to implement, and over time there flourished a slew of compatible servers, many of which are now or have always been fully open-source.

I like this for a lot of reasons. For one, it's just good as a programmer to have source access. Normally, you can just go by the spec, but having full access to the server's source lets you debug thorny problems when you hit an edge case. While closed-source software certainly has its place, there's just a layer of "all else being equal, source access is better".

Beyond being able to see and debug the source, it's valuable that the platform is open source and the implementations I use are as well, and the whole thing is guided by the extremely-established Eclipse Foundation. While a company handing something over to an open-source organization can sometimes just be a way to usher it to a plausibly-deniable death, the activity around Jakarta EE shows that isn't the case here. While Oracle and IBM still tend to naturally top the charts, it has a diverse pool of contributors, and its fate isn't tied to the interests of a single company. As with closed-source software, sometimes being shepherded by a single company has advantages, but it leaves you more exposed to the winds of their financial incentives.

This all contributes to a platform where I can be comfortable writing a bunch of code with the knowledge that, while it may not be good forever, the path will be at least clear. While there's something of an industry for modernizing old Java EE applications (one our old friend Niklas Heidloff is involved in), it's a task shared by a lot of companies and, indeed, a lot of the work is "replace old vendor-specific code with standards-based code". While nothing can truly prevent you from having a pile of obsolete code other than not writing it in the first place, following a path like this that's shared with a broad slice of the industry is a good way to mitigate the trouble.

And I think that paragraph is what a lot of it comes down to for me. As much as I can be, I want to be out of the business of writing code that doesn't have a built-in "plan B". If Domino magically stopped existing tomorrow, code written in this way wouldn't necessarily directly work elsewhere, but it'd be a much shorter journey than for the Notes-client and classic Domino web code I wrote in my early career. And, really, some of it would directly work elsewhere. The stuff that's just describing a REST entrypoint or a page layout? The stuff that describes the interaction between those and an abstracted data layer? That code doesn't care what your server is, and there's a whole ecosystem of servers ready to do the job. That, there, is what makes this worthwhile for me.

Structure of the Domino Web App Container

Fri Feb 25 15:01:33 EST 2022

A while back, I talked about the uses of HttpService in Domino. In that post, I talked about how the various HttpService implementations take a look at incoming URLs, see if they're something that should be handled on the Java layer, and then either handle them or pass them back to the legacy NHTTP code to do its thing. My fiddling with the XPages Jakarta EE project in recent days has gotten me thinking about this layer again, and I think it'll be interesting to expand on how this whole layer works (at least as I understand it).

Along with this post, it might be useful to peruse the slide deck for AD105 from Lotusphere 2011. There's a lot of good stuff in there, and basically nothing has changed in the intervening 11 years.

The Stack (Conceptually)

The Domino HTTP stack looks conceptually something like this:

Diagram of the Domino HTTP stack

This is, setting aside the specifics, pretty similar to how other app servers of various kinds are laid out. There's some bottom layer that handles the actual network connection, some part just above that that handles interpreting the requests as HTTP (optionally bypassed in some cases), and then an orchestrator that manages the actual apps sitting on the top layer and routes requests as appropriate.

The Stacks (Java-wise)

Before I continue, I think it will be useful to have some stack traces to reference back to, to see what they share in common and where they diverge. These three examples - from an XPage request, an Equinox-registered Servlet, and an OSGi-packaged webapp - all cover the part of the stack from the bottom up until where user code comes into play.

XPages (DesignerFacesServlet is what handles serving an XPage):

     at com.ibm.xsp.webapp.DesignerFacesServlet.service(DesignerFacesServlet.java:103)
     at com.ibm.designer.runtime.domino.adapter.ComponentModule.invokeServlet(ComponentModule.java:600)
     at com.ibm.domino.xsp.module.nsf.NSFComponentModule.invokeServlet(NSFComponentModule.java:1352)
     at com.ibm.designer.runtime.domino.adapter.ComponentModule$AdapterInvoker.invokeServlet(ComponentModule.java:877)
     at com.ibm.designer.runtime.domino.adapter.ComponentModule$ServletInvoker.doService(ComponentModule.java:820)
     at com.ibm.designer.runtime.domino.adapter.ComponentModule.doService(ComponentModule.java:589)
     at com.ibm.domino.xsp.module.nsf.NSFComponentModule.doService(NSFComponentModule.java:1336)
     at com.ibm.domino.xsp.module.nsf.NSFService.doServiceInternal(NSFService.java:725)
     at com.ibm.domino.xsp.module.nsf.NSFService.doService(NSFService.java:515)
     at com.ibm.designer.runtime.domino.adapter.LCDEnvironment.doService(LCDEnvironment.java:363)
     at com.ibm.designer.runtime.domino.adapter.LCDEnvironment.service(LCDEnvironment.java:319)
     at com.ibm.domino.xsp.bridge.http.engine.XspCmdManager.service(XspCmdManager.java:272)


Equinox Servlet (the org.eclipse.equinox.http.registry.servlets extension point):

	at com.example.SomeServlet.service(SomeServlet.java:104)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
	at org.eclipse.equinox.http.registry.internal.ServletManager$ServletWrapper.service(ServletManager.java:180)
	at org.eclipse.equinox.http.servlet.internal.ServletRegistration.handleRequest(ServletRegistration.java:90)
	at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:111)
	at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:67)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
	at com.ibm.domino.xsp.adapter.osgi.OSGIModule.invokeServlet(OSGIModule.java:167)
	at com.ibm.domino.xsp.adapter.osgi.OSGIModule.access$0(OSGIModule.java:153)
	at com.ibm.domino.xsp.adapter.osgi.OSGIModule$1.invokeServlet(OSGIModule.java:134)
	at com.ibm.domino.xsp.adapter.osgi.AbstractOSGIModule.invokeServletWithNotesContext(AbstractOSGIModule.java:181)
	at com.ibm.domino.xsp.adapter.osgi.OSGIModule.doService(OSGIModule.java:128)
	at com.ibm.domino.xsp.adapter.osgi.OSGIService.doService(OSGIService.java:418)
	at com.ibm.designer.runtime.domino.adapter.LCDEnvironment.doService(LCDEnvironment.java:363)
	at com.ibm.designer.runtime.domino.adapter.LCDEnvironment.service(LCDEnvironment.java:319)
	at com.ibm.domino.xsp.bridge.http.engine.XspCmdManager.service(XspCmdManager.java:272)


Web Container (the com.ibm.pvc.webcontainer.application extension point):

	at com.example.ExampleServlet.doGet(ExampleServlet.java:18)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:693)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
	at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1661)
	at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:937)
	at com.ibm.pvc.internal.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:85)
	at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:500)
	at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:3810)
	at com.ibm.ws.webcontainer.webapp.WebGroup.handleRequest(WebGroup.java:276)
	at com.ibm.pvc.internal.webcontainer.VirtualHost.handleRequest(VirtualHost.java:143)
	at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:931)
	at com.ibm.pvc.internal.webcontainer.WebContainerBridge.handleRequest(WebContainerBridge.java:25)
	at com.ibm.domino.osgi.core.webContainer.WebApplicationsTracker.doService(WebApplicationsTracker.java:141)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.ibm.domino.xsp.adapter.osgi.webContainer.OSGIWebContainerModule.invokeWebAppContainerService(OSGIWebContainerModule.java:207)
	at com.ibm.domino.xsp.adapter.osgi.webContainer.OSGIWebContainerModule.doService(OSGIWebContainerModule.java:178)
	at com.ibm.domino.xsp.adapter.osgi.OSGIService.doService(OSGIService.java:418)
	at com.ibm.designer.runtime.domino.adapter.LCDEnvironment.doService(LCDEnvironment.java:363)
	at com.ibm.designer.runtime.domino.adapter.LCDEnvironment.service(LCDEnvironment.java:319)
	at com.ibm.domino.xsp.bridge.http.engine.XspCmdManager.service(XspCmdManager.java:272)

Each one of these has some intriguing lines, but we'll end up focusing on the bottom 5-6 in each.

Anyway, back to the details.

Orchestrator

The bottom three lines of all three stacks are identical, and they show the entrypoint from the C side to Java.

The job of XspCmdManager is to take a bunch of handles and flags given to it by the C side, wrap them into something a little more suitable for polite company, and pass that off immediately to LCDEnvironment. At this level, while the code is clearly focused around getting to the point of handling Servlets, the classes don't actually implement javax.servlet interfaces - they're all a little more abstract than that. XspCmdManager has a few other responsibilities, but it's best thought of as just the glue layer between the native side and the Java stack.

From there, it passes the request along to LCDEnvironment, as the sole implementation of LCDRequestHandler. Things get a little meatier here. This is the part that loads up all of the HttpService implementations we've discussed before. It uses these services to answer calls to isXspUrl (which basically means "should Java handle this?") and then to handle the incoming requests. The first HttpService (sorted by its getPriority() value) that will handle the incoming request gets it, and here's our first point of divergence above. NSFService is the in-NSF XPages-and-stuff handler, taking care of requests with ".nsf" and either "xsp" or a registered extension in the path. OSGIService, for its part, handles both Equinox-registered servlets and Expeditor webapps, albeit in different ways.

ComponentModules

The next layer is interesting, and it's a part I didn't really talk about in the previous post. While an HttpService can just handle an incoming request directly (as the proxy service in the Domino Open Liberty Runtime project does), the idiom in the Domino stack is to use ComponentModule implementations to do it. These correspond conceptually to web apps deployed from WAR files in a standard app server: they're a cordoned-off blob of user code, with its own ClassLoader and notions for how to access resources.

For an NSF, this is NSFComponentModule. These objects are spawned by NSFService as needed when a matching request comes in for an NSF the first time and create a weird sort of webapp out of the NSF contents (with special handling for Single-Copy XPage Design). It doesn't go as far in that direction as the full web container, but it's enough to run and retain the XPages application. This type of module also opts in to the IServletFactory extension system, where contributors can add Servlets to the module either internally or via an OSGi extension. All ComponentModule types have the ability to opt in to this, but NSFComponentModule is the only one that does in practice.
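
As an aside, contributing to that system looks something like this - a sketch based on the commonly-documented shape of IServletFactory, with invented class names, so treat the details as illustrative:

import javax.servlet.Servlet;
import javax.servlet.ServletException;

import com.ibm.designer.runtime.domino.adapter.ComponentModule;
import com.ibm.designer.runtime.domino.adapter.IServletFactory;
import com.ibm.designer.runtime.domino.adapter.ServletMatch;

public class ExampleServletFactory implements IServletFactory {
    private ComponentModule module;
    private Servlet servlet;

    @Override
    public void init(ComponentModule module) {
        this.module = module;
    }

    @Override
    public ServletMatch getServletMatch(String contextPath, String path) throws ServletException {
        // Route /xsp/example beneath this module to a lazily-created servlet
        if (path != null && path.startsWith("/xsp/example")) {
            if (this.servlet == null) {
                this.servlet = module.createServlet("com.example.ExampleServlet", "Example Servlet", null);
            }
            return new ServletMatch(this.servlet, "/xsp/example", path.substring("/xsp/example".length()));
        }
        return null; // not ours; let other factories or the module handle it
    }
}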

Though both the Equinox Servlet and the PVC web container app go through OSGIService, here is where they split apart into OSGIModule (for standalone Servlets) and OSGIWebContainerModule.

OSGIModule uses some of the parts of the Equinox Servlet Container support to run an individual Servlet. It's the smallest of the three and mostly just creates the ServletRequest et al implementations, does a little wrapping in Equinox garb, and passes them on to your HttpServlet class.

OSGIWebContainerModule is fancier, since its job is to run (most of) a JEE-style web.xml-based app contained in an OSGi bundle. To do this, it uses a chopped-down fork of WebSphere - indeed, many of the classes in the stack still exist in much the same form in Liberty, but here are intermingled with Domino-specific stuff and some Expeditor PvC detritus. This isn't quite as capable as a full Servlet 2.5 container, lacking things like Filters and various listeners, but it gets the job done.

Module Adaptability

Though this system hasn't in practice grown beyond the built-in implementations to my knowledge, it's a neat little structure. There's some unfinished stuff in there in a mashupmaker package that I guess must have had to do with Lotus Mashups (which is apparently a product that existed at some point), but that's about it as far as extending it.

This is an intriguing layer, though, since it's also the one where the "adapter" Servlet objects from above are actually turned into instances of javax.servlet classes, specifically using classes like LCDAdapterHttpServletResponse. At this layer, you have a good amount of Java scaffolding, but being before the full conversion to javax.servlet classes means that a lot of the limitations in Domino's Servlet support only really come in in the upper echelons. Other than network- and HTTP-layer technologies like WebSocket and HTTP/2 (which are handled at the native layer), it'd be entirely possible to plug into this system with more-modern technologies while still participating cleanly in the environment. For example, you could write an HttpService that declares a higher priority than NSFService and use it to treat an NSF as an entirely-different sort of app, intercepting all URLs prefixed with it. I don't know that it would be a good idea to do so, but it's possible, and it's fun to think about.
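
To sketch that idea out (with signatures based on my understanding of the HttpService contract, so consider the details illustrative):

import java.io.IOException;
import java.util.List;

import javax.servlet.ServletException;

import com.ibm.designer.runtime.domino.adapter.ComponentModule;
import com.ibm.designer.runtime.domino.adapter.HttpService;
import com.ibm.designer.runtime.domino.adapter.LCDEnvironment;
import com.ibm.designer.runtime.domino.bootstrap.adapter.HttpServletRequestAdapter;
import com.ibm.designer.runtime.domino.bootstrap.adapter.HttpServletResponseAdapter;
import com.ibm.designer.runtime.domino.bootstrap.adapter.HttpSessionAdapter;

public class ExampleInterceptorService extends HttpService {
    public ExampleInterceptorService(LCDEnvironment env) {
        super(env);
    }

    @Override
    public int getPriority() {
        // A low value sorts this service ahead of NSFService
        return 0;
    }

    @Override
    public boolean isXspUrl(String fullPath, boolean arg1) {
        // Claim all URLs within a specific (hypothetical) NSF for Java handling
        return String.valueOf(fullPath).toLowerCase().contains("/example.nsf");
    }

    @Override
    public boolean doService(String contextPath, String path, HttpSessionAdapter httpSession,
            HttpServletRequestAdapter httpRequest, HttpServletResponseAdapter httpResponse)
            throws ServletException, IOException {
        // Returning true means "handled"; returning false passes the request
        // along to the next service in priority order
        return false;
    }

    @Override
    public void getModules(List<ComponentModule> modules) {
        // This sketch contributes no ComponentModules of its own
    }

    @Override
    public void destroyService() {
        // Nothing to clean up in this sketch
    }
}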

JSF in the XPages Jakarta EE Support Project

Fri Feb 11 14:31:38 EST 2022

Tags: jakartaee jsf
  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

When I talked about adding Jakarta NoSQL to the JEE support project, I mentioned how that in a way completed the pitch for this project as a full development mechanism. With data access and MVC+JSP in place, now it provides tools to build REST-based or server-rendered UIs backed cleanly by Domino data. Neat!

But, much like I had previously danced around the question of data access, there was still a spectre lurking in the background, a long-standing part of the Java/Jakarta EE platform: JSF, now named Jakarta Server Faces. This has unsurprisingly been gnawing at the back of my mind, since "upgrade JSF" has been one of the longest-running requests for XPages, starting basically right after JSF 1.2 came out with a smattering of technologies XPages didn't get.

While the notion of un-forking XPages is intriguing, it's its own big pile of work, and not in this project's bailiwick to address. What this project can do, though, is bring current JSF to an NSF alongside XPages. This proposition has a lot going on, so I think it'll be worth discussing both the practical elements and the implications.

How Is JSF Doing Nowadays, Anyway?

To start out with, it will be instructive to look at how JSF has fared since XPages split off from it in the JSF 1.1 era. We'll use the version history from Wikipedia as a starting point:

  1. 2004 - JSF 1.0
  2. 2004 - JSF 1.1 (minor release)
  3. 2006 - JSF 1.2 (feature release, included in Java EE)
  4. 2009 - JSF 2.0 (major release with significant improvements)
  5. 2010 - JSF 2.1 (minor release)
  6. 2013 - JSF 2.2 (feature release)
  7. 2017 - JSF 2.3 (feature release)

So... this is a real mixed bag, huh? Since XPages, JSF received a few major feature releases, progressing to 2.3. But, uh, the gaps in the years, though - that's ominous. And 2017 was the last feature update? Hrm.

But wait - when I tweeted about it, the screenshot says "JSF 4.0". What's going on there?

Well, like all active JEE specs, Faces received a major-version bump for semantic-versioning purposes when moving to the jakarta.* namespace. Version 3.0 came out in 2020, but was functionally identical to 2.3 from 2017, just with the API package names changed.

After that point, though, the future is in the Eclipse Foundation's hands, and each Jakarta spec is deciding its fate for future releases. Some specs, like JSP, are focusing on non-breaking releases with just a handful of fixes and clarifications. Faces, though, seems to have been the focus of some pent-up desires for change that are coming to fruition now that there's headroom for it.

Faces 4.0 gets another major-version bump because it involves some breaking changes that go beyond just the package names, including outright removing old deprecated features in favor of better modern options. Beyond that, it gains some outright new features - not the sort of features that would change a developer's life, but useful stuff.

Does this mean that JSF is in the prime of its life? Well, time will tell, and merely having developers settle what was presumably very old business doesn't guarantee a vibrant future, but it's a nice sign. In any event, it's not dead, so it has that going for it. Plus, the repositories for "ecosystem" libraries like PrimeFaces and Tobago are quite active, which is important.

So Is It Good? Is This How We Should Write Apps?

Maybe! I'm not sure. I mean, there's no getting around the general popularity of client-JS-first app dev, and the server side of that is covered well by JAX-RS. That said, current popularity isn't everything when it comes to development: it's all about the problems it solves. For the type of development usually done for Domino, server-side apps fit quite well, and even an exceedingly-adept developer can often get results quicker in an integrated stack of this type.

In any event, I expect I'll give it a shot for a bit and see how it shakes out.

How Does This Even Work? Doesn't It Conflict With XPages?

I'm glad you asked! Part of the problem I'd tangled with on this project from the start was that the aging versions of the various JEE specs - and the XPages JSF fork - are essentially unmovable obstacles. If I want to have a project that enhances Domino in place rather than replacing it in whole or in part, I have to deal with the stuff that's there.

Fortunately, like with pseudo-updating the core Servlet spec, the javax.*-to-jakarta.* namespace move is my savior. Honestly, being a trademark stickler may be the best thing Oracle's ever done from my perspective. Thanks to this switch, all the concerns API-side about distinguishing versions disappeared. While I might know that javax.faces.context.FacesContext and jakarta.faces.context.FacesContext are competing versions of the same spec, Java doesn't know - they're completely unrelated.

However, as with some other components, the API isn't the whole story. While the API packages all changed for Jakarta EE 9, the implementations generally kept their original packages. XPages is based on the original JSF implementation, which is now Mojarra over at Eclipse. This implementation uses the com.sun.faces namespace. While most of the XPages-specific stuff is under com.ibm.xsp, all those Sun-branded classes are still floating around, like com.sun.faces.RIConstants and com.sun.faces.context.FacesContextImpl. Moreover, XPages still uses com.sun.faces prefixes for stored application properties. For example, there's an ApplicationAssociate object that Sun-JSF and XPages use to keep track of app-specific information, and it stashes this in the conceptual NSF application. Using Mojarra, these sorts of classes and properties conflict, and things got hairy, with me trying to stash the com.sun stuff to the side per-request, with only partial success.

Fortunately, Mojarra isn't the only game in town: Apache MyFaces is alive and well, including with an actively-developed 4.0 branch. This implementation does not derive from the original, and doesn't use the com.sun.faces namespace. As an intriguing tidbit, Notes (but not Domino) actually includes MyFaces 1.1; I'm guessing it's another thing in there to support some kind of "Social" foofaraw.

Anyway, MyFaces was my ticket to success. Not only did it remove the problem of needing to walk on eggshells with package and attribute names, but the fact that it's a wholly-different implementation took some of the other weird problems I was dealing with off the table.

Required Infrastructure Improvements

Unlike some of the more staid specs (like JSP) or cutting-edge ones (like MVC and NoSQL), both JSF implementations lean heavily on Servlet-spec features that showed up after Domino's Servlet 2.4 implementation. While they fortunately don't require filters, they do heavily use listeners of many stripes.

So, like when I had to hack together RequestDispatcher support to implement MVC, I had to do similarly here to support attribute listeners and manually kick off lifecycle event notifications.
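
The general shape of that support is simple enough; this is an illustrative stand-in, not the project's actual plumbing:

import java.util.List;

import jakarta.servlet.ServletContext;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletRequestEvent;
import jakarta.servlet.ServletRequestListener;

public class RequestLifecycleNotifier {
    private final ServletContext context;
    private final List<ServletRequestListener> listeners;

    public RequestLifecycleNotifier(ServletContext context, List<ServletRequestListener> listeners) {
        this.context = context;
        this.listeners = listeners;
    }

    public void fireRequestInitialized(ServletRequest request) {
        // Fired manually before dispatching the request to the app
        ServletRequestEvent event = new ServletRequestEvent(context, request);
        listeners.forEach(l -> l.requestInitialized(event));
    }

    public void fireRequestDestroyed(ServletRequest request) {
        // Fired manually after the request completes
        ServletRequestEvent event = new ServletRequestEvent(context, request);
        listeners.forEach(l -> l.requestDestroyed(event));
    }
}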

I realize I'm kind of baking my way into making a full Servlet 5 container on top of Domino's rickety old one. There's way more work to make that an actual thing, but it's intriguing seeing it take shape just as incidental work from implementing other parts.

Next Steps

For the next steps, I'm not quite sure. For what it is, I think it's all working pretty well. However, while JSF on its own provides a lot of functionality, it's stuff like the component libraries from PrimeFaces that makes it a full toolkit. In a normal case, the server itself wouldn't provide component packs like PrimeFaces or Tobago - the app itself would bring them in as Maven/etc. dependencies and they would be included as part of the WAR. This would probably work in an NSF, but the experience of developing in an NSF when you have JARs in the classpath is... not great. Accordingly, I'm pondering including one or both of those in the project. It'd have tradeoffs, but it might make sense, or maybe it'd make sense to have associated projects that package them as distinct libraries to include. We'll see.

Video Series On The XPages Jakarta EE Project

Mon Feb 07 15:54:15 EST 2022

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

Over the last two weeks, Graham Acres and I recorded a video series for OpenNTF about my XPages Jakarta EE Support project, which has seen a flurry of development in the last few months. The 15-part series is up on YouTube.

The project itself saw the release of version 2.3.0 today, which is the first release with the Jakarta NoSQL driver I blogged about recently.

I think the project has turned into a pretty-interesting "platform update" for XPages, and I hope the video series captures that a bit. I'm still mulling over a sort of "thesis statement" about the whole thing, but for now describing the various new capabilities and how they interact will have to suffice.

Implementing a Basic JNoSQL Driver for Domino

Tue Jan 25 13:36:06 EST 2022

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

A few weeks back, I talked about my use of DQL and QRP in writing a JNoSQL driver for Domino. In that, I left the specifics of the JNoSQL side out and focused on the Domino side, but that former part certainly warrants some expansion as well.

Background

As a quick overview, Jakarta NoSQL is an approaching-finalization spec for working with NoSQL databases of various stripes in a Jakarta EE app. This is as opposed to the venerable JPA, which is a long-standing API for working with RDBMSes in JEE.

JNoSQL is the implementation of the Jakarta NoSQL spec, and is also an Eclipse project. As a historical note, the individual components of the implementation used to have Greek-mythological names, which is why older drivers like my Darwino driver or original Domino driver are sprinkled with references to "Diana" and "Artemis". The "JNoSQL" name also pre-dates its reification into a Jakarta spec - normally, spec names and implementations aren't quite so similarly named.

The specification is broken up into two main categories. The README for the implementation describes this well, but the summary is:

  1. "Communication" handles interpreting JNoSQL CRUD operations and actually applying them to the database.
  2. "Mapping" handles what the app developer interacts with: annotating classes to relate them to the back-end database and querying object repositories.

An individual driver may include code for both sides of this, but only the Communication side is obligatory to implement. A driver would contribute to the Mapping side as well if it wants to provide database-specific higher-level concepts. For example, the Darwino driver does this to provide explicit annotations for its full-text search, stored-cursor, and JSQL capabilities. I may do similarly in the Domino driver to expose FT search, view operations, or DQL queries directly.
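
To make the Mapping side concrete: it's the part an app developer normally touches. A minimal sketch of a Domino-backed entity and repository might look like the following - the entity, item names, and derived query method are hypothetical, and the annotations come from the pre-1.0 jakarta.nosql.mapping packages this generation of JNoSQL uses:

import java.util.stream.Stream;

import jakarta.nosql.mapping.Column;
import jakarta.nosql.mapping.Entity;
import jakarta.nosql.mapping.Id;
import jakarta.nosql.mapping.Repository;

@Entity("Person") // in the Domino driver, this maps to documents with Form = "Person"
public class Person {
    @Id
    private String unid; // the document's universal ID

    @Column("FirstName")
    private String firstName;

    @Column("LastName")
    private String lastName;

    // getters and setters omitted for brevity
}

// The Mapping layer generates an implementation of this at runtime,
// including a query derived from the findByLastName method name
interface PersonRepository extends Repository<Person, String> {
    Stream<Person> findByLastName(String lastName);
}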

Jakarta NoSQL handles Key-Value, Column, Graph, and Document data stores, but we only care about the last category for now.

Implementation Overview

Now, on to the actual implementation in question. The handful of classes in the implementation fall into a few categories, covered in the sections below.

Implementation Details

The core entrypoint for data operations is DefaultDominoDocumentCollectionManager, and JNoSQL specifies a few main operations to implement, basically CRUD plus total count:

  • insert and its overrides handle taking an abstract DocumentEntity from JNoSQL and turning it into a new lotus.domino.Document in the target database.
  • update does similarly, but with the assumption that the incoming entity represents a modification to an existing document.
  • delete takes an incoming abstract query and deletes all documents matching it.
  • select takes an incoming abstract query, finds matching documents, converts them to a neutral format, and returns them to JNoSQL. This is what my earlier post was all about.
  • count retrieves a count for all documents in a "collection". "Collection" here is a MongoDB-ism and the most-practical Domino equivalent is "documents with a specific Form name".
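
Sketched as a skeleton, that set of operations looks roughly like the class below. This is a paraphrase: the real class implements jakarta.nosql.document.DocumentCollectionManager, which carries more overloads than shown here.

import java.util.stream.Stream;

import jakarta.nosql.document.DocumentDeleteQuery;
import jakarta.nosql.document.DocumentEntity;
import jakarta.nosql.document.DocumentQuery;

public class DominoCollectionManagerSketch {
    public DocumentEntity insert(DocumentEntity entity) {
        // create a lotus.domino.Document in the target database and copy
        // the entity's key-value pairs into items
        throw new UnsupportedOperationException("sketch only");
    }

    public DocumentEntity update(DocumentEntity entity) {
        // like insert, but locates and modifies an existing document
        throw new UnsupportedOperationException("sketch only");
    }

    public void delete(DocumentDeleteQuery query) {
        // find documents matching the query and remove them
        throw new UnsupportedOperationException("sketch only");
    }

    public Stream<DocumentEntity> select(DocumentQuery query) {
        // translate the query to DQL, execute it, and convert the results
        throw new UnsupportedOperationException("sketch only");
    }

    public long count(String collection) {
        // count documents with the given Form name
        throw new UnsupportedOperationException("sketch only");
    }
}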

Entity Conversion

The insert, update, and select methods have as part of their jobs the task of translating between Domino's storage and JNoSQL's intermediate representation, and this happens in EntityConverter.

Now, this point of the code has some... nomenclature-based issues. There's lotus.domino.Document, our legacy representation of a Domino document handle. Then, there's jakarta.nosql.document.Document: this oddly-named interface actually represents a single key-value pair within the conceptual document - roughly, this corresponds to lotus.domino.Item. Finally, there's jakarta.nosql.document.DocumentEntity, which is the higher-level representation of a conceptual document on the JNoSQL side, and this contains many jakarta.nosql.document.Documents. This all works out in practice, but it's important to know about when you look into the implementation code.

The first couple methods in this utility class handle converting query results of different types: QRP result JSON, QRP result views, and generic DocumentCollections. Strictly speaking, I could remove the first one now that it's unused, but there's a non-zero chance that I'll return to it if it ends up being efficient down the line.

Each of those methods will eventually call toDocuments, which converts a lotus.domino.Document object to an equivalent List of JNoSQL Documents (i.e. the individual key-value pairs). Due to the way JNoSQL works, this method has no way to know which fields the higher level will actually want, so it attempts to convert all items in the document to more-common Java types. There's much more work to do here, some of it based on just needing to add other types (like improving rich text handling) and some of it based on needing a better Notes API (like proper conversion from Notes times to java.time).

In the other direction, there's the method that converts from a JNoSQL DocumentEntity to a Domino document, which is used by the insert and update methods. This takes the known common incoming types and converts them to Domino item values. Like the earlier methods, this could use some work in translating types, but that's also something that a better Notes API could handle for me.
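
Boiled down, and skipping the type-coercion work just described, the two directions look roughly like this - a simplified standalone sketch, not the project's actual EntityConverter:

import java.util.ArrayList;
import java.util.List;

import jakarta.nosql.document.Document;
import jakarta.nosql.document.DocumentEntity;

import lotus.domino.Item;
import lotus.domino.NotesException;

public class EntityConverterSketch {
    // Domino to JNoSQL: each item becomes a key-value Document pair
    public static List<Document> toDocuments(lotus.domino.Document domDoc) throws NotesException {
        List<Document> result = new ArrayList<>();
        for (Object itemObj : domDoc.getItems()) {
            Item item = (Item) itemObj;
            // the real converter coerces item values to more-common Java types here
            result.add(Document.of(item.getName(), item.getValues()));
        }
        return result;
    }

    // JNoSQL to Domino: write each key-value pair back as an item
    public static void applyEntity(DocumentEntity entity, lotus.domino.Document domDoc) throws NotesException {
        for (Document doc : entity.getDocuments()) {
            domDoc.replaceItemValue(doc.getName(), doc.get());
        }
    }
}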

Query Conversion

The QueryConverter class has a slightly-simpler job: taking JNoSQL's concept of a query and translating it to a document selection.

The jakarta.nosql.document.DocumentQuery type does a bit of double duty: it's used both for arbitrary queries (Foo='Bar'-type stuff) and for selecting documents by UNID. The select method covers that, producing a QueryConverterResult object to ferry the important information back to DefaultDominoDocumentCollectionManager.

The core work of QueryConverter is in getCondition, where it performs an AST-to-DQL conversion. JNoSQL has a couple of mechanisms for querying entities: explicit Java-based queries, implicit queries based on repository definitions, or a SQL-like query language. Regardless of what the higher level does to query, though, it comes to the Communication driver as this tree of objects (technically, the driver can handle the last one specially, but by default it arrives parsed).

Fortunately, this sort of work is a common idiom. You start with the top node of the tree, handle it based on its type and, as needed, recurse down into the next node. So if, for example, the top node is an EQUALS, all this converter needs to do is return the DQL representation of "this field equals that value", and so forth for other comparison operators. If it encounters AND, OR, or NOT, then the job changes to making a composite query of that operator plus the results of converting whatever the operator is applied to - which is where the recursion back into the same method comes in.
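
In simplified form, that recursion looks something like the following - a sketch handling only a few operators, with a naive placeholder for value formatting:

import java.util.List;
import java.util.stream.Collectors;

import jakarta.nosql.document.Document;
import jakarta.nosql.document.DocumentCondition;

public class QueryConverterSketch {
    @SuppressWarnings("unchecked")
    public static String getCondition(DocumentCondition condition) {
        Document doc = condition.getDocument();
        switch (condition.getCondition()) {
        case EQUALS:
            return doc.getName() + " = " + formatValue(doc.get());
        case GREATER_THAN:
            return doc.getName() + " > " + formatValue(doc.get());
        case AND:
            // composite operators carry their child conditions as the document value
            return ((List<DocumentCondition>) doc.get()).stream()
                .map(QueryConverterSketch::getCondition)
                .collect(Collectors.joining(" and ", "(", ")"));
        case NOT:
            return "not (" + getCondition((DocumentCondition) doc.get()) + ")";
        default:
            throw new UnsupportedOperationException("Unhandled condition type: " + condition.getCondition());
        }
    }

    private static String formatValue(Object value) {
        // the real converter also handles dates and multi-value types
        return value instanceof Number ? value.toString() : "'" + value + "'";
    }
}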

Future Work

The main immediate work to do here is enhancing the data conversion: handling more outgoing Domino item types and incoming Java object types. A good deal of this can be done as-is, but doing some other parts reliably will be best done by changing out the specific Notes API in use. I used lotus.domino because it's present already, but it's a placeholder for sure.

There are also a bunch of efficiency tweaks I can make: more lazy loading in conversion, optimizing data fetching for specific queries, and logging DQL explain results for developers.

Beyond that, I'll have to consider if it's worth adding extensions to the mapping side. As I mentioned, the Darwino driver has some extensions for its JSQL language and similar concepts, and it's possible that it'd be worth adding similar things for Domino, in particular direct FT searching. That said, DQL does a pretty good job being the all-consuming target, and so translating JNoSQL queries to DQL may suffice to extract what performance Domino can provide.

So we'll see. A lot of this will be based on what I need when I actually put this into real use, since right now it's partly hypothetical. In any event, I'm certainly looking forward to finding places where I can use this instead of explicitly coding to Notes API objects.

DQL, QueryResultsProcessor, and JNoSQL

Thu Jan 13 14:32:04 EST 2022

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

As I've been adding new technologies to and talking about the XPages Jakarta EE project, I've kind of danced around a major missing layer: data access.

Technically, the toolchain has provided Domino data access all along, by way of having the same contextual sessions and database as XPages. You could use those to access whatever data you want, and they'd do the job as well as they ever do (c'est-à-dire: poorly). Beyond that, though, there's no equivalent to the (questionable) xp:dominoDocument and xp:dominoView components of XPages, and definitely no pre-provided object-to-database mapper.

The answer is pretty clear: Jakarta NoSQL. This API isn't quite finalized, but it's been usable for a long time: I wrote a Darwino driver for it years ago, and that driver is powering this very blog. I also wrote a Domino driver years ago, but it was very much a proof-of-concept: since it pre-dated DQL, it used formula queries for everything, and thus would scale extremely poorly. It was a nice exercise, but not anything useful.

For XPages JEE, I decided to take another swing at that. The implementation of the driver will warrant a tale on its own, but for now I'd like to focus on the DQL side of it.

DQL

I talked a bit about DQL when it came out, back when it wasn't well-understood, but since then I haven't actually had much occasion to put it to use. For the times I've needed complex Domino data access since then, it's been built on pre-existing operations on top of views. While adding DQL has been something I've considered from time to time, it'd never hit the threshold of being worth it: our needs involve extracting tons of data to bulk send it to service clients, and so views have remained necessary. While we could in theory alter our querying and filtering to select documents and project those selections onto the views, it'd have been a lot of work for partial benefits.

DQL itself has gotten more capable in the intervening years, and just on its own it's a perfect match for JNoSQL needs. Since all JNoSQL operations are sent to the driver as either individual doc IDs or an arbitrary query, something like DQL is required, and it's up to the task now.

It's half of the story, though. What DQL (by way of the DominoQuery object) gives you is a DocumentCollection, effectively just the list of note IDs. You can, as I'd hypothesized about doing, apply that against a view to extract data, but that still requires you to separate out the act of view management from the act of doing queries. If you want to have data sorted or categorized, you would still have to create an equivalent or superset view.

QueryResultsProcessor

So that's where the addition of QueryResultsProcessor comes in. QRP is technically distinct from DQL - you can use it to process arbitrary document collections, for example - but they're certainly a conceptual match. If you're comparing it to a SQL statement, DQL is the "FROM foo" and "WHERE x" parts, while QRP is the "SELECT a,b,c", "ORDER BY y", and "GROUP BY z" parts.

The general way it works is that you:

  1. Create a QueryResultsProcessor from a Database instance (as opposed to Session - this distinction becomes important later)
  2. Feed it sources of documents: DQL queries or arbitrary document collections
  3. Add any desired columns to extract data. These are Domino-style columns, and you can also specify sorting and categorization here, as you would when building a view
  4. Since data may come from multiple databases, you can also customize column formulas to account for that
  5. Execute the process and retrieve the results, currently either as JSON or as a "view". More on these "views" later
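
In code, that flow looks something like this condensed sketch (the single-argument addColumn here is the simplest of the overloads, which also cover titles, formulas, sorting, and categorization; step 4 is skipped):

Database database = ExtLibUtil.getCurrentDatabase();

// Step 1: create the processor from a Database
QueryResultsProcessor qrp = database.createQueryResultsProcessor();

// Step 2: feed it a DQL query as a document source
DominoQuery dominoQuery = database.createDominoQuery();
qrp.addDominoQuery(dominoQuery, "Form = 'Contact'", null);

// Step 3: declare columns to extract from the matched documents
qrp.addColumn("LastName");
qrp.addColumn("FirstName");

// Step 5: execute and materialize the results as a "view"
View result = qrp.executeToView("(Contacts By Name)");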

When I first heard about QRPs, I had a concern with step 2: I'd thought that you could only pass a built DocumentCollection to the processor, which would significantly limit the room for Domino to add behind-the-scenes efficiencies. However, my fears were unfounded; the ability to pass in a DominoQuery object and the DQL directly and let the QRP execute it means that HCL is free to do whatever they want to make it fast. That's the sort of thing that makes SQL queries potentially so stupidly efficient: because you're just asking the database for results, the DB is free to optimize the heck out of them. This pairing potentially brings that to Domino, and that's what makes it important.

JSON Output

The executeToJson method is pretty straightforward, if a somewhat peculiar choice. It has no parameters, and returns the results of your query as reasonably-formatted JSON. It's unfortunate that this returns a String and not an InputStream, which adds some inherent inefficiency to dealing with it on the Java side, but that will only really hurt with very-large data sets.

Along with the requested fields, formula results, and aggregations, the document entries include the note ID (oddly in "formula" format) and the database file path, so you can use that to open up the document.

Anyway, this is a workmanlike format and can be potentially just sent to REST clients directly, though it'd be good form to at least strip out the DB paths and note IDs.

View Output

Now here's the spicy one. The executeToView method stores the results in a very-weird type of view. This has a few big advantages over the JSON mechanism:

  • The view persists in the database, up to a number of hours you specify programmatically. This allows you to essentially offload some extra caching to the database, which is ideal
  • You can use ViewNavigator and other efficient mechanisms to work with the view data, meaning you don't have the "here's a big result blob in memory" problem you have with the JSON format
  • Since it's a "view", anything that works with view data can work with it. This is presumably the reason it's implemented this way at all, rather than as some new kind of entity - building on existing mechanisms
  • The "anything that works with view data" doesn't just mean things like ViewNavigator: it also means the Notes client and view data sources

Now, these "views" have a lot of weird characteristics. It's useful to see the specifics listed out like that, but they all derive from a core lesson to ingest:

This is not a stored query; it is a cached result.

These views are not auto-updated, nor is there any mechanism I know of to refresh them outside of deleting and re-creating them. They're equivalent in concept to if you took the JSON from the first type and stored it in a document somewhere: it'll only change if you change it. The only way Domino will act on them is to delete them when they're expired.

Anyway, the data in these views is the same data that would go to the JSON format, just stored as Notes collation data instead of a string. It contains columns, potentially categorized and aggregated, for the data you requested, as well as hidden "$DBPath" and "$NoteID" columns at the end. The entry-level note ID (the one from entry.getNoteID()) is arbitrary and intended to not represent an actual document - since, after all, the documents may come from distinct databases. I've found the value of entry.getUniversalID() to be the doc's original UNID, but this is best treated as not a guarantee and so should not be used.

Designer Rights

So here's a fun catch: though any Reader can perform a query, you need Designer access to create a view. This seemed like a problem to me at first, since I'd want the generated results to be from a specific user for reader-field purposes, but it's not really an impediment, at least when you're in an environment like XPages.

Above, I mentioned that the fact that you create a QueryResultsProcessor object from a Database is important, and this is one of the reasons why. Though traditionally you wouldn't mix descendants of session and sessionAsSigner together, there's no actual rule against it. You can re-open your context database with sessionAsSigner, make a QRP object from that, and then feed it a DominoQuery object created from the non-signer database:

// The user's context objects: the current database plus a signer session
Database database = ExtLibUtil.getCurrentDatabase();
Session sessionAsSigner = ExtLibUtil.getCurrentSessionAsSigner();
// Re-open the same database with the signer's rights
Database databaseAsSigner = sessionAsSigner.getDatabase(database.getServer(), database.getFilePath());

// The query runs with the user's rights, while the QRP - and thus the
// view creation - uses the signer's rights
DominoQuery dominoQuery = database.createDominoQuery();
QueryResultsProcessor qrp = databaseAsSigner.createQueryResultsProcessor();
qrp.addDominoQuery(dominoQuery, "some DQL", null);
View result = qrp.executeToView("some view name");

Because the QueryResultsProcessor uses the provided DominoQuery object as the "engine" for the DQL search, the query will use the normal user's rights while the processing will use the signer rights.

Naming and Expiring Results

As seen there, you have to name your views. While you could in theory use this mechanism to kind of manage your own views for general use and name them things like "People By First Name" or whatever, you'll likely want to work with them programmatically and name them based on your query input.

In the case of this JNoSQL driver, I compute a predictable-from-input hash-based name from the name of the creating class, the current user, and the sort/skip/limit attributes of the incoming query. You could really do whatever you want here, but having at least some sort of hash like this is likely the way to go.
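
For illustration, computing such a name can be as simple as hashing the distinguishing attributes together. This is a minimal sketch - the inputs and the "(QRP-...)" naming convention are arbitrary choices, not the driver's actual scheme:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ResultViewNames {
    public static String resultViewName(String creatingClass, String userName, String sortKey, long skip, long limit) {
        String key = creatingClass + '-' + userName + '-' + sortKey + '-' + skip + '-' + limit;
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] hash = digest.digest(key.getBytes(StandardCharsets.UTF_8));
            // wrap in parentheses to hide the view from default UI listings
            StringBuilder name = new StringBuilder("(QRP-");
            for (byte b : hash) {
                name.append(String.format("%02x", b));
            }
            return name.append(')').toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}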

Now there's the matter of detecting when you need to refresh the data. In some applications, it may suffice to go with the "expire in X hours" parameter when creating the view, though that's extremely coarse and only really useful on its own for specific needs (like a daily report).

The tack I took here was to try to do an efficient check of view creation time compared to the last data modification time from the source database. The Database class only has a "last modified" time in general, and I can't very well use that when my results caches are being added as design elements: a second distinct query would "invalidate" the first even when the data hasn't changed. There might be a proper way to get this in lotus.domino, but the NAPI has a wrapper for NSFDbModifiedTimeByName: NotesSession.getLastDataModificationDateByName. That lets you get the last data-mod time in epoch seconds, and you can then compare that to the creation time of the view.
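
The comparison itself then amounts to two timestamps. A rough sketch follows, with the caveat that the exact shape of that NAPI method is my recollection and worth verifying against the class itself:

// given the source database's file path and the previously-created results view;
// NotesSession here is the NAPI class, not lotus.domino.Session
NotesSession notesSession = new NotesSession();
// last data modification in epoch seconds (assumed return shape)
long lastDataMod = notesSession.getLastDataModificationDateByName(dbFilePath);
long viewCreated = view.getCreated().toJavaDate().getTime() / 1000;
if (lastDataMod > viewCreated) {
    // the cached results are stale: remove the view and re-run the QRP
    view.remove();
}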

While it's unfortunate that you have to remove the view outright to refresh it instead of doing a delta update like NIF would do, I get it, and it's generally fast enough. Plus, there's enough hand-wavy stuff going on with feeding the DQL query to the QRP that Domino would be free to secretly retain results for a bit and do deltas internally if it so desires.

Storing Result Views

The other interesting aspect of creating a QRP object from a Database and not a Session is that that DB serves as the destination to house the views. While in a single-DB environment it would seem very natural to just store the views in the same place as the data, there's no particular requirement to do so. Moreover, if you're querying multiple databases, there's no single natural home for the results anyway, so you'll be forced to conceptualize this separation regardless.

Now, while personally I'm fine with a bunch of temporary machine-named views hanging out in the NSF (especially since the names are wrapped in parentheses to hide them from default UI listings), I can see why it could be annoying. For one, these views sync to an ODP in Designer - something I put in as an Aha idea to change, but which might actually rightly be called a bug. Beyond that, while these views won't meaningfully contribute to NIF's workload (since NIF will skip them), they're unsightly and would get in the way if you're trying to tend to the design of your NSF like a garden.

So you might want to have a side database to store these views, and this could also be a way to get around the "needing Designer access" requirement if you're in an environment where you don't have a signer session. In the Notes client, you could store the results in a local NSF; on the server, you could make a "scratch" NSF somewhere to house them, and then add readers to the view design note when doing so to prevent leaking data across users and apps.

Conclusion

Anyway, this is all pretty neat. Reusing view design elements to just be static containers for collation data is weird, but I get the practical reasons why it makes sense. Importantly, this pairing solves some very-real problems with querying and extracting data from Domino. For example, if you do all of your querying via this route, you can use DQL's "EXPLAIN" capability to actually get some insight into database performance for once. You could imagine having an optional mode where you log the EXPLAIN results and execution times for all queries your app is performing, and then manually create "index" views to fix hotspots. It's quite satisfying to finally get that kind of ability in Domino. It'd be neat if that kind of introspection also came to QueryResultsProcessor.

I'm looking forward to expanding the JNoSQL driver further and then either using that directly in client work or adapting the code I use there. I'll definitely add such a logging capability, which will go a long way toward putting some numbers to the "feels slow" problem that can crop up. Beyond that, barring any showstoppers, I'm thoroughly excited by the prospect of moving away from fetching explicitly-named views in code and switching to an idiom of querying the pool of documents and letting the database make it work for me.

XPages Jakarta EE Support 2.2.0

Tue Jan 11 16:19:16 EST 2022

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

I've just released version 2.2.0 of the XPages Jakarta EE project, and this contains some fun additions.

Part of what makes this project satisfying to work on is the fact that it has a clear maximum scope: there are only so many Jakarta EE and MicroProfile specifications. Moreover, their delineated nature makes for satisfying progress "chunks": setting aside any later tweaks and improvements, each new spec is slotted into place in a gratifying way.

For this release, I focused on adding a number of capabilities from MicroProfile. MicroProfile is an interesting beast. Its initial and overall goal is to create a toolkit geared towards writing microservices, and it does this by taking a subset of Jakarta EE specs and then adding on its own new capabilities. Now, personally, I don't give two hoots about microservices, but the technologies added to MicroProfile aren't really specific to those needs, and most of them really just fill in gaps in the JEE lineup in ways that are just as useful to bloated monoliths as they are to microservices.

From the list, I added in five new ones, in addition to the already-present OpenAPI generator. Each one is extremely powerful and deserves a bit of discussion, I feel.

Config

The MicroProfile Config spec is a way to externalize your app's configuration and then inject configuration values via CDI. For example:

@ApplicationScoped
public class ConfigExample {
    @Inject
    @ConfigProperty(name="java.version")
    private String javaVersion;
    
    @Inject
    @ConfigProperty(name="xsp.library.depends")
    private String xspDepends;
    
    /* use the above */
}

When this bean is used, the javaVersion value will be populated with the java.version system property and xspDepends will get the list of libraries used by the app from Xsp Properties. The latter also shows the pluggable nature of the spec: as part of adding it to the project, I added a custom configuration source to read from the Xsp Properties of the app, as well as one to read from notes.ini. One could in theory (and I absolutely will) write a custom source to read configuration from a view or other Notes-type source.
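
For a sense of what such a source could look like, here's a hypothetical notes.ini-backed ConfigSource - simplified, with registration happening via a META-INF/services/org.eclipse.microprofile.config.spi.ConfigSource file, and with none of the caching a real one would want:

import java.util.Collections;
import java.util.Map;
import java.util.Set;

import org.eclipse.microprofile.config.spi.ConfigSource;

import com.ibm.domino.xsp.module.nsf.NotesContext;

import lotus.domino.NotesException;
import lotus.domino.Session;

public class NotesIniConfigSource implements ConfigSource {
    @Override
    public String getName() {
        return "notes.ini";
    }

    @Override
    public Map<String, String> getProperties() {
        // notes.ini properties aren't enumerable, so only direct lookups work
        return Collections.emptyMap();
    }

    @Override
    public Set<String> getPropertyNames() {
        return Collections.emptySet();
    }

    @Override
    public String getValue(String propertyName) {
        try {
            Session session = NotesContext.getCurrent().getCurrentSession();
            String value = session.getEnvironmentString(propertyName, true);
            return value == null || value.isEmpty() ? null : value;
        } catch (NotesException e) {
            return null;
        }
    }
}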

Rest Client

One of the ways that Domino has long been deficient has been in accessing remote REST services. There's some stuff in the ExtLib, I think, and then there are HTTP primitives in LotusScript, but they don't really do much work for you. Java on its own has URLConnection and friends, and that works, but it's basically as low level as LotusScript.

What the Rest Client spec does is build on familiar Jakarta REST annotations (@Path, @GET, etc.) to allow you to declare how your service works and then access it like a normal Java object. For example:

@ApplicationScoped
public class RestClientExample {
    public static class JsonExampleObject {
        private String foo;
        
        public String getFoo() {
            return foo;
        }
        public void setFoo(String foo) {
            this.foo = foo;
        }
    }
    
    @Path("someservice")
    public interface JsonExampleService {
        @GET
        @Produces(MediaType.APPLICATION_JSON)
        JsonExampleObject get(@QueryParam("foo") String foo);
    }
    
    public Object get() {
        URI serviceUri = URI.create("http://example.com/api/v1/");
        JsonExampleService service = RestClientBuilder.newBuilder()
            .baseUri(serviceUri)
            .build(JsonExampleService.class);
        JsonExampleObject responseObj = service.get("some value");
        Map<String, Object> result = new LinkedHashMap<>();
        result.put("called", serviceUri);
        result.put("response", responseObj);
        return result;
    }
}

Here, I've defined an imaginary remote resource available at a URL like "http://example.com/api/v1/someservice?foo=some%20value" and created an interface with REST annotations to define its expected behavior. The MP Rest Client takes it from there: it converts provided arguments to the expected data types (query parameters, body as JSON/XML, etc.), makes the HTTP call, parses the response into the expected type, and returns it to you, while you get to use it like any old Java object.

It's extremely convenient.

Fault Tolerance

The Fault Tolerance spec allows you to add annotations to methods to handle and describe failure situations. That's pretty vague, but an example may help:

@ApplicationScoped
public class FaultToleranceBean {
    @Retry(maxRetries = 2)
    @Fallback(fallbackMethod = "getFailingFallback")
    public String getFailing() {
        throw new RuntimeException("this is expected to fail");
    }
    
    private String getFailingFallback() {
        return "I am the fallback response.";
    }
    
    @Timeout(value=5, unit=ChronoUnit.MILLIS)
    public String getTimeout() throws InterruptedException {
        TimeUnit.MILLISECONDS.sleep(10);
        return "I should have stopped.";
    }
    
    @CircuitBreaker(delay=60000, requestVolumeThreshold=2)
    public String getCircuitBreaker() {
        throw new RuntimeException("I am a circuit-breaking failure - I should stop after two attempts");
    }
}

There are three capabilities in use here:

  • @Retry and @Fallback allow you to specify that a method that throws an exception should be automatically re-tried X number of times and then, if it still fails, the caller should instead call a different method. This could make sense when calling a remote service that may be intermittently unavailable.
  • @Timeout allows you to specify a maximum amount of time that a method should be allowed to run. If execution exceeds that amount, then the caller receives a TimeoutException. This could make sense when performing a task that normally takes a short amount of time, but has a known failure state where it stalls - again, calling a remote service is a natural case for this.
  • @CircuitBreaker allows you to put a cap on the number of times per X milliseconds that a method is called when it fails. This could be useful for a situation where a method might fail for a user-generated reason (say, a user attempted to modify a document they can't edit) but also might fail for a systemic reason (say, a DB is corrupt) and where repeated attempts to perform the task might be damaging or otherwise very undesirable.

The cool thing about these annotations is the way they make use of CDI's capabilities. If you @Inject this bean into another class, you'll simply call bean.getFailing(), etc., and the stack will handle actually enforcing the retry and fallback behavior for you. You don't have to write any code to handle these checks beyond the annotations.
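
For example, with a hypothetical consuming bean:

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class FaultToleranceConsumer {
    @Inject
    private FaultToleranceBean bean;

    public String fetch() {
        // after two failed retries, the interceptor calls the fallback method,
        // so this returns "I am the fallback response." rather than throwing
        return bean.getFailing();
    }
}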

Metrics

The Metrics API allows you to annotate your objects to tell the runtime to track statistics about the call, such as invocation count and execution time. For example:

@GET
@SimplyTimed
public Response hello() {
    /* Perform the work */
}

Once the method is marked as @SimplyTimed, you can retrieve statistics at the /xsp/app/metrics endpoint in your NSF, which will get you a bunch of information, including lines like this:

# TYPE application_rest_Sample_hello_total counter
application_rest_Sample_hello_total 2.0
# TYPE application_rest_Sample_hello_elapsedTime_seconds gauge
application_rest_Sample_hello_elapsedTime_seconds 0.0025678

That's OpenMetrics format, which means you could consume this data in visualizer tools readily.

Health

And speaking of data for visualizers, that brings us to the last MP spec I added in this version: Health. This API allows you to write classes that will be used to query the health of your application: whether individual components are up or down, whether the app has started and is ready to receive requests, and any custom attributes you want to define. Now, admittedly, an app on Domino will usually either be "100% up" or "catastrophically down", so making use of this spec will take a little finesse. Still, it's not too hard to envision using this to emit some dashboard-type information. For example:

@ApplicationScoped
@Liveness
public class PassingHealthCheck implements HealthCheck {
    @Override
    public HealthCheckResponse call() {
        HealthCheckResponseBuilder response = HealthCheckResponse.named("I am the liveliness check");
        try {
            Database database = NotesContext.getCurrent().getCurrentDatabase();
            NoteCollection notes = database.createNoteCollection(true);
            notes.buildCollection();
            return response
                .status(true)
                .withData("noteCount", notes.getCount())
                .build();
        } catch(NotesException e) {
            return response
                .status(false)
                .withData("exception", e.text)
                .build();
        }
    }
}

While "count of notes in an NSF" isn't usually too useful, you can imagine replacing that with something more app-specific: open support tickets, pending vacation requests, or the like. You could also combine this with the Rest Client spec to make a coordinating NSF with no business logic that makes calls to your other, more-likely-to-break apps to check their health and report it here.

Once you write these checks, the runtime will automatically pick up on them and make them available at /xsp/app/health and some sub-paths:

{
    "status": "DOWN",
    "checks": [
        {
            "name": "I am the liveliness check",
            "status": "UP",
            "data": {
                "noteCount": 63
            }
        },
        {
            "name": "I am a failing readiness check",
            "status": "DOWN"
        },
        {
            "name": "started up fine",
            "status": "UP"
        }
    ]
}

Other Improvements

Beyond those major additions, I made some improvements to clean some aspects up and refine some details (such as making REST services emit stack traces as JSON in the response instead of printing to the console).

Feature-wise, the main addition of note is support for the @RolesAllowed annotation on REST services, which allows you to restrict access by Notes-ACL-type name:

@GET
@RolesAllowed({ "*/O=SomeOrg", "LocalDomainAdmins", "[Admin]" })
public Object get() {
    // ...
}

When a user doesn't match one of those names-list entries, they're given a 401 response. You can also use the pseudo-name "login" to require that the user be at least logged in and not Anonymous.

What's Next?

Well, I'm not sure. All of the above have immediate uses in my work, so I plan to get rid of some of our old workarounds for this sort of thing and bring in the new abilities.

Beyond that, there are two shipped MicroProfile specs remaining: OpenTracing (which I admittedly don't know much about, but which may be useful) and JWT Propagation (which would only make sense when paired with a whole JWT setup).

There's also MP GraphQL, which seems to be like MVC in Jakarta EE, where it's a solid spec but not part of the official standard yet. I may take a swing at it just because it may be easy to add, though for my client needs we already have a more-dynamic GraphQL implementation.

Back on the Jakarta side, the spec list contains quite a few that aren't in there, though a lot of those are either essentially not applicable to Domino (like Authentication or WebSocket) or are primarily of interest to legacy applications (like Enterprise Beans or XML services).

I'd also like to do some breaking reorganization. Most of these specs are added as individual XPages libraries, but that's gotten really unwieldy. Moreover, I'm not sure what the situation would be where you'd want to include, say, REST but not CDI. I'll probably look into making it a single "make my dev experience good" library to select. That'll take some work, though, since there's a lot of OSGi-dependency stuff to balance there.

Beyond that, it'll be refinements, bug fixes, and improvements for use in OSGi-based apps. I have a bunch of things tracked and that'll give me plenty to do as I have time.

Migrating a Large XPages App to Jakarta EE 9

Thu Jan 06 19:44:57 EST 2022

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

Last month, I moved my XPages Jakarta EE project to JEE 9, which involved the large hurdle of switching package names from javax.* to jakarta.*. That was all well and good for the project and opened the door to further improvements, but it's one thing to do it in a support library and another to move an actual large project over.

So I set my sights on my workhorse client project, the one with the sprawling OSGi bundle set and complicated XPages project, which I have running regularly on both Domino and Open Liberty. Over the last few days, I did the porting work and came out successful, and I think it ended up being another tale worth telling.

There are two main topics when it comes to this project: why and how.

Why

Now, why are we gonna do this? I mean, the app is running fine as it is, and the immediate goal of the switch is to keep it functionally the same. Why is it worth going through this hassle for an active project?

There are a few reasons.

The first and biggest is that the jakarta.* switch isn't going to go backwards. There is never going to be a feature update for any of the javax.* versions of the JEE specs, and so staying on that version is stagnation. While we can do what we want to do today, that won't hold true tomorrow unless we make the move. Since that's inevitable, it meant that every new line of javax.* code is technical debt, and the sooner you can stop creating that, the better.

The second reason is that, while the Jakarta EE spec move from 8 to 9 retained the same functionality, we actually gained technical improvements in the switch.

One technical gain was very immediate: the version of RESTEasy I had put into the XPages JEE project previously was a couple major versions old now, and this let me bump it up to the latest.

Another technical gain had to do with the nightmare that is dealing with OSGi dependencies on Domino. Due to the lack of proper versioning of standard specs in the XPages stack, the increasingly-corrupt classpath, and the bundling of specs with utility libraries, I've always had to lose a lot of time dealing with manually tweaking OSGi imports and dependencies. While this move doesn't eliminate all of it, it removes a lot. Terror packages like javax.mail and javax.activation can now be left behind in favor of jakarta.mail and jakarta.activation without worry of conflicting imports from the XPages libraries and the ndext classpath directory. The move to JEE 9 resulted in a significant reduction in such code and configuration.

And finally, it's going to help bring in new features we haven't introduced yet but will benefit from. For example, I have my eye on the MicroProfile REST client, which makes consuming REST services in a clear and type-safe way a dream. While it's possible that I'd be able to add that in earlier versions, the switch to jakarta.* will remove huge tasks from my plate entirely.

How

So now that I'd convinced myself that it's a good idea, the only remaining problem was actually doing it. Fortunately, I already solved most of it in the XPages JEE project itself. Moreover, the way I'd solved things there allowed me to remove extra dependencies that kind of "double-solved" the problem in this specific app, things like making sure that javax.inject was compatible between that project and the other third-party dependencies we use.

One of the big things that was different between the XPages JEE project on its own and this app was the way this runs across multiple platforms.

Historically, the fact that Liberty was running Servlet 4 and Domino was running Servlet 2.4/2.5 didn't come much into play. The newer versions of the javax.servlet classes were entirely compatible with the old ones: XPages could consume a Servlet 4 HttpServletRequest without issue and would just ignore the new methods added. Now, though, I had to do some shimming in two directions.

Domino, since I don't control the lowest layers, would remain a Servlet 2.5-ish container natively, while for Liberty I would move to a true Servlet 5 container. Since both the XPages markup and app code must remain the same, that meant a double emulation setup:

Diagram of the Domino and Liberty Stacks

In these diagrams, "JEE 9 App" represents the app's use of Jakarta standards other than XPages: CDI, JAX-RS, JSON-B, and so forth.

The first part of this work took place in the XPages Jakarta EE project. There, I created wrapper versions of all pertinent javax.servlet/jakarta.servlet classes going both from "old to new" and "new to old". Most of these classes are just direct delegations, but some parts involve either throwing exceptions for trying to use new features on an old stack, emulating new behavior on top of the old, or quietly not supporting some capabilities. These classes get me most of the way. When I need to move in one direction or the other, I call the appropriate method from ServletUtil and it takes care of the differences. This project also handled a lot of fiddly details to do with things like the JavaMail to Jakarta Mail switch, so I could just bring that in too and not worry about it.
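
Most of those wrappers are mechanical. For a flavor of the work, a one-way conversion of something simple like a cookie looks about like this - a standalone sketch rather than the project's actual code, and note that attributes added to javax.servlet after 2.5 (like httpOnly) need version-aware handling:

public static jakarta.servlet.http.Cookie oldToNew(javax.servlet.http.Cookie cookie) {
    jakarta.servlet.http.Cookie result = new jakarta.servlet.http.Cookie(cookie.getName(), cookie.getValue());
    // domain and path may be unset, and the setters don't accept null
    if (cookie.getDomain() != null) {
        result.setDomain(cookie.getDomain());
    }
    if (cookie.getPath() != null) {
        result.setPath(cookie.getPath());
    }
    result.setMaxAge(cookie.getMaxAge());
    result.setSecure(cookie.getSecure());
    return result;
}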

That handled some distinctions. The next big one was to use this to make sure XPages can survive in a Servlet 5 world. The good news here is that it was simpler than I thought it would be. XPages only has a few actual entrypoints - there's a Servlet to handle global resources (the URLs involving /xsp/.ibm* and the like), another to handle actual XPages *.xsp requests, and maybe one or two others I'm not thinking of. As long as you can handle those URLs and send a legacy javax.servlet object to the closed-source code, the stack will take it from there: things like externalContext.getRequest() will return the wrapped object you passed in in the first place, and the stack won't try to do any weird magic to fetch the request object from the container or anything.

Previously, I had Servlets that extended the stock ones like DesignerFacesServlet directly, but this had to change. Fortunately, it's a simple matter of delegation. Instead of subclassing the original ones, now I create an instance internally and pass along incoming requests to that, appropriately dumbing down the Servlet 5 objects to 2.5-level ones. I was able to do this with fewer functional changes than one might think and put it into new versions of the XPages Runtime project.
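
In outline, one of those delegating Servlets looks something like this sketch - the ServletUtil method names are assumptions standing in for the real conversion calls, and init/destroy plumbing is omitted:

public class DelegatingXPagesServlet extends jakarta.servlet.http.HttpServlet {
    // the stock XPages servlet, held as an internal delegate instead of a superclass
    private final javax.servlet.Servlet delegate = new com.ibm.xsp.webapp.DesignerFacesServlet();

    @Override
    protected void service(jakarta.servlet.http.HttpServletRequest req, jakarta.servlet.http.HttpServletResponse resp)
            throws jakarta.servlet.ServletException, java.io.IOException {
        try {
            // "dumb down" the incoming Servlet 5 objects to 2.5-level javax ones
            delegate.service(ServletUtil.newToOld(req), ServletUtil.newToOld(resp));
        } catch (javax.servlet.ServletException e) {
            throw new jakarta.servlet.ServletException(e);
        }
    }
}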

Beyond those big-ticket items, there were a few cases where I had to take care to appropriately handle knowing when my code was going to receive a javax.servlet object or a jakarta.servlet one, but otherwise there wasn't much to do beyond updating dependencies and a find-and-replace on class imports. None of the XSP markup changed, very little of the in-NSF code changed, and the only large in-app code changes I made were to remove workarounds that were no longer needed.

Conclusion

All in all, I'd say this went better than I'd expected. There's naturally still room for trouble (it's not in production yet, for one), but overall I feel that this bore out my intent in making the move in the first place.

It's also an interesting case study in the way my XPages Jakarta EE and Open Liberty Runtime projects conceptually interact. They both approach the same ideal from different directions: write open-standards-based applications that make use of Domino data. With this move to JEE 9, the addition of new specs to the XPages JEE project, and the shedding of (some) old limitations, they're converging all the more. More code can be directly shared between the two app types, and the code that isn't shared unmodified can at least be written all the more similarly. JAX-RS is the in-common way to do REST services; CDI is the in-common way to do managed beans; OpenAPI annotations are an in-common way to document services. These are technologies that have wide support with multiple implementations and, crucially, are open standards. XPages was one step towards a non-proprietary stack, and this is several more.

JSP and MVC Support in the XPages JEE Project

Mon Dec 20 11:20:06 EST 2021

Tags: jakartaee java
  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

Over the weekend, I wrapped up the transition to jakarta.* for my XPages JEE Support project and uploaded it to OpenNTF.

With that in the bag, I decided to investigate adding some other things that I had been itching to get working for a while now: JSP and MVC.

JSP? Isn't That, Like, A Billion Years Old?

Okay, first: shut up.

Expanding on that point, it is indeed pretty old - arriving in 1999 - and its early form was pretty bad. It was designed as an answer to things like PHP and ASP and bore all those scars: it used actual Java syntax on the page to control output, looping, conditionals, and the like. It even had special directives to import Java classes for the page! All that stuff is still in there, too, which isn't great.

However, JSP used judiciously - focusing on JSTL tags for control/looping and EL references to CDI beans for data access - is a splendid little thing, and it has the advantage that it remains part of the JEE spec.

Domino flirted with JSP for a long time. It's what Garnet was all about and was part of how OpenNTF got off the ground. IBM did eventually ship the custom tags, and they ship with Domino to this day, sitting in the data/domino/java directory, gathering dust. Domino also inherited JSP from WebSphere as part of XPages... kind of. It has hooks for using JSP files in Expeditor-container webapps, but the implementation is conspicuously missing - present only in Notes, presumably for some sort of Social nonsense reason.

For better or for worse, none of that matters now anyway: it's all crusty and old and, critically, uses javax.*. I had to go a different route.

JSP Implementation

From what I gather, there's basically only one real open-source JSP implementation: Jasper, which is a part of Tomcat. Basically everyone just uses that, and that works well enough. There are various re-bundlings of it to remove the Tomcat dependencies, and I went with the GlassFish one, since it was pretty clean.

Diving into it, there were a few things that were potential and actual problems.

First, JSP files aren't evaluated directly. Instead, they're compiled into Servlet class implementations, either on the fly or ahead of time. This process is basically the same as how XPages work: the JSP is translated into a Java file, which is then compiled into a class, which is then reused by the runtime for subsequent requests. Jasper has a dependency on Eclipse JDT, which worried me: when I looked into this in the past, I found that JDT (at least how it was used for JSP) makes a lot of assumptions about working with the actual filesystem. I lucked out here, though: Jasper actually uses the JavaCompiler API, which is more flexible. The JDT dependency seems like either a vestige of an older version or a fallback option.

However, despite the fact that JavaCompiler can work purely in memory, Jasper does do a lot of filesystem-bound work when it comes to loading tag libraries, such as JSTL. I ended up having to deploy a bunch of stuff to the filesystem. Ideally, I'll find a better way around this.

Hooking It Up To Domino

Having a JSP interpreter is one thing, but having it respond to URLs like "http://example.com/foo.nsf/bar.jsp" is another, especially if that should also participate in the XPages class space of the NSF.

I originally considered an HttpService implementation that would accept incoming *.jsp URLs. This could work, but it would be less than ideal: the HttpService, while working in the XPages OSGi layer, wouldn't know about the internal layout of the NSF. I'd have to either reinvent it or wrangle my way over to the active NSFService (the one that runs XPages), find or load the NSF's module, and root around in there. Possible, but not ideal.

Fortunately, I lucked out tremendously: the NSFService class has an addHandledExtensions static method that I can just call to tell it that incoming ".jsp" requests should go to the XPages runtime. This looks like it was added for more Social-nonsense reasons, but I'm happy it's there regardless. Better still, when the runtime finds a URL it was told to handle, it queries IServletFactory implementations like those you can use in an NSF for custom servlets. I already had one in place for JAX-RS, so I made another one (refactored since that commit) for JSP.
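
For reference, the rough shape of such a factory is below - a simplified sketch with the construction of the actual Jasper servlet elided, and with the method signatures as I understand IServletFactory to be defined:

import javax.servlet.Servlet;
import javax.servlet.ServletException;

import com.ibm.designer.runtime.domino.adapter.ComponentModule;
import com.ibm.designer.runtime.domino.adapter.IServletFactory;
import com.ibm.designer.runtime.domino.adapter.ServletMatch;

public class JspServletFactory implements IServletFactory {
    private ComponentModule module;

    @Override
    public void init(ComponentModule module) {
        this.module = module;
    }

    @Override
    public ServletMatch getServletMatch(String contextPath, String path) throws ServletException {
        int jspIndex = path.indexOf(".jsp");
        if (jspIndex > -1) {
            // split "/foo.jsp/bar" into the servlet path and extra path info
            String servletPath = path.substring(0, jspIndex + 4);
            String pathInfo = path.substring(jspIndex + 4);
            return new ServletMatch(createJspServlet(module), servletPath, pathInfo);
        }
        return null;
    }

    private Servlet createJspServlet(ComponentModule module) throws ServletException {
        // construction and initialization of the Jasper servlet is elided in this sketch;
        // the real factory also handles .jspx and caches the servlet instance
        throw new ServletException("sketch only");
    }
}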

Once that was in place (plus some other fiddly details), I got to what I wanted: writing JSPs inside an NSF and having them share the XPages class space:

Screenshot of Designer and a browser showing an in-NSF JSP

Next Up: MVC

Once I had JSP in place (and after some troublesome fiddling with JSF), I decided to take a swing at adding my beloved MVC to the stack.

This had its own complications, this time for the inverse problem from JSP. While Jasper is a creature of the early 2000s and uses older, less-flexible Java APIs that I had to write around, MVC is the opposite. It's a pure child of the modern, CDI-based world and thus does everything through CDI and ServiceLoaders. However, though I've had CDI support in the project for a long time, actually tying together anything to do with CDI or ServiceLoaders in OSGi is eternally difficult, especially on Domino.

Service Loading

I had to wrangle with this for a while, but I eventually came up with a functional-but-odd workaround: I made use of my own custom ServiceParticipant extension capability that lets me have an object perform pre/post behavior around each JAX-RS request in order to futz with the ClassLoader. I had trouble where the NSF ClassLoader didn't find classes from the MVC implementation, though it should have, so I ended up overriding the ClassLoader to first look explicitly there. It's not pretty, but it works and at least it doesn't require filesystem stuff.

Servlets and Request Dispatchers

Another aspect of being a more-modern child than Jasper is that Krazo - the MVC implementation - makes ready use of Servlet capabilities that have been there for a while but which don't exist on Domino.

For example, Krazo uses a ServletContainerInitializer instance to do pre-research in the app to find classes that should get MVC behavior. Without this scan, MVC won't be applied. This is a Servlet 3.0 feature dating to 2009 and Domino doesn't support it - or any kind of annotation-based classpath scanning, for that matter.

Fortunately, I didn't really need to fully support this concept - I really just needed to make sure this ran whenever the JAX-RS support was being loaded for an NSF. So I made it possible to contribute these via an extension point and added my own scanning implementation to gather the applicable types. Essentially, a backport of this feature to apply in an NSF. With that in place, I was able to register the initializer and have it do its work.

My next hurdle was to do with the way Krazo delegates to JSPs. Specifically, it queries the ServletContext (essentially, the app container) for Servlet registrations that can handle the desired extensions (".jsp" and ".jspx" here) and routes to that using a RequestDispatcher. Well, Domino supports none of this. Trying to get a RequestDispatcher is hard-coded to throw an exception saying "Domino doesn't support this" and the bit about getting ServletRegistrations was new in 3.0. Originally, I stubbed these out, but I decided to give a swing at backporting this as well.

While an NSF doesn't have "Servlet registrations" as such, it does have a list of the aforementioned IServletFactory instances, so I decided to write my own. I wrote a getRequestDispatcher implementation that queries the current module's Servlet factories for a match and, when found, returns a basic implementation. Then, I wrote a custom subtype of IServletFactory to provide additional information and made use of that to emulate the Servlet 3+ behavior, at least well enough to let Krazo do what it needs.

Seeing It Together

Once I figured out all these hurdles, I got to what I wanted: I can make a JAX-RS service in an NSF that acts as an MVC controller:

Screenshot of Designer and a terminal showing an MVC controller in an NSF

Neat! There are still some rough edges to clean, but it's great to see in action.

Conclusion and Next Steps

So why is this good? Well, there's a certain amount of box-checking going on: the more JEE specs I can get going, the better.

But beyond that, this is helping to crystallize some of my thinking about what Domino (web) developers are even supposed to freaking do nowadays. This remains an extremely-vexing problem, but I know the answer isn't XPages as it exists now. Maybe the answer is to move XPages to a better container or maybe it's to add a better container to Domino (or both of those, I guess). This is another option, one that preserves the "just fire up Designer and edit some code" niceties of the XPages experience while gaining better, more modern capabilities. I could see writing an app with this, doing all my work in CDI beans and using JSP as the front end - pure open-source solutions with active developers - all inside the NSF. Is it the real best answer? I don't know. Maybe. It's something, though, and specifically something worth trying.

Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue

Tue Dec 14 16:41:59 EST 2021

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

I think it's been a little while since I talked about the XPages Jakarta EE Support project of mine. The goal of that is sort of the inverse of the XPages Runtime project: rather than bringing XPages to a proper modern app server, the JEE Support project brings a handful of current Jakarta EE specs to XPages. It started out a few years ago as a sort of proof-of-concept, but I've since been using it for client work to do things like use newer Jakarta REST (née JAX-RS), CDI, and newer EL in XPages and OSGi bundles.

The Specification Move

Originally, I targeted a set of specifications from Java/Jakarta EE 8. Some of these were new to Domino outright, while some (such as JAX-RS) existed in the XPages stack already but in very old forms. I implemented those and for a good while just used the project as a source of parts for client work, tweaking it here and there as needed.

However, the long-prophesied package-name switch from javax.* to jakarta.* came to fruition in Jakarta EE 9, released a bit over a year ago. In the intervening year, most implementations of the specs made the switch, and the versions I was using started to show their age (for example, I was using RESTEasy 3, which was already old when I adopted it, and it's up to 6 now). Beyond just the philosophical sadness of my project being behind, I started to grow specific needs to upgrade components: we switched to JSON-B a while ago, but then some new bug fixes in Yasson were coming only to post-jakarta.* builds.

The Initial Work

I first gave this a shot in August, initially planning to move only JSON-P and JSON-B over to the new namespace. However, I quickly hit the limits of that, since a lot of these specs are interdependent: JAX-RS uses JSON-P and JSON-B to emit JSON content, Yasson has some ties to CDI, and so forth. I realized that it was going to have to be all-or-nothing.

So I rolled up my sleeves and assessed the task ahead of me. At a basic level, there was the job of updating my dependencies, which immediately had some good aspects and bad aspects:

  • Previously, I was using a hodgepodge of spec packages like the JBoss bundling of JAX-RS in order to get something that would work and be license-friendly. Now that it was all over at Eclipse, I could switch to the nice, clean official versions and have no license worries.
  • I also used to have all sorts of OSGi rule overrides to account for Domino-specific oddities like ancient versions of various specs being supplied by the default classpath or other, conflicting bundles, all with no versioning. Once I was looking for e.g. jakarta.annotation instead of javax.annotation, I was no longer bound to that particular nightmare.
  • Not all of my dependencies were ready. When I first started, RESTEasy (my JAX-RS provider of choice) had not yet uploaded a JEE-9-compatible version. My main choices were to try using Eclipse Transformer, which would add a whole new layer to the task, or to switch to another provider.

Then there's the elephant in the room: the freaking Servlet API, which much of this depends on. Since the Servlet API is the job of the web container, I can't realistically upgrade it. Fortunately, that's only half true: I can't give it new capabilities (like Web Sockets), but I can wrap the old stuff with the new. And, like the other specs, the switch of the package name was a tremendous blessing, allowing me to deploy the official Servlet 5 API unchanged. Then, I did the tedious work of writing a slew of adapter classes, half wrapping a javax.servlet component and pretending it's jakarta.servlet and half going the other direction. Since the methods are either direct analogs, optional features, or can be emulated, this was actually much easier than I thought it would be. And there: Servlet 5 on Domino! Kind of!
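
To give a sense of those adapters' shape, here's a minimal sketch (the class name is hypothetical, and the real classes delegate every method of the large interface, only a few of which are shown):

// Sketch: present an old javax.servlet request as a jakarta.servlet one.
// Declared abstract here since the full interface has dozens of methods.
public abstract class JakartaServletRequestAdapter implements jakarta.servlet.http.HttpServletRequest {
    private final javax.servlet.http.HttpServletRequest delegate;

    public JakartaServletRequestAdapter(javax.servlet.http.HttpServletRequest delegate) {
        this.delegate = delegate;
    }

    // Most methods are direct one-line delegations like these
    @Override public String getMethod() { return delegate.getMethod(); }
    @Override public String getHeader(String name) { return delegate.getHeader(name); }
    @Override public String getRequestURI() { return delegate.getRequestURI(); }

    // ...and so on; methods with no old-API analog are emulated or stubbed
}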

The Showstopper

However, I soon hit what seemed to be a show-stopper: a LinkageError problem when using CDI that didn't show up previously. My search for the topic found only one hit: an issue in Open Liberty referencing almost exactly the same problem. My heart sank when I read that their fix was to upgrade the Equinox runtime - something that's outside my powers on Domino (probably).

So, disheartened, I set it aside for a couple months. I figured there was a small chance that Weld (the CDI implementation at the heart of the trouble) would put out an update that fixed it - after all, an older version worked.

Resuming Work

After setting it aside, it kept eating away at the back of my mind, and two things kept pushing me to go back to it:

  • I'll need to do it eventually. I (and my client projects) can't just be stuck at the old style forever.
  • I really didn't want to admit defeat and switch back to Gson for JSON processing.

So I went back to it. My initial hope - that a new version of Weld would magically fix the problem - had not come to fruition. Still, though, I wasn't sure that it was the exact same problem Liberty encountered. For one, my use of CDI studiously avoids actually telling it about OSGi, since I've had little luck making use of that with Domino's OSGi stack. That was enough cause to make me think I could work around it.

And work around it I did! The trouble turned out to be, unsurprisingly, a bit esoteric, but boiled down to the runtime re-registering proxy classes for the same core components. My guess is that, somewhere along the line, Weld changed some sort of internal cache in a way that would break when using a bunch of ephemeral per-NSF containers as I do. I implemented my own (since it's an intended extension point), added a bit of a cache, and was off to the races.

As a convenient blessing, RESTEasy released 6.0.0.Beta1 just days before I got back to it, a major release targeted at JEE 9. That meant that I could save a ton of work by not having to re-work everything for another JAX-RS implementation. I had been looking into Jersey, which I'm sure would have done the job, but it's fiddly work trying to put all these pieces together on Domino, and I was all the happier to not have to re-do it all.

JavaMail

But then I hit a new problem: the javax.mail API, now jakarta.mail. The first part of this is easy enough: bring in the new spec bundle and everything will point to it. Great - except I immediately hit a problem, and one I had been dreading dealing with. Though the spec changed package names, the implementation didn't. That brought me face-to-face again with an old nemesis of mine, sitting there in Domino's classpath, corrupting it:

A screenshot of Domino's ndext directory

The way the Mail API works is that there's a file, called "mailcap", that lists implementations for common data types, like:

text/plain;;		x-java-content-handler=com.sun.mail.handlers.text_plain
text/html;;		x-java-content-handler=com.sun.mail.handlers.text_html
text/xml;;		x-java-content-handler=com.sun.mail.handlers.text_xml
multipart/*;;		x-java-content-handler=com.sun.mail.handlers.multipart_mixed; x-java-fallback-entry=true

So, while all the entrypoint classes are jakarta.mail.* now, the implementations remain com.sun.mail.*, all with the same class names. And, since this little jerk of a JAR is sitting in the system classpath, it has a way of showing up all the time, complaining that com.sun.mail.handlers.text_plain is incompatible with jakarta.activation.DataContentHandler.

This is extremely fiddly to deal with, potentially involving writing a special ClassLoader implementation that blocks calls down to the lower-level JAR. While maybe possible, I'm not sure it'd be possible in a way that would be practical for normal use in an app.

And so, with a heavy heart, I forked the thing and added an "org.openntf" in front of all the package names. And that... works! It works just fine. It means that I'm on the hook for manually integrating any upstream changes, but at least it works without having to worry about any conflicts.

That wasn't the end of my trouble with this spec, though. The spec package itself, in jakarta.mail.Session, uses ServiceLoader to look for services, and it does so in the form that looks them up with the current thread's ClassLoader. Because I'm working in OSGi, that ClassLoader - the XPage app's loader - won't know about the implementation classes directly, and this call fails. And, while there's a whole sub-spec in OSGi for dealing with this, I've never had success actually getting it working in Domino.

So I forked that freaking thing too and modified the calls to use its own ClassLoader, which could find the implementation by way of it being a fragment bundle attached to it.
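
To illustrate the distinction with the standard java.util.ServiceLoader API (Provider and Session being the jakarta.mail types):

// The spec's original form: uses the current thread's ClassLoader, which
// in an XPages app can't see the implementation classes
ServiceLoader<Provider> brokenOnDomino = ServiceLoader.load(Provider.class);

// The modified form: asks the spec bundle's own ClassLoader, which can see
// the implementation attached as an OSGi fragment
ServiceLoader<Provider> works = ServiceLoader.load(Provider.class, Session.class.getClassLoader());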

And, with that, finally, I had Jakarta Mail properly hooked up and working without having to jump through too many hoops. I'd still prefer to not have forked the source, but it was the best of a bad lot of choices.

The Final Tally

That brings the specs updated/added in this project to:

  • Expression Language 4.0
  • Contexts and Dependency Injection 3.0
    • Annotations 2.0
    • Interceptors 2.0
    • Dependency Injection 2.0
  • RESTful Web Services (JAX-RS) 3.0
  • Bean Validation 3.0
  • JSON Processing 2.0
  • JSON Binding 2.0
  • XML Binding 3.0
  • Mail 2.1
    • Activation 2.1

Not too shabby, if I say so myself. Technically, Servlet 5.0 is in there too, but it doesn't actually bring any newer-than-2.4 powers to the Servlet container, so it's really just an infrastructural detail.

Now I'll just have the work of updating my client project and finally getting to use whatever that Yasson bug fix was that prompted this in the first place.

Journeys Debugging Open Liberty and MVC

Tue Nov 30 16:40:24 EST 2021

I mentioned in my last post that I've been tinkering with a modern structure for OpenNTF's web site as a side project. In that, I talked about how I've been going with Jakarta MVC for the front end, but ran into an odd problem with the latest versions in Open Liberty, and that was the impetus to tinkering with ERB.

Well, I decided to go back and take a swing at trying to make JSP work in this case, since it's (still) a good engine for this purpose, and it could be a fun experiment. I was indeed able to do it, and I think the path I took is worth chronicling here.

Context

In previous projects, such as this blog, I've used older versions of the software stack involved - basically, Jakarta 8, which is before the "big bang" switch from the javax.* to jakarta.* package namespace. Since this is a clean new app, I really want to lean to the newest versions across the board, so I pegged my plans to that.

Though Jakarta EE 9 and 9.1 (the Java 11 official version) have been out for a bit, the switchover comes with the sort of turbulence one would expect. Until just this past week, Open Liberty supported JEE 9 only in beta releases - these have historically been plenty stable for me, but it's always asking for trouble. Even with that non-beta version out, I found myself still on the beta track: I'm addicted to using MicroProfile Config, and MicroProfile's move to JEE 9 support is still itself in the RC stage.

So, okay, betas it is.

The Problem

Once I set everything up on JEE 9 and MP 5, I hit this exception when trying to render a JSP via an MVC Controller object:

java.lang.RuntimeException: SRV.8.2: RequestWrapper objects must extend ServletRequestWrapper or HttpServletRequestWrapper
  at com.ibm.wsspi.webcontainer.util.ServletUtil.unwrapRequest(ServletUtil.java:89)
  at [internal classes]
  at org.eclipse.krazo.engine.ServletViewEngine.forwardRequest(ServletViewEngine.java:135)
  at org.eclipse.krazo.engine.JspViewEngine.processView(JspViewEngine.java:58)
  at org.eclipse.krazo.core.ViewableWriter.writeTo(ViewableWriter.java:159)
  at org.eclipse.krazo.core.ViewableWriter.writeTo(ViewableWriter.java:1)
  at org.jboss.resteasy.core.interception.jaxrs.ServerWriterInterceptorContext.lambda$writeTo$1(ServerWriterInterceptorContext.java:79)
  ... 4 more

The short of it is that Krazo (the MVC implementation) passes JSP rendering along to the app container, rather than doing its own JSP work, which makes perfect sense. This, however, hits trouble within Liberty's ServletUtil class, which attempts to "unwrap" the incoming HttpServletRequest object to find the core Liberty-specific object to use extended methods on.

Normally, this sort of thing would work fine: every app server has its own variant of HttpServletRequest for its own uses, and it's perfectly reasonable to do this kind of unwrapping. However, for some reason, this was going awry.

The specific code from Krazo that calls down into Liberty code does do new HttpServletRequestWrapper(request), but that's also legal: the unwrapRequest method is intended specifically to unwrap spec-standard HttpServletRequestWrapper objects like that. So that's not our culprit, and I had to dig deeper.

Investigation

To start my investigation, I knew I'd need to work with the Krazo layer. Fortunately, though many of the moving parts here are baked into the Liberty server, Krazo is not - I include it as a Maven dependency in my app. So I cloned the Krazo source, added it to my workspace, and set my dependency on the SNAPSHOT version, allowing me to do my work inside Krazo's classes.

Context

So what was going on? At first, I thought that maybe something had snuck in a javax.* class somewhere - old code that wasn't fully migrated to JEE 9. That would certainly cause the trouble: javax.servlet.ServletRequestWrapper and jakarta.servlet.ServletRequestWrapper are, to the JVM's eyes, entirely-unrelated classes with no compatibility whatsoever. And, indeed, looking at the source of ServletUtil could give one that impression right away, since the code uses javax.*.

That's not the trouble, however. Though the class is written to javax.*, I gather that it's run through Eclipse Transformer during packaging of the app server, and the actual class that's involved uses jakarta.*. Okay, that's good to know and makes sense, but it also doesn't get us any closer to the root problem.

For my next step, I wanted to figure out what, specifically, it was looking for. The unwrapRequest method takes a Class<?> parameter to find the needed request type, but the stack trace above hid the path it took to get there. By attaching a debugger to the server, I gleaned that it was being called by the unwrapRequest variant above it that looks specifically for a com.ibm.wsspi.webcontainer.servlet.IExtendedRequest.

Okay, so I have the name of the interface it's looking for - I can work with this. My next step was to try to get a programmatic handle on it. The basic approach, when you don't have the class as part of your project, is to look it up via:

Class<?> requestClass = Class.forName("com.ibm.wsspi.webcontainer.servlet.IExtendedRequest");

That doesn't work here, though: though the app server definitely has that class, it (properly) shields the running application from accessing the class directly, so that the app runtime isn't contaminated by the surrounding server code.

What I needed to do next was find a ClassLoader that does know about it, and the best way to do that is to find a class provided from outside the running app and ask that. Fortunately, the incoming request is exactly that. So:

Class<?> requestClass = Class.forName("com.ibm.wsspi.webcontainer.servlet.IExtendedRequest", false, request.getClass().getClassLoader());

What that does is ask whatever ClassLoader the request object comes from - that is to say, the container's loader - to find the class. And it worked! Now I could test to verify whether the core request matches the type needed:

Class<?> requestClass = Class.forName("com.ibm.wsspi.webcontainer.servlet.IExtendedRequest", false, request.getClass().getClassLoader());
System.out.println("does it match? " + requestClass.isInstance(request));

As expected, that resolved to false. In this case, that's good, since it'd have been a much-worse problem if it hadn't. But what is the request object, anyway? Well:

System.out.println(request.getClass());
// output: jdk.proxy15.$Proxy68

Right, okay, that makes sense: all sorts of stuff uses proxy objects in Java, not the least of which being the CDI environment running the whole show.

Cracking Open Proxies

So I had some good information at this point: the HttpServletRequest that Krazo is handed is a proxy object, but Liberty has a hard requirement that its dispatcher is given an instance of IExtendedRequest, which this is not. That means that something in the stack is taking the original Liberty request object and making a proxy for it - fair enough, but inconvenient for me.

My next thought was that maybe I could track down the type of proxy object it is and, with that knowledge, get the underlying delegate request. That's a common-enough pattern: have an instance property in your proxy class that contains the delegate, and (if I'm lucky) have it accessible via a getter. Java's java.lang.reflect.Proxy class has a static method for determining the object that actually handles called methods:

System.out.println(Proxy.getInvocationHandler(request));
// output: org.jboss.resteasy.core.ContextParameterInjector$GenericDelegatingProxy@fc04422d

This was starting to come together all the more. Liberty recently switched from Apache CXF to RESTEasy for its JAX-RS implementation, and that could explain why this is trouble now when it wasn't before. More importantly for my immediate needs, that also gave me a lead to track down the proxy class.

However, though it was easy enough to find, my heart sank a bit at what I found: rather than having an easy instance property to get the real request, it uses an object from its container class and in turn asks that for the request. Maybe I'd be able to get to that via reflection, but the prospect of figuring out how to work with nested class contexts caused me to try to look around elsewhere instead.

CDI

Another potential answer came to me in a flash: CDI! Access to the CDI environment is standardized, and maybe I could fetch the original request from there. It'd be extremely likely that it'd just hand me back a similar proxy, but it'd be worth a shot. So here we go:

HttpServletRequest cdiReq = CDI.current().select(HttpServletRequest.class).get();
System.out.println(cdiReq);
// output: com.ibm.ws.webcontainer40.srt.SRTServletRequest40@1a22712c

Oh! Good! That's one of Liberty's internal types! Is it the object I need, though? Well... no. Crap. requestClass.isInstance(cdiReq) is false, so this didn't really get me very far. That's a shame, too, since that solution could have involved no implementation-specific code at all.

Internal Liberty Classes

My next thought was that I should try to find another way to get around to finding the true request object. I looked back through the debug stack to find where it was originally calling unwrapRequest to get a bit more context:

WebContainerRequestState reqState = WebContainerRequestState.getInstance(false);

wasReq = (IExtendedRequest) ServletUtil.unwrapRequest(request);

Okay, so what's this with WebContainerRequestState? That sure smells like an object that's meant to be a request-wide vessel for all sorts of state. If I were to write such a class, I'd use it to stash the incoming request as well as any other incidental data that I wouldn't want to store somewhere it might leak into the app. I was a little wary that WebAppRequestDispatcher didn't use it to get the IExtendedRequest, but maybe I'd luck out.

And boy, did I! Looking down the source of the file, I found my mark: public IExtendedRequest getCurrentThreadsIExtendedRequest().

The (Provisional) Solution

Now, I had all the tools I needed. Towards the bottom of Krazo's ServletViewEngine class, I conjured up this reflective incantation:

try {
  Class<?> requestStateClass = Class.forName("com.ibm.wsspi.webcontainer.WebContainerRequestState", false, request.getClass().getClassLoader());
  Method getInstance = requestStateClass.getDeclaredMethod("getInstance", boolean.class);
  Object requestState = getInstance.invoke(null, false);
  Method getCurrentThreadsIExtendedRequest = requestStateClass.getDeclaredMethod("getCurrentThreadsIExtendedRequest");
  request = (HttpServletRequest)getCurrentThreadsIExtendedRequest.invoke(requestState);
} catch (Throwable e1) {
  // Not on WAS
}
rd.forward(new HttpServletRequestWrapper(request), new HttpServletResponseWrapper(response));

So what I'm doing here is breaking into the container's ClassLoader in order to get a handle on WebContainerRequestState. From there, I'm able to call the method to get the current instance, and then in turn call the method to get the IExtendedRequest. I overwrite the request variable we're working with, and then pass that along to the dispatcher. If any of that were to fail, I just throw up my hands, assume I'm not on Liberty, and continue on as before.

And... it works! It actually works! The pages now render properly, with all the niceness of modern JSP at my fingertips! It was fun to toy with the idea of ERB, but I like this better for an otherwise-pure-Java app for sure.

Next Steps

So I have a solution that works for me, but it's so ugly and implementation-specific that I can't exactly be comfortable with it. Still, the trouble comes from an implementation-specific source, so that may be required. Maybe I'll have to leave it like this.

More responsibly, though, what I should do is narrow this down into a reproducible case without all the other moving parts in the app to make sure that it's actually a bug/incompatibility, and thus something that I can report. This is all open-source software, after all, and it'd do nobody any good for me to let a potential actual problem linger. I'll just have to properly identify where the true culprit is first. Is it because Krazo uses RequestDispatcher in a somewhat-unusual way? Is it because RESTEasy is too aggressive about wrapping requests with no proper way to get to the delegate? Is it that Liberty should handle this case better internally? Or maybe it's just some side effect of the other things I have going on. Research is warranted.

In the mean time, that was a fun one. I don't know that I'll have a need for this specific solution again, but it was good to find, and it's always good to get some troubleshooting practice like this for sure.

Writing A Custom ViewEngine For Jakarta MVC

Tue Nov 02 14:31:20 EDT 2021

One of the very-long-term side projects I have going on is a rewrite of OpenNTF's site. While the one we have has served us well, we have a lot of ideas about how we want to present projects differently and similar changes to make, so this is as good a time as any to go whole-hog.

The specific hog in question involves an opportunity to use modern Jakarta EE technologies by way of my Domino Open Liberty Runtime project, as I do with my blog here. And that means I can, also like this blog, use the delightful Jakarta MVC Spec.

However, when moving to JEE 9.1, I ran into some trouble with the current Open Liberty beta and its handling of JSP as the view template engine. At some point, I plan to investigate to see if the bug is on my side or in Liberty's (it is beta, in this case), but in the mean time it gave my brain an opportunity to wander: in theory, I could use ERB (a Ruby-based templating engine) for this purpose. I started to look around, and it turns out the basics of such a thing aren't difficult at all, and I figure it's worth documenting this revelation.

MVC ViewEngines

The way the MVC spec works, you have a JAX-RS endpoint that returns a string or is annotated with a view-template name, and that's how the framework determines what page to use to render the request. For example:

@Path("home")
@GET
@Produces(MediaType.TEXT_HTML)
public String get() {
  models.put("recentReleases", projectReleases.getRecentReleases(30)); //$NON-NLS-1$
  models.put("blogEntries", blogEntries.getEntries(5)); //$NON-NLS-1$

  return "home.jsp"; //$NON-NLS-1$
}

Here, the controller method does a little work to load required model data for the page and then hands it off to the view engine, identified here by returning "home.jsp", which is in turn loaded from WEB-INF/views/home.jsp in the app.

When it comes time to render, the framework looks through instances of ViewEngine to find one that can handle the named page. The default spec implementation ships with a few of these, and JspViewEngine is the one that handles view names ending with .jsp or .jspx. The contract for ViewEngine is pretty simple:

public interface ViewEngine {
  boolean supports(String view);
  void processView(ViewEngineContext context) throws ViewEngineException;
}

So basically, one method to check whether the engine can handle a given view name and another one to actually handle it if it returned true earlier.

Writing A Custom Engine

With this in mind, I set about writing a basic ErbViewEngine to see how practical it'd be. I added JRuby to my dependencies and then made my basic class:

@ApplicationScoped
@Priority(ViewEngine.PRIORITY_APPLICATION)
public class ErbViewEngine extends ViewEngineBase {

  @Inject
  private ServletContext servletContext;

  @Override
  public boolean supports(String view) {
    return String.valueOf(view).endsWith(".erb"); //$NON-NLS-1$
  }

  @Override
  public void processView(ViewEngineContext context) throws ViewEngineException {
    // ...
  }
}

At the top, you see how a custom ViewEngine is registered: it's done by way of making your class a CDI bean in the application scope, and then it's good form to mark it with a @Priority of the application level stored in the interface. Extending ViewEngineBase gets you a handful of utility methods, so you don't have to, for example, hard-code WEB-INF/views into your lookup code. The bit with ServletContext is there because it becomes useful in the implementation below - it's not part of the contractual requirement.

And that's basically the gist of hooking up your custom engine. The devil is in the implementation details, for sure, but that processView is an empty canvas for your work, and you're not responsible for the other fiddly details that may be involved.

First-Pass ERB Implementation

Though the above covers the main concept of this post, I figure it won't hurt to discuss the provisional implementation I have a bit more. There are a couple ways to use JRuby in a Java app, but the way I'm most familiar with is using JSR 223, which is a generic way to access scripting languages in Java. With it, you can populate contextual objects and settings and then execute a script in the target language. The Krazo MVC implementation actually comes with a generic Jsr223ViewEngine that lets you use any such language by extension.

In my case, the task at hand is to read in the ERB template, load up the Ruby interpreter, and then pass it a small script that uses the in-Ruby ERB class to render the final page. This basic implementation looks like this:

@Override
public void processView(ViewEngineContext context) throws ViewEngineException {
  Charset charset = resolveCharsetAndSetContentType(context);

  String res = resolveView(context);
  String template;
  try {
    // From Apache Commons IO
    template = IOUtils.resourceToString(res, StandardCharsets.UTF_8);
  } catch (IOException e) {
    throw new ViewEngineException("Unable to read view", e);
  }

  ScriptEngineManager scriptEngineManager = new ScriptEngineManager();
  ScriptEngine scriptEngine = scriptEngineManager.getEngineByExtension("rb"); //$NON-NLS-1$
  Object responseObject;
  try {
    Bindings bindings = scriptEngine.createBindings();
    bindings.put("models", context.getModels().asMap());
    bindings.put("__template", template);
    responseObject = scriptEngine.eval("require 'erb'\nERB.new(__template).result(binding)", bindings);
  } catch (ScriptException e) {
    throw new ViewEngineException("Unable to execute script", e);
  }

  try (Writer writer = new OutputStreamWriter(context.getOutputStream(), charset)) {
    writer.write(String.valueOf(responseObject));
  } catch (IOException e) {
    throw new ViewEngineException("Unable to write response", e);
  }
}

The resolveCharsetAndSetContentType and resolveView methods come from ViewEngineBase and do basically what their names imply. The rest of the code here reads in the ERB template file and passes it to the script engine. This is extremely similar to the generic JSR 223 implementation, but diverges in that the actual Ruby code is always the same, since it exists just to evaluate the template.

If I continue down this path, I'll put in some time to make this more streamable and to provide access to CDI beans, but it did the job to prove that it's quite doable.

All in all, I found this exactly as pleasant and straightforward as it should be.

Implementing Custom Token-Based Auth on Liberty With Domino

Sat Apr 24 12:31:00 EDT 2021

This weekend, I decided to embark on a small personal side project: implementing an RSS sync server I can use with NetNewsWire. It's the delightful sort of side project where the stakes are low and so I feel no pressure to actually complete it (I already have what I want with iCloud-based syncing), but it's a great learning exercise.

Fair warning: this post is essentially a travelogue of not-currently-public code for an incomplete side app of mine, and not necessarily useful as a tutorial. I may make a proper example project out of these ideas one day, but for the moment I'm just excited about how smoothly this process has gone.

The Idea

NetNewsWire syncs with a number of services, and one of them is FreshRSS, a self-hosted sync tool that uses PHP backed by an RDBMS. The implementation doesn't matter, though: what matters is that NNW can point at any server at an arbitrary URL implementing the same protocol.

As for the protocol itself, it turns out it's just the old Google Reader protocol. Like Rome, Reader rose, transformed the entire RSS ecosystem, and then crumbled, leaving its monuments across the landscape like scars. Many RSS sync services have stuck with that language ever since - it's a bit gangly, but it does the job fine, and it lowers the implementation toll on the clients.

So I figured I could find some adequate documentation and make a little webapp implementing it.

Authentication

My starting point (and all I've done so far) was to get authentication working. These servers mimic the (I assume antiquated) Google ClientLogin endpoint, where you POST "Email" and "Passwd" and get back a token in a weird little properties-ish format:

POST /accounts/ClientLogin HTTP/1.1
Content-Type: application/x-www-form-urlencoded

Email=ffooson&Passwd=secretpassword

Followed by:

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8

SID=null
LSID=null
Auth=somename/8e6845e089457af25303abc6f53356eb60bdb5f8

The format of the "Auth" token doesn't matter, I gather. I originally saw it in that "name/token" pattern, but other cases are just a token. That makes sense, since there's no need for the client to parse it - it just needs to send it back. In practice, it shouldn't have any "=" in it, since NNW parses the format expecting only one "=", but otherwise it should be up to you. Specifically, it will send it along in future requests as the Authorization header:

GET /reader/api/0/stream/items/ids?n=1000&output=json&s=user/-/state/com.google/starred HTTP/1.1
Authorization: GoogleLogin auth=somename/8e6845e089457af25303abc6f53356eb60bdb5f8

This is pretty standard stuff for any number of authentication schemes: often it'll start with "Bearer" instead of "GoogleLogin", but the idea is the same.

Implementing This

So how would one go about implementing this? Well, fortunately, the Jakarta EE spec includes a Security API that allows you to abstract the specifics of how the container authenticates a user, providing custom user identity stores and authentication mechanisms instead of or in addition to the ones provided by the container itself. This is as distinct from a container like Domino, where the HTTP stack handles authentication for all apps, and the only way to extend how that works is by writing a native library with the C-based DSAPI. Possible, but cumbersome.

Identity Store

We'll start with the identity store. Often, a container will be configured with its own concept of what the pool of users is and how they can be authenticated. On Domino, that's generally the names.nsf plus anything configured in a Directory Assistance database. On Liberty or another JEE container, that might be a static user list, an LDAP server, or any number of other options. With the Security API, you can implement your own. I've been ferrying around classes that look like this for a couple of years now:

/* snip */

import javax.security.enterprise.credential.Credential;
import javax.security.enterprise.credential.UsernamePasswordCredential;
import javax.security.enterprise.identitystore.CredentialValidationResult;
import javax.security.enterprise.identitystore.IdentityStore;

@ApplicationScoped
public class NotesDirectoryIdentityStore implements IdentityStore {
    @Inject AppConfig appConfig;

    @Override public int priority() { return 70; }
    @Override public Set<ValidationType> validationTypes() { return DEFAULT_VALIDATION_TYPES; }

    public CredentialValidationResult validate(UsernamePasswordCredential credential) {
        try {
            try(DominoClient client = DominoClientBuilder.newDominoClient().build()) {
                String dn = client.validateCredentials(appConfig.getAuthServer(), credential.getCaller(), credential.getPasswordAsString());
                return new CredentialValidationResult(null, dn, dn, dn, getGroups(dn));
            }
        } catch (NameNotFoundException e) {
            return CredentialValidationResult.NOT_VALIDATED_RESULT;
        } catch (AuthenticationException | AuthenticationNotSupportedException e) {
            return CredentialValidationResult.INVALID_RESULT;
        }
    }

    @Override
    public Set<String> getCallerGroups(CredentialValidationResult validationResult) {
        String dn = validationResult.getCallerDn();
        return getGroups(dn);
    }

    /* snip */
}

There's a lot going on here. To start with, the Security API goes hand-in-hand with CDI. That @ApplicationScoped annotation on the class means that this IdentityStore is an app-wide bean - Liberty picks up on that and registers it as a provider for authentication. The AppConfig is another CDI bean, this one housing the Domino server I want to authenticate against if not the local runtime (handy for development).

The IdentityStore interface definition does a little magic for identifying how to authenticate. The way it works is that the system uses objects that implement Credential, an extremely-generic interface to represent any sort of credential. When the default implementation is called, it looks through your implementation class for any methods that can handle the specific credential class that came in. You can see above that validate(UsernamePasswordCredential credential) isn't tagged with @Override - that's because it's not implementing an existing method. Instead, the core validate looks for other methods named validate to take the incoming class. UsernamePasswordCredential is one of the few stock ones that comes with the API and is how the container will likely ask for authentication if using e.g. HTTP Basic auth.

Here, I use some Domino API to check the username+password combination against the Domino directory and inform the caller whether the credentials match and, if so, what the user's distinguished name and group memberships are (with some implementation removed for clarity).

Token Authentication

That's all well and good, and will allow a user to log into the app with HTTP Basic authentication with a Domino username and password, but I'd also like the aforementioned GoogleLogin tokens to count as "real" users in the system.

To start doing that, I created a JAX-RS resource for the expected login URL:

@Path("accounts")
public class AccountsResource {
    @Inject TokenBean tokens;
    @Inject IdentityStore identityStore;

    @PermitAll
    @Path("ClientLogin")
    @POST
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    @Produces(MediaType.TEXT_HTML)
    public String post(@FormParam("Email") @NotEmpty String email, @FormParam("Passwd") String password) {
        CredentialValidationResult result = identityStore.validate(new UsernamePasswordCredential(email, password));
        switch(result.getStatus()) {
        case VALID:
            Token token = tokens.createToken(result.getCallerDn());
            String mangledDn = result.getCallerDn().replace('=', '_').replace('/', '_');
            return MessageFormat.format("SID=null\nLSID=null\nAuth={0}\n", mangledDn + "/" + token.token()); //$NON-NLS-1$ //$NON-NLS-2$
        default:
            // TODO find a better exception
            throw new RuntimeException("Invalid credentials");
        }
    }

}

Here, I make use of the IdentityStore implementation above to check the incoming username/password pair. Since I can @Inject it based on just the interface, the fact that it's authenticating against Domino isn't relevant, and this class can remain blissfully unaware of the actual user directory. All it needs to know is whether the credentials are good. In any event, if they are, it returns the weird little format in the response and the RSS client can then use it in the future.

The TokenBean class there is another custom CDI bean, and its job is to create and look up tokens in the storage NSF. The pertinent part is:

@ApplicationScoped
public class TokenBean {
    @Inject @AdminUser
    Database adminDatabase;

    public Token createToken(String userName) {
        Token token = new Token(UUID.randomUUID().toString().replace("-", ""), userName); //$NON-NLS-1$ //$NON-NLS-2$
        adminDatabase.createDocument()
            .replaceItemValue("Form", "Token") //$NON-NLS-1$ //$NON-NLS-2$
            .replaceItemValue("Token", token.token()) //$NON-NLS-1$
            .replaceItemValue("User", token.user()) //$NON-NLS-1$
            .save();
        return token;
    }

    /* snip */
}

Nothing too special there: it just creates a random token string value and saves it in a document. The token could be anything; I could have easily gone with the document's UNID, since it's basically the same sort of value.

I'll save the @Inject @AdminUser bit for another day, since we're already far enough into the CDI weeds here. Suffice it to say, it injects a Database object for the backing data DB for the designated admin user - basically, like opening the current DB with sessionAsSigner in XPages. The @AdminUser is a custom annotation in the app to convey this meaning.
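
For reference, though, a CDI qualifier like that is typically just an annotation marked with @Qualifier - a minimal sketch of the definition (the producer that actually opens the database is the part I'm saving for that other day):

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.FIELD, ElementType.METHOD, ElementType.PARAMETER })
public @interface AdminUser {
}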

Okay, so great, now we have a way for a client to log in with a username and password and get a token to then use in the future. That leaves the next step: having the app accept the token as an equivalent authentication for the user.

Intercepting the incoming request and analyzing the token is done via another Jakarta Security API interface: HttpAuthenticationMechanism. Creating a bean of this type allows you to look at an incoming request, see if it's part of your custom authentication, and handle it any way you want. In mine, I look for the "GoogleLogin" authorization header:

@ApplicationScoped
public class TokenAuthentication implements HttpAuthenticationMechanism {
    @Inject IdentityStore identityStore;
    
    @Override
    public AuthenticationStatus validateRequest(HttpServletRequest request, HttpServletResponse response,
            HttpMessageContext httpMessageContext) throws AuthenticationException {
        
        String authHeader = request.getHeader("Authorization"); //$NON-NLS-1$
        if(StringUtil.isNotEmpty(authHeader) && authHeader.startsWith(GoogleAccountTokenHandler.AUTH_PREFIX)) {
            CredentialValidationResult result = identityStore.validate(new GoogleAccountTokenHeaderCredential(authHeader));
            switch(result.getStatus()) {
            case VALID:
                httpMessageContext.notifyContainerAboutLogin(result);
                return AuthenticationStatus.SUCCESS;
            default:
                return AuthenticationStatus.SEND_FAILURE;
            }
        }
        
        return AuthenticationStatus.NOT_DONE;
    }

}

Here, I look for the "Authorization" header and, if it starts with "GoogleLogin auth=", then I parse it for the token, create an instance of an app-custom GoogleAccountTokenHeaderCredential object (implementing Credential) and ask the app's IdentityStore to authorize it.

Returning to the IdentityStore implementation, that meant adding another validate override:

@ApplicationScoped
public class NotesDirectoryIdentityStore implements IdentityStore {
    /* snip */

    public CredentialValidationResult validate(GoogleAccountTokenHeaderCredential credential) {
        try {
            try(DominoClient client = DominoClientBuilder.newDominoClient().build()) {
                String dn = client.validateCredentialsWithToken(appConfig.getAuthServer(), credential.headerValue());
                return new CredentialValidationResult(null, dn, dn, dn, getGroups(dn));
            }
        } catch (NameNotFoundException e) {
            return CredentialValidationResult.NOT_VALIDATED_RESULT;
        } catch (AuthenticationException | AuthenticationNotSupportedException e) {
            return CredentialValidationResult.INVALID_RESULT;
        }
    }
}

This one looks similar to the UsernamePasswordCredential one above, but takes instances of my custom Credential class - automatically picked up by the default implementation. I decided to be a little extra-fancy here: the particular Domino API in question supports custom token-based authentication to look up a distinguished name, and I made use of that here. That takes us one level deeper:

public class GoogleAccountTokenHandler implements CredentialValidationTokenHandler<String> {
    public static final String AUTH_PREFIX = "GoogleLogin auth="; //$NON-NLS-1$
    
    @Override
    public boolean canProcess(Object token) {
        if(token instanceof String authHeader) {
            return authHeader.startsWith(AUTH_PREFIX);
        }
        return false;
    }

    @Override
    public String getUserDn(String token, String serverName) throws NameNotFoundException, AuthenticationException, AuthenticationNotSupportedException {
        String userTokenPair = token.substring(AUTH_PREFIX.length());
        int slashIndex = userTokenPair.indexOf('/');
        if(slashIndex >= 0) {
            String tokenVal = userTokenPair.substring(slashIndex+1);
            Token authToken = CDI.current().select(TokenBean.class).get().getToken(tokenVal)
                .orElseThrow(() -> new AuthenticationException(MessageFormat.format("Unable to find token \"{0}\"", token)));
            return authToken.user();
        }
        throw new AuthenticationNotSupportedException("Malformed token");
    }

}

This is the Domino-specific one, inspired by the Jakarta Security API. I could also have done this lookup in the previous class, but this way allows me to reuse this same custom authentication in any API use.

Anyway, this class uses another method on TokenBean:

@ApplicationScoped
public class TokenBean {    
    @Inject @AdminUser
    Database adminDatabase;

    /* snip */

    public Optional<Token> getToken(String tokenValue) {
        return adminDatabase.openCollection("Tokens") //$NON-NLS-1$
            .orElseThrow(() -> new IllegalStateException("Unable to open view \"Tokens\""))
            .query()
            .readColumnValues()
            .selectByKey(tokenValue, true)
            .firstEntry()
            .map(entry -> new Token(entry.get("Token", String.class, ""), entry.get("User", String.class, ""))); //$NON-NLS-1$ //$NON-NLS-2$ //$NON-NLS-3$ //$NON-NLS-4$
    }
}

There, it looks up the requested token in the "Tokens" view and, if present, returns a record indicating that token and the user it was created for. The latter is then returned by the above Domino-custom GoogleAccountTokenHandler as the authoritative validated user. In turn, the JEE NotesDirectoryIdentityStore considers the credential validation successful and returns it back to the auth mechanism. Finally, the TokenAuthentication up there sees the successful validation and notifies the container about the user that the token mapped to.

Summary

So that turned into something of a long walk at the end there, but the result is really neat: as far as my app is concerned, the "GoogleLogin" tokens - as looked up in an NSF - are just as good as username/password authentication. Anything that calls httpServletRequest.getUserPrincipal() will see the username from the token, and I also use this result to spawn the Domino session object for each request.
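
For example, a resource method can get at that principal with nothing but standard JAX-RS (a hypothetical endpoint, not one from the app):

@GET
@Path("whoami")
@Produces(MediaType.TEXT_PLAIN)
public String whoAmI(@Context SecurityContext securityContext) {
    // Whether auth came in via Basic or a GoogleLogin token, this is the
    // caller as validated by the identity store
    return securityContext.getUserPrincipal().getName();
}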

Once all these pieces are in place, none of the rest of the app has to have any knowledge of it at all. When I implement the API to return the actual RSS feed entries, I'll be able to just use the current user, knowing that it's guaranteed to be properly handled by the rest of the system beforehand.

Bonus: Java 16

This last bit isn't really related to the above, but I just want to gush a bit about newer techs. My plan is to deploy this app using my Open Liberty Runtime, which means I can use any Open Liberty and Java version I want. Java 16 came out recently, so I figured I'd give that a shot. Though I don't think Liberty is officially supported on it yet, it's worked out just fine for my needs so far.

This lets me use the features that have come into Java in the last few years, a couple of which moved from experimental/incubating into finalized forms in 16 specifically. For example, I can use records, a specialized type of Java class intended for immutable data. Token is a perfect case for this:

public record Token(String token, String user) {
}

That's the entirety of the class. Because it's a record, it gets a constructor with those two properties, plus accessor methods named after the properties (as used in the examples above). Neat!

Another handy new feature is pattern matching for instanceof. This allows you to simplify the common idiom where you check if an object is a particular type, then cast it to that type afterwards to do something. With this new syntax, you can compress that into the actual test, as seen above:

@Override
public boolean canProcess(Object token) {
    if(token instanceof String authHeader) {
        return authHeader.startsWith(AUTH_PREFIX);
    }
    return false;
}

Using this allows me to check the incoming value's type while also immediately creating a variable to treat it as such. It's essentially the same thing you could do before, but cleaner and more explicit now. There's more of this kind of thing on the way, and I'm looking forward to the future additions eagerly.

Using Server-Sent Events on Domino

Tue Mar 30 08:57:20 EDT 2021

Tags: jakartaee java

Though Domino's HTTP stack infamously doesn't support WebSocket, WebSocket isn't the only game in town when it comes to getting push-type information to HTTP clients. HTML5 also brought with it the less-famous Server-Sent Events standard, which is basically half of WebSocket: it allows the server to push events to the client, but it's still a one-way communication channel.

The Standard

The technique that SSE uses is almost ludicrously simple: the client makes a request and the server replies that it will provide text/event-stream content and keeps the connection open. Then, it starts emitting events delimited by blank lines:

HTTP/1.1 200 OK
Content-Type: text/event-stream;charset=UTF-8



event: timeline
data: hello

event: timeline
data: hello

Unlike WebSocket, there's no Upgrade header, no two-way communication, and thereby no special requirements on the server. It's so simple that you don't even really need a server-side library to use it, though it still helps.
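
To demonstrate how little the protocol demands, here's a bare-bones sketch using nothing but the Servlet API - a hypothetical standalone Servlet, and one that assumes a modern container, since Domino's ancient Servlet spec has no @WebServlet:

@WebServlet("/raw-sse")
public class RawSseServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/event-stream");
        resp.setCharacterEncoding("UTF-8");
        PrintWriter w = resp.getWriter();
        try {
            for(int i = 0; i < 5; i++) {
                w.write("event: timeline\n");
                w.write("data: hello " + i + "\n\n"); // the blank line terminates the event
                w.flush();
                TimeUnit.SECONDS.sleep(2);
            }
        } catch(InterruptedException e) {
            // Then we're shutting down
        }
    }
}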

In Practice

I've found that, though SSE is intentionally far less capable than WebSocket, it actually provides what I want in almost all cases: the client can receive messages instantaneously from the server, while the server can receive messages from the client by traditional means like POST requests. Though this is less efficient and flexible than WebSocket, it suits perfectly the needs of apps like server monitors, chat rooms, and so forth.

Using SSE on Domino

JAX-RS, the Java REST service framework, provides a mechanism for working with server-sent events pretty nicely. Baeldung, as usual, has a splendid tutorial covering the API, and a chunk of what I say here will be essentially rehashing that.

However, though Domino ships with JAX-RS by way of the ExtLib, the library only implements JAX-RS 1.x, which predates SSE support. Fortunately, newer JAX-RS implementations work pretty well on Domino, as long as you bring them in in a compatible way. In my XPages Jakarta EE Support project, I did this by way of RESTEasy, and did the legwork there to make it work in Domino's OSGi environment. For our example today, though, I'm going to skip that and build a small webapp using the com.ibm.pvc.webcontainer.application extension point. In theory, this should also work XPages-side with my project, though I haven't tested that; it might require messing with the Servlet response cache.

The Example

I've uploaded my example to GitHub, so the code is available there. I've aimed to make it pretty simple, though there's always some extra scaffolding to get this stuff working on Domino. The bulk of the "pom.xml" file is devoted to two main things: packaging an app as an OSGi bundle (with RESTEasy embedded) and generating an update site with site.xml to import into Domino.

Server Side

The real work happens in TimeStreamResource, the JAX-RS resource that manages client connections and also, in this case, happens to emit the messages as well.

This resource, when constructed, spawns two threads. The first one monitors a BlockingQueue for new messages and passes them along to the SseBroadcaster:

try {
    String message;
    while((message = messageQueue.take()) != null) {
        // The producer below may send a message before setSse is called the first time
        if(this.sseBroadcaster != null) {
            this.sseBroadcaster.broadcast(this.sse.newEvent("timeline", message)); //$NON-NLS-1$
        }
    }
} catch(InterruptedException e) {
    // Then we're shutting down
} finally {
    this.sseBroadcaster.close();
}

Here, I'm using the Sse#newEvent convenience method to send a basic text message. In practice, you'll likely want to use the builder you get from Sse#newEventBuilder to construct more-complicated events with IDs and structured data types (usually JSON).
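
For instance, a fuller event might look something like this (standard JAX-RS 2.1 SSE API; the event-ID counter and the JSON-serializable payload type here are hypothetical):

OutboundSseEvent event = this.sse.newEventBuilder()
    .name("timeline")
    .id(Long.toString(eventId)) // hypothetical incrementing counter
    .mediaType(MediaType.APPLICATION_JSON)
    .data(TimeMessage.class, message) // hypothetical payload bean
    .build();
this.sseBroadcaster.broadcast(event);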

A BlockingQueue implementation (such as LinkedBlockingDeque) is ideal for this task, as it provides a simple API to add objects to the queue and then wait for new ones to arrive.
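
The core interaction is about as small as concurrency code gets - both types are from java.util.concurrent:

BlockingQueue<String> messageQueue = new LinkedBlockingDeque<>();
messageQueue.offer("hello");          // producer side: enqueue without blocking
String message = messageQueue.take(); // consumer side: block until a message arrives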

The second one emits a new message every 10 seconds. This is just for the example's sake, and would normally be actually looking something up or would itself be a listener for events it would like to broadcast.

try {
    while(true) {
        String eventContent = "- At the tone, the Domino time will be " + OffsetDateTime.now();
        messageQueue.offer(eventContent);

        // Note: any sleeping should be short enough that it doesn't block HTTP restart
        TimeUnit.SECONDS.sleep(10);
    }
} catch(InterruptedException e) {
    // Then we're shutting down
}

Browsers can register as listeners just by issuing a GET request to the API endpoint:

@GET
@Produces(MediaType.SERVER_SENT_EVENTS)
public void get(@Context SseEventSink sseEventSink) {
    this.sseBroadcaster.register(sseEventSink);
}

That will register them as an available listener when broadcast events are sent out.

Additionally, to simulate something like a chat room, I added a POST endpoint to send new messages beyond the periodic ten-second broadcast:

@POST
@Produces(MediaType.TEXT_PLAIN)
public String sendMessage(String message) throws InterruptedException {
    messageQueue.offer(message);
    return "Received message";
}

That's really all there is to it as far as "business logic" goes. There's some scaffolding in the Servlet implementation to get RESTEasy working nicely and manage the ExecutorService, plus the obligatory "plugin.xml" to register the app with Domino and "web.xml" to account for Domino's old Servlet spec, but that's about it.

Client Side

On the client side, everything you need is built into every modern browser. In fact, the bulk of "index.html" is CSS and basic HTML. The JavaScript involved is blessedly slight:

function sendMessage() {
    const cmd = document.getElementById("message").value;
    document.getElementById("message").value = "";
    fetch("api/time", {
        method: "POST",
        body: cmd
    });
    return false;
}
function appendLogLine(line) {
    const output = document.getElementById("output");
    output.innerText += line + "\n";
    output.scrollTop = output.scrollHeight;
}
function subscribe() {
    const eventSource = new EventSource("api/time");
    eventSource.addEventListener("timeline",  (event) => {
        appendLogLine(event.data);
    });
    eventSource.onerror = function (err) {
        console.error("EventSource failed:", err);
    };
}

window.addEventListener("load", () => subscribe());

The EventSource object is the core of it and is a standard browser component. You give it a path to watch and then listen for events and errors. fetch is also standard and is a much-nicer API for dealing with HTTP requests. In a real app, things might get a bit more complicated if you want to pass along credentials and the like, but this is really it.

Gotchas

The biggest thing to keep in mind when working with this is that you have to be very careful to not block Domino's HTTP task from restarting. If you don't keep everything in an ExecutorService and account for InterruptedExceptions as I do here, you're highly likely to run into a situation where a thread will keep chugging along indefinitely, leading to the dreaded "waiting for session to finish" loop. The ExecutorService's shutdownNow method helps you manage this - as long as your threads have escape hatches for the InterruptedException they'll receive, you should be good.
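
As a sketch of that pattern (a hypothetical context listener - the example project does the equivalent in its Servlet's lifecycle methods):

public class SseLifecycleListener implements ServletContextListener {
    private ExecutorService exec;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        exec = Executors.newCachedThreadPool();
        // Worker threads get submitted to exec elsewhere in the app
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Interrupts running tasks so HTTP can restart; each task must treat
        // its InterruptedException as the cue to exit
        exec.shutdownNow();
    }
}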

I also, admittedly, have not yet tested this at scale. I've tried it out here and there for clients, but haven't pulled the trigger on actually shipping anything with it. It should work fine, since it's using standard JAX-RS stuff, but there's always the chance that, say, the broadcaster registry will fill up with never-ending requests and will eventually bloat up. The stack should handle that properly, but you never know.

Beyond any worries about the web container, it's also just a minefield of potential threading and duplicated-work trouble. For example, when I first wrote the example, I found that messages weren't shared, and then that the time messages could get doubled up. That's because JAX-RS, by default, creates a new instance of the resource class for each request. Moving the declaration from the Application class's getClasses() method (which creates new objects) to getSingletons() (which reuses single objects) fixed the first problem. After that, I found that the setSse method was called multiple times even for the singleton, and so I moved the thread spawning to the constructor to ensure that they're only launched once.

Once you have the threading sorted out, though, this ends up being a pretty-practical path to accomplishing the bulk of what you would normally do with WebSocket, even with an aging HTTP stack like Domino's.

Getting Started with Hotwire in a Java Webapp

Tue Jan 12 17:19:11 EST 2021

Whenever I have a great deal of discretion over how a web app is made these days, I like to push to see how simple I can make the front end portion. I spend some of my client time writing heavy client-JS front ends in React and Angular and what-have-you, and, though I get why they are good, I kind of hate them all.

One of the manifestations of my desires has been this very blog, where I set out to try not only some interesting current tools on the Java side, but also challenged myself heavily to use little to no JavaScript. On that front, I was tremendously successful - and, in fact, the only JavaScript on here is the Turbolinks library, which intercepts same-app links and updates the changed parts inline, without the server knowing about the "partial refresh" going on.

Since then, Turbolinks merged with its cousin Stimulus and apotheosized into Hotwire, which is somewhere in between a JavaScript framework and a manifesto. Specifically, it's a manifesto to my liking, so I've been champing at the bit to use it more.

Hotwire Overview

The "Hotwire" name is a cheeky truncation of HTML-over-the-wire, which itself is a neologism for how the web has historically worked: your server sends HTML, and then your browser does stuff with that. It "needs" a new name to set it apart from full-JS apps, which amount to basically sending an application to the browser, having it initialize the app, and then having the app do what would otherwise be the server's job by way of shuttling JSON around.

Turbo is the part that subsumed Turbolinks, and it focuses on enhancing existing HTML and providing a few web components to bring single-page-application niceties to server-rendered apps. The "Drive" part is Turbolinks, so that was familiar to me. What interested me next was Turbo Frames.

Turbo Frames

If you've ever used the XPages Dojo Tab Container's partialRefresh property before, Turbo Frames will be familiar. There are two main ways you can go about using it: making a "frame" that contains some navigable content (say, a form) that will then refresh in-place, or making a lazy-loaded frame that pulls from another URL. The latter is what interested me now, and is what carries similar benefits to the Tab Container. It lets you serve the main page and then defer the complex computation of an inner part without having to write your own JavaScript to do an API call or otherwise populate the section.

In my case, I wanted to do something very similar to the example. I have my main page, then a sidebar that can be potentially complicated to generate. So, I set up a Turbo Frame using this bit of JSP:

<turbo-frame id="links" src="${pageContext.request.contextPath}/links"></turbo-frame>

The only difference from the example, really, is the bit of EL in ${...}, which just makes sure that the final URL adapts to wherever the app is hosted.

The "links" resource there is another MVC controller that renders a different JSP page, truncated like:

<html>
    <head>
        <script type="text/javascript" src="${pageContext.request.contextPath}/webjars/hotwired__turbo/7.0.0-beta.2/dist/turbo.es5-umd.js"></script>
    </head>
    <body>
        <turbo-frame id="links">
            <!-- expensive content here -->
        </turbo-frame>
    </body>
</html>

The <turbo-frame id="links"> on the initiating page matches up with the one in the embedded page to figure out what to extract and render.

One little side note here is my use of WebJars to bring in Turbo. This isn't an NPM-based project, so there's no package.json bringing the dependency in, but I also didn't want to just paste the JS into my project. Fortunately, WebJars does yeoman's work: it makes various JS libraries available in Servlet-friendly Java JAR format, giving you a JAR with the JS from whatever the library is in META-INF/resources. In turn, an at-least-reasonably-modern servlet container will serve files up from there as if they're part of your main app. That way, you can just use a Maven dependency and not have to worry.
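For reference, the Maven dependency for that Turbo WebJar looks something like this - the coordinates follow WebJars' convention for scoped NPM packages (@hotwired/turbo becomes hotwired__turbo under org.webjars.npm), with the version matching the path above:

<dependency>
	<groupId>org.webjars.npm</groupId>
	<artifactId>hotwired__turbo</artifactId>
	<version>7.0.0-beta.2</version>
</dependency>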

A Hitch: 406 Not Acceptable

Edit 2021-01-13: Thanks to a new release of Turbo, this workaround is no longer needed.

When I first put this together, I saw that Turbo was doing its job of fetching from the remote URL, but it was getting a 406 Not Acceptable response from the server. It took me a minute to figure out why - the URL was correct, it was just a normal GET request, and nothing immediately stood out as a problem in the headers.

It turned out that the trouble was in the Accept header. To work with other Turbo components, Frames makes a request with a header like Accept: text/html; turbo-stream, text/html, application/xhtml+xml. That first one - text/html; turbo-stream - is problematic. I'm not sure if it's the presence of a qualifier at all on text/html, the space, or the lack of an = (as in text/html;charset=UTF-8), but Liberty didn't like it. For what it's worth, HTTP's grammar expects media-type parameters in name=value form, so the bare turbo-stream token is technically malformed - Liberty's strictness is at least defensible.

Since I'm not (yet, at least) using Turbo Streams, I decided to filter this out on the server. Since MVC is built on JAX-RS, I wrote a JAX-RS request filter to find any Accept values of this type and strip them out:

import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.Provider;

@Provider
@PreMatching
public class TurboStreamAcceptFilter implements ContainerRequestFilter {
    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        MultivaluedMap<String, String> headers = requestContext.getHeaders();
        if(headers.containsKey(HttpHeaders.ACCEPT)) {
            List<String> cleaned = headers.get(HttpHeaders.ACCEPT).stream()
                .map(accept -> {
                    // Split the header value into individual media types and drop
                    // any that carry a ";" qualifier, like "text/html; turbo-stream"
                    String[] vals = accept.split(",\\s*"); //$NON-NLS-1$
                    List<String> localClean = Arrays.stream(vals)
                        .filter(val -> val.indexOf(';') < 0)
                        .collect(Collectors.toList());
                    return String.join(", ", localClean); //$NON-NLS-1$
                })
                .collect(Collectors.toList());
            headers.put(HttpHeaders.ACCEPT, cleaned);
        }
    }
}

Since those filters happen before almost anything else, this cleared up the trouble.

Summary

Setting the Accept quirk aside, this was a pleasant success, and I look forward to using this more. I've found the modern Java stack of JAX-RS + CDI + MVC + simple JSP to be a delight, and Hotwire slots perfectly-smoothly into it. I still quite enjoy rendering HTML on the server and the associated perk of not having to duplicate business logic on both sides. Next time I have an app that requires a bit of actual JavaScript, I'll likely throw Stimulus into the mix here.

Managed Beans to CDI

Fri Jun 19 13:50:44 EDT 2020

  1. Java Services (Not the RESTful Kind)
  2. Java ClassLoaders
  3. Managed Beans to CDI
  4. The Myriad Idioms For Finding Implementations In Java

When I was getting familiar with modern Java server development, one of the biggest conceptual stumbling blocks by far was CDI. Part of the trouble was that I kind of jumped in the deep end, by way of JNoSQL's examples. JNoSQL is a CDI citizen through and through, and so the docs would just toss out things like how you "create a repository" by making an interface with no implementation.

Moreover, CDI has a bit of the "Maven" problem, where, once you do the work of getting familiar with it, the parts that are completely baffling to newcomers become more and more difficult to remember as being unusual.

Fortunately, like how coming to Maven by way of Tycho OSGi projects is "hard mode", coming to CDI by way of a toolkit that uses auto-created proxy objects is a more difficult path than necessary. Even better, XPages developers have a clean segue into it: managed beans.

JSF Managed Beans

XPages inherited the original JSF concept of managed beans, where you put definitions for your beans in faces-config.xml like so:

<managed-bean>
	<managed-bean-name>someBean</managed-bean-name>
	<managed-bean-class>com.example.SomeBeanClass</managed-bean-class>
	<managed-bean-scope>application</managed-bean-scope>
	<managed-property>
		<property-name>database</property-name>
		<value>#{database}</value>
	</managed-property>
</managed-bean>

Though the syntax isn't Faces-specific, the fact that it is defined in faces-config.xml demonstrates what a JSF-ism it is. Newer versions of JSF (not XPages) let you declare your beans inline in the class, skipping the XML part:

package com.example;
// ...
@ManagedBean(name="someBean")
@ApplicationScoped
public class SomeBeanClass {
	@ManagedProperty(value="#{database}")
	private Database someProp;
}

These annotations were initially within the javax.faces package, highlighting that, while they're a new developer convenience, it's still basically the same JSF-specific thing.

While all this was going on (and before it, really), the Enterprise JavaBeans (EJB) spec was chugging along, covering some similar concepts, though it's really kind of its own all-consuming beast. I won't talk about it much here, in large part because I've never used it, but it has an important part in this history, especially when we get to the "dependency injection" parts.

Move to CDI

Since it turns out that managed beans are a terrifically-useful concept beyond just JSF, Java EE siphoned concepts from JSF and EJB to make the obtusely named Contexts and Dependency Injection spec, or CDI. CDI is paired with some associated specs like Common Annotations and Inject to make a new bean system. With a switch to CDI, the bean above can be tweaked to something like:

package com.example;
// ...
@Named(name="someBean")
@ApplicationScoped
public class SomeBeanClass {
	@Inject @Named("database")
	private Database someProp;
}

Not wildly different - some same-named annotations in a different package, and some semantic switches, but the same basic idea. The difference here is that this is entirely divorced from JSF, and indeed from web apps in general. CDI specifically has a mode that works outside of a JEE/Servlet container and could work in e.g. a command-line program.

Newer versions of JSF (and other UI engines) deprecated their own version of this to allow for CDI to be the consistent pool of variable resolution and creation for the UI and for the business logic.

The Conceptual Leap

One of the things blocking me from properly grasping CDI at first was that @Inject annotation on a property. If it's just some Java object, how would that property ever be set? Certainly, CDI couldn't be so magical that I could just do new SomeBeanClass() and have someProp populated, right? Well, yes, that's right. No matter how gussied up your class definition is with CDI annotations, constructing an instance with new will pay no attention to any of it.
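To make that concrete, here's a quick sketch of the difference, using CDI's programmatic lookup (and assuming a running CDI container with SomeBeanClass registered as a bean):

import javax.enterprise.inject.spi.CDI;

// Constructed by hand: the container never sees this object, so the
// annotations are inert and someProp stays null
SomeBeanClass manual = new SomeBeanClass();

// Obtained through CDI: the container creates the instance itself,
// so the @Inject'ed someProp is populated
SomeBeanClass managed = CDI.current().select(SomeBeanClass.class).get();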

What got me over the hurdle is realizing that, in a modern web app in particular, almost everything you do runs through CDI. JSP request? That can resolve CDI. JAX-RS resource? That's managed by CDI. Filters? CDI. And, because those objects are all being instantiated by CDI, the CDI runtime can do whatever the heck it wants with them. That's why the managed property in the original example is so critical: it's the same idea, just managed by the JSF runtime instead of CDI.

That's how you can get to a class like the controller that manages the posts in this blog. It's annotated with all sorts of stuff: the JAX-RS @Path, the MVC spec @Controller, the CDI @RequestScoped, and, importantly, the @Inject'ed properties. Because the JAX-RS environment instantiates its resource classes through CDI in a JEE container, those will be populated from various sources. HttpServletRequest comes from the servlet environment itself, CommentRepository comes from JNoSQL as based on an interface in my non-JEE project (more on that in a bit), and UserInfoBean is a by-the-numbers managed bean in the CDI style.

There's certainly more indirect "magic" going on here than in the faces-config.xml starting point, but it's a clear line from there to here.

The Weird Stuff

CDI covers more ground, though, and this is the sort of thing that tripped me up when I saw the JNoSQL examples. Among CDI's toolset is the creation of "proxy" objects, which are dynamic objects that intercept normal method calls with new behavior. This is a language-level Java feature that I didn't even know existed in this form, but it's been there since 1.3.

Dynamic scripting languages do this sort of thing as their bread and butter. In Ruby, you can define method_missing to be called when code calls a method that wasn't already defined, and that can respond however you'd like. Years ago, I used this to let you do doc.foo to get a document item value, for example. In Java, you get a mildly-less-loosey-goosey version of this kind of behavior with a proxy's InvocationHandler.
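If you haven't run into java.lang.reflect.Proxy before, here's a tiny self-contained demonstration of the mechanism - nothing CDI-specific about it:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    interface Greeter {
        String greet(String name);
    }

    public static void main(String[] args) {
        // Every call on the proxy arrives here as a Method plus its arguments
        InvocationHandler handler = (proxy, method, params) -> {
            if("greet".equals(method.getName())) {
                return "Hello, " + params[0] + "!";
            }
            throw new UnsupportedOperationException(method.getName());
        };
        Greeter greeter = (Greeter)Proxy.newProxyInstance(
            ProxyDemo.class.getClassLoader(),
            new Class<?>[] { Greeter.class },
            handler);
        System.out.println(greeter.greet("world")); // prints "Hello, world!"
    }
}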

CDI does this extensively, even when you might think it's not. With CDI, managed instances are generally dynamic proxy objects, which allows it to not only inject field values, but also add wrapper code around method calls. This allows tools like MicroProfile Metrics to do things like count invocations, measure timings, and so forth without requiring explicit code beyond the annotations.
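For example, here's a sketch of what that looks like with MicroProfile Metrics, assuming it's available in the environment (SearchService is a made-up bean): annotating the method is all it takes, since the CDI proxy wraps each call in the timing interceptor.

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.metrics.annotation.Timed;

@ApplicationScoped
public class SearchService {
    // The proxy intercepts calls and records their durations; the
    // method body itself knows nothing about metrics
    @Timed(name = "searchTimer")
    public String search(String query) {
        return "results for " + query;
    }
}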

And then there are the whole-cloth new objects, like the JNoSQL repositories. To take one of the examples from jnosql.org, here's a full definition of a JNoSQL repository as far as the app developer is concerned:

public interface PersonRepository extends Repository<Person, Long> {

  List<Person> findByName(String name);

  Stream<Person> findByPhones(String phone);
}

Without knowledge of CDI, this is absolute madness. How could it possibly work? There's no code! The trick to it is that CDI ends up creating a dynamic proxy implementation of the interface, which is in turn backed by an InvocationHandler instance. That instance receives each incoming method call as a Method object and an array of parameters, parses the method name to look for a concept it handles, and either generates a result or throws an exception. Once you see the capabilities the stack has, the process to get from a JAX-RS class using @Inject PersonRepository foo to having that actually work makes more sense (there's a toy sketch of the handler side after this list):

  • The JAX-RS servlet receives a request for the resource
  • It asks the CDI environment to create a new instance of the resource class
  • CDI runs through the fields and methods of the class to look for annotations it can handle, where it finds @Inject
  • It looks through its contributed extensions and finds JNoSQL's ServiceLoader-provided extension
  • One of the beans from that extension can handle creating Repository instances
  • That bean creates a proxy object, which handles method calls via invoke
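Here's a toy version of that last step - not JNoSQL's actual code, just the shape of the technique, with a BiFunction standing in for the real query machinery:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.List;
import java.util.function.BiFunction;

public class ToyRepositoryFactory {
    @SuppressWarnings("unchecked")
    public static <T> T create(Class<T> repositoryType, BiFunction<String, Object, List<?>> store) {
        InvocationHandler handler = (proxy, method, params) -> {
            String name = method.getName();
            if(name.startsWith("findBy")) {
                // Parse the property out of the method name, e.g. "findByName" -> "Name"
                // (a real implementation would also honor return types like Stream)
                String property = name.substring("findBy".length());
                return store.apply(property, params[0]);
            }
            throw new UnsupportedOperationException(name);
        };
        return (T)Proxy.newProxyInstance(
            repositoryType.getClassLoader(),
            new Class<?>[] { repositoryType },
            handler);
    }
}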

Still pretty weird, but at least there's a path to understanding.

The Overall Importance

The more I use modern JEE, the more I see CDI as the backbone of the whole development experience. It's even to the point where it feels unsafe to not have it present managing objects - like everything is otherwise held together by shoestrings. And its importance is further driven home by just how many specs depend on it. In addition to many existing technologies either switching to or otherwise supporting it, like JSF above, pretty much any new Jakarta EE or MicroProfile technology has it as the primary mechanism of interaction. Its importance can't be overstated, and it's worth taking some time either building an app with it or at least seeing some tutorials of it in action.