The RuntimeEnvironment Idiom

Jun 18, 2020, 9:16 AM

Tags: java xpages
  1. XPages: The UI Toolkit and the App Framework
  2. The RuntimeEnvironment Idiom
  3. NSF ODP Tooling 3.1.0: Dynamically Including Web Resources

One of the specific problems we encountered with my aforementioned client app - first when expanding it to include REST services, and then later when making it portable outside an NSF entirely - is dealing with varying mechanisms for interacting with the surrounding environment.

The Problem to Solve

The immediate way this distinction comes up when adding JAX-RS services or other OSGi servlets is trying to get a handle on the current Domino user session or context database. In an XPages app (including in code called in a plugin-based library), you can just do:

Session s = ExtLibUtil.getCurrentSession();

However, this will return null if called while processing an OSGi servlet. Instead, servlet code should call:

Session s = ContextInfo.getUserSession();

Same idea - they both return a session based on the current authenticated user from the HTTP stack - but they have different backing implementations. So my first pass was to coordinate these inside an AppUtil class in a method like this:

public static Session getSession() {
	if(FacesContext.getCurrentInstance() != null) {
		return ExtLibUtil.getCurrentSession();
	} else {
		return ContextInfo.getUserSession();
	}
}

This worked pretty well, until I added Tycho-based compile-time unit tests, which run in an OSGi environment where neither of those paths would return a session. So I had to add a fallback that would just eventually spawn a new NotesFactory.createSession() if it couldn't find another one.
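As a hedged sketch (not the app's actual code), the method ended up shaped roughly like this:

public static Session getSession() {
	if(FacesContext.getCurrentInstance() != null) {
		// XPages request: use the XSP-provided session
		return ExtLibUtil.getCurrentSession();
	} else if(ContextInfo.getUserSession() != null) {
		// OSGi servlet request: use the HTTP stack's session
		return ContextInfo.getUserSession();
	} else {
		// No HTTP context at all (e.g. Tycho-run tests): create one directly.
		// A real implementation should cache this and recycle() it when done
		try {
			return NotesFactory.createSession();
		} catch(NotesException e) {
			throw new RuntimeException(e);
		}
	}
}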

It's one thing for a getSession() method to balloon in logic, but Notes runtime access isn't the only problem like this. Take the case of validating model objects as part of the "save" process. In an XPages environment, validation errors should be reported as FacesMessages on the view root or, ideally, attached directly to the form control that represents the invalid field. In a REST service, though, the ConstraintViolationException should bubble right up to the top and be returned as an appropriately-formatted JSON object with a corresponding HTTP status code. Originally, we handled this similarly: we moved the FacesMessage stuff out of the model objects and into the AppUtil class and handled it with an if tree.

The RuntimeEnvironment Class

Eventually, though, there was enough customizable behavior that these branching methods in one class got out of hand, and that's even before getting into cases where a class (like FacesContext) may not even be available at runtime at all. So I implemented a RuntimeEnvironment class as a service. It started out like this:

public interface RuntimeEnvironment {
	static final List<RuntimeEnvironment> knownEnvironments = AppUtil.findExtensions(RuntimeEnvironment.class).stream()
			.sorted((a, b) -> Integer.compare(b.getWeight(), a.getWeight()))
			.collect(Collectors.toList());

	public static RuntimeEnvironment current() {
		return knownEnvironments.stream()
			.filter(RuntimeEnvironment::isCurrent)
			.findFirst()
			.orElseGet(UnknownEnvironment::new);
	}

	boolean isCurrent();
	int getWeight();
}

The AppUtil.findExtensions method is a simplified wrapper around the IBM Commons ExtensionManager call to find services in a type-safe way.
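If you haven't seen that mechanism before, a minimal sketch of such a wrapper - assuming ExtensionManager's findServices(List, ClassLoader, String) signature and ignoring caching - might look like:

import java.util.List;

import com.ibm.commons.extension.ExtensionManager;

public class AppUtil {
	@SuppressWarnings("unchecked")
	public static <T> List<T> findExtensions(Class<T> extensionClass) {
		// ExtensionManager looks up implementations by fully-qualified service name,
		// checking both META-INF/services files and Equinox-registered extensions
		return (List<T>)ExtensionManager.findServices(null,
			Thread.currentThread().getContextClassLoader(),
			extensionClass.getName());
	}
}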

This allows me to define a series of RuntimeEnvironment implementations that may or may not be included in a given packaging of the app, while the isCurrent() and getWeight() methods allow me to distinguish between multiple valid environments to find the most specific. To get an idea of what I mean, here is the current suite of environment implementations:

(Image: RuntimeEnvironment type hierarchy)

These run a wide gamut. XPagesEnvironment and OSGiServletEnvironment are the big ones that kicked it off, but TychoEnvironment is there to handle compile-time tests, while NotesEnvironment lets the same code work in some utilities we launch from within the Notes client - and SWTRuntimeEnvironment allows those same tools to run outside of OSGi.
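To give a hedged idea of the shape of an implementation (the real classes may check more than this), the XPages one can be roughly:

public class XPagesEnvironment implements RuntimeEnvironment {
	@Override
	public boolean isCurrent() {
		// An active FacesContext is a strong signal we're inside an XPages request
		return FacesContext.getCurrentInstance() != null;
	}

	@Override
	public int getWeight() {
		// Hypothetical value: high enough to beat generic fallback environments
		return 100;
	}
}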

Once I broke ground on these classes, the number of situations where they're useful quickly became obvious. Take, for example, resolving a variable. The on-Domino XPages implementation looks like what you'd expect:

@Override
public <T> @Nullable T resolveVariable(final String varName) {
	FacesContext context = FacesContext.getCurrentInstance();
	return (T) context.getApplication().getVariableResolver().resolveVariable(context, varName);
}

In the OSGiServletEnvironment and JakartaRuntimeEnvironment cases, though, I can use CDI instead:

@Override
public <T> @Nullable T resolveVariable(final String varName) {
	Instance<Object> instance = CDI.current().select(NamedLiteral.of(varName));
	return instance.isResolvable() ? (T)instance.get() : null;
}

It gets down to little things, too, like how the POST destination for form-based login can be /names.nsf?Login on Domino but /j_security_check on other webapp servers.
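That sort of difference hangs off the same interface nicely. As a sketch - getLoginAction() is a hypothetical method name, not necessarily what the real app calls it:

public interface RuntimeEnvironment {
	// ... the methods from above ...

	default String getLoginAction() {
		// The standard Servlet form-based login destination
		return "/j_security_check";
	}
}

public class XPagesEnvironment implements RuntimeEnvironment {
	// ... the methods from above ...

	@Override
	public String getLoginAction() {
		// Domino's form-login endpoint
		return "/names.nsf?Login";
	}
}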

Seeing It Elsewhere

This sort of idiom is by no means anything I came up with. You can see it pretty frequently - in fact, I highlighted the way the IBM Commons stack does a very similar thing when running XPages outside Domino:

(Image: IBM Commons Platform hierarchy)

This serves essentially the same purpose, being filled with mechanisms for getting output streams, finding resource locations, and retrieving named objects.

Regular Use

Should you implement something like this for most apps? Probably not, no - for most even-moderately-complex XPages applications, having some if tests in a central util to distinguish between XPages and OSGi servlets should be enough. I think it's a useful instructional example, though, and it sure was critical in getting this massive thing working outside Domino. As we make our apps more portable, this is the sort of technique we should keep in mind.

XPages: The UI Toolkit and the App Framework

Jun 17, 2020, 9:19 PM

Tags: java xpages
  1. XPages: The UI Toolkit and the App Framework
  2. The RuntimeEnvironment Idiom
  3. NSF ODP Tooling 3.1.0: Dynamically Including Web Resources

Lately, one of my client projects has been picking up the pace on the years-long effort of taking a giant XPages app, making the business logic portable, and incrementally cutting down on the "XPage-iness" of it all. I expect that this will be a recurring source of blog posts, and this one is about distinguishing between "XPages the UI toolkit" and "XPages the web app framework".

XPages in an NSF

Coming from a Domino perspective, there is no distinction between the two, and that's largely because of the path we took to get here. Other than existing inside the same NSF, the distinction between "classic" Notes apps (web or client) and XPages couldn't be more stark. Legacy design elements are only shared in ways that are completely divorced from their original UI presentation (for the better), and the runtimes - as much as legacy elements can be said to have a "runtime" - are entirely distinct.

An XPages app can be thought of as conceptually a "normal" Java WAR-based webapp housed inside an NSF, and it has a lot of the trappings: classes in WEB-INF/classes, libraries in WEB-INF/lib, and an OSGi-style WebContent folder for miscellaneous files. It's not technically a normal webapp - there's no "web.xml" and the XPages outer "LCD" runtime is actually more like one giant webapp that acts like many - but it's close.

Critically, though, the Domino HTTP router only routes requests for ".xsp" files or "/xsp/" folders to your app's XPages environment, and this is the biggest technical and conceptual impediment. You can't (within an NSF) intercept just any incoming request and process it as you would in a normal webapp. You can kind of shim your way into it with servletFactory, but it's a fiddly process and limited to "/xsp/..." URLs.

Additionally, a running XPages app only exists in a very constrained way between requests. While the JSF-level "Application" and the Servlet-level session exist, you don't work with a per-app ServletContext the way you do in a Servlet webapp. You can hook in with ApplicationListeners and similar constructs, but they're still based on the lifecycle of the XPages app, which comes into existence only on the first request and dies (usually) half an hour after the last.

These, plus the specifics of Domino data access, combine to make the "XAgent" - an abomination of a concept - the catchall replacement for specialized rendering, batch processing, and even scheduled tasks.

XPages the View Engine

These are all accidents of history, though. They stem from the firm requirement that existing Domino HTTP behavior remain intact even with its brain transplant, as well as the "soft" requirement that XPages in an NSF pretend to be "forms with repeats and partial refresh".

At its core, XPages is "just" a web view engine: its only job is to accept a request from an HTTP client and return some HTML. The concepts it uses to accomplish this - components, renderers, managed beans, themes - are all incidental to the main task. This is the "V" part of MVC. Admittedly, even without the NSF compromises, XPages bleeds beyond its assigned third of the triad, and it inherited this from JSF. JSF is also billed as MVC, but it completely subsumes the "Controller" part and partially eats the "Model" part with its bean management.

Still, though, even a domineering framework like JSF slots in as just one component of a normal webapp, rather than being the whole thing as XPages is in an NSF. For example, take the app behind this blog, which partially looks like this:

(Image: Java and JSP resources in the blog)

It uses JSP as its view template engine, but is it a "JSP app"? Not really. The fact that it uses MVC 1.0 is more important to understanding it, but that's really an extension to JAX-RS. You could make a strong case that it's a "JAX-RS app", especially when you expand the "Services" section in Eclipse:

(Image: REST services in the blog)

That covers more of it, but still leaves parts out. It has application-wide beans by way of CDI, entirely-UI-free scheduled tasks kicked off from a ServletContextListener, and core business logic and model objects that are kept in a module that doesn't even know about the Servlet API.

It's layered, but the layers are explicable and the distinctions create a tremendous amount of flexibility. I could, if I wanted, change to Thymeleaf for the front end with essentially no friction, to JSF or Vaadin with only mildly more, or to a client-JS REST UI by chopping off the top two layers outright.

Okay, So?

This description isn't a call to action - there's nothing inherently wrong about an XPages app in an NSF, especially a small-to-medium one - but this will be an important part of the conceptual groundwork in the months to come. To figure out what to do with all these piles of XSP markup and framework-specific business logic we have, we'll have to do a lot of deconstruction.

Java ClassLoaders

Jun 5, 2020, 10:47 AM

Tags: java osgi xpages
  1. Java Services (Not the RESTful Kind)
  2. Java ClassLoaders
  3. Managed Beans to CDI

In my last post, I casually mentioned the concept of ClassLoaders a couple times, and I think that they deserve their own post. ClassLoaders are exactly the kind of thing that, once you do Java long enough, you start to take for granted, but which isn't necessarily immediately obvious for people not as immersed.

The Basics

The core job of a ClassLoader is what it says on the tin: it loads classes. Say you have this bit of code using a class from the core Java library:

long now = System.currentTimeMillis();

This uses two types: long, which is a built-in primitive type and not a class at all, and java.lang.System. long doesn't have to come from anywhere, but java.lang.System does, and that's the job of a ClassLoader. In this case, the Java VM will ask the contextual ClassLoader for a class by that name, and the ClassLoader will (at least in Java 8 - things got weird later) look into the core library and find a file named "java/lang/System.class" within "rt.jar", parse its binary contents into an executable class, and hand it back to the VM.

ClassLoaders are also the source of two problem reports you've likely seen: ClassNotFoundException and NoClassDefFoundError. These two basically mean the same thing: the running app tried to load a class by name, but it wasn't found - they just differ in context (the former generally when a class is asked for dynamically, the latter when it's referenced as part of compiled code). This sort of thing can occur when you write code using a class that's present in your development environment but is not present when run later - among XPages developers, this happens quite a bit when people drop some JARs into jvm/lib/ext in their Designer installation but don't do the same on Domino.
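To make the distinction concrete (com.example.MissingClass is hypothetical):

public class MissingClassDemo {
	public static void main(String[] args) {
		// ClassNotFoundException: a dynamic, by-name lookup fails at the call site
		try {
			Class.forName("com.example.MissingClass");
		} catch (ClassNotFoundException e) {
			// No ClassLoader in the chain could find the named class
		}

		// NoClassDefFoundError: compiled code references a class that existed at
		// compile time but is absent at runtime. The line below compiles fine
		// against a development classpath that contains the class, but throws
		// the Error when executed on a system missing it
		com.example.MissingClass obj = new com.example.MissingClass();
	}
}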

Resource Loading

In addition to finding classes, ClassLoaders have a few other tasks, the main one of which of interest to us is loading resources. In my previous post, I talked about how ServiceLoader looks for service files by a given name, like META-INF/services/com.sprockets.data.FizzBuzzConverter. It does this by checking with the current ClassLoader and calling cl.getResources("META-INF/services/com.sprockets.data.FizzBuzzConverter"), which will return a listing of resources from JARs (and JAR-like sources, like an NSF) that it knows about matching that name. In that way, multiple JARs can declare services with the same name without conflicting.
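A sketch of that lookup, using the example service name from that post:

import java.io.IOException;
import java.net.URL;
import java.util.Enumeration;

public class ServiceFileLister {
	public static void main(String[] args) throws IOException {
		ClassLoader cl = Thread.currentThread().getContextClassLoader();
		Enumeration<URL> resources = cl.getResources("META-INF/services/com.sprockets.data.FizzBuzzConverter");
		while (resources.hasMoreElements()) {
			// Each URL points into a different JAR (or JAR-like source) that declares the service
			System.out.println(resources.nextElement());
		}
	}
}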

ClassLoader Trees

Though conceptually your running program has "a ClassLoader", in reality it's almost definitely a chained series of ClassLoaders, rooted in the core system ClassLoader and then drilling down more specifically to your app's code. For example, take an application running in Apache Tomcat. In that case, Tomcat's documentation describes four basic tiers:

  • The core JVM ("bootstrap") ClassLoader that comes with any running Java program. As Tomcat's docs note, this implementation may vary
  • The central ("system") ClassLoader that contains the "just above the metal" classes, such as those you may add in the "CLASSPATH" environment variable
  • The Tomcat-specific ("common") ClassLoader, containing classes shared among all running applications. For example, javax.servlet.Servlet would be found here
  • Your app's ClassLoader, containing classes you write as well as any third-party JARs you bundled into your WAR file in WEB-INF/lib

When your code executes and requests a new class, the runtime will check first with your app's local ClassLoader and return what it finds there if present - if the class isn't present there, then that ClassLoader will delegate up to its parent, and so forth until it either finds a class or hits the root and throws a NoClassDefFoundError.
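You can see the chain on any running system by walking up the parents from a starting point:

public class ClassLoaderChain {
	public static void main(String[] args) {
		// Walks from the context ClassLoader up to the top of the chain
		// (the bootstrap loader is often represented as null at the very end)
		ClassLoader cl = Thread.currentThread().getContextClassLoader();
		while (cl != null) {
			System.out.println(cl);
			cl = cl.getParent();
		}
	}
}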

The way that each app has its own ClassLoader is also how you can have multiple apps on the same server that can each know about common core classes, but don't step on each other's toes with their own custom classes. Though javax.servlet.Servlet is the same class for two running apps, one app could have an internal class named "com.example.SomeBusinessLogic" and it wouldn't be visible to other running apps.

Dynamic ClassLoaders

Though the normal case of ClassLoaders is that sort of "do I have this class? If not, ask my parent" chain, the fact that a ClassLoader is itself a custom Java class means that its behavior can be pretty arbitrary. This is present in a normal web app ClassLoader: it knows to look in the WEB-INF/classes path within the WAR file instead of the normal behavior of checking from the root of a JAR, and it knows how to look in WEB-INF/lib for additional JARs to search.
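As a minimal sketch of what "arbitrary" can mean, here's a toy ClassLoader that loads bytecode out of a directory of its choosing when normal parent delegation comes up empty:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DirectoryClassLoader extends ClassLoader {
	private final Path root;

	public DirectoryClassLoader(Path root, ClassLoader parent) {
		super(parent);
		this.root = root;
	}

	@Override
	protected Class<?> findClass(String name) throws ClassNotFoundException {
		// Called only after delegation to the parent ClassLoader fails
		Path classFile = root.resolve(name.replace('.', '/') + ".class");
		try {
			byte[] bytecode = Files.readAllBytes(classFile);
			return defineClass(name, bytecode, 0, bytecode.length);
		} catch (IOException e) {
			throw new ClassNotFoundException(name, e);
		}
	}
}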

In an XPages application, the active ClassLoader is roughly similar to Tomcat's app ClassLoader example, but with a couple additional capabilities. The main one is that the NSF's ClassLoader - an instance of com.ibm.domino.xsp.module.nsf.ModuleClassLoader - has knowledge of how to treat an NSF as if it were a WAR file. In Designer's "Package Explorer" pane, you get a view of the NSF that makes it look basically like a normal WAR, where classes go in WEB-INF/classes and JARs go in WEB-INF/lib. However, it's still really a nebulous pool of notes floating around, and so the ModuleClassLoader does design-collection lookups for file resources of various types and loads the class bytecode or resource data from there.

It also, in a move presumably designed to inconvenience me personally, has explicit restrictions on what classes it can load: even though it knows about, for example, org.eclipse or com.ibm.domino.napi classes, it has a check to explicitly bar loading these. That's why, even if you configure Designer to see those classes and compile XPages code that references them, they won't be available at runtime.

OSGi ClassLoaders

OSGi ClassLoaders are a particular kind of dynamic ClassLoader. In addition to the normal hierarchical view of the world, they take on special responsibilities for ensuring that your OSGi module (which an XPages app kind of is) sees classes from other bundles based on its dependency rules, but not necessarily their resources. For example, take rules like this in an OSGi bundle's META-INF/MANIFEST.MF:

Require-Bundle: com.ibm.xsp.core
Import-Package: com.ibm.commons.util

These simple lines hide some beguiling complexity. With this definition, a running class in your bundle will be able to see:

  • All classes at the system level, such as java.lang.System
  • All classes contained within and exported by "com.ibm.xsp.core", such as com.ibm.xsp.FacesExceptionEx and com.ibm.xsp.url.UrlHandler
    • There's also special behavior going on here, because those classes are contained within an embedded JAR in the bundle, referenced as Bundle-ClassPath: lwpd.xsp.core.jar - this is an OSGi-ism
    • Though this bundle lists all of its packages in its Export-Package header, this is not a requirement: it's common for an OSGi bundle to have classes internally that are not accessible from outside
  • All classes exported by the dependencies that com.ibm.xsp.core itself marks as visibility:=reexport: "com.ibm.pvc.servlet" and "com.ibm.designer.lib.jsf"
    • This is why you can have a dependency on just "com.ibm.xsp.core" and access javax.faces.context.FacesContext even though it's not in the core XSP bundle
    • This is also transitive, though neither of those re-exported dependencies themselves re-export any dependencies
  • The classes from the "com.ibm.commons" bundle in the "com.ibm.commons.util" package. This means that com.ibm.commons.util.StringUtil is visible, but com.ibm.commons.extension.ExtensionManager is not, despite both being within the same bundle JAR

There are also tons of weird visibility and dependency details as well in OSGi, but that's the gist of it. Note that I specifically mentioned that the resources aren't visible. Though the Require-Bundle: com.ibm.xsp.core line makes all classes exported from the XSP core visible to your code, calling ServiceLoader.load(com.ibm.xsp.acf.HtmlFilteringFactory.class) will not find the DefaultHtmlFilteringFactory implementation declared in there, even though it's done in a ServiceLoader-compatible way. This is why IBM Commons papers over that difference with its "plugin.xml" extension declarations. OSGi actually contains a Service Loader Mediator specification to bridge this gap, but Domino doesn't include an implementation of that part.

Fragment Bundles

There's one special case with OSGi bundles that's worth highlighting: fragments. Normally, each bundle effectively has its own ClassLoader space, walled off from all others by OSGi's broker. However, if you declare your bundle as having a Fragment-Host of another active bundle, your code acts as if it's within the parent, gaining access to not just all of the parent bundle's classes, but also its non-class resources. Moreover, this works in the reverse: the parent also gains access to the fragment's classes and resources, though it generally won't "know" about them at the time of development.
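In manifest terms, the fragment side is just one extra header (bundle names here are hypothetical):

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.fizzbuzz.impl
Bundle-Version: 1.0.0
Fragment-Host: com.example.fizzbuzz.api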

This is a technique that's come in handy for me many times, in particular in cases like the XPages Jakarta EE Support project, where API bundles will use ServiceLoader to find their implementations. In those cases, one of the ways I get it to work in OSGi is to create a fragment bundle out of the implementation, meaning that the bundles remain distinct but now the API can find the META-INF/services files and classes it needs to operate.

This has a good number of other uses, too, such as providing platform-specific native code to an otherwise-platform-independent core bundle. The Notes.jar wrapper used in XPages land uses this type of technique. Though, to my knowledge, Notes.jar doesn't contain any actual native code, it's still delivered in two pieces:

  1. The "com.ibm.notes.java.api" bundle, which lists all of the exported packages but holds no code itself
  2. The "com.ibm.notes.java.api.win32.linux" bundle, which contains the actual Notes.jar and declares Fragment-Host: com.ibm.notes.java.api
    • I'm not sure why this is the case, but maybe Notes.jar is different on System i or something

If you have a bundle that needs access to lotus.domino classes, you then can either do Require-Bundle: com.ibm.notes.java.api or Import-Package: lotus.domino and it'll be resolved out of the fragment. There's also an Eclipse-ism in here: the first bundle has Eclipse-ExtensibleAPI: true, which is a tip-off to the IDE that it should specifically allow fragments to contribute available classes to the development environment. This is generally required when developing with Eclipse's plug-in tooling (shared with Designer), but it's not actually enforced one way or the other by the runtime.

Wrapping It Up

This is all definitely in the category of "you don't normally need to worry about it, but it's very helpful to know", like the previous ServiceLoader topic. Until you're implementing some low-level stuff, you're not likely to interact with the ClassLoader directly, especially to a level beyond calling Thread.currentThread().getContextClassLoader() or Foo.class.getClassLoader(). Knowing about it can help make clear what's going on in situations where a class shows up in development but not at runtime, or when the XPages ClassLoader tries to get too fancy and throws up on itself.

Java Services (Not the RESTful Kind)

Jun 4, 2020, 4:42 PM

Tags: java
  1. Java Services (Not the RESTful Kind)
  2. Java ClassLoaders
  3. Managed Beans to CDI

The concept of "services" in Java is fairly critical, but, especially with the XPages stack we've grown used to, the term covers quite a few different technologies.

Definition

Before I continue on, I want to make clear what I mean by "service" in this context. It's unrelated to REST services or even remote access of any kind; instead, it's about how an app can find implementations of some kind of class or interface within its runtime.

A very-common type of this sort of thing is a data adapter or converter. Say you have your own object FizzBuzz that you use within your app, one that represents data storable in multiple ways. One way to handle converting from various types to FizzBuzz would be a giant if tree, like:

public FizzBuzz convert(Object input) {
	if(input instanceof String) {
		// ...
	} else if(input instanceof JsonObject) {
		// ...
	} else if(input instanceof org.w3c.dom.Document) {
		// ...
	} else {
		throw new IllegalArgumentException("Cannot convert to FizzBuzz: " + input);
	}
}

That'd work well enough, especially for a small app. You can imagine, though, how this might get out of hand in an even moderately-complicated case, with the if tree turning into a tangled mess. Moreover, this doesn't allow for any extensibility without directly modifying the convert method - any new type will have to go into this, making management of a large team more cumbersome and completely cutting off the possibility of third-party additions.

So, to keep things scalable, it'd make sense to create an interface that would specify a generic way to convert some type of object to a FizzBuzz:

package com.sprockets.data;

public interface FizzBuzzConverter {
	boolean canConvert(Object o);
	FizzBuzz convert(Object o);
}

Then the code that actually needs to convert would look more like this:

public FizzBuzz convert(Object input) {
	Stream<FizzBuzzConverter> converters = moreOnHowToFetchLater();
	return converters
		.filter(converter -> converter.canConvert(input))
		.findFirst()
		.map(converter -> converter.convert(input))
		.orElseThrow(() -> new IllegalArgumentException("Cannot convert to FizzBuzz: " + input));
}

In a small case like this, that's not necessarily going to be a big deal, but it doesn't take too long for it to become desirable to break it apart. Take the case of JAX-RS providers, which do exactly this kind of entity conversion when processing HTTP requests. Everything over HTTP comes in as plain text (more or less), but programmers want to be able to accept an int input parameter, or to automatically convert their custom business-logic object to JSON. Without a separation like this, the code to handle all known types would be impossible to manage all in one place, and there'd be no way to handle custom types that didn't exist when the code was written.

Types

There are quite a few distinct types of services that I've run across, and I'll list them here in roughly the order of likelihood that a programmer coming from an XPages background will encounter them.

ServiceLoader Services

This is the most-common kind of service you're likely to encounter in a Java application, and you can generally identify it by its use of the META-INF/services directory inside a JAR. java.util.ServiceLoader itself was added to Java in 1.6 but was designed to codify habits that had become common beforehand.

The way this works is designed to be simple: you create a plain-text file within META-INF/services named after the service class you're implementing, and then put the names of your implementing classes within it, one on each line. So, in our above example, you'd create a file named META-INF/services/com.sprockets.data.FizzBuzzConverter and fill it with something like:

com.sprockets.data.impl.StringFizzBuzzConverter
com.sprockets.data.impl.JsonObjectFizzBuzzConverter
com.sprockets.data.impl.DomDocumentFizzBuzzConverter

Code that calls ServiceLoader.load(FizzBuzzConverter.class) will find all of those files within the current ClassLoader space (more fun with that down the line) and instantiate the named classes, returning an Iterable to loop through them.
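The consuming side then stays simple - here wrapped in the convert method from earlier:

public FizzBuzz convert(Object input) {
	// ServiceLoader instantiates each implementation listed in the
	// META-INF/services files lazily as the iteration advances
	for(FizzBuzzConverter converter : ServiceLoader.load(FizzBuzzConverter.class)) {
		if(converter.canConvert(input)) {
			return converter.convert(input);
		}
	}
	throw new IllegalArgumentException("Cannot convert to FizzBuzz: " + input);
}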

IBM Commons Services

Within the XPages stack, the bulk of service interactions are managed by the IBM Commons ExtensionManager class, which is a generic way to ask for a service type by String name.

In the normal case, this acts as a slightly-old-timey variant of the now-standard ServiceLoader mechanism, likely by dint of preceding the standard's introduction. Like ServiceLoader, it looks for files with the name you pass it in the META-INF/services directory in your app and adds instances of all the names it finds within.

What makes it important (and what gives it longevity in non-XPages OSGi apps on Domino) is that it also bridges into the Equinox OSGi service infrastructure when available and looks for services registered there by the com.ibm.commons.Extension name. The reason this is important is that, in an OSGi context, one bundle can't by default see the files in another bundle in its ClassLoader, which means that services registered via META-INF/services in one won't be picked up by a ServiceLoader call in another.

Since XPages's life spanned a pre-OSGi era and the 8.5.2 "Extensibility API" era, it bears the signifiers of both, smoothly papered over by IBM Commons:

(Image: com.ibm.xsp.core bundle services)

The Equinox loader looks in both places, in fact, which is why you can declare XPages services within an application using META-INF/services as well as within an OSGi bundle's plugin.xml file.
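For comparison, the plugin.xml flavor of the FizzBuzzConverter declaration would look something like this (sketched from memory of the com.ibm.commons.Extension point's format):

<?xml version="1.0" encoding="UTF-8"?>
<plugin>
	<extension point="com.ibm.commons.Extension">
		<service type="com.sprockets.data.FizzBuzzConverter"
			class="com.sprockets.data.impl.StringFizzBuzzConverter" />
	</extension>
</plugin>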

Equinox plugin.xml Extensions

I mentioned above that IBM Commons bridges the difference between ServiceLoader and Equinox, but now I'd better go into a little more detail about the latter.

"Equinox" refers to the particular OSGi implementation that underlies both Eclipse-the-IDE (and thus Notes) and Domino's web stack. While Equinox is the fully-fledged reference implementation of OSGi, plugin.xml is specific to it and I believe pre-dates Eclipse's migration to OSGi (which we still see reflected in 9.0.1FP10+'s plugin trouble).

plugin.xml used to house a lot of information that was moved over to META-INF/MANIFEST.MF, but its primary remaining function is to declare services for the Equinox environment. Eclipse itself uses this extensively, and it remains the primary way to extend the IDE's capabilities.

One important thing to note here is that plugin.xml's extensions aren't limited to just providing a service class implementation. While many do that, it's also used heavily to provide configuration information without executable classes at all.

Multi-type "FactoryFinder" style

This type of service locator is similar to the IBM Commons ExtensionManager, but is usually confined to an individual domain, like a specific Jakarta EE spec. The way this idiom works is that there's a central coordinating class, usually named FactoryFinder, whose job it is to locate implementations of services from one or more sources, and often using a known fallback implementation.

I encountered one of these when diving deep into the XPages stack. javax.faces.FactoryFinder is responsible for finding implementations of very-low-level entities, like the services that spit out JSF applications at the start of initialization, or those that create FacesContext objects.

These will often have specialized behavior. For example, the standard SOAP API looks through a system property, then an external "jaxm.properties" file, then ServiceLoader, then an older META-INF/services name, then OSGi, and finally falls back to a default class name.

Java 9 Modules

I have to admit that I haven't actually used this, but it's too important to skip. Java 9 and above include a module system that is sort of like an OSGi bundle in that it lets you declare what your module exports and other characteristics about its interactions with the outside world.

Along with this support came a new way to declare services. Since this is also baked in to Java itself, it gets the advantage of also working with ServiceLoader. In this case, instead of writing a text file in META-INF/services, you declare the type of service you're providing and the class implementing it in the module definition. This not only unifies the service with other module information, but it also makes it more type-safe and programmatically clear. It's neat-looking.
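Both halves of that declaration look like this in module-info.java files (module names here are hypothetical):

// The providing module names its implementations directly:
module com.sprockets.data {
	exports com.sprockets.data;
	provides com.sprockets.data.FizzBuzzConverter
		with com.sprockets.data.impl.StringFizzBuzzConverter;
}

// A consuming module declares its intent to look the service up:
module com.sprockets.app {
	requires com.sprockets.data;
	uses com.sprockets.data.FizzBuzzConverter;
}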

OSGi

I already mentioned that Equinox can use the "plugin.xml" file to do cross-bundle services in OSGi, but I also mentioned that it's specific to that one implementation and not actually part of the OSGi spec.

Instead, OSGi has a couple (for some reason) standard mechanisms for providing and consuming services. I encountered these mechanisms in practice when I created a UserRegistry implementation for Open Liberty.

In my first version, I declared my services programmatically in the bundle's activator (which is a class that you can write to run when your bundle is loaded/unloaded). In that way, you can dynamically tell the runtime that your bundle provides any number of services.

In my second revision, I changed to using what's dubbed Declarative Services. These do basically the same thing, but are defined for the runtime in a combination of the META-INF/MANIFEST.MF file and some service-definition files in the bundle - essentially, like a re-thought version of plugin.xml.
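With the annotation flavor of Declarative Services, a component can be declared as tersely as this (a sketch; the XML the runtime actually reads is typically generated from these annotations at build time):

import org.osgi.service.component.annotations.Component;

// Registers this class as a FizzBuzzConverter service in the OSGi
// service registry when the containing bundle activates
@Component(service = FizzBuzzConverter.class)
public class StringFizzBuzzConverter implements FizzBuzzConverter {
	@Override
	public boolean canConvert(Object o) {
		return o instanceof String;
	}

	@Override
	public FizzBuzz convert(Object o) {
		return FizzBuzz.parse((String)o); // hypothetical factory method
	}
}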

Summary

Okay! So, what's the upshot? Well, in my work, I use the first two all the time: inside a non-Domino app, META-INF/services is king; when working with Domino, IBM Commons ExtensionManager handles everything I need.

As far as implementing your own services, it's definitely a critical concept to keep in your pocket (I can only assume it's somewhere in Design Patterns). You could certainly go crazy with it and make a real mess of incomprehensible indirection, but it's probably useful more often than you'd think at first. Give it a shot next time you find yourself writing a big if tree with complicated branches.

My Active Open-Source Projects

May 8, 2020, 11:01 AM

Over the years, I've spawned a number of open source projects, both in my personal GitHub account and in OpenNTF's, but it'd be fair to say that not all of them are actively updated or see common use.

Nowadays, I have a set of tools that I actively develop (either solo or with a team) and which make up critical parts of my development infrastructure, and I figured it'd be useful to give an overview of them.

NSF ODP Tooling

This is my current favorite project by virtue of how much time it saves me every day and for its future potential. I wrote a series on this project a while ago, so I won't go over all the details of it here. The gist of it, though, is that this project lets me have a Maven tree for one of my big client projects that includes an array of OSGi bundles and have a single Maven install run build all of those, assemble an update site with them and a bevy of dependencies, compile over a dozen NSFs (most with complicated Java code), and end up with a distribution ZIP containing importable update sites and deployable NTFs, all from my Mac with no Designer involved.

I have visions of this project forming the central infrastructure for a post-Designer world, and that's shaping up in a couple ways so far. One of those ways is the DXL and XPages LSP contributor component that allows for pretty-solid editing of, uh, DXL and XPages in tools that use the XML Language Server, such as Eclipse and Visual Studio Code. And that plays in to the other project I use daily, the XPages JEE Runtime.

XPages JEE Runtime

This is the project that started as a frenzied descent into madness and which I eventually hammered into shape enough to run real apps (with a side path where I also got XPages running on Android and iOS).

Now, this is the main way I do development on that client app. I have an Open Liberty server set up in Eclipse and a webapp variant of the XPages app that points to the same XPages, Custom Controls, and Java code from the NSF's ODP representation, and I have some hooks to direct all database references to the DB running in my dev VM. Since it's not a 100% perfect representation of the Domino environment, I still need to periodically sync it back to the NSF and test how it runs in there (and with the OSGi environment that I'm not using in the webapp), but I'm experienced enough at this point to generally know the potential pitfalls.

There's also a dark part of me that keeps being tempted to actually use this for production at some point, since it works so well now, and pushes aside so many hassles of loading and deploying on Domino itself. That would play in to the next project, the one that's hosting this very blog right now.

Domino Open Liberty Runtime

This is my project where I set up a sidecar Open Liberty instance alongside Domino, which allows for using native local NSF access while also having a full, modern Jakarta EE server with all the bells and whistles.

Though this project is a bit more staid than some of the others, I've gone in and made some interesting improvements lately. One was my journey into RunJava the other month, which I still think is a little too cute to put into production, but which actually should do the job just fine.

The other improvement, though, has some more immediate benefits. I added the ability to specify and auto-download AdoptOpenJDK Java runtimes to use instead of Domino's provided JVM. These runtimes still gain the same benefit of running with local Domino NSF access, but aren't constrained by Domino's once-again-long-in-the-tooth JVM. So you can, for example, specify that you'd rather bring in Java 14 and the runtime will auto-download it for you and launch Liberty using that. I haven't quite rolled that one out to this blog server yet, but it's on the docket. I'd love to bring in Java records, for example, and now there's nothing stopping me from doing so.

XPages Jakarta EE Support

I didn't have a good segue for this one.

This is a project I started a couple years ago initially as a way to expand on Martin Pradny's original plugin to make writing JAX-RS resources inside an NSF easy. It's grown into my project to essentially try to bring the XPages runtime up to code, at least in the parts that I want to use for work. Though it's constrained by the hard limit of the ancient Servlet API Domino's container provides, I've been able to bring in some important updates for EL and JAX-RS, and also allow for using CDI for managed beans and JAX-RS resources.

CDI is actually a whole huge topic that I have some draft posts for. As far as Java development is concerned, CDI is Important with a capital "I".

ODA

There's not a lot of fanfare with the OpenNTF Domino API, but that's largely intentional: as an improvement on the normal lsxbe API, it does its job and doesn't currently need any radical changes. I'm mostly including it here because, though it doesn't change much, it's periodically updated to cover the sprinkling of new Java methods HCL adds with each release.

generate-domino-update-site

While I don't use this project as such daily, I sure do benefit from its output. This is the Maven plugin that generates new update sites, which is required for up-to-date OSGi development for Domino in lieu of IBM/HCL ever updating their own release.

Other than being something I run with every new Domino release, I've also made some improvements recently. Some of those just relate to improving behavior in edge cases, but a nice one I added the other week was downloading of source components from Eclipse Neon. Though the source for the XPages runtime and the whole Expeditor scaffolding remain unavailable, I am able to look up and download the source for the unmodified Eclipse components, and this results in a more-pleasant development experience in Eclipse.


I have a few other projects that I use periodically, such as the NSF File Server, but those are the big-ticket ones.

The Lay of the Java Land, Early 2020

Mar 4, 2020, 2:34 PM

Tags: domino

I was musing earlier (somewhat incorrectly) about the weird state of Java versions, and it got me thinking about how odd the landscape looks in general lately, even as things are progressing splendidly. And, since not everyone follows a dozen Java-related Twitter feeds to keep up on all this stuff - and because Domino has ignored so much of this for so long - I figured it'd be useful to have a summary.

Java Itself

For starters, there's Java-the-language, which has gone through some changes both in how it's licensed and how it evolves recently, largely for the better.

Release Cadence

The most notable change is how the version pace has picked up. Originally, Java was updated pretty regularly, but things slowed down after Java 6's release in late 2006. For whatever reasons, Sun took an IE-6-style break for a while and didn't come out with 7 until 2011 and then 8 in 2014. Starting with 9 in September 2017, though, some big changes came in:

  • New integer Java releases come out every six months
  • Starting with 11, "long-term support" releases will come out every three years, or every six integer versions

So Java 10 came out in March 2018 and 11 came out in September 2018, but 11 is the next "real" release after 8. Similarly, 12 and 13 have come out in the intervening time, but it won't be until Java 17 in September 2021 that the next LTS will arrive. Similar to other rapid+LTS release systems like Ubuntu's, the "intermediate" releases are expected to be stable and ready for production, but you can only expect to get commercial support for the current intermediate release plus any LTS releases still in their support window.

Along those lines, one could reasonably expect Java server vendors to stick to the LTS releases, even ones like HCL that aren't licensing supported builds from IBM or Oracle.

Preview Features

One of the main points of the faster release cadence is to allow quicker delivery of new features, and paired with that is Oracle's new willingness to put "preview" features in a release before they can call them officially solid and giving them cover if they decide they want to break the syntax. These are enabled with special flags at compile- and run-time, and currently include a handful of nifty features like yield in switch blocks and multi-line strings (a type of "heredocs") that will almost definitely become "official" features by the time the next LTS rolls around.
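For a taste, here's roughly what those two look like, in the syntax that was eventually finalized:

// Switch expressions, with yield for block-bodied cases
String kind = switch(dayNumber) {
	case 6, 7 -> "weekend";
	default -> {
		yield "weekday";
	}
};

// Text blocks, Java's take on multi-line strings
String html = """
	<p>Hello, world</p>
	""";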

Removal of Features

They also gave themselves permission to remove features from the core JRE distribution, in particular a handful of specs that have gotten less used over time or otherwise make more sense to be part of Java/Jakarta EE specifically. Of note for Domino developers is the CORBA API, which we almost never use directly but which is required to load Notes.jar. That's gone in Java 11, and so (for now at least) using Notes.jar on Java 11+ requires including a replacement implementation.

AdoptOpenJDK

I mentioned it in a Java grab bag before, but it's worth reiterating: Java (the language and the platform) is free and open-source for all to use, but Oracle's specific JDK/JRE builds are not. For all intents and purposes, AdoptOpenJDK is the sole go-to place to get Java for private and commercial use now, unless you specifically want to pay Oracle or IBM for support.

Additionally, there are now two core JVMs to choose from when downloading Java: HotSpot and OpenJ9. HotSpot is essentially the "normal" one, the Java core that Sun/Oracle have been shipping forever. OpenJ9 is an open-source version of the J9 JVM that IBM has long maintained and put into all of their products (Domino included). IBM open-sourced it and contributed it to the Eclipse Foundation (they'll come up again shortly), and it's making a name for itself as a solid, lean, and quick-to-start runtime for containerized systems in particular.

Servers

The world of Java server technology, separate from the language, has also been going through some churn with positive results. In this case, I mostly mean Java/Jakarta EE - Spring has been chugging along without too much apparent turmoil. Additionally, I don't know too much about Spring, so I won't cover it here.

Over the last decade, Oracle seemed to generally lose a lot of interest in developing Java EE - while they still put out new releases, they started getting further apart and the overall sense was that Oracle would be much happier if they just didn't have to deal with it anymore.

Microprofile

It was around this time that the community outside of Oracle's JEE team got a little antsy about this slowdown and lack of focus and started the Eclipse Microprofile project. It started out as a thin subset of JEE technologies - just JAX-RS, JSON-P, and CDI - tailored for the purposes of making it easier to write microservices with an extremely-low footprint. Quickly, though, it grew beyond just a selection of existing specs and started to grow new specifications that JEE didn't have to support the mission. Beyond just microservices-specific improvements, these specs also bring along tools that are handy in any old application, like an improved REST client and an annotation-based Configuration API. Microprofile turned into the go-to place for new development in the JEE world while Oracle was slowing down.

Jakarta EE

Oracle managed to get the (splendid) Java EE 8 release out the door eventually, but decided they had had enough of shepherding the platform. Fortunately, instead of consigning it to a slow death, they handed the reins over to Eclipse, which formed the awkwardly-named EE4J project to oversee it. Since Oracle didn't give them the rights to use the name "Java", the actual platform itself was rebranded as "Jakarta EE", which had its own "Eclipse-washed" Jakarta EE 8 release.

That transition has gone very well, though there's one hurdle on the horizon: Oracle also didn't grant the rights to use the javax.* namespace for any new specifications, and so Eclipse decided to make the jump in Jakarta EE 9 to switch all of the specifications over to jakarta.*. What this means in practice is that all of those javax classes (like Servlet) we use now will, in their JEE 9 incarnations, be renamed in the style of jakarta.servlet.Servlet. There will likely be tools in various IDEs and toolchains to help with the conversion, and I suspect that app servers like Liberty will still support the old names for a good while, but it'll be a weird time.

Eclipse

It's also a bit of a weird time for the pairing of Jakarta EE and Microprofile. The latter came about when the former was moribund, but now they're both active and within the same open-source organization. Microprofile's remit isn't exactly the same as Jakarta's, so they're both going to continue as-is for at least a good while. Still, it feels a bit odd to have two Java server frameworks in the same place, and so it's possible that Microprofile will either be subsumed into JEE (like how JEE already has "web" and "full" profiles) or will be something of an innovation area to push new technology faster before sending it back upstream to JEE. That'll be interesting to watch.

GraalVM and Quarkus

Finally, I'll mention a couple "miscellaneous" technologies that have been developing in the background while all of this was happening.

GraalVM (which is presumably like the French word for "grail" and not pronounced like something a goblin would say) is a variant of the JVM core that Oracle has been working on for a few years. I think it's meant to be roughly equivalent to LLVM but for the Java world: higher performance than before, multi-language support, and compilation down to native binaries. It's an open-source (GPL) project and, while Oracle offers a paid Enterprise Edition, the Community Edition is legal for production use.

Alongside this has come along Quarkus, which is a Microprofile implementation laser-focused on speed and resource usage. It doesn't require GraalVM's native compilation, but the pairing of the two is a prime part of Quarkus's message. Quarkus isn't a full Jakarta EE server, but it's a very-intriguing purpose-built stack for developing speedy Java server apps primarily for containers. It's also a good conceptual example of allowing developers to write fairly-dynamic code (like CDI injection) but then turning that into concrete bindings at compilation time instead of deferring all lookups to runtime.

It's been on my list to kick the tires on these things specifically for a little while now. They're both hitting their stride lately, so I suspect that they'll have interesting effects over time.

Lessons From Fiddling With RunJava

Mar 3, 2020, 9:49 AM

Tags: java websphere

The other day, Paul Withers wrote a blog post about RunJava, which is a very-old and very-undocumented mechanism for running arbitrary Java tasks in a manner similar to a C-based addin. I had vaguely known this was there for a long time, but for some reason I had never looked into it. So, for both my sake and general knowledge, I'll frame it in a time line.

History

I'm guessing that RunJava was added in the R5 era, presumably to allow IBM to use existing Java code or programmers for writing server addins (with ISpy being the main known one), and possibly as a side effect of the early push for "Java everywhere" in Domino that fell prey to strategy tax.

Years later, David Taib made the JAVADDIN project as a "grown up" version of this sort of thing, bringing the structure of OSGi to the idea. Eventually, that morphed into DOTS, which became more-or-less supported in the "Social Edition" days before meeting a quiet death in Domino 11.

The main distinction between RunJava and DOTS (other than RunJava still shipping with Domino) is the thickness of the layer above C. DOTS loads an Equinox OSGi runtime very similar to the XPages environment, bringing in all of the framework support and dependencies, as well as services of its own for scheduled tasks and other options. RunJava, on the other hand, is an extremely-thin layer over what writing an addin in C is like: the public static void main structure from runnable Java classes and the runNotes method you're given are directly equivalent to the main and AddinMain functions used by C/C++ addins.

Utility

Reading back up on RunJava got my brain ticking, and it primarily made me realize that this could be a perfect fit for the Open Liberty Runtime project. That project uses the XPages runtime's HttpService class to load immediately at HTTP start and remain resident for the duration of the lifecycle, but it's really a parasite: other than an authentication-helper servlet, the fact that it's running in nHTTP is just because that's the easiest way to run complicated, long-running Java code. For a while, I considered DOTS for this task, but it was never a high priority and has aged out of usefulness.

So I decided to roll up my sleeves and give RunJava a shot. Fortunately, I was pretty well-prepared: I've been doing a lot of C-level stuff lately, so the concepts and functions are familiar. The main run loop uses a message queue, for which Notes.jar provides an extremely-thin wrapper in the form of lotus.notes.internal.MessageQueue. And, as Paul reminded me, I had actually done basically this same thing before, years ago, when I wrote a RunJava addin to maintain a Minecraft server alongside Domino. I'd forgotten about that thing.

Lessons

Getting to the thrust of this post, I think it's worth sharing some of the steps I took and lessons I learned writing this, since RunJava is in a lot of ways much more hostile a place for code than the cozy embrace of Equinox.

#1: Don't Do This

The main lesson to learn is that you probably don't want to write a RunJava task. It was already the case that DOTS was too esoteric to use except for those with particular talent and needs, and that one at least had the advantage of being kind-of documented and kind-of open source. RunJava gives you almost no affordances and imposes severe restrictions, so it's really just meant for a situation where you were otherwise going to write an addin in C but don't want to have to set up a half-dozen compiler toolchains.

#2: Lower Your Dependencies Dramatically

The first big general thing to keep in mind is that RunJava tasks, if they're not just a single Java class file, are deployed right to the main Domino JRE, either in jvm/lib/ext or in ndext. What this means is that any class you include in your package will be present in absolutely everything Java-related on Domino, which means you're in a minefield if you want to bring in any logging packages or third-party frameworks that could conflict with something present in the XPages stack or in your own higher-level Java code.

This is a fiddlier problem than you'd think. A release or so ago, IBM or HCL added a version of Guava to the ndext folder and it wreaked havoc on the version my client's app was using (which I think came along for the ride from ODA). You can easily get into situations where one class for a library is loaded from XPages-level code and another is loaded from this low level, and you'll end up with mysterious errors.

Ideally, you want no possible class conflicts at all. I took the approach of outright white-labeling some (compatibly-licensed) code from Apache and IBM Commons to avoid any possibility of butting heads with other code on the server. I was also originally going to use the Darwino NAPI or Domino JNA for a nicer Message Queue implementation, but scuttled that idea for this reason. It's Notes.jar or bust for safe API access, unfortunately.

#3: Use the maven-shade-plugin

This goes along with the above, but it's more a good tool than a dire warning. The maven-shade-plugin is a standard plugin for a Maven build that lets you blend together the contents of multiple JARs into one, so you don't have to have a big pool of JARs to copy around. That on its own is handy for deployment, but the plugin also lets you rename classes and aggregate and transform resources, which can be indispensable capabilities when making a safe project.

#4: Make Sure Static Initializers and Constructors are Clean

What I mean by this one is that you should make sure that your JavaServerAddin subclass does very little during class loading and instantiation. The reason I say this is that, until your class is actually loaded and running, the only diagnostic information you'll get is that RunJava will say that it can't find your class by name - a message indistinguishable from the case of your class not even being on the server at all. So if, for example, your class references another class that's missing or unresolvable at load time (say, pointing at a class that implements org.osgi.framework.BundleActivator, to pick one I hit), RunJava will act like your code isn't even there. That can make it extremely difficult to tell what you're doing wrong. So I found it best to make very little static other than JVM-provided classes and to delay creation/lookup of other objects and resources (say, translation bundles) until it was in the runNotes method. Once the code reaches that point, you'll be able to get stack traces on failure, so debugging becomes okay again.
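In sketch form (hedged: beyond the runNotes method described above, the base-class details vary):

public class ExampleAddin extends JavaServerAddin {
	// Keep static state limited to JVM-provided classes: if anything here fails
	// to resolve, RunJava will just claim it can't find the addin class at all

	public ExampleAddin() {
		// Likewise, do as little as possible during instantiation
	}

	@Override
	public void runNotes() {
		// From here on, failures produce real stack traces, so defer resource
		// lookups (translation bundles, helper objects) to this point
		ResourceBundle translation = ResourceBundle.getBundle("messages"); // hypothetical bundle name
		// ... main message-queue loop follows
	}
}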

#5: Take Care With Threads When Terminating

The Open Liberty runtime makes good use of java.util.concurrent.ExecutorServices to run NotesThread code asynchronously, and I'll periodically execute even a synchronous task in there to make sure I'm working with a properly-initialized thread.

However, when terminating, these services will start to shut down and reject new tasks. So if, for example, you have code that executes on a separate thread and might run during shutdown, it will likely fail silently and can cause your addin to choke the server.

#6: That Said, It's a Good Idea to Use Threads

A habit I picked up from writing Darwino's cluster replicator is to make your addin's main Message Queue loop very simple and to send messages off to a worker thread to handle. Doing this means that, for complex operations, the server console and the user won't sit waiting on a reply while your code churns through an individual message.

In my case, I created a single-thread ExecutorService and have my main loop immediately pass along all incoming commands to it. That way, the command runner is itself essentially synchronous, but your queue watcher can resume polling immediately. This keeps things responsive and avoids the potential case of the message queue filling up if there's a very-long-running task (though that's less likely here than if you're drinking from the EM fire hose).
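A hedged sketch of that loop, with pollMessageQueue() and handleCommand() standing in for the MessageQueue interaction and the real command logic:

ExecutorService commandRunner = Executors.newSingleThreadExecutor();

while(running) {
	String command = pollMessageQueue(); // hypothetical stand-in for the MessageQueue call
	if(command != null) {
		// Hand the command off and return to polling immediately;
		// the single-thread executor churns through commands in order
		commandRunner.submit(() -> handleCommand(command));
	}
}
commandRunner.shutdown();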

#7: Really, Don't Do This

My final tip is that you should scroll back up and heed my advice from #1: it's almost definitely not worth writing a RunJava addin. This is a special case because a) the goal of the project is to essentially be a server addin anyway and b) I was curious, but normally it's best to use the HttpService route if you need a persistent task.

It's kind of fun, though.

Targeting Domino for Webapps Incidentally

Feb 11, 2020, 5:26 PM

Tags: java maven

I recently had occasion to break ground on a new web project that uses a Notes runtime and has a web front end, and I figured it would be a perfect occasion to structure it in a way that is clean, portable, and, while it will run on Domino, doesn't have to use Tycho.

I ended up coming up with a setup that I'm pretty happy with, and so I put up an example on GitHub for anyone else to use as a reference for similar cases.

What Is This, Specifically?

This is an application that consists of a couple main concepts:

  • Maven for project structure and dependencies
  • Core "plain Java" module that contains code that's intended to be portable and doesn't even know it's in a web app
  • JAX-RS-based REST API
  • Client JS web UI written in Stencil and transpiled with Node
  • Standard webapp project for JEE containers such as Liberty
  • Domino project to wrap the app up as an OSGi bundle

What this is specifically not is an XPages project. And, while it can use a Notes runtime and access NSFs, it's also not something that will be stashed inside an NSF, and the "Notes" part is optional and really only included here to show it's possible. The idea is that this is a standard web app first and a Domino thing second.

Project Structure

The project is organized as a Maven module tree like so:

  • domino-webapp: The parent container project just for configuration
    • core
      • webapp-core: This is the main place for UI-independent business logic
    • web
      • webapp-api-jaxrs: This contains the JAX-RS-based REST API, which exposes the core business logic to the web
      • webapp-webui: This contains a Stencil-based JavaScript app. It doesn't need to be Stencil specifically, or even NPM-based at all, but I find Stencil to be a pretty good choice for this
      • webapp-jee: This is the JEE-container web app, containing very little code of its own and just intended to output a WAR
    • domino
      • webapp-domino: This is the Domino equivalent to the previous project, but contains a chunk of adapter code to get things working, plus some Maven configuration to generate an appropriate OSGi bundle
      • webapp-dist-domino: This is a distribution project that pulls in the Domino OSGi bundle and creates a p2 repository, and then a "site.xml" file for the benefit of importing into an NSF Update Site

How the OSGi Part Works

In going deeper into what's going on, I'm going to start at the end: how to go from a normal web app to a Domino-friendly OSGi bundle. If you're not familiar with what I mean by "web app" in general and in a Domino plugin in particular, it's the sort of thing that Sven Hasselbach wrote a series about a few years back: a Java/Jakarta EE Servlet application using the "WebContainer" extension point in the Domino HTTP runtime.

Traditionally, these projects are built as plain-old Eclipse projects, where you drop a bunch of JARs for your framework of choice into a plug-in project and write your code in there, using Eclipse's Plug-in Development Environment. This works well enough as far as it goes, but it puts constraints on how you do development - in particular, it pretty much requires Tycho if you transition to a Maven structure, which would then have massive penalties for the rest of your project.

Fortunately, the thing about an OSGi bundle is that it's really just a JAR file with special metadata, and so it doesn't actually have to be created with a toolchain that has full knowledge of OSGi. As long as the required files end up in the right places inside the JAR (which is in turn just a ZIP file), you're good to go.

In this case, I used the maven-bundle-plugin to decorate the "MANIFEST.MF" file with appropriate OSGi metadata and, importantly, to embed all the compile-scoped project dependencies for me. That second part means that Maven will handle the job of steps 7-10 in Sven's example: it'll bring in the dependencies from Maven, copy them into the right place in the final JAR, and set up the Bundle-ClassPath header to point to them.
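
As a sketch of what that configuration can look like in the Domino project's "pom.xml" (the version and symbolic name here are illustrative, not the exact values from the example project):

<plugin>
	<groupId>org.apache.felix</groupId>
	<artifactId>maven-bundle-plugin</artifactId>
	<version>4.2.1</version>
	<extensions>true</extensions>
	<configuration>
		<instructions>
			<Bundle-SymbolicName>com.example.webapp.domino;singleton:=true</Bundle-SymbolicName>
			<!-- Copy compile-scoped dependencies into the bundle and generate
			     a matching Bundle-ClassPath header -->
			<Embed-Dependency>*;scope=compile</Embed-Dependency>
			<Embed-Transitive>true</Embed-Transitive>
			<Embed-Directory>lib</Embed-Directory>
		</instructions>
	</configuration>
</plugin>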

It's important to note the "compile-scoped" qualifier there. The Maven projects themselves also depend on a couple things that I know will be present on Domino already, namely IBM Commons, Apache Wink, the Web Container adapter, and Notes.jar. Though it'd probably work if I copied those into the JAR, that would be asking for trouble unnecessarily, so I mark them as "provided" in Maven, and then the bundling process knows to skip over them.

The other OSGi-specific element is the "plugin.xml" file, used by Domino's Equinox framework to identify that the bundle provides a web app. In this case, I put that file in "src/main/resources", where it ends up being copied to the root of the JAR. One down side here is that you have to know ahead of time what the syntax for this file is: since Eclipse won't know this is a plug-in project, you won't get the GUI shown in Sven's example.
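
From memory of that extension point's syntax, the file looks something like this - the context root here matches the "/exampleapp/" path used later, and the content location will vary by project layout:

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>
<plugin>
	<extension point="com.ibm.pvc.webcontainer.application">
		<contextRoot>/exampleapp</contextRoot>
		<contentLocation>WebContent</contentLocation>
	</extension>
</plugin>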

There are some other Domino-specific considerations, but I'll return to them later. For now, those parts will cover the OSGi "bridge".

Core: Using the Notes API

The core project doesn't have a lot going on, and that's intentional. It does, though, demonstrate how you can use the JSON-B API for JSON serialization and the Notes API for accessing NSFs and other Notes stuff.

The important parts happen in the project dependencies. The first one is simple: I want to use the JSON-B API, but I want to declare that it will be provided one way or another by the environment. The second one includes Notes.jar by way of my P2 Repository Provider, since it's still not available as a normal Maven dependency.

This project contains a single class, which just gathers a bit of information about the runtime environment to be shown as a JSON object. The important part here is my use of NotesThread when calling the Notes API. Since this project can run on non-Domino containers, I can't assume that all threads will already be Notes-friendly, so I use that route. You can also call NotesThread.sinitThread() or go other ways, but in simple cases I like confining the calls to a separate thread outright.
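
A sketch of that pattern (the class name and the particular property fetched are illustrative):

import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;

public class ServerInfoFetcher {
	public static String getCurrentUserName() throws InterruptedException {
		String[] result = new String[1];
		// NotesThread handles Notes thread init/term around the Runnable for us
		NotesThread thread = new NotesThread(() -> {
			try {
				Session session = NotesFactory.createSession();
				try {
					result[0] = session.getUserName();
				} finally {
					session.recycle();
				}
			} catch (Exception e) {
				throw new RuntimeException(e);
			}
		});
		thread.start();
		thread.join();
		return result[0];
	}
}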

JAX-RS

The JAX-RS project is intended to contain JAX-RS configuration and resource classes, and the immediate part to note is once again the dependency set. Here, I targeted specifically JAX-RS 1.1, which is quite old, but is provided by Apache Wink on all Domino installations. I could theoretically bring in RESTEasy for a newer spec version, but 1.1 is capable enough for now and it keeps things simpler.

In the Application implementation class, I enumerate all of the resource classes used in the app. This is equivalent to the text-file-based method common in Wink apps, but it's portable across JAX-RS implementations and has the side benefit of being compiler-checked. Though it's a step up from the old Wink way, it's a big step down from the modern JAX-RS way: newer containers can just find your resources automatically by scanning for annotated classes. That doesn't fly on Domino, though, and, while you can hack in something roughly equivalent, it's simpler for now to enumerate the classes explicitly and remember to add new ones to the list.

There are only two resources here: a Hello World resource and one to ferry the ServerInfo object out using the JAX-RS environment's JSON serializer (more on that in a bit).
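
A minimal sketch of that enumeration, with hypothetical names standing in for those two resource classes:

import java.util.HashSet;
import java.util.Set;

import javax.ws.rs.core.Application;

public class ExampleApplication extends Application {
	@Override
	public Set<Class<?>> getClasses() {
		Set<Class<?>> classes = new HashSet<>();
		classes.add(HelloResource.class);      // the "Hello World" resource
		classes.add(ServerInfoResource.class); // the ServerInfo-emitting resource
		return classes;
	}
}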

The Web UI

The web UI project is complicated, but mostly because NPM-based JavaScript development is complicated. This example uses Stencil, which I quite like, but you can use whatever you'd like: React, Angular, just plain ol' HTML, or whatever.

The important parts here are the use of frontend-maven-plugin to create a Node+NPM environment and build the app, and the specific configuration to put the output into "src/main/resources/META-INF/resources". Doing this means that, when this project is wrapped up into a Java-less JAR file, the web resources will end up in the "META-INF/resources" directory, which is special in Servlet 3 and above: any files there in dependency JARs are visible as if they were in the main web content of your web app.
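
The Maven side of that looks something like this (plugin version, Node version, and the "build" script name are illustrative; the Stencil configuration itself is what directs the compiled output into the resources directory):

<plugin>
	<groupId>com.github.eirslett</groupId>
	<artifactId>frontend-maven-plugin</artifactId>
	<version>1.9.1</version>
	<configuration>
		<nodeVersion>v12.14.1</nodeVersion>
	</configuration>
	<executions>
		<execution>
			<id>install-node-and-npm</id>
			<goals>
				<goal>install-node-and-npm</goal>
			</goals>
		</execution>
		<execution>
			<id>npm-install</id>
			<goals>
				<goal>npm</goal>
			</goals>
			<!-- The npm goal defaults to running "install" -->
		</execution>
		<execution>
			<id>npm-build</id>
			<goals>
				<goal>npm</goal>
			</goals>
			<configuration>
				<arguments>run build</arguments>
			</configuration>
		</execution>
	</executions>
</plugin>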

JEE App

The Jakarta EE app is the simplest of the bunch, and the only actual class in there only exists for example purposes.

The work, such as it is, all happens in the Maven configuration. I declare it to be war-packaged, to not complain if there's no "web.xml" file, to bring in the project dependencies, and to specifically include IBM Commons. It also brings in Notes.jar as a compile-time dependency.

The Domino Shims

Back in the Domino module, it's time to talk about the non-OSGi parts. I've mentioned a few things above that require no configuration in a modern web container, but which will require a bit of legwork in Domino. These are generally related to the fact that Domino's servlet container is version 2.4 and it has no idea about newer standards.

  • I bring in an Eclipse Yasson dependency to provide JSON-B support.
    • To bind that to JAX-RS, I wrote a Provider class that knows how to turn any Java object into JSON when a resource says it wants to output JSON (see the sketch after this list).
    • To register that provider (since it can't be picked up automatically), I subclass the Application class to include it specifically.
  • The ResourcesServlet servlet mimics the Servlet 3 behavior of serving resources out of "META-INF/resources". This specific implementation isn't the best, since it doesn't provide any caching, but it gets the job done and means that the web UI JAR will work the same way on both targets.
  • The RootServlet servlet extends the Wink default REST servlet to shim the ClassLoader around, which avoids a lot of trouble with threads used for web app requests that had previously been used for XPages requests (it's annoying, trust me).
  • I have to include an explicit reference to Wink's JAX-RS provider for some reason to do with bundle class loading.
  • Unlike in the normal web app project, I have to include a "web.xml" file, and this one registers the two servlets above.
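
Here's roughly the shape of that JSON-B provider from the first bullet - a minimal sketch rather than the project's exact class:

import java.io.IOException;
import java.io.OutputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;

import javax.json.bind.Jsonb;
import javax.json.bind.JsonbBuilder;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyWriter;
import javax.ws.rs.ext.Provider;

@Provider
@Produces(MediaType.APPLICATION_JSON)
public class JsonbWriter implements MessageBodyWriter<Object> {
	private final Jsonb jsonb = JsonbBuilder.create();

	@Override
	public boolean isWriteable(Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
		return MediaType.APPLICATION_JSON_TYPE.isCompatible(mediaType);
	}

	@Override
	public long getSize(Object t, Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
		return -1; // size unknown ahead of time
	}

	@Override
	public void writeTo(Object t, Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType,
			MultivaluedMap<String, Object> httpHeaders, OutputStream entityStream) throws IOException {
		// Yasson (via the JSON-B API) handles arbitrary Java objects
		jsonb.toJson(t, entityStream);
	}
}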

Domino Update Site

The second part of the Domino target is the distribution project, which uses the p2-maven-plugin to create a P2 repository. That plugin is a splendid tool for your toolbox and has a lot of capabilities for auto-OSGi-ifying otherwise-non-OSGi projects. In this case, I just want to include the Domino project from the previous step, but I also want to generate an Eclipse feature for it so that it can be imported into an NSF Update Site with proper metadata.

I also use the p2sitexml-maven-plugin, which takes the newer-style P2 site generated by the previous step and adds a "site.xml" file, which is needed by the NSF Update Site import process if you want to include categories, which I think are nice.

Seeing It In Action

To run the app on Domino, you can do a Maven install on the root, install the update site from the distribution project onto Domino, and then visit "/exampleapp/". You'll be greeted by a vision of beauty like this:

Example Webapp Screenshot

Placeholder garishness aside, it shows the Stencil app loading, using the custom favicon, and making a call to the System Info service. That, in turn, shows using the Notes runtime to get the server's distinguished name. It's left as an exercise for the reader to then put in the thousands of hours of work to make a world-class application.

Caveats!

Since this is a Domino thing, there are important caveats.

The first is one I mentioned earlier: because we're restricted to Servlet 2.4/2.5ish, a lot of things just won't work. Indeed, not even all of the 2.4 spec works, as Filters aren't implemented for some reason. Additionally, outside of Servlet and JAX-RS 1.1, you're pretty much in "BYOB" territory when it comes to other JEE specs. In this example, I brought in Yasson for JSON-P and JSON-B and that was pretty simple, but others (say, CDI) would require a lot more fiddly work.

There's also an extra-special caveat when it comes to JSP. Domino's web container knows about JSP, but requires what it calls a "JSP compiler bridge": a special extension that allows for interpreting JSPs inside the special environment it creates. However, it doesn't actually ship with such a bridge. Notes does (and MyFaces too) for what I assume are "social" reasons, but Domino doesn't. You could probably nab the JSP stuff from Notes and drop it onto Domino, but you'd be getting into weird territory. I tried dropping Jasper into the app, but it ran into ClassLoader-casting trouble... hence the bridge, I guess.

Usefulness

Phew! Admittedly, it's a long walk to get to the point where you can just run a web app, and there are quicker ways to get there. However, I do think this is worth it. With this setup, I have a set of Maven projects that work swimmingly in Eclipse and any other Java IDE, an NPM project that acts like any other, and a JEE container front-end for rapid development. No Designer, no NSF syncing, no Plug-in Development Environment, no Tycho. And, though I don't have the full breadth of JEE available to me, JAX-RS is the main one you need for a client-JS app anyway. It's not an appropriate setup for every app, but it's really nice when it fits.

Domino 11's Java Switch Fallout

Jan 7, 2020, 10:50 AM

Tags: java

In Notes and Domino 11, HCL switched from using IBM's J9 Java distribution to using the OpenJ9 variant of AdoptOpenJDK. This is a lateral move technically - it's still Java 8 - and it's one presumably made in the short term to avoid licensing costs from IBM and in the long term to align better with AdoptOpenJDK.

However, OpenJ9 is not the same as J9, and AdoptOpenJDK is not the same distribution as the previous one, so there are some minor gotchas to look out for.

BASE64 and Other Internal Classes

A couple months back, I wrote a post describing this situation: namely, that some XPages and agents grew to depend on the presence of JVM-internal classes in the com.ibm namespace, particularly com.ibm.misc.BASE64Encoder and its decoder sibling.

The true fix for this is to ferret out uses of these classes in your code base, but that can be difficult. If you have to maintain legacy code, I made a small shim Jar you can drop on your server to map the two BASE64 classes to their sun.misc versions. I intentionally use those classes, even though they're also not for public use, both because they have the same semantics as the IBM ones and to reinforce that the best solution is to use the vendor-independent java.util.Base64 class.
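
For illustration, the migration is usually mechanical - something like:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Example {
	public static void main(String[] args) {
		byte[] data = "hello".getBytes(StandardCharsets.UTF_8);

		// Instead of com.ibm.misc.BASE64Encoder/sun.misc.BASE64Encoder#encode:
		String encoded = Base64.getEncoder().encodeToString(data);

		// Instead of com.ibm.misc.BASE64Decoder/sun.misc.BASE64Decoder#decodeBuffer:
		byte[] decoded = Base64.getDecoder().decode(encoded);

		System.out.println(encoded + " -> " + new String(decoded, StandardCharsets.UTF_8));
	}
}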

java.pol

It's been fairly-common practice for a little while now to create a file named "java.pol" in the Java installation directory to loosen the security policy and get around Domino's bizarrely-strict interpretation of the rules. This came into vogue in favor of editing "java.policy" because this file was (usually) not overwritten during Notes/Domino version upgrades.

However, as Per Lausten discovered, AdoptOpenJDK's distribution does not reference this file, and so its policy changes won't take effect. The upshot of this is that there are three main options to loosen the policy:

  • As Per mentions (via Daniele Vistalli), you can create a file named ".java.policy" in the home directory of the user running Domino, and it will be honored.
  • You can go back to editing the "java.policy" file, re-editing it with each new release.
  • You can modify "java.security" to reference "java.pol" again. This is kind of a wash, though, since you'll need to re-edit "java.security" with every update anyway.
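
Whichever route you take, the content is generally the same blunt instrument. For example, a ".java.policy" file in the Domino user's home directory that grants everything looks like:

grant {
	permission java.security.AllPermission;
};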

Different Implementation Jars

This last one is much more limited in scope, and may actually be limited in effect to just the NSF ODP Tooling project. In that project, in order to create a Domino-compatible runtime environment for local compilation, I included a couple expected Jars from the Notes/Domino installation in the runtime's classpath. One of these was "ibmpkcs.jar", which covers some security stuff as well as the aforementioned BASE64 classes.

The fix in my case was to just make the resolution of that Jar optional, which should work for the normal case, but it'll be something to keep an eye on in the future.

Winter Project #2.5: XML Schemas for XPages

Jan 2, 2020, 10:59 AM

Tags: xml xpages

After I did my initial port of my XSP completion assistant to LSP4XML, I got to thinking about improvements I could make to it. One of these was the notion of creating XML Schemas for a project's XPages, similar to how the DXL contributor just passes along the schema files that ship with Notes and Domino.

Doing the same with XSP isn't nearly so easy, and comes with a good number of gotchas that make it not only impossible to fully validate an XPage with schemas, but also difficult-to-impractical to generate such schemas on the fly outside of Designer. But first, two asides!

Aside #1: Mechanisms of Content Assistance

At its core, doing content assistance in a text editor is essentially a matter of the editor saying to the assistance plugin "I have a file here, with this content, at this location, and the user's cursor is in this spot. What should I suggest as the next part to type? And, once they've typed it, can you tell me if it is valid?".

In the simplest case, this could be something like a dictionary of words when writing prose. A content assistance plugin for English-the-language could merely consist of a list of known words: given a prompt from the editor of "ca", it would suggest "car", "cat", and so forth. For prose, there's an upper limit to what that sort of thing can or should do, and a simple word list would probably suffice.

Programming languages are more complicated, and often the best route for autocomplete is to basically also be a whole compiler with full knowledge of the language and the structure of not only the current file, but also of other files in the project and all the dependencies. That way, an IDE can suggest types, method names, parameters, and all sorts of complicated external notions.

XML/HTML content assistance is often somewhere in between there. In both the case of my original XSP implementation and the LSP4XML port, the route I took was to let the in-between layer handle knowledge of raw XML mechanics like tags and attributes, but then basically provide a dictionary of known words when asked. So, when the editor said "the user typed xp:v", my code searched through all of its known components and responded with "maybe xp:view or xp:viewPanel".

I decided over the last couple days to take another route, based on the fact that XML is designed to be a toolkit for making fully-described and -validated markup.

Aside #2: XML Validation

XML itself is something of a meta-language: it has its rules, but it's intended to be a format used to describe other, more-specific grammars. On its own, it has the notion of whether or not a document is well-formed: this means specifically that it follows all the syntax rules of XML, like the proper use of brackets, attribute quotes, element hierarchy, and all that. Well-formedness is comparatively easy to enforce, and basically any XML editor does so, but it doesn't say anything about whether a given XML document is a valid example of its kind.

That job is left up to a secondary definition, usually done via either a Document Type Definition file or an XML Schema, though there are more ways than that. These are the things that say, for example, that in XHTML the root element must be <html>, and that element contains zero or one <head> element and one <body> element, and so forth.

With the aid of a document schema, an XML processor (such as a structured text editor) can verify first that a document is well-formed XML and second that all of the elements that it can match up with the schema are valid. Moreover, it can itself maintain the list of potential elements, attributes, and values, and display them in a clean and fast way without the content-assistance plugin having to worry about parsing and substring matching.

That "...elements that it can match up..." caveat comes in to play because XML allows mixing grammars in a single file and the processor may or may not require that all of those grammars be strictly defined. What identifies these grammars in a file is the use of an XML "namespace", which is a URI that may or may not actually go anywhere. For example, the MathML namespace is "http://www.w3.org/1998/Math/MathML" and the SVG one is "http://www.w3.org/2000/svg". Both of those are URLs that resolve to pages, but that's just because the W3C is being nice; the only requirement is that they are unique URIs. In an individual document, these namespaces may be matched up to prefixes, and one can be defined as the base namespace for elements that don't have prefixes.

XSP as an XML Grammar

Which brings us to XPages. XSP-the-markup-language is XML-based, and so every XSP document must be well-formed. Designer won't even give you the time of day if, say, you leave off the closing </xp:view> tag at the end of a file. Additionally, XSP has many of the trappings of a fully-validated XML language. It uses namespaces to identify its tags, and you can't just write any old tag in one of those namespaces or give an existing tag some random new attribute. Moreover, many attributes and elements have special rules about their content: you can't put text inside <xp:this.data/>, and <xp:text disableOutputTag="foo"/> is illegal. These are exactly the sorts of rules XML Schema exists to express.
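
For illustration, a schema fragment along these lines (simplified and hypothetical, assuming the conventional xs prefix for the XML Schema namespace - not what a real generated schema would look like) is what would make that invalid boolean a validation error:

<xs:element name="text">
	<xs:complexType>
		<!-- "foo" doesn't parse as a boolean, so validation fails -->
		<xs:attribute name="disableOutputTag" type="xs:boolean"/>
		<xs:attribute name="value" type="xs:string"/>
	</xs:complexType>
</xs:element>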

XSP does not, though, have any schema. Most of the validation happens at build time (including of XML well-formedness, apparently), and some is even delayed until runtime (like that invalid boolean above). There are some good reasons for this. First and foremost is the fact that XSP really describes Java objects that are themselves contributed programmatically, by way of XSP library classes. Once your path to validity runs through arbitrary Java code, it means that you don't have the option to statically compare the file to a schema - you have to run that code inside your environment. Additionally, the XPages namespace ("http://www.ibm.com/xsp/core") isn't even defined in a single place, and has controls declared across several plugins. Same goes for the ExtLib's "http://www.ibm.com/xsp/coreex" namespace, and then there's the Custom Control namespace, "http://www.ibm.com/xsp/custom", which can't even be determined at a global level and has to be synthesized repeatedly on a project-by-project basis. And then, beyond all of that, XSP has rules that just can't be expressed in XML Schema at all, so Designer would have to have a secondary validator anyway, dampening the benefits of codifying a schema.

But Could It Work, Though?

Still, I figured that, if I could craft a schema that's good enough for basic use, I could get some extra completion and suggestion assistance while hopefully marking everything as vague enough to not run into trouble with the flexible nature of XSP.

My biggest ally here is that, while the core and ExtLib namespaces are technically open at any point, it would be such bad form for, for example, a company-specific library to declare its components as part of it that I can ignore that possibility entirely. In effect, for a given Domino release, those namespaces are sealed and fully describable ahead of time.

So I set out to see if I could make XML schemas to contribute to LSP4XML to give it some more knowledge of what it's working with. Like when I generated the JSON used by the original content assistance, the tack I took was to write a servlet that runs in an XPages context and emits the files I want based on a stock runtime. My initial version of this was too clever: XML Schema allows for subclassing, inheritance, and references, and I originally set out to have the schemas match the structure of the underlying component trees. I got pretty close on this, but ended up spending all my time fighting namespace collisions, and in particular the really-subtle one where the roots of the tree are actually defined in the "http://www.ibm.com/xsp/jsf/core" namespace, which is an IBM fork of JSF's "http://java.sun.com/jsf/core". As a side note, there are no concrete component definitions there, so you can't get any use out of it in an XSP file; it's just an interesting implementation detail.

The Current Results

When I took a step back and gave it another whack, I ended up coming up with something that pretty much works. I'm still not sure it'll actually be the way I keep going, though. For one, there are currently a bevy of open bugs, some of which may end up being showstoppers. Beyond that, since XSP still requires a secondary processor to check validity anyway, the most reasonable route may be to go back to offering completion ideas plus some post-processing for validation.

Still, this is one of those types of projects that's worth it just as a learning experience. I hadn't even really looked at XML Schemas since around when they came out, and this sure was a good way to get a crash course. And, in the meantime, I think that the generated schema files are pretty-interesting artifacts.