Showing posts for tag "osgi"

XPages JEE 3.0 Beta 4

Wed May 22 13:48:19 EDT 2024

Earlier today, I uploaded beta 4 of XPages JEE 3.0 to GitHub. I've been taking a slow approach to this release due to its "breaking changes" nature, but I think it's just about ready for release.

Domino 14

Like previous betas, this release requires Domino 14 (and Notes 14 for development), since it moves to a baseline of Jakarta EE 10, which in turn requires Java 11. Doing this let me get rid of some extra shim code that was needed to support both Domino 14 and previous versions, and also let me move to some newer language constructs. If you're interested in the sorts of things that the new versions of Java brought, check out the OpenNTF webinar from April, where I talked about just that.

Library Reorganization

Beyond the Java version requirement, the big breaking change I made was to finally shrink the number of XPages libraries and p2 features in the project. As the project grew, I kept adding new distinct XPages libraries, for the principle of keeping each spec distinct, as they often technically are. A few things made me want to fix this, though:

  • Checking all the boxes in the Xsp Properties editor for each library was annoying
  • Checking "Yes, install this plug-in" for every single component, plus its source version, when installing in Designer was very annoying
  • I had to do weird tricks to add features that touched multiple specs. For example, the project tree had a bunch of cross-spec fragments like "jaxrs.cdi" and "json.cdi" to contribute parts for when CDI was present but not break things when it wasn't. This added an extra layer of indirection and maintenance hassle
  • The specs themselves have been converging, particularly in the sense that more and more they assume the "backbone" of CDI is present. For example, Faces removed its original @ManagedBean and related support in favor of going all-in on CDI. Jakarta REST is moving towards the same
  • It was hard to think of realistic scenarios where it would be important to split up the specs like this, using, say, REST but not CDI or Validation

Now, there are just three: "org.openntf.xsp.jakartaee.core", "org.openntf.xsp.jakartaee.ui", and "org.openntf.xsp.microprofile". I was tempted to roll MicroProfile into "core", but they're conceptually (and administratively) distinct enough that it was worth separating them. With this change, it's not only less annoying to install, but it lets me make a lot more assumptions about what is present across specs, simplifying a lot of little things.
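As a rough sketch of the payoff: enabling the whole core stack in an NSF is now a single checkbox, which corresponds to one line in xsp.properties (assuming the three names above are the library IDs as shown in the Xsp Properties editor):

xsp.library.depends=org.openntf.xsp.jakartaee.core,org.openntf.xsp.jakartaee.ui,org.openntf.xsp.microprofile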

Deep-Dive Sidebar: Class Loading

One interesting aspect I ran into when making this change was that I had to readjust my mental model for how class loading is done from an NSF-based application and the libraries it uses. The way it mostly works conceptually aligns with what you see in Designer:

  • Select a library to depend on
  • The XspLibrary has a "getPluginId" method, which Designer then uses to add the OSGi bundle to the classpath
  • Any Require-Bundle dependencies in that plugin marked as "visibility:=reexport" are also included on the classpath

So, in this way, you'd previously select the "org.openntf.xsp.cdi" library, which would then add a dependency on the bundle of the same name, which would in turn re-export the things the NSF should see, such as the CDI API classes.
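In manifest terms, that re-export is just a visibility directive on the library bundle's own dependencies - a minimal sketch, with the dependency bundle names here being hypothetical:

Bundle-SymbolicName: org.openntf.xsp.cdi; singleton:=true
Require-Bundle: com.example.cdi.api;visibility:=reexport,
 com.example.inject.api;visibility:=reexport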

When I consolidated the libraries, I did it in the straightforward way: I made new "*.library" bundles for them and then added the existing spec-specific bundles as re-exported dependencies. As far as Designer was concerned, all was well, and there was just another little layer in between.

However, that's not quite the whole story when it comes to the runtime on the server. Though Designer presents the NSF as a pseudo-OSGi bundle using the Plug-in Development Environment, Domino doesn't do the same thing. What Domino does is use a class called ModuleClassLoader (not to be confused with Equinox's ModuleClassLoader, which is entirely different and IS an OSGi loader) to handle loading classes from the NSF and its dependencies. The way it gets to its dependencies isn't really a "true" OSGi way, though: it keeps track of a collection of ClassLoader objects as extraDepends, which it consults each in turn as needed. Those ClassLoader objects, at least in post-8.5.2-era Domino, are the internal class loaders from the library OSGi bundles. This is cheating, and I imagine it was made for pragmatic transitional reasons when OSGi came into the picture.
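Conceptually, that lookup works something like the sketch below - heavily simplified, and not the actual implementation:

import java.util.List;

// Simplified sketch of the extraDepends pattern, not the real ModuleClassLoader
public class NsfClassLoaderSketch extends ClassLoader {
    private final List<ClassLoader> extraDepends; // the library bundles' internal loaders

    public NsfClassLoaderSketch(ClassLoader parent, List<ClassLoader> extraDepends) {
        super(parent);
        this.extraDepends = extraDepends;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        // (Loading classes from the NSF's own design elements is elided here)
        // Consult each library's ClassLoader in turn, "cheating" past OSGi
        for(ClassLoader cl : extraDepends) {
            try {
                return cl.loadClass(name);
            } catch(ClassNotFoundException e) {
                // not in this one; keep going
            }
        }
        throw new ClassNotFoundException(name);
    }
}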

The old layout conceptually looks like this:

Diagram of NSF to old library dependency

At first blush, this seems like a "six of one, half a dozen of the other" sort of situation, but it's not quite that. What this setup does that normal OSGi doesn't is expose META-INF/services files inside the direct dependencies to the application's ClassLoader, whereas these are normally encapsulated in OSGi. The effect was that a bunch of things that used to work started to fail - REST couldn't find all its output-writing classes, Validation couldn't find its implementation, and so forth. This is because they would all internally ask the thread-context ClassLoader (i.e. the NSF's loader) for resources within META-INF/services, and the extraDepends list used to be able to find them. Now that there was a layer of indirection, this no longer worked: the extraDepends loaders could see their own stuff but would not traverse the OSGi barrier to peek inside their further dependencies for these. Conceptually, now we have this:

Diagram of NSF to new library dependency

A direct ClassLoader dependency allows reading of resources, but a true OSGi-type dependency does not. So the result is that I had to "promote" a bunch of META-INF/services files from the now-downstream plugins into the "*.library" ones. It all makes sense once you see how the gears are moving, but it sure threw me for a loop for a while.
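For illustration, the kind of lookup that broke is the standard resource scan that spec implementations do internally - roughly the fragment below, with the service file name being just a representative example:

ClassLoader cl = Thread.currentThread().getContextClassLoader();
// Under the old layout, extraDepends surfaced these files from the library bundles;
// with the new indirection, OSGi kept them hidden until I promoted the files upward
Enumeration<URL> services = cl.getResources("META-INF/services/jakarta.ws.rs.ext.MessageBodyWriter");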

Bundle and Package Renaming

Okay, now back to the changes.

Since I was already breaking things anyway, I decided this was a good opportunity to fix the names of the bundles and packages in the project's source. For example, some names were antiquated: what was once "JSF" is "Jakarta Faces", but my bundle was "org.openntf.xsp.jsf". Additionally, I was inconsistent in my hierarchy: while Transaction was in "org.openntf.xsp.jakarta.transaction", others (like Faces there) skipped the "jakarta" level of the hierarchy. These don't normally matter to developers consuming the library, but they annoyed me. Now, all of the bundles and their contained packages are within either "org.openntf.xsp.jakarta", "org.openntf.xsp.jakartaee" (for platform-wide capabilities), or "org.openntf.xsp.microprofile".

Along with this come a couple of potential breaking changes for app-level code, such as moving org.openntf.xsp.beanvalidation.XPagesValidationUtil to org.openntf.xsp.jakarta.validation.XPagesValidationUtil, but there won't be TOO many due to this change.
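In an NSF, that particular change is just an import-line fix:

// Before, in 2.x:
import org.openntf.xsp.beanvalidation.XPagesValidationUtil;
// After, in 3.0:
import org.openntf.xsp.jakarta.validation.XPagesValidationUtil;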

Jakarta Data and NoSQL Changes

This one isn't from my latest round of changes and has been the case since early in the 3.x stream, but it's worth mentioning again here. The Repository concept from Jakarta NoSQL moved from that spec to the new "Jakarta Data" spec, and so related packages changed from jakarta.nosql.mapping to jakarta.data. Additionally, since the NoSQL spec shrank to accommodate this, things like @Column changed from jakarta.nosql.mapping.Column to jakarta.nosql.Column. It makes sense, as NoSQL has been an evolving spec all along, but I suspect that this will be the biggest app-code-breaking change it experiences for a good while.
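For entity classes, the change is mostly a matter of import lines - a sketch below, where the @Entity/@Id/@Column names come from the slimmed NoSQL spec as I understand it, and the class itself is hypothetical:

// 2.x-era import:
//   import jakarta.nosql.mapping.Column;
// 3.x-era equivalent in the slimmed NoSQL spec:
//   import jakarta.nosql.Column;

@Entity
public class Person {
    @Id
    private String id;

    @Column("name")
    private String name;
}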

Release and Future Versions

My next steps are to put this through its paces now that all the issues are closed. Though I've ported everything to the JEE 10 versions, I haven't yet tested most of the new features to make sure they work. While JEE 10 was largely a "cleanup" release, there are a bunch of new features, particularly in Faces, which is in turn always the jankiest part of the stack on Domino.

Post-3.0, I expect that my focus will start to shift to Jakarta EE 11. For a time, I was going to be SOL with it: though Domino 14 bumped Java to 17, JEE 11 was slated to target Java 21 at a minimum. In the meantime, however, that target shifted down to 17, putting it back on the table for Domino. JEE 11 was originally slated for Q1 of this year, but it slipped to some time around summer. That fits reasonably well with my cadence here. JEE 11 is technically also a breaking release, but I suspect that it won't break features that XPages JEE users use, at least not after this hurdle here.

Simplifying the Maven Build of the NSF File Server Project

Wed Apr 10 17:02:09 EDT 2024

When working on the NSF File Server project that I talked about the other day, I took a slightly different tack with building it than I had in the past, and I think it's worth going over some of that in case it's useful for others.

Initial Version

The first version of this project was a non-OSGi WAR file meant to be deployed to an app server like Liberty, not to Domino's OSGi stack, and so it never involved Tycho. This made it mostly simpler, since its various dependencies are normal Maven dependencies, and so I didn't have to worry about any of the annoying hoops.

However, it did have some native Domino dependencies: Notes.jar and the NAPI. These would need to be included as Maven dependencies and brought into the final WAR. The way I handled this was using the generate-domino-update-site project, which lets you first generate a p2 site in the style of the painfully outdated IBM-provided update site and then, if desired, turn that p2 site into more-normal Maven artifacts.

When I eventually switched from targeting a WAR file to having it run on Domino, I used the same dependency structure. The Domino version runs as an HttpService implementation, and so I pointed at the Mavenized version of the com.ibm.xsp.bootstrap and com.ibm.domino.xsp.adapter bundles.

Then, I used the maven-bundle-plugin, which fits the job of taking an otherwise-normal Maven project and making it work in OSGi environments (mostly). The way that plugin works is that you specify a lot of your MANIFEST.MF rules in the pom.xml:

<plugin>
	<groupId>org.apache.felix</groupId>
	<artifactId>maven-bundle-plugin</artifactId>
	<extensions>true</extensions>
	<configuration>
		<instructions>
			<Bundle-SymbolicName>org.openntf.nsffile.httpservice;singleton:=true</Bundle-SymbolicName>
			<Bundle-RequiredExecutionEnvironment>JavaSE-1.8</Bundle-RequiredExecutionEnvironment>
			<Export-Package/>
			<Require-Bundle>
				com.ibm.domino.xsp.adapter,
				com.ibm.commons,
				com.ibm.commons.xml
			</Require-Bundle>
			<Import-Package>
				javax.servlet,
				javax.servlet.http
			</Import-Package>
			<Embed-Dependency>*;scope=compile</Embed-Dependency>
			<Embed-Transitive>true</Embed-Transitive>
			<Embed-Directory>lib</Embed-Directory>
			
			<_removeheaders>Require-Capability</_removeheaders>
			
			<_snapshot>${osgi.qualifier}</_snapshot>
		</instructions>
	</configuration>
</plugin>

The first couple are one-for-one matches to what you'd have in the MANIFEST.MF, but things get weird once you get to the "Embed-*" ones.

The Embed-Dependency instruction is a potent one: you give it a description of what dependencies you want embedded in your OSGi bundle (in this case, all my non-provided dependencies), and then it does the job of copying them into your final bundle JAR. You can do this other ways - copying them manually, using the Maven Dependency Plugin, or others - but this handles all your transitive stuff nicely for you, thanks to Embed-Transitive. I use Embed-Directory here just for cleanliness - the result is functionally the same without it.

The final bits are just for cleanliness: I remove Require-Capability to avoid some trouble I had with older Domino versions, and then I set what the snapshot value will be, which ends up being the current build time.

With this, I end up with a single OSGi bundle with everything in it. This works well for this sort of project - with something to be used in Designer, I prefer to make a big pool of distinct OSGi bundles to make it so that you can look up the source properly, but something server-only like this doesn't need that.

2.0 Version

In this new version, the switch to JNX meant that I was tantalizingly close to not having to do any weird dependency stuff: JNX is distributed in Maven Central, so I didn't need to have the weird locally-built stuff like I did for Notes.jar and the NAPI.

However, that wasn't everything: there are still the "bootstrap" bundles containing the HttpService superclass and related classes. While I don't need to distribute those anywhere, they're still required to compile the classes - no amount of non-verified text files or the like will get around that.

I came up with another way, though. Java only needs classes that look like those in order to compile, and the compiled output of my code will be the same regardless of what's behind them. This is critical: I don't actually need any implementation code, and that's the part I can't redistribute. So I made two little Maven modules: com.ibm.xsp.bootstrap.shim and com.ibm.domino.xsp.adapter.shim. These modules contain just a handful of classes, and of those classes only the methods actually referenced by my code. Since these modules are then marked as "provided", they won't be bundled into the final JAR.
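The shims then get flagged as provided in the consuming module's pom - a sketch, with the group ID and version here being illustrative:

<dependency>
	<groupId>org.openntf.nsffile</groupId>
	<artifactId>com.ibm.xsp.bootstrap.shim</artifactId>
	<version>${project.version}</version>
	<scope>provided</scope>
</dependency>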

This squared the circle nicely: now I can compile the Java side without any weird prerequisites. Admittedly, the two NSFs in the module set still require an NSF ODP Tooling environment, which is a whole other ball of wax, but it's a step in the right direction.

Other Uses

This technique can be used in other similar projects when you only need a few classes from the XPages stack. For example, if your goal is to just wrap a third-party library and provide it to XPages in Designer, you could probably do this by making a stub implementation of XspLibrary and related classes, and skip the whole generate-domino-update-site step. The more you use from the stack, the less practical this is - for example, the XPages Jakarta EE project reaches into all sorts of crap, and so I can't really do this there. For this, though, it works nicely.
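As a sketch of that idea - assuming the usual AbstractXspLibrary base class from com.ibm.xsp.library, and with all IDs hypothetical - the stub side could be about this small:

// Sketch only: a library class compiled against minimal shim classes
public class ExampleLibrary extends com.ibm.xsp.library.AbstractXspLibrary {
    @Override
    public String getLibraryId() {
        return "com.example.wrapper.library"; // hypothetical library ID
    }

    @Override
    public String getPluginId() {
        return "com.example.wrapper"; // the bundle that re-exports the wrapped JARs
    }
}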

Intercepting Class Loading in OSGi, A Travelogue

Mon Jan 10 10:36:08 EST 2022

Tags: java osgi xpages

Yesterday, I had a problem. I was trying to get MicroProfile Config working inside an NSF to add to the XPages Jakarta EE project, and I was severely blocked by odd behavior.

To describe that, I'll lightly cover what MP Config is. It's a CDI extension that allows you to annotate properties on a bean to indicate that they're intended to come from an available configuration source - often a .properties file in the project, but it's a pluggable system. Your bean will look like this:

package example;

// ...

@ApplicationScoped
public class ConfigExample {
    @Inject
    @ConfigProperty(name="java.version", defaultValue="unknown")
    private String javaVersion;
    
    /* other methods go here */
}

The idea is that you'll then have a properties file or environment variable to fill in the value, allowing you to separate your configuration from the implementation in a consistent way. Here, I'm making use of the fact that a default provider looks up Java system properties, so I could just get it working before investigating adding providers.

Since I'd already added CDI and a CDI-based extension in the form of MVC, I figured this would be easy.

The Problem

The problem I hit, though, was bizarre. CDI would identify the bean above, but would hit this problem:

org.jboss.weld.exceptions.DeploymentException: WELD-001408: Unsatisfied dependencies for type String with qualifiers @Default
  at injection point [UnbackedAnnotatedField] @Inject private example.ConfigExample.javaVersion
  at example.ConfigExample.javaVersion(ConfigExample.java:0)
WELD-001475: The following beans match by type, but none have matching qualifiers:
  - Producer Method [String] with qualifiers [@Any @ConfigProperty] declared as [[UnbackedAnnotatedMethod] @Dependent @Produces @ConfigProperty protected io.smallrye.config.inject.ConfigProducer.produceStringConfigProperty(InjectionPoint)]

The gist of this is that it noticed that the javaVersion property is supposed to be an injected property, but it had no idea what the source should be. It did know about the MicroProfile provider, which handles @ConfigProperty, but it couldn't put two and two together.

I banged my head against this for a while, and eventually determined that the class as loaded from the NSF is stripped of the @ConfigProperty annotation outright. Other annotations, such as @Inject and even custom annotations, would remain, but not @ConfigProperty. I wrestled with OSGi dependency chains for a while, to no avail.

The Enemy

Eventually, I found the core of it, and it was an old nemesis of mine. It's this method in com.ibm.xsp.util.ClassLoaderUtil:

ClassLoaderUtil.checkProhibitedClassNames

This method is called by the ClassLoader used in an NSF to ensure that certain classes, by name prefix, cannot be loaded by code coming from an NSF. The last three prefixes in its list make a sort of sense: Domino is supposed to be an app container for XPages apps, and ideally it's not a simple process for an app to break out of its container to muck about in the parent environment. Fair enough. The NAPI line is presumably there because IBM wanted to protect developers from themselves, even though Notes devs had been making unauthenticated calls to C APIs for freaking ever.

It's the first two, and specifically the first, that are the source of my trouble. Those prohibitions are presumably meant to isolate XPages apps from the fact that they live in an OSGi world, with the assumption that anything beginning with org.eclipse. refers to Eclipse platform internals like org.eclipse.core.runtime or the org.eclipse.osgi system bundle.

And this is the issue. MicroProfile is not in any way related to OSGi, but it sure is an Eclipse project. Accordingly, the class name of @ConfigProperty is org.eclipse.microprofile.config.inject.ConfigProperty, and thus cannot be loaded from an NSF.

Attempted Workarounds

So I considered my options.

One was to fork MP Config and rename the packages. That would work, but it would defeat the portability goals of the XPages JEE project, and would also just be a hassle - I've already had to fork a few specs, and each new one adds to the maintenance burden. That remained an option, but it would be a last resort.

My next idea was to wrap around the ModuleClassLoader class used by NSFComponentModule for class loading purposes. This class is blessedly non-final, and so in theory I could look at the instance in a module and swap it out with a replacement. I tinkered with this a bit, but the trouble became the way it's layered, with a DynamicClassLoader private class within - something harder to subclass. In theory, I could reproduce the behavior of it wholesale, but that would be both fragile (if the implementation ever changes) and verging on, if not outright, illegal (it's one thing to be API-compatible, and another to reproduce the internal functionality). After some wrangling, I decided to look elsewhere.

The True Workaround

I realized eventually that I don't really care about ModuleClassLoader as such: it does its job fine, and it's only the response that it gets from ClassLoaderUtil that is the problem. If I could change that, I would be set.

I've used the Javassist project here and there for a long time, ever since its inclusion in ODA for one reason or another. It's a handy toolkit, and notably includes the capability to alter a method implementation on the fly. There's my loophole.

The reason this kind of thing can work is related to how Java handles classes and calls between them. For all intents and purposes, you can consider a method call from one bit of Java to another to be a string-based lookup, saying "find a class named X and a method named Y, and then execute it". The "find" part there is much looser than you might think. It's easy to think of class references like C static linking, but they're really not. When code asks for a class, it asks the context ClassLoader, and that object can do basically whatever the heck it wants to find it, as long as it eventually emits a Class that the runtime can deal with.
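A toy example of that freedom: a ClassLoader can conjure a Class from any byte array it likes, as long as the bytes are valid bytecode. This is a sketch, not anything from the project itself:

public class ByteArrayClassLoader extends ClassLoader {
    private final String className;
    private final byte[] bytecode;

    public ByteArrayClassLoader(ClassLoader parent, String className, byte[] bytecode) {
        super(parent);
        this.className = className;
        this.bytecode = bytecode;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        if(className.equals(name)) {
            // The runtime doesn't care where these bytes came from
            return defineClass(name, bytecode, 0, bytecode.length);
        }
        throw new ClassNotFoundException(name);
    }
}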

Javassist's manipulation makes use of the fact that classes are generally eventually just a bunch of bytes, and you can do whatever you want with a bunch of bytes. Using Javassist, it's fairly simple to, once you have a handle on the class, alter the method. Truncated, that's:

ClassPool pool = /* build a ClassPool that can load the class */;
CtClass cc = /* get the class from the pool */;
cc.defrost();
CtMethod m = cc.getDeclaredMethod("checkProhibitedClassNames");
m.setBody("{ return false; }");
Class<?> result = cc.toClass();

And this works, as far as it goes: I now have a Class version of ClassLoaderUtil that skips the onerous check.

The trouble now was to get this to be actually used by other classes. Generally, once a ClassLoader loads a class, it's difficult to feed it another version unless it's designed to do so: most ClassLoader implementations, including those used here, are designed to read and emit classes by their own rules, not have new data fed into them.

I tried digging through the Eclipse OSGi ModuleClassLoader (distinct from the NSF ModuleClassLoader) for entrypoints and had some initially-promising work with Eclipse's internal ClassLoaderHook type, but eventually determined that this would require more patching than I'd want, if it was possible at all.

I also considered using Java's instrumentation capabilities to intercept class loading, but that would require setting up a special Java agent in the launch parameters, which would be too onerous.

But then I remembered something I had heard about when looking into getting ServiceLoader to work with OSGi: a concept in the OSGi spec called "weaving".

OSGi Weaving

I had noted that this concept existed, but set it aside in large part due to how esoteric it sounds: the term "weaving" makes it sound like it's a way to interact with the threads of fate or something, which is evocative but not something that seems immediately useful.

What it really is, though, is an OSGi-friendly version of the above: when the OSGi runtime goes to load a class from a bundle, it reads the data but then gives any such listeners an opportunity to manipulate the code before it's actually reified into a class. This is how the ServiceLoader mediator does its thing: it looks for ServiceLoader calls during loading and re-"weaves" them to run through OSGi instead.

This was perfect: it provides exactly the hook I want and it does it in a clean, spec-based way, without having to do weird reflection to reassign object properties or anything.

The Implementation

So I went about writing such an implementation. All the pieces are there on Domino, and the mechanism for registering a WeavingHook is something I'd done before in Open Liberty: it's a type of OSGi service that you can register and manage in an Activator class. It's also the sort of thing that would work well with Declarative Services, but Domino doesn't have a DS handler installed and I figured I didn't need to solve that quite yet.

So I wrote a WeavingHook implementation:

public class UtilWeavingHook implements WeavingHook {
    @Override
    public void weave(WovenClass c) {
        if("com.ibm.xsp.util.ClassLoaderUtil".equals(c.getClassName())) {
            ClassPool pool = new ClassPool();
            pool.appendClassPath(new LoaderClassPath(ClassLoader.getSystemClassLoader()));
            CtClass cc;
            try(InputStream is = new ByteArrayInputStream(c.getBytes())) {
                cc = pool.makeClass(is);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            cc.defrost();
            try {
                CtMethod m = cc.getDeclaredMethod("checkProhibitedClassNames");
                m.setBody("{ return false; }");
                c.setBytes(cc.toBytecode());
            } catch(NotFoundException | CannotCompileException | IOException e) {
                new RuntimeException("Encountered exception when weaving ClassLoaderUtil replacement", e).printStackTrace();
            }
        }
    }
}

This builds on the above Javassist usage to now load the class from the byte array provided by OSGi, transform it, and then write the new version back. Since this happens while OSGi is reading the class to begin with, there's never a time when there's an older, less-permissive version of the class running around, as long as I get my service in early enough.

This service is registered in the Activator without too much fuss:

public class JakartaActivator implements BundleActivator {
    private final List<ServiceRegistration<?>> regs = new ArrayList<>();

    @Override
    public void start(BundleContext context) throws Exception {
        regs.add(context.registerService(WeavingHook.class.getName(), new UtilWeavingHook(), null));
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        regs.forEach(ServiceRegistration::unregister);
        regs.clear();
    }
}

The final bit to get right was the "get the service in early enough" aspect. The main task was making sure that this bundle was activated before any XPages apps loaded, and that was a job for my old friend IServiceFactory, which is the extension point that's intended to add handlers for incoming URLs but has the desirable attribute of being initialized right at the very start of HTTP loading.
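For reference, that registration is the stock IBM Commons extension in the bundle's plugin.xml - the factory class name here is illustrative:

<extension point="com.ibm.commons.Extension">
	<!-- Instantiated at the very start of HTTP loading, which activates the bundle -->
	<service type="com.ibm.xsp.adapter.serviceFactory"
	         class="org.openntf.example.EarlyInitFactory" />
</extension>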

With this in place, I now have a fix automatically applied to that fiendish class on load, and MicroProfile Config (and future MP specs) works like a charm.

Conclusion

This was an arduous one, and I think the FTL victory jingle actually physically played when I got it working. I've hated this restriction for a long time, and I'm glad to finally have a workaround.

It was also enlightening to properly learn about OSGi's weaving capability. As mentioned above, this is what the ServiceLoader bridge does, and I'd tinkered with that at one point, but never got it working. I suspect now that it should be entirely doable to make it work, most likely also involving bringing in an implementation of the Declarative Services OSGi capability. That should be a fun project in its own right.

Moreover, the fact that I now have a system in place to do this weaving on the fly means that I may be able to un-fork some of the specs I had to fork to get working previously, which specifically required altering ServiceLoader calls. Even if I don't get the official service bridge in, perhaps I can use this technique to just alter the parts I need to on the fly, and otherwise use stock implementations from Maven.

But, for now, the way is cleared for further progress, and a bizarre mystery is solved. I call that a good day.

The Intricate Work of OSGi Dependencies on Domino

Wed Dec 02 15:03:06 EST 2020

Tags: domino osgi
  1. Converting Tycho Projects to maven-bundle-plugin, Initial Phase
  2. Winter Project #2: Maven P2 Repository Resolver
  3. OpenNTF Fork of p2-maven-plugin
  4. The Intricate Work of OSGi Dependencies on Domino

One of the main goals of OSGi is proper runtime dependency management, not only allowing a bundle to declare what its dependencies are to ensure that they're there, but even to select one of multiple available versions. For example, you might have one bundle that expects Guava 15 but not higher and another that expects 18 or above, and both can be loaded successfully if you have multiple Guava versions installed. The goal is that you're supposed to be able to just throw a whole bunch of bundles into a pot and the system will figure it out.

Before I get into why this system is hobbled on Domino, I'll add a little more background.

Mechanisms

For our purposes here, bundles have two main ways to declare what they are (there are more than this, but they're not relevant right now):

  • The bundle's symbolic name combined with its bundle version. For example, the core bundle for ODA is named "org.openntf.domino" and has a version like "11.0.1.202006091416".
  • The bundle's exported packages, which are Java class packages and might also individually have versions. Using ODA again as an example, it exports a slew of packages, such as org.openntf.domino and org.openntf.domino.nsfdata, though it doesn't specify versions for any of these.

The versions of the bundle and the exported packages don't need to be the same, nor do all the exported packages need to have the same version. This shows up a lot in "spec bundles", where a vendor will wrap a standard spec API in their own bundle for various reasons. For example, Apache Geronimo has a bundle called "org.apache.geronimo.specs.geronimo-jaxrs_2.1_spec", which provides the JAX-RS 2.1 spec. The bundle itself has a version of "1.1.0", while the exported packages are all version "2.1".

To require a bundle by name, you use the Require-Bundle header, like Require-Bundle: org.openntf.domino;bundle-version="11.0.0", which requires specifically the ODA bundle, version 11 or above. To require a package, you use Import-Package, like Import-Package: javax.ws.rs;version="2.1.0". The two methods have some implications when it comes to how the ClassLoaders work, which I touched on a bit earlier this year.

Culture Differences

The package-based mechanism is generally preferred nowadays over the bundle-based one, largely for the kind of flexibility and division of responsibilities it provides. For example, if I want version 2.1.0 of javax.ws.rs, my code shouldn't care at all whether it comes from Geronimo's bundle, JBoss's, or anywhere else - it's all the same thing (in theory). This is extremely common for projects created with bnd and tools based on it, which can generate imported and exported packages automatically based on your code and some small configuration. For example, the bundles that make up Open Liberty use bnd config files that have some loose configuration, which is then processed out to full listings of packages with version information. This is also often paired with OSGi's "Capability" system, but that's one of the "not important for now" things.

Domino - and I believe this is inherited from Eclipse, which does the same thing - is largely based on bundle requirements. For example, the org.eclipse.jdt.ui bundle (part of the Java development tools in Eclipse and Designer) requires a bevy of bundles by name and version and doesn't tag any of its exported packages with versions. This is similarly reinforced in the tools. Eclipse, unsurprisingly, uses Tycho to bring the "Eclipse PDE" style to the table, where the MANIFEST.MF file is more hand-crafted, and the tools don't do as much for you automatically. You can go full Import-Package and attach versions to your exported packages with Eclipse, but the tooling doesn't encourage it.

Conflicts in Practice

That brings us back to how this all contributes to making working with OSGi a little extra annoying on Domino. This is something that comes up constantly in my XPages Jakarta EE Support project: I want to bring in an implementation component for a JEE spec, but it will either already have OSGi rules that lean towards the "package-style" of doing things, versions and all, or will have such rules generated for it. Because there are common packages (like, say, javax.activation) that are supplied by bundles present in the Domino runtime, I want to use those. However, since the packages from Domino don't have versions specified, I need to re-wrap the bundles to import the packages without versions. Thus begins this ongoing nightmare.

Another approach I could take would be to bring my own, nicely-OSGi-ified versions of the afflicted packages, and options abound. However, that leads to a sneaky other trouble: because some of the Domino bundles are multi-spec monsters - with com.ibm.designer.lib.javamail being the absolute worst culprit - some other bundle on the system might casually import javax.activation and javax.mail. If I have this other spec implementation floating around, then it could get matched to my bundle for the former and IBM's for the latter... and crash right into the "exposed to a package via two dependency chains" problem. That wouldn't necessarily be an issue if everything used versions on packages, but leaving them off means it's kind of up to the container to match bundles to each other, and it's entirely happy creating impossible conflicts.

Ways Around It

When working through this sort of trouble, I've found a few ways around the things I've run into. The first is what I mentioned above: using p2-maven-plugin to re-wrap bundles with instructions that make them more Domino-friendly, along the lines of the sketch below.
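For example, re-importing a common package without a version, so that Domino's version-less exports can satisfy it, looks something like this in the plugin configuration (artifact coordinates and instruction values illustrative):

<artifact>
	<id>com.example:some-spec-impl:1.0.0</id>
	<instructions>
		<!-- Import unversioned so Domino's version-less export can match -->
		<Import-Package>javax.activation,*</Import-Package>
	</instructions>
</artifact>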

Aside from reworking existing bundles, I have a few times ended up creating fragment bundles that glom onto one bundle to tie it to another one after the fact, usually for the components that bridge different JEE specs.

The Wildcard: The System Bundle

In general, with OSGi, you can expect the packages you need to be provided in bundles, but it also allows for the core JDK to be implicitly available - that is, things like java.lang don't need to be imported, and are always present. That's fine, since it's generally well-enough-defined what makes up the JDK and what you'd need to instead bring in.

However, the Domino core classpath doesn't contain just the JDK, but also anything in jvm/lib/ext and (as a curveball) ndext from the Domino directory. Above, I specifically pointed out the javax.activation package, and that's because it suffers the most from both the javamail bundle as well as this. If you run tell http osgi packages javax.activation on Domino's console, you should see that bundles get this from two places: some use com.ibm.designer.lib.javamail, while others use the system bundle, org.eclipse.osgi. That "system bundle" concept is not only the thing in charge, but it also passively provides access to classes found in the lower-level classpath. In a fully OSGi environment, that system classpath will be pretty clean, but Domino's isn't that.

The good news here is that it isn't usually a big problem. If you import packages by a non-zero version, they'll never match from the system bundle this way. Still, it's always there, lurking, and the problems increase if you start adding more JAR files to jvm/lib/ext to avoid policy or amgr-memory issues. We've seen this periodically with ODA, where we used to support the use case of putting the core files there and using just a shim at the XSP layer, before it became too much of a hassle to manage. I ran into it again more recently when Guava showed up there: because I hadn't specified a version, it was able to match from the system bundle, and I ended up with classes available at compile time but missing at runtime.

The Upshot

The upshot here is that there's no simple advice for dealing with this. Cultural and implementation factors make bringing third-party code into Domino unusually difficult, but it can generally be dealt with via various patching mechanisms. The XPages JEE project's dependency module ended up turning into a trove of such workarounds, so perhaps it can be useful if you ever run into this sort of thing yourself.

Java ClassLoaders

Fri Jun 05 10:47:55 EDT 2020

Tags: java osgi xpages
  1. Java Services (Not the RESTful Kind)
  2. Java ClassLoaders
  3. Managed Beans to CDI
  4. The Myriad Idioms For Finding Implementations In Java

In my last post, I casually mentioned the concept of ClassLoaders a couple times, and I think that they deserve their own post. ClassLoaders are exactly the kind of thing that, once you've done Java long enough, you start to take for granted, but which isn't necessarily immediately obvious for people less immersed.

The Basics

The core job of a ClassLoader is what it says on the tin: it loads classes. Say you have this bit of code using a class from the core Java library:

long now = System.currentTimeMillis();

This uses two types: long, which is a built-in primitive type and not a class at all, and java.lang.System. long doesn't have to come from anywhere, but java.lang.System does, and that's the job of a ClassLoader. In this case, the Java VM will ask the contextual ClassLoader for a class by that name, and the ClassLoader will (at least in Java 8 - things got weird later) look into the core library and find a file named "java/lang/System.class" within "rt.jar", parse its binary contents into an executable class, and hand it back to the VM.

ClassLoaders are also the source of two problem reports you've likely seen: ClassNotFoundException and NoClassDefFoundError. These two basically mean the same thing: the running app tried to load a class by name, but it wasn't found - they just differ in context (the former generally when a class is asked for dynamically, the latter when it's referenced as part of compiled code). This sort of thing can occur when you write code using a class that's present in your development environment but is not present when run later - among XPages developers, this happens quite a bit when people drop some JARs into jvm/lib/ext in their Designer installation but don't do the same on Domino.

Resource Loading

In addition to finding classes, ClassLoaders have a few other tasks, the main one of which of interest to us is loading resources. In my previous post, I talked about how ServiceLoader looks for service files by a given name, like META-INF/services/com.sprockets.data.FizzBuzzConverter. It does this by checking with the current ClassLoader and calling cl.getResources("META-INF/services/com.sprockets.data.FizzBuzzConverter"), which will return a listing of resources from JARs (and JAR-like sources, like an NSF) that it knows about matching that name. In that way, multiple JARs can declare services with the same name without conflicting.
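Both halves of that are visible in a couple of lines, using the same example service name from the previous post:

// The resource-based half: every matching file across all known JARs
Enumeration<URL> found = Thread.currentThread().getContextClassLoader()
    .getResources("META-INF/services/com.sprockets.data.FizzBuzzConverter");

// And ServiceLoader's wrapper around it, instantiating each listed implementation
for(FizzBuzzConverter converter : ServiceLoader.load(FizzBuzzConverter.class)) {
    // use each implementation
}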

ClassLoader Trees

Though conceptually your running program has "a ClassLoader", in reality it's almost definitely a chained series of ClassLoaders, rooted in the core system ClassLoader and then drilling down more specifically to your app's code. For example, take an application running in Apache Tomcat. In that case, Tomcat's documentation describes four basic tiers:

  • The core JVM ("bootstrap") ClassLoader that comes with any running Java program. As Tomcat's docs note, this implementation may vary
  • The central ("system") ClassLoader that contains the "just above the metal" classes, such as those you may add in the "CLASSPATH" environment variable
  • The Tomcat-specific ("common") ClassLoader, containing classes shared among all running applications. For example, javax.servlet.Servlet would be found here
  • Your app's ClassLoader, containing classes you write as well as any third-party JARs you bundled into your WAR file in WEB-INF/lib

When your code executes and requests a new class, the runtime will check first with your app's local ClassLoader and return what it finds there if present - if the class isn't present there, then that ClassLoader will delegate up to its parent, and so forth until it either finds a class or hits the root and throws a NoClassDefFoundError.
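You can mimic a couple of tiers of that chain with stock classes - a minimal sketch with made-up paths, using the parent-first delegation that URLClassLoader does by default:

import java.net.URL;
import java.net.URLClassLoader;

public class LoaderChainSketch {
    public static void main(String[] args) throws Exception {
        // "common" delegates to the system loader; "app" delegates to "common"
        ClassLoader system = ClassLoader.getSystemClassLoader();
        ClassLoader common = new URLClassLoader(new URL[] { new URL("file:///opt/tomcat/lib/servlet-api.jar") }, system);
        ClassLoader app = new URLClassLoader(new URL[] { new URL("file:///opt/app/WEB-INF/classes/") }, common);
        // Resolved by "common", since the app tier doesn't contain it
        Class<?> servletClass = app.loadClass("javax.servlet.Servlet");
    }
}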

The way that each app has its own ClassLoader is also how you can have multiple apps on the same server that each know about common core classes, but don't step on each other's toes with their own custom classes. Though javax.servlet.Servlet is the same class for two running apps, one app could have an internal class named "com.example.SomeBusinessLogic" and it wouldn't be visible by other running apps.

Dynamic ClassLoaders

Though the normal case of ClassLoaders is that sort of "do I have this class? If not, ask my parent" chain, the fact that a ClassLoader is itself a custom Java class means that its behavior can be pretty arbitrary. This is present in a normal web app ClassLoader: it knows to look in the WEB-INF/classes path within the WAR file instead of the normal behavior of checking from the root of a JAR, and it knows how to look in WEB-INF/lib for additional JARs to search.

In an XPages application, the active ClassLoader is roughly similar to Tomcat's app ClassLoader example, but with a couple additional capabilities. The main one is that the NSF's ClassLoader - an instance of com.ibm.domino.xsp.module.nsf.ModuleClassLoader - has knowledge of how to treat an NSF as if it were a WAR file. In Designer's "Package Explorer" pane, you get a view of the NSF that makes it look basically like a normal WAR, where classes go in WEB-INF/classes and JARs go in WEB-INF/lib. However, it's still really a nebulous pool of notes floating around, and so the ModuleClassLoader does design-collection lookups for file resources of various types and loads the class bytecode or resource data from there.

It also, in a move presumably designed to inconvenience me personally, has explicit restrictions on what classes it can load: even though it knows about, for example, org.eclipse or com.ibm.domino.napi classes, it has a check to explicitly bar loading these. That's why, even if you configure Designer to see those classes and compile XPages code that references them, they won't be available at runtime.

OSGi ClassLoaders

OSGi ClassLoaders are a particular kind of dynamic ClassLoader. In addition to the normal hierarchical view of the world, they take on special responsibilities for ensuring that your OSGi module (which an XPages app kind of is) sees classes from other bundles based on its dependency rules, but not necessarily their resources. For example, take rules like this in an OSGi bundle's META-INF/MANIFEST.MF:

Require-Bundle: com.ibm.xsp.core
Import-Package: com.ibm.commons.util

These simple lines hide some beguiling complexity. With this definition, a running class in your bundle will be able to see:

  • All classes at the system level, such as java.lang.System
  • All classes contained within and exported by "com.ibm.xsp.core", such as com.ibm.xsp.FacesExceptionEx and com.ibm.xsp.url.UrlHandler
    • There's also special behavior going on here, because those classes are contained within an embedded JAR in the bundle, referenced as Bundle-ClassPath: lwpd.xsp.core.jar - this is an OSGi-ism
    • Though this bundle lists all of its packages in its Export-Package header, this is not a requirement: it's common for an OSGi bundle to have classes internally that are not accessible from outside
  • All classes exported by its bundle dependency that it marks as visibility:=reexport: "com.ibm.pvc.servlet" and "com.ibm.designer.lib.jsf"
    • This is why you can have a dependency on just "com.ibm.xsp.core" and access javax.faces.context.FacesContext even though it's not in the core XSP bundle
    • This is also transitive, though neither of those re-exported dependencies themselves re-export any dependencies
  • The classes from the "com.ibm.commons" bundle in the "com.ibm.commons.util" package. This means that com.ibm.commons.util.StringUtil is visible, but com.ibm.commons.extension.ExtensionManager is not, despite both being within the same bundle JAR

There are also tons of weird visibility and dependency details as well in OSGi, but that's the gist of it. Note that I specifically mentioned that the resources aren't visible. Though the Require-Bundle: com.ibm.xsp.core line makes all classes exported from the XSP core visible to your code, calling ServiceLoader.load(com.ibm.xsp.acf.HtmlFilteringFactory.class) will not find the DefaultHtmlFilteringFactory implementation declared in there, even though it's done in a ServiceLoader-compatible way. This is why IBM Commons papers over that difference with its "plugin.xml" extension declarations. OSGi actually contains a Service Loader Mediator specification to bridge this gap, but Domino doesn't include an implementation of that part.

Fragment Bundles

There's one special case with OSGi bundles that's worth highlighting: fragments. Normally, each bundle effectively has its own ClassLoader space, walled off from all others by OSGi's broker. However, if you declare your bundle as having a Fragment-Host of another active bundle, your code acts as if it's within the parent, gaining access to not just all of the parent bundle's classes, but also its non-class resources. Moreover, this works in reverse: the parent also gains access to the fragment's classes and resources, though it generally won't "know" about them at development time.
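The fragment half of that arrangement is a single manifest header - hypothetical names here:

Bundle-SymbolicName: com.example.impl.fragment
Fragment-Host: com.example.api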

This is a technique that's come in handy for me many times, in particular in cases like the XPages Jakarta EE Support project, where API bundles will use ServiceLoader to find their implementations. In those cases, one of the ways I get it to work in OSGi is to create a fragment bundle out of the implementation, meaning that the bundles remain distinct but now the API can find the META-INF/services files and classes it needs to operate.

This has a good number of other uses, too, such as providing platform-specific native code to an otherwise-platform-independent core bundle. The Notes.jar wrapper used in XPages land uses this type of technique. Though, to my knowledge, Notes.jar doesn't contain any actual native code, it's still delivered in two pieces:

  1. The "com.ibm.notes.java.api" bundle, which lists all of the exported packages but holds no code itself
  2. The "com.ibm.notes.java.api.win32.linux" bundle, which contains the actual Notes.jar and declares Fragment-Host: com.ibm.notes.java.api
    • I'm not sure why this is the case, but maybe Notes.jar is different on System i or something

If you have a bundle that needs access to lotus.domino classes, you then can either do Require-Bundle: com.ibm.notes.java.api or Import-Package: lotus.domino and it'll be resolved out of the fragment. There's also an Eclipse-ism in here: the first bundle has Eclipse-ExtensibleAPI: true, which is a tip-off to the IDE that it should specifically allow fragments to contribute available classes to the development environment. This is generally required when developing with Eclipse's plug-in tooling (shared with Designer), but it's not actually enforced one way or the other by the runtime.

Wrapping It Up

This is all definitely in the category of "you don't normally need to worry about it, but it's very helpful to know", like the previous ServiceLoader topic. Until you're implementing some low-level stuff, you're not likely to interact with the ClassLoader directly, especially to a level beyond finding Thread.currentThread().getContextClassLoader() or Foo.class.getClassLoader(). Knowing about it can help make clear what's going on in situations where a class shows up in development but not at runtime, or when the XPages ClassLoader tries to get too fancy and throws up on itself.

Winter Project #2: Maven P2 Repository Resolver

Sat Dec 28 14:12:26 EST 2019

  1. Converting Tycho Projects to maven-bundle-plugin, Initial Phase
  2. Winter Project #2: Maven P2 Repository Resolver
  3. OpenNTF Fork of p2-maven-plugin
  4. The Intricate Work of OSGi Dependencies on Domino

The second project I took on this past week was related to the first, and also relates to my ongoing struggles with Tycho.

While working on the NSF ODP Tooling, I figured that it could be a good candidate to move away from Tycho and to maven-bundle-plugin or bnd directly. Since I've been Mavenizing the Domino OSGi bundles for a good while now, and the tooling doesn't have any OSGi-dependent tests in it, it seemed like it could go smoothly. Unlike historical precedent, though, my enemy in this endeavor wasn't Domino, but rather Eclipse.

Repository Layouts

One of the big things that makes working with OSGi bundles - at least specifically ones in the Eclipse style - difficult with Maven is that they're generally provided using a repository layout called "p2". This is the evolution of the "site.xml" Update Site style and shares a lot of characteristics. In fact, a p2 repository will often have a "site.xml" file alongside its "artifacts.xml" and "contents.xml" (often Jar'd up) to provide backwards compatibility. It's how we package up XPages plugins and how once upon a time IBM provided [the XPages artifacts for Tycho use](https://openntf.org/main.nsf/project.xsp?r=project/IBM%20Domino%20Update%20Site%20for%20Build%20Management). As a live example, Eclipse 2019-12 is distributed via such a repository.

Maven has its own repository layout, variously called "Maven", "Maven2", "m2", or just "default". This serves a similar purpose, but is structured differently. Whereas p2 just has the repo and its "features" and "plugins" directories (and, potentially, composite repositories), Maven's repository system is organized like a conceptual folder tree based on translating a Group ID (like, say, "org.openntf.maven") into a successive series of subdirectories (like "org/openntf/maven"), followed by a directory for an Artifact ID, which in turn contains directories for each version, and finally within there are any of the actual files that make up a given named "artifact". As a live example, Maven Central is browsable in this manner.

Translating Between Them

Though the two layouts share a common core job - hosting Jar files (mainly) - they diverge enough in how the tools expect the metadata to be laid out that they're difficult to mesh. Tools like bnd can often work with whatever, and even Tycho can try to find OSGi bundles via Maven dependencies, but it's not smooth.

Over time, a pseudo-standard of adding p2 repositories to Maven has emerged, but it's only actually used as a marker to pass along to true Eclipse tools. The most common in our sphere is this construct, seen in projects like ODA:

<repository>
	<id>notes</id>
	<layout>p2</layout>
	<url>${notes-platform}</url>
</repository>

There, you reference the XPages update site somewhere, and then Tycho can use that to resolve dependencies for things like Require-Bundle: com.ibm.xsp.core. However, it's used only for Tycho's specific OSGi needs. You can't bring in the Tycho plugins and then have a non-Tycho module in your tree declare a dependency like that. Tycho has an implementation class for this, but it's intentionally stubbed out.

My Needs

The reason why tools that can work with both are so heavily slanted to the specific task of generating a true OSGi environment is that that's usually what you want. If you're designing, say, an Eclipse plugin or Eclipse-derived product, you want all of your tooling to know about OSGi from top to bottom, and that's where Tycho excels. It makes sure that all of your dependencies are correct and everything is OSGi-friendly.

This is as opposed to something like maven-bundle-plugin, which is most typically used to put an OSGi coat of paint on a project that isn't primarily geared towards OSGi.

I kind of want a middle ground, though. A project like the NSF ODP Tooling has grown into a sprawling hydra, with heads for Maven, Eclipse, Domino, and now Visual Studio Code. While that works, putting Tycho at the front of it ends up feeling needlessly proscriptive, and I'd love to toss it aside. However, I was blocked in my desire by a small thing: though Eclipse publishes their core bundles on Maven nowadays, Wild Web Developer is currently p2-only.

The Project

So I set out to solve this problem for myself and learn something in the process. As indicated by the <layout>p2</layout> option on the repository above, Maven's repository system is intended to be extensible. Unfortunately, it seems like it hasn't been extended particularly often in practice, and most of what I could find about it was second-hand: references to people saying that it's possible, without working examples.

Fortunately, it turns out that it's actually not too difficult to implement after all, and I did just that.

What this Maven plugin does is allow you to specify p2 repositories in any old Maven project, using the ID of the repository you add as the Group ID of dependencies and then the Symbolic Name as the artifact ID. Now, with the plugin added to the project, I'm able to reference the p2-only Wild Web Developer artifact I need:

<repositories>
	<repository>
		<id>org.eclipse.wildwebdeveloper</id>
		<url>https://download.eclipse.org/wildwebdeveloper/releases/0.8.0/</url>
		<layout>p2</layout>
	</repository>
</repositories>

<dependencies>
	<dependency>
		<groupId>org.eclipse.wildwebdeveloper</groupId>
		<artifactId>org.eclipse.wildwebdeveloper.xml</artifactId>
		<version>[0.5.0,)</version>
	</dependency>
</dependencies>

And, just like that, the bundle and its explicit dependencies show up in my Maven Dependencies group in Eclipse:

p2 Maven Dependencies in Eclipse

As a side bonus, this obviates the need for the mavenizeBundles half of the generate-domino-update-site project, since now I can just reference the generated site directly and get the dependencies, including with better behavior for embedded Jars than I had there:

com.ibm.xsp.core Dependency in Eclipse

The Tiny Details

I think that this plugin is about where it needs to be to suit my needs, but there is still an array of [tiny details I've yet to contend with](https://github.com/OpenNTF/p2-layout-provider/issues?q=is%3Aopen+is%3Aissue). It'll never be quite a perfect match for an arbitrary OSGi bundle (though some also contain useful Maven metadata), and so there will always be rough edges with something like this, but I think that it will solve a lot of headaches I'd otherwise have to deal with down the line.

If you think it'd be useful for your projects, take a look and let me know if you run into any trouble.

Converting Tycho Projects to maven-bundle-plugin, Initial Phase

Thu Aug 22 15:27:10 EDT 2019

Tags: maven osgi tycho
  1. Converting Tycho Projects to maven-bundle-plugin, Initial Phase
  2. Winter Project #2: Maven P2 Repository Resolver
  3. OpenNTF Fork of p2-maven-plugin
  4. The Intricate Work of OSGi Dependencies on Domino

To date, Tycho has been my tool of choice for developing Domino-targeted Maven projects. However, it's not without protest. Unlike most Maven plugins, Tycho inserts itself at the very start of the build process and takes over dependency management. Purely in Maven, you can use normal Maven dependencies, but only so long as you're pointing to a dependency that already has OSGi metadata (which, fortunately, most do), and only then to satisfy a Require-Bundle or Import-Package that also has to be present. This gets more annoying, though, when dealing with Eclipse, which removes the notion of Maven dependencies entirely when using Tycho and forces you to jump through hoops to do what you want. And, as a final kicker, Tycho's p2 repository support is completely broken in the latest release version of Maven.

So why do I keep using it, anyway?

Well, it brings a couple major benefits that are of particular importance for Domino:

  • It can use p2 repositories for dependencies. This matters because the XPages runtime plugins are not (yet?) available as normal Maven dependencies. Years back, IBM [provided a "Build Management" update site](https://openntf.org/main.nsf/project.xsp?r=project/IBM%20Domino%20Update%20Site%20for%20Build%20Management), which is helpful, but it's still an Eclipse-style p2 repository, not a Maven repository. Tycho can use p2 repositories natively, though, just as Eclipse does.
  • It constructs a true Equinox environment. This matters both when compiling your project and when running automated tests. The environment created by Tycho is the same Equinox OSGi runtime that Domino uses, and so it supports the same styles of bundle resolution and extensions that you get in Domino. Without this happening during the build, you lose some assurance that things at runtime will match your expectations.
  • It spawns tests in a separate process. This is a little esoteric, but it matters because launching a Notes environment on a non-Windows platform more-or-less requires setting up environment variables for the Notes/Domino directory and others, and these variables are not successfully set when using the normal maven-surefire-plugin runtime. This means that reliably running tests requires setting up the environment ahead of time, which is fiddlier and less automated.
  • It can generate new- and old-style Eclipse Update Sites. To be used in Designer and NSF-based Update Sites, an OSGi project has to be packaged up into a p2 repository along with an old-style "site.xml" file. Tycho can generate these (and can be assisted with "site.xml" when using the newer style), and it can also auto-generate source bundles, features, and repositories.

Alternatives and Workarounds

Some of the "hard" requirements for Tycho can be at least worked around.

Years ago, I wrote a Ruby script that would take a p2 site like IBM's or one generated from a newer release and "Mavenize" it by creating artifact information based on each bundle's OSGi manifest. I've since converted it to Java and included it in Darwino's Studio plugins, and yesterday added it to the generate-domino-update-site Maven plugin. Using that lets you declare dependencies on any of the bundles or embedded JARs in a normal Maven project:

        <dependency>
            <groupId>com.ibm.xsp</groupId>
            <artifactId>com.ibm.notes.java.api.win32.linux</artifactId>
            <version>[10.0.0,)</version>
            <classifier>Notes</classifier>
            <scope>provided</scope>
        </dependency>

This isn't perfect, since it's neither standardized nor generally available (go vote for the aha idea!), but at least it's reproducible and can be something of a de-facto standard if used enough.

Then there's the matter of generating appropriate OSGi metadata. Outside of the Tycho-using world, the main way to generate this is via a tool called bnd and its related tooling. bnd is something of a parallel world, with even an alternate tooling set for Eclipse to use instead of the default PDE. There are a couple ways to use bnd in a Maven build, but the one I'm familiar with to date is the maven-bundle-plugin. I've used this with Darwino to incidentally create OSGi metadata for the otherwise non-OSGi core modules, and I suspect that it gets used heavily this way. It's more powerful than that, though: it's a nice wrapper for bnd under the hood, supporting Declarative Services annotations and all the other OSGi goodies. In my case, I used it to generate the MANIFEST.MF with most of the defaults, but then added in some specifics to play nice in my Domino Equinox target.
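To give a concrete sense of it, a minimal configuration looks something like the following sketch. The version, symbolic name, and package names here are illustrative placeholders, not the actual values from my project:

<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <version>4.2.1</version>
    <extensions>true</extensions>
    <configuration>
        <instructions>
            <!-- bnd infers Import-Package from the code; headers here override the defaults -->
            <Bundle-SymbolicName>org.example.someproject</Bundle-SymbolicName>
            <Export-Package>org.example.someproject.api</Export-Package>
        </instructions>
    </configuration>
</plugin>

Paired with <packaging>bundle</packaging> in the project, this takes over JAR creation and emits the OSGi headers into the final MANIFEST.MF.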

I suspect that these bnd-based tools can also be a route to solving my automated-testing woes. For the Open Liberty Runtime project, I don't have to worry about that, since it's so dependent on running in actual Domino that the return-on-investment for setting up JUnit tests wouldn't be worth it. However, I recall seeing some Maven testing plugin that let you spawn an OSGi environment of your choice, and I think that something like that may be able to replace Tycho for me there.

Since p2 repositories/update sites are entirely an Eclipse-ism, most OSGi tooling doesn't care about them. That's where p2-maven-plugin comes in. Not only will it allow you to create p2 repositories, but it lets you define features in the configuration, meaning they don't have to be separate modules like in Tycho. And not only that, but it will also auto-OSGi-ify any Maven dependencies you bring in if they don't already have OSGi bundle information. It also lets you override existing bundle data on the fly if needed, such as if the dependencies and imports conflict with something on Domino.
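As a sketch of how that looks in practice (the wrapped artifact here is an arbitrary example, and the version is whatever's current when you read this):

<plugin>
    <groupId>org.reficio</groupId>
    <artifactId>p2-maven-plugin</artifactId>
    <version>1.3.0</version>
    <executions>
        <execution>
            <id>default-cli</id>
            <configuration>
                <artifacts>
                    <!-- Wrapped as an OSGi bundle automatically if it lacks metadata -->
                    <artifact>
                        <id>commons-io:commons-io:2.6</id>
                    </artifact>
                </artifacts>
            </configuration>
        </execution>
    </executions>
</plugin>

Running "mvn p2:site" with a configuration like this emits a p2 repository under the target directory, ready to point at from Eclipse or to import into an NSF Update Site.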

Eclipse Friendliness

Since I still use Eclipse to develop these projects, I want to be able to make use of the [XPages SDK](https://openntf.org/main.nsf/project.xsp?r=project/XPages%20SDK%20for%20Eclipse%20RCP)'s ability to run Domino's HTTP stack pointed at my active workspace. For that to work, I need to be able to get Eclipse to recognize my projects as functional PDE-compatible bundles even if I'm not using PDE for them. Fortunately, that process isn't difficult: once I set the location for MANIFEST.MF to be in "META-INF" in the project root, maven-bundle-plugin started generating the files there instead of within "target", and Eclipse started working with the projects as OSGi bundles. The only thing left to do then was to gitignore the generated files, since they don't need to be checked into source control anymore.
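For reference, that amounts to a small tweak to the plugin configuration, along the lines of the approach in the Felix documentation (the phase binding shown is the commonly-documented one; your setup may vary):

<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <configuration>
        <!-- Write the generated MANIFEST.MF to the project root for Eclipse/PDE -->
        <manifestLocation>META-INF</manifestLocation>
    </configuration>
    <executions>
        <execution>
            <id>bundle-manifest</id>
            <phase>process-classes</phase>
            <goals>
                <goal>manifest</goal>
            </goals>
        </execution>
    </executions>
</plugin>

A matching META-INF/MANIFEST.MF entry in .gitignore then keeps the generated file out of version control.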

Future Improvements

The big thing that is still an open problem is dealing with testing. I have some ideas for taking a swing at it, but for now it's the main thing preventing me from doing this for all of my Tycho projects.

Beyond that, I want to look a bit into bnd-maven-plugin. This diverges from maven-bundle-plugin in that it's geared towards using bnd configuration files directly. During the build process, I think the results would be the same, since maven-bundle-plugin can already pass through whatever configuration I want, but it would be a better match for the Eclipse bndtools tooling. Additionally, externalizing the bnd config files would mean they'd be the same if I decided to switch to Gradle, as Open Liberty uses.
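If I do go that route, the externalized configuration would live in a bnd.bnd file along these lines (the contents here are a hypothetical sketch, not from an actual project):

Bundle-SymbolicName: org.example.someproject
Bundle-Version: 1.0.0.${tstamp}
Export-Package: org.example.someproject.api
# Process any Declarative Services annotations in the code
-dsannotations: *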

Finally, and specific to this Open Liberty project, I may want to consider using bnd to generate Liberty Feature manifests, as Liberty itself does. These features are implemented as OSGi "subsystems" packaged as .esa files. Currently, I'm using esa-maven-plugin to generate their specialized manifests, but I've already hit some limitations in the area of cross-feature dependencies. Apparently, bnd takes some wrangling to suit this purpose, but it's worth it. I'll consider that one a "stretch goal", though.

For now, I'm pretty pleased with the new setup. The projects still work on Domino, I can run them on there from the workspace, I was able to eliminate the p2 feature projects outright, and now I don't have to worry about packaging up a dependencies site just to have something to point at in Eclipse. Heck, I can even use Visual Studio Code now! It's pretty nice.

Developing Open Liberty Features, Part 2

Sun Aug 18 09:14:24 EDT 2019

  1. Developing an Open/WebSphere Liberty UserRegistry with Tycho
  2. Developing Open Liberty Features, Part 2

In my earlier post, I went over the tack I took when developing a couple extension features for Open Liberty, and specifically the way I came at it with Tycho.

Shortly after I posted that, Alasdair Nottingham, the project lead for Open Liberty, dropped me a line to mention that programmatic service registration isn't preferred; instead, the idiomatic way is to use Declarative Services. I had encountered DS while fumbling my way through to getting these things working, but I had run into some bit of trouble or another, and I ended up settling on what I got working and not revisiting it.

Concepts

This was a perfect opportunity to go back and do things the right way, though, so I set out to do that this morning. In my initial reading up, I ran across a blog post from the always-helpful Vogella Blog that talks about coming at OSGi DS from essentially the same perspective I have: namely, having been used to Equinox and the Eclipse plugin/extension mechanism. As it turns out, when it comes to generic OSGi, Equinox can kind of poison your brain. The whole term "plug-in" instead of "bundle" comes from earlier Eclipse; "features", "update sites", and all of p2 are entirely Equinox-specific; and the "plugin.xml" extension mechanism is of a similar vintage. However, unlike some other vestiges that were tossed aside, "plugin.xml" is still in active use.

At its core, the Declarative Services system is generally similar to that route, in that you write classes to implement a given interface and then declare that your bundle provides that using XML files. The specifics are different - DS is more type-safe and it uses individual XML files in the "OSGI-INF" directory for each service - but the concept is similar. DS also has an annotation-based mechanism for this, which allows you to annotate your service classes directly and not worry about maintaining XML files. It's something of a compiler trick: the XML files still exist, but your tooling of choice (PDE, bnd, etc.) will generate the files based on your annotations. It took a bit for Eclipse to get on board with this, but, as of Neon, you can enable this processing in the preferences.

Implementation

Fortunately for me, my needs are simple enough that making the change was pretty straightforward. The first step was to delete the Activator class outright, as I won't need it for this. The second was to add an optional import for the org.osgi.service.component.annotations package in my Liberty extension bundle. I suspect that this is a bit of a PDE-ism: the annotations aren't even retained at runtime (and the package isn't present in the Liberty server), but this is the only mechanism Eclipse has to add a dependency for a plug-in project.
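In MANIFEST.MF terms, that optional import is a single line:

Import-Package: org.osgi.service.component.annotations;resolution:=optional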

The annotation for the user registry was as straightforward as can be, needing a single line in this heavily-clipped version of the class:

@Component(service=UserRegistry.class, configurationPid="dominoUserRegistry")
public class DominoUserRegistry implements UserRegistry {
}

With that, Eclipse started generating the associated XML file for me, and the registry showed up at runtime just as it had before.
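For the curious, the generated descriptor in "OSGI-INF" comes out looking roughly like this - I'm abbreviating from memory and assuming the same package as the TAI below, so treat the details as illustrative:

<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.3.0"
    name="org.openntf.openliberty.wlp.userregistry.DominoUserRegistry"
    configuration-pid="dominoUserRegistry">
  <implementation class="org.openntf.openliberty.wlp.userregistry.DominoUserRegistry"/>
  <service>
    <provide interface="com.ibm.websphere.security.UserRegistry"/>
  </service>
</scr:component>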

The TrustAssociationInterceptor was slightly more complicated because it had some extra initialization properties set, in particular the one to mark it as executing before normal SSO. This was a little tricky in two ways: Java annotations don't have any mechanism for specifying a literal Map for properties, and the before-SSO property is a boolean, but I could only write a string. It turned out that the property, uh, property on the annotation has a little mini-DSL where you can mark a property with its type. The result was:

@Component(
	service=TrustAssociationInterceptor.class,
	configurationPid=DominoTAI.CONFIG_PID,
	property={
		"invokeBeforeSSO:Boolean=true",
		"id=org.openntf.openliberty.wlp.userregistry.DominoTAI"
	}
)
public class DominoTAI implements TrustAssociationInterceptor {
}

Further Features

This is proving to be a pretty fun side project within a side project, and I think I'll take a crack at developing some more features when I have a chance. In particular, I'd like to try developing some API-contribution features so that they can be deployed to the server once and then used by web apps without having to package them (similar in concept to XPages Libraries). This is how Liberty implements its Jakarta EE specifications, and I could see making some extra ones. That's also exactly what CrossWorlds does, and so I imagine I'll crib a bunch of that work.

Developing an Open/WebSphere Liberty UserRegistry with Tycho

Fri Aug 16 15:08:10 EDT 2019

  1. Developing an Open/WebSphere Liberty UserRegistry with Tycho
  2. Developing Open Liberty Features, Part 2

In my last post, I put something of a stick in the ground and announced a multi-blog-post project to discuss the process of making an XPages app portable for the future. In true season-cliffhanger fashion, though, I'm not going to start that immediately, but instead have a one-off entry about something almost entirely unrelated.

Specifically, I'm going to talk about developing a custom UserRegistry and TrustAssociationInterceptor for Open Liberty/WebSphere Liberty. IBM provides documentation for this process, and it's serviceable enough, but I had to learn some specific things coming at it from a Domino perspective.

What These Services Are

Before I get in to the specifics, it's worth discussing what specifically these services are, especially TrustAssociationInterceptor with its ominous-sounding name.

A UserRegistry class is a mechanism to provide a Liberty server with authentication and user info services. Liberty has a couple of these built-in, and the prototypical ones are the basic and LDAP registries. Essentially, these do the job of the Directory and Directory Assistance on Domino.

A TrustAssociationInterceptor class is related. What it does is take an incoming HTTP request and look for any credentials it understands. If present, it tells Liberty that the request can be considered authenticated for a given user name. The classic mechanisms for this are HTTP Basic and form-cookie authentication, but this can also cover mechanisms like OAuth. In Domino, this maps to the built-in authentication mechanisms and, more particularly, to DSAPI filters.

How I Used Them

My desire to implement these developed when I was working on the Domino Open Liberty Runtime. I wanted to allow Liberty to use the containing Domino server as a user registry without having to enable LDAP and, as a stretch goal, I wanted to have some sort of implicit SSO without having to configure LTPA.

So I ended up devising something of an ad-hoc directory API exposed as a servlet on Domino, which Liberty could use to make the needed queries. To pair with that, I wrote a TrustAssociationInterceptor implementation that looks for Domino auth cookies in incoming requests, makes a call to a small servlet with that cookie, and grabs the associated username. That provides only one-way SSO, but that's good enough for now.
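To give a sense of the shape of that interceptor, here's a heavily-abbreviated sketch. The cookie name, the lookup helper, and the error handling are illustrative stand-ins, not the real implementation:

import java.util.Properties;

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.ibm.websphere.security.WebTrustAssociationException;
import com.ibm.websphere.security.WebTrustAssociationFailedException;
import com.ibm.websphere.security.tai.TAIResult;
import com.ibm.websphere.security.tai.TrustAssociationInterceptor;

public class DominoTAI implements TrustAssociationInterceptor {
	@Override
	public boolean isTargetInterceptor(HttpServletRequest req) throws WebTrustAssociationException {
		// Claim the request only when a Domino SSO cookie is present
		Cookie[] cookies = req.getCookies();
		if(cookies != null) {
			for(Cookie cookie : cookies) {
				if("DomAuthSessId".equals(cookie.getName())) {
					return true;
				}
			}
		}
		return false;
	}

	@Override
	public TAIResult negotiateValidateandEstablishTrust(HttpServletRequest req, HttpServletResponse resp) throws WebTrustAssociationFailedException {
		// Ask the Domino-side servlet who this cookie belongs to
		String userName = lookUpUserName(req);
		if(userName != null) {
			return TAIResult.create(HttpServletResponse.SC_OK, userName);
		}
		return TAIResult.create(HttpServletResponse.SC_UNAUTHORIZED);
	}

	private String lookUpUserName(HttpServletRequest req) {
		// HTTP call to the ad-hoc directory servlet, elided here
		return null;
	}

	@Override
	public int initialize(Properties props) throws WebTrustAssociationFailedException {
		return 0;
	}

	@Override
	public String getVersion() {
		return "1.0";
	}

	@Override
	public String getType() {
		return getClass().getName();
	}

	@Override
	public void cleanup() {
	}
}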

The Easy Part

The good part was that my going-in assumption - that my comfort with Tycho would help - proved generally correct. Since the final output I wanted was a bundle, I was able to just add it to my project structure like any other and work with it in Eclipse's PDE normally. Tycho and PDE didn't necessarily help much - I still had to track down the Liberty API plugins and make a local update site out of them - but that was old hat by this point.

What Made Development Weird

I went into the project in high spirits: the interfaces required weren't bad, and Liberty uses OSGi internally. I figured that, with my years of OSGi experience, this would be a piece of cake.

And, admittedly, it kind of was. The core concepts are the same: building with Tycho, bundle activators, MANIFEST.MF, and all that. However, Liberty's use of OSGi is, I believe, much more modern than Domino's, and certainly much less focused on Equinox specifically.

For one, though Liberty is indeed OSGi-based, it doesn't use Maven Tycho for its build process. Instead, it uses Gradle and the often-friendlier bnd tooling to handle its OSGi composition. That's not too huge of a difference, and the build process doesn't really affect the final built feature. The full differences are a whole big topic on their own, but the way they shake out for this purpose is essentially a difference in philosophy, and the different build mechanism was something of a herald of the downstream distinctions.

One big way this shows is in service registration. Coming from an Eclipse heritage, Equinox-based apps tend to use "plugin.xml" to register services, while Liberty (and, I assume, most others) favors programmatic registration of services inside the bundle activator. That style does indeed work on Equinox (including on Domino), but this was the first time I'd encountered it, and it took some getting used to.
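As a minimal sketch of that style (reusing the DominoUserRegistry class from earlier as the example service):

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

import com.ibm.websphere.security.UserRegistry;

public class Activator implements BundleActivator {
	private ServiceRegistration<UserRegistry> registration;

	@Override
	public void start(BundleContext context) throws Exception {
		// Register the service in code, rather than declaratively via plugin.xml
		registration = context.registerService(UserRegistry.class, new DominoUserRegistry(), null);
	}

	@Override
	public void stop(BundleContext context) throws Exception {
		if(registration != null) {
			registration.unregister();
		}
	}
}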

The other oddity was how you encapsulate your bundle as a feature in Liberty parlance. Liberty uses the term "feature" to refer to individual components that make up the server, and which you can configure in the "server.xml" file. These are declared using files similar to MANIFEST.MF with specialized headers to declare the name of the feature, the bundles that make it up, and any APIs it provides to the server and apps. In my case, I wrote a generic mechanism to deploy these features when a server is established, which writes the manifest files to the server's feature directory. Once they're deployed, they become available to the server as a feature with the "usr" prefix, like "usr:dominoUserRegistry-1.0" for my case.
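For a sense of the format, a "usr" feature manifest might look broadly like this - the headers are from memory and the content bundle name is assumed, so treat it as a sketch:

Subsystem-ManifestVersion: 1
Subsystem-SymbolicName: dominoUserRegistry-1.0; visibility:=public
Subsystem-Version: 1.0.0
Subsystem-Type: osgi.subsystem.feature
Subsystem-Content: org.openntf.openliberty.wlp.userregistry; version="[1.0.0,1.1.0)"
IBM-Feature-Version: 2
IBM-ShortName: dominoUserRegistry-1.0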

In The Future

I have some ideas for additional features I'd like to develop - providing implicit APIs for Darwino and Jakarta NoSQL/JNoSQL would be handy, for example. This way went pretty smoothly, but I'll probably develop non-Domino ones using either Gradle or Maven with the maven-bundle-plugin. Either way, it ended up fairly pleasant once I discarded my old assumptions, and it's another good entry in the "pros" column for Liberty.

AbstractCompiledPage, Missing Plugins, and MANIFEST.MF in FP10 and V10

Fri Oct 19 11:48:33 EDT 2018

  1. AbstractCompiledPage, Missing Plugins, and MANIFEST.MF in FP10 and V10
  2. Domino 11's Java Switch Fallout
  3. fontconfig, Java, and Domino 11
  4. Notes/Domino 12.0.2 Fallout
  5. Notes/Domino 14 Fallout
  6. PSA: ndext JARs on Designer 14 FP1 and FP2
  7. PSA: XPages Breaking Changes in 14.0 FP3

Since 9.0.1 FP10, and including V10 because it's largely identical for this purpose, I've encountered and seen others encountering a couple of strange problems with compiling XPages projects. This is a perfect opportunity for me to spin a tale about the undergirding frameworks, but I'll start out with the immediate symptoms and their fixes.

The Symptoms

There are three broad categories of problems I've seen:

  • "AbstractCompiledPage cannot be resolved to a type"
  • Missing third-party XPages libraries, such as ODA, resulting in messages like "The import org.openntf cannot be resolved"
  • Complaints about MANIFEST.MF, like "MANIFEST.MF has no main section" and others

The first two are usually directly related and share a fix, though the second can also stem from other sources; the last one is entirely distinct.

Fix #1: The Target Platform

The first two stem from problems in the active Target Platform - namely, one or both of the standard platform components going missing. The upshot is that you want your Target Platform preferences to look something like this:

(Screenshot: a working Target Platform configuration)

There should be a selected platform (the name doesn't matter, but "Running Platform" is the default name) with entries at least for ${eclipse_home} and for a directory inside your Notes data dir, here C:\Notes\Data\workspace\applications\eclipse. If they're missing, modify an existing platform or create a new one and add an "Installation"-type entry for ${eclipse_home} and a "Directory"-type one for the eclipse directory within your data dir.

Fix #2: Broken Plugins, Particularly ODA

Though V10 didn't change much when it comes to XPages, there are a few small differences. One in particular bit ODA: we had a dependency on the com.ibm.domino.commons plugin, which was in the standard Notes environment previously but is not as of V10 (though it's still present on the server). We fixed that one in the V10 release, and so you should update your ODA version if you hit this trouble. I don't think I've seen other plugins with this issue in the V10 transition, but it's a possibility if Fix #1 doesn't do it.

Fix #3: MANIFEST.MF

This one barely qualifies as a "fix", but it worked for me: if you see Designer complaining about MANIFEST.MF, you can usually beat it into submission by cleaning/rebuilding the project in question. The trouble is that Designer is, for some reason, skipping a step of the XPages compilation process, and cleaning usually kicks it into gear.

I've also seen others have success by deleting the error entry in the Errors view (which is actually a thing that you can do) and never seeing it again. I suspect that the real fix here is the same as above: during the next build, Designer creates the file properly and it goes away on its own.

The Causes

So what are the sources of these problems? The root reason is that Designer is a sprawling mountain of code, built on ancient frameworks and maintained by a diminished development team, but the immediate causes have to do with OSGi.

The first type of trouble - the target platform - most likely has to do with a change in the way Eclipse manages target platforms (look at the same prefs screen in 9.0.1 stock and you'll see it's quite different), and I suspect that there's a bug in the code that migrates between the two formats, possibly due to the dramatic age difference in the underlying Eclipse versions.

The second type of trouble - the MANIFEST.MF - is due to a behind-the-scenes switch in how Designer (and maybe the server) handles dependencies in XPages projects.

Target Platforms

The mechanism that OSGi projects - such as XPages applications - use for determining their dependencies at build time is the notion of a "Target Platform". The "target" refers to the notion that this is the platform that is expected to be available at runtime for what you're building - loosely equivalent to a basic Java classpath. An OSGi project is checked against this Target Platform to determine which classes are available based on their bundle names and versions.

This is distinct from the related concept of a "Running Platform". Designer, being based on Eclipse, is itself built on and runs using OSGi. Internally, it uses the same mechanisms that an XPages application does to determine what plugins it knows about and what services those plugins provide.

This distinction has historically been hidden from XPages developers due to the way the default Target Platform is set up, pointing at the same Running Platform it's using. So Designer itself has the core XPages plugins running, and it also exposes them to XPages applications as the Target. Similarly, the way we install XPages Libraries like ODA is to install them outright into the Designer Running Platform. This allows Designer to know about the library service provided, which it uses to populate the list of available plugins in the Xsp Properties editors.

However, as our trouble demonstrates, they're not inherently the same thing. In standalone OSGi development in Eclipse, it's often useful to have a Target Platform distinct from the Running Platform - such as the XPages environment for plugins - to ensure that you only depend on plugins that will be available at runtime. But when the two diverge in Designer, you end up with situations like this, where Designer-the-application knows about the XPages runtime and plugins and constructs an XPages project and translates XSP to Java using them, but then the compilation process with its empty Target Platform has no idea how to actually compile the generated code.

MANIFEST.MF

I've mentioned that an OSGi project "determines its dependencies" out of the Target Platform, but didn't mention the way it does that. The specific mechanism has changed over time (which is the source of our trouble), but the idea is that, in addition to the Java classes and resources, an OSGi bundle (or plugin) has a file that declares the names of the plugins it needs, including potentially a version range. So, for example, a plugin might say "I need org.apache.httpcomponents.httpclient at least version 4.5, but not 5.0 or higher". The compiler uses the Target Platform to find a matching plugin to compile the code, and the runtime environment (Domino in our case) does the same with its Running Platform when loading.
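In MANIFEST.MF syntax, that httpclient example reads:

Require-Bundle: org.apache.httpcomponents.httpclient;bundle-version="[4.5.0,5.0.0)"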

(Side note: you can also specify Java packages to include from any plugin instead of specific plugin names, but Designer does not do that, so it's not important for this purpose.)

(Other side note: this distinction comes, I believe, from Eclipse's switch from its own mechanism to OSGi in its 3.0 release, but I use "OSGi" to cover the general concept here.)

The old way to do this was in a file called "plugin.xml". If you look inside any XPages application in Package Explorer, you'll see this file and the contents will look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.0"?>
<plugin class="plugin.Activator"
  id="Galatea2dVCC_2fIKSG_dev5csyncagent_nsf" name="Domino Designer"
  provider="TODO" version="1.0.0">
  <requires>
    <!--AUTOGEN-START-BUILDER: Automatically generated by null. Do not modify.-->
    <import plugin="org.eclipse.ui"/>
    <import plugin="org.eclipse.core.runtime"/>
    <import optional="true" plugin="com.ibm.commons"/>
    <import optional="true" plugin="com.ibm.commons.xml"/>
    <import optional="true" plugin="com.ibm.commons.vfs"/>
    <import optional="true" plugin="com.ibm.jscript"/>
    <import optional="true" plugin="com.ibm.designer.runtime.directory"/>
    <import optional="true" plugin="com.ibm.designer.runtime"/>
    <import optional="true" plugin="com.ibm.xsp.core"/>
    <import optional="true" plugin="com.ibm.xsp.extsn"/>
    <import optional="true" plugin="com.ibm.xsp.designer"/>
    <import optional="true" plugin="com.ibm.xsp.domino"/>
    <import optional="true" plugin="com.ibm.notes.java.api"/>
    <import optional="true" plugin="com.ibm.xsp.rcp"/>
    <import optional="true" plugin="org.openntf.domino.xsp"/>
    <!--AUTOGEN-END-BUILDER: End of automatically generated section-->
  </requires>
</plugin>

You can see it here declaring a name for the pseudo-plugin that "is" the XPages application (oddly, "Domino Designer"), a couple other metadata bits, and, most importantly, the list of required plugins. This is the list that Designer historically (and maybe still; it's not clear) uses to populate the "Plug-in Dependencies" section in the Package Explorer view. It trawls through the Target Platform, finds a matching version of each of the named plugins (the latest version, since these have no specified ranges), adds it to the list, and recursively does the same for any re-exported dependencies of those plugins. "Re-exported" isn't exposed here as a concept, but it is a distinction in normal OSGi plugins.

Designer derives its starting points here from implicit required libraries in XPages applications (all those "org.eclipse" and "com.ibm" ones above) as well as through the special mechanism of XspLibrary extension contributions from plugins installed in the Running Platform. This is why a plugin like ODA has to be installed in Designer itself: in the runtime, it asks its plugins if they have any XspLibrary classes and uses those to determine the third-party plugin to load. Here, ODA declares that its library needs org.openntf.domino.xsp, so Designer adds that and its re-exported dependencies to the Plug-in Dependencies group.

With Eclipse's switch to OSGi in the 3.0 release in 2004, most of the functionality of plugin.xml moved to a file called "META-INF/MANIFEST.MF". This starkly-named file is a standard part of Java, and OSGi extends it to include bundle/plugin metadata and dependency declarations. As of 9.0.1 FP10, Designer also generates one of these (or is supposed to) when assembling the XPages project. For the same project, it looks like this:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Domino Designer
Bundle-SymbolicName: Galatea2dVCC_2fIKSG_dev5csyncagent_nsf;singleton:=true
Bundle-Version: 1.0.0
Bundle-Vendor: TODO
Require-Bundle: org.eclipse.ui,
  org.eclipse.core.runtime,
  com.ibm.commons,
  com.ibm.commons.xml,
  com.ibm.commons.vfs,
  com.ibm.jscript,
  com.ibm.designer.runtime.directory,
  com.ibm.designer.runtime,
  com.ibm.xsp.core,
  com.ibm.xsp.extsn,
  com.ibm.xsp.designer,
  com.ibm.xsp.domino,
  com.ibm.notes.java.api,
  com.ibm.xsp.rcp,
  org.openntf.domino.xsp
Eclipse-LazyStart: false

You can see much of the same information (though oddly not the Activator class) here, switched to the new format. This matches what you'll work with in normal OSGi plugins. For Eclipse/Equinox-targeted plugins, like XPages libraries, plugin.xml still exists, but it's reduced to just declaring extension points and contributions, and no longer includes dependency or name information.

Eclipse had moved to full OSGi by the time of Designer's pre-FP10 basis (2008's 3.4 Ganymede), but XPages's history goes back further, so I guess that the old-style Eclipse plugin.xml route is a relic of that. For a good while, Eclipse worked with the older-style plugins without batting an eye. FP10 brought a move to 2016's Eclipse 4.6 Neon, though, and I'm guessing that Eclipse dropped the backwards compatibility somewhere in the intervening eight years, so the XPages build process had to be adapted to generate both the older plugin.xml files for backwards compatibility as well as the newer MANIFEST.MF files.

I can't tell what the cause is, but, sometimes, Designer fails to populate the contents of this file. It might have something to do with the order of the builders in the internal Eclipse project file, or some inner exception that manifests as an incomplete build. Regardless, doing a project clean usually jogs Designer into doing its job.

Conclusion

The mix of layering a virtual Eclipse project over an NSF, the intricacies of OSGi, and IBM's general desire to insulate XPages developers from the black magic behind the scenes leads to any number of opportunities for bugs like these to crop up. Honestly, it's impressive that the whole thing holds together as well as it does. Even though it doesn't seem like it to look at the user-visible changes, the framework changes in FP10 were massive, and it's not at all surprising that things like this would crop up. It's just a little unfortunate that the fixes are in no way obvious unless you've been stewing in this stuff for years.

Dealing with OSGi Fragments in Tycho and Designer

Fri Sep 11 19:30:20 EDT 2015

Tags: java tycho osgi

This post is partly to spread information publicly and partly a useful note to my future self for the next time I run into this trouble.

In OSGi, the primary type of entity you're dealing with is a "Bundle" or "Plug-in" (the two terms are effectively the same for our needs). However, there's a specialized type that you may run into called a "Fragment". Fragments are similar to plug-ins in that they're a contained unit of Java code and resources, but they have the special property that they're attached to another plug-in and automatically come along for the ride when the main plug-in is used. This is useful in a few situations, such as code organization, platform-specific native libraries, after-the-fact additions, or providing library dependencies.

In the basic case, the only requirement is to specify in the fragment what the "parent" plug-in is (Eclipse provides a field for this in its editor) and then including the fragment in the installable feature alongside the plug-in. However, there are a few situations where a bit more work is required if you want to access the classes in the fragment: when used as part of a Tycho build and when used as an XSP Library in Designer (which may also apply to Eclipse dependency use generally).
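Behind that field in Eclipse's editor is the Fragment-Host header in the fragment's MANIFEST.MF - using the example names from the Tycho section below, it's a single line:

Fragment-Host: some.main.plugin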

Tycho

When doing a full Tycho build, even if both the plug-in and its fragment(s) are part of the current build, another project won't automatically include the fragment when doing the compilation. This can lead to a situation where the projects will compile cleanly in Eclipse (which handles the fragment attachment) but fail in Tycho. The trick, though small, is non-obvious: you have to tell the project that is using the fragment code about the fragment in its build.properties.

So say you have three projects: the main plug-in (some.main.plugin), a fragment attached to it (some.main.plugin.fragment), and the project consuming them (some.dependent.plugin). The normal first step is to include the main plug-in in the dependent plug-in's MANIFEST.MF as usual:

Require-Bundle: some.main.plugin

In Eclipse, this will suffice: both the main plug-in and its fragment will show up in the "Plug-in Dependencies" library. For Tycho, though, you have to tip it off using a line like this in build.properties:

extra.. = platform:/fragment/some.main.plugin.fragment

Think of this as saying "hey, dummy, don't forget about the fragment". Once you have that line, the Tycho-enabled Maven build should be able to resolve the fragment's classes and all will be well.

Designer

When using the plug-in and its fragment in an XSP Library in Designer, there's a similar-seeming problem: though Designer will include any direct dependencies of your Library plug-in in the class path, it won't pick up on any fragments by default (though it seems that Domino does). The trick here is that the primary plug-in has to tell Designer that it accepts fragments, which is done by setting Eclipse-ExtensibleAPI in the MANIFEST.MF file for some.main.plugin, like so:

Eclipse-ExtensibleAPI: true

Once that's in place, the fragment should start showing up in your NSF's classpath when the library is enabled.