Converting Tycho Projects to maven-bundle-plugin, Initial Phase

Thu Aug 22 15:27:10 EDT 2019

Tags: maven osgi tycho
  1. Converting Tycho Projects to maven-bundle-plugin, Initial Phase
  2. Winter Project #2: Maven P2 Repository Resolver
  3. OpenNTF Fork of p2-maven-plugin
  4. The Intricate Work of OSGi Dependencies on Domino

To date, Tycho has been my tool of choice for developing Domino-targeted Maven projects. However, it's not without its problems. Unlike most Maven plugins, Tycho inserts itself at the very start of the build process and takes over dependency management. Purely in Maven, you can use normal Maven dependencies, but only so long as you're pointing to a dependency that already has OSGi metadata (which, fortunately, most do), and only then to satisfy a Require-Bundle or Import-Package that also has to be present. This gets more annoying, though, when dealing with Eclipse, which removes the notion of Maven dependencies entirely when using Tycho and forces you to jump through hoops to do what you want. And, as a final kicker, Tycho's p2 repository support is completely broken in the latest release version of Maven.

So why do I keep using it, anyway?

Well, it brings a couple major benefits that are of particular importance for Domino:

  • It can use p2 repositories for dependencies. This matters because the XPages runtime plugins are not (yet?) available as normal Maven dependencies. Years back, IBM [provided a "Build Management" update site](https://openntf.org/main.nsf/project.xsp?r=project/IBM Domino Update Site for Build Management), which is helpful, but it's still an Eclipse-style p2 repository, not a Maven repository. Tycho can use p2 repositories natively, though, just as Eclipse does.
  • It constructs a true Equinox environment. This matters both when compiling your project and when running automated tests. The environment created by Tycho is the same Equinox OSGi runtime that Domino uses, and so it supports the same styles of bundle resolution and extensions that you get in Domino. Without this happening during the build, you lose some assurance that things at runtime will match your expectations.
  • It spawns tests in a separate process. This is a little esoteric, but it matters because launching a Notes environment on a non-Windows platform more-or-less requires setting up environment variables for the Notes/Domino directory and others, and these variables are not successfully set when using the normal maven-surefire-plugin runtime. This means that reliably running tests requires setting up the environment ahead of time, which is fiddlier and less automated.
  • It can generate new- and old-style Eclipse Update Sites. To be used in Designer and NSF-based Update Sites, an OSGi project has to be packaged up into a p2 repository along with an old-style "site.xml" file. Tycho can generate these (and can be assisted with "site.xml" when using the newer style), and it can also auto-generate source bundles, features, and repositories.

Alternatives and Workarounds

Some of the "hard" requirements for Tycho can be at least worked around.

Years ago, I wrote a Ruby script that would take a p2 site like IBM's or one generated from a newer version and "Mavenize" it by creating artifact information based on each bundle's OSGi manifest. I've since converted it to Java and included it in Darwino's Studio plugins, and yesterday added it to the generate-domino-update-site Maven plugin. Using that lets you declare dependencies on any of the bundles or embedded JARs in a normal Maven project:

        <dependency>
            <groupId>com.ibm.xsp</groupId>
            <artifactId>com.ibm.notes.java.api.win32.linux</artifactId>
            <version>[10.0.0,)</version>
            <classifier>Notes</classifier>
            <scope>provided</scope>
        </dependency>

This isn't perfect, since it's neither standardized nor generally available (go vote for the aha idea!), but at least it's reproducible and can be something of a de-facto standard if used enough.

Then there's the matter of generating appropriate OSGi metadata. Outside of the Tycho-using world, the main way to generate this is with a tool called bnd and its related tools. bnd is kind of a parallel world, and there's even an alternate tooling set for Eclipse instead of the default PDE. There are a couple ways to use bnd in a Maven build, but the one I'm familiar with to date is the maven-bundle-plugin. I've used this with Darwino to incidentally create OSGi metadata for the otherwise non-OSGi core modules, and I suspect that it gets used heavily this way. It's more powerful than that, though, and is a nice wrapper for bnd under the hood, supporting Declarative Services annotations and all the other OSGi goodies. In my case, I used it to generate the MANIFEST.MF with most of the defaults, but then added in some specifics to play nice in my Domino Equinox target.
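For illustration, a configuration along those lines might look something like this - the plugin version, symbolic name, and package patterns here are placeholders rather than my project's actual values, and it assumes the module uses the "bundle" packaging type:

```xml
<plugin>
	<groupId>org.apache.felix</groupId>
	<artifactId>maven-bundle-plugin</artifactId>
	<version>4.2.1</version>
	<extensions>true</extensions>
	<configuration>
		<instructions>
			<Bundle-SymbolicName>org.example.domino.bundle;singleton:=true</Bundle-SymbolicName>
			<Export-Package>org.example.domino.*</Export-Package>
			<!-- Domino-specific adjustments would go here, e.g. loosening imports
			     that the target Equinox environment can't satisfy cleanly -->
			<Import-Package>*;resolution:=optional</Import-Package>
		</instructions>
	</configuration>
</plugin>
```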

I suspect that these bnd-based tools can also be a route to solving my automated-testing woes. For the Open Liberty Runtime project, I don't have to worry about that, since it's so dependent on running in actual Domino that the return-on-investment for setting up JUnit tests wouldn't be worth it. However, I recall seeing some Maven testing plugin that let you spawn an OSGi environment of your choice, and I think that something like that may be able to replace Tycho for me there.

Since p2 repositories/update sites are entirely an Eclipse-ism, most OSGi tooling doesn't care about them. That's where p2-maven-plugin comes in. Not only will it allow you to create p2 repositories, but it lets you define features in the configuration, meaning they don't have to be separate modules like in Tycho. And not only that, but it will also auto-OSGi-ify any Maven dependencies you bring in if they don't already have OSGi bundle information. It also lets you override existing bundle data on the fly if needed, such as if the dependencies and imports conflict with something on Domino.
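As a rough sketch of what a basic configuration looks like (the plugin version and the artifact listed here are just examples, and the feature-definition syntax I mentioned varies by plugin version and isn't shown):

```xml
<plugin>
	<groupId>org.reficio</groupId>
	<artifactId>p2-maven-plugin</artifactId>
	<version>1.3.0</version>
	<executions>
		<execution>
			<id>generate-p2-site</id>
			<phase>package</phase>
			<goals>
				<goal>site</goal>
			</goals>
			<configuration>
				<artifacts>
					<!-- Non-OSGi Maven dependencies get bnd-wrapped automatically -->
					<artifact>
						<id>commons-io:commons-io:2.6</id>
						<source>true</source>
					</artifact>
				</artifacts>
			</configuration>
		</execution>
	</executions>
</plugin>
```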

Eclipse Friendliness

Since I still use Eclipse to develop these projects, I want to be able to make use of the [XPages SDK](https://openntf.org/main.nsf/project.xsp?r=project/XPages SDK for Eclipse RCP)'s ability to run Domino's HTTP stack pointed at my active workspace. For that to work, I need to be able to get Eclipse to recognize my projects as functional PDE-compatible bundles even if I'm not using PDE for them. Fortunately, that process isn't difficult: once I set the location for MANIFEST.MF to be in "META-INF" in the project root, maven-bundle-plugin started generating the files there instead of within "target", and Eclipse started working with the projects as OSGi bundles. The only thing left to do then was to gitignore the generated files, since they don't need to be checked into source control anymore.
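For reference, the relevant bit of configuration is roughly this - a sketch, assuming the rest of the maven-bundle-plugin setup is already in place:

```xml
<plugin>
	<groupId>org.apache.felix</groupId>
	<artifactId>maven-bundle-plugin</artifactId>
	<configuration>
		<!-- Write the generated MANIFEST.MF to META-INF in the project root so Eclipse/PDE can see it -->
		<manifestLocation>META-INF</manifestLocation>
	</configuration>
</plugin>
```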

Future Improvements

The big thing that is still an open problem is dealing with testing. I have some ideas for taking a swing at it, but for now it's the main thing preventing me from doing this for all of my Tycho projects.

Beyond that, I want to look a bit into bnd-maven-plugin. This diverges from maven-bundle-plugin in that it's geared towards using bnd configuration files directly. During the build process, I think the results would be the same, since maven-bundle-plugin can already pass through whatever configuration I want, but it would be a better match for the Eclipse bndtools tooling. Additionally, externalizing the bnd config files would mean they'd be the same if I decided to switch to Gradle, as Open Liberty uses.

Finally, and specific to this Open Liberty project, I may want to consider using bnd to generate Liberty Feature manifests, as Open Liberty itself does. These features are implemented as OSGi "subsystems" packaged as .esa files. Currently, I'm using esa-maven-plugin to generate their specialized manifests, but I've already hit some limitations in the area of cross-feature dependencies. Apparently, bnd takes some wrangling to suit this purpose, but it's worth it. I'll consider that one a "stretch goal", though.

For now, I'm pretty pleased with the new setup. The projects still work on Domino, I can run them on there from the workspace, I was able to eliminate the p2 feature projects outright, and now I don't have to worry about packaging up a dependencies site just to have something to point at in Eclipse. Heck, I can even use Visual Studio Code now! It's pretty nice.

Developing Open Liberty Features, Part 2

Sun Aug 18 09:14:24 EDT 2019

  1. Developing an Open/WebSphere Liberty UserRegistry with Tycho
  2. Developing Open Liberty Features, Part 2

In my earlier post, I went over the tack I took when developing a couple extension features for Open Liberty, and specifically the way I came at it with Tycho.

Shortly after I posted that, Alasdair Nottingham, the project lead for Open Liberty, dropped me a line to mention how programmatic service registration isn't preferred, and instead the idiomatic way is to use Declarative Services. I had encountered DS while fumbling my way through to getting these things working, but I had run into some bit of trouble or another, and I ended up settling on what I got working and not revisiting it.

Concepts

This was a perfect opportunity to go back and do things the right way, though, so I set out to do that this morning. In my initial reading up, I ran across a blog post from the always-helpful Vogella Blog that talks about coming at OSGi DS from essentially the same perspective I have: namely, having been used to Equinox and the Eclipse plugin/extension mechanism. As it turns out, when it comes to generic OSGi, Equinox can kind of poison your brain. The whole term "plug-in" instead of "bundle" comes from earlier Eclipse; "features", "update sites", and all of p2 are entirely Equinox-specific; and the "plugin.xml" extension mechanism is of a similar vintage. However, unlike some other vestiges that were tossed aside, "plugin.xml" is still in active use.

At its core, the Declarative Services system is generally similar to that route, in that you write classes to implement a given interface and then declare that your bundle provides that using XML files. The specifics are different - DS is more type-safe and it uses individual XML files in the "OSGI-INF" directory for each service - but the concept is similar. DS also has an annotation-based mechanism for this, which allows you to annotate your service classes directly and not worry about maintaining XML files. It's something of a compiler trick: the XML files still exist, but your tooling of choice (PDE, bnd, etc.) will generate the files based on your annotations. It took a bit for Eclipse to get on board with this, but, as of Neon, you can enable this processing in the preferences.

Implementation

Fortunately for me, my needs are simple enough that making the change was pretty straightforward. The first step was to delete the Activator class outright, as I won't need it for this. The second was to add an optional import for the org.osgi.service.component.annotations package in my Liberty extension bundle. I suspect that this is a bit of a PDE-ism: the annotations aren't even retained at runtime (and the package isn't present in the Liberty server), but this is the only mechanism Eclipse has to add a dependency for a plug-in project.
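Concretely, that works out to a manifest entry along these lines:

```
Import-Package: org.osgi.service.component.annotations;resolution:=optional
```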

The annotation for the user registry was as straightforward as can be, needing a single line in this heavily-clipped version of the class:

@Component(service=UserRegistry.class, configurationPid="dominoUserRegistry")
public class DominoUserRegistry implements UserRegistry {
}

With that, Eclipse started generating the associated XML file for me, and the registry showed up at runtime just as it had before.
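For the curious, the generated "OSGI-INF" component description ends up looking roughly like this - the exact SCR namespace version, file name, and fully-qualified class names depend on the tooling and the real package, so treat it as a sketch:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.3.0"
		name="org.openntf.openliberty.wlp.userregistry.DominoUserRegistry"
		configuration-pid="dominoUserRegistry">
	<implementation class="org.openntf.openliberty.wlp.userregistry.DominoUserRegistry"/>
	<service>
		<provide interface="com.ibm.websphere.security.UserRegistry"/>
	</service>
</scr:component>
```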

The TrustAssociationInterceptor was slightly more complicated because it had some extra initialization properties set, in particular the one to mark it as executing before normal SSO. This was a little tricky in two ways: Java annotations don't have any mechanism for specifying a literal Map for properties, and the before-SSO property is a boolean, but I could only write a string. It turned out that the property, uh, property on the annotation has a little mini-DSL where you can mark a property with its type. The result was:

@Component(
	service=TrustAssociationInterceptor.class,
	configurationPid=DominoTAI.CONFIG_PID,
	property={
		"invokeBeforeSSO:Boolean=true",
		"id=org.openntf.openliberty.wlp.userregistry.DominoTAI"
	}
)
public class DominoTAI implements TrustAssociationInterceptor {
}

Further Features

This is proving to be a pretty fun side project within a side project, and I think I'll take a crack at developing some more features when I have a chance. In particular, I'd like to try developing some API-contribution features so that they can be deployed to the server once and then used by web apps without having to package them (similar in concept to XPages Libraries). This is how Liberty implements its Jakarta EE specifications, and I could see making some extra ones. That's also exactly what CrossWorlds does, and so I imagine I'll crib a bunch of that work.

Upcoming Webinar with Cameron Gregor: How We Are Using XPages

Sat Aug 17 10:05:54 EDT 2019

Tags: xpages

I ended my XPages post the other day with a request for people who are working on large XPages applications to hit me up on Twitter to tell me about them. Shortly thereafter, the estimable Cameron Gregor did just that. Moreover, he had the suggestion of turning the discussion into an open webinar, so that others can join.

He made a post on his site with the details and a handy time-zone table to account for our respective locations, and the summary is:

  • Sydney Time: 7:00 AM on August 27th, 2019
  • US Eastern: 5:00 PM on August 26th
  • GMT: 10:00 PM on August 26th
  • Zoom Link

I'm pretty curious to take a look myself, and I hope you'll join us in just over a week!

Developing an Open/WebSphere Liberty UserRegistry with Tycho

Fri Aug 16 15:08:10 EDT 2019

  1. Developing an Open/WebSphere Liberty UserRegistry with Tycho
  2. Developing Open Liberty Features, Part 2

In my last post, I put something of a stick in the ground and announced a multi-blog-post project to discuss the process of making an XPages app portable for the future. In true season-cliffhanger fashion, though, I'm not going to start that immediately, but instead have a one-off entry about something almost entirely unrelated.

Specifically, I'm going to talk about developing a custom UserRegistry and TrustAssociationInterceptor for Open Liberty/WebSphere Liberty. IBM provides documentation for this process, and it's alright enough, but I had to learn some specific things coming at it from a Domino perspective.

What These Services Are

Before I get in to the specifics, it's worth discussing what specifically these services are, especially TrustAssociationInterceptor with its ominous-sounding name.

A UserRegistry class is a mechanism to provide a Liberty server with authentication and user info services. Liberty has a couple of these built-in, and the prototypical ones are the basic and LDAP registries. Essentially, these do the job of the Directory and Directory Assistance on Domino.

A TrustAssociationInterceptor class is related. What it does is take an incoming HTTP request and look for any credentials it understands. If present, it tells Liberty that the request can be considered authenticated for a given user name. The classic mechanisms for this are HTTP Basic and form-cookie authentication, but this can also cover mechanisms like OAuth. In Domino, this maps to the built-in authentication mechanisms and, more particularly, to DSAPI filters.

How I Used Them

My desire to implement these developed when I was working on the Domino Open Liberty Runtime. I wanted to allow Liberty to use the containing Domino server as a user registry without having to enable LDAP and, as a stretch goal, I wanted to have some sort of implicit SSO without having to configure LTPA.

So I ended up devising something of an ad-hoc directory API exposed as a servlet on Domino, which Liberty could use to make the needed queries. To pair with that, I wrote a TrustAssociationInterceptor implementation that looks for Domino auth cookies in incoming requests, makes a call to a small servlet with that cookie, and grabs the associated username. That provides only one-way SSO, but that's good enough for now.

The Easy Part

The good part was that my assumption going in - that my comfort with Tycho would help - was generally correct. Since the final output I wanted was a bundle, I was able to just add it to my project structure like any other, and work with it in Eclipse's PDE normally. Tycho and PDE didn't necessarily help much - I still had to track down the Liberty API plugins and make a local update site out of them, but that was old hat by this point.

What Made Development Weird

I went into the project in high spirits: the interfaces required weren't bad, and Liberty uses OSGi internally. I figured that, with my years of OSGi experience, this would be a piece of cake.

And, admittedly, it kind of was. The core concepts are the same: building with Tycho, bundle activators, MANIFEST.MF, and all that. However, Liberty's use of OSGi is, I believe, much more modern than Domino's, and certainly much less focused on Equinox specifically.

For one, though Liberty is indeed OSGi-based, it doesn't use Maven Tycho for its build process. Instead, it uses Gradle and the often-friendlier bnd tooling to handle its OSGi composition. That's not too huge of a difference, and the build process doesn't really affect the final built feature. The full differences are a whole big topic on their own, but the way they shake out for this purpose is essentially a difference in philosophy, and the different build mechanism was something of a herald of the downstream distinctions.

One big way this shows is in service registration. Coming from an Eclipse heritage, Equinox-based apps tend to use "plugin.xml" to register services, while Liberty (and most others, I assume) favors programmatic registration of services inside the bundle activator. While this does indeed work on Equinox (including on Domino), this was the first time I'd encountered it, and it took some getting used to.
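To make the contrast concrete, here's a minimal sketch of that activator style, using the UserRegistry case from this project - the class placement and the service property shown are illustrative, not the actual set the real implementation registers:

```java
import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

import com.ibm.websphere.security.UserRegistry;

public class Activator implements BundleActivator {
	private ServiceRegistration<UserRegistry> registration;

	@Override
	public void start(BundleContext context) throws Exception {
		// Illustrative property; the real registration carries more configuration
		Hashtable<String, Object> props = new Hashtable<>();
		props.put("config.id", "dominoUserRegistry");

		// Register the implementation programmatically instead of via plugin.xml
		// (assumes DominoUserRegistry lives in this same package)
		registration = context.registerService(UserRegistry.class, new DominoUserRegistry(), props);
	}

	@Override
	public void stop(BundleContext context) throws Exception {
		if (registration != null) {
			registration.unregister();
		}
	}
}
```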

The other oddity was how you encapsulate your bundle as a feature in Liberty parlance. Liberty uses the term "feature" to refer to individual components that make up the server, and which you can configure in the "server.xml" file. These are declared using files similar to MANIFEST.MF with specialized headers to declare the name of the feature, the bundles that make it up, and any APIs it provides to the server and apps. In my case, I wrote a generic mechanism to deploy these features when a server is established, which writes the manifest files to the server's feature directory. Once they're deployed, they become available to the server as a feature with the "usr" prefix, like "usr:dominoUserRegistry-1.0" for my case.

In The Future

I have some ideas for additional features I'd like to develop - providing implicit APIs for Darwino and Jakarta NoSQL/JNoSQL would be handy, for example. This way went pretty smoothly, but I'll probably develop non-Domino ones using either Gradle or Maven with the maven-bundle-plugin. Either way, it ended up fairly pleasant once I discarded my old assumptions, and it's another good entry in the "pros" column for Liberty.

How Do You Solve a Problem Like XPages? Redux

Tue Aug 13 17:33:00 EDT 2019

Tags: xpages

The better part of a year ago, I mused about what to do about the XPages problem. For better or for worse, I'm still musing about it to this day, without a good answer. The nice part for me is that it's not actually my job to come up with a real plan to "solve" XPages in a grand scale, but I do have my own set of clients I'm responsible for, and I've been personally invested in it for so long that I'd like to be involved in bringing it to a safe landing.

Moving Away

And I do think that that "landing" almost definitely has to be a path from XPages to something non-XPages. Hypothetically, a path forward would be for HCL to staff up on a new XPages team and improve the platform. Even if they did, though, I don't think it'd be wise for customers to rely on that - not even because I'd doubt the intent, but rather because any single-vendor, closed-source web stack without a large developer community to buffer it is an unreliable foundation.

If it'd be unsafe to rely on a revitalized platform, I think it's certainly unsafe to rely on one that's clearly in maintenance mode. The nature of web development is such that a stationary platform may as well be moving backwards, and not just because it will miss out on Web Workers or other new technology. Sooner or later, Safari or Chrome will remove a capability for security purposes and either Domino or the Dojo version XPages uses will be caught flat-footed. We've already been dealing with different definitions of "define" and various security improvements tripping us up for pretty much the entire life of XPages, and that kind of thing certainly isn't going to go away. Heck, how long do you figure user agents are going to remain reliable? It's unfortunate, but the fact that XPages came out of that "Web 2.0" era means a given page is less likely to function properly in five years than a JavaScript-free page made in 1995 that's still going strong.

Candidates for a Path

So I do think it's important to have a path, but it's not yet defined. A couple candidates spring to mind for me, but each one has one major drawback or another:

Hoist XPages Back to Jakarta EE

By this I mean taking XPages more-or-less as-is and running it on a normal JEE server. This certainly works, and access to the source would let me make it work better, but it kind of kicks the can down the road. XPages itself would still be moribund, and just running it on, say, a Domino-connected Open Liberty runtime wouldn't magically make it modern.

This would, though, provide some breathing room to manipulate an app in a better environment and transform it gradually. A JEE app can use a number of technologies that an XPages app can't currently, and so this would be a way to migrate the code without ever having a hard cutoff.

Bring the XSP Components to JSF 2.x

This is essentially the "upgrade JSF" request that has followed XPages since just after its birth, but with the slightly-lower goal of leaving XPages-the-stack where it is but making a copy of the components and infrastructure that can be brought into a normal JSF runtime like any other set of components. This would possibly be the hairiest of all options, since it wouldn't really be worth it unless things work near-100%, and there are so many little edge cases that it's harrowing to think about. Take the _xsp* methods grafted onto XPages's javax.faces.component.UIComponent alone, or whatever weird ways the XPages Ajax model differs from JSF's.

Still, it'd be doable if desired, and it'd provide a reasonable path to progressively "melt" the XSP components down until they're very-thin wrappers over normal JSF stuff, until they don't really diverge at all. With infinite resources, this would probably be the nicest route.

Try Transforming XSP Markup

By this I mean two similar possibilities: making an XSP-to-Java "compiler" that emits stock JSF components, or one-pass transforming XSP XML into JSF-compatible XML, though I think only the latter would be worth pursuing. While this could potentially rival the complexity of the JSF "update" route, I think that this would allow more room for things to break. For example, if you made an XML transformer, it could target a subset of controls but emit standard comments with TODOs to cover the parts it doesn't handle. That wouldn't be perfect, but you'd end up with real-world-compatible code without some sort of intermediary translation layer, which is essentially what keeping the old components would be.

This would probably have to involve porting over the SSJS EL extension and thus retaining support for various uncomfortable XPages- and Domino-isms, but them's the breaks, I suppose.

Focus on REST APIs Only

This is the route where we would basically wash our hands of traditional XPages applications (minus bug fixes) and instead target writing only REST services, whether it be in the NSF, in plugins, or in normal JEE apps. This has the advantage of being easiest for HCL, since it more or less works (though the ExtLib's use of Wink holds back the JAX-RS version significantly), but still would mostly be a "rewrite all your applications" route. The only salvaged code would be anything that's already cleanly separated in Java or SSJS, and I suspect that that's not the bulk of it.

Progressing Without a Defined Path

In the mean time, aside from personally becoming acquainted with other technology, I think it behooves all of us with actively-maintained XPages apps to step up the progress on making them portable. I've been doing this heavily for one of my client projects, and I'll have more specifics to say about that in the future. Some parts are straightforward and have remained good advice for a long time: don't use SSJS, do adopt Jakarta EE technologies, and adopt automated builds (including for your non-XPages NSFs).

The specifics get a lot more complicated, unfortunately. Since we've been swimming in the same XPages pond for a decade, even mostly-clean Java code is likely to be infested with XPages-isms, both out of habit and out of necessity. For example, hooking up file uploads to a Java bean requires using an XPages-specific class, which barely transfers to OSGi-based servlets, let alone any other environment. And there are tons of little things, like using XSPContext to get URL parameters. It's going to be a messy process, but I think it will be necessary for any apps that you plan to keep using for more than a year or so.

I'll probably end up turning this into a series, where I'll discuss the various hurdles I've overcome in taking a complicated XPages app and gradually laying the groundwork for a different UI technology.

And, in the mean time, if you're working with large active XPages applications, hit me up on Twitter and let me know how you've gone about making them. I realized earlier that, while I certainly have detailed knowledge of how people could write XPages applications, I don't have a good bead on how many actual XPages apps of each stripe exist and what the prevailing methods still are.

How the ODP Compiler Works, Part 7

Mon Jul 08 12:24:26 EDT 2019

Tags: nsfodp
  1. Next Project: ODP Compiler
  2. NSF ODP Tooling 1.0
  3. NSF ODP Tooling Example Project
  4. NSF ODP Tooling 1.2
  5. How the ODP Compiler Works, Part 1
  6. How the ODP Compiler Works, Part 2
  7. How the ODP Compiler Works, Part 3
  8. How the ODP Compiler Works, Part 4
  9. How the ODP Compiler Works, Part 5
  10. How the ODP Compiler Works, Part 6
  11. How the ODP Compiler Works, Part 7

In this probably-final entry in the series, I'd like to muse a bit about possible future improvements and additions to the compiler and the NSF ODP Tooling generally. For the most part, the big-ticket future additions seek to answer one question:

Could this be used to replace Designer?

The quick answer is "yes, it could", but that would take a lot of work. There are a couple things inherent in the task and specific to my implementation that both help and hinder this kind of thing.

Notes Runtime

The biggest stumbling block is the hard requirement on a Notes or Domino runtime initialized for the current process. Being able to use C API calls is required by both my code and some of the underlying XPages bits, and that means initializing the runtime. The good news here is that it doesn't require any specific Notes-based program - it can be run with either the libraries that come with Notes or Domino, and it doesn't require Designer at all. That loosens things up a bit, but still means that one of the supported platforms is obligatory at some step of the process.

Even on a supported platform, though, it's not just as easy as calling an init function - the process's environment needs to be set up specifically to know about the Notes program and data directories, and this varies platform-by-platform. This means that it wouldn't be straightforward to have, for example, an Eclipse plugin that initializes the process, since it would depend on initialized environment variables and loading paths implicitly referenced by lower layers, and over which the programmer doesn't have much control after the fact.

The good news here is that the tooling is already designed to support remote work for compilation and export, both truly remote and with the local Equinox runners. For a true IDE experience, the communication between the IDE and compiler would have to be more complex than the "tell the compiler what to do and hear messages back" simple mechanism it has now, but it'd still be a natural evolution.

OSGi Runtime

The requirements posed by compiling complicated XPages applications present a similar dependency to the one above, but on an Equinox environment. Though it's possible to fake the basics of OSGi for known plugins, that wouldn't work for arbitrary third-party libraries.

For integration in Eclipse, this wouldn't necessarily mean any new work - Eclipse is already the premier Equinox product, and so it supports what XPages compilation needs innately. However, Eclipse wouldn't be the only target; any work done here should work with other IDEs like IntelliJ, but also continue to work IDE-free via a Maven or Gradle environment.

So this ends up being another strong argument for retaining the "separate process" model that already exists.

Incremental Compilation

Beyond retaining the runtime requirements, the big thing would be a switch to supporting incremental compilation. Currently, the compiler is designed to do everything in one pass: you point it at an ODP and it spits out a freshly-created NSF. This allows it to build up and tear down its environment cleanly, initializing the OSGi plugins for any XPages libraries at the start and doing similarly for any custom classpath jars to be included in the Java runtime.

Supporting incremental compilation in a way that's at all speedy and efficient would require a persistent compilation environment. Instead of everything happening sequentially, the IDE would init the compiler process and then send it requests as files need compilation. This has implications for both local and server-based compilation.

Local compilation would need to change less: mostly, it would require picking an IPC mechanism and having the launched Equinox process remain alive until it's no longer needed.

Server-based compilation would be similar in implementation, probably using something like HTTP long polling to be able to run in the Domino HTTP container. The trouble would be that a straightforward implementation of this would mean that the Domino server would pretty much have to be dedicated to a single IDE. There's already a potential conflict scenario with two developers doing compilation at the same time: since the XPages compiler needs to install and uninstall OSGi bundles, they could step on each other's toes if any of them overlap. Keeping the compiler environment resident on the server would mean it would have to be effectively locked out to one connection for long periods of time. Assuming HCL continues the Community Edition licensing model, this will be legal to do, but it's still cumbersome.

This could lead into something I've been mulling over: running a Domino compiler server in Docker. This would loosen a lot of the runtime requirements and mean that the encapsulated Domino server would be both dedicated to the purpose and consistent from the perspective of the compiler. Domino's setup requirements initially made it an awkward fit for Docker, but it looks like things have progressed along nicely.

This would all tie perfectly into the Language Server Protocol, which is an IDE-agnostic way to do basically this: have a little running process that knows about the nitty-gritty of the language, and then tell the IDE only what it needs for auto-completion and other features.

Live NSFs

Currently, the compiler starts with an ODP and emits a clean NSF with each build, and this is absolutely, 100% the correct way that it should work. However, Notes being what it is, it'd be expected that a Designer replacement would be able to work with a live NSF, so you could just crack one open, change a view, and be done. The second part of this process is in there, since the compiler uses the normal Notes APIs to store in an NSF as it is. It's the first part that would have to be new, allowing the tooling to selectively look into an NSF.

The exporter already does this, but, like the compiler, goes in one pass. What would potentially make sense would be to do essentially what Designer does: implement a VFS layer to represent an NSF in an equivalent way to the on-disk project. It's more easily said than done, but would be particularly straightforward for Eclipse, for the same reasons that it was straightforward for IBM to do it for Designer.

The secondary question here would be whether it would be better to continue to use DXL as the sole transport mechanism (so editing a view in a live NSF would export it to DXL and then re-import on save) or to instead try to represent things differently. Though DXL is less efficient, particularly for large notes, I think it'd make sense to stick with it - there would be tremendous work involved in trying to make it smarter, and that would be a breeding ground for bugs that just wouldn't exist with DXL.

IDE Features

Getting an NSF to compile dynamically is one thing, but the other part of this kind of project would be making the experience of working with design elements pleasant. In Designer, we have the benefit of having purpose-built editors for each design element type, but these aren't portable even if licensing allowed: the legacy ones all are just wrappers around C++-based "native" UIs, while the newer-era ones are based on Designer's bizarre internal RPC system.

I've done some work along these lines, initially to add autocomplete for custom controls and known core+ExtLib controls to .xsp files. Since that earlier work, I also added a contributor that tells Eclipse to use the DXL schema file for DXL files. While this doesn't give a proper GUI editor, it does provide enough information for Eclipse to pick up on the allowed elements and properties:

Eclipse DXL schema support

While I don't think it'd be worth trying to fully reproduce the various WYSIWYG editors Designer gives you (particularly the view editor, which is laughably bad for data-centric use), I think it'd be worth adding some editors along the lines of my old Forms 'n' Views project. Having some basic editors with a strong focus on the resulting data structure would be perfect for XPages support use and even mostly useful for legacy use.

Time

The core trouble with getting to all these goals, though, is time. For the main compilation and export work, I could justify spending a good amount of time because it eventually more than paid off in less time fighting with Designer to create consistent builds. For this other stuff, though, it's more dependent on whether my hatred for using Designer is enough to tilt the scales. Sometimes, it almost gets there, but I do also need to be able to pay my mortgage, so that puts a bit of a limit on things. It sure would be nice to leave Designer in the dust for good, though.

How the ODP Compiler Works, Part 6

Sun Jul 07 12:46:39 EDT 2019

Tags: nsfodp
  1. Next Project: ODP Compiler
  2. NSF ODP Tooling 1.0
  3. NSF ODP Tooling Example Project
  4. NSF ODP Tooling 1.2
  5. How the ODP Compiler Works, Part 1
  6. How the ODP Compiler Works, Part 2
  7. How the ODP Compiler Works, Part 3
  8. How the ODP Compiler Works, Part 4
  9. How the ODP Compiler Works, Part 5
  10. How the ODP Compiler Works, Part 6
  11. How the ODP Compiler Works, Part 7

In this post, I'd like to go over another main component of the NSF ODP Tooling project, the ODP Exporter. The exporter is significantly simpler than the compiler, but still had a surprising number of gotchas of its own.

My goal in writing the exporter was to replace the need to use Designer to create an on-disk project out of an NSF - in one of my projects, in addition to the primary NSF we use, there are also a dozen or so secondary NSFs inheriting from templates and being modified by people not using Git (I know.), and keeping them all in sync is a giant PITA. Previously, I had a dedicated VM just to open the DBs periodically to sync them, but even that took a long time and got error-prone when Designer would miss a change or generally trip over itself.

So I set out to create a compatible replacement, so that I could run a script and update the ODPs en masse.

The Basics

At its core, the exporter does what you might expect: it reads through each design note and sends them through a DXL exporter. For its work, it makes use of the aforementioned design collection and IBM's NAPI. I went with IBM's variant in this case for one class: com.ibm.designer.domino.napi.design.FileAccess. Though this class let me down when it came to importing, it has just enough encapsulated composite-data reader methods to save me a ton of work here, though I had to cheat to access one of them.

For each design note, the exporter determines its type, which carries behavioral information: whether the note should be included in the normal export process at all, where it's placed in the ODP, and the kind of export treatment it should get. The main categories of exported note types line up with what the compiler had to know about, with some special knowledge of whether a note is one of many in a folder (e.g. an XPage) or one-per-database (like the icon note).

The Gotchas

Unsurprisingly, things aren't quite as simple as a basic loop. For elements that are just DXL files, it is that easy, but the ones that exist either as just file data (e.g. "plugin.xml") or file data plus metadata require special handling.

Skipped Items

The first thing to note with split data/metadata items isn't complicated, but bears mentioning: the metadata file is generated by exporting the design note as DXL but ignoring data items. Some of these are common among all types, but others (like indexed "$ClassData0", etc. fields) are best matched and excluded with a regex.
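As a hypothetical sketch of that kind of exclusion (the exact item names and the pattern in the real exporter differ):

```java
import java.util.regex.Pattern;

public class MetadataItemFilter {
	// Matches indexed items like "$ClassData0", "$ClassData1", "$ClassSize0", etc.
	private static final Pattern INDEXED_ITEMS = Pattern.compile("^\\$Class(Data|Size)\\d+$");

	public static boolean isIgnoredItem(String itemName) {
		return INDEXED_ITEMS.matcher(itemName).matches();
	}
}
```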

LotusScript, Again

LotusScript libraries threw me for an unexpected loop this time. Their storage format is actually not even composite data: the script itself is stored in plain non-summary text items, multiple ones with the same name. However, the NAPI's DXL exporter doesn't actually export the full script text properly, instead only outputting the content of the first item. Additionally, the legacy Java API shows the presence of multiple items with the same name, but also only gives you the value for the first.

So I ended up having to use the raw note format in memory, which DOES include all of the items, and then stitch the script content together onto the filesystem.

Other File Data

The other data types aren't too complicated, but need special cases for each composite data structure, which is where the FileAccess class comes in. Without the convenience methods there, I would have had to write CD iterators to read the data based on the appropriate structures - not terribly difficult, but it's all the better to have the work already done for me. Especially so since FileAccess pleasantly writes directly to a java.io.OutputStream, just like I'd want if I wrote it myself.

Special-Case Files

There are three final special cases that the exporter handles:

  1. The icon note is specially-exported not once, but twice. It's exported using an NAPI-specific special method to create the "database.properties" file, which includes the ACL and formatted settings alongside the icon note, and then also exported specially after the main loop as "Resources/IconNote". I've always appreciated how much Lotus wildly overloaded the icon note.

    • There's also a distinct "$DBIcon" note that houses the 32-bit icon introduced in R8 (if I recall correctly), but that's just a normal old image resource with a special name and not related to the icon note.
  2. The "META-INF/MANIFEST.MF" file resource became important in 9.0.1 FP10, but Designer's handling of it is a little schizophrenic. FP10+ will usually fill it in with plugin information when it rebuilds an NSF, but nonetheless exports it as a zero-byte file. It's important for it to exist, so I create a blank file if it doesn't exist.

  3. The Eclipse ".project" file is also not present in older NSFs, but is critical for ODPs. If it wasn't exported, I create a generic stub version.

Swiper

When dealing with on-disk projects, Swiper is a mandatory tool, cleaning up the generated XML and (critically) removing extraneous items that change too frequently to be source-control-friendly.

The core of Swiper is an XSLT stylesheet to do the transformation, and I incorporated this wholesale, with a minor modification to retain the ACL that's stripped out by stock Swiper. I then created an OutputStream implementation that passes DXL output through Swiper if configured. As a small note, I think there was a specific reason why I have the Swiper path buffer the DXL into an in-memory ByteArrayOutputStream first instead of just wrapping the file output stream, but I don't remember what that was.
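A minimal sketch of that kind of stream, assuming the Swiper stylesheet is handed in as a javax.xml.transform Source, might look like this - buffering the DXL in memory and transforming it to the real destination on close:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

import javax.xml.transform.Source;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerConfigurationException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Sketch: buffers DXL in memory and runs it through the Swiper XSLT when closed
public class SwiperOutputStream extends ByteArrayOutputStream {
	private final OutputStream delegate;
	private final Transformer transformer;

	public SwiperOutputStream(OutputStream delegate, Source swiperStylesheet) throws TransformerConfigurationException {
		this.delegate = delegate;
		this.transformer = TransformerFactory.newInstance().newTransformer(swiperStylesheet);
	}

	@Override
	public void close() throws IOException {
		super.close();
		try {
			// Transform the buffered DXL and write the cleaned result to the real destination
			transformer.transform(
				new StreamSource(new ByteArrayInputStream(toByteArray())),
				new StreamResult(delegate)
			);
		} catch (Exception e) {
			throw new IOException(e);
		} finally {
			delegate.close();
		}
	}
}
```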

Final Steps

With this post, I think I've covered the big topics I set out to with the two main components of the Tooling. I plan on having at least one final post in the series to cover some potential future additions and enhancements, since I have a lot of ideas in mind for it. Unfortunately, a lot of the most-useful ideas would also be tremendous amounts of work, but the payoff may eventually be worth it.

How the ODP Compiler Works, Part 5

Fri Jul 05 12:06:26 EDT 2019

Tags: nsfodp
  1. Next Project: ODP Compiler
  2. NSF ODP Tooling 1.0
  3. NSF ODP Tooling Example Project
  4. NSF ODP Tooling 1.2
  5. How the ODP Compiler Works, Part 1
  6. How the ODP Compiler Works, Part 2
  7. How the ODP Compiler Works, Part 3
  8. How the ODP Compiler Works, Part 4
  9. How the ODP Compiler Works, Part 5
  10. How the ODP Compiler Works, Part 6
  11. How the ODP Compiler Works, Part 7

One of the things that came up frequently when writing both the compiler and exporter portions of the NSF ODP Tooling was rationalizing the multiple ways an NSF is viewed, and determining which aspects are reified in the design notes themselves and which are entirely runtime conjurations.

The Traditional View

To describe what I mean, I'll start with the "traditional" way that design notes work, which is also the mechanism the other views are built upon. The starting point there is the distinction between data and design notes, represented in the API as the note class. For our purposes, there's "data note" and "everything else". The design notes are kept track of internally by what can be considered a magic view, the design collection, which is used implicitly whenever something looks up a design element, and can be accessed automatically by API calls like NIFFindDesignNoteExt.

The design collection itself acts like a normal view, containing columns with pertinent design element information for fast lookups. Beyond the note class value, design notes are distinguished by character-based flags, which you can see in the "Fields" part of the property pane in Designer in the $Flags item. These will look something like "gC~4K" - this value comes from an XPage, and can be interpreted by taking each character and looking for it in "stdnames.h" from the C API:

  • g is DESIGN_FLAG_FILE, referring to a "file resource"-type design element (more on this later)
  • C apparently matches to DESIGN_FLAG_NO_COMPOSE, used to refer to forms that don't show up in the "Create" menu. I'm not sure why it's included here; it may have some second meaning
  • ~ maps to DESIGN_FLAG_HIDEFROMDESIGNLIST, presumably to keep XPages out of standard File Resource pickers
  • 4 maps to DESIGN_FLAG_HIDE_FROM_V4, which is reasonable advice, but the fact that this is 4 and not 7 makes me suspect there's a second meaning here too
  • K maps to DESIGN_FLAG_XSPPAGE, cheerily documented in the API as "an xpage, much like a file resource, but special!"

The importance of these flags and the reuse of some note classes (file resources in particular) bares a bit of the evolution of the platform. The older the note type is, the more likely it is to have a dedicated note class value. Forms, views, ACLs, and other primordial elements have eponymous classes, but, starting around the web era, new elements started piggybacking on existing classes. This has so far culminated with the XPages-era additions, where almost everything is considered a "file resource", which are themselves already specialized "forms". This mirrors the evolution of file data stored as Composite Data structures, where the first file types added in got their own dedicated structures, while later types (like JavaScript libraries) were either crammed awkwardly into similar types or just plopped in as CDFILESEGMENTs (which Domino adorably refers to universally as "CSS").

With the heavy use of flags came something of a mini query language to distinguish collections of design notes. In the API, you can see these in the DFLAGPAT_ C constants. For example, DFLAGPAT_FORM maps to "-FQMUGXWy#i:|@0nK;g~%z^" - the - at the start means that this is a "none of these" matcher, so it's resolved by looking up all notes with NOTE_CLASS_FORM and then filtering out any of the ones with those flags. From our XPage example, you can see it's excluded thrice over, via g, K, and ~. There are other permutations in the language for "match all of", "match any of", and combinations of all three types, and the patterns allow you to select each type of design element you see in Notes and Designer, and a few more categories besides.
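As a purely illustrative sketch of the "none of these" style of matching - this isn't the C API's actual implementation, just the concept applied to a note's $Flags value, using the pattern quoted above:

```java
public class FlagPatterns {
	/** The exclusion characters from DFLAGPAT_FORM, minus the leading "-". */
	public static final String FORM_EXCLUDED_FLAGS = "FQMUGXWy#i:|@0nK;g~%z^";

	/** Returns true if the note's $Flags value contains none of the excluded characters. */
	public static boolean matchesNoneOf(String noteFlags, String excludedFlags) {
		for (int i = 0; i < excludedFlags.length(); i++) {
			if (noteFlags.indexOf(excludedFlags.charAt(i)) >= 0) {
				return false;
			}
		}
		return true;
	}
}
```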

Designer's View

Designer really has two views of the NSF. The first view is essentially a codification of what's above, and has been how Notes and Designer have worked forever. When you go to the "Forms" list in Designer, it does a query in the design collection similar to the above form example, and each category of design elements has its equivalent query.

Its second view came along with the Eclipse transition, and it's what you see in Package Explorer. This version takes the core querying capabilities of the design collection and maps it on to an Eclipse File System plugin. From Eclipse-Designer's point of view, the NSF becomes a file-based project as if it was a set of folders and files on the filesystem, but is in reality composed of some dynamic lookups from the design collection paired with a truly local temporary directory for the Local XPages compilation scratch area.

This view of the design became the basis of on-disk project support, with the ODP mirroring what you see in the virtual Eclipse project.

It's also where we start to see a secondary hierarchy within the database design. Traditionally, design notes are largely "flat": while the UI and some APIs have special support for the use of \ within design element names, there's no concept of containers beyond the main categories. For XPages, though, they started to add items like $ClassIndexItem, which contain values like "WEB-INF/classes/frostillicus/controller/ControllingViewHandler.class". These files show up within the "WebContent" folder in the virtual project - "WEB-INF/classes" is by default hidden in Package Explorer, but you can see it in the Navigator view. The use of "WebContent" as a folder for this is itself a holdover from Eclipse-based web app development.

Domino's View

Like Designer, Domino has two views of an NSF and the first is a pretty direct use of the design collection. It has simpler needs, usually just looking up elements by type + flags + name, using reverse view order for web elements.

The second view can be thought of as a stripped-down version of Designer's VFS, but it isn't implemented in the same way and doesn't include all the top-level folders you see in Designer. Instead, Domino uses the aforementioned Java class index items and some other existing values like file-resource names to compose something that resembles a WAR file - you can see this reflected in its use of "WEB-INF/classes". It's this view of the NSF that the XPages runtime container and its many abstraction classes use, allowing them to treat it as an app container in the same way as a normal JEE web app, as if it was just another WAR file. It's not treated fully the same as a WAR file - you can't plop a web.xml file in there and use some other web toolkit - but that's the concept that the XSP stack is going for in classes like com.ibm.domino.xsp.module.nsf.NSFComponentModule.

The On-Disk Project Version

As I mentioned, the ODP is based closely on the Designer view, which in turn is partially based on the "web app" view used for XPages. For the compiler, it's not too big of a deal - it just needs to gather files and import them based on their existing DXL for the most part - but the exporter has to do some fiddly work to shuttle notes to their right spots. By total coincidence, that will be a nice lead-in to my next post, which I expect to cover some details of the ODP exporter portion of the NSF ODP Tooling.

How the ODP Compiler Works, Part 4

Wed Jul 03 11:33:44 EDT 2019

Tags: nsfodp
  1. Next Project: ODP Compiler
  2. NSF ODP Tooling 1.0
  3. NSF ODP Tooling Example Project
  4. NSF ODP Tooling 1.2
  5. How the ODP Compiler Works, Part 1
  6. How the ODP Compiler Works, Part 2
  7. How the ODP Compiler Works, Part 3
  8. How the ODP Compiler Works, Part 4
  9. How the ODP Compiler Works, Part 5
  10. How the ODP Compiler Works, Part 6
  11. How the ODP Compiler Works, Part 7

In today's post, I'd like to go over a bit of how the NSF ODP Tooling project is organized, and specifically how I structured it to support both server-based and local compilation.

Setting aside the feature, update site, and distribution modules, the tooling consists of seventeen code-bearing components.

For our purposes today, we care about the first six in the "plugins" directory and then the "nsfodp-maven-plugin" at the bottom - the rest have to do with the different capabilities of the suite.

Commons

The three "commons" plugins contain a set of utilities and data-description classes, and they're broken up into those three modules due to differing dependencies. The core "commons" plugin relies only on org.eclipse.core.runtime, while the "dxl" plugin adds an IBM Commons dependency, and finally the "odp" plugin relies outright on a Notes/Domino runtime. By keeping these things distinct, it lets me keep track of which things are safe to include in the Eclipse UI plugins or the Maven plugin, where I can't count on the present of a Notes runtime.

"Servlet" and "Equinox"

The compiler, like the other "action" components of the Tooling, is split up into the core "compiler" plugin that does the actual heavy lifting, and then two "interface" plugins for running the code from different directions.

The "servlet" plugin came first and is the mechanism by which a local Maven-run plugin communicates with a remote Domino server with the Tooling installed. It contains a primary entrypoint servlet that accepts a packaged zip file from the client containing the ODP and any extra update sites to use while building, as well as a set of HTTP headers describing the various parameters that can be set for compilation. Strictly speaking, this plugin doesn't depend on Domino as such, but rather on having a servlet container and a Notes or Domino runtime - it could hypothetically run in e.g. Tomcat with the right dependencies, but in effect it's the "Domino side" of it.

The "equinox" plugin supports local compilation and it's a bit of an interesting beast. Since the compiler is intended to work with any given NSF and XPages application, it has a hard requirement on the presence of an Equinox ("Eclipse-style") OSGi runtime. A local Maven build doesn't use Equinox, so I wrote this plugin to provide what Equinox refers to as an "application" - essentially a named executable class that can be run once you initialize an Equinox runtime. Eclipse itself uses this mechanism for running, and you can see these in action in the "Eclipse Application" run configuration type in Eclipse-the-IDE:

The code itself behaves similarly to the servlet, but can skip the "zip container" step of the process, instead referencing the local files based on system properties set by the Maven bootstrapper.

Having these two entrypoints lets me keep the actual business of the compiler independent. Depending on need, I could add any number of other entrypoints without having to modify the core code at all.

Maven-side

The actual action of the process is kicked off by a Maven plugin, which consists of what Maven calls a "mojo". It's effectively the same idea as the Equinox application: a specially-tailored executable class. In this case, it gains the ability to specify parameters that are passed in by the pom.xml configuration or via the command line, which are then available by the time the Maven runtime calls the execute() method.
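In skeleton form, a mojo is just a class like this - the goal name and parameter here are hypothetical stand-ins, not the actual plugin's:

```java
import java.io.File;

import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;
import org.apache.maven.plugins.annotations.LifecyclePhase;
import org.apache.maven.plugins.annotations.Mojo;
import org.apache.maven.plugins.annotations.Parameter;

@Mojo(name = "compile", defaultPhase = LifecyclePhase.COMPILE)
public class ExampleCompileMojo extends AbstractMojo {
	// Populated from the pom.xml configuration or -D properties before execute() is called
	@Parameter(property = "odpDirectory", defaultValue = "${project.basedir}/odp", required = true)
	private File odpDirectory;

	@Override
	public void execute() throws MojoExecutionException {
		getLog().info("Compiling ODP from " + odpDirectory);
		// ...branch between servlet-based and local Equinox compilation here...
	}
}
```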

When run, the Maven mojo branches based on what type of compilation it can do. The servlet-based compilation branch is a little wordy in the class, but conceptually simpler than the local compilation. The mojo creates a temporary zip file, pours the ODP into it, and then adds in any update sites to include. Then, it creates an HTTP connection to the remote server, adds headers to configure the compiler, sets the POST body to the zip file, and then lets the server do its thing.

The Equinox runner, though... that's something that took a surprising amount of fiddly magic to get working. On a conceptual level, running the compiler in a local Equinox container is essentially the same thing as the Equinox container launched by the Domino HTTP process - same Eclipse runtime, same infrastructure, and so forth. However, the trouble came in both in some of the fiddly ways that the Domino OSGi runtime is configured and in the assumptions it makes about the active JVM, and the battle resulted in a complicated bootstrapping process. The Notes JVM comes packaged with a handful of critical jar files, not the least of which being "Notes.jar", and those need to be added to the active classpath, which in turn needs a specialized provider plugin to get Equinox to see them. There's also whatever the heck "JEmpower" is, which has its own special needs to be wrapped up into a "shim" OSGi plugin because of the way other plugins depend on it. The runner is also riven with special behavior for running on recent macOS Notes builds, which switched to an embedded non-J9 JVM (I wouldn't be surprised if this changes subtly again in the future). This is all in service of creating a compatible Equinox configuration so that, finally, the compiler can be run in a child process. It's not pretty, but it works.

Progress Messages

Since the process can take a while, I created an extremely-bare-bones messaging system, where the server sends out a series of JSON objects delimited by newlines and the client watches for these and emits human-friendly messages. The compiler process itself just uses an Eclipse-style IProgressMonitor - the servlet uses an implementation called LineDelimitedJsonProgressMonitor, while the local Equinox runner uses one that just prints to the console directly. This is another area where things are kept generic enough at the internal level so that a different mechanism entirely could be hooked in - a GUI progress monitor, for example.
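As a sketch of the concept (not the actual LineDelimitedJsonProgressMonitor class), a monitor that emits newline-delimited JSON to a stream might look like this:

```java
import java.io.PrintStream;

import org.eclipse.core.runtime.NullProgressMonitor;

// Sketch: emits one JSON object per line for each progress event
public class JsonLineProgressMonitor extends NullProgressMonitor {
	private final PrintStream out;

	public JsonLineProgressMonitor(PrintStream out) {
		this.out = out;
	}

	@Override
	public void beginTask(String name, int totalWork) {
		emit("beginTask", name);
	}

	@Override
	public void subTask(String name) {
		emit("subTask", name);
	}

	@Override
	public void done() {
		emit("done", "");
	}

	private void emit(String type, String message) {
		// Extremely simple JSON encoding; real code would escape more carefully
		out.println("{\"type\":\"" + type + "\",\"message\":\"" + message.replace("\"", "\\\"") + "\"}");
		out.flush();
	}
}
```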

Overall Structure

I'm pretty pleased with how the structure of the Tooling has taken shape. Being able to separate out the entrypoints like this definitely made it much easier to have the local compilation, and breaking it all out into multiple modules kept me from baking in any incorrect assumptions about the runtime environment in the core code. I've been toying with ideas for how to get this stuff to run in an Eclipse/IntelliJ/etc. environment, and I think the Maven Equinox runner will provide a pretty good template for that. There'd be a lot of work to make that good, but it'd definitely be possible.

How the ODP Compiler Works, Part 3

Tue Jul 02 11:26:58 EDT 2019

Tags: nsfodp
  1. Next Project: ODP Compiler
  2. NSF ODP Tooling 1.0
  3. NSF ODP Tooling Example Project
  4. NSF ODP Tooling 1.2
  5. How the ODP Compiler Works, Part 1
  6. How the ODP Compiler Works, Part 2
  7. How the ODP Compiler Works, Part 3
  8. How the ODP Compiler Works, Part 4
  9. How the ODP Compiler Works, Part 5
  10. How the ODP Compiler Works, Part 6
  11. How the ODP Compiler Works, Part 7

In the first two posts in this series, I focused on the XPages compilation and runtime environment, independent of anything to do with an NSF specifically. I'll return to the world of OSGi and servlets in later entries, but I'd like to take a bit of time to talk about some specifics of grafting the compiled XPage results and the rest of the on-disk project's contents into an actual NSF.

The Basics

The primary tool that makes an on-disk project work is DXL, the XML representation of a note. DXL defines representations for several kinds of Notes elements, but the three main kinds that you run into with an on-disk project are:

  • Database metadata, found in the annoyingly-suffixed AppProperties/database.properties file. This contains information from a couple of places, in particular the ACL and icon notes.

  • "Raw" representations of design notes. These show up quite a bit if you select "Use binary DXL" in Designer's preferences, and they show up in a couple parts regardless of that selection. These are distinguished by their use of <note/> as the root element, and contain close to raw data from the NSF. Strings, numbers, and dates are represented in human-readable form, but things like composite data/rich text are stored as Base64-encoded byte arrays matching their in-memory C structures. These "blobs" are opaque to work with but are the safest to round-trip.

    • A subtype of this is the "*.metadata" files, which I'll cover shortly.
  • "Encapsulated" design notes, with root elements like <form/> and <view/>. These are friendly to look at and work with programmatically, but the forms in particular run the risk of some edge-case compatibility issues.

The Process

The ODP Compiler uses DXL for almost all of its NSF manipulation, and imports the ODP in a couple of passes based on the different needs of different design elements.

"Direct DXL" Elements

The easiest elements are the ones that are just single DXL files in the ODP and can be imported directly. The compiler iterates over these files as determined by the OnDiskProject class and just passes them in to the DXL importer. Easy peasy.
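
For reference, importing a DXL file this way with the standard lotus.domino API looks roughly like this (error handling and recycling omitted):

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

import lotus.domino.Database;
import lotus.domino.DxlImporter;
import lotus.domino.NotesException;
import lotus.domino.Session;

public class DirectDxlImport {
	/**
	 * Sketch of importing a single ODP DXL file into a target database.
	 * Error handling and recycle() calls are omitted for brevity.
	 */
	public static void importDxlFile(Session session, Database target, Path dxlFile)
			throws NotesException, java.io.IOException {
		String dxl = new String(Files.readAllBytes(dxlFile), StandardCharsets.UTF_8);

		DxlImporter importer = session.createDxlImporter();
		// Create or replace design notes rather than skipping existing ones
		importer.setDesignImportOption(DxlImporter.DXLIMPORTOPTION_REPLACE_ELSE_CREATE);
		importer.setReplaceDbProperties(false);
		importer.importDxl(dxl, target);
	}
}
```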

"Split" Elements

The second main type are resource files that are stored in the ODP as their "normal" file data and paired with a ".metadata" file. The prime example of this is file resources: if you have a file named "foo.txt" stored as a file resource in your NSF, it will exist in the ODP as a normal text file named "foo.txt", and next to it will be a trimmed-down DXL file named "foo.txt.metadata". These metadata files are an export of the "raw" format of the DXL, but with the actual file-data items removed, leaving just the additional items that go along with the file (flags, in-NSF file name, etc.).

The conceptual task here is straightforward: encode the file data back into the appropriate composite-data format as Base64 inside the DXL, and then import that. The actual task of doing that, though, gets pretty arcane. There were two ways I could go about it: import the metadata only and then use the C API (via one route or another) to create the structures in memory and append them to the note, or create a C-struct-compatible representation in-memory in Java and add it to the DXL to import. I originally planned on doing the former, as the com.ibm.designer.domino.napi.design.FileAccess class in IBM's NAPI has promisingly-named methods to do this, but I ran into some trouble with some file types that it doesn't support - though file resources, images, script libraries, and others are all conceptually the same thing, the actual C-level storage mechanism for each is slightly different. So I ended up going the latter route, which entailed writing some gnarly code to do it in memory.
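
The sketch below shows only the general mechanism - build a little-endian buffer and Base64-encode it for a raw DXL item - and deliberately glosses over the real CD record layout, which is far more involved:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Base64;

public class FileDataEncoderSketch {
	/**
	 * Illustrates the general mechanism only: build a little-endian byte buffer
	 * that mimics the C structures and Base64 it for inclusion in a DXL
	 * rawitemdata element. The real CD record layout (headers, segment sizes,
	 * word alignment) is intentionally glossed over here.
	 */
	public static String encodeSegment(byte[] fileData) {
		// Notes C structures are little-endian, so the buffer has to match
		ByteBuffer buf = ByteBuffer.allocate(4 + fileData.length)
			.order(ByteOrder.LITTLE_ENDIAN);
		buf.putInt(fileData.length); // stand-in for the real record header fields
		buf.put(fileData);
		return Base64.getEncoder().encodeToString(buf.array());
	}
}
```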

XPages Elements

For the most part, XPages and related elements (Custom Controls, themes, Java class files, and Jars) are supersets of file resources: they use the same composite-data structures and store the programmer-visible data in the same $FileData items in the destination notes. Each has an extra layer, though, in order to store the Java bytecode and other info.

Both XPages and Custom Controls share a code path that stores their compiled data into the $ClassData0, $ClassData1, $ClassSize0, and $ClassSize1 items, since they consistently have one class to represent the main page and then a second inner "Page" class to act as an internal component constructor. In addition, Custom Controls store their ".xsp-config" data in $ConfigData and $ConfigSize items in the same note in the NSF.

Java design elements are conceptually similar, but have less predictable class names, and so the code is a little more complex. There's also some special behavior here, in that there are a handful of compiled classes that show up in the compilation result that aren't directly stored in those files. I forget what those are specifically - they might be for secondary, non-public classes that appear at the top level of a Java source file but aren't inner classes.

All of these, in addition to storing their source and class names, also sprout a $ClassIndexItem item that lists the "file paths" for the classes to be used as part of the virtual filesystems that Domino and Designer use when initializing the XPages app.

LotusScript

LotusScript libraries are… special. Though LotusScript embedded in other design notes (forms, views, agents, etc.) doesn't require any special handling beyond importing the DXL, libraries stored as ".lss" files in the on-disk project aren't automatically compiled.

These libraries are brought in with their source stored as normal text items named $ScriptLib, but then need to be compiled from there. There's no mechanism for compiling LotusScript in the normal Java API, and IBM's NAPI doesn't have a binding to the NSFNoteLSCompileExt function involved, so I had to dip into Java C API bindings, initially via Karsten Lehmann's excellent Domino JNA and later via Darwino's NAPI implementation.

If you look at the algorithm I'm using to compile the libraries, you may notice how brute-force it is. Any given library may depend on any given other library, but I don't have a way to know that ahead of time without parsing the code (which I don't want to do). So, in lieu of the kind of dependency graph that Designer creates when you do "Recompile all LotusScript", the ODP Compiler tries each library in turn and, if one fails, it adds it to a "try again" queue. It does this until it's had a chance to effectively try each combination, at which point it will either have a clean queue and can proceed or it'll have one or more libraries that failed to compile for a different reason. It's not pretty, but it gets the job done.
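
The algorithm itself is simple enough to sketch in isolation, with the actual compile call abstracted behind an interface (the names here are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Collection;
import java.util.Queue;

public class LotusScriptCompileQueue {
	/** Stand-in for whatever actually compiles a single library (e.g. a NAPI call). */
	public interface LibraryCompiler {
		void compile(String libraryName) throws Exception;
	}

	/**
	 * Brute-force dependency resolution: keep retrying failed libraries until a
	 * full pass makes no progress, at which point the remaining failures are real.
	 */
	public static void compileAll(Collection<String> libraries, LibraryCompiler compiler)
			throws Exception {
		Queue<String> queue = new ArrayDeque<>(libraries);
		int remaining = queue.size();
		while (!queue.isEmpty()) {
			Queue<String> retry = new ArrayDeque<>();
			Exception lastFailure = null;
			for (String lib : queue) {
				try {
					compiler.compile(lib);
				} catch (Exception e) {
					retry.add(lib); // likely depends on a library not yet compiled
					lastFailure = e;
				}
			}
			if (retry.size() == remaining) {
				// No progress this pass: the failures aren't dependency-ordering issues
				throw lastFailure;
			}
			remaining = retry.size();
			queue = retry;
		}
	}
}
```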

Standalone Elements

There are a handful of components of an ODP that are stored as plain files without associated DXL, generally to do with XPages support files like "xsp.properties". These have some special support in the FileResource class to auto-vivify an associated DXL file on the fly. Fortunately, these files are pretty basic to create, and the only catch was figuring out the appropriate $Flags and $FlagsExt values to fill in. For this, the OnDiskProject class has a set of matchers to match known paths to the specialized file-resource behavior needed for each.
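
A rough sketch of that kind of path matching follows - the paths and flag values here are placeholders, not the actual $Flags/$FlagsExt characters Designer uses:

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.PathMatcher;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

public class StandaloneFileMatchers {
	/** Illustrative holder for the note items a synthesized file resource needs. */
	public record ResourceFlags(String flags, String flagsExt) {}

	// Glob patterns for standalone ODP files; the flag values below are
	// placeholders, not the real $Flags/$FlagsExt characters
	private static final Map<PathMatcher, ResourceFlags> MATCHERS = new LinkedHashMap<>();
	static {
		MATCHERS.put(glob("WebContent/WEB-INF/xsp.properties"), new ResourceFlags("<flags>", "<flagsExt>"));
		MATCHERS.put(glob("plugin.xml"), new ResourceFlags("<flags>", "<flagsExt>"));
	}

	private static PathMatcher glob(String pattern) {
		return FileSystems.getDefault().getPathMatcher("glob:" + pattern);
	}

	/** Finds the specialized flags for a standalone file, if it's a known path. */
	public static Optional<ResourceFlags> flagsFor(Path odpRelativePath) {
		return MATCHERS.entrySet().stream()
			.filter(e -> e.getKey().matches(odpRelativePath))
			.map(Map.Entry::getValue)
			.findFirst();
	}
}
```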

Miscellany

Beyond just importing the ODP files into their right places, the compiler does a few other notable things.

It has the option to populate the $TemplateBuild shared field with template name and build time information, which I've found to be extremely handy. I used to have an agent in a separate DB that would update this in my template DB, and it's much nicer to have the compiler do this automatically. It's also a pleasant fit-and-finish thing.

Similarly, I used to have to remember to make sure that the Xsp Properties file in the NTF was set to use compressed and aggregated resources, which was easy to forget. Now, that happens automatically via filtering during resource import.
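
A minimal sketch of that sort of filtering with java.util.Properties, assuming the standard xsp.resources.aggregate key (any other optimization keys would be handled the same way):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class XspPropertiesFilter {
	/**
	 * Sketch of forcing resource aggregation on during import. Only
	 * xsp.resources.aggregate is set here; other keys would follow the same pattern.
	 */
	public static void filter(Path xspProperties) throws java.io.IOException {
		Properties props = new Properties();
		try (InputStream is = Files.newInputStream(xspProperties)) {
			props.load(is);
		}
		props.setProperty("xsp.resources.aggregate", "true");
		try (OutputStream os = Files.newOutputStream(xspProperties)) {
			props.store(os, "Filtered during ODP compilation");
		}
	}
}
```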

Designer exports the "database.properties" file with the current full-text-index status intact, which can actually cause trouble when importing back into a new database. I had to strip that part out if present.
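
Conceptually, that stripping is just a DOM manipulation on the exported DXL; in this sketch the element name is my recollection of the DXL schema and should be treated as an assumption:

```java
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class DatabasePropertiesCleaner {
	/**
	 * Sketch of removing the full-text-index section from database.properties DXL.
	 * The element name "fulltextsettings" is an assumption about the DXL schema.
	 */
	public static void stripFullTextSettings(Document databaseProperties) {
		NodeList nodes = databaseProperties.getElementsByTagName("fulltextsettings");
		// Iterate backwards since the NodeList is live and shrinks as nodes are removed
		for (int i = nodes.getLength() - 1; i >= 0; i--) {
			Node node = nodes.item(i);
			node.getParentNode().removeChild(node);
		}
	}
}
```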

LotusScript compilation relies on the presence of web-service classes in the current Java runtime, which caused trouble when I added local compilation without a Domino server. I assume this is a knock-on effect, with loading the compiler having the secondary effect of warming up the JRE's web-service support in case you're compiling web services.

Because I forgot that DbDirectory#createDatabase exists (or maybe it was limited, I don't remember), I ended up adding DB creation to Darwino's NAPI, which is a handy capability to have anyway.

Remaining Topics

The more I write about this, the more I find is still left worth covering. In particular, I'd like to go over the structure of the compiler itself and how it's architected to run both on a remote server and via a local Equinox OSGi environment. There's also the whole matter of the ODP exporter, which is technically separate but related by workflow, and which is a source of its own bits of arcane knowledge. So much to cover!