Showing posts for tag "nsfodp"

CollabSphere 2020 Slides and Video

Thu Oct 29 16:09:02 EDT 2020

One of the nice bonuses of an all-online conference is that session recording comes built-in, so I was able to snag that and put it up on YouTube for posterity:

Additionally, I uploaded my slides to SlideShare, though that loses out on the extremely-fancy 5-second videos I used:

CollabSphere 2020: DEV101 - Add Continuous Delivery to Domino with the NSF ODP Tooling

Mon Oct 26 09:54:55 EDT 2020

CollabSphere 2020 starts tomorrow, this year naturally taking the form of an online conference. That has the nice benefit that you can still sign up if you haven't done so, restricted only by your time zone offset for attending.

For my part, I'll be giving a presentation on the NSF ODP Tooling, currently slated for tomorrow:

DEV101 - Add Continuous Delivery to Domino with the NSF ODP Tooling

Domino applications, stored in NSFs, have historically been difficult to add to Continuous Integration tools like Jenkins and to include in Continuous Delivery workflows. This session will discuss the NSF ODP Tooling project on OpenNTF, which allows you to take Domino-based projects - whether targeting the Notes client or web, XPages or not - and integrate them with modern tooling and flows. It will demonstrate use with projects ranging from a single NSF to a suite of a dozen OSGi plugins and two dozen NSFs, showing how they can be built and packaged automatically and consistently.

I hope you'll be able to attend - there are definitely some very-interesting topics lined up.

NSF ODP Tooling: Setting Up Jenkins Builds

Thu Aug 27 10:50:43 EDT 2020

Tags: nsfodp
  1. Getting Started with the NSF ODP Tooling
  2. NSF ODP Tooling: Setting Up Jenkins Builds

In my last post, I talked about the process of setting up a basic NSF ODP project from an NSF without worrying about OSGi plugins or other complicated aspects.

In this post, I'll go over one of the main reasons you might want to do this: automated builds via Jenkins or another CI server. This process assumes that you're keeping your project in source control of some sort, most likely a Git repository.

Jenkins Setup

The specifics for installing Jenkins are a bit outside the bailiwick of my blog, but they have some good instructions on their site. Those instructions currently start out heavily with Docker, which would work well, but I've found it pretty easy to set up with a Linux VM. That usually involves adding the Jenkins package source and letting the package manager do its thing. You should also install git while you're here.

Once Jenkins is installed, the Maven configuration is the same as in the previous post: find the home directory for the user running Jenkins (generally jenkins with those Linux installs, or your current user in a simpler local setup) and configure the .m2/settings.xml file the same way.

Beyond the normal Jenkins setup with your default user, there are a few things to configure.

To start out with, we'll add support for Maven projects. Jenkins is trending towards doing everything via "Pipeline" projects, which is a fine idea, but the older Maven support will suit our needs better for now. Go to "Manage Jenkins" and then "Manage Plugins". On the "Available" tab, search for "maven". You should find the "Maven integration plugin" - in my case, it's under "Installed" since I already have it:

Maven Jenkins plugin

Then, make your way back to "Manage Jenkins" and to "Global Tool Configuration". In there, add a JDK if one doesn't already exist. You can either point to an existing Java installation or install one automatically:

JDK Setup

Do similarly for Git. If you installed it in Linux or are running on macOS, you can just write "git" in for the executable path. On Windows, you should install it first.

Git Setup

Finally, do the same for Maven. Like Java, this is one that you can configure automatically. 3.6.3 is a good choice:

Maven Setup

Project Setup

Now that that's all set up, go back to the main Jenkins page and click on "New Item". Here, you should be able to select "Maven project". In general, I like to give my Jenkins projects names without too many special characters, in particular without spaces - there's always the chance that an odd tool here or there will cause trouble with complicated path names.

Maven item

When you create the item, you'll be presented with an intimidating tower of options, but fortunately only a few are important at the moment.

Our first stop is the "Source Code Management" section, where you should configure the location of your source repository. In my case here, I'm building one of the examples in the public NSF ODP Tooling repository, but you may have to add credentials if you're using a private repository.

Source Code Management

The next important step is the "Build" section. In here, pick your Maven version if you have multiple ones, fill in the path to your root POM file (most likely "pom.xml" if your project is in the root of the repo, but it's within a subdirectory here), and set the goals to be "clean install":

Build config

Finally, go to "Post-build Actions" and add an "Archive the artifacts" action. Set the "Files to archive" to "**/target/*.nsf":

Post-build Actions

Then, hit "Save".

Back on the project page, click "Build Now" on the left:

Build Now

If all goes well, you should see the build churn for a bit below the actions and eventually go blue. Unfortunately, there's also plenty of room here for things to go awry. If they do, your best bet is to hover over the build, click the disclosure triangle next to the timestamp, and click "Console Output". That should hopefully illuminate the trouble.

Console Output

Assuming it went well, though, you should be able to refresh the page and see your NSF in the "Last Successful Artifacts" section.

Last Successful Artifacts

And that's one of the key benefits to the CI/CD process: you can have the server run a repeatable build on command, on a schedule, or on triggers (like when you push a change) and have the result ready for you when it's done.

More In Practice

Once you have these basics working, you can get more complicated from there. The most common next step will be to set up either push notifications from your repository host (if your Jenkins server is visible to your repo) or scheduled polling for changes. That way, this will start to happen automatically without the need to manually trigger it.

You can also set up email notifications on failure, which is handy even when you're the only developer - that can help remove some "works on my machine" trouble.

There are a few more things that I think will be worth covering. In particular, I'll want to demonstrate a multi-NSF build that creates a deployment ZIP - something that's present in the complicated OSGi example, but which can be done just as well in a less-complex project.

Getting Started with the NSF ODP Tooling

Wed Aug 26 10:57:53 EDT 2020

Tags: maven nsfodp
  1. Getting Started with the NSF ODP Tooling
  2. NSF ODP Tooling: Setting Up Jenkins Builds

I've mentioned the NSF ODP Tooling project quite a bit here, and a lot of that is just a reflection of how much use I've gotten out of it and how much time it's been saving me in my regular work.

Part of it is also, though, that I think it should see wider use. I realized that the project can seem off-putting, or reserved only for the lost-in-the-weeds sort of work I do. Generally, when I mention it, it's in the context of a massive project with a bunch of OSGi plugins, or describing the intricate work that went into implementing it.

So I figured this was as good a time as any to describe the simplest-case scenario to get use out of the project: wrapping a normal ODP, without plugins, and then building it into an NSF outside of Designer.

Environment Setup

Domino Installation

To get started, you'll first need either a local Notes/Domino installation or a remote Domino server. Since it involves slightly-less local configuration, we'll go with the remote Domino path for now. Download the latest distribution ZIP from the project on OpenNTF (https://openntf.org/main.nsf/project.xsp?r=project/NSF ODP Tooling/releases) and install the update site from the "Domino" directory on your server in the same way you would the OpenNTF Domino API or other XPages library, and restart HTTP.

Maven and Java

The second thing you'll need is a Maven installation locally. If you're running on macOS or Linux, the easiest way to install this is with a package manager, such as Homebrew or apt. On any platform, you can also follow the download and installation instructions from the official Maven site. You'll also need Java installed - nowadays, I use AdoptOpenJDK.

You'll also need a Maven "settings.xml" file to point to your server. If you don't have such a file already, create an ".m2" directory (with the leading dot) in your home directory. This is the same process as in my original Maven setup guide, but with different contents. Configure the contents to look like this:

<?xml version="1.0"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
    <profiles>
        <profile>
            <id>nsfodp</id>
            <properties>
                <!-- the server name can be anything as long as it matches below -->
                <nsfodp.compiler.server>some-server-name</nsfodp.compiler.server>
                <!-- specify the HTTP/HTTPS URL for your Domino server -->
                <nsfodp.compiler.serverUrl>https://some.server/</nsfodp.compiler.serverUrl>
                
                <!-- set to true if you use a self-signed SSL certificate -->
                <nsfodp.compiler.serverTrustSelfSignedSsl>true</nsfodp.compiler.serverTrustSelfSignedSsl>
            </properties>
        </profile>
    </profiles>
    <activeProfiles>
        <activeProfile>nsfodp</activeProfile>
    </activeProfiles>
    
    <servers>
        <server>
            <id>some-server-name</id>
            <!-- Use a Domino HTTP username and password -->
            <username>builduser</username>
            <password>buildpassword</password>
        </server>
    </servers>
</settings>

NSF Project Setup

The core on-disk project for your NSF is created using Designer's normal source-control support. This process hasn't changed over the years; if you're unfamiliar with creating ODPs and working with source control, resources like the NotesIn9 episode remain very useful (though using Mercurial is an odd choice nowadays).

For this example, I just created a new NSF, but you can start with any simple-to-moderate NSF. For now, avoid anything that uses external XPages libraries or platform-specific things like ODBC in LotusScript. Right-click the NSF and go to "Team Development" → "Set Up Source Control for this Application":

Set up source control in Designer

In the following wizard, give it a name (your choice) and uncheck "Use default location". Pick a destination for your created project, but make sure to put it within an "odp" subfolder of your main project folder - that'll be important later.

Source control wizard

I also uncheck "Go to Navigator view after project is created" because I use Package Explorer for this. It wouldn't hurt to use the Navigator view, though - it's basically the same idea.

At this point, you can close out of Designer if you want - it won't be needed for the rest of this.

Maven Project Setup

Create a new text file called "pom.xml" and put it in the project folder, next to the "odp" directory.

pom.xml placement

Set its contents to this:

<?xml version="1.0"?>
<project
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"
    xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>nsfodp-example</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <packaging>domino-nsf</packaging>

    <pluginRepositories>
        <pluginRepository>
            <id>artifactory.openntf.org</id>
            <name>artifactory.openntf.org</name>
            <url>https://artifactory.openntf.org/openntf</url>
        </pluginRepository>
    </pluginRepositories>

    <build>
        <plugins>
            <plugin>
                <groupId>org.openntf.maven</groupId>
                <artifactId>nsfodp-maven-plugin</artifactId>
                <version>3.1.0</version>
                <extensions>true</extensions>
            </plugin>
        </plugins>
    </build>
</project>

In a terminal window, go to the project directory (the one containing this "pom.xml") and run mvn install. After a bit of churning, you should see some output ending like this:

[INFO] --- nsfodp-maven-plugin:3.1.0:compile (default-compile) @ nsfodp-example ---
[INFO] Compiling ODP
[INFO] Installing bundles
[INFO] - Installed no bundles
[INFO] Creating destination NSF
[INFO] Importing DB properties
[INFO] Importing basic design elements
[INFO] Importing file resources
[INFO] Importing LotusScript libraries
[INFO] Uninstalling bundles
[INFO] org.openntf.nsfodp.compiler.equinox.CompilerApplication#end
[INFO] Generated NSF: /Users/jesse/Projects/nsfodp-example/target/nsfodp-example-1.0.0-SNAPSHOT.nsf
[INFO]
[INFO] --- maven-install-plugin:3.0.0-M1:install (default-install) @ nsfodp-example ---
[INFO] Installing /Users/jesse/Projects/nsfodp-example/target/nsfodp-example-1.0.0-SNAPSHOT.nsf to /Users/jesse/.m2/repository/com/example/nsfodp-example/1.0.0-SNAPSHOT/nsfodp-example-1.0.0-SNAPSHOT.nsf
[INFO] Installing /Users/jesse/Projects/nsfodp-example/pom.xml to /Users/jesse/.m2/repository/com/example/nsfodp-example/1.0.0-SNAPSHOT/nsfodp-example-1.0.0-SNAPSHOT.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  9.346 s
[INFO] Finished at: 2020-08-26T10:29:10-04:00
[INFO] ------------------------------------------------------------------------

The specifics will change a bit based on your system, but the main things are to see those "Compiling" and "Importing" lines followed by the "BUILD SUCCESS" banner at the end. If you look in your project directory, you'll see some generated support files and, within the "target" directory, the built NSF:

Build results

Conclusion

And that's it! Probably, at least. You can use this with most classic Notes apps and with XPages apps that just use the built-in components and JARs inside the NSF. Things can get more complex from there, and the repository contains an example of an XPages application that uses an OSGi-based library.

I plan to go into some of those details in future posts. In addition, I will demonstrate how to do this compilation in Jenkins, which allows you to have the NSF built automatically whenever you or someone else on your team commits a change to source control.

NSF ODP Tooling 3.1.0: Dynamically Including Web Resources

Fri Jul 17 14:10:24 EDT 2020

  1. XPages: The UI Toolkit and the App Framework
  2. The RuntimeEnvironment Idiom
  3. NSF ODP Tooling 3.1.0: Dynamically Including Web Resources

I just released version 3.1.0 of the NSF ODP Tooling project and, while I entirely forgot to make a blog post about 3.0 the other week, I think one of the additions in this one deserves some special mention.

In one of my client projects, we're replacing an old XPages-based UI with an Angular UI backed by our set of JAX-RS resources. This is part of the same sprawling client app I've mentioned a few times so far, but this is a new module within it and doesn't face the same "convert from XPages mid-flight" remit. Since the UI itself is just going to be a bunch of static resource files, that freed up our options for presenting it to the user. In order to keep the benefits of using Domino ACLs, I figured that wrapping it up in an NSF would be the way to go.

The way to do this is to bring your (potentially-transpiled) HTML/JS/CSS files into the WebContent folder in the NSF's Package Explorer representation, either manually or by coaxing Designer to sync it in for you.

My purpose in life is to eliminate Designer from existence, though, so I certainly couldn't be content with that. Instead, I adapted a Maven-based technique for building WAR-packaged JS apps to emit an NSF.

The Project Structure

From that "Targeting Domino for Webapps Incidentally" post, the pertinent part is the use of maven-frontend-plugin to kick off an NPM build of the web app. In that post, I put the JavaScript project files inside a Maven project of their own, but that's optional. In my client's case, the JS team is separate from the Java team, so I didn't want to force them to have to dig through the Maven project tree to get to their files, and the JS apps are in a separate top-level folder in the repository. The simplified structure looks like this:

  • Repository Root
    • ui-projects
      • someuiproject
    • nsfodp-project

My goal is to be able to kick off a Maven build, have it run the NPM build of the JS project in its separate directory, and then pull in the results for the final NSF, all automatically.

The Maven Configuration

By combining frontend-maven-plugin and the NSF ODP Tooling, that's exactly what I get. Here's the <build> section of the ODP project's pom:

<build>
  <plugins>
    <plugin>
      <groupId>com.github.eirslett</groupId>
      <artifactId>frontend-maven-plugin</artifactId>
      <version>1.10.0</version>
  
      <configuration>
        <nodeVersion>v14.3.0</nodeVersion>
        <npmVersion>6.14.4</npmVersion>
        <installDirectory>target</installDirectory>
      </configuration>
        
      <executions>
        <execution>
          <?m2e ignore?>
          <id>install node and npm</id>
          <goals>
            <goal>install-node-and-npm</goal>
          </goals>
          <phase>generate-resources</phase>
        </execution>
        
        <execution>
          <?m2e ignore?>
          <id>jsapp install</id>
          <goals>
            <goal>npm</goal>
          </goals>
          <phase>generate-resources</phase>
          <configuration>
            <workingDirectory>${project.basedir}/../ui-projects/someuiproject</workingDirectory>
          </configuration>
        </execution>
        <execution>
          <?m2e ignore?>
          <id>jsapp build</id>
          <goals>
            <goal>npm</goal>
          </goals>
          <phase>generate-resources</phase>
          <configuration>
            <workingDirectory>${project.basedir}/../ui-projects/someuiproject</workingDirectory>
            <arguments>run build</arguments>
          </configuration>
        </execution>
      </executions>
    </plugin>
  
    <plugin>
      <groupId>org.openntf.maven</groupId>
      <artifactId>nsfodp-maven-plugin</artifactId>
      <version>3.1.0</version>
      
      <configuration>
        <webContentResources>
          <webContentResource>
            <directory>${project.basedir}/../ui-projects/someuiproject/dist/app</directory>
          </webContentResource>
        </webContentResources>
      </configuration>
    </plugin>
  </plugins>
</build>

Now, the final result will be an NSF with whatever other design elements are needed, ready to be deployed with a design replace/refresh. In my client's case, that ends up also getting bundled up into the distribution ZIP, but in a basic case the NSF would be enough.

Winter Project #1: XPages LSP4XML Extension

Fri Dec 27 16:27:14 EST 2019

Tags: nsfodp xml xpages

The last couple weeks of the year are always a good time to work on some side projects or small utilities to scratch an itch, and this year definitely ended up that way, seeing me work on a couple interesting things over this Christmas week.

The Project

The first of these is an enhancement to the NSF ODP Tooling project. Among the various components that I've put in there over time is an Eclipse content assistance plugin that provides some autocomplete capabilities when working with XPages and Custom Controls within an ODP. Specifically, it knows about the stock and ExtLib controls that ship with Domino, as well as any Custom Controls within the same project, and it allows Eclipse to provide completion suggestions. The code in there is actually hairier than you might think, and that's because it's building on top of Eclipse's generic text completion system, with only a little assistance from an existing implementation I cribbed. Still, it does the job.

The Problem

However, most of my XPages development nowadays takes place first outside Domino and then only ends up back on Domino when I want to make sure it works.

The trouble there is that the content assistance plugin I wrote is tied to both the ODP nature (an Eclipse-ism meaning that it knows a given project is associated with a set of capabilities) and the specific layout of an on-disk project.

The Quick Route

I originally set out to just loosen up that association a bit - take the existing plugin and allow it to work with any .xsp file and to try to find custom controls in a more webapp-type layout of a project.

However, I figured this was a good opportunity to look beyond that. Though the plugin does indeed do what I want, the trouble is that it's thoroughly tied to Eclipse specifically. All of the classes use Eclipse core and XML tooling classes extensively, and none of that would be portable to any other IDE. I figured this would be a perfect time to jump into the world of the Language Server Protocol.

Aside: What is LSP?

The Language Server Protocol is a standard for a way to provide IDE-type support in a way that's not dependent on any specific IDE. It grew out of Visual Studio Code and is gradually seeping its way across the whole development landscape.

It allows for the specifics of handling a language - checking validity, resolving classes and other entities, identifying keywords, and so forth - to be separated from the IDE used for editing it. Using LSP, if an IDE wants to support editing, for example, JavaScript, its creator no longer needs to create and maintain a tool to handle all of the intricacies and changing rules of the language - instead, it can bring in the LSP implementation for it and then focus just on the specifics of what the IDE does differently from others.

Additionally, this decoupling means that the LSP implementation doesn't even have to be written in the same language as the IDE using it - implementations are often written in the language they implement. If you look across the list of implementations, you can see that the Swift server is written in Swift, the Ruby one in Ruby, and the Java one is actually Eclipse's Java tooling extracted from the IDE.

Wild Web Developer

Since Eclipse long predates LSP, it's historically had its own implementations for any languages it supports. Though it's had enough clout to support a lot of languages, some of them have trailed behind. Like, really far behind. Prime among these have been its web-language-related editors, which do an okay-enough job editing basic HTML, CSS, and JavaScript, but pretty much missed the boat on newer features, transpiled languages like TypeScript, and project structures like NPM. While there have long been plugins for Angular and TypeScript, they never fully kept up and the whole thing ended up falling far, far behind other IDEs like VS Code.

Enter Wild Web Developer, the whimsically-named project to bring the fruits of LSP development to bolster Eclipse's web-tech support. Though it's named for its web language implementations, what it really is is a combination of two things: a generic text editor backed by a small array of LSP implementations, and a syntax-coloring system derived from TextMate, which itself became a pseudo-standard for syntax coloring.

LSP4XML

That brings us to the piece that ties together LSPs and my immediate desires: LSP4XML, the most-popular XML Language Server implementation, which is used by both VS Code and Eclipse, and just so happens to be written in Java and is designed to be extended.

Since LSP4XML is so smoothly extensible and Wild Web Developer just added a way to contribute these extensions in Eclipse, that meant I could accomplish what I want without having to worry about writing a whole LSP implementation just to support XSP and DXL.

XSP Completion Participant

Contributing to an LSP4XML server involves creating an extension class that then registers the individual capabilities you want to provide.

In this case, I contributed an ICompletionParticipant implementation. ICompletionParticipant has a delightfully-straightforward API, and all you have to do is provide tag, content, and attribute suggestions based on the context the user's cursor is in.

With this simpler API, I was able to significantly refactor down my earlier implementation, making it much more readable and focused.

DXL Schema Contributor

The other piece of XML completion that I added to the NSF ODP Tooling was to provide the (blessedly-redistributable) DXL schemas that ship with Domino to Eclipse. Unlike the XSP completion assistant, this plugin is entirely code-free, consisting solely of the schema file itself and an extension contribution in plugin.xml. The reason this works is that each DXL file declares its XML namespace at the top of the file, and so I can tell Eclipse to look for the schema file I'm providing when editing DXL.

LSP4XML also has a mechanism for providing schema files, but it's a little more complicated, involving a resolver class implementation. The idea is the same, though: mapping the namespace to the DTD file.

In both cases, the standardized and descriptive nature of XML schemas means that merely providing them to the IDE allows for all sorts of code assistance, even down to the level of suggesting and validating attribute values. It's pretty great.

Side Benefits

Making this switch to LSP4XML accomplished my original goal: by changing the XSP handling in the NSF ODP Tooling for Eclipse, I switched over to the Wild Web Developer editor and got .xsp editing anywhere (and a bit snappier to boot, since it's inherently heavily multithreaded).

But, like I mentioned, Language Servers are used across IDEs, most notably Visual Studio Code. Thanks to VS Code's equivalent LSP4XML extension mechanism, I was able to contribute the same extensions used for Eclipse there, and get the same type of results. That's a far cry from being able to get all of the NSF ODP Tooling capabilities outside of Eclipse, but it's a big start.

The Next Version

Currently, these additions are just in the develop branch of the tooling and haven't made their way to a proper release yet, but they've proven themselves so far in my use. My hope is to make a few more improvements, get the VS Code extension into shape, and make it part of a "3.0" release of the Tooling.

How the ODP Compiler Works, Part 7

Mon Jul 08 12:24:26 EDT 2019

Tags: nsfodp
  1. Next Project: ODP Compiler
  2. NSF ODP Tooling 1.0
  3. NSF ODP Tooling Example Project
  4. NSF ODP Tooling 1.2
  5. How the ODP Compiler Works, Part 1
  6. How the ODP Compiler Works, Part 2
  7. How the ODP Compiler Works, Part 3
  8. How the ODP Compiler Works, Part 4
  9. How the ODP Compiler Works, Part 5
  10. How the ODP Compiler Works, Part 6
  11. How the ODP Compiler Works, Part 7

In this probably-final entry in the series, I'd like to muse a bit about possible future improvements and additions to the compiler and the NSF ODP Tooling generally. For the most part, the big-ticket future additions seek to answer one question:

Could this be used to replace Designer?

The quick answer is "yes, it could", but that would take a lot of work. There are a couple things inherent in the task and specific to my implementation that both help and hinder this kind of thing.

Notes Runtime

The biggest stumbling block is the hard requirement on a Notes or Domino runtime initialized for the current process. Being able to use C API calls is required by both my code and some of the underlying XPages bits, and that means initializing the runtime. The good news here is that it doesn't require any specific Notes-based program - it can be run with either the libraries that come with Notes or Domino, and it doesn't require Designer at all. That loosens things up a bit, but still means that one of the supported platforms is obligatory at some step of the process.

Even on a supported platform, though, it's not just as easy as calling an init function - the process's environment needs to be set up specifically to know about the Notes program and data directories, and this varies platform-by-platform. This means that it wouldn't be straightforward to have, for example, an Eclipse plugin that initializes the process, since it would depend on initialized environment variables and loading paths implicitly referenced by lower layers, and over which the programmer doesn't have much control after the fact.

The good news here is that the tooling is already designed to support remote work for compilation and export, both truly remote and with the local Equinox runners. For a true IDE experience, the communication between the IDE and compiler would have to be more complex than the "tell the compiler what to do and hear messages back" simple mechanism it has now, but it'd still be a natural evolution.

OSGi Runtime

The requirements posed by compiling complicated XPages applications present a similar dependency to the above, but on an Equinox environment. Though it's possible to fake the basics of OSGi for known plugins, that wouldn't work for arbitrary third-party libraries.

For integration in Eclipse, this wouldn't necessarily mean any new work - Eclipse is already the premier Equinox product, and so it supports what XPages compilation needs innately. However, Eclipse wouldn't be the only target; any work done here should work with other IDEs like IntelliJ, but also continue to work IDE-free via a Maven or Gradle environment.

So this ends up being another strong argument for retaining the "separate process" model that already exists.

Incremental Compilation

Beyond retaining the runtime requirements, the big thing would be a switch to supporting incremental compilation. Currently, the compiler is designed to do everything in one pass: you point it at an ODP and it spits out a freshly-created NSF. This allows it to build up and tear down its environment cleanly, initializing the OSGi plugins for any XPages libraries at the start and doing similarly for any custom classpath jars to be included in the Java runtime.

What supporting incremental compilation would require to be at all speedy and efficient is having a persistent compilation environment. Instead of everything happening sequentially, the IDE would init the compiler process and then send it requests as files need compilation. This has implications for both local and server-based compilation.

Local compilation would need to change less: mostly, it would require picking an IPC mechanism and having the launched Equinox process remain alive until it's no longer needed.

Server-based compilation would be similar in implementation, probably using something like HTTP long polling to be able to run in the Domino HTTP container. The trouble would be that a straightforward implementation of this would mean that the Domino server would pretty much have to be dedicated to a single IDE. There's already a potential conflict scenario with two developers doing compilation at the same time: since the XPages compiler needs to install and uninstall OSGi bundles, they could step on each others' toes if any of them overlap. Keeping the compiler environment resident on the server would mean it would have to be effectively locked out to one connection for long periods of time. Assuming HCL continues the Community Edition licensing model, this will be legal to do, but it's still cumbersome.

This could lead into something I've been mulling over: running a Domino compiler server in Docker. This would loosen a lot of the runtime requirements and mean that the encapsulated Domino server would be both dedicated to the purpose and consistent from the perspective of the compiler. Domino's setup requirements initially made it an awkward fit for Docker, but it looks like things have progressed along nicely.

This would all tie perfectly into the Language Server Protocol, which is an IDE-agnostic way to do basically this: have a little running process that knows about the nitty-gritty of the language, and then tell the IDE only what it needs to auto-complete and other features.

Live NSFs

Currently, the compiler starts with an ODP and emits a clean NSF with each build, and this is absolutely, 100% the correct way that it should work. However, Notes being what it is, it'd be expected that a Designer replacement would be able to work with a live NSF, so you could just crack one open, change a view, and be done. The second part of this process is in there, since the compiler uses the normal Notes APIs to store in an NSF as it is. It's the first part that would have to be new, allowing the tooling to selectively look into an NSF.

The exporter already does this, but, like the compiler, goes in one pass. What would potentially make sense would be to do essentially what Designer does: implement a VFS layer to represent an NSF in an equivalent way to the on-disk project. It's more easily said than done, but would be particularly straightforward for Eclipse, for the same reasons that it was straightforward for IBM to do it for Designer.

The secondary question here would be whether it would be better to continue using DXL as the sole transport mechanism (so editing a view in a live NSF would export it to DXL and then re-import on save) or to instead try to represent things differently. Though DXL is less efficient, particularly for large notes, I think it'd make sense to stick with it - there would be tremendous work involved in trying to make it smarter, and that would be a breeding ground for bugs that just wouldn't exist with DXL.

IDE Features

Getting an NSF to compile dynamically is one thing, but the other part of this kind of project would be making the experience of working with design elements pleasant. In Designer, we have the benefit of having purpose-built editors for each design element type, but these aren't portable even if licensing allowed: the legacy ones all are just wrappers around C++-based "native" UIs, while the newer-era ones are based on Designer's bizarre internal RPC system.

I've done some work along these lines, initially to add autocomplete for custom controls and known core+ExtLib controls to .xsp files. Since that earlier work, I also added a contributor that tells Eclipse to use the DXL schema file for DXL files. While this doesn't give a proper GUI editor, it does provide enough information for Eclipse to pick up on the allowed elements and properties:

Eclipse DXL schema support

While I don't think it'd be worth trying to fully reproduce the various WYSIWYG editors Designer gives you (particularly the view editor, which is laughably bad for data-centric use), I think it'd be worth adding some editors along the lines of my old Forms 'n' Views project. Having some basic editors with a strong focus on the resulting data structure would be perfect for XPages support use and even mostly useful for legacy use.

Time

The core trouble with getting to all these goals, though, is time. For the main compilation and export work, I could justify spending a good amount of time because it eventually more than paid off in less time fighting with Designer to create consistent builds. For this other stuff, though, it's more dependent on whether my hatred for using Designer is enough to tilt the scales. Sometimes, it almost gets there, but I do also need to be able to pay my mortgage, so that puts a bit of a limit on things. It sure would be nice to leave Designer in the dust for good, though.

How the ODP Compiler Works, Part 6

Sun Jul 07 12:46:39 EDT 2019

Tags: nsfodp
  1. Next Project: ODP Compiler
  2. NSF ODP Tooling 1.0
  3. NSF ODP Tooling Example Project
  4. NSF ODP Tooling 1.2
  5. How the ODP Compiler Works, Part 1
  6. How the ODP Compiler Works, Part 2
  7. How the ODP Compiler Works, Part 3
  8. How the ODP Compiler Works, Part 4
  9. How the ODP Compiler Works, Part 5
  10. How the ODP Compiler Works, Part 6
  11. How the ODP Compiler Works, Part 7

In this post, I'd like to go over another main component of the NSF ODP Tooling project, the ODP Exporter. The exporter is significantly simpler than the compiler, but still had a surprising number of gotchas of its own.

My goal in writing the exporter was to replace the need to use Designer to create an on-disk project out of an NSF - in one of my projects, in addition to the primary NSF we use, there are also a dozen or so secondary NSFs inheriting from templates and being modified by people not using Git (I know.), and keeping them all in sync is a giant PITA. Previously, I had a dedicated VM just to open the DBs periodically to sync them, but even that took a long time and got error-prone when Designer would miss a change or generally trip over itself.

So I set out to create a compatible replacement, so that I could run a script and update the ODPs en masse.

The Basics

At its core, the exporter does what you might expect: it reads through each design note and sends them through a DXL exporter. For its work, it makes use of the aforementioned design collection and IBM's NAPI. I went with IBM's variant in this case for one class: com.ibm.designer.domino.napi.design.FileAccess. Though this class let me down when it came to importing, it has just enough encapsulated composite-data reader methods to save me a ton of work here (albeit with a cheat to access one of them).

For each design note, it determines the note's type, which carries behavioral information: whether it should be included in the normal export process at all, where it's placed in the ODP, and the kind of export treatment it should get. The main categories of exported note types line up with what the compiler had to know about, with some special knowledge of whether a note is one of many in a folder (e.g. an XPage) or one-per-database (like the icon note).

The Gotchas

Unsurprisingly, things aren't quite as simple as a basic loop. For elements that are just DXL files, it is that easy, but the ones that exist either as just file data (e.g. "plugin.xml") or file data plus metadata require special handling.

Skipped Items

The first thing to note with split data/metadata items isn't complicated, but bears mentioning: the metadata file is generated by exporting the design note as DXL but ignoring data items. Some of these are common among all types, but others (like indexed "$ClassData0", etc. fields) are best matched and excluded with a regex.

LotusScript, Again

LotusScript libraries threw me for an unexpected loop this time. Their storage format is actually not even composite data: the script itself is stored in plain non-summary text items, multiple ones with the same name. However, the NAPI's DXL exporter doesn't actually export the full script text properly, instead only outputting the content of the first item. Additionally, the legacy Java API shows the presence of multiple items with the same name, but also only gives you the value for the first.

So I ended up having to use the raw note format in memory, which DOES include all of the items, and then stitch the script content together onto the filesystem.

Other File Data

The other data types aren't too complicated, but need special cases for each composite data structure, which is where the FileAccess class comes in. Without the convenience methods there, I would have had to write CD iterators to read the data based on the appropriate structures - not terribly difficult, but it's all the better to have the work already done for me. Especially so since FileAccess pleasantly writes directly to a java.io.OutputStream, just like I'd want if I wrote it myself.

Special-Case Files

There are three final special cases that the exporter handles:

  1. The icon note is specially-exported not once, but twice. It's exported using an NAPI-specific special method to create the "database.properties" file, which includes the ACL and formatted settings alongside the icon note, and then also exported specially after the main loop as "Resources/IconNote". I've always appreciated how much Lotus wildly overloaded the icon note.

    • There's also a distinct "$DBIcon" note that houses the 32-bit icon introduced in R8 (if I recall correctly), but that's just a normal old image resource with a special name and not related to the icon note.
  2. The "META-INF/MANIFEST.MF" file resource became important in 9.0.1 FP10, but Designer's handling of it is a little schizophrenic. FP10+ will usually fill it in with plugin information when it rebuilds an NSF, but nonetheless exports it as a zero-byte file. It's important for it to exist, so I create a blank file if it doesn't exist.

  3. The Eclipse ".project" file is also not present in older NSFs, but is critical for ODPs. If it wasn't exported, I create a generic stub version.

Swiper

When dealing with on-disk projects, Swiper is a mandatory tool, cleaning up the generated XML and (critically) removing extraneous items that change too frequently to be source-control-friendly.

The core of Swiper is an XSLT stylesheet to do the transformation, and I incorporated this wholesale, with a minor modification to retain the ACL that's stripped out by stock Swiper. I then created an OutputStream implementation that passes DXL output through Swiper if configured. As a small note, I think there was a specific reason why I have the Swiper path buffer the DXL into an in-memory ByteArrayOutputStream first instead of just wrapping the file output stream, but I don't remember what that was.

Final Steps

With this post, I think I've covered the big topics I set out to with the two main components of the Tooling. I plan on having at least one final post in the series to cover some potential future additions and enhancements, since I have a lot of ideas in mind for it. Unfortunately, a lot of the most-useful ideas would also be tremendous amounts of work, but the payoff may eventually be worth it.

How the ODP Compiler Works, Part 5

Fri Jul 05 12:06:26 EDT 2019

Tags: nsfodp
  1. Next Project: ODP Compiler
  2. NSF ODP Tooling 1.0
  3. NSF ODP Tooling Example Project
  4. NSF ODP Tooling 1.2
  5. How the ODP Compiler Works, Part 1
  6. How the ODP Compiler Works, Part 2
  7. How the ODP Compiler Works, Part 3
  8. How the ODP Compiler Works, Part 4
  9. How the ODP Compiler Works, Part 5
  10. How the ODP Compiler Works, Part 6
  11. How the ODP Compiler Works, Part 7

One of the things that came up frequently when writing both the compiler and exporter portions of the NSF ODP Tooling was rationalizing the multiple ways an NSF is viewed, and determining which aspects are reified in the design notes themselves and which are entirely runtime conjurations.

The Traditional View

To describe what I mean, I'll start with the "traditional" way that design notes work, which is also the mechanism the other views are built upon. The starting point there is the distinction between data and design notes, represented in the API as the note class. For our purposes, there's "data note" and "everything else". The design notes are kept track of internally by what can be considered a magic view, the design collection, which is used implicitly whenever something looks up a design element, and can be accessed automatically by API calls like NIFFindDesignNoteExt.

The design collection itself acts like a normal view, containing columns with pertinent design element information for fast lookups. Beyond the note class value, design notes are distinguished by character-based flags, which you can see in the "Fields" part of the property pane in Designer in the $Flags item. These will look something like "gC~4K" - this value comes from an XPage, and can be interpreted by taking each character and looking for it in "stdnames.h" from the C API:

  • g is DESIGN_FLAG_FILE, referring to a "file resource"-type design element (more on this later)
  • C apparently matches to DESIGN_FLAG_NO_COMPOSE, used to refer to forms that don't show up in the "Create" menu. I'm not sure why it's included here; it may have some second meaning
  • ~ maps to DESIGN_FLAG_HIDEFROMDESIGNLIST, presumably to keep XPages out of standard File Resource pickers
  • 4 maps to DESIGN_FLAG_HIDE_FROM_V4, which is reasonable advice, but the fact that this is 4 and not 7 makes me suspect there's a second meaning here too
  • K maps to DESIGN_FLAG_XSPPAGE, cheerily documented in the API as "an xpage, much like a file resource, but special!"

The importance of these flags and the reuse of some note classes (file resources in particular) bares a bit of the evolution of the platform. The older the note type is, the more likely it is to have a dedicated note class value. Forms, views, ACLs, and other primordial elements have eponymous classes but, starting around the web era, new elements started piggybacking on existing classes. This has so far culminated in the XPages-era additions, where almost everything is considered a "file resource", which are themselves already specialized "forms". This mirrors the evolution of file data stored as Composite Data structures, where the first file types added got their own dedicated structures, while later types (like JavaScript libraries) were either crammed awkwardly into similar types or just plopped in as CDFILESEGMENTs (which Domino adorably refers to universally as "CSS").

With the heavy use of flags came something of a mini query language to distinguish collections of design notes. In the API, you can see these in the DFLAGPAT_ C constants. For example, DFLAGPAT_FORM maps to "-FQMUGXWy#i:|@0nK;g~%z^" - the - at the start means that this is a "none of these" matcher, so it's resolved by looking up all notes with NOTE_CLASS_FORM and then filtering out any of the ones with those flags. From our XPage example, you can see it's excluded thrice over, via g, K, and ~. There are other permutations in the language for "match all of", "match any of", and combinations of all three types, and the patterns allow you to select each type of design element you see in Notes and Designer, and a few more categories besides.

Designer's View

Designer really has two views of the NSF. The first view is essentially a codification of what's above, and has been how Notes and Designer have worked forever. When you go to the "Forms" list in Designer, it does a query in the design collection similar to the above form example, and each category of design elements has its equivalent query.

Its second view came along with the Eclipse transition, and it's what you see in Package Explorer. This version takes the core querying capabilities of the design collection and maps it on to an Eclipse File System plugin. From Eclipse-Designer's point of view, the NSF becomes a file-based project as if it was a set of folders and files on the filesystem, but is in reality composed of some dynamic lookups from the design collection paired with a truly local temporary directory for the Local XPages compilation scratch area.

This view of the design became the basis of on-disk project support, with the ODP mirroring what you see in the virtual Eclipse project.

It's also where we start to see a secondary hierarchy within the database design. Traditionally, design notes are largely "flat": while the UI and some APIs have special support for the use of \ within design element names, there's no concept of containers beyond the main categories. For XPages, though, they started to add items like $ClassIndexItem, which contain values like "WEB-INF/classes/frostillicus/controller/ControllingViewHandler.class". These files show up within the "WebContent" folder in the virtual project - "WEB-INF/classes" is by default hidden in Package Explorer, but you can see it in the Navigator view. The use of "WebContent" as a folder for this is itself a holdover from Eclipse-based web app development.

Domino's View

Like Designer, Domino has two views of an NSF and the first is a pretty direct use of the design collection. It has simpler needs, usually just looking up elements by type + flags + name, using reverse view order for web elements.

The second view can be thought of as a stripped-down version of Designer's VFS, but it isn't implemented in the same way and doesn't include all the top-level folders you see in Designer. Instead, Domino uses the aforementioned Java class index items and some other existing values like file-resource names to compose something that resembles a WAR file - you can see this reflected in its use of "WEB-INF/classes". It's this view of the NSF that the XPages runtime container and its many abstraction classes use, allowing them to treat it as an app container in the same way as a normal JEE web app, as if it was just another WAR file. It's not treated fully the same as a WAR file - you can't plop a web.xml file in there and use some other web toolkit - but that's the concept that the XSP stack is going for in classes like com.ibm.domino.xsp.module.nsf.NSFComponentModule.

The On-Disk Project Version

As I mentioned, the ODP is based closely on the Designer view, which in turn is partially based on the "web app" view used for XPages. For the compiler, it's not too big of a deal - it just needs to gather files and import them based on their existing DXL for the most part - but the exporter has to do some fiddly work to shuttle notes to their right spots. By total coincidence, that will be a nice lead-in to my next post, which I expect to cover some details of the ODP exporter portion of the NSF ODP Tooling.

How the ODP Compiler Works, Part 4

Wed Jul 03 11:33:44 EDT 2019

Tags: nsfodp
  1. Next Project: ODP Compiler
  2. NSF ODP Tooling 1.0
  3. NSF ODP Tooling Example Project
  4. NSF ODP Tooling 1.2
  5. How the ODP Compiler Works, Part 1
  6. How the ODP Compiler Works, Part 2
  7. How the ODP Compiler Works, Part 3
  8. How the ODP Compiler Works, Part 4
  9. How the ODP Compiler Works, Part 5
  10. How the ODP Compiler Works, Part 6
  11. How the ODP Compiler Works, Part 7

In today's post, I'd like to go over a bit of how the NSF ODP Tooling project is organized, and specifically how I structured it to support both server-based and local compilation.

Setting aside the feature, update site, and distribution modules, the tooling consists of seventeen code-bearing components.

For our purposes today, we care about the first six in the "plugins" directory and then the "nsfodp-maven-plugin" at the bottom - the rest have to do with the different capabilities of the suite.

Commons

The three "commons" plugins contain a set of utilities and data-description classes, and they're broken up into those three modules due to differing dependencies. The core "commons" plugin relies only on org.eclipse.core.runtime, while the "dxl" plugin adds an IBM Commons dependency, and finally the "odp" plugin relies outright on a Notes/Domino runtime. By keeping these things distinct, it lets me keep track of which things are safe to include in the Eclipse UI plugins or the Maven plugin, where I can't count on the present of a Notes runtime.

"Servlet" and "Equinox"

The compiler, like the other "action" components of the Tooling, is split up into the core "compiler" plugin that does the actual heavy lifting, and then two "interface" plugins for running the code from different directions.

The "servlet" plugin came first and is the mechanism by which a local Maven-run plugin communicates with a remote Domino server with the Tooling installed. It contains a primary entrypoint servlet that accepts a packaged zip file from the client containing the ODP and any extra update sites to use while building, as well as a set of HTTP headers describing the various parameters that can be set for compilation. Strictly speaking, this plugin doesn't depend on Domino as such, but rather on having a servlet container and a Notes or Domino runtime - it could hypothetically run in e.g. Tomcat with the right dependencies, but in effect it's the "Domino side" of it.

The "equinox" plugin supports local compilation and it's a bit of an interesting beast. Since the compiler is intended to work with any given NSF and XPages application, it has a hard requirement on the presence of an Equinox ("Eclipse-style") OSGi runtime. A local Maven build doesn't use Equinox, so I wrote this plugin to provide what Equinox refers to as an "application" - essentially a named executable class that can be run once you initialize an Equinox runtime. Eclipse itself uses this mechanism for running, and you can see these in action in the "Eclipse Application" run configuration type in Eclipse-the-IDE:

The code itself behaves similarly to the servlet, but can skip the "zip container" step of the process, instead referencing the local files based on system properties set by the Maven bootstrapper.

Having these two entrypoints lets me keep the actual business of the compiler independent. Depending on need, I could add any number of other entrypoints without having to modify the core code at all.

Maven-side

The actual action of the process is kicked off by a Maven plugin, which consists of what Maven calls a "mojo". It's effectively the same idea as the Equinox application: a specially-tailored executable class. In this case, it gains the ability to specify parameters that are passed in by the pom.xml configuration or via the command line, which are then available by the time the Maven runtime calls the execute() method.

When run, the Maven mojo branches based on what type of compilation it can do. The servlet-based compilation branch is a little wordy in the class, but conceptually simpler than the local compilation. The mojo creates a temporary zip file, pours the ODP into it, and then adds in any update sites to include. Then, it creates an HTTP connection to the remote server, adds headers to configure the compiler, sets the POST body to the zip file, and then lets the server do its thing.

The Equinox runner, though... that's something that took a surprising amount of fiddly magic to get working. On a conceptual level, running the compiler in a local Equinox container is essentially the same thing as the Equinox container launched by the Domino HTTP process - same Eclipse runtime, same infrastructure, and so forth. However, the trouble came in both in some of the fiddly ways that the Domino OSGi runtime is configured and in the assumptions it makes about the active JVM, and the battle resulted in a complicated bootstrapping process. The Notes JVM comes packaged with a handful of critical jar files, not the least of which being "Notes.jar", and those need to be added to the active classpath, which in turn needs a specialized provider plugin to get Equinox to see them. There's also whatever the heck "JEmpower" is, which has its own special needs to be wrapped up into a "shim" OSGi plugin because of the way other plugins depend on it. The runner is also riven with special behavior for running on recent macOS Notes builds, which switched to an embedded non-J9 JVM (I wouldn't be surprised if this changes subtly again in the future). This is all in service of creating a compatible Equinox configuration so that, finally, the compiler can be run in a child process. It's not pretty, but it works.

Progress Messages

Since the process can take a while, I created an extremely-bare-bones messaging system, where the server sends out a series of JSON objects delimited by newlines and the client watches for these and emits human-friendly messages. The compiler process itself just uses an Eclipse-style IProgressMonitor - the servlet uses an implementation called LineDelimitedJsonProgressMonitor, while the local Equinox runner uses one that just prints to the console directly. This is another area where things are kept generic enough at the internal level so that a different mechanism entirely could be hooked in - a GUI progress monitor, for example.

Overall Structure

I'm pretty pleased with how the structure of the Tooling has taken shape. Being able to separate out the entrypoints like this definitely made it much easier to have the local compilation, and breaking it all out into multiple modules kept me from baking in any incorrect assumptions about the runtime environment in the core code. I've been toying with ideas for how to get this stuff to run in an Eclipse/IntelliJ/etc. environment, and I think the Maven Equinox runner will provide a pretty good template for that. There'd be a lot of work to make that good, but it'd definitely be possible.

How the ODP Compiler Works, Part 3

Tue Jul 02 11:26:58 EDT 2019

Tags: nsfodp
  1. Next Project: ODP Compiler
  2. NSF ODP Tooling 1.0
  3. NSF ODP Tooling Example Project
  4. NSF ODP Tooling 1.2
  5. How the ODP Compiler Works, Part 1
  6. How the ODP Compiler Works, Part 2
  7. How the ODP Compiler Works, Part 3
  8. How the ODP Compiler Works, Part 4
  9. How the ODP Compiler Works, Part 5
  10. How the ODP Compiler Works, Part 6
  11. How the ODP Compiler Works, Part 7

In the first two posts in this series, I focused on the XPages compilation and runtime environment, independent of anything to do with an NSF specifically. I'll return to the world of OSGi and servlets in later entries, but I'd like to take a bit of time to talk about some specifics of grafting the compiled XPage results and the rest of the on-disk project's contents into an actual NSF.

The Basics

The primary tool that makes an on-disk project work is DXL, the XML representation of a note. DXL defines representations for several kinds of Notes elements, but the three main kinds that you run into with an on-disk project are:

  • Database metadata, found in the annoyingly-suffixed AppProperties/database.properties file. This contains information from a couple of places, in particular the ACL and icon notes.

  • "Raw" representations of design notes. These show up quite a bit if you select "Use binary DXL" in Designer's preferences, and they show up in a couple parts regardless of that selection. These are distinguished by their use of <note/> as the root element, and contain close to raw data from the NSF. Strings, numbers, and dates are represented in human-readable form, but things like composite data/rich text are stored as Base64-encoded byte arrays matching their in-memory C structures. These "blobs" are opaque to work with but are the safest to round-trip.

    • One subtype of these is the "*.metadata" file, which I'll cover shortly.
  • "Encapsulated" design notes, with root elements like <form/> and <view/>. These are friendly to look at and work with programmatically, but the forms in particular run the risk of some edge-case compatibility issues.

The Process

The ODP Compiler uses DXL for almost all of its NSF manipulation, and imports the ODP in a couple of passes based on the different needs of different design elements.

"Direct DXL" Elements

The easiest elements are the ones that are just single DXL files in the ODP and can be imported directly. The compiler iterates over these files as determined by the OnDiskProject class and just passes them into the DXL importer. Easy peasy.
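
In standard lotus.domino API terms - which is close to, if not exactly, what happens internally - that import is just:

	import lotus.domino.Database;
	import lotus.domino.DxlImporter;
	import lotus.domino.NotesException;
	import lotus.domino.Session;
	
	public class DxlImportSketch {
		public static void importDesignElement(Session session, Database target, String dxl) throws NotesException {
			DxlImporter importer = session.createDxlImporter();
			// Replace an existing matching design note if present; otherwise create anew
			importer.setDesignImportOption(DxlImporter.DXLIMPORTOPTION_REPLACE_ELSE_CREATE);
			importer.setReplicaRequiredForReplaceOrUpdate(false);
			importer.importDxl(dxl, target);
		}
	}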

"Split" Elements

The second main type is resource files that are stored in the ODP as their "normal" file data and paired with a ".metadata" file. The prime example of this is file resources: if you have a file named "foo.txt" stored as a file resource in your NSF, it will exist in the ODP as a normal text file named "foo.txt", and next to it will be a trimmed-down DXL file named "foo.txt.metadata". These metadata files are an export of the "raw" format of the DXL, but with the actual file-data items removed, leaving just the additional items that go along with the file (flags, in-NSF file name, etc.).

The conceptual task here is straightforward: encode the file data back into the appropriate composite-data format as Base64 inside the DXL, and then import that. The actual task of doing it, though, gets pretty arcane. There were two ways I could go about it: import the metadata only and then use the C API (via one route or another) to create the structures in memory and append them to the note, or create a C-struct-compatible representation in memory in Java and add it to the DXL before import. I originally planned on doing the former, as the com.ibm.designer.domino.napi.design.FileAccess class in IBM's NAPI has promisingly-named methods for this, but I ran into trouble with some file types it doesn't support - though file resources, images, script libraries, and others are all conceptually the same thing, the actual C-level storage mechanism for each is slightly different. So I ended up going the latter route, which entailed writing some gnarly code to build the structures in memory.
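
For reference, the shape of the DXL being reconstructed is along these lines, with the Base64 blob being the encoded composite-data records (abbreviated here):

	<note class='form' xmlns='http://www.lotus.com/dxl'>
		<item name='$FileData' summary='false'>
			<rawitemdata type='1'>
				...Base64-encoded CD records...
			</rawitemdata>
		</item>
	</note>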

XPages Elements

For the most part, XPages and related elements (Custom Controls, themes, Java class files, and Jars) are supersets of file resources: they use the same composite-data structures and store the programmer-visible data in the same $FileData items in the destination notes. Each has an extra layer, though, in order to store the Java bytecode and other info.

Both XPages and Custom Controls share a code path that stores their compiled data into the $ClassData0, $ClassData1, $ClassSize0, and $ClassSize1 items, since they consistently have one class to represent the main page and then a second inner "Page" class to act as an internal component constructor. In addition, Custom Controls store their ".xsp-config" data in $ConfigData and $ConfigSize items in the same note in the NSF.

Java design elements are conceptually similar, but have less predictable class names, and so the code is a little more complex. There's also some special behavior here, in that there are a handful of compiled classes that show up in the compilation result that aren't directly stored in those files. I forget what those are specifically - they might be for secondary, non-public classes that appear at the top level of a Java source file but aren't inner classes.

All of these, in addition to storing their source and class names, also sprout a $ClassIndexItem item that lists the "file paths" for the classes to be used as part of the virtual filesystems that Domino and Designer use when initializing the XPages app.

LotusScript

LotusScript libraries are… special. Though LotusScript embedded in other design notes (forms, views, agents, etc.) doesn't require any special handling beyond importing the DXL, libraries stored as ".lss" files in the on-disk project aren't automatically compiled.

These libraries are brought in with their source stored as normal text items named $ScriptLib, but then need to be compiled from there. There's no mechanism for compiling LotusScript in the normal Java API, and IBM's NAPI doesn't have a binding to the NSFNoteLSCompileExt function involved, so I had to dip into Java C API bindings - initially via Karsten Lehmann's excellent Domino JNA, later switching over to Darwino's NAPI implementation.

If you look at the algorithm I'm using to compile the libraries, you may notice how brute-force it is. Any given library may depend on any given other library, but I don't have a way to know that ahead of time without parsing the code (which I don't want to do). So, in lieu of the kind of dependency graph that Designer creates when you do "Recompile all LotusScript", the ODP Compiler tries each library in turn and, if one fails, it adds it to a "try again" queue. It does this until it's had a chance to effectively try each combination, at which point it will either have a clean queue and can proceed or it'll have one or more libraries that failed to compile for a different reason. It's not pretty, but it gets the job done.
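
In sketch form - with a hypothetical compileLibrary() standing in for the real native call - the loop is essentially:

	import java.util.ArrayDeque;
	import java.util.Queue;
	
	public class LotusScriptCompileSketch {
		// Keep attempting libraries until a full pass makes no progress; at that
		// point, any remaining failures are real errors rather than ordering issues
		public static void compileAll(Queue<String> libraryNames) {
			Queue<String> remaining = new ArrayDeque<>(libraryNames);
			int lastSize = -1;
			while(!remaining.isEmpty() && remaining.size() != lastSize) {
				lastSize = remaining.size();
				Queue<String> retry = new ArrayDeque<>();
				for(String lib : remaining) {
					try {
						compileLibrary(lib); // hypothetical wrapper around NSFNoteLSCompileExt
					} catch(Exception e) {
						retry.add(lib); // likely depends on a not-yet-compiled library
					}
				}
				remaining = retry;
			}
			if(!remaining.isEmpty()) {
				throw new IllegalStateException("Unable to compile libraries: " + remaining);
			}
		}
		
		private static void compileLibrary(String name) throws Exception {
			// Placeholder for the real compilation call
		}
	}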

Standalone Elements

There are a handful of components of an ODP that are stored as plain files without associated DXL, generally to do with XPages support files like "xsp.properties". These have some special support in the FileResource class to auto-vivify an associated DXL file on the fly. Fortunately, these files are pretty basic to create, and the only catch was figuring out the appropriate $Flags and $FlagsExt values to fill in. For this, the OnDiskProject class has a set of matchers to match known paths to the specialized file-resource behavior needed for each.

Miscellany

Beyond just importing the ODP files into their right places, the compiler does a few other notable things.

It has the option to populate the $TemplateBuild shared field with template name and build time information, which I've found to be extremely handy. I used to have an agent in a separate DB that would update this in my template DB, and it's much nicer to have the compiler do this automatically. It's also a pleasant fit-and-finish thing.

Similarly, I used to have to remember to take a moment to make sure that the Xsp Properties file in the NTF was set to use compressed and aggregated resources, which was easy to forget. Now, I can have that happen automatically, via filtering during resource import.

Designer exports the "database.properties" file with the current full-text-index status intact, which can actually cause trouble when importing back into a new database. I had to strip that part out if present.

LotusScript compilation relies on the presence of web-service classes in the current Java runtime, which caused trouble when I added local compilation without a Domino server. My guess is that this is a knock-on effect: loading the compiler has the secondary effect of warming up the JRE in case you're compiling web services.

Because I forgot that DbDirectory#createDatabase exists (or maybe it was limited, I don't remember), I ended up adding DB creation to Darwino's NAPI, which is a handy capability to have anyway.

Remaining Topics

The more I write about this, the more I find is still left worth covering. In particular, I'd like to go over the architecture of the compiler - how it's structured to run both on a remote server and via a local Equinox OSGi environment. There's also the whole matter of the ODP exporter, which is technically separate but related by workflow and a source of its own bits of arcane knowledge. So much to cover!

 

How the ODP Compiler Works, Part 2

Mon Jul 01 11:36:57 EDT 2019

Tags: nsfodp xpages
  1. Next Project: ODP Compiler
  2. NSF ODP Tooling 1.0
  3. NSF ODP Tooling Example Project
  4. NSF ODP Tooling 1.2
  5. How the ODP Compiler Works, Part 1
  6. How the ODP Compiler Works, Part 2
  7. How the ODP Compiler Works, Part 3
  8. How the ODP Compiler Works, Part 4
  9. How the ODP Compiler Works, Part 5
  10. How the ODP Compiler Works, Part 6
  11. How the ODP Compiler Works, Part 7

In yesterday's post, I briefly touched on how the XPages runtime sees its environment by way of a FacesProject and related components. Today, I'd like to expand on that a bit, since it's useful to understand the various layers of what makes up an "XPages app" at compilation time and at runtime.

Designer and Domino largely take two paths to try to arrive at the same location in how they view an NSF. The way Designer works is more complicated and opaque than Domino's, with extra layers of VFS and an internal RPC mechanism(!) for editors, but there is at least some shared code from the XSP runtime. Beyond that, it does almost the same thing to determine the project's dependency classpath, while the internal NSF classpath is entirely distinct, using Eclipse's project structure to build towards the different structure Domino will use.

Libraries

The notion of an XSP Library is one of the main parts of directly-shared code between the server and Designer. The way an XSP Library works is that you create a class that implements com.ibm.xsp.library.XspLibrary and then declare that as an IBM Commons extension contribution (more on that later) for the com.ibm.xsp.Library service type.

The fact that this is live code sitting in a plugin has a significant implication. Namely, anything that interprets it has to actually load the class and its dependencies. This is as opposed to just a static configuration file, which could be read without executing any custom code. For the server, the distinction doesn't matter too much, since you'll want to load all your class files anyway. For Designer, this is where we get the requirement to install libraries into Designer itself, rather than just adding plugins to the Target Platform. This is also an area that's a breeding ground for IDE bugs, since Designer needs the plugin available both internally and in the Target Platform, but they're not inherently tied together.

Though the XspLibrary implementation class is executable code, its main purpose is to point the runtime to various bits of static configuration information: the unique identifier for the library (e.g. com.ibm.xsp.extlibx.bazaar.library), lists of *.xsp-config and *-faces-config.xml files to define XSP and JSF contributions, and a list of other library IDs that this one depends on.
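
A typical implementation - with example IDs, and using the AbstractXspLibrary convenience base class - looks about like this:

	import com.ibm.xsp.library.AbstractXspLibrary;
	
	public class ExampleLibrary extends AbstractXspLibrary {
		@Override
		public String getLibraryId() {
			return "com.example.xsp.library"; // the unique library identifier
		}
		
		@Override
		public String getPluginId() {
			return "com.example.xsp.plugin"; // the OSGi bundle housing this class
		}
		
		@Override
		public String[] getXspConfigFiles() {
			return new String[] { "META-INF/example.xsp-config" };
		}
		
		@Override
		public String[] getFacesConfigFiles() {
			return new String[] { "META-INF/example-faces-config.xml" };
		}
		
		@Override
		public String[] getDependencies() {
			return new String[] { "com.ibm.xsp.core.library", "com.ibm.xsp.extsn.library" };
		}
	}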

I believe that Designer and Domino use these bits of information slightly differently - I'm not sure that Domino cares too much about the *.xsp-config files, for example - but there's a lot of overlap here.

Configuration Files

The two main types of static configuration files used by libraries serve distinct purposes.

The *-faces-config.xml files (not required to be so named, but it's a good convention) are layered under the faces-config.xml file contained in your NSF. They define managed beans, converters, PhaseListeners, and other JSF-isms. These files come directly from the underlying JSF implementation and share the same syntax, at least until the JSF-1.2-era forking of XPages.
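
For example, a managed-bean declaration in one of these files is the same as it would be in an NSF's faces-config.xml:

	<faces-config>
		<managed-bean>
			<managed-bean-name>exampleBean</managed-bean-name>
			<managed-bean-class>com.example.ExampleBean</managed-bean-class>
			<managed-bean-scope>view</managed-bean-scope>
		</managed-bean>
	</faces-config>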

The *.xsp-config files look similar - they also use the <faces-config/> root element - but I believe that these are largely an XSP-specific detail. It looks like JSF 1.2 also uses the same <faces-config-extension/> tag, but to a different end - perhaps this evolution started the same way but then diverged. In any event, these files are where Designer (and the XSP compilation process in general) looks for custom-defined components and their accessible properties. There's an interesting point to note there: though defined components are effectively beans with properties, Designer doesn't introspect the object to get its property names and types, but instead relies entirely on the definitions found in these files. It will still eventually use the component class when it goes to compile the translated XSP Java files, so the classes still need to be correct, but it's certainly a spot where it's easy to introduce a typo or a mismatched property type.
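
To give a feel for what Designer reads, a minimal component definition in one of these files runs along these lines (all names invented):

	<faces-config>
		<faces-config-extension>
			<namespace-uri>http://example.com/xsp/control</namespace-uri>
			<default-prefix>ex</default-prefix>
		</faces-config-extension>
		<component>
			<component-type>com.example.Widget</component-type>
			<component-class>com.example.component.UIWidget</component-class>
			<property>
				<property-name>title</property-name>
				<property-class>string</property-class>
			</property>
			<component-extension>
				<tag-name>widget</tag-name>
			</component-extension>
		</component>
	</faces-config>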

I think that the latter files aren't used by the server, since their purpose is to provide the XSP-source-to-Java translator with mappings from components' XML elements to their Java classes. However, the core XPages runtime classes on the server still retain knowledge of this configuration, which is how the Bazaar and ODP Compiler do their thing. The com.ibm.xsp.registry package and sub-packages are filled with a mix of parser classes and in-memory representations, like com.ibm.xsp.registry.parse.ConfigParserImpl and com.ibm.xsp.registry.LibraryFragmentImpl.

Non-Library Contributions

Though not related to libraries, it's useful to know about a handful of XPages-specific class contributions that can come into play at runtime. These use the IBM Commons extension mechanism, like libraries themselves, but contribute to a good many different parts of the runtime and application flow. Some of these can be defined inside an NSF, while some are only recognized when defined in plugins - there's a good rundown of these on the ODA wiki. It's pretty rare to see these in the wild, but you may see an application here or there that uses these contributions, via in-NSF files like META-INF/services/com.ibm.xsp.core.events.ApplicationListener.

OSGi and Dependencies

In the early days, XPages was not OSGi-based. That arrived in the 8.5.2 era (I believe - I wasn't aware enough in the 8.5.0/8.5.1 era to know the specifics) with the "extensibility API". For the most part, this lineage remains, and the XPages runtime itself isn't too dependent on OSGi, even when it comes to library contributions. Little bits have crept in here and there - the getPluginId() method in XspLibrary and the getOSGiBundle() method in ExtLibLoaderExtension, for example - but it's still largely incidental.

IBM Commons Extensions

If you've done both XPages plugin and Eclipse-the-IDE plugin development, you may have noticed that, while Eclipse plugins usually contribute to customized extension points with complicated schemas, XPages contributions all look like this:

	<extension point="com.ibm.commons.Extension">
		<service type="com.ibm.xsp.Library" class="com.example.SomeXPagesLibrary" />
	</extension>

There are still some places in Domino where you use different extension points, such as when you register a servlet with the Equinox OSGi runtime directly, but for the most part it's just this one point. This is because this extension point is designed to paper over the differences between OSGi extensions and the vanilla-Java-style ClassLoader#getResources mechanism. The type of the service you provide lines up with the META-INF/services/some.extension.type files you can use in your NSF and which still remain inside the embedded jars in the core XPages plugins.

The reason why this OSGi extension point exists is that OSGi intentionally creates separations between the individual plugins that make up your app runtime. In a "normal" web app, all of your dependency jars end up in the WEB-INF/lib sub-directory and are effectively all poured together to make a single class-loading environment. The ClassLoader#getResources route will look through all of the jars in the classpath for these META-INF/services files, but OSGi puts walls between them, and instead provides its own extension mechanism (among others, but this is the one Domino uses).
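
The non-OSGi half of that lookup is plain JDK behavior that you can try in any Java program:

	import java.io.IOException;
	import java.net.URL;
	import java.util.Enumeration;
	
	public class ServiceLookupSketch {
		public static void main(String[] args) throws IOException {
			// Scan the classpath for service files; each one lists implementation
			// class names, one per line
			Enumeration<URL> services = Thread.currentThread().getContextClassLoader()
				.getResources("META-INF/services/com.ibm.xsp.Library");
			while(services.hasMoreElements()) {
				System.out.println(services.nextElement());
			}
		}
	}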

Dependency Resolution

Both Domino and Designer view the NSF like an OSGi plugin, but go about resolving its dependencies slightly differently. Fortunately, this is a case where the differences seldom crop up in practice - I've only hit minor variations in how the two honor the Export-Package directive in the bundle manifest and how they include fragment bundles.

When Designer is building an XPages app, it references the xsp.properties file to determine which XSP Libraries to include, and then uses their getPluginId() methods to determine which OSGi bundle each matches up to (I think). It adds those plugins to the list of dependencies in plugin.xml and (since 9.0.1 FP10) META-INF/MANIFEST.MF. The Eclipse side of Designer then uses that to compose the Plug-in Dependencies list from those bundle IDs and any of their dependencies that are marked as re-exported. I think that Domino only cares about the generated plugin.xml/MANIFEST.MF files - I don't think it does the resolution based on the library class, though I might be wrong about that.

ODPCompiler's Version

Currently, the ODP Compiler hews closer to the "Domino-style" route. For resolving the active classpath, it trusts that the plugin.xml that exists in the ODP is correct and resolves dependencies from there. In the future, it may make sense to have the compiler generate the plugin.xml file itself, in which case it will also have to resolve the plugins based on the library classes. That wouldn't be too difficult, but for now it relies on the exported ODP.

Layer Cake

Looking at the whole XPages architecture, something that strikes me is how much it's simultaneously a giant stack of parts - config parsers, resolvers, runtime bootstrappers, and so forth - and a pretty straightforward server-side web stack from a Java EE perspective. I've been diving deep into XPages in various ways for a long time now - building complex apps, writing library plugins, and even yanking the runtime out of Domino - yet writing the compiler led me to this whole distinct set of capabilities. But a lot of this is essentially "just" ahead-of-time work, with Designer's and the ODP Compiler's jobs being a lot of world-resolution followed by placing compiled pieces into the right places in the NSF.

By the time it gets to the NSF, it actually ends up as a pretty normal-style web app - XPages are just Java classes floating around, non-OSGi dependency jars are in WEB-INF/lib, and the faces-config.xml controls rendering in the same way as in JSF. A lot of that, though, will come up in later posts, where I go into the gotchas involved in taking these compilation results and other ODP resources and actually getting them into an NSF.

How the ODP Compiler Works, Part 1

Sun Jun 30 13:54:40 EDT 2019

Tags: nsfodp xpages
  1. Next Project: ODP Compiler
  2. NSF ODP Tooling 1.0
  3. NSF ODP Tooling Example Project
  4. NSF ODP Tooling 1.2
  5. How the ODP Compiler Works, Part 1
  6. How the ODP Compiler Works, Part 2
  7. How the ODP Compiler Works, Part 3
  8. How the ODP Compiler Works, Part 4
  9. How the ODP Compiler Works, Part 5
  10. How the ODP Compiler Works, Part 6
  11. How the ODP Compiler Works, Part 7

A year ago, I started a project to compile NSFs - and particularly large XPages projects - independently of Designer. Since the initial releases, the project has grown, gaining an ODP exporter, extra Eclipse UI integration, and the ability to run without installing components on a remote server. It's become an integral part of my workflow with several projects, where I include the NSFs as part of a large Maven build alongside OSGi plugins and a final distribution.

Building this tooling required learning a lot about the internals of XPages, the specifics of how various design elements are stored and handled in an NSF, and miscellaneous bits about Equinox and Maven. Since there's a good amount of arcane knowledge embedded in the project, I think it'll be helpful to take some time to dive deep into what's going on, starting with XPages.

XSP to Java to Bytecode

The first challenge for me to overcome was how to go from XPages XML source to the Java class files (like those seen in the "Local" source folder in Designer) and finally to compiled Java bytecode. Much like in Designer's process, the middle part is just incidental: only the XSP source and the bytecode are actually stored in the NSF.

The official XSP -> Java compiler exists only in Designer, and so one route would be to try to get those plugins working on Domino. I think that'd work, but it'd be a huge hassle and, fortunately, an unnecessary one. The XPages Bazaar project contains essentially a clean re-implementation of the glue code required to coax the runtime into emitting what I needed. I used the Bazaar as an incubator for the early versions of the compiler, tweaking the core code with additions and fixes to handle the various needs I ran into.

OSGi Bundles

On its own, the Bazaar did a good job of taking an XPage and compiling it against whatever the surrounding environment had. However, to compile a full XPages app as part of a Maven build, I'd need the ability to dynamically load XPages libraries and dependencies on the fly.

To do this, I added the option to include an update site directory, and then I have the compiler stream through all the plugins and initialize them. Fortunately, the OSGi environment makes this pretty easy. The BundleContext object that's available from each OSGi bundle has an installBundle method you can use by pointing at a bundle file URL, and then you can call start() on that result to actually initialize it. I had to do a little extra work to account for bundles that shouldn't be started (source-only bundles and "fragments", which are additions onto normal plugins) and the like, but it's not too complicated.
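
A condensed version of that loop - assuming a flat directory of plugin jars - might look like:

	import java.nio.file.DirectoryStream;
	import java.nio.file.Files;
	import java.nio.file.Path;
	import java.util.ArrayList;
	import java.util.List;
	
	import org.osgi.framework.Bundle;
	import org.osgi.framework.BundleContext;
	
	public class UpdateSiteLoaderSketch {
		// Install every bundle from an update site's plugins directory, then start
		// everything that isn't a fragment (fragments attach to their hosts instead)
		public static void installUpdateSite(BundleContext context, Path pluginsDir) throws Exception {
			List<Bundle> installed = new ArrayList<>();
			try(DirectoryStream<Path> jars = Files.newDirectoryStream(pluginsDir, "*.jar")) {
				for(Path jar : jars) {
					installed.add(context.installBundle(jar.toUri().toString()));
				}
			}
			for(Bundle bundle : installed) {
				if(bundle.getHeaders().get("Fragment-Host") == null) {
					bundle.start();
				}
			}
		}
	}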

XSP Libraries

Just installing the OSGi plugins isn't enough to get the XPages runtime to know about any libraries that may be included, however. This is done by finding all of the library extension contributors, sorting them for compatibility, wrapping them in a com.ibm.xsp.library.LibraryWrapper for some reason, wrapping that in a com.ibm.xsp.registry.config.SimpleRegistryProvider, and then adding the results of that to an in-memory com.ibm.xsp.registry.FacesSharableRegistry instance.

This is one of those cases where the actual code involved in the end isn't terribly long, but the amount of delving into the framework to figure out the needed parts was immense. In essence, each XPages application is represented as a FacesProject implementation object, which retains a registry of the libraries it knows about. In a normal running server, this happens automatically during initialization: the runtime opens the NSF, figures out which libraries it needs, finds them from the ones it knows about, and constructs the app's contained world. For the compiler, I ended up having to do something of an ad-hoc version of this as it goes, finding all the parts that need to be kicked to notice the available libraries and have them around to map something like <abc:someCustomComponent/> to an instance of com.abc.xsp.CustomComponent.

Custom Controls

Custom controls are a similar story, but they use a specialized variant of the "real" library system called LibraryFragment. The way these work is that, before the actual XSP -> Java compilation step, the compiler reads their definitions and adds them to the in-memory FacesProject. It's important to define them all this way before actually trying to interpret their source (or any XPages), because it allows the interpreter to understand a reference to one CC from another. Fortunately, the process is separated enough that the definitions can all happen before any of the Java classes actually exist. Otherwise, I would have had to either try to make a dependency graph of which CC references which other, or just keep trying to compile them repeatedly until it got the right order by brute force. The latter process will actually show up in a future blog post.

Java Source Files

Compiling individual Java source files - both those that show up in the Code/Java part of an NSF (or a custom source folder) and the translated XSP source - is handled by the Bazaar, in a class called JavaSourceClassLoader. This class wraps around the in-memory Java compilation capabilities that were added to the JDK in Java 1.6. Paired with SourceFileManager, it provides the knowledge needed to resolve classes from OSGi bundles in the active environment, including specialized handling of dependency management based on re-exported dependencies, embedded jars, and other OSGi-isms that play a big part in XPages libraries.

In essence, these classes provide a similar compilation environment to what Designer does when it creates the Eclipse project for compiling an NSF. They build up an environment with knowledge of all the dependencies that, in Designer, show up in "Plug-in Dependencies", and then the compiler feeds it all of the Java source files in the project. With all of this environment provided, the underlying Java compiler is able to do its job of converting them to bytecode en masse, which is then passed back to the compiler for insertion into the NSF.
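
For reference, the bare JDK mechanism underneath all this - minus the Bazaar's OSGi-aware file manager - looks like:

	import java.net.URI;
	import java.util.Arrays;
	
	import javax.tools.JavaCompiler;
	import javax.tools.JavaFileObject;
	import javax.tools.SimpleJavaFileObject;
	import javax.tools.ToolProvider;
	
	public class InMemoryCompileSketch {
		// A JavaFileObject backed by a String instead of a .java file on disk
		static class StringSource extends SimpleJavaFileObject {
			private final String code;
			StringSource(String className, String code) {
				super(URI.create("string:///" + className.replace('.', '/') + ".java"), Kind.SOURCE);
				this.code = code;
			}
			@Override
			public CharSequence getCharContent(boolean ignoreEncodingErrors) {
				return code;
			}
		}
		
		public static void main(String[] args) {
			JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
			JavaFileObject source = new StringSource("example.Hello",
				"package example; public class Hello { }");
			// With the default (null) file manager, the .class output lands on disk;
			// the Bazaar's file manager instead captures results for the NSF and
			// resolves dependencies from the installed OSGi bundles
			compiler.getTask(null, null, null, null, null, Arrays.asList(source)).call();
		}
	}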

Other Big Components

In the next blog posts in this series, I'll go over some of the other big hurdles I had to overcome to get everything working: namely, figuring out all of the specialized behavior necessary to flag imported and created design notes properly, and then the arcane incantations necessary to get this OSGi environment working outside of the Domino server.