Musing About Multi-App Structure

Tue Apr 28 12:25:39 EDT 2015

Tags: musing

One of the projects I'm working on is going to involve laying the foundations for a suite of related XPages applications, and so I've been musing about the best way to accomplish that. I'll also want to do something similar down the line for OpenNTF, so this is a useful track for consideration. Traditionally, Notes and XPages applications are fairly siloed: though they share authentication information and some configuration via groups, integration between them is fairly ad-hoc.

I'm not sure what the final structure will involve, but for now I'll assume it will use my framework or something similar. Though my Framework provides a lot of the structure that will be needed in this sort of project, it doesn't answer the inter-app question. In each app, responsibilities are clearly defined, model objects follow a consistent structure (and can point to other data sources), and putting all the undergirding code in the plugin makes for very lean in-app codebases.

There are a couple main points of integration I can think of: permissions, findability (i.e. what's the URL to link from App A to App B), and business objects/models. The first is best handled the traditional way - just make groups and use them in the ACLs - so I'm not going to sweat that one too much. Theoretically, it could be useful to have a companion app to manage ACLs in a friendly way, but that's not needed at this point. That leaves the others, which are related but have their own distinct needs. I've been thinking of a couple potential setups.

One Big Honkin' App

I could just throw everything into one app. This would have some immediate benefits: everything is already "shared" by virtue of being in the same place already, and it would make it very easy to move the app "bundle" around and to have multiple distinct instances on a server. Additionally, caching opportunities would abound, so this could make for some heavy performance boosts - and I don't know of any particular performance down sides of having a lot of XPage code in one NSF. It would make it easier to handle common UI elements, but, since my development style already favors using stock or ExtLib controls paired with renderers, the need for reusable custom controls is lower than usual.

The down sides are fairly obvious, though. Putting everything in one NSF makes for a difficult-to-navigate codebase long-term, and with a suite of applications that will likely have multiple developers, that's a recipe for a miserable experience. Additionally, the XPages/NSF dev environment doesn't provide any affordances for compartmentalizing design elements - all XPages are just going to sit in a single flat list. There's the IBM-style convention of stuff like "admin_home.xsp", "people_list.xsp", which I adopt for most apps, but that would fall on its face when you start getting to "app1_admin_home.xsp". This would also make it difficult to develop a new companion app down the line, adding to the compartmentalizing woes.

So... I don't think I want to do that.

Business Logic in a Plugin

I could go the almost-opposite route and move most of the business logic into an OSGi plugin, and then the individual apps would contain almost exclusively UI - XPages that refer to these plugin-side model objects and deal just with the mechanics of displaying the data and interacting with the user.

This would have some nice advantages. The XPages apps themselves would be extremely slim and I could move what common custom controls do exist into the plugin. That would remove a chunk of the worries about bundling absolutely everything into one place, and the split of "logic" and "UI" is a classic and important one, and would scale nicely in a situation where developers are split into front-end and back-end. It would also retain the cache and share-config benefits of the "one big app" approach.

However, this would retain the problem of monolithic business logic, solving very few of the issues of having multiple developers working on separate modules, or wanting to add new ones to an existing basis. Additionally, the process of developing each module would be a huge PITA: changes would involve parallel modifications to the plugin and the NSF, and the experience of rapid development in an OSGi plugin is not great. This would also wander into the multi-tenancy problem, where the centralized code would have to know about distinct instances of the app group deployed on the server. This is a problem that will show up in most configurations, but I don't think this option brings enough to the table to compensate.

"Normal" separation

I could also go the "normal" route and have the apps know about each other in a fairly ad-hoc way. This wouldn't be too bad, overall - the apps could find each other via in-app configuration documents (or, most likely, a more-centralized variant on that approach).

The primary benefit here is that developing each app would be very easy, since it would be the same as any other Framework-based app. And, while the apps wouldn't be truly sharing code, they would be structured similarly, lowering the hurdle of working on multiple ones. Modularity and splitting responsibilities between multiple developers would be fairly straightforward.

The severe down side, though, is the question of shared model logic. If one app has a notion of "Person" that is a shared concept across all apps, how do the others know about it? I could copy the Java classes around from app to app - they'd all point to the same data - and make sure they're in sync either via templates or manually, but that is awful. This seat-of-the-pants sort of integration wouldn't go far enough.

Services and Coordinator

This approach would be similar to the "normal" separation, but I would develop a way for apps to pull in the business logic from other ones. For example, if an app wants to use the same "Person" class that another app defined, I could write a "coordinator" plugin that would find the class in the appropriate app and load it for this one. This is actually similar to something my Framework already does: its REST services consist of a JAX-RS/Wink servlet in the OSGi plugin which loads model objects from inside the NSF without having to have in-depth knowledge of the class.

This is made relatively simple by the fact that my model objects and collections implement standard interfaces - DataObject and List respectively, plus more-specific variants for additional capabilities - and they enforce their behavior and constraints through the standardized methods. So if, for example, the Person object has a getManager method that returns the person's manager as an object, calling getValue("manager") would use this method and return the right thing. The "consumer" code wouldn't have the benefit of seeing the actual methods on the class and so would have to know what to call the fields by convention, but honestly I've found this convention-based route to be entirely practicable.

The other side of this would be the "coordinator" plugin, which would handle knowledge of how to access other parts of the suite, so the individual apps would request of it "I would like the Person manager from the StaffAdmin app", or something to that effect. This would have to include some way to solve the multi-tenant and database-location problem, which is a largely-unavoidable issue. Once this plugin is in place, it would presumably require little ongoing work, with primary development happening in the NSFs.

As the explanation-heavy and down-side-light nature of this section implies, this is the sort of approach I'm leaning towards.

Black Magic

There are also other approaches and techniques I've been thinking about - such as trying to figure out how to load XPages from one DB inside a central one so they act like a single pool, coming up with some sort of abstract model-definition format and generating app UIs from that, or just writing the whole thing in plugins. I probably won't go down those sorts of routes - this is supposed to be maintainable by Domino developers - but it's always good to think about the more-out-there options, so I don't cut off future possibilities.

Building on ODA's Maven-ization

Tue Mar 31 20:30:49 EDT 2015

Tags: maven oda

Over the weekend, I took a bit of time to apply some of my hard-won recent Maven knowledge to a project I wish I had more time to work with lately: the ODA. The development branches have been Maven-ized for half a year or so, but primarily just to the point of getting the compile to work. Now that I know more about it, I was able to go in and make great strides towards several important goals.

As a preliminary note: don't take my current implementations as gospel. There are parts that will no doubt change; for example, there are some intermittent timing issues currently with the final assembly. But the changes I did make have borne some early fruit.

Source Bundles

Over the releases, it's proven surprisingly fiddly to get parameter names, inline Javadoc, and attached source to work in Designer, leaving some builds no better off than the legacy API in those regards. The apparently-consistent fix for this is the use of "source" plugins: OSGi plugins that go alongside the normal one that just contain the source of each class. Those aren't too bad to generate manually from Eclipse, but the point of Maven is getting away from that sort of manual stuff.

Fortunately, Tycho (the OSGi toolkit for Maven) includes a plugin that allows you to generate these source bundles alongside the normal ones, by including this in the list of plugins executed during the build:

<plugin>
	<groupId>org.eclipse.tycho</groupId>
	<artifactId>tycho-source-plugin</artifactId>
	<version>${tycho-version}</version>
	<executions>
		<execution>
			<id>plugin-source</id>
			<goals>
				<goal>plugin-source</goal>
			</goals>
		</execution>
	</executions>
</plugin>

Once you have that (which I added to the top-level project, so it cascades down), you can then add the plugins to the OSGi feature with the same name as the base plugin plus ".source". Eclipse will give a warning that the plugins don't exist (since they exist only during a Maven build), but you can ignore that.
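
For reference, the feature.xml entry for one of these looks essentially the same as any other plugin reference, just pointing at the generated ".source" name - something like this, using org.openntf.domino as the example base plugin (the size attributes are the usual placeholders):

<plugin
		id="org.openntf.domino.source"
		download-size="0"
		install-size="0"
		version="0.0.0"
		unpack="false"/>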

Javadoc

Javadoc generation is an area where I suspect I'll make the most changes down the line, but I managed to wrangle it into a spot that mostly works for now.

Not every project in the tree needs Javadoc (for example, we don't need to include docs for third-party modules necessarily), but it's still useful to specify configuration. So I took the already-existing basic config in the parent pom and moved it to pluginManagement for the children:

<pluginManagement>
	<plugins>
		<plugin>
			<!-- javadoc configuration -->
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-javadoc-plugin</artifactId>
			<version>2.9</version>
			<configuration>
				<failOnError>false</failOnError>
				<excludePackageNames>com.sun.*:com.ibm.commons.*:com.ibm.sbt.core.*:com.ibm.sbt.plugin.*:com.ibm.sbt.jslibrray.*:com.ibm.sbt.proxy.*:com.ibm.sbt.security.*:*.util.*:com.ibm.sbt.portlet.*:com.ibm.sbt.playground.*:demo.*:acme.*</excludePackageNames>
			</configuration>
		</plugin>
	</plugins>
</pluginManagement>

Then, I added specific plugin references in the applicable child projects:

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-javadoc-plugin</artifactId>
	<executions>
		<execution>
			<id>generate-javadoc</id>
			<phase>package</phase>
			<goals>
				<goal>jar</goal>
			</goals>
		</execution>
	</executions>
</plugin>

With those, the build can generate Javadoc appropriate for consumption in the final assembly down the line.

Assembly

The final coordinating piece is referred to as the "assembly". The job of the Maven Assembly Plugin is to take your project components and output - built Jars, source files, documentation, etc. - and assemble them into an appropriate final format, usually a ZIP file.

The route I took is to add a distribution project to the tree whose sole job it is to wait until the other components are done and then assemble the results. The pom for this project primarily consists of telling Maven to run the assembly plugin to create an appropriately-named ZIP file using what's called an "assembly descriptor": an XML file that actually provides the instructions. There are a couple stock descriptors, but for something like this it's useful to write your own. It's quite a file (and also liable to change as I figure out the best practices), but is broken down into a couple logical segments.

First off, we have a rule telling it to include all files from the "src/main/resources" folder in the current (assembly) project:

<fileSets>
	<fileSet>
		<directory>src/main/resources</directory>
		<includes>
			<include>**/*</include>
		</includes>
		<outputDirectory>/</outputDirectory>
	</fileSet>
</fileSets>

This folder contains a README description of the result as well as the miscellaneous presentations and demo files the ODA has collected over time.

Next, in addition to the source bundles mentioned earlier, I want to include ZIP files of the important project sources in the distribution, for easy access (technically wasteful, but not by too much):

<moduleSet>
	<useAllReactorProjects>true</useAllReactorProjects>
	<includes>
		<include>org.openntf.domino:org.openntf.domino</include>
		<include>org.openntf.domino:org.openntf.domino.xsp</include>
		<include>org.openntf.domino:org.openntf.formula</include>
		<include>org.openntf.domino:org.openntf.junit4xpages</include>
	</includes>
	
	<binaries>
		<attachmentClassifier>src</attachmentClassifier>
		<outputDirectory>/source/</outputDirectory>
		<unpack>false</unpack>
		<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
	</binaries>
</moduleSet>

I use the "binaries" tag here instead of "sources" because I want to include the ZIP forms (hence unpack=false) - this is one part that may change, but it works for now.

Next, I gather the Javadocs generated earlier, but these I do want to unpack:

<moduleSet>
	<useAllReactorProjects>true</useAllReactorProjects>
	<includes>
		<include>org.openntf.domino:org.openntf.domino</include>
		<include>org.openntf.domino:org.openntf.domino.xsp</include>
		<include>org.openntf.domino:org.openntf.formula</include>
	</includes>
	
	<binaries>
		<attachmentClassifier>javadoc</attachmentClassifier>
		<outputDirectory>/apidocs/${module.artifactId}</outputDirectory>
		<unpack>true</unpack>
	</binaries>
</moduleSet>

This results in an "apidocs" folder containing the Javadoc HTML for each of those three projects in subfolders.

Finally, I want to include the built and ZIP'd Update Site for use in Designer and Domino:

<moduleSet>
	<useAllReactorProjects>true</useAllReactorProjects>
	<includes>
		<include>org.openntf.domino:org.openntf.domino.updatesite</include>
	</includes>
	
	<binaries>
		<attachmentClassifier>assembly</attachmentClassifier>
		<outputDirectory>/</outputDirectory>
		<unpack>false</unpack>
		<includeDependencies>false</includeDependencies>
		<outputFileNameMapping>UpdateSite.zip</outputFileNameMapping>
	</binaries>
	
	<sources>
		<outputDirectory>/</outputDirectory>
		<includeModuleDirectory>false</includeModuleDirectory>
		<includes>
			<include>LICENSE</include>
			<include>NOTICE</include>
		</includes>
	</sources>
</moduleSet>

While grabbing the Update Site, I also copy the all-important LICENSE and NOTICE files from this current project - these may be best moved to the resources folder above.

The result of all this is a nicely-packed ZIP containing everything a user should need to get started with the API.

Next Steps

So, as I mentioned, this work isn't complete, in large part because I'm still learning the ropes. I suspect that the way I'm gathering the sources in the assembly and generating and gathering the Javadoc are not quite right - and this shows in the way that slightly-different host configurations (like on a Bamboo build server or when doing a multi-threaded build) fail during packaging.

Additionally, it's somewhat wasteful to include the source plugins even for server distributions; I won't really lose sleep over it, but it'd still be ideal to continue the recent policy of providing ExtLib-style distinct Update Sites. I'm not sure if this will require creating multiple feature and update-site projects or if it can be accomplished with build profiles.

Finally, I would love to be able to get rid of the source-form third-party dependencies like Guava and Javolution. One of the main benefits of Maven is that you can very easily consume dependencies by listing them in the config, but Tycho and Eclipse throw a wrench into that: when you configure a project to use Tycho, Eclipse stops referencing the Maven dependencies. Moreover, even though I believe all of the dependencies we use contain OSGi metadata, which would satisfy a Tycho command-line build, both Eclipse and the requirement that we build an old-style (non-p2) Update Site prevent us from doing that simply. It's possible that the best route will be to have Maven download and copy in the Jar files of the dependencies, but even that has its own suite of issues.

But, in any event, it's satisfying seeing this come together - and nice for me personally to build on the work Nathan, Paul, and Roland-and-co. have been doing lately. Maven is a monster and still suffers from severe "how the F does this stuff work?" problems, but it does feel good to put it to work.

Auto-OSGi-ifying Maven Projects

Sat Mar 28 16:15:59 EDT 2015

Tags: maven

In my last post, I discussed some of the problems that you run into when dealing with Maven projects that should also be OSGi plugins, particularly when you're mixing OSGi and non-OSGi projects in the same Maven build (in short: don't do that). Since then, things have smoothed out, particularly when I split the OSGi portion out into another Maven build, allowing it to consume the "core" artifacts cleanly, without the timing issue described previously.

But I ran into another layer of the task: consuming the Maven artifacts as plain Jars is all well and good, but the ideal would be to also have them available as a suite of OSGi plugins, so they can be managed and debugged more easily in an OSGi environment like Eclipse or Domino. Fortunately, this process, while still fairly opaque, is smoother than the earlier task.

A note on terminology: the term "plugin" can refer to both the OSGi component as well as the tools added into a Maven build. The term "bundle" aptly describes the OSGi plugins as well, but I'm used to "plugin", so that's what I use here. It's probably the case that an OSGi plugin is a specialized type of bundle, but whatever.

Preparing the Plugins

The route I'm taking, at least currently, is to tell the root Maven project that all of its Jar-producing children should also have a META-INF/MANIFEST.MF file packaged along to allow for OSGi use, and moreover to automatically generate that manifest using the maven-bundle-plugin. The applicable code in the parent pom.xml looks like this:

<build>
    <pluginManagement>
      <plugins>
        <plugin>
            <groupId>org.apache.felix</groupId>
            <artifactId>maven-bundle-plugin</artifactId>
            <version>2.1.0</version>
            <configuration>
                <manifestLocation>META-INF</manifestLocation>
                <instructions>
                    <Bundle-RequiredExecutionEnvironment>JavaSE-1.6</Bundle-RequiredExecutionEnvironment>
                    <Import-Package></Import-Package>
                </instructions>
            </configuration>

            <executions>
                <execution>
                    <id>bundle-manifest</id>
                    <phase>process-classes</phase>
                    <goals>
                        <goal>manifest</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <artifactId>maven-jar-plugin</artifactId>
            <version>2.3.1</version>
            <configuration>
                <archive>
                    <manifestFile>META-INF/MANIFEST.MF</manifestFile>
                </archive>
            </configuration>
        </plugin>
      </plugins>
    </pluginManagement>
</build>

In order to actually generate the manifest files, I included a block like this in each child project that produces a Jar:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.felix</groupId>
            <artifactId>maven-bundle-plugin</artifactId>

            <configuration>
                <instructions>
                    <Bundle-SymbolicName>com.somecompany.someplugin</Bundle-SymbolicName>
                </instructions>
            </configuration>
        </plugin>
    </plugins>
</build>

The Bundle-SymbolicName bit is there to translate the project's Maven artifact ID (which would be like "foo-someplugin") into a nicer OSGi version. There are other ways to do this, including just letting it use the default, but it made sense to write them manually here.

Once you do that and then run a Maven package, each Jar project in the tree should get an auto-generated MANIFEST.MF file that exports all of the project's Java classes and specifies a Java 6 runtime and no imported packages. There are many tweaks you can make here - any of the normal MANIFEST entries can be specified in the <instructions/> block, so you could add imported packages, required bundles, or other metadata at will.
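
For example, a more fleshed-out instructions block might look something like this (the package and bundle names here are placeholders rather than anything from a real project):

<configuration>
	<instructions>
		<Bundle-SymbolicName>com.somecompany.someplugin</Bundle-SymbolicName>
		<Export-Package>com.somecompany.someplugin.*</Export-Package>
		<Import-Package>org.osgi.framework</Import-Package>
		<Require-Bundle>com.ibm.commons</Require-Bundle>
	</instructions>
</configuration>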

If you install these projects into your local repository, then downstream OSGi projects using Tycho can find the dependencies when you include them in the pom.xml by Maven artifact ID and in the downstream MANIFEST.MF by OSGi bundle name. There's one remaining hitch (at least): though Maven will be fine with that resolution, Eclipse doesn't pick up on them. To do that, it seems that the best route is to create a p2 repository housing the plugins, which would also be useful for other needs.
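
To sketch what that pom-plus-MANIFEST consumption looks like (the group ID and version here are hypothetical, and this assumes Tycho's pomDependencies setting is flipped to "consider" so that Maven-resolved artifacts join the target platform), the consuming build would list the artifact normally:

<dependency>
	<groupId>com.somecompany</groupId>
	<artifactId>foo-someplugin</artifactId>
	<version>1.0.0</version>
</dependency>

...while the consuming plugin's MANIFEST.MF would reference it with a line like "Require-Bundle: com.somecompany.someplugin".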

Creating an Update Site

Fortunately, there is actually an excellent example of this on GitHub. By following those directions, you can create a project where you list the plugins you want to include as dependencies in the pom.xml, and it will properly package them into a p2 site containing all the plugins with their OSGi-friendly names and nice site metadata.

As a Domino-specific aside, a "p2 Update Site" is somewhat distinct from the Update Sites we've gotten used to dealing with - namely, it's a newer format that is presumably unsupported by Notes and Domino's outdated infrastructure. You can tell the difference because the "old" ones contain a site.xml file while the p2 format contains content.jar and artifacts.jar (those may be .xml instead). It's just another one of those things for us to deal with.

In any event, the instructions on GitHub do what they say on the tin, but I wanted a bit more automation: I wanted to automatically include all of the plugins built in the project without specifying them each as a dependency. To do this, I replaced Step 2 in the example (the use of maven-dependency-plugin) with the maven-assembly-plugin, which is a generic tool for pulling together the results of a build into some useful format. The replaced plugin block looks like this:

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-assembly-plugin</artifactId>
	<version>2.5.3</version>
	<configuration>
		<descriptors>
			<descriptor>src/assembly/plugins.xml</descriptor>
		</descriptors>
		<outputDirectory>${project.basedir}/target/source</outputDirectory>
		<finalName>plugins</finalName>
		<appendAssemblyId>false</appendAssemblyId>
	</configuration>
	<executions>
		<execution>
			<id>make-assembly</id>
			<!-- Bump this up to earlier than package so that the plugins below see the results -->
			<phase>process-resources</phase>
			<goals>
				<goal>single</goal>
			</goals>
		</execution>
	</executions>
</plugin>

This block tells the assembly plugin to look for an assembly descriptor file (which is yet another specialized XML file format, naturally) named "plugins.xml" and execute its instructions during the phase where it's processing resources, coming in just before the later plugins.

In turn, the assembly descriptor looks like this:

<assembly
	xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">
	<id>plugins</id>
	<formats>
		<format>dir</format>
	</formats>
	<includeBaseDirectory>false</includeBaseDirectory>
	<moduleSets>
		<moduleSet>
			<useAllReactorProjects>true</useAllReactorProjects>
			<includes>
				<include>*:*:jar:*</include>
			</includes>
			<binaries>
				<outputDirectory>/</outputDirectory>
				<unpack>false</unpack>
				<includeDependencies>true</includeDependencies>
			</binaries>
		</moduleSet>
	</moduleSets>
</assembly>

What this says is to include all of the modules (Maven artifacts) being processed in the current build that are packaged as Jars and copy them into the designated directory, where they will be picked up by the Tycho plugins down the line.

The result of this Rube Goldberg machine is that all of the applicable plugins in the current build (and their dependencies) are automatically gathered for inclusion in the update site, without having to maintain a specific list.

Missing Pieces

This process accomplishes a great deal automatically, alleviating the need to maintain MANIFEST.MF files or a repository configuration, but it doesn't cover quite everything that might be needed. For one, there's no feature project; the update site is just a bunch of plugins without features to go along with them. Honestly, I don't know if those are even required for most uses - Eclipse seems capable of consuming the site as-is. Secondly, though, the result isn't suitable for use in an old-style environment, so this isn't something you would go plugging into Designer. For that, you'd want a secondary project that wraps the plugins into a feature in an old-style update site, which would have to be done in a second Maven build. Regardless, this seems to get you most of the way, and saves a ton of hassle.

Tycho and Tribulations

Sat Mar 14 15:02:48 EDT 2015

Tags: maven

For the last few weeks, a large part of my work has involved dealing with Maven, and much of that also involves Eclipse. It's been quite the learning experience - very frustrating much of the time, but valuable overall. In particular, any time OSGi comes into play, things get very complicated and arcane, in non-obvious ways. Fair warning: this blog post will likely start out as an even-keeled description of the task at hand before descending into just ranting about Maven.

The Actors

To start out, it's important to know that this sort of development involves three warring factions, each overlapping and having distinct views of the world. In theory, there are plugins that sort out all the differences, but this doesn't play out very smoothly in reality. Our players are:

  • Maven. This is the source of our trouble, but overall worth it. Maven is a build system for Java (and other) projects that brings with it great powers for dependency management, project organization, packaging, distribution, and any number of other things. I've been increasingly dealing with it, initially as an observer while the ODA team descended into madness to convert that project, and then directly by Maven-izing my own framework. Its view of the world is of "artifacts" - conceptual units like junit, poi, or other dependencies, plus your own project components - organized in a tree of modules and available via repositories. Its build process is a multi-stage lifecycle with hooks for plugins at each step of the way.
  • Eclipse. Eclipse-the-IDE has its own view of how a Java project should be organized and built, and it doesn't involve Maven. There is a plugin for Eclipse and Maven, m2eclipse, that is meant to patch over these differences, but it can only go so far - while it helps Eclipse know a bit about Maven dependencies and its plugins, it's very dodgy and often involves trying to sync the Eclipse build configuration to be an imperfect representation of the Maven config.
  • OSGi. OSGi is a packaging, dependency, and runtime model, and is spoken natively by Eclipse (and Domino). However, it butts heads with Maven: they both cover the "packaging and dependencies" ground, and this creates a mess. Again, there are plugins to help bridge the gap, but these bring another layer of complexity and brittleness to the process.

Maven, Eclipse, and OSGi go together like oil, water, and a substance that dissolves in water but reacts explosively with oil.

OSGi's Plugins

The interaction with OSGi deserves a bit of further explanation. Unlike the Maven/Eclipse bridge, where there's basically one tool to work with, imperfect as it is, dealing with Maven+OSGi has two distinct plugins, which may or may not be required for your needs:

  • Tycho. This is the big one, intended to give Maven a thorough understanding of OSGi's view of the world, parsing the MANIFEST.MF files and hunting down dependencies using both an OSGi environment and Maven's normal scheme (if you tell it to correctly). If you're writing a full-on Eclipse/OSGi plugin/feature set (like ODA or my Framework), Tycho will be involved.
  • The Maven Bundle Plugin. This confusingly-named plugin is specifically referring to "bundle" in the OSGi sense, which is the elemental form of an OSGi plugin (the terminology begins to really overlap here). Its role is to take a non-OSGi project - say, a "normal" Maven project you or a third party wrote - and generate a MANIFEST.MF for you, allowing you to create an OSGi-friendly project usable as a dependency elsewhere.

These two projects, though often both required, are not related, and are crucially incompatible in one major way: Tycho's dependency resolution runs before the Bundle Plugin can do its job. So you can't, for example, have a Bundle project that generates an OSGi-friendly plugin and then depend on it in a "real" OSGi context inside the same Maven build. As far as I can tell, the "fix" is to separate these out into separate Maven projects. So, if you want to consume Maven projects and convert them into OSGi plugins without also manually managing plugin stuff and dependency copying, you have to make it a two-step process. The reason for this is that computers suck.

Eclipse and Maven

Throughout this sort of development, there's a constant gremlin on your back: the distinct worlds of Eclipse and Maven. Many changes to the pom.xml (Maven's project descriptor file) will prompt Eclipse to tell you that its project config is out of date and that you must click a menu item to sync it, which it refuses to do itself for some reason. Additionally, you will frequently run into a case where you'll paste in a block of Maven XML from somewhere and it will be legal for Maven, but Eclipse will complain about not having lifecycle support for it. If you're lucky, you can click the "quick fix" to download an adapter automatically, or failing that tell it to ignore that part. Other times, it'll give you some cryptic error about packaging or the like and offer no solution. The "fix" at that point is often to stop trying to do what you want to do.

Because of these and other conditions, it's fairly easy to get into a situation where the project will compile in Eclipse but not in Maven or vice-versa. Sometimes, this isn't too bad to fix, such as when you just need to add a dependency to a given project in Maven. Other times, things will get more arcane, requiring seeking out more blocks of Maven XML (this is a common task) to either let Maven or Eclipse know about the other, or to at least tell Eclipse to not bother trying to process part of the Maven project. This process is most similar to an adventure game, trying different combinations of plugins and pasted XML until it works or you quit and try a different career path.

Documentation

Capping these problems off is the peculiar nature of documentation for all this. From my experience, it comes in a couple forms:

  • Official documentation that is either a very basic getting-started tutorial or assumes you have a complete understanding of Maven's conceits and idioms to read what they're talking about.
  • Individual plugin pages with varying levels of thoroughness, and usually no mention of interaction with other components.
  • Blog posts and Stack Overflow questions from 3-5 years ago, half of which amount to "X doesn't work", and most of the rest of which contain blocks of XML to try pasting into your pom without much explanation.

After working with Maven long enough, you start developing a vague, disjointed understanding of how it works - how the "plugins" inside "build" differ from those inside "pluginManagement", for example - but it's slow going. It seems to be the sort of thing where you just have to pay your dues.

Conclusion For Now

Things are very gradually coming together, and the benefits of Maven are paying off as I start avoiding the pitfalls and implementing things like Jenkins. Once I properly sort out the projects I'm working on, I'll post more about what I learn to be the right ways to accomplish these goals, but for now my assessment remains "Maven is a huge PITA, but overall probably worth it".

Quick Tip: Wrapping a lotus.domino.Document in a DominoDocument

Tue Mar 03 15:13:13 EST 2015

Tags: xpages java

In the XPages environment, there are two kinds of documents: the standard Document class you get from normal Domino API operations and DominoDocument (styled NotesXspDocument in SSJS). Many times, these can be treated roughly equivalently: DominoDocument has some (but not all) of the same methods that you're used to from Document, such as getItemValueString and replaceItemValue, and you can always get the inner Document object with getDocument(boolean applyChanges).

What's less common, though, is going the other direction: taking a Document object from some source (say, database.getDocumentByUNID) and wrapping it in a DominoDocument. This can be very useful, though, as the API on the latter is generally nicer, containing improved support for dealing with attachments, rich text and MIME, and even little-used stuff like getting JSON from an item directly. Take a gander at the API: com.ibm.xsp.model.domino.wrapped.DominoDocument.

It's non-obvious, though, how to make use of it from the Java side. For that, there are a couple wrap static methods at the bottom of the list. The methods are overflowing with parameters, but they align with many of the attributes used when specifying an <xp:dominoDocument/> data source and are mostly fine to leave as blanks. So if you have a given Document, you can wrap it like so:

Document notesDoc = database.getDocumentByUNID(someUNID);
DominoDocument domDoc = DominoDocument.wrap(
	database.getFilePath(), // databaseName
	notesDoc,               // Document
	null,                   // computeWithForm
	null,                   // concurrencyMode
	false,                  // allowDeletedDocs
	null,                   // saveLinksAs
	null                    // webQuerySaveAgent
);

Once you do that, you can make use of the improved API. You can also serialize these objects, though I believe you have to call restoreWrappedDocument on it after deserialization to use it again. I do this relatively frequently when I'm in a non-ODA project and want some nicer methods and better data-type support.

Be a Better Programmer, Part 5

Fri Jan 30 21:02:52 EST 2015

  1. Be a Better Programmer, Part 1
  2. Be a Better Programmer, Part 2
  3. Be a Better Programmer, Part 3
  4. Be a Better Programmer, Part 4
  5. Be a Better Programmer, Part 5

Inevitably, as a programmer, you're going to run into a situation where you're faced with a task that seems impossibly daunting, whether it's because the task is large or because it's particularly arcane. While sometimes it's just going to be a long slog (some code bases are just monsters), there's usually some underlying conceit that brings everything together.

Figure Out How Things Are Simple

No matter how crazily complex or impossible-seeming a programming task can be, it's useful to remember that it was written by just some humans, and is meant to be understood. At the fundamental level, it's all just shifting bits around, but even above that everything has a fundamental logic that is usually surprisingly straightforward. There are two main types of things that tend to make me initially say "what the F is even going on here?": low-level, C-or-below-style processing and really huge, sprawling Java-style frameworks.

Low-Level Fiddling

Years ago, I found myself in a position where I had to reverse-engineer the Flash Media Server's temporary object-storage format. That's exactly the sort of thing that sounds freaking impossible at first, but actually isn't so bad when you get to it. The important thing to know going in was that the file was most likely going to be a simplest-possible implementation of pouring an in-memory data structure onto the filesystem. Since the Flash server didn't have any reason to encrypt or compress these files, nor were the data structures used particularly complex, there weren't likely to be any crazy tricks involved. Combine that with some logical assumptions about how the data might be stored - probably a record count and delimiters for records, map entries, etc. - and it's mostly a job of persistence, not arcana.

The C API stuff I've been doing lately is another example in this vein. A lot of C-related code looks like madness (and still does to me), but the thing with C is that everything is kind of pointing to a location in memory, which is treated like a huge array. So if you take a look at the struct definition in the linked post, what it really amounts to is a map where, given a starting point in memory, you can expect to see certain numbers and sequences of bytes. Composite Data in particular has another, higher-level conceit: it is an array of records, each of which begins with indicators as to the type of the record and its length, providing the delightful property that you don't even have to know what a record is in order to safely shuttle it around.

Java Framework Skyscrapers

On the other end of the spectrum, you end up with huge, sprawling frameworks of code, something to which Java is particularly prone. Take a look at the XPages stack: it's hundreds upon hundreds of classes and interfaces, just stacked in a huge mountain. But, when it comes down to it, it's still just a web server serving HTTP requests. So the server sits there, waiting for a request to come in (whether it's an initial request, a POST back for a partial refresh, or whatever), and routing those requests to the right place. It does the job of looking at the URL and request data and determining that, for example, "GET /foo.nsf/bar.xsp?baz=true" should create an environment with an appropriate "param" object, pick the XPages app module for the "/foo.nsf" database (loading it from scratch if need be), find the class named "xsp.Bar", and tell it to render itself to HTML.

And the same goes for the page structure. An XPage is a tree, and all of the various component classes, state holders, adapters, factories, renderers, and utilities are in service of that. So when you click a button that causes a partial refresh, at one level it's doing a whole bunch of crap about POSTing data and decoding values and whatnot, but the conceptual conceit is that you're sending a "click" message to a "Button" class (not exactly, but close enough). When you step back and look at it like that, the other parts may still be complicated, but you can really see the role they all play.

Conclusion

So the point is that daunting programming tasks aren't usually that hard. There are plenty of things in this vein that I haven't really tried - say, OpenGL game programming - that seem like "geez, how can anyone even do all of that", but which I am confident would make sense given an appropriate introduction. It helps to keep this in mind when you're approaching a new task or are asked to do something very new to you: it was all made by just some human.

I Hadn't Thought I'd End The Year Getting Into WebSphere

Wed Dec 31 15:47:01 EST 2014

Tags: oda websphere huh

...yet here we are. It's been a very interesting week in ODA circles. It started with Daniele Vistalli being curious if it was possible to run the OpenNTF Domino API on WebSphere, specifically the surprisingly-friendly Liberty profile. And not just curious: it turned out he had the chops to make it happen. Before too long, with the help of the ODA team, he wrote an extension to the WebSphere server that spins up native (lotus.domino.local) Notes RPC connections using Notes.jar and thus provides a functioning entrypoint to using ODA.

The conversation and possibilities spun out from there. It's too early to talk about too many specifics, but it's very exciting. After the initial bulk of the work, I was able to pitch in a bit yesterday and help figure out how to make this happen:

CrossWorlds on OS X

That's a JSF app being built and deployed to a WebSphere Liberty server managed in Eclipse on the local machine, iterating over the contents of a DbDirectory pointed at a remote server. And it all happens to be running natively on the Mac - no VM involved.

So, like I said: it's been an interesting week. There are more possibilities than just this case, but even this alone holds tremendous promise: connecting to NSF data with the true, full API (brought into the modern day by ODA) but with the full array of J2EE support available.

Happy New Year!

Musing About Web App Structure

Tue Dec 16 18:25:35 EST 2014

Tags: angular rest

I've been giving a lot of thought lately to how I feel about different structures for web apps. Specifically, in this case, the "structure" I'm thinking about is the "client-server" balance in the app itself and the associated method of data access.

The impetus has been my minor fiddling with Angular over the past half a year or so. If you're not familiar with it, Angular is a client-side JavaScript framework focused on building full-fledged apps that run in the browser. The way it (and frameworks like it) distinguish themselves from more server-heavy methods of development is that the primary logic of running the UI happens in client-side code - resources and data are fetched from the server (including HTML snippets), but little or no presentation logic takes place there. You should go read Marky Roden's posts on the topic and then attend his session at ConnectED.

For a while, I thought of this as an interesting idea, but mostly just another method of writing apps, like PHP vs. Ruby vs. XPages. However, the idea kept churning away in the back of my brain, offering itself up as the right answer as I started writing the REST APIs in my Framework (I swear I'll get to that final post in the series eventually) and thought more about mobile development. The thing that's really latched onto my psyche isn't that Angular or JavaScript are particularly compelling, but the idea of writing a decoupled app is. Writing an app in Angular isn't really like writing an app in XPages or PHP - instead, it's (at a very high level) like writing a native mobile or desktop application. You split your app into two main tiers: the model/data/connection tier on the server and the app tier on the client.

That is something that is starting to seem really right to me. Even if it did nothing else, having an architecture like that forces you to think in terms of "services" instead of just data access - once you've written a model layer that provides REST services with access and validation rules enforced, then you have a single interface that can be used by browser-based Angular apps, mobile and desktop apps, and remote applications and services you didn't write. There's also no reason you couldn't write a server-side app that consumes those services, continuing to build XPages apps that don't use the normal data sources. This is sort of what my framework has evolved towards, just with a first-among-equals twist where the XPage app gets direct Java access to the same objects REST serves up.

Once you have this sort of setup, the answers to what your data-access and client UI frameworks should be both become "whatever". Want to write your REST API in Node and consume it in XPages? Sure, go ahead. Decide later that you'd rather have the data served up by Rails? Can do - the XPages side wouldn't need a change if the API is the same. Similarly, if you want to swap out XPages for Angular served up as static files or, god forbid, a PHP app, the way is smoothed.

Even though it starts as a small thing - switching from accessing data "directly" on the server to always thinking about that REST-API abstraction - this really seems compelling to me. Not so compelling that I've actually written any non-test apps this way yet, but I've opened the door for myself with my framework.

Figuring Out Maven: Group/Artifact Names and Repositories

Mon Dec 08 16:34:11 EST 2014

Tags: maven

As I fiddle with Maven, I figure it may be useful to share my growing understanding of it - or at least preliminary assumptions. Any of these posts should not be taken as a true guide to learning Maven, since I'm just muddling through myself, but I suspect that my path will be similar to a lot of other Domino developers'.

The first thing I feel I grokked about Maven is its concept of repositories, mostly because it's the easiest concept I've run across. Repositories in Maven seem to match up nicely to their analogues in other environments, such as Eclipse Update Sites or Debian/Ubuntu apt repositories. There's the default "Maven Central" repository, which is similar to the main apt repositories: it contains a very large collection of software projects, available by group+artifact name. This is what you see on the pages for popular software projects: they mention the group/artifact pair and that's enough to use it.

For projects that aren't in Central, it's similar to adding a repo to Debian or an Update Site to Eclipse. You add some repository information to your project or your user environment's settings.xml and then refer to the artifact similarly to how you would with Central ones; Hibernate OGM is one such example.

In addition to remote repositories, there is also your local repository, stored in ~/.m2/repository. This contains any Maven projects that you've built and installed locally, which are then available to other Maven projects. This is how I handled my dependencies on the ExtLib and ODA: I ran Maven installs for each to add them to my local repository.

You can also download and store repositories of pre-built plugins locally, and the IBM Domino Update Site for Build Management is an example of this. The way to use this is to extract the ZIP file and then point to the updateSite directory in the same way that you would a remote repository, albeit with a file:// URL (in this case, ideally stored in a Maven environment variable).
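
To be concrete, the repository entry in the consuming pom.xml ends up looking something like this when Tycho is in play (the ID is arbitrary, and "notes-platform" is just the property-name convention I use for the path to the extracted updateSite directory):

<repositories>
	<repository>
		<id>notes</id>
		<layout>p2</layout>
		<url>${notes-platform}</url>
	</repository>
</repositories>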

The final aspect of this is the way bits of software are designated within a repository: by "group ID" and "artifact ID". The group ID seems like it should be globally unique, and tends to follow the reverse-DNS convention of Java package names. So a group ID might be something like "com.google.guava" or "com.ibm.xsp.extlib". These don't have a specific analogue with OSGi development, but are effectively similar to the naming scheme for update site projects (even though Maven groups may contain OSGi update sites). Within a repository, individual projects, called "artifacts", are identified in a way that just needs to be unique in the repository, and it looks like conventions differ here. Sometimes, the artifacts have simple base names, like "guava" or "el", while other times they have OSGi-style full reverse-DNS names. I gather that the convention falls along OSGi lines: for generic projects, short names rule the day, while for OSGi-plugin projects, the name matches the plugin ID.
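
To make the coordinates concrete, declaring a dependency on one of those artifacts in a pom.xml is just the group/artifact pair plus a version - for example, Guava (the version number here is just for illustration):

<dependency>
	<groupId>com.google.guava</groupId>
	<artifactId>guava</artifactId>
	<version>18.0</version>
</dependency>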

So... that's the easiest part! I'm slowly getting more of a grasp of other aspects of Maven, but at least repositories seem to make sense so far.

How I Maven-ized My Framework

Mon Dec 08 10:31:14 EST 2014

Tags: maven miasma

This past weekend, I decided to take a crack at Maven-izing the frostillic.us Framework (I should really update the README on there). If you're not familiar with it, Maven is a build system for Java projects, basically an alternative to the standard Eclipse way of doing things that we've all gotten pretty used to. Though I'm not in a position to be a strong advocate of it, I know that it has advantages in dependency-resolution and popularity (most Java projects seem to include a "you can get this from Maven Central" bit in their installation instructions), helps with the sort of continuous-integration stuff I think we're looking to do at OpenNTF, and has something of a "wave of the future" vibe to it, at least for our community: IBM's open-source releases have all been Maven-ized.

A month or so ago, Nathan went through something of a trial by fire Maven-izing the OpenNTF Domino API (present in the dev branches). Converting an existing project can be something of a bear, scaling exponentially with the complexity of the original project. Fortunately, thanks to his (and others', like Roland's) work, the ODA is nicely converted and was useful as a template for me.

In my case, the Framework is a much-simpler project: a single plugin, a feature, and an update site. It was almost a textbook example of how to Maven-ize an OSGi plugin, except for three dependencies: on the ODA, on the Extension Library, and, as with both of those, on the underlying Domino/XPages plugins. Fortunately, my laziness on the matter paid off, since not only is the ODA Maven-ized, but IBM has put their Maven-ized ExtLib right on GitHub and, better still, released a packaged Maven repository of the required XSP framework components. So everything was in place to make my journey smooth. It was, however, not smooth, and I have a set of hastily-scrawled notes that I will translate into a recounting of the hurdles I faced.

Preparing for the Journey

First off, if you're going to Maven-ize a project, you'll need a few things. If it's an XPages project, you'll likely need the above-linked IBM Domino Update Site. This should go, basically, "somewhere on your drive". IBM seems to have adopted the convention internally of putting it in C:\updateSite. However, since I use a good computer, I have no C drive and this didn't apply to me - instead, I adopted a strategy seen in projects like this one, where the path is defined in a variable. This is a good introduction to a core concept with Maven: it's basically a parallel universe to Eclipse. This nature takes many forms, ranging from its off-and-on interaction with the workspace to its naming scheme for everything; Eclipse's built-in Maven tools are a particularly-thin wrapper around the command-line environment. But for now the thing to know is that this environment variable is not an Eclipse variable; it comes from Maven's settings.xml, which is housed at ~/.m2/settings.xml. It doesn't exist by default, so I made a new one:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                          http://maven.apache.org/xsd/settings-1.0.0.xsd">

    <profiles>
        <profile>
            <id>main</id>
            <properties>
                <notes-platform>file:///Users/jesse/Documents/Java/IBM/UpdateSite</notes-platform>
            </properties>
        </profile>
    </profiles>
    <activeProfiles>
        <activeProfile>main</activeProfile>
    </activeProfiles>
</settings>

I'm not sure that that's the best way to do it, but it works. The gist of it is that you can fill in the properties block with arbitrarily-named environment variables.

Secondly, you'll need a decent tutorial. I found this one and its followups to do well. Not everything fit (I didn't treat the update site the same way), but it was a good starting point. In particular, you'll need Tycho, which is explained there. Tycho is a plugin to Maven that gives it some knowledge of Eclipse/OSGi plugin development.

Third, you'll need some examples. Now that my Framework is converted, you can use that, and the projects linked above are even better (albeit more complex). There were plenty of times where my troubleshooting just involved looking at my stuff and figuring out where it was different from the others.

Finally, if your experience ends up anything like mine, you'll want something like this.

Prepping Dependencies

Since my project depended on the ExtLib and ODA, I had to get those in the local repository first. As I found, it's not enough to merely have the projects built in your workspace, as it is when doing non-Maven OSGi development - they have to be "installed" in your local repository (~/.m2/repository). Though the Extension Library is larger, it's slightly easier to do. I cloned the ExtLib repository (technically, I cloned my fork of it) and imported the projects into the Eclipse workspace using Import → Maven → Existing Maven Projects. By pointing that to the repository root, I got a nice Maven tree of the projects and imported them all into a new working set. Maven, like many things, likes to use a tree structure for its projects; this allows it to know about module dependencies and provides inheritance of configuration (there's a LOT of configuration, so this helps). Unfortunately, Eclipse doesn't represent this hierarchy in the Project Explorer; though you can see the other projects inside the container projects, they also appear on their own, so you get this weird sort of doubled-up effect and you just have to know what the top-level project you want is. In this case, it's named well: com.ibm.xsp.extlib.parent.

So once you've found that in the sea of other projects (incidentally, this is why I like to click on the little triangle on top of the Project Explorer view and set Top Level Elements to Working Sets), there's one change to make, unless you happened to put the Update Site from earlier at C:\updateSite. If you didn't, open up the pom.xml file (that's the main Maven config file for each project) and change the url on line 28 to <url>${notes-platform}</url>. After that, you can right-click the project and go to Run As → Maven Install. If it prompts you with some stuff, do what the tutorial above does ("install verify" or something). This is an aspect of the thin wrapper: though you're really building, the Maven tasks take the form of Run Configurations. You just have to get used to it.

Once you do that, maybe it'll work! You'll get a console window that pops up and runs through a slew of fetching and building tasks. If all goes well, there'll be a cheery "BUILD SUCCESS" near the bottom. If not, it'll be time for troubleshooting. The first step for any Maven troubleshooting is to right-click the project and go to Maven → Update Project, check all applicable projects, and let it do its thing. You'll be doing that a lot - it's your main go-to "this is weird" troubleshooting step, like Project → Clean for a misbehaving XPage app. If the build still fails, it's likely a problem with your Update Site location. Um, good luck.

Next up comes the ODA, if you're using that. As before, it's best to clone the repository from GitHub (using one of the dev branches, like Nathan's or mine) and import the Maven projects. There's good news and bad news compared to the ExtLib. The good news is that it already uses ${notes-platform} for the repository location, so you're set there. The bad news is that trying to install from the main domino parent project doesn't work - it fails on the update site for some reason. So instead, I had to install each part in turn. In particular, you'll need "externals" (covers a lot of dependencies), "org.openntf.junit4xpages", "org.openntf.formula", and "org.openntf.domino".

Converting the Projects

Okay! So, now we can actually start! For the plugin project, the first page of the tutorial works word-for-word. One thing to note is that the "eclipse-plugin" option isn't actually in the Packaging drop-down; you just have to type it in. Again: thin wrapper. It may not work immediately after following the directions, but the divergences are generally due to the non-standard Domino-related dependencies. In particular, I ran into trouble with forbidden-access rules in Notes.jar - Maven, being a separate world, ignores your Eclipse preferences on the matter. To get around that, I added the parts in the plugin block of this pom.xml - among other things, they tell the compiler to ignore such problems. I still ran into trouble with lotus.domino.local.NotesBase specifically after the other classes started working, and I "solved" that by deleting the code (it was related to recycle checking, which I no longer need).
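
For reference, the compiler-leniency bits boil down to a tycho-compiler-plugin configuration along these lines - this is a sketch rather than a copy of my actual pom, so the exact arguments may differ, and ${tycho-version} assumes that property is defined in the parent:

<plugin>
	<groupId>org.eclipse.tycho</groupId>
	<artifactId>tycho-compiler-plugin</artifactId>
	<version>${tycho-version}</version>
	<configuration>
		<compilerArgument>-err:-forbidden,discouraged,deprecation</compilerArgument>
	</configuration>
</plugin>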

It may also be useful to change build.properties so that the output.. = bin/ line reads output.. = target/classes. I don't know if this is actually used, but it was a troubleshooting step I took elsewhere and it makes conceptual sense: Maven puts its output classes in target/classes, not bin.

During this process, I quickly realized the value of having a parent project. I had a hitch in mine in that I wanted to call the parent frostillicus.framework, which meant renaming the plugin to frostillicus.framework.plugin and dealing with the associated updating of Eclipse and git, but that was an unforced error. The normal layout of parent projects seems to be that they're parents both conceptually by pom.xml and also physically by folder structure. I haven't done the latter yet, and the process works just as well if you don't. Still, I should move it eventually. So, following the third part of the tutorial, I created a near-empty project (no Java code) just to house the pom.xml with common settings and told it to adopt the plugin as a child. Converting the feature project was the easiest step and went exactly as described in the tutorial.

Where I diverged from both the tutorial and ODA is in the Update Site. The tutorial suggests renaming site.xml to category.xml and using the Maven type eclipse-repository, but none of the examples I used did that. Instead, I followed those projects and left site.xml as-is (other than making sure that the versions in the XML source use ".qualifier" instead of any timestamp from building) and used the Maven type eclipse-update-site in the pom.xml.
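
In pom terms, that just means the update site project declares that packaging type - the whole pom ends up being little more than something like this (the IDs and version here are illustrative placeholders, not necessarily what's in my actual project):

<project xmlns="http://maven.apache.org/POM/4.0.0"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>frostillicus</groupId>
		<artifactId>frostillicus.framework</artifactId>
		<version>1.0.0-SNAPSHOT</version>
	</parent>
	<artifactId>frostillicus.framework.update</artifactId>
	<packaging>eclipse-update-site</packaging>
</project>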

I then spent about two hours pulling my hair out over bizarre problems I had wherein the update site would build but not actually include the compiled classes in the plugin jar if I clicked on "Build" in the site.xml editor and would fail with bizarre error messages if I did Run As → Maven Install. I'll spare you the tribulations and cut to the chase: my problem was that I had the modules in the parent project's pom.xml out of order, with the update site coming before the feature project. When I fixed that, I was able to start building the site the "Maven way". Which is to say: not using the site.xml's Build button (which still had the same problem for me), but using Run As → Maven Install. This ends up putting the built update site inside the target/site directory rather than directly in plugins and features folders. This is a case of "okay, sure" again.

Conclusion

So, after a tremendous amount of suffering and bafflement, I have a converted project! So what does it buy me? Not much, currently, but it feels good, and I had to learn this stuff eventually one way or another. Over the course of the process, some aspects of Maven started to crystallize in my mind - the repositories, the dependencies, the module trees - and that helps me understand why other Maven-ized projects look the way they do. Other aspects are still beyond my ken (like most of the terminology), but it's a step in the process. This should also mean I'm closer to ready for future build processes and am more in line with the larger Java world.

If you have a similar project, I'd say it's not required that you make the switch, but if you're planning on working on larger projects that use Maven, it'd be a good idea. Maven takes a lot of getting used to, since everything feels like it's a from-scratch rethinking of the way to structure Java projects with no regard to the structure or terminology of "normal" Eclipse/OSGi development, and something like this conversion is as good a start down the path as any.