My MWLUG 2015 Presentation, "Maven: An Exhortation and Apology"

Sun Aug 30 19:07:11 EDT 2015

Tags: mwlug

As prophesied, I gave a presentation at MWLUG last week. In keeping with my tradition, the slides from the deck are not particularly useful on their own, so I'm not going to post them as such. However, Dave Navarre once again did the yeoman's work of recording my session, so it and a number of other sessions from the conference are available on YouTube.

In addition, my plan is to expand on the core components of my session in blog form, as I did earlier today, covering them in a way that wouldn't have made much sense in a conference session anyway. And, if I'm as good as my word, I'll make a NotesIn9 episode or two on the subject.

Wrangling Tycho and Target Platforms

Sun Aug 30 17:16:13 EDT 2015

Tags: maven tycho

One of the persistent problems when dealing with OSGi projects with Maven is the interaction between Maven, Tycho, and Eclipse. The core trouble comes in with the differing ways that Maven and OSGi handle dependencies.

Dependency Mechanisms

The Maven way of establishing dependencies is to list them in your Maven project's POM file. A standard one will look something like this:

<dependencies>
	<dependency>
		<groupId>com.google.guava</groupId>
		<artifactId>guava</artifactId>
		<version>18.0</version>
	</dependency>
</dependencies>

This tells Maven that your project depends on Guava version 18.0. The "groupId" and "artifactId" bits are essentially arbitrary strings that identify the piece of code; by convention, the groupId is generally reverse-DNS-style, following Java package naming. There are variations on this setup, such as specifying version ranges or sub-artifacts, but that's what you'll usually see. The term "artifact" is a Maven-ism referring to a specific entity, usually a single Jar file, and I've taken to using it casually.
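
For example, a version range can go in the same block; this is a minimal sketch of that variation, keeping the Guava dependency from above:

<dependencies>
	<dependency>
		<groupId>com.google.guava</groupId>
		<artifactId>guava</artifactId>
		<!-- Any 18.x release: 18.0 inclusive up to, but not including, 19.0 -->
		<version>[18.0,19.0)</version>
	</dependency>
</dependencies>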

One of the key things Maven brings to the table here is Maven Central: a warehouse of common Maven-ized projects. Without specifying any additional configuration, the dependency declaration above will cause Maven to check with Maven Central to find the Jar, download it, and store it in your local repository (usually ~/.m2/repository). Then, during the build process, Java can reference the local copy of the Jar in the consistently-organized local folder structure. It will also, if needed, download "transitive" dependencies: the dependencies listed by the project you're depending on.
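
For reference, that Guava dependency lands at a predictable path in the local repository, built from the groupId (split on dots), the artifactId, and the version:

~/.m2/repository/com/google/guava/guava/18.0/guava-18.0.jar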

OSGi's dependency system is conceptually similar. Instead of the POM file, it piggybacks on the Jar's MANIFEST.MF file with something like this:

Require-Bundle: com.google.guava;bundle-version="18.0"

This is essentially the same idea as the Maven dependency: you reference an OSGi-enabled Jar (called a "Bundle" in OSGi parlance... which can also be a "Plug-in") by its usually-reverse-DNS name and provide restrictions on versions, plus other potential options.

There is no equivalent here of Maven Central: OSGi artifacts are found in Update Sites for each project and are added to the OSGi environment. When you install a plug-in in Eclipse/Designer or Domino, you are contributing to your installation's pool of OSGi artifacts. There are some conveniences to make this experience easier in some cases, such as the Eclipse Marketplace and the primary Eclipse Update Site, but it's not as coordinated as Maven.

The Overlap

Though often redundant, these two dependency mechanisms are not inherently incompatible. A given Jar file can be represented as both a Maven artifact and an OSGi bundle - and, indeed, a great many of the artifacts in Maven Central come pre-packaged with OSGi metadata, and there are Maven plugins that make generating this metadata invisible to the developer.
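
The best-known of those is probably Apache Felix's maven-bundle-plugin: set the project's packaging to "bundle" and it will emit the OSGi headers into MANIFEST.MF during the normal build. As a rough sketch - the symbolic name, package, and version here are illustrative, not from a real project:

<project>
	...
	<packaging>bundle</packaging>
	
	<build>
		<plugins>
			<plugin>
				<groupId>org.apache.felix</groupId>
				<artifactId>maven-bundle-plugin</artifactId>
				<version>2.5.4</version>
				<extensions>true</extensions>
				<configuration>
					<instructions>
						<!-- Placeholder values -->
						<Bundle-SymbolicName>com.example.some.bundle</Bundle-SymbolicName>
						<Export-Package>com.example.some.api</Export-Package>
					</instructions>
				</configuration>
			</plugin>
		</plugins>
	</build>
</project>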

Tycho - the Maven plugin that creates an OSGi environment for your Maven development - has the capability to more-or-less bridge this gap. By adding the Tycho plugins to your Maven build, you can point Maven at OSGi Update Sites (called "p2" sites) and Tycho will be able to find the artifacts referenced by your project's MANIFEST.MF Require-Bundle line. Even better, by using <pomDependencies>consider</pomDependencies> in your Tycho config, it will be able to look at the Maven dependencies of your project, check them for OSGi metadata, and then use that to satisfy the MANIFEST.MF requirements.
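
In practice, that configuration looks roughly like this in the parent POM, assuming the usual tycho-maven-plugin setup is already in place - the repository URL and Tycho version here are placeholders:

<repositories>
	<!-- A p2 update site to draw OSGi bundles from; the URL is a placeholder -->
	<repository>
		<id>some-p2-site</id>
		<layout>p2</layout>
		<url>http://example.com/updatesite/</url>
	</repository>
</repositories>

<build>
	<plugins>
		<plugin>
			<groupId>org.eclipse.tycho</groupId>
			<artifactId>target-platform-configuration</artifactId>
			<version>0.23.1</version>
			<configuration>
				<!-- Let OSGi-enabled Maven dependencies satisfy MANIFEST.MF requirements -->
				<pomDependencies>consider</pomDependencies>
			</configuration>
		</plugin>
	</plugins>
</build>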

Though convoluted to say, the upshot is that, when you have that pomDependencies option, things work out pretty well... from the command line. The trouble comes in when you want to develop these projects in Eclipse.

Target Platforms

The aggregate set of OSGi bundles known by your OSGi environment (either Tycho or Eclipse in this case) and used for compilation is the "Target Platform". If you've used the XPages SDK or otherwise set up a non-Designer Eclipse installation for XPages plug-in development, you've seen Target Platforms in action: the installation process locates your Notes and Domino installations and adds their OSGi bundles to Eclipse's Target Platform, allowing them to be referenced by your own OSGi projects.

The trouble is that Eclipse is a bit... inflexible when it comes to specifying a project's Target Platform. Though Eclipse has the capacity to have many Target Platform definitions, only one is active at a time for your entire workspace. Moreover, this Target Platform (plus any projects in your workspace) makes up the entirety of what Eclipse is willing to acknowledge for OSGi development.

This causes serious trouble for Maven dependencies.

If you have a Tycho-enabled project, Eclipse's adapter will not use its Maven dependencies for OSGi requirement resolution. So if your project lists Guava in both OSGi and Maven, even though Maven can see it, and Tycho can see it, and the Guava Jar sitting in your local Maven repository is brimming with OSGi metadata, Eclipse will not acknowledge it and you will have an error that com.google.guava can't be found.

Workarounds

There are a couple of potential workarounds for this, none of which are particularly great.

Just Do It Manually

One option is to just have any developers working on the project also track down and manually add all applicable OSGi bundles to their Eclipse installation. It's not ideal, but it could work in a pinch, especially if you only have a single dependency or two.

Include the Project Wholesale

This is the approach the OpenNTF Domino API has taken to date: several of its external dependencies are included wholesale in source form in the project tree. This accomplishes the goal because, with the projects in your workspace, Eclipse will happily acknowledge them as part of the Target Platform, while Tycho will also be able to recognize them. However, it carries with it the significant downside of importing a whole heap of foreign code into your project and then having to ensure that it builds in your environment.

Maven-Generated Target Platform

Another option is to have Maven create a Target Platform file (*.target) dynamically, and then have Eclipse use that as its Target Platform definition. You can do that by including a Maven project like this in your tree:

<?xml version="1.0"?>
<project
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"
	xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>com.example</groupId>
		<artifactId>project-parent</artifactId>
		<version>1.0.0-SNAPSHOT</version>
	</parent>
	<artifactId>example-osgi-target</artifactId>
	
	<packaging>eclipse-target-definition</packaging>
	
	<build>
		<plugins>
			<plugin>
				<groupId>lt.velykis.maven</groupId>
				<artifactId>pde-target-maven-plugin</artifactId>
				<version>1.0.0</version>
				<executions>
					<execution>
						<id>pde-target</id>
						<goals>
							<goal>add-pom-dependencies</goal>
						</goals>
						<configuration>
							<baseDefinition>${project.basedir}/osgi-base.target</baseDefinition>
							<outputFile>${project.basedir}/${project.artifactId}.target</outputFile>
						</configuration>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>
</project>

Given a shell Target file in Eclipse named osgi-base.target, this project will locate its known dependencies (namely, any dependencies listed in it or in parent projects) and glom the paths of any of those OSGi plugins found in your local Maven repository onto that base definition. In Eclipse, you can then open the generated Target file and set it as your active Target Platform.
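
For reference, the shell osgi-base.target can start out nearly empty; this is an approximate sketch of the PDE target format, with a purely illustrative update site included:

<?xml version="1.0" encoding="UTF-8"?>
<?pde version="3.8"?>
<target name="osgi-base">
	<locations>
		<!-- Optional: any existing update sites to include; this URL and unit are illustrative -->
		<location includeAllPlatforms="false" includeMode="planner" type="InstallableUnit">
			<repository location="http://example.com/updatesite/"/>
			<unit id="com.example.some.feature.feature.group" version="0.0.0"/>
		</location>
	</locations>
</target>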

This... basically works, but it's ugly. Moreover, it limits your Target Platform customization options. If you want to include other Update Sites in your platform (say, the XPages targets generated by the SDK), you would have to modify the base Target file manually, making it fragile for multi-developer use.

Maven-Generated p2 Site

This is the option I'm tinkering with now, and it's similar to the Target-file approach. However, instead of creating an exclusive Target Platform, you can have Maven create a p2 Update Site and then add that directory to your Target Platform manually. That manual step is still unfortunate, but it's not too bad, and it should adapt automatically as more dependencies are added. A Maven plugin named p2-maven-plugin can do a tremendous amount of heavy lifting here: it can track down Maven dependencies, add OSGi metadata if they don't have it already, do the same for their dependencies, and then put them all into a nicely-organized Update Site:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>com.example</groupId>
	<artifactId>example-osgi-site</artifactId>
	<version>1.0.0-SNAPSHOT</version>
	<packaging>pom</packaging>

	<pluginRepositories>
		<pluginRepository>
			<id>reficio</id>
			<url>http://repo.reficio.org/maven/</url>
		</pluginRepository>
	</pluginRepositories>

	<build>
		<plugins>
			<plugin>
				<groupId>org.reficio</groupId>
				<artifactId>p2-maven-plugin</artifactId>
				<version>1.2.0-SNAPSHOT</version>
				<executions>
					<execution>
						<id>default-cli</id>
						<phase>validate</phase>
						<goals>
							<goal>site</goal>
						</goals>
						<configuration>
							<artifacts>
								<artifact><id>com.google.guava:guava:18.0</id></artifact>
							</artifacts>
						</configuration>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>
</project>

Once this project is executed, you can then add the generated folder to Eclipse's active Target Platform and be set. Though I haven't put this into practice yet, it may be the best out of a bad bunch of options.
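
As an alternative to clicking through the Target Platform preferences, the generated site could also be referenced from a Target file as a directory location. This is a rough sketch, assuming p2-maven-plugin's default output location of target/repository inside the site project; the path is a placeholder:

<!-- Inside a .target definition's <locations> element -->
<location path="/path/to/workspace/example-osgi-site/target/repository" type="Directory"/>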

Don't Use Eclipse

Well, I guess this final option may be the best if you're not an Eclipse fan - other IDEs may handle this whole thing much more smoothly. So, if you use IntelliJ and it doesn't have this problem, that's good.


These problems cause a lot more heartburn than you'd think they should, considering that this is basic project setup and not even part of the task of actually developing your project, but such is life. As long as you have a dependency on non-Mavenized OSGi artifacts (such as the XPages runtime) or want to use Tycho's full abilities (such as OSGi-environment unit tests or building full Eclipse-based applications) while also developing in Eclipse, you're stuck with this sort of workaround.

MWLUG 2015 - Maven: An Exhortation and Apology

Sun Aug 16 11:55:17 EDT 2015

Tags: mwlug maven

At MWLUG this coming week, I'll be giving a presentation on Maven. Specifically, I plan to cover:

  • What Maven is
  • Why Domino developers should know about it
  • Why it's so painful and awkward for Domino developers
  • Why it's still worth using in spite of all the suffering
  • How this will help when working on projects outside of traditional Domino

The session is slated for 3:30 PM on Thursday. I expect it to be cathartic for me and useful for the attendees, so I hope you can make it.

Maven Native Chronicles, Part 3: Improving Native Artifact Handling

Sun Jul 26 21:38:37 EDT 2015

Tags: maven
  1. Jul 24 2015 - Maven Native Chronicles, Part 1: Figuring Out nar-maven-plugin
  2. Jul 26 2015 - Maven Native Chronicles, Part 2: Setting Up a Windows Jenkins Node
  3. Jul 26 2015 - Maven Native Chronicles, Part 3: Improving Native Artifact Handling
  4. Feb 27 2016 - Maven Native Chronicles: Running Automated Notes-based Tests

This post isn't so much a part of the current series as it is a followup to a post from the other week, but I can conceptually retcon that one in as a prologue. This will also be a good quick tip for dealing with Maven projects.

In my previous post, I described how I copied the built native shared library from the C++ project into the OSGi fragments for distribution, and I left it with the really hacky approach of copying the file using a project-relative path that reached up into the other project. It technically functioned, but it relied on the specific project structure, which wouldn't survive any reorganization or breaking up of the module tree.

To improve it, I reworked it to be a bit more Maven-y, which involves two steps: attaching the built artifacts to the output of the native project and then using the dependency plugin to copy the native artifacts in as needed. For the first step, I used the build-helper-maven-plugin, though there may be other ways to do it. This is relatively straightforward, though:

<plugin>
	<groupId>org.codehaus.mojo</groupId>
	<artifactId>build-helper-maven-plugin</artifactId>
	<version>1.3</version>
	<executions>
		<execution>
			<id>attach-artifacts</id>
			<phase>package</phase>
			<goals>
				<goal>attach-artifact</goal>
			</goals>
			<configuration>
				<artifacts>
					<artifact>
						<file>${project.basedir}/x64/Debug/nativelib-win32-x64.dll</file>
						<type>dll</type>
						<classifier>win32-x64</classifier>
					</artifact>
					<artifact>
						<file>${project.basedir}/Win32/Debug/nativelib-win32-x86.dll</file>
						<type>dll</type>
						<classifier>win32-x86</classifier>
					</artifact>
				</artifacts>
			</configuration>
		</execution>
	</executions>
</plugin>

This causes the native libraries - so far, the two Windows ones - to be included in the Maven repository during installation, and to then be accessible from other projects. The files are named using the module base name plus the classifier appended and the type as the file extension, like native-project-name-win32-x64.dll.

To copy that artifact into the OSGi bundle project, I then use maven-dependency-plugin to copy it in. Here I reference it via the module name and the classifier/type pair used above (with some shorthands because they're in the same multi-module project):

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-dependency-plugin</artifactId>
	<version>2.10</version>
	
	<executions>
		<execution>
			<id>copy-native-lib</id>
			<phase>prepare-package</phase>
			<goals>
				<goal>copy</goal>
			</goals>
			<configuration>
				<artifactItems>
					<artifactItem>
						<groupId>${project.groupId}</groupId>
						<artifactId>native-project-name</artifactId>
						<version>${project.version}</version>
						<type>dll</type>
						<classifier>win32-x64</classifier>
					</artifactItem>
				</artifactItems>
				<outputDirectory>lib</outputDirectory>
				<stripVersion>true</stripVersion>
			</configuration>
		</execution>
	</executions>
</plugin>

The net result here is the same as previously, but should be more maintainable.

Maven Native Chronicles, Part 2: Setting Up a Windows Jenkins Node

Sun Jul 26 11:16:50 EDT 2015

Tags: maven
  1. Jul 24 2015 - Maven Native Chronicles, Part 1: Figuring Out nar-maven-plugin
  2. Jul 26 2015 - Maven Native Chronicles, Part 2: Setting Up a Windows Jenkins Node
  3. Jul 26 2015 - Maven Native Chronicles, Part 3: Improving Native Artifact Handling
  4. Feb 27 2016 - Maven Native Chronicles: Running Automated Notes-based Tests

Before I get to the meat of this post, I want to point out that Ulrich Krause wrote a post on a similar topic today and you should read it.

The build process I've been working with involves a Jenkins server running on OS X (in order to build iOS binaries), and so it will be useful to have a Windows instance set up as well to run native builds and, importantly, tests. Jenkins comes with support for distributed builds and makes it relatively straightforward.

To start with, I installed VirtualBox and went through the usual Windows setup process - it shouldn't matter too much which major version of Windows you use, as long as it's 64-bit, so that it can generate and test both types of binaries. Once that was running, I installed the latest 64-bit JDK followed by Visual Studio Community, which is a pretty smooth process (for all their faults, Microsoft knows how to treat developers). To provide access to the VM from the Mac host, I added a second network adapter to the VM and set it to host-only networking.

During this process, I found Jump Desktop to be a very useful tool. Since the Mac host runs SSH, I was able to set up an RDP connection to the Windows VM using an SSH tunnel, which Jump does transparently for you. This made for a much better experience than VNCing into the Mac and controlling Windows in the VirtualBox window there.

Next, I decided that the route I wanted to take to control the Windows slave was SSH, since SSH is the bee's knees. I installed Cygwin, which creates a fairly Unix-like environment on top of Windows, and included OpenSSH in the process. After going through the afore-linked setup process, I had SSH access to the Windows machine (including, thanks to SSH proxying, remote access via the primary build server). On the Jenkins side on the Mac, I installed the "Cygpath plugin" (which is in the built-in plugin manager) to avoid any of the issues mentioned on the wiki page. The configuration in Jenkins is relatively straightforward (I will probably end up changing the base directory to be a clean Jenkins home, since I hadn't initially been sure if I needed Jenkins installed on the slave).

With that, I was able to set the build to run on servers with the "windows" label, kick it off, and start going through its complaints until I had it working.

First off, I had some more Java setup to do, specifically creating a system environment variable named JAVA_HOME and setting it to the root of the JDK ("C:\Program Files\Java\jdk1.8.0_51" in this case). Then, I set up Maven, which is something of an awkward process on Windows, but not TOO bad. I downloaded the latest binaries, unzipped them to "C:\Program Files\maven", and added an M2_HOME environment variable pointing to that folder.

I also added %M2_HOME%\bin;C:\Program Files (x86)\MSBuild\12.0\Bin to the end of the PATH variable, to cover both the Maven tools and the msbuild executable for later.

I ran into a bit of weirdness when it came to setting up configuration for SSH and Maven, specifically because it seems that Cygwin has two home folders for the logged-in user: the Unix-style /home/jesse and the normal Windows C:\Users\jesse (which is available in Cygwin as /cygdrive/c/Users/jesse). Since this Jenkins build checks out the code from GitHub via SSH, I needed to copy over the id_rsa file for the Jenkins user: this went into /home/jesse/.ssh/id_rsa. In order to configure Maven, though, the settings file went to C:\Users\jesse\.m2\settings.xml.
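
For completeness, the settings.xml itself doesn't need to contain much; a minimal sketch along these lines would cover repository credentials and a build property - the server ID, property name, and paths here are all illustrative for this kind of setup, not taken from the real configuration:

<?xml version="1.0"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
	<!-- Illustrative only: the server ID and property name depend on the particular build -->
	<servers>
		<server>
			<id>artifactory</id>
			<username>jenkins</username>
			<password>secret-password-here</password>
		</server>
	</servers>
	<profiles>
		<profile>
			<id>notes</id>
			<properties>
				<notes-program>C:\Program Files (x86)\IBM\Notes</notes-program>
			</properties>
		</profile>
	</profiles>
	<activeProfiles>
		<activeProfile>notes</activeProfile>
	</activeProfiles>
</settings>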

Eventually, it slogged its way through the build to completion, including a successful run of the integration tests. I still need to figure out the best way to get the resultant artifacts back out (or maybe it will be best to just deploy from both to the same Artifactory server), but this seems to do the main task for me.

Maven Native Chronicles, Part 1: Figuring Out nar-maven-plugin

Fri Jul 24 15:48:59 EDT 2015

  1. Jul 24 2015 - Maven Native Chronicles, Part 1: Figuring Out nar-maven-plugin
  2. Jul 26 2015 - Maven Native Chronicles, Part 2: Setting Up a Windows Jenkins Node
  3. Jul 26 2015 - Maven Native Chronicles, Part 3: Improving Native Artifact Handling
  4. Feb 27 2016 - Maven Native Chronicles: Running Automated Notes-based Tests

As I mentioned the other day, my work lately involves a native shared library that is then included in an OSGi plugin. To get it working during a Maven compile, I just farmed out the actual build process to Visual Studio's command-line project builder. That works as far as it goes, but it's not particularly Maven-y and, more importantly, it's Windows-only.

In looking around, it seems like the most popular method of doing native compilation in Maven, especially with JNI components, is maven-nar-plugin - nar means "Native ARchive", and it's meant to be a consistent way to package native artifacts (executables and libraries) across platforms. It does an admirable job wrangling the normally-loose nature of a C/C++ program to work with Maven-ish standards and attempts to paper over the differences between platforms and toolchains. I'm not entirely convinced that this will be the way I go long-term (in particular, its attitude towards multi-platform/arch builds seems to be "eh, sort of?"), but it's a good place to get started with non-Windows compilation.

The first step was to move the files around to mostly match a Maven-style layout. Starting out, the .cpp and .h files were in the src folder directly, while dependency headers were in a dependencies folder next to it. I left the Notes includes in there for now, but it seems that nar-maven-plugin will cover the JNI stuff for me, so I could simplify that somewhat. The new project structure looks like:

  • (project root)
    • src
      • main
        • c++
        • include
    • dependencies
      • inc
        • notes

Next was to set up the project configuration. For now, I want to still use Visual Studio's CLI app to build the Windows version, and I'm going to have to specifically define supported platforms, so I define the project as a nar, but then disable actual execution of the plugin by default:

<project>
	...
	<packaging>nar</packaging>
	
	<build>
		<plugins>
			<plugin>
				<groupId>com.github.maven-nar</groupId>
				<artifactId>nar-maven-plugin</artifactId>
				<version>3.2.3</version>
				<extensions>true</extensions>
				
				<configuration>
					<skip>true</skip>
				</configuration>
			</plugin>
		</plugins>
	</build>
</project>

Then, much as I did for the Windows-specific builds, I added a profile to try to build on my Mac. Note that these build settings produce a library that fails all unit tests, so they're surely not correct, but hey, it compiles and links, so that's a start. To ensure that it only builds when it has an appropriate context, it is triggered by a combination of OS family and the presence of the notes-program Maven property, which should point to the Notes executable directory.

<project>
	...
    
	<profiles>
		...
		<profile>
			<id>mac</id>
		
			<activation>
				<os>
					<family>mac</family>
				</os>
				<property>
					<name>notes-program</name>
				</property>
			</activation>
	
			<build>
				<plugins>
					<plugin>
						<groupId>com.github.maven-nar</groupId>
						<artifactId>nar-maven-plugin</artifactId>
						<extensions>true</extensions>
			
						<configuration>
							<skip>false</skip>
				
							<cpp>
								<debug>true</debug>
								<includePaths>
									<includePath>${project.basedir}/src/main/include</includePath>
									<includePath>${project.basedir}/dependencies/inc/notes</includePath>
								</includePaths>
					
								<options>
									<option>-DMAC -DMAC_OSX -DMAC_CARBON -D__CF_USE_FRAMEWORK_INCLUDES__ -DLARGE64_FILES -DHANDLE_IS_32BITS -DTARGET_API_MAC_CARBON -DTARGET_API_MAC_OS8=0 -DPRODUCTION_VERSION -DOVERRIDEDEBUG</option>
								</options>
							</cpp>
							<linker>
								<options>
									<option>-L${notes-program}</option>
								</options>
								<libSet>notes</libSet>
							</linker>
				
							<libraries>
								<library>
									<type>shared</type>
								</library>
							</libraries>
						</configuration>
					</plugin>
				</plugins>
			</build>
		</profile>
	</profiles>
</project>

Unstable though the result may be, the nar plugin does its job: it produces an archive containing the dylib, suitable for distribution as a Maven artifact and extraction into the downstream project, which I'll go into later.

So this is a good step towards my final goal. As I mentioned, I may end up getting rid of nar-maven-plugin specifically, but this is a good way to shape the code into something more portable (I also got rid of a few Windows-isms in the C++ while I was at it). My ultimate goal is to get a single build run that produces artifacts for all of the important platforms (Windows 32/64 and Linux 32/64 for production, Mac 32/64(?) for JUnit tests during development). I may be able to accomplish that using the nar plugin with a distributed Jenkins build, or I may be able to do it with Makefiles and GCC cross-compilers on an OS X build host. If that works, it's the sort of thing that makes all this Maven stuff worthwhile.

Adding Components to an XPage Programmatically

Sun Jul 19 09:16:35 EDT 2015

Tags: xpages java

One of my favorite aspects of working with apps using my framework is the component binding capability. This lets me just write the main structure of the page and let the controller do the grunt work of creating fields with validators and converters. There's a lot of magic behind the scenes to make it happen, but the core concept of dynamic component creation is relatively straightforward.

An XPage is a tree of components, and those components are all Java objects on the back end, which can be manipulated and added or removed programmatically. To demonstrate, I'll start with this basic XPage:

<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core" beforePageLoad="#{controller.beforePageLoad}" afterPageLoad="#{controller.afterPageLoad}">
	<xp:div id="container">
	</xp:div>
</xp:view>

Now, I'll add a basic form table using the afterPageLoad method in the controller class:

package controller;

import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;

import com.ibm.xsp.component.UIViewRootEx2;
import com.ibm.xsp.component.xp.XspInputText;
import com.ibm.xsp.extlib.component.data.UIFormLayoutRow;
import com.ibm.xsp.extlib.component.data.UIFormTable;
import com.ibm.xsp.extlib.util.ExtLibUtil;
import com.ibm.xsp.util.FacesUtil;

import frostillicus.xsp.controller.BasicXPageController;

public class home extends BasicXPageController {
	private static final long serialVersionUID = 1L;

	@SuppressWarnings("unchecked")
	@Override
	public void afterPageLoad() throws Exception {
		super.afterPageLoad();

		UIViewRootEx2 view = (UIViewRootEx2)ExtLibUtil.resolveVariable(FacesContext.getCurrentInstance(), "view");
		UIComponent container = FacesUtil.findChildComponent(view, "container");

		UIFormTable formTable = new UIFormTable();
		formTable.setFormTitle("Some Form");
		formTable.setStyle("margin: 2em; width: 20em");
		container.getChildren().add(formTable);
		formTable.setParent(container);

		UIFormLayoutRow formRow = new UIFormLayoutRow();
		formRow.setLabel("Name");
		formTable.getChildren().add(formRow);
		formRow.setParent(formTable);

		XspInputText inputText = new XspInputText();
		formRow.getChildren().add(inputText);
		inputText.setParent(formRow);
	}
}

There are a few concepts to get a handle on here, but fortunately they're not as esoteric as other aspects of back-end XPages development.

To start out with, there's the question of how you're supposed to know what classes the components are. The best way to find this out is to create a basic XPage containing the control with an ID and then go to the Package Explorer view, then the "Local" source folder, and find the class file for your page in the "xsp" package. In there, you can see the code that actually generates the XPage (which is doing basically the same thing as we're doing here). Look for the ID you gave the control on the page and you can find the class behind it.

Next is the job of finding the parent control you're going to attach the new components to. In this case, I used the FacesUtil class to search for the component by its ID, because I know it's the only one on the page with that base ID. This tack will usually do the trick for you, but there are other ways to find it, such as binding or XspQuery.

Finally, there's the need to both add the new child to the parent's children list and to set the parent in the child itself. This is a bit of housekeeping boilerplate, but it has to be done.

Once you have this code running, you get a basic form titled "Some Form" (using the Bootstrap 3.2.0 theme in this case).


This can get much more in-depth: anything you can do declaratively in the XPage XML, you can do programmatically in Java. You can also manipulate existing components: change their properties, rearrange them, or remove them from the tree entirely. I've found knowledge of this to be both useful as a part of the toolbox as well as a way to really clear up the mental model of what's going on on the page.

Quick-and-Dirty Inclusion of a Visual C++ Project in a Maven Build

Sat Jul 11 19:26:34 EDT 2015

Tags: maven jni

One of my projects lately makes use of a JNI library distributed via an OSGi plugin. The OSGi side of the project uses the typical Maven+Tycho combination for its building, but the native library was developed using Visual C++. This is workable enough, but ideally I'd like to have the whole thing part of one smooth build: compile the native library, then subsequently copy its resultant shared 32- and 64-bit libraries into the OSGi plugins.

From what I've gathered, the "proper" way to do this sort of setup is to use the nar-maven-plugin, which is intended to wrap around the normal compilers for each platform and handle packaging and access to the libraries and related components. I tinkered with this a bit but ran into a lot of trouble trying to get it to work properly, no doubt due to my extremely-limited knowledge of C++ toolchains combined with the natural weirdness of Windows's development environment.

For now, I decided to do it the "ugly" way that nonetheless gets the job done: just run the Visual C++ toolchain from Maven. Fortunately, Microsoft includes a tool called msbuild for this purpose: if you run it in the directory of a Visual C++ project, it will act like the full IDE. I added its executables to my PATH (C:\Program Files (x86)\MSBuild\12.0\bin) and then used a Maven plugin called exec-maven-plugin to launch it (the Ant plugin would also work, but this is more explicit). Since this will only run on Windows, I wrapped it in a triggered profile and added two executions to cover both 32-bit and 64-bit versions:

<project>
	...
	<packaging>pom</packaging>
	...
	
	<profiles>
		<profile>
			<id>windows-x64</id>
		
			<activation>
				<os>
					<family>windows</family>
					<arch>amd64</arch>
				</os>
			</activation>
			
			<build>
				<plugins>
					<plugin>
						<groupId>org.codehaus.mojo</groupId>
						<artifactId>exec-maven-plugin</artifactId>
						<version>1.4.0</version>
						<executions>
							<execution>
								<id>build-x86</id>
								<phase>generate-sources</phase>
								<goals>
									<goal>exec</goal>
								</goals>
								<configuration>
									<environmentVariables>
										<Platform>Win32</Platform>
									</environmentVariables>
									<executable>msbuild</executable>
								</configuration>
							</execution>
							<execution>
								<id>build-x64</id>
								<phase>generate-sources</phase>
								<goals>
									<goal>exec</goal>
								</goals>
								<configuration>
									<environmentVariables>
										<Platform>X64</Platform>
									</environmentVariables>
									<executable>msbuild</executable>
								</configuration>
							</execution>
						</executions>
					</plugin>
				</plugins>
			</build>
		</profile>
	</profiles>
</project>

The project itself remains configured in Visual Studio. While the source files are certainly modifiable in Eclipse, it won't have the full C/C++ toolchain environment until I figure out a proper way to do that. But this does indeed do the trick: it creates the two DLLs in the same way as when I had been building them in the IDE.

The next step is to automatically include these in the appropriate OSGi fragment projects. For this, at least for now, I'm using the maven-resources-plugin. This configuration depends on the structure of the Maven projects, which is sort of fragile, but it's not too bad when they're in the same overall project. This is the config for the x64 plugin, and there is a separate x86 project with an almost-identical configuration:

<project>
	...
	<build>
		<plugins>
			...
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-resources-plugin</artifactId>
				<version>2.7</version>
				
				<executions>
					<execution>
						<id>copy-native-lib</id>
						<phase>generate-resources</phase>
						<goals>
							<goal>copy-resources</goal>
						</goals>
						<configuration>
							<resources>
								<resource>
									<directory>${project.basedir}/../../native-project-name/x64/Debug/</directory>
									<includes>
										<include>nativelib-win32-x64.dll</include>
									</includes>
								</resource>
							</resources>
							<outputDirectory>${project.basedir}/lib</outputDirectory>
						</configuration>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>
</project>

The result is that, at least when I build on Windows, everything is properly compiled and put in its right place. When running in my normal Mac dev environment, it uses the built libraries that have previously been copied into the plugin, so it still works well enough.

This is still a far cry from an optimal configuration. The requirement of using Visual Studio is cumbersome, which means that any multi-platform build will mean a redundant config (whether it be in the pom or in a separate Makefile), and this current setup isn't properly "Mavenized": the output doesn't go into the "target" folder and the DLLs aren't tagged for inclusion in the installed Maven repo. It suits the purpose, though, of being an intermediate step in a larger build.

My long-term desire is to get this fully cross-platform and automated on a build server. That will involve a lot of learning about the nar-maven-plugin (or Makefiles) as well as either setting up a cross-compilation infrastructure or a series of Jenkins slaves. In theory, an OS X system can have everything it would need to build for the other platforms itself, but I've gathered that the safest way to do it is with the "multiple Jenkins nodes" route. When I develop an improved build system for this, I'll write followup posts.

Working with Rich Text's MIME Structure

Wed Jul 08 20:28:19 EDT 2015

Tags: mime

My work lately has involved, among other things, processing and creating MIME entities in the format used by Notes for storage as rich text. This structure isn't particularly complicated, but there are some interesting aspects to it that are worth explaining for posterity. Which is to say, myself when I need to do this again.

As a quick primer, MIME is a format originally designed for email which has proven generally useful, including for HTTP and, for our needs, internal storage in NSF. Like many things in programming, it is organized as a tree, with each node consisting of a set of headers (generally, things like "Content-Type: text/html"), content, and children.

Domino stores the text part of rich text in MIME as HTML. In the simplest case, this ends up a one-element "tree", which you can see in the document's properties dialog:

Content-Type: text/html; charset="US-ASCII"

<font size=2 face="sans-serif">Hello <b>there</b></font>

There's slightly more to its full storage implementation (like the MIME_Version item), but the MIME Part items are the important bits. This simple structure can be abstracted to this tree:

  • text/html

Things get a little more complicated when you add embedded images and/or attachments. When you do either of those, the MIME grows to multiple items and becomes a multi-node tree.

Embedded Images

When you add an embedded image in the rich text field, the storage grows to four same-named MIME Part items. Concatenated (and clipped for brevity), the items then look like:

Content-Type: multipart/related; boundary="=_related 006CEB9D85257E7C_="

This is a multipart message in MIME format.

--=_related 006CEB9D85257E7C_=
Content-Type: text/html; charset="US-ASCII"

<font size=3>Here's a picture:</font>
<br>
<br><img src=cid:_2_0C1832A80C182E18006CEB9885257E7C style="border:0px solid;">
<br>
<br><font size=3>Done.</font>

--=_related 006CEB9D85257E7C_=
Content-Type: image/jpeg
Content-ID: <_2_0C1832A80C182E18006CEB9885257E7C>
Content-Transfer-Encoding: base64

*snip*

--=_related 006CEB9D85257E7C_=--

You can see the same sort of HTML block as before contained in there, but it sprouted a lot of other stuff. To begin with, the starting part turned into "multipart/related". The "multipart" denotes that the top MIME entity has children, and the "related" is used when the children consist of an HTML body and inline images. There are delimiters used to separate each part, using the auto-generated convention of "related" plus an effectively-random number. The image itself is represented as a MIME Part of its own, in this case stored inline and Base64-encoded (it can be shifted off to an attachment by Notes/Domino after a certain size). This structure can be abstracted to:

  • multipart/related
    • text/html
    • image/jpeg

The HTML is designed so that there is an image tag that references the attached image using a "cid" URL, an email convention that basically means "find the entity in this related MIME structure with the following content ID" - you can then see the content ID reflected in the JPEG MIME Part. This sort of URL doesn't fly on the web, so anything displaying this field on a web page (or otherwise converting it to a non-MIME storage format) needs to translate that reference to something appropriate for its needs.*

Attachments

When you have a rich text field with an attachment (in this case without the embedded image), you get a very similar structure:

Content-Type: multipart/mixed; boundary="=_mixed 006EBF7C85257E7C_="

This is a multipart message in MIME format.

--=_mixed 006EBF7C85257E7C_=
Content-Type: text/html; charset="US-ASCII"

<font size=3>Here's an attachment: <br>
</font>
<br>
<br><font size=3><br>
Done. </font>

--=_mixed 006EBF7C85257E7C_=
Content-Type: application/octet-stream; name="cert.cer"
Content-Disposition: attachment; filename="cert.cer"
Content-Transfer-Encoding: binary

cert.cer

--=_mixed 006EBF7C85257E7C_=--

The structure is the same sort of tree as previously, but the "related" content sub-type has changed to "mixed". This indicates that there are multiple types of content, but they're conceptually distinct. In any event, the tree looks like:

  • multipart/mixed
    • text/html
    • application/octet-stream

"application/octet-stream" is a generic MIME type for, basically, "bag of bytes" - MIME-based tools use it when they either don't know the content type or, as in this case, don't care. In this case, Notes/Domino splits out the content to be an NSF-style attachment and then references that in the MIME - this is an implementation detail, though, as the API returns the value regardless.

This also highlights a minor limitation in rich text storage: attachments do not have an inline representation in the HTML, and so they are always moved to the end of the field in Notes. At first, I was peeved by this limitation, but it makes a sort of sense: cid references are really about images, and I guess Lotus didn't want to override that for use in normal link elements.

That brings us to the final potential structure you're likely to run across:

Embedded Images And Attachments

When you include both embedded images and attachments, things get slightly more complicated. I'll skip the raw MIME and go straight to the tree:

  • multipart/mixed
    • multipart/related
      • text/html
      • image/jpeg
    • application/octet-stream

So this becomes a combination of the two formats, and a bit of logic emerges. In Notes's structure, "multipart/mixed" always contains two or more children, and the first one is the textual body, whatever form that may take. One of those forms is just a single-part "text/html", and the other is a "multipart/related" subtree containing the "text/html" and one or more images.


Once you get a feel for these structures, it makes the task of reading and creating Notes-alike MIME items much less daunting. There are a number of other concerns I've been dealing with as well (such as the conversion of composite-data rich text to HTML and how there are two ways to do it), and maybe I'll make a followup post at some point about those.


* As a minor note on this point, it's an area where the Notes client and XPages diverge slightly. The Notes client (which generated the example above) leaves inline images "nameless" - they contain no "Content-Disposition" header and no name in the "Content-Type", instead sticking with just the "Content-ID" for identification. With XPages, however, presumably due to the fact that it has filename information during the upload process, the result still contains (and is referenced by) the "Content-ID" value, but it also contains a line like:

Content-Disposition: inline; filename="foo.jpg"

This functions the same way for most purposes, but it may be significant. For example, if you happen to write processing code that uses the presence or absence of the "Content-Disposition" header as an indicator of whether an entity is an attachment or not, knowing this ahead of time could save you a certain amount of headache. The right way to do it is to see if the header is either missing or has a basic value of "inline" instead of "attachment".

Quick Tip: Override Form-XPage Mapping in xsp.properties

Thu Jun 11 14:17:19 EDT 2015

Tags: xpages

This is sort of an esoteric feature, but I just ran across it and it fits nicely into an XPages bag of tricks.

Specifically, as it turns out, there's a way to override the default "which XPage should be used for this document?" behavior in some cases. The most-commonly-known behavior is the "Display XPage Instead" option set on the form (or on the default form, if-and-only-if the document has an empty Form field). This wins in all cases, and is the only way to get an XPage to open for a document when using non-XPage-specific URLs, such as traditional-and-clean-ish "db.nsf/viewname/key" URLs.

There's also a secondary behavior that I knew about before, which is that, if it can't find an explicit XPage for the form, it looks for an XPage that matches the form name. So, for example, "SomeForm" becomes "SomeForm.xsp" (case-sensitive, except the first letter). I used this once upon a time to be able to have XPages for documents stored in remote databases without changing the design of those databases (you can also specify an $XPageAlt field on the remote form without the XPage existing in that DB, by the way).

What I learned today is that there's a way to short-circuit that fallback and specify an alternative via xsp.properties. Specifically, you can add a line that looks like this:

xsp.domino.form.xpage.testform=OtherForm

There are a few things to note here:

  • This only works for "$$OpenDominoDocument.xsp" URLs, because the XSP runtime is not consulted for "view/key" URLs unless the form explicitly names an XPage.
  • This does not override an explicitly-named XPage in the form.
  • This does work when no form with the specified name actually exists in the database.
  • The form name must be lowercase in the property.
  • The form name must also be properly escaped as per the rules for properties files; importantly, this means that spaces in the form name must be preceded by a backslash.
  • This does work to specify an XPage for the default form when no form is specified in the document.

Most of the time, this trick won't be needed, since either specifying an XPage name in the URL or designating it in the form note will suffice. However, if you're in a case where you have an app that points to data in remote databases and you don't want to modify the design there but still want to use "$$OpenDominoDocument" URLs, this is an option.