Maven Native Chronicles, Part 2: Setting Up a Windows Jenkins Node

Sun Jul 26 11:16:50 EDT 2015

Tags: maven
  1. Maven Native Chronicles, Part 1: Figuring Out nar-maven-plugin
  2. Maven Native Chronicles, Part 2: Setting Up a Windows Jenkins Node
  3. Maven Native Chronicles, Part 3: Improving Native Artifact Handling
  4. Maven Native Chronicles: Running Automated Notes-based Tests

Before I get to the meat of this post, I want to point out that Ulrich Krause wrote a post on a similar topic today and you should read it.

The build process I've been working with involves a Jenkins server running on OS X (in order to build iOS binaries), and so it will be useful to have a Windows instance set up as well to run native builds and, importantly, tests. Jenkins comes with support for distributed builds, which makes this relatively straightforward.

To start with, I installed VirtualBox and went through the usual Windows setup process - it shouldn't matter too much which major version of Windows you use, as long as it's 64-bit, so that it can generate and test both 32-bit and 64-bit binaries. Once that was running, I installed the latest 64-bit JDK followed by Visual Studio Community, which is a pretty smooth process (for all their faults, Microsoft knows how to treat developers). To provide access to the VM from the Mac host, I added a second network adapter to the VM and set it to host-only networking:

During this process, I found Jump Desktop to be a very useful tool. Since the Mac host runs SSH, I was able to set up an RDP connection to the Windows VM using an SSH tunnel, which Jump does transparently for you. This made for a much better experience than VNCing into the Mac and controlling Windows in the VirtualBox window there.

Next, I decided that the route I wanted to take to control the Windows slave was SSH, since SSH is the bee's knees. I installed Cygwin, which creates a fairly Unix-like environment on top of Windows, and included OpenSSH in the process. After going through the afore-linked setup process, I had SSH access to the Windows machine (including, thanks to SSH proxying, remote access via the primary build server). On the Jenkins side on the Mac, I installed the "Cygpath plugin" (which is in the built-in plugin manager) to avoid any of the issues mentioned on the wiki page. The configuration in Jenkins is relatively straightforward (I will probably end up changing the base directory to be a clean Jenkins home, since I hadn't initially been sure if I needed Jenkins installed on the slave):

With that, I was able to set the build to run on servers with the "windows" label, kick it off, and start going through its complaints until I had it working.

First off, I had some more Java setup to do, specifically creating a system environment variable named JAVA_HOME and setting it to the root of the JDK ("C:\Program Files\Java\jdk1.8.0_51" in this case). Then, I set up Maven, which is something of an awkward process on Windows, but not TOO bad. I downloaded the latest binaries, unzipped them to "C:\Program Files\maven", and added an M2_HOME environment variable pointing to that folder:

I also added %M2_HOME%\bin;C:\Program Files (x86)\MSBuild\12.0\Bin to the end of the PATH variable, to cover both the Maven tools and the msbuild executable for later.

I ran into a bit of weirdness when it came to setting up configuration for SSH and Maven, specifically because it seems that Cygwin has two home folders for the logged-in user: the Unix-style /home/jesse and the normal Windows C:\Users\jesse (which is available in Cygwin as /cygdrive/c/Users/jesse). Since this Jenkins build checks out the code from GitHub via SSH, I needed to copy over the id_rsa file for the Jenkins user: this went into /home/jesse/.ssh/id_rsa. In order to configure Maven, though, the settings file went to C:\Users\jesse\.m2\settings.xml.
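
As a side note, the settings.xml for a build node like this usually doesn't need much in it - typically just credentials for whatever repository the builds deploy to. A minimal sketch (the server ID, username, and password here are placeholders, not values from this setup):

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
	<servers>
		<!-- The id has to match the repository id in the project's distributionManagement -->
		<server>
			<id>artifactory</id>
			<username>jenkins</username>
			<password>not-the-real-password</password>
		</server>
	</servers>
</settings>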

Eventually, it slogged its way through the build to completion, including a successful run of the integration tests. I still need to figure out the best way to get the resultant artifacts back out (or maybe it will be best to just deploy from both to the same Artifactory server), but this seems to do the main task for me.

Maven Native Chronicles, Part 1: Figuring Out nar-maven-plugin

Fri Jul 24 15:48:59 EDT 2015

  1. Maven Native Chronicles, Part 1: Figuring Out nar-maven-plugin
  2. Maven Native Chronicles, Part 2: Setting Up a Windows Jenkins Node
  3. Maven Native Chronicles, Part 3: Improving Native Artifact Handling
  4. Maven Native Chronicles: Running Automated Notes-based Tests

As I mentioned the other day, my work lately involves a native shared library that is then included in an OSGi plugin. To get it working during a Maven compile, I just farmed out the actual build process to Visual Studio's command-line project builder. That works as far as it goes, but it's not particularly Maven-y and, more importantly, it's Windows-only.

In looking around, it seems like the most popular method of doing native compilation in Maven, especially with JNI components, is maven-nar-plugin - nar means "Native ARchive", and it's meant to be a consistent way to package native artifacts (executables and libraries) across platforms. It does an admirable job wrangling the normally-loose nature of a C/C++ program to work with Maven-ish standards and attempts to paper over the differences between platforms and toolchains. I'm not entirely convinced that this will be the way I go long-term (in particular, its attitude towards multi-platform/arch builds seems to be "eh, sort of?"), but it's a good place to get started with non-Windows compilation.

The first step was to move the files around to mostly match a Maven-style layout. Starting out, the .cpp and .h files were in the src folder directly, while dependency headers were in a dependencies folder next to it. I left the Notes includes in there for now, but it seems that nar-maven-plugin will cover the JNI stuff for me, so I could simplify that somewhat. The new project structure looks like:

  • (project root)
    • src
      • main
        • c++
        • include
    • dependencies
      • inc
        • notes

Next was to set up the project configuration. For now, I still want to use Visual Studio's CLI app to build the Windows version, and I'm going to have to specifically define supported platforms, so I define the project as a nar, but then disable actual execution of the plugin by default:

<project>
	...
	<packaging>nar</packaging>
	
	<build>
		<plugins>
			<plugin>
				<groupId>com.github.maven-nar</groupId>
				<artifactId>nar-maven-plugin</artifactId>
				<version>3.2.3</version>
				<extensions>true</extensions>
				
				<configuration>
					<skip>true</skip>
				</configuration>
			</plugin>
		</plugins>
	</build>
</project>

Then, much as I did for the Windows-specific builds, I added a profile to try to build on my Mac. Note that these build settings produce a library that fails all unit tests, so they're surely not correct, but hey, it compiles and links, so that's a start. To ensure that it only builds when it has an appropriate context, it is triggered by a combination of OS family and the presence of the notes-program Maven property, which should point to the Notes executable directory.

<project>
	...
    
	<profiles>
		...
		<profile>
			<id>mac</id>
		
			<activation>
				<os>
					<family>mac</family>
				</os>
				<property>
					<name>notes-program</name>
				</property>
			</activation>
	
			<build>
				<plugins>
					<plugin>
						<groupId>com.github.maven-nar</groupId>
						<artifactId>nar-maven-plugin</artifactId>
						<extensions>true</extensions>
			
						<configuration>
							<skip>false</skip>
				
							<cpp>
								<debug>true</debug>
								<includePaths>
									<includePath>${project.basedir}/src/main/include</includePath>
									<includePath>${project.basedir}/dependencies/inc/notes</includePath>
								</includePaths>
					
								<options>
									<option>-DMAC -DMAC_OSX -DMAC_CARBON -D__CF_USE_FRAMEWORK_INCLUDES__ -DLARGE64_FILES -DHANDLE_IS_32BITS -DTARGET_API_MAC_CARBON -DTARGET_API_MAC_OS8=0 -DPRODUCTION_VERSION -DOVERRIDEDEBUG</option>
								</options>
							</cpp>
							<linker>
								<options>
									<option>-L${notes-program}</option>
								</options>
								<libSet>notes</libSet>
							</linker>
				
							<libraries>
								<library>
									<type>shared</type>
								</library>
							</libraries>
						</configuration>
					</plugin>
				</plugins>
			</build>
		</profile>
	</profiles>
</project>

Unstable though the result may be, the nar plugin does its job: it produces an archive containing the dylib, suitable for distribution as a Maven artifact and extraction into the downstream project, which I'll go into later.
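
As a taste of that downstream side, consuming the archive amounts to declaring the dependency with the "nar" type, at which point the plugin's download/unpack machinery extracts it during the build - the coordinates here are made up for illustration:

<dependency>
	<groupId>com.example</groupId>
	<artifactId>example-native-library</artifactId>
	<version>1.0.0-SNAPSHOT</version>
	<type>nar</type>
</dependency>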

So this is a good step towards my final goal. As I mentioned, I may end up getting rid of nar-maven-plugin specifically, but this is a good way to shape the code into something more portable (I also got rid of a few Windows-isms in the C++ while I was at it). My ultimate goal is to get a single build run that produces artifacts for all of the important platforms (Windows 32/64 and Linux 32/64 for production, Mac 32/64(?) for JUnit tests during development). I may be able to accomplish that using the nar plugin with a distributed Jenkins build, or I may be able to do it with Makefiles and GCC cross-compilers on an OS X build host. If that works, it's the sort of thing that makes all this Maven stuff worthwhile.

Adding Components to an XPage Programmatically

Sun Jul 19 09:16:35 EDT 2015

Tags: xpages java

One of my favorite aspects of working with apps using my framework is the component binding capability. This lets me just write the main structure of the page and let the controller do the grunt work of creating fields with validators and converters. There's a lot of magic behind the scenes to make it happen, but the core concept of dynamic component creation is relatively straightforward.

An XPage is a tree of components, and those components are all Java objects on the back end, which can be manipulated and added or removed programmatically. To demonstrate, I'll start with this basic XPage:

<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core" beforePageLoad="#{controller.beforePageLoad}" afterPageLoad="#{controller.afterPageLoad}">
	<xp:div id="container">
	</xp:div>
</xp:view>

Now, I'll add a basic form table using the afterPageLoad method in the controller class:

package controller;

import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;

import com.ibm.xsp.component.UIViewRootEx2;
import com.ibm.xsp.component.xp.XspInputText;
import com.ibm.xsp.extlib.component.data.UIFormLayoutRow;
import com.ibm.xsp.extlib.component.data.UIFormTable;
import com.ibm.xsp.extlib.util.ExtLibUtil;
import com.ibm.xsp.util.FacesUtil;

import frostillicus.xsp.controller.BasicXPageController;

public class home extends BasicXPageController {
	private static final long serialVersionUID = 1L;

	@SuppressWarnings("unchecked")
	@Override
	public void afterPageLoad() throws Exception {
		super.afterPageLoad();

		UIViewRootEx2 view = (UIViewRootEx2)ExtLibUtil.resolveVariable(FacesContext.getCurrentInstance(), "view");
		UIComponent container = FacesUtil.findChildComponent(view, "container");

		UIFormTable formTable = new UIFormTable();
		formTable.setFormTitle("Some Form");
		formTable.setStyle("margin: 2em; width: 20em");
		container.getChildren().add(formTable);
		formTable.setParent(container);

		UIFormLayoutRow formRow = new UIFormLayoutRow();
		formRow.setLabel("Name");
		formTable.getChildren().add(formRow);
		formRow.setParent(formTable);

		XspInputText inputText = new XspInputText();
		formRow.getChildren().add(inputText);
		inputText.setParent(formRow);
	}
}

There are a few concepts to get a handle on here, but fortunately they're not as esoteric as other aspects of back-end XPages development.

To start out with, there's the question of how you're supposed to know what classes the components are. The best way to find this out is to create a basic XPage containing the control with an ID and then go to the Package Explorer view, then the "Local" source folder, and find the class file for your page in the "xsp" package. In there, you can see the code that actually generates the XPage (which is doing basically the same thing as we're doing here). Look for the ID you gave the control on the page and you can find the class behind it.

Next is the job of finding the parent control you're going to attach the new components to. In this case, I used the FacesUtil class to search for the component by its ID, because I know it's the only one on the page with that base ID. This tack will usually do the trick for you, but there are other ways to find it, such as binding or XspQuery.

Finally, there's the need to both add the new child to the parent's children list and to set the parent in the child itself. This is a bit of housekeeping boilerplate, but it has to be done.
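
Since that add-and-set-parent dance repeats for every component you create, it can be handy to wrap it in a small utility method - something like this (purely a convenience sketch, not an API that ships with XPages):

@SuppressWarnings("unchecked")
public static <T extends UIComponent> T addChild(final UIComponent parent, final T child) {
	// Add to the parent's child list and point the child back at its parent
	parent.getChildren().add(child);
	child.setParent(parent);
	return child;
}

With that in place, the body of afterPageLoad above collapses to a handful of addChild(container, formTable)-style calls.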

Once you have this code running, you get a basic form (using the Bootstrap 3.2.0 theme in this case):

(Screenshot: the rendered form table, titled "Some Form")

This can get much more in-depth: anything you can do declaratively in the XPage XML, you can do programmatically in Java. You can also manipulate existing components: change their properties, rearrange them, or remove them from the tree entirely. I've found knowledge of this to be both useful as a part of the toolbox as well as a way to really clear up the mental model of what's going on on the page.
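
As a quick sketch of that last point, tweaking or removing an existing component uses the same lookup approach as above plus the standard UIComponent API:

UIViewRootEx2 view = (UIViewRootEx2)ExtLibUtil.resolveVariable(FacesContext.getCurrentInstance(), "view");
UIComponent container = FacesUtil.findChildComponent(view, "container");

// Change a property on an existing child...
UIComponent firstChild = (UIComponent)container.getChildren().get(0);
if(firstChild instanceof UIFormTable) {
	((UIFormTable)firstChild).setFormTitle("Renamed Form");
}

// ...or detach it from the tree entirely
container.getChildren().remove(firstChild);
firstChild.setParent(null);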

Quick-and-Dirty Inclusion of a Visual C++ Project in a Maven Build

Sat Jul 11 19:26:34 EDT 2015

Tags: maven jni

One of my projects lately makes use of a JNI library distributed via an OSGi plugin. The OSGi side of the project uses the typical Maven+Tycho combination for its building, but the native library was developed using Visual C++. This is workable enough, but ideally I'd like to have the whole thing part of one smooth build: compile the native library, then subsequently copy its resultant shared 32- and 64-bit libraries into the OSGi plugins.

From what I've gathered, the "proper" way to do this sort of setup is to use the nar-maven-plugin, which is intended to wrap around the normal compilers for each platform and handle packaging and access to the libraries and related components. I tinkered with this a bit but ran into a lot of trouble trying to get it to work properly, no doubt due to my extremely-limited knowledge of C++ toolchains combined with the natural weirdness of Windows's development environment.

For now, I decided to do it the "ugly" way that nonetheless gets the job done: just run the Visual C++ toolchain from Maven. Fortunately, Microsoft includes a tool called msbuild for this purpose: if you run it in the directory of a Visual C++ project, it will act like the full IDE. I added its executables to my PATH (C:\Program Files (x86)\MSBuild\12.0\bin) and then used a Maven plugin called exec-maven-plugin to launch it (the Ant plugin would also work, but this is more explicit). Since this will only run on Windows, I wrapped it in a triggered profile and added two executions to cover both 32-bit and 64-bit versions:

<project>
	...
	<packaging>pom</packaging>
	...
	
	<profiles>
		<profile>
			<id>windows-x64</id>
		
			<activation>
				<os>
					<family>windows</family>
					<arch>amd64</arch>
				</os>
			</activation>
			
			<build>
				<plugins>
					<plugin>
						<groupId>org.codehaus.mojo</groupId>
						<artifactId>exec-maven-plugin</artifactId>
						<version>1.4.0</version>
						<executions>
							<execution>
								<id>build-x86</id>
								<phase>generate-sources</phase>
								<goals>
									<goal>exec</goal>
								</goals>
								<configuration>
									<environmentVariables>
										<Platform>Win32</Platform>
									</environmentVariables>
									<executable>msbuild</executable>
								</configuration>
							</execution>
							<execution>
								<id>build-x64</id>
								<phase>generate-sources</phase>
								<goals>
									<goal>exec</goal>
								</goals>
								<configuration>
									<environmentVariables>
										<Platform>X64</Platform>
									</environmentVariables>
									<executable>msbuild</executable>
								</configuration>
							</execution>
						</executions>
					</plugin>
				</plugins>
			</build>
		</profile>
	</profiles>
</project>

The project itself remains configured in Visual Studio. While the source files are certainly modifiable in Eclipse, it won't have the full C/C++ toolchain environment until I figure out a proper way to do that. But this does indeed do the trick: it creates the two DLLs in the same way as when I had been building them in the IDE.

The next step is to automatically include these in the appropriate OSGi fragment projects. For this, at least for now, I'm using the maven-resources-plugin. This configuration depends on the structure of the Maven projects, which is sort of fragile, but it's not too bad when they're in the same overall project. This is the config for the x64 plugin, and there is a separate x86 project with an almost-identical configuration:

<project>
	...
	<build>
		<plugins>
			...
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-resources-plugin</artifactId>
				<version>2.7</version>
				
				<executions>
					<execution>
						<id>copy-native-lib</id>
						<phase>generate-resources</phase>
						<goals>
							<goal>copy-resources</goal>
						</goals>
						<configuration>
							<resources>
								<resource>
									<directory>${project.basedir}/../../native-project-name/x64/Debug/</directory>
									<includes>
										<include>nativelib-win32-x64.dll</include>
									</includes>
								</resource>
							</resources>
							<outputDirectory>${project.basedir}/lib</outputDirectory>
						</configuration>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>
</project>

The result is that, at least when I build on Windows, everything is properly compiled and put in its right place. When running in my normal Mac dev environment, it uses the built libraries that have previously been copied into the plugin, so it still works well enough.

This is still a far cry from an optimal configuration. The requirement of using Visual Studio is cumbersome and Windows-only, which means that any multi-platform build will need a redundant config (whether in the pom or in a separate Makefile), and this current setup isn't properly "Mavenized": the output doesn't go into the "target" folder and the DLLs aren't tagged for inclusion in the installed Maven repo. It suits the purpose, though, of being an intermediate step in a larger build.
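
That last bit could probably be addressed even within this quick-and-dirty setup by attaching the DLLs to the build as secondary artifacts with classifiers. I haven't wired this up yet, but a sketch using build-helper-maven-plugin would look roughly like this (the file path and classifier are illustrative):

<plugin>
	<groupId>org.codehaus.mojo</groupId>
	<artifactId>build-helper-maven-plugin</artifactId>
	<version>1.9.1</version>
	<executions>
		<execution>
			<id>attach-native-libs</id>
			<phase>package</phase>
			<goals>
				<goal>attach-artifact</goal>
			</goals>
			<configuration>
				<artifacts>
					<artifact>
						<file>${project.basedir}/../../native-project-name/x64/Debug/nativelib-win32-x64.dll</file>
						<type>dll</type>
						<classifier>win32-x64</classifier>
					</artifact>
				</artifacts>
			</configuration>
		</execution>
	</executions>
</plugin>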

My long-term desire is to get this fully cross-platform and automated on a build server. That will involve a lot of learning about the nar-maven-plugin (or Makefiles) as well as either setting up a cross-compilation infrastructure or a series of Jenkins slaves. In theory, an OS X system can have everything it would need to build for the other platforms itself, but I've gathered that the safest way to do it is with the "multiple Jenkins nodes" route. When I develop an improved build system for this, I'll write followup posts.

Working with Rich Text's MIME Structure

Wed Jul 08 20:28:19 EDT 2015

Tags: mime

My work lately has involved, among other things, processing and creating MIME entities in the format used by Notes for storage as rich text. This structure isn't particularly complicated, but there are some interesting aspects to it that are worth explaining for posterity. Which is to say, myself when I need to do this again.

As a quick primer, MIME is a format originally designed for email which has proven generally useful, including for HTTP and, for our needs, internal storage in NSF. Like many things in programming, it is organized as a tree, with each node consisting of a set of headers (generally, things like "Content-Type: text/html"), content, and children.

Domino stores the text part of rich text in MIME as HTML. In the simplest case, this ends up a one-element "tree", which you can see in the document's properties dialog:

Content-Type: text/html; charset="US-ASCII"

<font size=2 face="sans-serif">Hello <b>there</b></font>

There's slightly more to its full storage implementation (like the MIME_Version item), but the MIME Part items are the important bits. This simple structure can be abstracted to this tree:

  • text/html

Things get a little more complicated when you add embedded images and/or attachments. When you do either of those, the MIME grows to multiple items and becomes a multi-node tree.

Embedded Images

When you add an embedded image in the rich text field, the storage grows to four same-named MIME Part items. Concatenated (and clipped for brevity), the items then look like:

Content-Type: multipart/related; boundary="=_related 006CEB9D85257E7C_="

This is a multipart message in MIME format.

--=_related 006CEB9D85257E7C_=
Content-Type: text/html; charset="US-ASCII"

<font size=3>Here's a picture:</font>
<br>
<br><img src=cid:_2_0C1832A80C182E18006CEB9885257E7C style="border:0px solid;">
<br>
<br><font size=3>Done.</font>

--=_related 006CEB9D85257E7C_=
Content-Type: image/jpeg
Content-ID: <_2_0C1832A80C182E18006CEB9885257E7C>
Content-Transfer-Encoding: base64

*snip*

--=_related 006CEB9D85257E7C_=--

You can see the same sort of HTML block as before contained in there, but it sprouted a lot of other stuff. To begin with, the starting part turned into "multipart/related". The "multipart" denotes that the top MIME entity has children, and the "related" is used when the children consist of an HTML body and inline images. There are delimiters used to separate each part, using the auto-generated convention of "related" plus an effectively-random number. The image itself is represented as a MIME Part of its own, in this case stored inline and Base64-encoded (it can be shifted off to an attachment by Notes/Domino after a certain size). This structure can be abstracted to:

  • multipart/related
    • text/html
    • image/jpeg

The HTML is designed so that there is an image tag that references the attached image using a "cid" URL, an email convention that basically means "find the entity in this related MIME structure with the following content ID" - you can then see the content ID reflected in the JPEG MIME Part. This sort of URL doesn't fly on the web, so anything displaying this field on a web page (or otherwise converting it to a non-MIME storage format) needs to translate that reference to something appropriate for its needs.*

Attachments

When you have a rich text field with an attachment (in this case without the embedded image), you get a very similar structure:

Content-Type: multipart/mixed; boundary="=_mixed 006EBF7C85257E7C_="

This is a multipart message in MIME format.

--=_mixed 006EBF7C85257E7C_=
Content-Type: text/html; charset="US-ASCII"

<font size=3>Here's an attachment: <br>
</font>
<br>
<br><font size=3><br>
Done. </font>

--=_mixed 006EBF7C85257E7C_=
Content-Type: application/octet-stream; name="cert.cer"
Content-Disposition: attachment; filename="cert.cer"
Content-Transfer-Encoding: binary

cert.cer

--=_mixed 006EBF7C85257E7C_=--

The structure is the same sort of tree as previously, but the "related" content sub-type has changed to "mixed". This indicates that there are multiple types of content, but they're conceptually distinct. In any event, the tree looks like:

  • multipart/mixed
    • text/html
    • application/octet-stream

"application/octet-stream" is a generic MIME type for, basically, "bag of bytes" - MIME-based tools use it when they either don't know the content type or, as in this case, don't care. In this case, Notes/Domino splits out the content to be an NSF-style attachment and then references that in the MIME - this is an implementation detail, though, as the API returns the value regardless.

This also highlights a minor limitation in rich text storage: attachments do not have an inline representation in the HTML, and so they are always moved to the end of the field in Notes. At first, I was peeved by this limitation, but it makes a sort of sense: cid references are really about images, and I guess Lotus didn't want to override that for use in normal link elements.

That brings us to the final potential structure you're likely to run across:

Embedded Images And Attachments

When you include both embedded images and attachments, things get slightly more complicated. I'll skip the raw MIME and go straight to the tree:

  • multipart/mixed
    • multipart/related
      • text/html
      • image/jpeg
    • application/octet-stream

So this becomes a combination of the two formats, and a bit of logic emerges. In Notes's structure, "multipart/mixed" always contains two or more children, and the first one is the textual body, whatever form that may take. One of those forms is just a single-part "text/html", and the other is a "multipart/related" subtree containing the "text/html" and one or more images.
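
To make that logic concrete, here's a rough sketch of walking one of these trees with the core lotus.domino classes (error handling and recycle() calls omitted; note also that MIME conversion generally needs to be disabled on the session before opening the document, or the items may come back converted to composite data):

void dumpMimeStructure(final Document doc) throws NotesException {
	// "Body" here is the rich text item's name - adjust as needed
	MIMEEntity body = doc.getMIMEEntity("Body");
	if(body != null) {
		printEntity(body, 0);
	}
}

void printEntity(final MIMEEntity entity, final int depth) throws NotesException {
	// Print e.g. "multipart/related" or "text/html", indented by depth
	StringBuilder line = new StringBuilder();
	for(int i = 0; i < depth; i++) { line.append("  "); }
	line.append(entity.getContentType()).append('/').append(entity.getContentSubType());
	System.out.println(line);

	// Only multipart entities have children; walk them as first child plus siblings
	if("multipart".equalsIgnoreCase(entity.getContentType())) {
		MIMEEntity child = entity.getFirstChildEntity();
		while(child != null) {
			printEntity(child, depth + 1);
			child = child.getNextSibling();
		}
	}
}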


Once you get a feel for these structures, it makes the task of reading and creating Notes-alike MIME items much less daunting. There are a number of other concerns I've been dealing with as well (such as the conversion of composite-data rich text to HTML and how there are two ways to do it), and maybe I'll make a followup post at some point about those.


* As a minor note on this point, it's an area where the Notes client and XPages diverge slightly. The Notes client (which generated the example above) leaves inline images "nameless" - they contain no "Content-Disposition" header and no name in the "Content-Type", instead sticking with just the "Content-ID" for identification. With XPages, however, presumably because it has filename information during the upload process, the result still contains (and is referenced by) the "Content-ID" value, but it also contains a line like:

Content-Disposition: inline; filename="foo.jpg"

This functions the same way for most purposes, but it may be significant. For example, if you happen to write processing code that uses the presence or absence of the "Content-Disposition" header as an indicator of whether an entity is an attachment, knowing this ahead of time could save you a certain amount of headache. The right way to do it is to see if the header is either missing or has a base value of "inline" instead of "attachment".

Quick Tip: Override Form-XPage Mapping in xsp.properties

Thu Jun 11 14:17:19 EDT 2015

Tags: xpages

This is sort of an esoteric feature, but I just ran across it and it fits nicely into an XPages bag of tricks.

Specifically, as it turns out, there's a way to override the default "which XPage should be used for this document?" behavior in some cases. The most-commonly-known behavior is the "Display XPage Instead" setting on the form (or the default form if-and-only-if the document has an empty Form field). This wins in all cases, and is the only way to get an XPage to open for a document when using non-XPage-specific URLs, such as traditional-and-clean-ish "db.nsf/viewname/key" URLs.

There's also a secondary behavior that I knew about before, which is that, if it can't find an explicit XPage for the form, it looks for an XPage that matches the form name. So, for example, "SomeForm" becomes "SomeForm.xsp" (case-sensitive, except the first letter). I used this once upon a time to be able to have XPages for documents stored in remote databases without changing the design of those databases (you can also specify an $XPageAlt field on the remote form without the XPage existing in that DB, by the way).

What I learned today is that there's a way to short-circuit that fallback and specify an alternative via xsp.properties. Specifically, you can add a line that looks like this:

xsp.domino.form.xpage.testform=OtherForm

There are a few notes here:

  • This only works for "$$OpenDominoDocument.xsp" URLs, because the XSP runtime is not consulted for "view/key" URLs unless the form explicitly names an XPage.
  • This does not override an explicitly-named XPage in the form.
  • This does work when no form with the specified name actually exists in the database.
  • The form name must be lowercase in the property.
  • The form name must also be properly escaped as per the rules for properties files; importantly, this means that spaces in the form name must be preceded by a backslash (there's an example of this after the list).
  • This does work to specify an XPage for the default form when no form is specified in the document.
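
For example, covering the lowercase and escaping rules at once, a hypothetical form named "Expense Report" mapped to an XPage named "Expense.xsp" would look like:

xsp.domino.form.xpage.expense\ report=Expense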

Most of the time, this trick won't be needed, since either specifying an XPage name in the URL or designating it in the form note will suffice. However, if you're in a case where you have an app that points to data in remote databases and you don't want to modify the design there but still want to use "$$OpenDominoDocument" URLs, this is an option.

Parsing JSON in XPages Applications

Thu May 21 12:47:53 EDT 2015

Tags: json java

David Leedy pointed out to me that a post I made last year about generating JSON in XPages left out a crucial bit of followup: reading that JSON back in. This topic is a bit simpler to start with, since there's really just one entrypoint: com.ibm.commons.util.io.json.JsonParser.fromJson(...).

There are a few variants of this method to provide either a callback to call during parsing or a List to fill with the result, but most of the time you're going to use the variants that take a String or Reader of JSON and convert it into a set of Java objects. As with generating JSON, the first parameter here is a JsonJavaFactory, and which static instance property you choose matters a bit. Contrary to the first-Google-result documentation, there are three types, and they differ slightly in the types of objects they output:

  • instance: This uses java.util.HashMap for JSON objects/maps and java.util.ArrayList for JSON arrays.
  • instanceEx: This is like instance, but uses JsonJavaObject for JSON objects/maps.
  • instanceEx2: Like instanceEx, this uses JsonJavaObject for objects/maps but also uses JsonArray for JSON arrays.

Since JsonJavaObject and JsonArray implement the normal Map<String, Object> and List<Object> interfaces you'd expect (and, indeed, subclass HashMap and ArrayList), you can treat them interchangeably if you're just using the interfaces like you should, but it may matter if you're doing something where you expect certain traits of one of the concrete classes or want to use the explicit getString, etc. methods on the JSON-specific ones.*

Anyway, with that out of the way, the actual code to use these is pretty straightforward:

String json = "{ \"foo\": 1, \"bar\": 2}";
Map<String, Object> result = (Map<String, Object>)JsonParser.fromJson(JsonJavaFactory.instance, json);

In this case, I'm immediately casting the result from Object to Map because I'm sure of the contents of the JSON. If you're less confident, you should surround it with instanceof tests before doing something like that. In any event, that's pretty much all there is to it. As with generating JSON, SSJS wraps this functionality in a fromJson method (which may or may not produce the same objects; I haven't checked).
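
For the more cautious route, the checks might look something like this (the JSON literal is just an arbitrary example):

Object parsed = JsonParser.fromJson(JsonJavaFactory.instance, "[ { \"foo\": 1 }, \"bar\" ]");
if(parsed instanceof List) {
	for(Object entry : (List<?>)parsed) {
		if(entry instanceof Map) {
			// A JSON object - a plain HashMap when using the "instance" factory
			Map<?, ?> entryMap = (Map<?, ?>)entry;
			System.out.println("foo=" + entryMap.get("foo"));
		} else {
			// A scalar value (string, number, boolean) or null
			System.out.println("value=" + entry);
		}
	}
}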


* You could also subclass the standard factory, as the library's own code does for its variants, if you have specific needs or desires, like using a LinkedHashMap instead of a HashMap to preserve the order of the object's keys.

Quick PSA: LS2J Problems in 9.0.1 FP3

Tue May 19 14:25:11 EDT 2015

Tags: java

While I'm at it, I realized that it may be useful to further spread information about the problem that led to me fiddling with the latest IFs and JVM patches in the first place: the LS2J problems in 9.0.1 FP3. Specifically, it's borked. The main way most people have encountered this is via an exception dialog when attempting to add new plugins to an Update Site NSF, since that uses LS2J to accomplish its task - it displays several layers of stack and the upshot is that there's a java.lang.InternalError about calling a constructor.

The fix is to install the JVM patch from here; Interim Fix 3, while also worth an install, doesn't cover this.

There's another caveat about that, though, for those who have both Notes and Domino installed on the same machine. Since the installer for the JVM patch is the same for both, it will pick one of the two and not give you the choice of the other. In the case of the 64-bit patch (I don't know why it picks the client in that case), Ulrich Krause posted workaround steps. In my case, both were 32-bit, it found Domino first and not Notes, and the same commands didn't work. My ugly workaround was to fire up regedit, browse to (if I recall correctly) HKEY_LOCAL_MACHINE\SOFTWARE\IBM and rename the "Domino" folder/key to something else, run the installer, and then rename the folder/key back.

Quick Tip: Re-Enabling Disabled Designer Plugins

Tue May 19 12:58:05 EDT 2015

Tags: designer

Recently, I had a case where my installed Designer plugins stopped appearing, immediately made obvious by the libraries disappearing from XPages applications and Designer listing hundreds of class-not-found errors. At first, I figured that the local plugins had been deleted, but trying to install from update sites curtly informed me that they contained nothing new for me.

It turned out that my local plugins had been somehow marked disabled by Designer. The fix for this was to go to File → Application → Application Management (you may have to launch Designer to see this option) and to enable them there. Crucially, the disabled plugins didn't show up until I clicked the "Show Disabled Features" button (forgive the grossly-outdated Notes version on this client machine):

Once I did that, the second category of plugins (in the data folder) listed everything I expected, and I was able to re-enable them there. One hitch to this process is that it requires sticking to the dependency order, so some plugins may refuse to be enabled until you enable others (commonly, any that depend on the Extension Library).

I'm not sure what specifically caused all these plugins to take a nap, but I suspect it's related to a recent Interim Fix or the Java update, since it happened around when I installed those, and I've heard others report the same behavior.

How I Use JAX-RS in the frostillic.us Framework

Fri May 01 17:59:14 EDT 2015

Tags: java rest

Inspired by Toby Samples's new blog series on JAX-RS in Domino, I'd like to share a description of how I made use of it to write the REST services in the frostillic.us Framework. This is not intended to be a from-scratch introduction - Toby is handling that well so far - but instead assumes a certain amount of knowledge of OSGi development and why you would want to do this in the first place.

The goal of my REST services is to provide an automatic REST/JSON API for any Framework model objects used in a database without having to include any servlet code in the database itself. It's a business-logic-friendly analogue to the Domino Access Services and "borrows" heavily from that code base. It doesn't use the DAS extension point, though, in large part because I didn't know that existed until recently. As far as I can tell, using that extension point saves you some bootstrapping work and makes it possible to enable/disable the service in the server config, but otherwise the work will likely be fairly similar.

Initial Setup

To get started, this will all have to take place in a plugin, unless it turns out there's a way to do it in-NSF. In this case, this made sense anyway, since I wanted the servlets to be available for everything. The first step was to make a stub class to act as the base of the servlet, even though it doesn't really do anything:

package frostillicus.xsp.model.servlet;

import javax.servlet.ServletException;
import com.ibm.domino.services.AbstractRestServlet;

public class ModelServlet extends AbstractRestServlet {
	private static final long serialVersionUID = 1L;

	public static ModelServlet instance;

	public ModelServlet() {
		instance = this;
	}

	@Override
	protected void doInit() throws ServletException {
		super.doInit();
	}
}

Once that class existed, I registered it in the plugin.xml as a servlet extension:

<extension id="frostillicus.xsp.model.Servlet" name="fmodelservlet" point="org.eclipse.equinox.http.registry.servlets">
	<servlet alias="/fmodel" class="frostillicus.xsp.model.servlet.ModelServlet">
		<init-param name="applicationConfigLocation" value="/WEB-INF/fmodelapplication"/>
		<init-param name="propertiesLocation" value="/WEB-INF/fmodelservlet.properties"/>
		<init-param name="DisableHttpMethodCheck" value="true"/>
	</servlet>
</extension>

In addition to that servlet class, it also references two text files to provide configuration for Wink, the JAX-RS implementation packaged with the Extension Library. The first is a list of resource classes to use in the servlet:

frostillicus.xsp.model.servlet.resources.ApiRootResource
frostillicus.xsp.model.servlet.resources.ManagersResource
frostillicus.xsp.model.servlet.resources.ManagerResource
frostillicus.xsp.model.servlet.resources.ModelResource

The second is a properties file with configuration options (I don't remember why this option is important):

wink.defaultUrisRelative=false

Implementing a Resource

The classes listed above are what receive a REST request (funneled through Wink/JAX-RS) and provide a response. As an example, here's the ManagerResource class:

package frostillicus.xsp.model.servlet.resources;

import java.io.IOException;
import java.net.URI;
import java.util.Map;
import java.util.HashMap;

import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.ResponseBuilder;
import javax.ws.rs.core.UriInfo;

import org.openntf.domino.Database;

import com.ibm.commons.util.io.json.JsonException;
import com.ibm.commons.util.io.json.JsonGenerator;
import com.ibm.commons.util.io.json.JsonJavaFactory;
import com.ibm.domino.commons.util.UriHelper;
import com.ibm.domino.das.utils.ErrorHelper;

import frostillicus.xsp.model.ModelManager;
import frostillicus.xsp.model.ModelObject;
import frostillicus.xsp.model.ModelUtils;
import frostillicus.xsp.util.FrameworkUtils;

@SuppressWarnings("unused")
@Path("{managerName}")
public class ManagerResource {

	@GET
	@Produces(MediaType.APPLICATION_JSON)
	public Response getManager(@Context final UriInfo uriInfo, @PathParam("managerName") final String managerName) {
		try {
			Map<String, Object> result = new HashMap<String, Object>();
			Database database = FrameworkUtils.getDatabase();
			if(database == null) {
				result.put("status", "error");
				result.put("message", "Must be run in the context of a database.");
			} else {
				Class<? extends ModelManager<?>> managerClass = ModelUtils.findModelManager(database, managerName);
				if(managerClass == null) {
					result.put("status", "failure");
					result.put("message", "No manager found for name '" + managerName + "'");
				} else {
					result.put("status", "success");
					result.put("managerClass", managerClass.getName());
				}
			}

			return ResourceUtils.createJSONResponse(result, false);
		} catch (Throwable e) {
			return ResourceUtils.createErrorResponse(e);
		}
	}

	@POST
	@Consumes(MediaType.APPLICATION_JSON)
	public Response createModel(final String requestEntity, @Context final UriInfo uriInfo, @PathParam("managerName") final String managerName) {

		Database database = FrameworkUtils.getDatabase();
		Class<? extends ModelManager<?>> managerClass = ModelUtils.findModelManager(database, managerName);
		if(managerClass == null) {
			return ErrorHelper.createErrorResponse("Manager '" + managerName + "' not found.", Response.Status.NOT_FOUND);
		}

		URI location;
		try {
			ModelManager<? extends ModelObject> manager = managerClass.newInstance();
			ModelObject model = manager.create();
			ResourceUtils.updateModelObject(requestEntity, model, false);

			location = UriHelper.appendPathSegment(uriInfo.getAbsolutePath(), model.getId());
		} catch(Throwable t) {
			return ResourceUtils.createErrorResponse(t);
		}

		ResponseBuilder builder = Response.created(location);
		Response response = builder.build();
		return response;
	}
}

There are quite a few concepts at work here, as well as tons of logic wrapped up in the referenced ResourceUtils and ModelUtils classes. The term "manager" in this class has no special meaning for JAX-RS or servlets - it's the term the Framework uses for the objects that provide access to models, like the "Posts" manager that maps requests for "all" to a back-end view named "Posts\All" and returns "Post" objects.

This is where things get hairy and diverge from basic servlet creation and head into Domino/XPages-specific eccentricities.

A Secret Double Life

The job of the model servlet is a bit strange, in that it doesn't want to just read document data from an NSF, but also should read and process Java classes. Framework model classes are defined in the NSF, not in plugins, and so the servlet has no real knowledge of what's inside the NSF, other than that it should look for classes that implement the appropriate interfaces.

When code is executing in an OSGi servlet context like this, it sits in a strange grey area. It's a better spot than agents - which have no knowledge of OSGi plugins or the XSP runtime - but it's not quite an XPages context, either, and there's no FacesContext available. Instead, a class called ContextInfo provides access to the session (running as the currently-authenticated Domino user) and the current database, if applicable. That "if applicable" comes in because an OSGi servlet can be accessed either as "http://foo.com/servletname" or as "http://foo.com/bar.nsf/servletname". I modified my utility class to paper over the difference between these two environments. The manager resource calls ModelUtils.findModelManager with this database context to try to find the requested manager. For example, if the request comes in as "/fmodel/Posts", it will search the database for a class or managed bean named "Posts".
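
For illustration, access via ContextInfo looks roughly like this - it's a static-method utility class (in com.ibm.domino.osgi.core.context, at least in the versions I've worked with) that hands back the lotus.domino flavors of the objects:

// Session for the currently-authenticated Domino user
Session session = ContextInfo.getUserSession();

// The contextual database when accessed as "/bar.nsf/fmodel/...";
// null when the servlet is reached without a database in the URL
Database database = ContextInfo.getUserDatabase();
if(database == null) {
	// No database context - respond with an error, as in the resource above
}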

This is where the ability to treat an NSF as a "bag of classes" comes in handy. It may be possible to do this another way, but I'm using the DatabaseClassLoader provided by ODA to perform searches on the Java classes contained in an NSF. For this purpose, the actual names and structure of the classes are irrelevant, only that they implement ModelManager. If such a class is found, then the servlet knows enough about it, thanks to the interface, to fetch collections and individual model objects as necessary.

Bits and Bobs

In addition to the trickery required to pull manager and model classes out of an NSF, there are also a number of other techniques and components, mostly lifted from the ExtLib, used to make this work.

The code makes heavy use of JsonWriter to produce JSON in a lightweight manner, rather than building up and then spitting out large blobs of JSON, which is particularly important with large amounts of data.

The AbstractDominoModel class needed a lot of reworking to exist in the not-quite-XPages environment. In an XSP context, it makes use of an inner DominoDocument object in order to be able to deal with file attachments more easily, but that doesn't work without the full context. Accordingly, it uses a holder class to paper over the difference between DominoDocument and "manually" accessing the ODA Document. Of particular note is the handling of rich text for export into the REST display. It uses the HTML converter class included in DominoUtils, which I assume uses the HTMLConvertItem C function underneath the hood. It then uses the converter object to output an HTML version of the rich-text item as well as URLs for any attachments.

There is a certain amount of number fiddling and off-by-one-error-prone work done to determine the first and last entries in a model collection to show. As with DAS, it supports both the "Range" header and "start" and "count" GET parameters.

I nabbed wholesale IBM's shim implementation of the PATCH method, which isn't included in the version of Wink shipped with the ExtLib (at least not when I wrote this). Ideally, PATCH would mean that the JSON provided to the server would only be used to update fields in-place (leaving any fields in the model not included in the JSON untouched), while PUT would replace the model object entirely (removing any model fields not present in the JSON). In reality, they both act as PATCH.

As with XPages access to model objects, the REST APIs work with the JPA annotations used in model objects for validation. The model objects check their context - in an XPages context, failed validations result in FacesMessages, while otherwise it throws a ConstraintViolationException. When this exception occurs in the servlet, the error-response method picks up on that and generates specialized JSON to provide an explanation of the failed constraints to the user.
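
As a rough idea of what that looks like on the error-response side (a trimmed-down sketch, not the Framework's actual code), the violations translate to JSON pretty directly:

public static Response createConstraintViolationResponse(final ConstraintViolationException e) {
	Map<String, Object> result = new HashMap<String, Object>();
	result.put("status", "failure");

	List<Map<String, Object>> violations = new ArrayList<Map<String, Object>>();
	for(ConstraintViolation<?> violation : e.getConstraintViolations()) {
		Map<String, Object> entry = new HashMap<String, Object>();
		entry.put("property", String.valueOf(violation.getPropertyPath()));
		entry.put("message", violation.getMessage());
		violations.add(entry);
	}
	result.put("violations", violations);

	try {
		String json = JsonGenerator.toJson(JsonJavaFactory.instance, result);
		return Response.status(Response.Status.BAD_REQUEST).type(MediaType.APPLICATION_JSON).entity(json).build();
	} catch(Exception jsonException) {
		return Response.serverError().build();
	}
}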

So Yeah

If you haven't tried out JAX-RS servlets yet, don't let this list of caveats and complicated code daunt you. This specific case of working with model objects in a very generic way naturally leads to complicated code, and the annotation-based coding system of JAX-RS/Wink reduces the amount of code dramatically. None of my code has to deal with fetching HTTP requests or parsing query parameters into useful objects - the API does that for me. There's no doubt a good deal more I could have it do for me as well. This is a pretty clean way to write servlets and is absolutely the best way to write them when the code makes sense to exist in a plugin.