Darwino for Domino: Conceptual Overlap and Distinctions

Wed Jun 01 16:18:39 EDT 2016

  1. An Overview of Darwino for Domino Types
  2. Darwino for Domino: Replication and Data Format
  3. Darwino for Domino: Domino-side Configuration
  4. Darwino for Domino: Conceptual Overlap and Distinctions

I've talked a bit so far about how Darwino relates to Domino from a development perspective, but I think it'd also be useful to delve into which concerns the two platforms address specifically, to see where they overlap and where they don't.

There are two main categories to cover, since Darwino inherits Domino's unusual trait of spilling over from "database" to "app-dev platform".

Database

As I covered a few posts ago, the two are similar at a conceptual level, both being replicating document databases with document-level access control. Aside from the difference between an NSF note's format and a JSON document, the main distinction is that Darwino doesn't cover the actual physical storage of data. Instead, it sits on top of existing SQL servers of various stripes (PostgreSQL and SQLite being the most common). A Darwino application creates a series of tables and uses them as the backing store for the conceptual document database.

This has a number of implications. The main one is that there isn't a "Darwino server" as such - instead, there are SQL databases and Darwino applications acting in tandem. In developing an application, this isn't generally a concern: the Darwino APIs are the same across each database, in the same sort of way that a Domino application doesn't care about the ODS version. However, being backed by a SQL server has some distinct advantages: the server can be administered and optimized using the same knowledge you would use for a "normal" SQL-backed app, and the ability of modern DBs to index JSON data opens up a world of possibilities (think NSFDB2, except good).
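
To give a flavor of what that opens up, here's a hypothetical JDBC query against a PostgreSQL JSON column - the table, column, and connection details are invented for illustration, not Darwino's actual schema:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class JsonQueryExample {
	public static void main(String[] args) throws Exception {
		try(Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/somedb", "user", "password")) {
			// PostgreSQL's ->> operator extracts a JSON property as text,
			// and an expression index on it can make this query fast
			PreparedStatement stmt = conn.prepareStatement("SELECT doc ->> 'lastname' FROM documents WHERE doc ->> 'firstname' = ?");
			stmt.setString(1, "Foo");
			try(ResultSet rs = stmt.executeQuery()) {
				while(rs.next()) {
					System.out.println(rs.getString(1));
				}
			}
		}
	}
}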

The flip side of this bleeds into the second category: it means that a Darwino application consists of at least two parts - the SQL database and the application - which veers slightly from Domino's "everything in one package" promise.

Application

Things diverge most significantly (though no less promisingly) when it comes to the application level. Domino has a few "official" ways to develop applications (Notes, legacy web, and XPages) and then hooks to act, more or less, like a Java EE server, albeit with some notable limitations. Darwino, on the other hand, exists as a sort of "glue layer" between the database and the application: lower-level than XPages but higher-level than a bare database driver.

Darwino provides a common platform for writing Java-based applications, with various services for managed beans, user directories, and so forth, written to work consistently across all of the platforms it targets. Again, this starts out similar to Domino, but diverges in the areas where Darwino takes advantage of other technologies.

At the low level, since Darwino's main requirement is "a Java runtime", it is able to run smoothly on various Java EE servers, on Android, on iOS, and on pretty much anything that provides a capable-enough Java environment (such as, say, Domino). This also, incidentally, means that it works great on Java 8.

At the high level, Darwino doesn't prescribe a specific UI framework, so the field is open to use any of the tremendous array of rapidly-developing Java frameworks on the web side and, as desired, native UI toolkits on mobile. There's a bit of an inherent bias towards REST + client-side JavaScript applications, since then exactly the same code can be used on both web and mobile (not every Java web toolkit works on the mobile mini web server as it stands now), but that's not obligatory.

Overall

So the overall idea is that Darwino doesn't solve every problem that Domino does, but the problems it chooses to farm out are the ones where doing so brings tremendous benefit. In each area where Darwino relies on third-party support, it picks up the major advancements made in recent years, without requiring weird hoops to get modern techniques to work.

Speaking at Social Connections Toronto

Mon May 30 11:57:41 EDT 2016

Tags: speaking

By virtue of one of the original speakers having to cancel, I will be presenting at Social Connections in Toronto next week! Specifically, it will be on the 7th at 2:30, and the topic will be OpenNTF's new tooling and initiatives:

OpenNTF has long been a central hub for community and open-source development in the Notes/Domino world. In the past year, however, it has also expanded its capabilities with new tooling and a broadening scope to the larger IBM portfolio. This presentation will discuss OpenNTF’s history and its plans for the future, including a look at its newly-launched suite of source control, issue tracking, and continuous-delivery tools.

So, if you're attending Social Connections, drop on by the Toronto Room on Tuesday afternoon.

Old App Idea: App Manager

Tue May 24 10:12:26 EDT 2016

Tags: xpages

Years back, I took a small whack at an idea that had been percolating in my head: an "app manager" application that would assist with the XPage-specific portions of running a Domino server. It would cover a lot of ground that Administrator really doesn't touch, such as inspecting NSFs to see which contain XPage artifacts, highlighting potential problems with them, and assisting with app-design backup and deployment.

It never really got too far, since there are always a hundred other things to work on, but I always liked the idea. I decided to toss what code I had laid down up on GitHub:

https://github.com/jesse-gallagher/app-manager

I don't imagine I'll really have time to build on that unless I'm suddenly really struck with inspiration. But either way, the code is there for anyone interested in taking a look.

The Cleansing Flame of Null Analysis

Sat May 21 10:18:00 EDT 2016

Tags: java maven
  1. The Cleansing Flame of Null Analysis
  2. Quick Tip: JDK Null Annotations for Eclipse
  3. The Joyful Utility of Optionals in Java

Though most of my work lately has been on sprawling, platform-level stuff or other large existing codebases, part of it has involved a new small app. I decided to take this opportunity to dive more aggressively than previously into automated null analysis and other potential-bugs tools.

What I mean by "null analysis" is letting the IDE or compiler try to help you avoid NullPointerExceptions. Though there are plenty of other programming mistakes you could still make, these are among the most common, and so a little extra work up front to avoid them should pay dividends. Eclipse has some handy options in its Java → Compiler → Errors/Warnings preferences to assist with this, most notably "Null pointer access" and "Potential null pointer access".

The first option will pick up on some pretty basic instances, such as:

Object foo = null;
System.out.println(foo.hashCode());

Since this is clearly going to always cause an NPE, Eclipse is able to point this out as an error. The next level gets a little more nebulous: "potential" null pointer access. This crops up when Eclipse can't reliably determine whether a value will be null, either because there is no way to know at compile time (say, database access) or because the compiler's tooling is too limited. Here's a contrived example:

Object foo = Math.random() > 0.5 ? new Object() : null;
System.out.println(foo.hashCode());

This situation is clearly untenable, but there are other situations where you as a programmer can be very confident that the value will not be null (say, if you swap out the > 0.5 for >= 0.0), but the compiler doesn't know that. That's why it often makes sense to leave that as a warning instead of an error.

That's all stuff I've done before, but now I've decided to dive into annotation-based null analysis as well. Unfortunately, in stock Java, this is something of a hot mess: there are a number of competing annotation packages out there, and that's even leaving out Eclipse's home-grown version. Since Java didn't grow up with this sort of capability, it's been shoehorned in by various parties over the years. There are more capable tools if you can use Java 8, but, unfortunately, I can only target 7 as the highest. For now, I've settled on the "sort-of standard" javax.validation.constraints package. It wasn't really intended for this specific purpose, but it's flexible enough to suit and can be used in Eclipse and FindBugs (though I have my reservations about the choice).

In Eclipse, this type of analysis can be enabled by checking "Enable annotation-based null analysis" below the other options and, unless you're using Eclipse's known annotations, adjusting the "Configure" options next to "Use default annotations for null specifications".

In any event, regardless of the choice of tooling, the "this shouldn't be null" annotations work the same way: you use them to decorate things that you either require not be null when provided to you (method parameters) or you promise to not be null when providing to others (method return values). For example:

public @NotNull Object doSomething(@NotNull Object otherObject) {
	return otherObject.toString();
}

This highlights three things, two good and one bad:

  • Good: The @NotNull in the method parameter means that, as long as the calling code is also checked for null use, the method can be confident that there won't be a NullPointerException when calling a method on otherObject.
  • Good: The @NotNull on the return value means that other code calling this method can be confident that they will not get a null value from it, and so can skip extra null checks.
  • Bad: Eclipse flags otherObject.toString() as a potential problem because it doesn't know for sure that Object#toString doesn't return null, because it has no nullability annotations. As programmers (or as a compiled-code analysis tool), we can be fairly confident that it will be non-null because any object returning null for that is essentially broken on its own.

That last one is a common problem when adopting annotation-based null analysis, at least in Eclipse (I hear it may be better in IntelliJ): its logic doesn't go very deep. If everything is gussied up with these annotations, you're clear - but as soon as you step outside of the project you're working on, you have to add in likely-unnecessary checks. Fortunately, these checks don't realistically hurt (a null check at runtime in a normal app is negligible performance-wise), but they can grate to have to add in.
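
In practice, the workaround ends up looking something like this (a contrived sketch, not code from the app in question, using the javax.validation annotations mentioned above):

import java.util.Objects;

import javax.validation.constraints.NotNull;

public class AppeasementExample {
	public @NotNull String describe(@NotNull Object otherObject) {
		// Object#toString is unannotated, so Eclipse can't prove this is non-null...
		String result = otherObject.toString();
		// ...but an explicit runtime check (negligible in a normal app) satisfies the analysis
		return Objects.requireNonNull(result);
	}
}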

Glutton for punishment that I am, I decided to go a step further and enable FindBugs processing as an integral step of my build. Though FindBugs can be very picky about the types of things it complains about, it is blessedly more thorough in its analysis than Eclipse, so you generally end up conceding that it is correct when it yells at you. Since the project is Maven-based, I added the check in the project's pom file:

<plugin>
	<groupId>org.codehaus.mojo</groupId>
	<artifactId>findbugs-maven-plugin</artifactId>
	<version>3.0.3</version>
	<configuration>
		<includeTests>true</includeTests>
	</configuration>
	<executions>
		<execution>
			<phase>compile</phase>
			<goals>
				<goal>check</goal>
			</goals>
		</execution>
		<execution>
			<id>findbugs-test-compile</id>
			<phase>test-compile</phase>
			<goals>
				<goal>check</goal>
			</goals>
		</execution>
	</executions>
</plugin>

For most uses, that's all that's required. Now, when the project is compiled, FindBugs will give it a once-over and halt the build if it finds anything it doesn't like. This can be tweaked a great deal - for example, changing the checks to run or the severity of the problem needed to fail the build - but the defaults will likely suit.
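
As one example of that kind of tweaking, individual complaints can also be suppressed in the code itself. This is a sketch assuming you've added FindBugs's annotations artifact (edu.umd.cs.findbugs.annotations) as a dependency:

import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;

public class SuppressionExample {
	// Turns off one specific detector for this method only, with a recorded justification
	@SuppressFBWarnings(value="NP_NULL_ON_SOME_PATH_FROM_RETURN_VALUE", justification="toString is effectively non-null here")
	public String describe(Object obj) {
		return obj.toString();
	}
}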

Adding these extra checks involves a lot of plusses and minuses. The big minus is that you may end up spending a lot of time "fixing" bugs that don't really exist, time that you could instead spend actually writing your application (and writing new bugs that the tools won't find anyway). There's really nothing to be gained by carefully explaining to Eclipse for the hundredth time that toString always returns non-null.

Still, particularly when tested out in a small, low-surface-area app, this can be a good practice to learn and refine. Eventually, a move to Java 8 will help this more, and it certainly doesn't hurt to add in nullability annotations in the meantime. Overall, I think having the tooling help you avoid a whole suite of common "brain fart" bugs like this is worthwhile.
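
As a taste of what that move will bring, here's a quick hypothetical of Java 8's Optional, which makes the "might be absent" case explicit in the type:

import java.util.Optional;

public class OptionalExample {
	// Returning Optional instead of a possibly-null String forces callers
	// to acknowledge the "no value" case
	public static Optional<String> findDisplayName(Object obj) {
		return Optional.ofNullable(obj).map(Object::toString);
	}

	public static void main(String[] args) {
		// orElse supplies a fallback rather than risking an NPE
		System.out.println(findDisplayName(null).orElse("(anonymous)"));
	}
}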

Quick, Short-Notice Announcement: Delaware Valley Soc-Biz UG Meetup

Tue May 17 20:34:20 EDT 2016

Tags: meetup

Granted, this is short notice, but for anyone in the southeast-PA area, there's a meetup at IBM's offices this Thursday (the 19th) from 11 to 12:30. I'll be there, giving a presentation about OpenNTF and some of the ways that I've used the projects we work on there on my customer projects. Better still, there's lunch! The signup form is over on Greenhouse here:

https://greenhouse.lotus.com/forms/landing/org/app/80846239-6f7a-4483-8ace-9e5e02b0a661/launch/index.html?form=F_Form1

If you're able to make it, great! Otherwise, I imagine there should be more of this sort of thing in the future.

Darwino for Domino: Domino-side Configuration

Mon May 16 10:51:47 EDT 2016

Tags: darwino
  1. An Overview of Darwino for Domino Types
  2. Darwino for Domino: Replication and Data Format
  3. Darwino for Domino: Domino-side Configuration
  4. Darwino for Domino: Conceptual Overlap and Distinctions

In my last post, I mentioned that a big part of the job of the Darwino-Domino replicator is converting the data one way or another to better suit the likely programming model Darwino-side or to clean up old data. The way this is done is via a configuration database on the Domino side (an XPages application), which allows you to specify Database Adapters that configure the translation. While it is possible to write these in Java, the primary way is to use a Groovy-based DSL script.

The simplest form of this script may be one line just defining where the NSF is:

nsfName "foo.nsf"

With that, the replicator will open foo.nsf and attempt to replicate all documents with "best fit" translations of each field it comes across. Things can get a little more complex, though:

form("SomeForm") {
	excludeField "foo"
	excludeFields "IgnoreMe"
	
	field "foo_(.*)", nameRegex: true
	arrayField "bar_(.*)", delimiter: "\$", zeroBased: true, initialIndexed: false, prefix: false, compact: true,
		userDataFormatName: "ODAChunk", nameRegex: true
	field ~"impliedfoo_(.*)"
}

form("ArrayTest") {
	arrayField "FirstName", type:TEXT, delimiter: "_"
	arrayField "LastName", type:TEXT, delimiter: "\$", zeroBased: true, initialIndexed: false, prefix: true, compact: true
	
	// Similar to the ODA style
	arrayField "UserData", type:USERDATA, delimiter: "\$", zeroBased: true, initialIndexed: false, prefix: false, compact: true,
		userDataFormatName: "ODAChunk",
		toDarwino: { value ->
			// The value will be an array of byte arrays, which should be strings
			println "testrep: called toDarwino!"
			value.collect { bytes ->
				new String((byte[])bytes, "UTF-8");
			}
		},
		toDomino: { value ->
			// We should provide an array of byte arrays
			println "testrep: called toDomino!"
			println "value is ${value}"
			def result = value.collect { string ->
				string.getBytes("UTF-8")
			}
			println "result is ${result}"
			return result
		}
}

form("RestrictedFields") {
	restrictToDefinedFields true
	field "KnownField"
	field "KnownField2"
}

The specifics of what's going on in this example (pulled from unit test data) aren't too important, but it demonstrates the customizability that an in-language DSL brings. Since the code is executed in Groovy, it has access to the full Java runtime (including, if you deliver it in a plugin, your own custom classes and dependencies), as well as Groovy's nice abilities like closures. In fact, a great many properties, like the converters above, can be specified as closures and executed in the context of each document as it's replicating, allowing for pretty fine-grained translation.

And personally, adapting Groovy into the process was an interesting exercise. Since Groovy was designed explicitly as a "scripting" variant of Java, the process of working it in to an existing Java code base is very smooth, and there aren't too many gotchas. I wrote some Java classes that provide the context for the root and individual "form" blocks, wired up the interpreter, and then they call each other seamlessly. Other languages could probably suit the job well too - JRuby, Rhino, etc. - but Groovy is mature, purpose-built, and largely syntax-compatible with Java itself, making it a very comfortable fit.
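
To illustrate the general shape of that wiring (a simplified hypothetical, not the actual Darwino classes), a script base class provides the root vocabulary and hands each form block a delegate object:

import org.codehaus.groovy.control.CompilerConfiguration;

import groovy.lang.Closure;
import groovy.lang.GroovyShell;
import groovy.lang.Script;

public class AdapterConfigRunner {
	// Base class for the compiled script: its methods are the root DSL vocabulary
	public static abstract class AdapterScript extends Script {
		public void nsfName(String name) {
			System.out.println("NSF: " + name);
		}
		public void form(String formName, Closure<?> body) {
			// Route calls inside the block to a per-form context object
			body.setDelegate(new FormContext(formName));
			body.setResolveStrategy(Closure.DELEGATE_FIRST);
			body.call();
		}
	}

	// The vocabulary available inside a form block
	public static class FormContext {
		private final String formName;
		public FormContext(String formName) { this.formName = formName; }
		public void field(String name) {
			System.out.println(formName + ": field " + name);
		}
	}

	public static void main(String[] args) {
		CompilerConfiguration config = new CompilerConfiguration();
		config.setScriptBaseClass(AdapterScript.class.getName());
		new GroovyShell(config).evaluate("nsfName 'foo.nsf'\nform('SomeForm') { field 'KnownField' }");
	}
}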

Darwino for Domino: Replication and Data Format

Wed May 11 14:35:50 EDT 2016

Tags: darwino
  1. An Overview of Darwino for Domino Types
  2. Darwino for Domino: Replication and Data Format
  3. Darwino for Domino: Domino-side Configuration
  4. Darwino for Domino: Conceptual Overlap and Distinctions

One of the key points of interest in Darwino for Domino developers is its two-way replication. Darwino's replication system was built in such a way that, in addition to its own internal needs, you can also write a replicator to connect to an entirely-unrelated system, as long as that replicator translates the foreign data to and from JSON documents. Domino is a perfect case for this, since the data model is already very similar, and its replicator ships with Darwino and has been a focus of my attention for a while.

Behind the scenes, this replication takes advantage of a lot of the same kinds of things that Domino's replication always has: UNIDs, sequence IDs, original-vs.-in-file modification/creation, and deletion stubs. These details are transparent to the developer: the Darwino adapter knows how to fetch the appropriate data from the NSF and to convince Domino that the Darwino DB is (almost) like a remote Domino server, at least as far as the stored data is concerned.

The primary differences show up in the way data is formatted for storage. Darwino uses JSON for its document format, which has a couple key advantages and disadvantages compared to NSF's "bag of items" approach. The best approach is probably to provide an example and highlight the pertinent differences. Say you have a Domino document that (conceptually) looks like this:

FirstName
	Type: TEXT
	Value: Foo
LastName
	Type: TEXT
	Value: Fooson
Username
	Type: TEXT
	Flags: NAMES READWRITERS
	Value: CN=Joe Schmoe/O=SomeOrg
Birthday
	Type: TIME
	Value: 1970/2/1
Vacations
	Type: TIME_RANGE
	Value: 2016/1/1-2016/1/5, 2016/3/3-2016/3/5
IsAdmin
	Type: TEXT
	Value: Y

On the Darwino side, that would potentially look like this:

{
	"_writers": {
		"username": ["cn=Joe Schmoe,o=SomeOrg"]
	},
	"firstname": "Foo",
	"lastname": "Fooson",
	"birthday": "1970-02-01",
	"vacations": [
		"2016-01-01/2016-01-05",
		"2016-03-03/2016-03-05"
	],
	"admin": true
}

There are certainly a few things to take note of here. First and foremost is the structure of the authors field. Because JSON doesn't have field metadata, the way Darwino does its readers/writers security is by using specially-named and -structured properties within the JSON, and so the converter moves all readers and writers fields into that. This has internal-implementation reasons, but I think it's also conceptually preferable to, say, having a multi-level object within each field to declare its flags separate from the value. The name also happens to be stored in LDAP style, because Darwino is more at home with standard LDAP conventions for that sort of thing.

Another thing to note is the format of the date fields. Since JSON doesn't have a real date/time type of its own, these values are converted according to ISO 8601 and stored as strings. That means that your Darwino application will need to know that those string values represent dates, but they're reasonable to deal with.

The multi-value date field leads to another important aspect: arrays. In Domino, most items are conceptually presented as arrays, regardless of whether they contain single or multiple values, leading to code that requires either explicitly asking for the first element or jumping through hoops to deal with single or multiple values. Since that's a drag to worry about with JSON, the default behavior for single-value items transferred to Darwino is to store them as "bare" values. When configuring the translation (which will be fodder for a future post), you are able to specify that you want a field to be always stored as an array, which will allow the Darwino-side code to be simpler.

The last field shows off an outright advantage of the JSON format: boolean storage. When defining the conversion, you can specify a field as boolean and provide what the true/false values will be, and they will be sent over to JSON as true and false explicitly. That's not a night-and-day change, but it is a nice help.

Finally, there's the matter of rich text, which is unsurprisingly a nontrivial problem. This is handled in an XPage-alike way: MIME rich text is transferred with only minor adjustments, while Composite Data is converted to HTML and cleaned up a bit before transfer. Darwino supports the concept of attachments natively, and so Domino attachments are brought over with a naming prefix to match the field they're attached to, plus a delimiter to indicate whether they're normal attachments or embedded images. The way it is presented on the user end is dependent on the application, but Darwino has some routines to translate storage-safe inline image refs to app-relative URLs.

Later, I will go into the process of how the Domino-Darwino adapters are set up. The short of it is that you create scripts that can run the gamut from just telling the server where to find the NSF to customizing the translation of each field encountered. This allows you to either transfer the data back and forth in the default "best approximation" approach or use the opportunity to enforce a bit of a schema on old data.

An Overview of Darwino for Domino Types

Thu Apr 14 19:05:47 EDT 2016

Tags: darwino
  1. An Overview of Darwino for Domino Types
  2. Darwino for Domino: Replication and Data Format
  3. Darwino for Domino: Domino-side Configuration
  4. Darwino for Domino: Conceptual Overlap and Distinctions

So, Darwino! I've mentioned it quite a few times on Twitter and, particularly, in person, but I think it's high time I write some proper blog posts about it.

To start with, I'll cover what Darwino is. The short version is it's a Java-based development framework with a replicating document database. The interesting aspects go beyond that, though:

  • In addition to Java web servers, it targets mobile devices, both Android and, through RoboVM, iOS. Those devices store their own replicas of the databases for offline work in the same conceptual way as Notes, but with native (or hybrid web, if you're so inclined) mobile user interfaces.
  • The document database sits on top of SQL servers. Many modern SQL servers have native support for JSON data, and Darwino takes advantage of this to get document-DB flexibility with SQL features.
  • Business logic is shared between platforms. Because Java acts as a common language between each platform, and the document DB works the same way locally and remotely, the core business logic of the app can be identical across each targeted platform, with only the UI changing between them.
  • Along those lines, Darwino isn't prescriptive with the UI: it's not a front-end framework itself, instead providing the basis for using other front-end tools, such as Ionic, JSF, and Vaadin for web/hybrid UIs and the native OS toolkits on mobile.
  • The Darwino syncing protocol is designed to be adaptable to other services. This is immediately notable for Domino developers, but can also be (and has been, in some cases) adapted for arbitrary other back ends, like Connections social data or other databases.

How does this relate to Domino/XPages development? That depends on your desires, really.

In some ways, it doesn't. Darwino is its own platform, running on Java web servers like WebSphere and Tomcat, using independent SQL servers like DB2 and PostgreSQL. Darwino's replication between the server and mobile devices is similar to Domino's, but is its own thing. Similarly, the document model, though conceptually similar to Domino (including enhanced reader and author fields), is not NSF.

However, there are a number of reasons why it's of interest to a Domino developer, and the most immediate of those is its ability to do two-way replication with Domino databases. I'm a little biased on this point because of how much time I've spent working on it, but this replication is capable of some nifty tricks to make it flexible and adaptable, including transformation of the data, two-way maintenance of document time stamps, and so forth. This syncing makes it very practical to extend your existing Domino app - be it a classic-style Notes/web app or an XPages one - with a Darwino-side UI that uses the same data, synced down to mobile devices for offline access. And this doesn't require migration; since the changes replicate back, the app can remain chugging away unchanged on the Domino side if desired. This can also be tremendously useful for reporting, by syncing the data over to a full SQL database that can be viewed and queried by normal tools.

And, really, it's also of interest to Domino developers personally by virtue of being a platform that has learned a lot of valuable lessons from Domino and extended them in new ways. Those years of accumulated document-database knowledge will carry over nicely, with some extra benefits if you're SQL-familiar too. Any Java knowledge will come in handy immediately, as Darwino is thoroughly Java-based on all target platforms. And, thanks to its pedigree, a lot of the platform support concepts are similar to aspects of XPages (the good parts). In general, the more XPages development you've done, the more it will benefit you (especially if you want to use JSF for the UI!). You could also, with some servlet-implementation limitations, run Darwino apps on Domino via OSGi, and I've been putting some side work into accessing Darwino databases from XPages directly - Darwino could make a solid basis for Domino-run apps.

So this turned into a bit of a sales pitch, but there's no getting around it - I find this thoroughly compelling and exciting. As I have time, I plan to expand on Darwino's various capabilities (along with the other various blog series I still plan to get to). For now, you can register for and download the Community Edition, read the documentation there, and/or track me down with any questions.

A Bit of Code Archaeology

Thu Mar 10 09:08:47 EST 2016

Tags: xpages

Yesterday, I decided to toss the source of my first real XPages app up on GitHub:

https://github.com/jesse-gallagher/Raidomatic

It's my WoW guild's web site, which had some forums as well as a raid-management tool and loot tracker. I'm guessing that those tools won't be particularly useful for your average XPages app, but they were interesting things to build, and were a great exercise in figuring out the platform. Since it is quite old, there are also plenty of terrible decisions in there, such as my mass recycler from before I knew that Domino objects are already all recycled at the end of the request, but hey, that's growth.

In any event, it's there for the curious, or in case any of the code ends up being useful for future Googlers.

Maven Native Chronicles: Running Automated Notes-based Tests

Sat Feb 27 17:02:11 EST 2016

Tags: maven
  1. Maven Native Chronicles, Part 1: Figuring Out nar-maven-plugin
  2. Maven Native Chronicles, Part 2: Setting Up a Windows Jenkins Node
  3. Maven Native Chronicles, Part 3: Improving Native Artifact Handling
  4. Maven Native Chronicles: Running Automated Notes-based Tests

This post isn't really in my ongoing Java thread, though it's related in that this is the sort of thing that may come up in fairly-advanced cases. This post will assume a functional knowledge of Maven, Tycho, and JUnit.

For Darwino, I ran into the need to run unit tests on Domino-adapter code during the Maven build process. Since the Domino project tree uses Tycho, this ended up differing slightly from standard Maven testing. Rather than using the src/test/java directory in the same project to house the associated tests, Tycho prefers the very-OSGi-native method of having a separate project, but declaring it a "fragment" plugin attached to the primary one. In OSGi terms, a fragment is a special type of plugin that, when loaded by the runtime, gets glommed on to a specified host plugin and runs in its same classpath. Fragments may also be used to provide platform-specific additions or locale resources, among other uses.

So I created a new fragment project, which is structurally much like a normal plugin, but with an extra line in the MANIFEST.MF:

Fragment-Host: com.example.some.parent.plugin

This line tips off the OSGi environment to its nature. In the pom.xml, there are a number of important differences related both to how Tycho handles test fragments and the necessity of loading the Notes native libraries:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>com.example</groupId>
		<artifactId>some-parent</artifactId>
		<version>1.0.0-SNAPSHOT</version>
	</parent>
	<artifactId>com.example.some.parent.plugin.test</artifactId>
	<packaging>eclipse-test-plugin</packaging>

	<build>
		<plugins>
			<!--
				By default, Tycho doesn't include the other fragment plugins when running the test.
				So here, we manually include the appropriate features. 
			 -->
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>target-platform-configuration</artifactId>
				<version>${tycho-version}</version>
				
				<configuration>
					<dependency-resolution>
						<extraRequirements>
						
							<requirement>
								<type>eclipse-plugin</type>
								<id>com.ibm.notes.java.api.win32.linux</id>
								<versionRange>[9.0.1,9.0.2)</versionRange>
							</requirement>
							
							<requirement>
								<type>eclipse-feature</type>
								<id>com.example.some.native.feature</id>
								<versionRange>0.0.0</versionRange>
							</requirement>
							
						</extraRequirements>
					</dependency-resolution>
				</configuration>
			</plugin>
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>tycho-surefire-plugin</artifactId>
				
				<configuration>
					<testSuite>${project.artifactId}</testSuite>
					<testClass>com.example.some.parent.plugin.test.AllTests</testClass>
				</configuration>
			</plugin>
		</plugins>
	</build>
	
</project>

The preamble is the same as usual for Maven, but the packaging is slightly different. Instead of eclipse-plugin, this should be packaged as eclipse-test-plugin. Tycho's packaging doesn't particularly care about whether or not it's a fragment, but it does care about its test nature.

Things get a little interesting in the target-platform-configuration block. These two entries have similar purposes: to cause Tycho to load up other, native-artifact fragments required to run the tests. The first one showed up in the Java series: it contains Notes.jar, but, because it is itself a fragment (and can't be directly depended upon by the test project), Tycho won't automatically load it unless directed to. The second one serves a similar purpose, but loads a feature instead. This feature contains references to a number of distinct platform-dependent native-artifact fragments, and specifying this dependency causes Tycho to consider each one without having to specifically enumerate them in the POM.

The final block is a little simpler, and it just tells Tycho where to start when it goes to run the fragment as a test suite. The AllTests class is a test suite in the JUnit 4 convention, with @RunWith and @Suite.SuiteClasses annotations.
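
For reference, that convention looks like this (the suite-member class names here are hypothetical):

import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import org.junit.runners.Suite.SuiteClasses;

@RunWith(Suite.class)
@SuiteClasses({
	DominoAdapterTest.class,
	ReplicationTest.class
})
public class AllTests {
	// No body needed - the annotations define the suite
}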


There's another catch to this, though: Notes has some specific demands on its environment, and in particular must be run with knowledge of a Notes program directory, a data directory, a notes.ini, and an ID file (unless you're doing DIIOP (which you probably shouldn't)). The specifics of what the libraries expect in their runtime environment and how they should be loaded in their API calls vary a little from platform to platform, and I ended up with a pile of "just keep trying stuff until it works" code. The result, though, is that I have automated tests running on Windows, Linux, and OS X. First, there's the large platform-specific section of my root POM, which defines platform-activated profiles that set up environment variables:

<!-- These profiles add support for specific platforms for tests -->
<profiles>
	<profile>
		<id>windows</id>
		<activation>
			<os>
				<family>Windows</family>
			</os>
			<property>
				<name>notes-program</name>
			</property>
		</activation>
	
		<build>
			<plugins>
				<plugin>
					<groupId>org.eclipse.tycho</groupId>
					<artifactId>tycho-surefire-plugin</artifactId>
					<version>${tycho-version}</version>
					
					<configuration>
						<skip>false</skip>
						
						<argLine>-Dfile.encoding=UTF-8 -Djava.library.path="${notes-program}"</argLine>
						<environmentVariables>
							<PATH>${notes-program}${path.separator}${env.PATH}</PATH>
						</environmentVariables>
					</configuration>
				</plugin>
			</plugins>
		</build>
	</profile>
	<profile>
		<id>mac</id>
		<activation>
			<os>
				<family>mac</family>
			</os>
			<property>
				<name>notes-program</name>
			</property>
		</activation>
	
		<build>
			<plugins>
				<plugin>
					<groupId>org.eclipse.tycho</groupId>
					<artifactId>tycho-surefire-plugin</artifactId>
					
					<configuration>
						<skip>false</skip>
						
						<argLine>-Dfile.encoding=UTF-8 -Djava.library.path="${notes-program}"</argLine>
						<environmentVariables>
							<PATH>${notes-program}${path.separator}${env.PATH}</PATH>
							<LD_LIBRARY_PATH>${notes-program}${path.separator}${env.LD_LIBRARY_PATH}</LD_LIBRARY_PATH>
							<DYLD_LIBRARY_PATH>${notes-program}${path.separator}${env.DYLD_LIBRARY_PATH}</DYLD_LIBRARY_PATH>
							<Notes_ExecDirectory>${notes-program}</Notes_ExecDirectory>
						</environmentVariables>
					</configuration>
				</plugin>
			</plugins>
		</build>
	</profile>
	<profile>
		<id>linux</id>
		<activation>
			<os>
				<family>unix</family>
				<name>linux</name>
			</os>
			<property>
				<name>notes-program</name>
			</property>
		</activation>
	
		<build>
			<plugins>
				<plugin>
					<groupId>org.eclipse.tycho</groupId>
					<artifactId>tycho-surefire-plugin</artifactId>
					<version>${tycho-version}</version>
					
					<configuration>
						<skip>false</skip>
						
						<argLine>-Dfile.encoding=UTF-8 -Djava.library.path="${notes-program}"</argLine>
						<environmentVariables>
							<!-- The res/C path entry is important for loading formula language properly -->
							<PATH>${notes-program}${path.separator}${notes-program}/res/C${path.separator}${notes-data}${path.separator}${env.PATH}</PATH>
							<LD_LIBRARY_PATH>${notes-program}${path.separator}${env.LD_LIBRARY_PATH}</LD_LIBRARY_PATH>
							
							<!-- Notes-standard environment variable to specify the program directory -->
							<Notes_ExecDirectory>${notes-program}</Notes_ExecDirectory>
							<Directory>${notes-data}</Directory>
							
							<!-- Linux generally requires that the notes.ini path be specified manually, since it's difficult to determine automatically -->
							<!-- This variable is a convention used in the test classes, not Notes-standard -->
							<NotesINI>${notes-ini}</NotesINI>
						</environmentVariables>
					</configuration>
				</plugin>
			</plugins>
		</build>
	</profile>
</profiles>

Each block is kicked off both by a specific OS combination, using Maven's OS names (you can also target specific architectures within them), and by the presence of a notes-program property. This is a convention I've adopted to go alongside the notes-platform property that points to the XSP plugins; this one instead points to the root Notes or Domino install to use for execution.

Windows is the easiest, since Notes still feels most at home on that platform. There, it's just a matter of adding the Notes program root to the Java library path and the environment's PATH. From there, the Notes libraries automatically picked up the data directory and notes.ini, presumably from the registry.

The Mac is mildly more complex: in addition to the two settings from Windows, I also ended up adding the program path to LD_LIBRARY_PATH and DYLD_LIBRARY_PATH. I'm not entirely sure both are needed, but hey, it works this way. In addition, I had to specify Notes_ExecDirectory. After that, the tests found the location of the data dir and Notes Preferences, presumably due to Mac OS conventions.

Linux needed the most hand-holding, which shouldn't be too surprising for those who have installed Domino on Linux - it doesn't seem to respect any platform conventions there. In addition to specifying the notes-program property and using it in the same places as on the Mac, I also added two more properties to my Maven config: notes-data, to point to the data directory, and notes-ini, to point to notes.ini. I used the notes-data property to specify the Directory environment variable that the Notes libraries look for, and then I also specified NotesINI. That's not something that the Notes libs look for, but instead it's a way to shuttle the configuration to the Java code that actually executes the tests.
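
For example, a hypothetical Linux invocation supplying all three properties (the paths are illustrative only):

mvn test -Dnotes-program=/opt/ibm/domino/notes/latest/linux -Dnotes-data=/local/notesdata -Dnotes-ini=/local/notesdata/notes.ini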

That leads to the final hurdle: initializing the Notes environment in the JUnit test classes. To do that, I specified a @BeforeClass method that checks for the presence of the Notes_ExecDirectory and NotesINI environment variables. If they're present (i.e. it's Linux), it calls NotesInitExtended with the value of Notes_ExecDirectory as the first argument and = plus the value of NotesINI as the second. Afterwards, whether or not that was called, it calls NotesThread.sinitThread(), and from then on NotesFactory.createSession() will generate proper native sessions.

There's also an @AfterClass method that is the mirror of that: it calls NotesThread.stermThread() and then, on Linux, NotesTerm.
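
Put together, that pair looks something like this - a sketch where notesInitExtended and notesTerm stand in for however you bind to the corresponding C API functions (e.g. via JNA), which isn't shown here:

import org.junit.AfterClass;
import org.junit.BeforeClass;

import lotus.domino.NotesThread;

public abstract class AbstractNotesTest {
	private static boolean nativeInited = false;

	@BeforeClass
	public static void initNotes() {
		String execDir = System.getenv("Notes_ExecDirectory");
		String notesIni = System.getenv("NotesINI");
		if(execDir != null && notesIni != null) {
			// Linux: point the C API to the program dir and an explicit notes.ini
			notesInitExtended(execDir, "=" + notesIni);
			nativeInited = true;
		}
		// Always initialize Notes for this thread
		NotesThread.sinitThread();
	}

	@AfterClass
	public static void termNotes() {
		NotesThread.stermThread();
		if(nativeInited) {
			notesTerm();
		}
	}

	// Hypothetical wrappers around the C API's NotesInitExtended/NotesTerm
	private static void notesInitExtended(String... args) { /* native binding call */ }
	private static void notesTerm() { /* native binding call */ }
}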


So yeah, there are a lot of hoops to hop through! Hopefully, this post will be helpful for someone attempting to do the same thing I did, and it'll cut down on a lot of searching around and trying to piece together a working environment.