Satisfying Designer's Peculiar OSGi Constraints in Tycho

Mon Oct 12 17:33:14 EDT 2015

Tags: tycho

Before anything else, I should mention that this post is entirely on the topic of building OSGi plugins with Maven. If you're not doing that yet, this probably won't be particularly useful.

For the most part, when building OSGi plugins for XPages, you can be reasonably confident that the available plugins will be similar between Notes and Domino. That's not always the case, though - there is a set of plugins available in Domino that isn't present in Designer's runtime. They're generally physically there in the Notes install, in the osgi folder, but are presumably only loaded when you (inadvisably) fire up the local web preview.

In this situation, you may want to make your XPages library depend on a server plugin, but declaring a normal OSGi dependency on it will cause your library to fail to load in Designer. OSGi provides a clean mechanism for this: optional dependencies. If you mark a dependency (on a plugin or on a Java package) as optional, its absence will not prevent the plugin from loading. As long as the code that requires those classes is never run (as would be the case for server code loaded in Designer), you're in the clear.
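In the MANIFEST.MF, this is just a directive on the requirement. For example, using the com.ibm.domino.xsp.bridge.http plugin that comes up later in this post (the same resolution:=optional directive works on Import-Package entries too):

Require-Bundle: com.ibm.domino.xsp.bridge.http;resolution:=optional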

However, when working on plugins with Maven, I've found it useful in a couple of cases to tell Tycho to ignore optional dependencies, to prevent it from choking on unimportant cascading dependencies or other issues. The trouble comes when you combine these two techniques: now Tycho won't bother loading your optionally-required plugin, and so it won't have the classes available when it goes to compile your code.

A solution is to force Tycho to include the plugin in its view of the world regardless of what the MANIFEST.MF files say. This is accomplished in the target-platform-configuration plugin entry that shows up in your average Tycho pom.xml file. If you started from the same template as most people, the required section will already be in there: in the configuration, there should be a node named dependency-resolution and, within that, one named extraRequirements. This allows you to shoehorn in these types of extra plugins - it's used in the normal case here for the "shim" Notes API plugin, to avoid a dependency on Notes.jar. The same can be done for these non-Designer plugins:

<plugin>
	<groupId>org.eclipse.tycho</groupId>
	<artifactId>target-platform-configuration</artifactId>
	<version>${tycho-version}</version>
	<configuration>
		<!-- snip -->
		<dependency-resolution>
			<optionalDependencies>ignore</optionalDependencies>
			
			<extraRequirements>
				<requirement>
					<type>eclipse-plugin</type>
					<id>com.ibm.notes.java.api.win32.linux</id>
					<versionRange>9.0.1</versionRange>
				</requirement>
				<requirement>
					<type>eclipse-plugin</type>
					<id>com.ibm.domino.xsp.bridge.http</id>
					<versionRange>9.0.1</versionRange>
				</requirement>
			</extraRequirements>
		</dependency-resolution>

	</configuration>
</plugin>

Once that's in place, Tycho will include the plugin in its build and will be able to compile properly.

Incidentally, this technique has also proven useful in executing test cases in a setup that makes heavy use of fragments. Tycho won't automatically pick up all fragments when constructing a test environment, but you can force it to include them by adding an extra requirement on a feature that references them. For example:

<!-- snip -->
<extraRequirements>
	<requirement>
		<type>eclipse-feature</type>
		<id>some.parent.feature</id>
		<versionRange>0.0.0</versionRange>
	</requirement>
</extraRequirements>

This is the sort of Maven/OSGi interaction that brings joy to my heart and grey to my hair.

Release Day

Sat Oct 03 19:34:03 EDT 2015

Tags: oda framework

Today, I put two long-overdue releases up on OpenNTF.

First and by-far-foremost is version 2.0.0 of the OpenNTF Domino API. The major version reflects not so much a major new architectural change over the 1.5.x release candidates as it does the fact that those releases were conservatively named and presaged a Java-style "1.x forever" future. Various development builds and release candidates have been used in production by the API team and others for a while now, and so this represents a mature release of changes such as the Maven conversion, revamped auto-recycling, and graph API.

Alongside it, I bumped my own framework up to version 1.1.0 to reflect improved stability and a clean dependency on ODA 2.0.0. I also improved its packaging and created a distributable along the lines of the new ODA version.

So: enjoy!

The Podcasts I Listen To

Tue Sep 22 12:46:27 EDT 2015

Tags: podcasts

Over the years, I've accumulated a stable of podcasts I listen to, mostly tech-related (and then mostly Apple-related), and I realized that this may be a handy list for anyone looking to pick up some new listening material. I've uploaded the full list here, but here are some of the highlights:

  1. Accidental Tech Podcast - This show is hosted by John Siracusa (of OS X review fame), Marco Arment (of Overcast and Peace fame), and Casey Liss (of, now, ATP fame). It's almost all about technology, primarily Apple, with minor divergences into other topics.
  2. Hypercritical - This show ended several years ago, but is worth going back to if you have the time; it's John Siracusa's original podcast.
  3. Reconcilable Differences - Rounding out the Siracusa top-heaviness of my list, this podcast is less about tech and more about growing up and outlooks on life.
  4. Roundtable Podcast - This one not only doesn't involve John Siracusa, but also isn't about programming. Instead, it's about video games, hosted by a couple people I watch on YouTube and Twitch, and with a slant towards indie games.
  5. The Talk Show - Back to form, this is John Gruber's Apple-tech-heavy show, which is a great companion if you follow Daring Fireball.
  6. Random Trek - A podcast about a random episode of a Star Trek series each week. That description should tell you quickly whether or not you'd like it.
  7. Debug - This is something of a long-form conversation/interview show with a tech personality or two each episode. The episodes with Don Melton and Nitin Ganatra are particularly good.
  8. The Incomparable - This one covers geeky culture stuff generally: TV, movies, books, and so forth. I only listen to the episodes where they're talking about something I know, but those episodes are always worth listening to.
  9. Developing Perspective - This is a 15-minute-an-episode podcast about being an iOS developer, and is a great window into that world.
  10. Quit - This is about improving your career, including the potential benefits and perils of quitting a steady-but-crummy job for something else, and is often quite entertaining to boot.

There are a number of other good ones on my list that I didn't cover, but I didn't want to make the "highlights" summary TOO crazy. If you're interested in getting up to speed with the Apple world in particular, you can't go wrong with either ATP or The Talk Show.

Seriously, Though: Reverse Proxies

Wed Sep 16 17:28:38 EDT 2015

So, Domino administrators: what are your feelings about SSL lately? Do they include, perhaps, stress? It's "oh crap, my servers are broken" season again, and this time the culprit is a change in Apple's operating systems. Fortunately, in this case, the problem isn't as severe as an outright security vulnerability like POODLE and, better still, there is a definitive statement from IBM indicating that they are going to bring their security stack up to snuff almost in time.

But this isn't the first time we've been in this position, nor will it be the last. The focus on cracking and hardening TLS, particularly in the context of HTTPS, is not going to let up any time soon, nor will the basic movement towards encryption everywhere. So I would like to reiterate my stance: Domino is not suitable for direct external exposure via HTTP. The other protocols are problematic as well, but HTTP is the big one and, fortunately, the easiest to solve.

Whenever I've made this exhortation, part of the response I get is that administrators "should not" have to take this step. That Domino should be fully modern in its security stack or, at least, that IBM should handle this problem for them in one way or another. Or that one of Domino's traditional strengths is its all-on-one nature, with a single easy installation that takes care of everything, and that installing a separate web server is a complicated step that administrators shouldn't have to take.

Well... tough.

The promise of an integrated server system that took care of everything is a great promise, but it's always been extremely difficult to achieve, even for a platform firing on all cylinders. No matter the ideal, Domino does not perform at this level, and I still maintain that it should not need to. Outside of Domino and PHP, the application server is not generally expected to also be a full-fledged front-end web server, for exactly this sort of reason. Domino's job with respect to the web is to generate and serve up HTML, JSON, and other content; it's something else's job to make sure that that leaves your company's network securely.

If you still maintain that this should be Domino's job due to how much you pay for licensing, then that's a conversation between you and your IBM sales rep. I, though, am entirely fine with a paid-for app server not covering this ground, and that's in large part because the products that do perform this task are superb and often open-source.

These other products – nginx, Apache, HAProxy, and so forth – are made for this job. This flurry of SSL/TLS features and bugs you've been hearing about? These are all implemented or fixed in dedicated products, sometimes years before they come to your attention. And when new problems crop up, they're fixed and talked about immediately across the web, with guides for what to do appearing as soon as the problem arises.

Is it easier to continue using Domino HTTP directly than to set up a reverse proxy? Sure! Well, sort of, when there's not an active disaster to mitigate. And, much like how keeping an XPages (or other web) app up to spec and working on all target devices is more complicated than a legacy Notes app, sometimes that's just how the world goes. Deciding that it's complexity you don't want, or that your company's policy doesn't allow for an additional server, is not a tenable stance. Unless you're Apple, your company's policy will not bend the arc of the industry.
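To put a finer point on how small the ask can be, here is a rough sketch of an nginx reverse proxy in front of Domino. The hostname, certificate paths, and the assumption that Domino's HTTP task listens locally on port 8080 are all illustrative - treat this as a starting point, not a drop-in config:

server {
	listen 443 ssl;
	server_name www.example.com;

	# TLS terminates here, in a stack that gets patched promptly
	ssl_certificate     /etc/nginx/ssl/example.crt;
	ssl_certificate_key /etc/nginx/ssl/example.key;

	location / {
		# Hand everything off to Domino, which no longer faces the outside world
		proxy_pass http://localhost:8080;
		proxy_set_header Host $host;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	}
}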

So, I implore you, at least give this kind of setup a real look and a trial run. I think you'll find that the basic setup is not dramatically more complicated than just Domino alone and will also open the door to new non-security features like improving page load speeds on the fly. If you want, with eyes open, to maintain an externally-facing Domino HTTP stack, that's fine, but I'll see you when the next security apocalypse comes around.

XPages Devs: Enable "Refresh entire application when design changes"

Mon Sep 14 11:48:46 EDT 2015

Tags: xpages java

When developing an XPages application of beyond-minimal complexity, you're likely to run into a problem where your app starts saying that a class is incompatible with itself in one way or another. The exception usually traces down to something like "foo.SomeClass is incompatible with foo.SomeClass" or "cannot assign instance of foo.SomeClass to field X..." where the field is that same class. This has cropped up since time immemorial.

It's actually, though, something that IBM sort-of fixed in 8.5.3 by adding an xsp.properties option, xsp.application.forcefullrefresh=true, and then, in 9.0, a GUI option in Xsp Properties - the checkbox named in this post's title.
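If you'd rather set it by hand, the checkbox corresponds to that one-line xsp.properties setting:

xsp.application.forcefullrefresh=true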

Basically, this checkbox amounts to "don't break my app periodically". From what I gather, the default behavior is in the interest of being clever with classloaders, but can lead to creeping problems in complicated apps, to the point where changing seemingly-innocuous things like the ACL breaks the app until you restart HTTP or "kick" the app by modifying certain design elements (namely, Java classes or faces-config.xml). Since that behavior is never desired, there is no reason to not check this box, and I enable it on every new NSF I create.

Dealing with OSGi Fragments in Tycho and Designer

Fri Sep 11 19:30:20 EDT 2015

Tags: java tycho osgi

This post is partly to spread information publicly and partly a useful note to my future self for the next time I run into this trouble.

In OSGi, the primary type of entity you're dealing with is a "Bundle" or "Plug-in" (the two terms are effectively the same for our needs). However, there's a sort of specialized type that you may run into called a "Fragment". They're similar to a plug-in in that they're a contained unit of Java code and resources, but they have the special property that they're attached to another plug-in and automatically come along for the ride when the main plug-in is used. This is useful in a couple situations, such as code organization, serving platform-specialized native libraries, after-the-fact additions, or providing library dependencies.

In the basic case, the only requirement is to specify in the fragment what the "parent" plug-in is (Eclipse provides a field for this in its editor) and then include the fragment in the installable feature alongside the plug-in. However, there are a few situations where a bit more work is required if you want to access the classes in the fragment: when used as part of a Tycho build and when used as an XSP Library in Designer (which may also apply to Eclipse dependency use generally).
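Under the hood, that "parent" field is just a header in the fragment's own MANIFEST.MF - using the example names from the next section:

Fragment-Host: some.main.plugin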

Tycho

When doing a full Tycho build, even if both the plug-in and its fragment(s) are part of the current build, another project won't automatically include the fragment when doing the compilation. This can lead to a situation where the projects will compile cleanly in Eclipse (which handles the fragment attachment) but fail in Tycho. The trick, though small, is non-obvious: you have to tell the project that is using the fragment code about the fragment in its build.properties.

So say you have three projects: the main plug-in (some.main.plugin), a fragment attached to it (some.main.plugin.fragment), and the project consuming them (some.dependent.plugin). The normal first step is to include the main plug-in in the dependent plug-in's MANIFEST.MF as usual:

Require-Bundle: some.main.plugin

In Eclipse, this will suffice: both the main plug-in and its fragment will show up in the "Plug-in Dependencies" library. For Tycho, though, you have to tip it off using a line like this in build.properties:

extra.. = platform:/fragment/some.main.plugin.fragment

Think of this as saying "hey, dummy, don't forget about the fragment". Once you have that line, the Tycho-enabled Maven build should be able to resolve the fragment's classes and all will be well.
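For reference, the consuming project's full build.properties might then look something like this (the source and output folder names here are typical PDE defaults and will vary by project layout):

source.. = src/
output.. = bin/
bin.includes = META-INF/,.
extra.. = platform:/fragment/some.main.plugin.fragment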

Designer

When using the plug-in and its fragment in an XSP Library in Designer, there's a similar-seeming problem: though Designer will include any direct dependencies of your Library plug-in in the class path, it won't pick up on any fragments by default (though it seems that Domino does). The trick here is that the primary plug-in has to tell Designer that it accepts fragments, which is done by setting Eclipse-ExtensibleAPI in the MANIFEST.MF file for some.main.plugin, like so:

Eclipse-ExtensibleAPI: true

Once that's in place, the fragment should start showing up in your NSF's classpath when the library is enabled.

My MWLUG 2015 Presentation, "Maven: An Exhortation and Apology"

Sun Aug 30 19:07:11 EDT 2015

Tags: mwlug

As prophesied, I gave a presentation at MWLUG last week. Keeping with my tradition, the slides from the deck are not particularly useful on their own, so I'm not going to post them as such. However, Dave Navarre once again did the yeoman's work of recording my session, so it and a number of other sessions from the conference are available on YouTube.

In addition, my plan is to expand, as I did earlier today, on the core components of my session in blog form, in a way that wouldn't have made much sense in a conference session anyway. And, if I'm as good as my word, I'll make a NotesIn9 episode or two on the subject.

Wrangling Tycho and Target Platforms

Sun Aug 30 17:16:13 EDT 2015

Tags: maven tycho

One of the persistent problems when dealing with OSGi projects with Maven is the interaction between Maven, Tycho, and Eclipse. The core trouble comes in with the differing ways that Maven and OSGi handle dependencies.

Dependency Mechanisms

The Maven way of establishing dependencies is to list them in your Maven project's POM file. A standard one will look something like this:

<dependencies>
	<dependency>
		<groupId>com.google.guava</groupId>
		<artifactId>guava</artifactId>
		<version>18.0</version>
	</dependency>
</dependencies>

This tells Maven that your project depends on Guava version 18.0. The "groupId" and "artifactId" bits are essentially arbitrary strings that identify the piece of code; by convention, following Java package naming, they are generally reverse-DNS-style. There are variations on this setup, such as specifying version ranges or sub-artifacts, but that's what you'll usually see. The term "artifact" is a Maven-ism referring to a specific entity, usually a single Jar file, and I've taken to using it casually.

One of the key things Maven brings to the table here is Maven Central: a warehouse of common Maven-ized projects. Without specifying any additional configuration, the dependency declaration above will cause Maven to check with Maven Central to find the Jar, download it, and store it in your local repository (usually ~/.m2/repository). Then, during the build process, Java can reference the local copy of the Jar in the consistently-organized local folder structure. It will also, if needed, download "transitive" dependencies: the dependencies listed by the project you're depending on.
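That folder structure is predictable from the coordinates alone - for the Guava example above, the downloaded Jar lands at:

~/.m2/repository/com/google/guava/guava/18.0/guava-18.0.jar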

OSGi's dependency system is conceptually similar. Instead of the POM file, it piggybacks on the Jar's MANIFEST.MF file with something like this:

Require-Bundle: com.google.guava;bundle-version="18.0"

This is essentially the same idea as the Maven dependency: you reference an OSGi-enabled Jar (called a "Bundle" in OSGi parlance... which can also be a "Plug-in") by its usually-reverse-DNS name and provide restrictions on versions, plus other potential options.

There is no equivalent here of Maven Central: OSGi artifacts are found in Update Sites for each project and are added to the OSGi environment. When you install a plug-in in Eclipse/Designer or Domino, you are contributing to your installation's pool of OSGi artifacts. There are some conveniences to make this experience easier in some cases, such as the Eclipse Marketplace and the primary Eclipse Update Site, but it's not as coordinated as Maven.

The Overlap

Though often redundant, these two dependency mechanisms are not inherently incompatible. A given Jar file can be represented as both a Maven artifact and an OSGi bundle - and, indeed, a great many of the artifacts in Maven Central come pre-packaged with OSGi metadata, and there are Maven plugins to make generating this invisible to the developer.
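For instance, cracking open a Guava Jar's MANIFEST.MF reveals OSGi headers along these lines (abridged and from memory, so treat the specifics as illustrative):

Bundle-SymbolicName: com.google.guava
Bundle-Version: 18.0.0
Export-Package: com.google.common.base;version="18.0.0",
 com.google.common.collect;version="18.0.0"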

Tycho - the Maven plugin that creates an OSGi environment for your Maven development - has the capability to more-or-less bridge this gap. By adding the Tycho plugins to your Maven build, you can point Maven at OSGi Update Sites (called "p2" sites) and Tycho will be able to find the artifacts referenced by your project's MANIFEST.MF Require-Bundle line. Even better, by using <pomDependencies>consider</pomDependencies> in your Tycho config, it will be able to look at the Maven dependencies of your project, check them for OSGi metadata, and then use that to satisfy the MANIFEST.MF requirements.
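That option goes in the target-platform-configuration plugin in your POM - a minimal sketch:

<plugin>
	<groupId>org.eclipse.tycho</groupId>
	<artifactId>target-platform-configuration</artifactId>
	<version>${tycho-version}</version>
	<configuration>
		<pomDependencies>consider</pomDependencies>
	</configuration>
</plugin>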

Though convoluted to say, the upshot is that, when you have that pomDependencies option, things work out pretty well... from the command line. The trouble comes in when you want to develop these projects in Eclipse.

Target Platforms

The aggregate set of OSGi bundles known by your OSGi environment (either Tycho or Eclipse in this case) and used for compilation is the "Target Platform". If you've used the XPages SDK or otherwise set up a non-Designer Eclipse installation for XPages plug-in development, you've seen Target Platforms in action: the installation process locates your Notes and Domino installations and adds their OSGi bundles to Eclipse's Target Platform, allowing them to be referenced by your own OSGi projects.

The trouble is that Eclipse is a bit... inflexible when it comes to specifying a project's Target Platform. Though Eclipse has the capacity to have many Target Platform definitions, only one is active at a time for your entire workspace. Moreover, this Target Platform (plus any projects in your workspace) makes up the entirety of what Eclipse is willing to acknowledge for OSGi development.

This causes serious trouble for Maven dependencies.

If you have a Tycho-enabled project, Eclipse's adapter will not use its Maven dependencies for OSGi requirement resolution. So if your project lists Guava in both OSGi and Maven, even though Maven can see it, and Tycho can see it, and the Guava Jar sitting in your local Maven repository is brimming with OSGi metadata, Eclipse will not acknowledge it and you will have an error that com.google.guava can't be found.

Workarounds

There are a couple potential workarounds for this, none of which are particularly great.

Just Do It Manually

One option is to just have any developers working on the project also track down and manually add all applicable OSGi bundles to their Eclipse installation. It's not ideal, but it could work in a pinch, especially if you only have a single dependency or two.

Include the Project Wholesale

This is the approach the OpenNTF Domino API has taken to date: several of its external dependencies are included wholesale in source form in the project tree. This accomplishes the goal because, with the projects in your workspace, Eclipse will happily acknowledge them as part of the Target Platform, while Tycho will also be able to recognize them. However, it carries with it the significant downside of importing a whole heap of foreign code into your project and then having to ensure that it builds in your environment.

Maven-Generated Target Platform

Another option is to have Maven create a Target Platform file (*.target) dynamically, and then have Eclipse use that as its Target Platform definition. You can do that by including a Maven project like this in your tree:

<?xml version="1.0"?>
<project
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"
	xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>com.example</groupId>
		<artifactId>project-parent</artifactId>
		<version>1.0.0-SNAPSHOT</version>
	</parent>
	<artifactId>example-osgi-target</artifactId>
	
	<packaging>eclipse-target-definition</packaging>
	
	<build>
		<plugins>
			<plugin>
				<groupId>lt.velykis.maven</groupId>
				<artifactId>pde-target-maven-plugin</artifactId>
				<version>1.0.0</version>
				<executions>
					<execution>
						<id>pde-target</id>
						<goals>
							<goal>add-pom-dependencies</goal>
						</goals>
						<configuration>
							<baseDefinition>${project.basedir}/osgi-base.target</baseDefinition>
							<outputFile>${project.basedir}/${project.artifactId}.target</outputFile>
						</configuration>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>
</project>

By creating a shell Target file in Eclipse named osgi-base.target, this project will locate its known dependencies (namely, any dependencies listed in it or in parent projects) and glom the paths of any of those OSGi plugins found in your local Maven repository onto it. In Eclipse, you can then open the generated Target file and set it as your active platform.
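The shell file itself can start out nearly empty - something like the skeleton below, with any hand-maintained update sites going inside locations. This is illustrative; Eclipse's Target editor will generate the equivalent for you:

<?xml version="1.0" encoding="UTF-8"?>
<?pde version="3.8"?>
<target name="osgi-base">
	<locations>
		<!-- any hand-added update sites, e.g. the XPages SDK targets, go here -->
	</locations>
</target>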

This... basically works, but it's ugly. Moreover, it limits your Target Platform customization options. If you want to include other Update Sites in your platform (say, the XPages targets generated by the SDK), you would have to modify the base Target file manually, making it fragile for multi-developer use.

Maven-Generated p2 Site

This is the option I'm tinkering with now, and it's similar to the Target-file approach. However, instead of creating an exclusive Target Platform, you can have Maven create a p2 Update Site and then add that directory to your Target Platform manually. That manual step is still unfortunate, but it's not too bad, and it should adapt automatically as more dependencies are added. A Maven plugin named p2-maven-plugin can do a tremendous amount of heavy lifting here: it can track down Maven dependencies, add OSGi metadata if they don't have it already, do the same for their dependencies, and then put them all into a nicely-organized Update Site:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>com.example</groupId>
	<artifactId>example-osgi-site</artifactId>
	<version>1.0.0-SNAPSHOT</version>
	<packaging>pom</packaging>

	<pluginRepositories>
		<pluginRepository>
			<id>reficio</id>
			<url>http://repo.reficio.org/maven/</url>
		</pluginRepository>
	</pluginRepositories>

	<build>
		<plugins>
			<plugin>
				<groupId>org.reficio</groupId>
				<artifactId>p2-maven-plugin</artifactId>
				<version>1.2.0-SNAPSHOT</version>
				<executions>
					<execution>
						<id>default-cli</id>
						<phase>validate</phase>
						<goals>
							<goal>site</goal>
						</goals>
						<configuration>
							<artifacts>
								<artifact><id>com.google.guava:guava:18.0</id></artifact>
							</artifacts>
						</configuration>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>
</project>

Once this project is executed, you can then add the generated folder to Eclipse's active Target Platform and be set. Though I haven't put this into practice yet, it may be the best out of a bad bunch of options.

Don't Use Eclipse

Well, I guess this final option may be the best if you're not an Eclipse fan - other IDEs may handle this whole thing much more smoothly. So, if you use IntelliJ and it doesn't have this problem, that's good.


These problems cause a lot more heartburn than you'd think they should, considering that this is basic project setup and not even part of the task of actually developing your project, but such is life. As long as you have a dependency on non-Mavenized OSGi artifacts (such as the XPages runtime) or want to use Tycho's full abilities (such as OSGi-environment unit tests or building full Eclipse-based applications) while also developing in Eclipse, you're stuck with this sort of workaround.

MWLUG 2015 - Maven: An Exhortation and Apology

Sun Aug 16 11:55:17 EDT 2015

Tags: mwlug maven

At MWLUG this coming week, I'll be giving a presentation on Maven. Specifically, I plan to cover:

  • What Maven is
  • Why Domino developers should know about it
  • Why it's so painful and awkward for Domino developers
  • Why it's still worth using in spite of all the suffering
  • How this will help when working on projects outside of traditional Domino

The session is slated for 3:30 PM on Thursday. I expect it to be cathartic for me and useful for the attendees, so I hope you can make it.

Maven Native Chronicles, Part 3: Improving Native Artifact Handling

Sun Jul 26 21:38:37 EDT 2015

Tags: maven
  1. Maven Native Chronicles, Part 1: Figuring Out nar-maven-plugin
  2. Maven Native Chronicles, Part 2: Setting Up a Windows Jenkins Node
  3. Maven Native Chronicles, Part 3: Improving Native Artifact Handling
  4. Maven Native Chronicles: Running Automated Notes-based Tests

This post isn't so much a part of the current series as it is a follow-up to a post from the other week, but I can conceptually retcon that one in as a prologue. This will also be a good quick tip for dealing with Maven projects.

In my previous post, I described how I copied the built native shared library from the C++ project into the OSGi fragments for distribution, and I left it with the really hacky approach of copying the file using a project-relative path that reached up into the other project. It technically functioned, but it relied on the specific project structure, which wouldn't survive any reorganization or breaking up of the module tree.

To improve it, I reworked it to be a bit more Maven-y, which involves two steps: attaching the built artifacts to the output of the native project and then using the dependency plugin to copy the native artifacts in as needed. For the first step, I used the build-helper-maven-plugin, though there may be other ways to do it. This is relatively straightforward, though:

<plugin>
	<groupId>org.codehaus.mojo</groupId>
	<artifactId>build-helper-maven-plugin</artifactId>
	<version>1.3</version>
	<executions>
		<execution>
			<id>attach-artifacts</id>
			<phase>package</phase>
			<goals>
				<goal>attach-artifact</goal>
			</goals>
			<configuration>
				<artifacts>
					<artifact>
						<file>${project.basedir}/x64/Debug/nativelib-win32-x64.dll</file>
						<type>dll</type>
						<classifier>win32-x64</classifier>
					</artifact>
					<artifact>
						<file>${project.basedir}/Win32/Debug/nativelib-win32-x86.dll</file>
						<type>dll</type>
						<classifier>win32-x86</classifier>
					</artifact>
				</artifacts>
			</configuration>
		</execution>
	</executions>
</plugin>

This causes the native libraries - so far, the two Windows ones - to be included in the Maven repository during installation and to then be accessible from other projects. The files are named with the artifact's base name plus the classifier, using the type as the file extension - for example, native-project-name-win32-x64.dll.

To copy that artifact into the OSGi bundle project, I then use maven-dependency-plugin to copy it in. Here I reference it via the module name and the classifier/type pair used above (with some shorthands because they're in the same multi-module project):

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-dependency-plugin</artifactId>
	<version>2.10</version>
	
	<executions>
		<execution>
			<id>copy-native-lib</id>
			<phase>prepare-package</phase>
			<goals>
				<goal>copy</goal>
			</goals>
			<configuration>
				<artifactItems>
					<artifactItem>
						<groupId>${project.groupId}</groupId>
						<artifactId>native-project-name</artifactId>
						<version>${project.version}</version>
						<type>dll</type>
						<classifier>win32-x64</classifier>
					</artifactItem>
				</artifactItems>
				<outputDirectory>lib</outputDirectory>
				<stripVersion>true</stripVersion>
			</configuration>
		</execution>
	</executions>
</plugin>

The net result here is the same as previously, but should be more maintainable.