NSF ODP Tooling Example Project

Sun Apr 29 12:51:36 EDT 2018

  1. Next Project: ODP Compiler
  2. NSF ODP Tooling 1.0
  3. NSF ODP Tooling Example Project
  4. NSF ODP Tooling 1.2
  5. How the ODP Compiler Works, Part 1
  6. How the ODP Compiler Works, Part 2
  7. How the ODP Compiler Works, Part 3
  8. How the ODP Compiler Works, Part 4
  9. How the ODP Compiler Works, Part 5
  10. How the ODP Compiler Works, Part 6
  11. How the ODP Compiler Works, Part 7

To go alongside the first proper release of my NSF ODP Tooling, I've added an example project to the Git repository:

https://github.com/OpenNTF/org.openntf.nsfodp/tree/master/example

This project demonstrates how to use the tooling to create an XPages library project and build an NSF that uses it within the same Maven tree. This example project also serves as a reasonable template for the standard kind of project setup I make for Domino nowadays, minus a compile-time test plugin (which I'll probably add in eventually).

Environment Setup

Before building the ODP, you'll need to set up a compilation server and configure Maven to know about it. To start out with, make sure you have a Notes-ified Maven environment as described here. Since the IBM-provided update site is quite old at this point, it may be worth updating it from your local installation.

Next, install the Domino plugins on a Domino server running at least 9.0.1 FP10. It's best to do this on a pristine server without non-standard plugins installed, since part of the compilation process is to load and unload the needed plugins for your project. For my needs, I set up a Linux VM and it's doing the job nicely. Once that's set up, configure your Maven settings.xml to reference your compiler server, merging these values in with the normal Notes properties:

<?xml version="1.0"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
    <profiles>
        <profile>
            <id>main</id>
            <properties>
                <notes-platform>file:///Users/jesse/Documents/Java/IBM/Notes9.0.1fp10</notes-platform>
                <nsfodp.compiler.server>someserver</nsfodp.compiler.server>
                <nsfodp.compiler.serverUrl>http://some.server/</nsfodp.compiler.serverUrl>
            </properties>
        </profile>
    </profiles>
    <activeProfiles>
        <activeProfile>main</activeProfile>
    </activeProfiles>
    <servers>
        <server>
            <id>someserver</id>
            <username>builduser</username>
            <password>buildpassword</password>
        </server>
    </servers>
</settings>

Note the "server" block at the bottom to provide login credentials. The plugins require a non-anymous user - at the moment, they allow ANY non-anonymous user, though, so you can create a new user just for compilation purposes. It's also good practice to encrypt your server connection passwords.

There are also options for deploying the NSF, but, since that's less important for our needs, I'll leave that aside for now. The README has a bit more information about that.

The Example Project

The structure of the projects is similar to the one I detailed in my Java series a couple of years back, but it's evolved a bit since then. The most notable aspects are:

The Folder Structure

Lately, I've been following the advice from the Vogella blog about how to structure an OSGi project. In a case like this, where there is only one plugin, one feature, and one update site, it's overkill, but I've found it to be better to create this structure at the start so you don't end up with a big mess of projects serving different needs within the same flat folder.
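
In rough terms, the layout looks something like this - the nsfs folder holds this project's ODP, while the other folder names follow the Vogella convention and are an assumption rather than an exact listing of the repository:

example/
	bundles/     <- OSGi plugin projects
	features/    <- Eclipse feature projects
	releng/      <- release engineering: update site, distribution
	nsfs/        <- Maven-wrapped On-Disk Projects
	pom.xml      <- root aggregator POM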

The nsfodp-maven-plugin

The most pertinent addition is the ODP wrapped in a Maven project. In the nsfs/nsf-example folder, I created a pom.xml to configure a project of type domino-nsf, which by default expects the ODP to be in an odp folder immediately within it. The ODP in there is entirely normal: it's just a near-fresh NSF exported from Designer, with the main addition being the inclusion of the XPages Library.

The project's pom does a few things: it establishes the project as a domino-nsf project and it adds some additional configuration to the nsfodp-maven-plugin, telling it to include the update site generated earlier in the build as part of the compilation process. This is what allows the NSF to build with the XPages library from the current project.
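
As a sketch of that configuration (I believe the parameter is named updateSites, but take the specifics here - the version and the relative path - as assumptions rather than the example project's exact values):

<plugin>
	<groupId>org.openntf.maven</groupId>
	<artifactId>nsfodp-maven-plugin</artifactId>
	<version>1.0.0</version>
	<extensions>true</extensions>
	<configuration>
		<updateSites>
			<!-- The update site produced earlier in the same Maven tree -->
			<updateSite>${project.basedir}/../../releng/site/target/site</updateSite>
		</updateSites>
	</configuration>
</plugin>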

License Management

Since I've been making contributions for OpenNTF, and particularly since taking over as IP manager, I've developed a much better appreciation for dotting my i's and crossing my t's when it comes to licensing. One of the tools I quite like for this is the license-maven-plugin from com.mycila (it's not the only plugin of that name, but any of them should do the job). This plugin allows you to specify a license header template to be added to the top of each of your source files, an otherwise-tedious task that's easy to neglect. Once I added that to the root pom and set up appropriate exclusions (definitely make sure to add exclusions for any third-party code!), I could run mvn license:format from the root project and have it run through all the source files in the directory tree, adding an appropriately-formatted license header. I've definitely made this a standard part of my project setup now.
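
As a minimal sketch of what goes in the root pom (the header path and exclusion patterns here are examples, not the project's exact values):

<plugin>
	<groupId>com.mycila</groupId>
	<artifactId>license-maven-plugin</artifactId>
	<version>3.0</version>
	<configuration>
		<!-- The license header template to stamp onto each source file -->
		<header>${project.basedir}/license.txt</header>
		<excludes>
			<!-- Leave third-party code's own headers alone -->
			<exclude>**/thirdparty/**</exclude>
			<exclude>**/*.properties</exclude>
		</excludes>
	</configuration>
</plugin>

With that in place, mvn license:format stamps the headers, and mvn license:check can verify them as part of a build.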

Plugin Versioning and maven-enforcer-plugin

This one admittedly doesn't have that much impact on day-to-day development, but it's another good "keeping your ducks in a row" addition: I've taken to explicitly specifying the versions of my Maven plugins, even when the version is implied by the core Maven version. This "locking down" removes one cause of mysterious breakage: an implicitly-included plugin being upgraded in a way that's incompatible with your build process.

My friend in this task is the versions-maven-plugin's versions:display-plugin-updates goal, which will look through your project tree, find plugins that have newer versions in the available repositories, and tell you what plugin versions are implied by the super-pom. I use this information to explicitly enumerate the plugins and find updates - the "copy and paste this block of XML" nature of Maven means it's very easy to end up running a plugin that's several major versions behind.
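
The end result is a pluginManagement block in the root pom along these lines (the plugins and versions are illustrative of the kind of pinning, not a complete list):

<pluginManagement>
	<plugins>
		<!-- Pin even the plugins implied by the super-pom -->
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-clean-plugin</artifactId>
			<version>3.0.0</version>
		</plugin>
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-compiler-plugin</artifactId>
			<version>3.7.0</version>
		</plugin>
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-resources-plugin</artifactId>
			<version>3.0.2</version>
		</plugin>
	</plugins>
</pluginManagement>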

Alongside this, the goal will tell you the minimum required Maven version for everything you're doing, which I've taken to specifying in the old-style prerequisites block as well as via the newer-style maven-enforcer-plugin route. Laying out these requirements explicitly is another good way to avoid phantom problems. Note that, for Eclipse, it's good to add an m2e configuration block to ignore the enforcer execution, since m2e doesn't know what to do with it.
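
As a sketch of both halves (the version numbers here are illustrative, not prescriptive):

<!-- In the root pom's plugins: fail the build early if Maven is too old -->
<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-enforcer-plugin</artifactId>
	<version>1.4.1</version>
	<executions>
		<execution>
			<id>enforce-maven</id>
			<goals>
				<goal>enforce</goal>
			</goals>
			<configuration>
				<rules>
					<requireMavenVersion>
						<version>3.0.1</version>
					</requireMavenVersion>
				</rules>
			</configuration>
		</execution>
	</executions>
</plugin>

<!-- In pluginManagement: tell m2e to ignore the enforcer execution in Eclipse -->
<plugin>
	<groupId>org.eclipse.m2e</groupId>
	<artifactId>lifecycle-mapping</artifactId>
	<version>1.0.0</version>
	<configuration>
		<lifecycleMappingMetadata>
			<pluginExecutions>
				<pluginExecution>
					<pluginExecutionFilter>
						<groupId>org.apache.maven.plugins</groupId>
						<artifactId>maven-enforcer-plugin</artifactId>
						<versionRange>[1.0,)</versionRange>
						<goals>
							<goal>enforce</goal>
						</goals>
					</pluginExecutionFilter>
					<action>
						<ignore />
					</action>
				</pluginExecution>
			</pluginExecutions>
		</lifecycleMappingMetadata>
	</configuration>
</plugin>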

The OpenNTF Artifactory Server

This isn't so much a new technique as it is a prerequisite for using the plugin: currently, the plugin is hosted on OpenNTF's Artifactory server and not Maven Central, so you'll have to add a pluginRepository block for it. Once you have that, you can add the nsfodp-maven-plugin to the root pom.
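
The block in question is the same one I mention in the p2sitexml-maven-plugin post below, and it can go in the root pom or an active settings.xml profile:

<pluginRepositories>
	<pluginRepository>
		<id>artifactory.openntf.org</id>
		<name>artifactory.openntf.org</name>
		<url>https://artifactory.openntf.org/openntf</url>
	</pluginRepository>
</pluginRepositories>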

The Eclipse Tooling

Once you have a project configured in Maven, the next step is to install the Eclipse tooling. This can be installed into any Eclipse installation running Neon or newer in a Java 8+ JVM. For my use nowadays, I primarily use Oxygen.3a on the Mac, but any platform should work.

Once you have that installed, you can import the projects into your Eclipse workspace, and the tooling will adapt some elements for use as a pseudo plugin project. It will auto-generate MANIFEST.MF and build.properties files at the project's top level (which is why it's important to have the ODP in a sub-directory, so the FP10+ MANIFEST.MF isn't overwritten) and use those to configure the XPages Java classpath: the project requires the same plugins as the NSF does, including XPages library plugins, and any Jars used in the NSF are added to the classpath. The result is that you can edit your Java code with full classpath knowledge:

Eclipse Project

Beyond that, it provides some autocomplete capabilities for editing .xsp files. Currently, it has built-in knowledge of the controls that come standard on a Domino 9.0.1 FP10 server, plus any custom controls in your application. This autocomplete takes the form of contributions to Eclipse's standard XML editor, so it's pretty snappy:

Autocomplete

This works for tag names as well as component properties.

Limitations

This tooling has some significant limitations:

  • The biggest is that it doesn't have any special knowledge of most design elements - and, if you use binary DXL for safety purposes, that means that most legacy elements are difficult to modify.
  • It doesn't currently do any programmatic pairing between editable design elements and their associated .metadata and .xsp-config files, so it's best to keep Designer around for creating those.
  • The XSP autocomplete consists just of contributions to the autocomplete list, so it doesn't do any checking for legality of content or tag placement, nor does it currently have any descriptive metadata.

Future Prospects

I've gotten this project to a point where I can reduce my level of daily annoyance with my tools, which is an important step. There are a few more things I'd like to add so I can further reduce my need to use Designer at all:

  • Giving the editors some knowledge of split files, which could allow for manipulating custom control properties in a better way
  • Some property panes, which may be worth making
  • A way to modify binary-format DXL notes, whether by using ODA's CD structure implementation or by round-tripping the DXL through Domino to convert it to a friendlier format for editing
  • Eliminating the strict requirement of having a Domino server around for compilation

The last one is a fun project on its own: my plan is to have a headless Java app loading up an Equinox environment and using the same plugins and REST services as the Domino server. That's mostly functional now, but it has some odd compilation-time bugs that will take some investigation. With that in place, it'd remove the need for any separate server, though it would then require that your development environment have a Notes or Domino runtime available.

As always, I'd welcome any contributions, especially if someone has a particular itch they'd like to scratch. I have some open issues in the GitHub project that I likely want to tackle at some point, and I'm sure this could cover a lot more ground besides.

NSF ODP Tooling 1.0

Sat Apr 28 17:37:42 EDT 2018

A couple weeks back, I started a new project, and today I decided to declare it a 1.0. The premise of this project is simple: I really, really hate Designer. Since the original post, it's expanded into a set of tools that cover three main tasks:

ODP Compilation

The main impetus of the whole thing was to get a way to compile On-Disk Projects into working NSFs without using the extremely-fiddly Headless Designer route (and thus also decoupling the build process from Windows). In 1.0, ODP compilation works by installing a set of plugins on a Domino server (Windows or Linux), configuring some Maven properties, and wrapping the ODP in a Maven project. That process can upload dependent OSGi plugins and compile complex XPages apps as well as import the expected normal legacy Notes resources.
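
The wrapper itself is tiny; a minimal sketch (with placeholder coordinates) looks like this:

<?xml version="1.0"?>
<project xmlns="http://maven.apache.org/POM/4.0.0">
	<modelVersion>4.0.0</modelVersion>
	<groupId>com.example</groupId>
	<artifactId>nsf-example</artifactId>
	<version>1.0.0-SNAPSHOT</version>
	<!-- The custom packaging expects the ODP in an "odp" folder within the project -->
	<packaging>domino-nsf</packaging>

	<build>
		<plugins>
			<!-- Supplies the domino-nsf packaging type -->
			<plugin>
				<groupId>org.openntf.maven</groupId>
				<artifactId>nsfodp-maven-plugin</artifactId>
				<version>1.0.0</version>
				<extensions>true</extensions>
			</plugin>
		</plugins>
	</build>
</project>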

NSF Deployment

In addition to compilation, the Maven wrapper can be configured to deploy the NSF to a Domino server also running the plugins. This portion isn't as battle-hardened as compilation, but it seems to work, as long as your server ID is authorized to run the resultant XPages application, since that's how it'll be signed. Perhaps in the future I'll add the ability to sign with an ID out of an ID Vault.

Eclipse Tooling

In Eclipse Neon or above, the tooling adds a bit of knowledge about how to handle ODP Maven projects as well as some autocomplete capabilities for editing XSP source. Currently, autocomplete knows about the components that come with Domino 9.0.1 FP 10 as well as any Custom Controls inside the NSF, but in time I'd also like to include XPages-Library-contributed components. Additionally, it configures the project as a Plug-in Project with dependencies based on the selected XPages libraries and with source folders and embedded jars set up in the classpath.

The End Result

This project started out as just wanting to get Jenkins compilation working more reliably, but it's grown a bit to also allow for a smoother developer experience when working with NSF data in Eclipse. The "use case", for lack of a better term, that I'm aiming for is when you have the bulk of your code in OSGi plugins but have an NSF to maintain. I don't have any interest in replacing every editor from Designer, but I'd like to have enough that it's practical to do some basic XPages and legacy dev work without having to go over to Designer for everything. It's not fully there yet, but this version is a good start.

Getting The Code

The project is hosted on OpenNTF, and the source code is hosted on GitHub.

Compiling and Testing XPages Plugins With Java 9+

Fri Apr 13 15:38:50 EDT 2018

Tags: java xpages

Thanks to 9.0.1 FP8, we've been able to use Java 8 on Domino for a while, and FP10 makes that support a bit more official at the OSGi level. However, Java 8 is no longer the latest Java runtime, and so anyone writing XPages plugins and compiling/testing them via Maven will likely run into a situation where the compiling JRE is 9 or above. There are a couple changes in these runtimes that add some wrinkles to the process, so I took up the task of working around the problems I hit, and I created a Git repository to contain the code and tips I used to do so:

https://github.com/OpenNTF/org.openntf.domino.java9compat

The README in the repo contains a couple bits of Maven configuration that can be used to successfully compile XPages plugins in Java 9 or 10, and the included project contains a patch fragment for the com.ibm.notes.java.api plugin that serves up the Notes.jar to work around the fact that CORBA has been removed from the standard JRE.

For my needs, those changes made me able to compile a reasonably-complex XPages application, but there may be other edge cases I haven't hit yet. As I do, I'll add information and code there.

Next Project: ODP Compiler

Mon Mar 05 17:32:06 EST 2018

Tags: xpages java

One of the larger thorns in my side with my Domino development lately has been trying to automate builds of on-disk projects into NSFs via Jenkins. In theory, the process is pretty straightforward. It even works sometimes! However, particularly once you add in the necessity to deploy OSGi plugins to Designer first and want to run it from Jenkins, things get extraordinarily flaky: Designer may not launch properly from a behind-the-scenes Jenkins runner, the plugin installation may mysteriously fail, and so forth - and the error reporting is difficult at best.

So it's been on my mind for a good while to find a way to get from an ODP to an NSF without involving Designer, and I decided over the last couple days to really take a swing at it. It's not a small task, though; the process involves a number of difficult steps:

  • Install and activate provided OSGi plugins
  • Create an XPages registry that knows about the XPages libraries installed on the server, including those just contributed
  • Translate XPages and Custom Controls into Java source, with intra-file knowledge of the just-added CCs
  • Create a Java classpath that matches the plug-in dependencies that Designer derives from the dependent XPages Libraries
  • Compile the resultant Java source and any Java classes in the NSF into bytecode
  • Recompose the composite data form of the file data for these elements and many file resources into their DXL ".metadata" files for import
  • Create an NSF and import all of this
  • Compile any LotusScript in source-based libraries from the ODP
  • Uninstall any hot-loaded OSGi plugins

Those steps even leave out some fiddly details, like components defined via .xsp-config files in the NSF or XSP-associated .properties files, not to mention any steps I haven't encountered yet. It's a lot of work!

My first hope was to be able to hook into the process that Designer uses, perhaps grabbing a couple pertinent OSGi plugins and going from there. However, from what I can tell, all the involved plugins are intricately tied into many layers of Designer-the-IDE and so are no small matter to use on their own without also including the entire stack. So that left me to cobble together an equivalent process out of parts.

Fortunately, a couple projects have already provided a solid foundation for this. First and foremost is the XPages Bazaar. This is a project that Philippe Riand created a number of years ago, meant to be a workshop for really experimental components in a form less constrained than the ExtLib became. Since he left IBM, it's sat unmaintained, but I figured it'd be a perfect incubator for this project, so I tossed it up on GitHub, recomposed its Maven structure, and cleaned it up a bit for FP10 use. The reason why it makes such a perfect shell is a pair of its features: an XSP interpreter and an on-the-fly Java compiler. The former hooks into the mysterious guts of the XSP runtime to allow for translation of XSP to Java and the latter wraps the official Java compiler API with some OSGi knowledge to compile that along into bytecode.

Even starting with this, the first couple steps still required a lot of digging around. I learned how to install and activate OSGi bundles, how XPages Registries work internally, and made some tweaks to work around problems I encountered. I also encountered the joy of a bizarre javac bug to do with annotations in enum constructors, which my target project used.

Once I had the XPages-side components compiled, the next step was to start composing the NSF. The ODP format for XPages elements and other "file resource"-type entities is to put the code in its "normal" form in a file and then a subset of the DXL in a ".metadata" file next to it. The trouble here is that, even for entities where the file data is stored in the note unprocessed, the storage format isn't a strict binary blob of the file data: it's a composite data stream of file header and segment structures. I thought of two main ways I could go about getting these file resources into the NSF: via IBM's NAPI and by building the structures into the DXL files before import. The NAPI has a convenient FileAccess class for this purpose (presumably used by Designer), but my attempts to use it met primarily with server crashes. I'm sure it's possible to go this route, but I'd already "solved" the DXL problem years ago, for ODA's Design API. So, at least for now, I took the tack of writing out the binary structure manually, pouring it into the DXL as Base64, and importing that. It's a little inefficient, but it works.
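
To illustrate the shape of it (a hand-waved sketch: the item structure is from memory, and the type value and Base64 content are placeholders, not real output):

<!-- Within the DXL <note> for a file resource -->
<item name='$FileData'>
	<rawitemdata type='1'>
		<!-- Base64 of the composite-data file header/segment records
		     that wrap the actual file bytes -->
		AAECAwQFBg==
	</rawitemdata>
</item>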

Overall, I've made a lot of progress so far, but there's still a lot to be done: not all file types have their data put into the right places, LotusScript isn't properly compiled, Agents don't do anything at all yet, and XPages+CCs aren't actually imported into the NSF. Still, it's in a spot where I'm confident that it can one day work, which is more than I could have said a week ago. If you'd like, browse around the code and pitch in if it's an itch you'd like to scratch as well.

New Small Project: generate-domino-update-site

Wed Jan 31 21:28:20 EST 2018

Tags: java xpages

For a good while now, the Domino Update Site for Build Management has been an essential tool for anyone setting up a local OSGi/Tycho development environment. However, it's really withered on the vine - the latest official release matches stock Domino 9.0.1, while recent fix packs have brought a number of improvements and new classes/methods. Since the binary code is entirely IBM's to distribute or not at their leisure, we can't update the release itself. Today's release of FP10, though, pushed me over the edge to the point where I at least wrote a tool to help the local creation of updated repositories.

The result of my work is generate-domino-update-site, a small CLI and programmatic script that you can point to a Domino installation, a destination directory, and an Eclipse program root to have it create (effectively) an updated version of the update site. The result contains a bit more than the official one did, but that should hopefully not hurt anything. Beyond just copying files into a directory, the script does the dirty work of re-packing unpacked bundles and features, vivifying the Notes.jar wrapper bundles, and creating p2 metadata (hence the Eclipse dependency).

To use the tool, you can clone the repository and run the jar as described in the readme. I may also get around to uploading it as a compiled result to a proper OpenNTF project page.

Lessons From Writing a JNoSQL Driver

Sat Dec 30 11:12:24 EST 2017

The other day, I decided to start up a side project to write an app for my Stars Without Number game in Darwino. Like back when I wrote a forum/raiding app for my WoW guild, I like to use this kind of opportunity to try new technologies and flesh out my skills in existing ones.

One such tech I've had my eye on for a bit is JNoSQL, which is a framework for integrating with NoSQL databases in Java. It's along the lines of Hibernate OGM, but intended to avoid the pitfalls of the relational/NoSQL impedance mismatch that came with trying to adapt JPA directly to NoSQL databases. JNoSQL promised to be much easier to implement for a new database, so I decided to give it a shot.

JNoSQL

JNoSQL is split into two paired components, cleverly named Diana (the driver side) and Artemis (the model/integration side). The task of writing a driver for a new database is pretty well-contained: pick the database type(s) you want to implement (out of key/value, column, document, and graph) and implement about half a dozen interfaces. This is in stark contrast to when I took a swing at writing a Hibernate OGM driver, where the task was significantly more daunting. The final result is only ten Java files, with a chunk of them being utility classes for code organization.

It's a young project - young enough that the best version to run right now is 0.0.4-SNAPSHOT - but it functions well and it's been taken under the wing of the Eclipse foundation, which builds some confidence.

Implementation

Though the task was small, there were still a couple initial hurdles to getting going.

To begin with, I decided to start with the Couchbase driver - this certainly made the overall task easier, since Couchbase's semantics are very similar to Darwino's, but it also meant that I had to be wary of which parts of the codebase were really about implementing a Diana driver and which were Couchbase-isms. Fortunately, this was much easier than the equivalent work when I adapted the CouchDB Hibernate OGM driver, which was a sprawling codebase by comparison.

More significantly, though, it's always tough coming in to modify a codebase written by a single person or small team and learning as you go. The structure of the code is clean, but not quite my normal style (in part because Domino kept me from diving into Java 8 streams for so long), and I also had to ramp up quickly on the internal concepts of Diana. Fortunately, this was mostly easy, since the document-DB driver scaffolding is purpose-built, the hooks are straightforward and the query semantics were extremely easy to adapt. The largest impediment was getting used to the use of the term "Document", which internally refers to a key/value pair, while "DocumentEntity" is closer to the expected meaning.

Like the core implementation, the test suite I adapted from Couchbase was also pleasantly svelte, covering the bases without being an onerous nightmare to convert. Indeed, most of the code I added to it was the Darwino app scaffolding just for the test runtime.

Putting It Into Practice

Once the driver was written, I was hit by a bit of a personal curveball when I went to implement some actual data models. The model side, Artemis, is heavily wrapped together with CDI, which is a Java EE thing that, as I gather, handles managed beans, separation of implementation, and variable injection. This is a pretty normal thing for Java EE developers, but XPages's "don't call it Java EE" environment didn't introduce me to this aspect. As such, the fact that the documentation just kind of casually tossed around CDI terms and annotations threw me for a bit of a loop trying to determine what was required and what was just an idiom.

I eventually determined that I could use the reference implementation, Weld, without necessarily going whole-hog on Java-EE-everything. I'm a bit wary of what this bodes for whether I'll be able to use JNoSQL in Darwino on mobile devices, but I'll cross that bridge when I come to it. Once I got a bit of a handle on what Weld is and how to use it in unit tests (hint: make sure you have beans.xml files!), I was able to start writing my model objects and testing them.

Doing It Again

The fact that the bulk of my implementation work ended up being on the app side with CDI goes to show that the Diana driver model really shines. It got me thinking about how difficult it would be in the future, say, to write a driver for Domino. There'd be some hurdles - Domino's lack of nested objects and antiquated querying mechanisms would need replacing - but the core task wouldn't be too bad. I don't know if I'd have a need for it, but it's nice to keep in mind as a potential future small project.

All in all, I'm optimistic about the use of this. I'd love for Darwino to integrate as smoothly as possible into whatever standard environments it can, and this is one more step in that direction. I'll know as my side app takes shape how much this ingrains itself into my actual work.

State of my Workspace 2017

Thu Dec 28 14:00:00 EST 2017

Tags: sotu
  1. State of my Workspace 2017
  2. State of my Workspace 2021

Since the end of the year is a good time for recaps, I figured it could be fun and useful to look back to see how my development workspace and habits changed over the course of the year. One of the recurring pleasant side effects of working on Darwino is that it provides opportunities to dive into a wide array of tools and techniques, though my normal XPages development improved a bit too.

Eclipse Still Reigns

My primary IDE of choice remains Eclipse on the Mac. I've tried to like IntelliJ - I really have - and using Android Studio has given me a bit of appreciation for it, but the combination of inertia, its features, and the fact that IntelliJ feels more alien on the Mac have kept me with Eclipse. I do keep checking every milestone release notes page to see if they've improved the process of working with Tycho-based projects, though.

Text Editor Brawl

I've been a TextMate user since about when it came out, but the slowdown of development has taken its toll, and my eyes have started to wander. Its main replacement so far has been Sublime Text, which essentially feels like a snappier and more-modern TextMate, and it's suited well enough - I'm using it to write this post, for example. In recent weeks, though, I've also finally started giving Visual Studio Code a trial run for a React app I'm writing. Unsurprisingly, I'm finding it quite pleasant, and it's making a good play to take over as my default in the future.

The Markdown Editor Search May Have Reached Its Conclusion

The Darwino user documentation is written with Gitbook, which uses Markdown, and so I've been looking for a while for a comfortable environment for writing it. Each of the programmer text editors has good syntax support, and TextMate did some inline formatting, but the ones I use seem to either lack an inline preview pane or have one that doesn't work quite like I'd like. I used Haroopad for a while, but development petered out, and it was time to find another. I've recently found Typora - I'd thought at first that its semi-WYSIWYG editing in a single pane wasn't what I wanted, but it surprised me: it's outstanding. I have a few minor gripes, but I think I'm sold (or I will be once they release a for-money version).

Issue Trackers Remain Weird

I've gone through many iterations of how to track to-dos and issues for projects, and I've gone all-in on using integrated issue trackers in Git repos when available. The experience is never perfect, though: I don't like using browser tabs for these, but none of the native apps do quite what I want. One of the kickers is that not all of my projects are on GitHub, so either I need to check in two places or use something that bridges the gap.

Eclipse has Mylyn, which integrates with both GitHub and Bitbucket, but it's always a little janky about it. It does the job, though, and for a good while my solution was to have a separate Eclipse installation running geared entirely to issue tracking - no IDE functionality enabled, just a single window with the list of tasks. That worked kind of well, and I may return to it, but it never felt quite right.

For now, I've settled back on splitting up the two - checking Bitbucket via the web and using Ship as a mostly-native client for GitHub. The Ship UI is excellent enough to overcome my reticence at the split workflow - the handling of milestones, Up Next, and whatnot make it a joy to use.

Aging Hardware Bristling With Drives

My main development machine remains the Late-2014 iMac 5K, which is nervously eyeing the iMac Pro page, but I've augmented it with a refurbished ThunderBay 4 mini to house my workspaces and VMs. I'll likely eventually convince myself to buy an iMac Pro, but for now this machine is still doing its job nicely.

Similarly, my gaming PC is still chugging along in its crazytown new case, and I've recently had a wild hair to cram it full of hard drives and group them with Storage Spaces. It's proving to have turned into a pretty nice NAS-alike and a capable workhorse for Plex serving, VR, and general gaming.

On the Chopping Block

I have a spate of apps that I use that I would love to trim down or replace if I can do so. Prime among those is Slack; while I like Slack for chat rooms better than the old standby of Skype, I'm hardly the first to notice what a hog it is, considering it's just a couple web pages. I've tried out Franz and Rambox a bit to tame the disaster zone of running Slack, Skype, Discord, and Microsoft Teams simultaneously, but they had some showstopping problems of one stripe or another. Still, I don't think the current setup can last another year.

SourceTree continues to be... kind of there. It's taken to crashing randomly every day or so, which isn't a huge impediment to working, but the notification dialog may as well say "hey, maybe it's time to give Git Tower a shot".

Parallels is still doing its job as my VM environment, but the advertising for each successive version gets more and more desperate, and it's really been putting me off. Maybe it's time to make the switch to VMware, especially since it seems to have taken the performance crown in recent versions. Ideally, I wouldn't have to keep Windows running all the time at all, but for now it's a necessity.

First Steps to Code Coverage Analysis in Domino Plugins

Thu Nov 09 08:53:04 EST 2017

Tags: maven domino java

I'm always interested in getting the computer to tell me how to tell it what to do more successfully, and, to further that pursuit, I've started taking an interest in code coverage.

If you're not familiar with the term, "code coverage" refers to reporting on which lines of code were actually executed during runtime, most commonly in association with unit tests. Eclipse (and presumably other IDEs) has support for this, and I've decided to give it a shot.

Since I'm starting this out in the context of Domino plugins, there are more wrinkles than in most tutorials. Namely, the test suites I've written run exclusively through Maven instead of the Eclipse UI due to all the Notes environment setup, so I can't just use the normal UI tools to gather the data. Fortunately, Eclipse's EclEmma will work just fine with the output from a Maven project, as long as you configure it properly. I looked around for a while to find the right combination of tools to use, but it ended up being fairly simple to configure basic output that can be consumed in Eclipse's Coverage view.

There are two main additions. First, add the jacoco-maven-plugin to your root project's project.build.plugins block:

<plugin>
	<groupId>org.jacoco</groupId>
	<artifactId>jacoco-maven-plugin</artifactId>
	<version>0.7.8</version>
	<executions>
		<execution>
			<goals>
				<goal>prepare-agent</goal>
			</goals>
		</execution>
	</executions>
</plugin>

In normal cases, that would suffice. However, since the test configuration I have for Notes overrides the argLine property of the Tycho test runner, there's another step - add the tycho.testArgLine property manually into those blocks, such as in the Windows profile:

<profile>
	<activation>
		<os>
			<family>Windows</family>
		</os>
		<property>
			<name>notes-program</name>
		</property>
	</activation>

	<build>
		<plugins>
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>tycho-surefire-plugin</artifactId>
				<version>${tycho-version}</version>
 
				<configuration>
					<skip>false</skip>
 
					<argLine>${tycho.testArgLine} -Dfile.encoding=UTF-8 -Djava.library.path="${notes-program}"</argLine>
					<environmentVariables>
						<PATH>${notes-program}${path.separator}${env.PATH}</PATH>
					</environmentVariables>
				</configuration>
			</plugin>
		</plugins>
	</build>
</profile>

Once that's configured, running the test suite via Maven will create a new file in the target folder of the test plugin: jacoco.exec. This file can then be consumed in Eclipse by opening the "Coverage" view:

Eclipse's Show View window

In that view, right click and choose "Import Session..." and point to the data file. Click "Next" and check the projects+source folders from your workspace you're interested in analyzing. When you click "Finish", it'll do two things. First, it'll fill the Coverage view with statistics from your run:

Code Coverage stats

(We have a lot of work to do fleshing out our test suites for this one)

Secondly, it'll start highlighting your code to show you what code is executed, which branches are only partially covered, and which lines are skipped entirely. For example (ignore the sickly color scheme - I need to work on that):

Code Coverage example

This shows how several of the if branches are only tested in one direction, while the "Faces" block is skipped entirely. That also shows some of the trouble with testing XPages-run code: the Tycho environment can't reproduce the XPages environment fully, so some branches aren't testable in that way. I haven't looked into gathering similar data from JUnit for XPages, though perhaps that's possible.

For now, though, this will have to do. And, like with these other "code improvement" techniques I've integrated lately, there's a lot of potential tedium - juggling when to write a test to cover some code that will obviously always work just to improve the highlighting vs. just focusing on the low-hanging fruit - but I expect that it will be a nice addition to my workflow over time.

New Small Project: p2sitexml-maven-plugin

Thu Oct 26 14:17:16 EDT 2017

Tags: maven

It's no secret that I have a love/hate relationship with developing for OSGi platforms with Maven. The giant divide between "all-in" Tycho projects (which limit your options with normal Maven features) and trying to bolt on OSGi support in an otherwise-normal project creates an array of problems big and small.

Some of those hurdles would be difficult to bridge, such as any automated tests that want to test the proper functioning of OSGi services. However, not all projects need that - in the case of Darwino, for example, deployment to Domino is a secondary consideration in the Maven project, and so a Darwino app doesn't use Tycho for its packaging or testing. By jumping through a few hoops, we've gotten those projects to the point where they can emit a p2-formatted update site for use in OSGi, and that can be imported into a Domino NSF-based update site.
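
That p2 generation uses the p2-maven-plugin mentioned below; a minimal sketch of its configuration looks like this (the version and artifact list are illustrative):

<plugin>
	<groupId>org.reficio</groupId>
	<artifactId>p2-maven-plugin</artifactId>
	<version>1.2.0</version>
	<configuration>
		<artifacts>
			<!-- Bundles (and their dependencies) to include in the p2 repository -->
			<artifact>
				<id>com.example:some-library:1.0.0</id>
			</artifact>
		</artifacts>
	</configuration>
</plugin>

Running mvn p2:site then emits a p2-formatted repository under the project's target folder.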

There's a minor caveat, though: because those update sites don't know about p2 formatting, you can't use the "Import Update Site" action, instead having to use "Import Features", which leaves the imported features in the "(Not Categorized)" group. This isn't a huge problem, but it's one that's easily fixed, so I wrote a small tool to do just that.

I've created a small open-source project called p2sitexml-maven-plugin, the purpose of which is to generate the site.xml file expected by Notes from a p2 repository generated by other means, such as the p2-maven-plugin. This can be included in a Maven build like so:

...
	<build>
		<plugins>
			...
			<plugin>
				<groupId>org.darwino</groupId>
				<artifactId>p2sitexml-maven-plugin</artifactId>
				<version>1.0.0</version>
				<executions>
					<execution>
						<goals>
							<goal>generate-site-xml</goal>
						</goals>
						<configuration>
							<category>Some Category</category>
						</configuration>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>
...

Right now, the plugin isn't in Maven Central, but is in OpenNTF's Maven server. You can add that to an active profile in your settings.xml file like so:

...
	<pluginRepositories>
		<pluginRepository>
			<id>artifactory.openntf.org</id>
			<name>artifactory.openntf.org</name>
			<url>https://artifactory.openntf.org/openntf</url>
		</pluginRepository>
	</pluginRepositories>
...

It isn't a world-changing thing, but this should at least make the task of targeting Domino with non-Tycho Maven projects a little easier.

Side-Project Monday Evening

Tue Jun 27 09:49:32 EDT 2017

Tags: xpages

Yesterday, in one of my various Slack chats, the topic of JShell - the Java 9 REPL - came up in the context of how useful it would be for XPages development. Being able to open up a "shell" into a running XPages application could be really useful in a lot of ways - and I think that the XPages Debug Toolbar has an SSJS-evaluate feature that would do something like this.

Still, it got me looking around a bit, and I ran across Groovysh Server, which is a project that combines Apache's SSH server with Groovy's REPL to make an interactive remote-login shell running in the context of a given JRE. It even comes with a Spring Framework binding, showing its utility for this sort of thing.

So I decided to see how easy it would be to adapt this into an XPages context, and the answer is "pretty easy". I created a new project called XPages Groovy Shell to do just this. It's an XSP Library that you can enable in an application; when the app is loaded (i.e. when someone visits it via the web), it spawns an SSH server on the specified port to allow logins and evaluation of Groovy code using the app's ClassLoader.

Now, I don't expect this to be a real project necessarily - I have a lot of non-tinkering work on my plate - but it can serve as an interesting proof of concept. Still, it's not too far from being expanded to have some proper user authentication, and, with some mechanism to "break into" the Faces environment to work with existing bean instances, it could be really something. As it stands, take a look - it's not a lot of code, and the concepts could be useful elsewhere too.