Showing posts for tag "domino"

Using Custom DNS Configurations With CertMgr

Thu Oct 24 18:51:20 EDT 2024

The most common way that I expect people use Domino's CertMgr/certstore.nsf is to use Let's Encrypt with the default HTTP-based validation. This is very common in other products too and usually works great, but there are cases when it's not what you want. I hit two recently:

  • My Domino servers are behind Traefik reverse proxies to blend them with other Docker-based services, and by default the HTTP challenge doesn't like it when there's already an HTTP->HTTPS redirect in place
  • I also have dev servers at home that aren't publicly visible, and so can't participate in the HTTP flow at all

The first hasn't been trouble until recently, since the reverse proxy is fine on its own, but now I want to have a proper certificate for other services like DRAPI. For the second, I've had a semi-manual process: my [pfSense](https://www.pfsense.org/)-based router uses its ACME certificate plugin to do the dns-01 challenge (since it knows about my DNS provider) and then, every three months, I would export that certificate and put it into certstore.nsf.

Re-Enter CertMgr

Domino's CertMgr can handle those DNS challenges just fine, though, and the HCL-TECH-SOFTWARE/domino-cert-manager project on GitHub contains configuration documents for several common providers/protocols.

For historical reasons (namely: I didn't like Network Solutions in 2000), I use joker.com as my registrar, and they're not in the default list. Indeed, it seems like their support for this process is very much an "oh geez, everyone's asking us for this, so let's hack something together" sort of thing. Fortunately, the configuration docs are adaptable with formula language (and other methods) - I'll spare you the troubleshooting details and get to the specifics.

DNS Provider Configuration

In certstore.nsf, the "DNS Configuration" view lets you create configuration documents for custom providers. Before I go further, I'll mention that I put the DXL of mine in OpenNTF's Snippets collection - certstore.nsf has a generic "Import DXL" action that lets you point to a file and import it, made for exactly this sort of situation.

Anyway, the meat of the config document happens on the "Operations" tab. This tab has a bunch of options for various lookup/query actions that different providers may need (say, for pre-request authorization flows), but we won't be using most of those here.

Operations

Our type here is "HTTP Request" - there are options to shell out to a command or run an agent if you need even more flexibility, but that type should handle most cases.

The "Status formula" field controls what Domino considers the success/failure state of the request. It contains a formula that will be run in the context of a consistent document used across each phase. If your provider responds with JSON, the JSON will be broken down into JSONPath-ish item names, as you can see in the HCL-provided examples. You can then use that to determine success or failure. Joker replies in a sparse human-readable text format, but does set the HTTP status code nicely, so I set this to ret_AddStatus.

The "DNS provider delay" field indicates how long the challenge check will wait after the "Add" operation, and that latency will depend on your specific provider. I did 20 seconds to be safe, and it's proven fine.

During development, setting "HTTP request tracing" to "Enabled" is helpful for learning how things go, and then "Write trace on error" is likely the best choice once things look good.

HTTP Lookup Request

For Joker, you can leave this whole section blank - its "API" doesn't support this and it's optional, so ignore it.

HTTP Add Request

This section is broken up into two parts, "Query request" and "Request". Set/leave the "Query request type" to "None" or blank, since it's not useful here.

Now we're back into the meat of the configuration. For "Request type", set it to "POST".

"URL formula" should be cfg_URL, which represents the URL configured above. Other providers may have URL permutations for different operations, but Joker has only the one.

Joker is very picky about the Content-Type header, so set the "Header formula" field to "Content-Type: application/x-www-form-urlencoded", which will include that constant string in the upload.

Things get a bit more complicated with the "Post data formula". For this, we want to match Joker's documented example, but we also need to do a bit of processing based on the specific name you're asking for. Some DNS providers may want you to send a DNS key value like _acme-challenge.foo.example.com, while others (like Joker) want just _acme-challenge.foo. So we do a check here:

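REM {Joker wants a bare label like "_acme-challenge.foo", so strip the zone suffix if present};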
txtName := @If(@Ends(param_DnsTxtName; "."+cfg_DnsZone); @Left(param_DnsTxtName; "."+cfg_DnsZone); param_DnsTxtName);

"username=" + @UrlEncode("Domino"; cfg_UserName) + "&password=" + @UrlEncode("Domino"; cfg_Password) + "&zone=" + @UrlEncode("Domino"; cfg_DnsZone) + "&label=" + @UrlEncode("Domino";txtName) + "&type=TXT&value=" + @UrlEncode("Domino"; param_DnsTxtValue)

In my experience, this covers both single-host certificates and wildcard certificates.

HTTP Delete Request

This is for the cleanup step, so your DNS isn't littered with a bunch of useless TXT challenge records.

As before, make sure "Query request type" is "None" or blank.

Similarly, "Request type", "URL formula", and "Header formula" should all be the same as in the "Add" section.

Finally, the "Post data formula" is almost the same, but sets the value to nothing:

txtName := @If(@Ends(param_DnsTxtName; "."+cfg_DnsZone); @Left(param_DnsTxtName; "."+cfg_DnsZone); param_DnsTxtName);

"username=" + @UrlEncode("Domino"; cfg_UserName) + "&password=" + @UrlEncode("Domino"; cfg_Password) + "&zone=" + @UrlEncode("Domino"; cfg_DnsZone) + "&label=" + @UrlEncode("Domino";txtName) + "&type=TXT&value="

Putting It To Use

Once you have your generic provider configured, you can create a new Account document in the "DNS Providers" view.

In this document, set your "Registered domain" to your, uh, registered domain - in my case, "frostillic.us". This remains the case even if you want to register wildcard certificates for subdomains, like if I wanted "*.foo.frostillic.us" - CertMgr uses this value to match your certificate request, including subdomain wildcards.

There's a lot of room for special tokens and keys here, but Joker only needs three fields:

"DNS zone" is again your domain name.

"User name" is the user name you get when you go to your DNS configuration and enable Dynamic DNS - it's not your normal account name. This is a good separation, and a lot of other providers will likely have similar "don't use your normal account" stuff.

Similarly, "Password" is the Dynamic-DNS-specific password.

Joker account configuration

TLS Credentials

Your last step is to actually put in the certificate request. This stage is pretty much identical to the normal process, with the extra capability that you can now make use of wildcard certificates.

On this step, you can fill in your host name, servers with access, and then your ACME account. Even more than with the normal process, it's safest to start with "LetsEncryptStaging" instead of "LetsEncryptProduction" to avoid hitting Let's Encrypt's rate limits, which can temporarily block you if you make too many requests.

With a custom provider, I recommend opening up a server console for your CertMgr server before you hit "Submit Request", since then you can see its progress as it goes. You can potentially get more info if you launch CertMgr with load certmgr -d for debug output. Anyway, with that open, you can click "Submit Request" and let it rip.

As it goes, you'll see a couple lines reading "CertMgr: Error parsing JSON Result" and so forth. This is normal and non-fatal - it comes from the default behavior of trying to parse the response as JSON and failing, but it should still put the unparsed response in the document. What you want is something at the end starting "CertMgr: Successfully processed ACME request", and for the document in certstore.nsf to get its nice little lock icon. If it fails, check the error message in the cert document as well as the document in the "DNS Trace Logs" view - that will contain logs of each step, and all of the contextual information written into the doc used by your formulas.

Wrapping Up

This process is, unfortunately, necessarily complicated - since each DNS provider does their own thing, there's not a one-config-fits-all option. But the nice thing is that, once it's configured, it should be good for a long while. You'll be able to generate certificates for non-public servers and wildcard at will, and that makes a lot of things a lot more flexible.

Simplifying the Maven Build of the NSF File Server Project

Wed Apr 10 17:02:09 EDT 2024

When working on the NSF File Server project that I talked about the other day, I took a slightly different tack as far as building it than I did in the past, and I think it's worth going over some of that in case it's useful for others.

Initial Version

The first version of this project was a non-OSGi WAR file meant to be deployed to an app server like Liberty, not to Domino's OSGi stack, and so it's never involved Tycho. This made it mostly simpler, since its various dependencies are normal Maven dependencies and so I didn't have to worry about any of the annoying hoops.

However, it did have some native Domino dependencies: Notes.jar and the NAPI. These would need to be included as Maven dependencies and brought into the final WAR. The way I handled this was using the generate-domino-update-site project, which lets you first generate a p2 site in the style of the painfully outdated IBM-provided update site and then, if desired, turn that p2 site into more-normal Maven artifacts.

When I eventually switched from targeting a WAR file to having it run on Domino, I used the same dependency structure. The Domino version runs as an HttpService implementation, and so I pointed at the Mavenized version of the com.ibm.xsp.bootstrap and com.ibm.domino.xsp.adapter bundles.

Then, I used the maven-bundle-plugin, which fits the job of taking an otherwise-normal Maven project and making it work in OSGi environments (mostly). The way that plugin works is that you specify a lot of your MANIFEST.MF rules in the pom.xml:

<plugin>
	<groupId>org.apache.felix</groupId>
	<artifactId>maven-bundle-plugin</artifactId>
	<extensions>true</extensions>
	<configuration>
		<instructions>
			<Bundle-SymbolicName>org.openntf.nsffile.httpservice;singleton:=true</Bundle-SymbolicName>
			<Bundle-RequiredExecutionEnvironment>JavaSE-1.8</Bundle-RequiredExecutionEnvironment>
			<Export-Package/>
			<Require-Bundle>
				com.ibm.domino.xsp.adapter,
				com.ibm.commons,
				com.ibm.commons.xml
			</Require-Bundle>
			<Import-Package>
				javax.servlet,
				javax.servlet.http
			</Import-Package>
			<Embed-Dependency>*;scope=compile</Embed-Dependency>
			<Embed-Transitive>true</Embed-Transitive>
			<Embed-Directory>lib</Embed-Directory>
			
			<_removeheaders>Require-Capability</_removeheaders>
			
			<_snapshot>${osgi.qualifier}</_snapshot>
		</instructions>
	</configuration>
</plugin>

The first couple are one-for-one matches to what you'd have in the MANIFEST.MF, but things get weird once you get to the "Embed-*" ones.

The Embed-Dependency instruction is a potent one: you give it a description of what dependencies you want embedded in your OSGi bundle (in this case, all my non-provided dependencies), and then it does the job of copying them into your final bundle JAR. You can do this other ways - copying them manually, using the Maven Dependency Plugin, or others - but this handles all your transitive stuff nicely for you, thanks to Embed-Transitive. I use Embed-Directory here just for cleanliness - the result is functionally the same without it.

The final bits are just for cleanliness: I remove Require-Capability to avoid some trouble I had with older Domino versions, and then I set what the snapshot value will be, which ends up being the current build time.

With this, I end up with a single OSGi bundle with everything in it. This works well for this sort of project - with something to be used in Designer, I prefer to make a big pool of distinct OSGi bundles to make it so that you can look up the source properly, but something server-only like this doesn't need that.

2.0 Version

In this new version, the switch to JNX meant that I was tantalizingly close to not having to do any weird dependency stuff: JNX is distributed in Maven Central, so I didn't need to have the weird locally-built stuff like I did for Notes.jar and the NAPI.

However, that wasn't everything: there are still the "bootstrap" bundles containing the HttpService superclass and related classes. While I don't need to distribute those anywhere, they're still required to compile the classes - no amount of non-verified text files or the like will get around that.

I came up with another way, though. Java only needs classes that look like those to compile, and then the compiled class of mine will be the same regardless of anything else. This is critical: I don't actually need any implementation code, and that's the part I can't redistribute. So I made two little Maven modules: com.ibm.xsp.bootstrap.shim and com.ibm.domino.xsp.adapter.shim. These modules contain just a handful of classes, and of those classes only the methods actually referenced by my code. Since these modules will then be marked as "provided", they won't be bundled into the final JAR.
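As a sketch of the idea (the class, package, and method signatures here are illustrative - a real stub has to mirror exactly the members your code references, as found in the original bundle):

// Compile-only stub of an XPages bootstrap class. Only the members my code
// touches need to exist, and their bodies are irrelevant: this module is
// marked "provided" in Maven, so it never ends up on the runtime classpath.
package com.ibm.designer.runtime.domino.adapter;

public abstract class HttpService {
	public HttpService(LCDEnvironment env) {
		// No-op: the real implementation ships with Domino
	}

	// Hypothetical signature - copy the real one from the platform class
	public abstract boolean doService(String contextPath, String path,
			HttpSessionAdapter session, HttpServletRequestAdapter req,
			HttpServletResponseAdapter resp) throws java.io.IOException;
}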

This squared the circle nicely: now I can compile the Java side without any weird prerequisites. Admittedly, the two NSFs in the module set still require an NSF ODP Tooling environment, which is a whole other ball of wax, but it's a step in the right direction.

Other Uses

This technique can be used in other similar projects when you only need a few classes from the XPages stack. For example, if your goal is to just wrap a third-party library and provide it to XPages in Designer, you could probably do this by making a stub implementation of XspLibrary and related classes, and skip the whole generate-domino-update-site step. The more you use from the stack, the less practical this is - for example, the XPages Jakarta EE project reaches into all sorts of crap, and so I can't really do this there. For this, though, it works nicely.

NSF File Server 2.0

Sun Apr 07 13:59:18 EDT 2024

A few years ago, I made a little project that hosts an SFTP server that stores documents in an NSF. I've used it here and there since then - as in the original post, I stashed some company docs in it to have them nicely synced among our Domino servers, and I've also had cases where clients use it to, for example, provide a way for their vendors to upload files in a standard way.

The other week, I decided to dive back into it to add some capabilities I'd wanted for a while, and the result is version 2.0.0. This version is a significant revamp that adds quite a bit.

Multiple Mounts

The first limitation I wanted to improve was the fact that the first version was restricted to a single NSF. That works fine in the basic case, but I wanted to start doing things like storing server config backups in there, and wouldn't necessarily want them in the same NSF as, say, company contracts, credentials, and secrets.

The way I went about this was to make it so that the new configuration NSF has a "Mounts" view that lets you specify a path in a conceptual top level of the filesystem that would then point to a target NSF. This allows the admin to do things like have separate ACLs for different mounts - since the client will act as an authenticated Domino user, these will be properly obeyed, and a user won't be able to access documents they don't have rights to.

Additionally, I could configure it so that not all mounts are present on all servers, which will come into play particularly with the next feature.

Screenshot of the Mounts view in the file server config NSF

New Filesystem Types

Once I had a composite top-level filesystem, I realized that it wouldn't be terribly difficult to allow more filesystem types than the original NSF file store I made. That filesystem is built using the NIO File System Provider framework added in Java 7, and that system is designed to be pretty extensible. By default, Java comes with a few providers: the normal local filesystem as well as one that can treat a ZIP or JAR as a contained filesystem itself. These are accessed in a generic way, where you specify a URI and a map of "env" properties that are provider-specific.

For example, the ZIP filesystem takes a URI in the form of "jar:file:/some/path/to/file.zip" and an environment map configuring things like whether the archive should be created if it doesn't already exist and what encoding to use for filenames (very important if you have Unicode characters in there).
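That generic access pattern looks like this - a minimal sketch, with a hypothetical ZIP path:

import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.util.Map;

public class ZipFsExample {
	public static void main(String[] args) throws Exception {
		URI uri = URI.create("jar:file:/some/path/to/file.zip");
		// Provider-specific "env" properties: create the archive if missing,
		// and use UTF-8 for entry names
		Map<String, String> env = Map.of("create", "true", "encoding", "UTF-8");
		try (FileSystem fs = FileSystems.newFileSystem(uri, env)) {
			Files.write(fs.getPath("/hello.txt"), "hi".getBytes(StandardCharsets.UTF_8));
		}
	}
}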

So I added ways to configure a mount to the local server filesystem (similar to what the Mindoo FTP Server does) and then a generic configuration for any installed provider. It's probably uncommon that you will have a custom File System Provider implementation in your Java classpath, but hey, maybe you do, and I want to allow that to work.

I also added an extension point to the project itself that allows adding new providers via plugin.xml files in OSGi, and I can think of a couple other projects that may use this, like the NSF ODP Tooling.

WebContent Filesystem

Beyond adding the JVM-provided systems, I wrote another new filesystem type, one that provides access to the conceptual "WebContent" directory presented in Package Explorer in Designer:

Screenshot showing Designer and Transmit looking at the same WebContent in an NSF

The idea here is that this could be used to deploy, say, a JavaScript client application to an NSF without the developer or build server having to know anything about Domino. Pretty much everything can work with SFTP, so this makes accessing those files a lot easier. This is similar to the WebDAV capabilities Domino has had for a very long time, but with a different protocol.

Server Keypairs

In the first version, the app would generate and store the server's SSH keypair on the filesystem, in the data directory. This is fine, but part of the point of this whole project is that I like to get away from non-replicating stuff, and so I moved these keys to the configuration NSF. Now, on first connection, the server will look for a keypair document in the NSF and, if it doesn't exist, will generate a new one and store it there. Since I've been working with encrypted fields a lot for client work lately, I also realized that this was a good use for it: the public key is a normal text item (so you could distribute and verify it as needed), but the private key is encrypted with the generating server's ID file. Since only the server itself ever needs to know its private key, this works swimmingly.

JNX

This isn't a new app feature per se, but this was a good situation for me to put JNX to work in an open-source project. I had originally written this using the lotus.domino classes for most work and the IBM NAPI for things like generating sessions for a given username, but switching to JNX let me ditch both of those.

Admittedly, this is a case where switching to JNX didn't grant me significant new capabilities, but it DID let me do a couple things better. Some things are distinct feature improvements, like improving password authentication (previously, I was doing a compare of hashes "manually", which is fragile), while others are just making the code smoother, like no longer having to do the read-convert-recycle dance with DateTimes in LSXBE. It's just pleasant, and let me find a few places where the JNX API could be improved.

Future Additions

When I pick this project back up, there are certainly a couple things I'd like to add.

One would be to look into rsync support: rsync is tremendously useful for things like synchronizing filesystem-bound configs, but it's its own protocol tunneled over SSH, and so just having SFTP isn't enough. The underlying Apache Mina SSHD project is a general SSH server and not just SFTP, so it may be possible to do it by intercepting the commands sent over to initialize rsync, but it will be non-trivial. There's a library in Java that provides an rsync server, but it's GPL-licensed, and so I have to keep away for license-safety's sake.

Beyond that, it's mostly that I'd like to implement more filesystem types. Presenting data as a filesystem can be a very powerful tool: you could imagine providing access to documents in a DB as DXL or YAML, or listing files from a Document Library NSF, or (as I'd like to do some day) having the NSF ODP Tooling project replicate the ODP layout over SFTP.

For now, I'm looking forward to putting it to more use as a coordinating point. If I keep messing around with apps on TrueNAS, it'll give me a good feeling of security to have more info stashed in Domino and less prone to destruction if one server happens to blow up.

XPages JEE 2.15.0 and Plans for JEE 10 and 11

Fri Feb 16 15:30:40 EST 2024

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

Today, I released version 2.15.0 of the XPages Jakarta EE project. As is often the case lately, this version contains bug fixes but also a few notable features:

  • You can now specify Servlets in WEB-INF/web.xml (as opposed to just via the @WebServlet annotation). This is helpful for defining a Servlet when the actual implementation is in a JAR or when following non-annotation-based examples
  • You can now specify context-param values in WEB-INF/web.xml in the NSF and META-INF/web-fragment.xml in JAR design elements, which will be available to JSP, JSF, JAX-RS, @WebServlet-annotated Servlets, and web.xml-defined Servlets
  • Added @BooleanStorage annotation for NoSQL entities to define how boolean values are converted to note items
  • Added CRUD operations for calendar events to NoSQL, around a few new methods on Repository. This exposes some of the capabilities of NotesCalendar and can be used for, for example, providing an iCalendar feed based on a mail database. To go with that, XPages JEE also re-exports iCal4J as included in the Domino stack for NSF use, though this API is... not smooth

The first two here are focused around bringing NSFs more in line with "normal" Jakarta EE applications, while the latter two are some nice improvements for the NoSQL driver. I hope to put the last one in particular to good use - for example, OpenNTF's site will be able to provide a calendar of webinars and other events that we can manage internally using a normal Notes calendar, and that sounds nice to me.

Next Versions

I still have the 3.x branch of the project chugging along, and I think it'll be ready for a real release before too long. Since it'll be a breaking-changes release thanks to upstream changes, I'm using it as an opportunity to consolidate the sprawl of features and XPages Libraries. Currently, my plan is:

  • One for "core", covering most things in the Jakarta EE Core Profile, plus the other utility specs I've implemented: Transactions, Bean Validation (which really should be in Core in my estimation), Concurrency, Servlet, and so forth, plus Data and NoSQL
  • One for "UI", covering Jakarta Pages, Jakarta Faces, and MVC - basically, the stuff you could use to replace XPages to make an HTML-generating app in your NSF
  • One for MicroProfile, or at least the specs I've implemented so far. I'm a little tempted to wrap this in to Core, since things like OpenAPI are useful almost all the time, but it's a clean-enough separation that it'll be fine

This will require Domino 14, since Jakarta EE 10 requires at least Java 11.

That brings me to some unexpected good news: though Jakarta EE 11 was long planned to use Java 21 as its minimum version (since 21 is the current LTS), it looks like they've switched to making Java 17 the baseline. For me, this is a little sad in an idealistic sense, since it pushes things like Virtual Threads out of the realm of being a core part of JEE, but I'm very happy that I'll be able to use all JEE 11 specs in Domino 14. Even if Domino 15 used Java 21, it'd still be a long while before that came, and we'd lag behind the standard for at least a year. Instead, this puts the project back in line with upstream, and allows me personally to potentially resume committing to Jakarta NoSQL - I'd been out of the loop for a very long time when it moved to 11 and then 17 as its required version.

I don't know right now whether JEE 11 will be the same sort of breaking change for the project (which would mean a 4.x release) or if I'll be able to make it a 3.x one - the specs aren't out yet, so time will tell. The big focus of 11 will be further centralization on CDI instead of EJB, and I'm all for it.

My plan is to get 3.x out for Domino 14, based on JEE 10, as soon as time allows, and then I'll start looking into bumping to JEE 11 when it releases in the summer.

Notes/Domino 14 Fallout

Fri Dec 15 11:53:44 EST 2023

  1. AbstractCompiledPage, Missing Plugins, and MANIFEST.MF in FP10 and V10
  2. Domino 11's Java Switch Fallout
  3. fontconfig, Java, and Domino 11
  4. Notes/Domino 12.0.2 Fallout
  5. Notes/Domino 14 Fallout
  6. PSA: ndext JARs on Designer 14 FP1 and FP2
  7. PSA: XPages Breaking Changes in 14.0 FP3

Notes and Domino 14 are out now and, as I discussed back in June, the big deal for me is the move to Java 17. This also came with a refresh of the Eclipse innards, from Neon (circa 2016) to 2021-12 (circa, uh, 2021). The Eclipse update is welcome, but so far it's been less impactful than the Java update - at some point, I'll want to see if some of the current-era Eclipse plugins work here, but that's for the future.

In the mean time, there's a bunch to know, so let's get to it! I've broken this one down into "critical" and "less critical" sections, since this post will likely have a similar life to my target platform one.

Critical To Know

ndext

I mentioned this in June, but an important thing to know about Java 17 is that "jvm/lib/ext" no longer exists. Fortunately, Notes and Domino have long had a secondary location for this sort of thing: "ndext" in the program directory. Any JARs in this folder are available at runtime on both platforms, and so the quick fix is to move anything you had in "jvm/lib/ext" to "ndext".

...but.

When it comes to Notes, there's a distinct difference between "available at runtime" and "on the build path". While Java agents here are fine - they modified the editor to include "ndext" in the build path - there's no such accommodation for XPages. So far, the best workaround I've thought of is to add any such JARs manually to the JRE definition in Eclipse's preferences:

Screenshot of Designer's JRE preferences, showing the addition of an ndext JAR

When doing this, there's a huge caveat: do not just add everything from this directory to the JRE. Most of it will be redundant, like the SWT stuff, but some will be actively harmful, in particular "jsdk.jar". That's an even-older Servlet version than comes with XPages, and including that will cause Designer to think that methods added in Servlet 2 aren't present. Only add the JARs you're extending it with.

Ideally, this will be remedied one way or another in the future, but for now we'll have to do this to account for it. The silver lining may be that it's a good impetus to OSGi-ify your dependencies if possible.

POI Remains

This isn't new, but it's worth mentioning while we're on the topic of "ndext". Since Notes 11, the client ships with an old version of Apache POI, but this is painfully distributed right in the JVM and not at the OSGi layer. Newer versions of POI 4 XPages deal with this, but it's important to know that, while POI is in Notes, it's not in Domino. If you write agents using these classes, or add them to your JRE and use them in XPages, you'll also need to deploy them to Domino.

Java Policy Location

This one's more of a note, and it's reiterating something from June: the location of "java.policy" and (if you add it) "java.pol" changed since Java 8. They used to be in "jvm/lib/security" but they're in "jvm/conf/security" now. They work the same as before, as does putting a file named ".java.policy" in the Domino user's home dir, so the other characteristics haven't changed.

sun.misc.BASE64Decoder

Back in Domino 11, the move from IBM J9 to OpenJ9 meant that some IBM internal forks of Sun internal classes were no longer available. The most notable of these were the BASE64 classes com.ibm.misc.BASE64Encoder and BASE64Decoder. These were very popular to use in the pre-Java-8 days, before java.util.Base64 existed, and so it was worth noting that they were gone then.

Well, sun.misc.BASE64Encoder has now met a similar fate - it's probably still in there somewhere, but it's no longer accessible by user code. If you haven't made the switch to java.util.Base64, do so now.
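The replacement has been in the JDK since Java 8 and is a one-liner in each direction:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Example {
	public static void main(String[] args) {
		// java.util.Base64 replaces the old sun.misc/com.ibm.misc classes
		String encoded = Base64.getEncoder().encodeToString("hello".getBytes(StandardCharsets.UTF_8));
		byte[] decoded = Base64.getDecoder().decode(encoded);
		System.out.println(encoded + " -> " + new String(decoded, StandardCharsets.UTF_8));
	}
}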

Target Platform Bug

This is another note, but this classic bug that's been with us since 9.0.1FP10 is... fixed, I think! I'd thought at first that it remained, since the Target Platform config still just lists the Eclipse home dir, but installing and using an XPages library seemed to work properly without further change. Good!

Java Compiler Level

This is a fiddly one. Though Designer uses Java 17 by default now, it has the ability to compile down to older versions for past compatibility, such as running on a pre-14 server. Java agents have had their own way of dealing with this for a while, though it seems like maybe they default to Java 8 now, which is good. With XPages, it seems like it's a little less obvious.

From my experience, this is what I've found:

  • Existing projects may have a specific compiler setting specified (similar to agents in the above-linked post) and so will remain as Java 1.8
  • New projects seem to use the active workspace setting, which will matter
  • Upgrades of Designer in place will keep the default compiler level for the workspace at 1.8
  • A fresh installation sets the compiler level to 17

This is all... fine. It'd be nice if it was a little more obvious to the developer (like if it was controlled by the xsp.properties setting for minimum version), but this is more or less the Eclipse way of doing things. We'll just have to be aware for a while of these settings and their interactions when developing for pre-14 deployment. If you're targeting an older server, you should make sure that you go to Project Properties for your NSF and set the Java Compiler level to fit:

Screenshot of the Java Compiler settings for an NSF

Less Critical

Alright, that covers the big-ticket stuff, but there are a few more things to know.

Java Deprecation Warning On HTTP Start

This one is documented, though oddly in the "no longer included" page: when you start HTTP on Domino, you'll see a message like this:

WARNING: A terminally deprecated method in java.lang.System has been called
WARNING: System::setSecurityManager has been called by lotus.notes.AgentSecurityManager (file:/C:/Domino/ndext/Notes.jar)
WARNING: Please consider reporting this to the maintainers of lotus.notes.AgentSecurityManager
WARNING: System::setSecurityManager will be removed in a future release

This is actually totally fine. As it says, the whole SecurityManager apparatus is gone in future versions, but Notes and Domino still use it for agents (and, unfortunately, XPages). While I long for the day when I never have to think about it again, this is a reasonable-enough compromise for 14 as a "transitional" version in its Java journey. So... you can ignore this and not worry.

"Java Main Sources" and "Java Test Sources" Working Sets

If you use Working Sets in Designer, you may notice two entries not of your creation:

Screenshot of the 'Select Working Set' dialog in Designer 14

These showed up in Eclipse somewhere along the line, presumably to make it easier for people to select those types of projects without manually managing their working sets. Designer inherits this and HCL didn't remove them, but you're free to delete them if you want.

"AbstractCompiledPage cannot be resolved"

This is another one that came up back in 9.0.1FP10 and it's still here, but it's less of an issue: every once in a while, when building a project, Designer will complain about the AbstractCompiledPage class not being found. In my experience, this only shows up temporarily during a build, but it's worth noting that, if it does stick around for you, a Project - Clean should fix it.

JAXB and CORBA

After Java 8, a few Java EE components were removed from the normal JRE distribution in favor of developing them in Java EE, then Jakarta EE. JAXB is one of them (now Jakarta XML Binding) - we don't normally use this directly, but it comes up sometimes, either directly (likely as one of the other old-timey BASE64 workarounds) or as a dependency. It shouldn't matter in Designer, but, if you're writing Domino-targeting code outside Designer, you may need to be aware. One way or another, you should add this as a dependency - in a basic case, you can add "jakarta.xml.bind-api.jar" and "jaxb-impl.jar" from "ndext" to your build path.

Similarly, CORBA support classes ("org.omg" stuff, usually) used to come with the JRE and now don't. To work around this, HCL did what I've done in the past and included "glassfish-corba-omgapi.jar", in their case putting it in "ndext". If you're using Notes.jar with Java > 8 outside Designer, you'll need this one too.

Conclusion

I think that covers it for now. Considering how much changed with Java from 8 to 17, this could have been a lot rougher, though I fear that some of these workarounds will plague us for a long time.

In the mean time, I'd appreciate it if you vote for this Aha idea to keep Domino on a good Java update cadence. It'd be a shame if we sit on 17 right up until the very end of its life, as we did with 6 and 8, in part since these multi-version moves are more painful than single-LTS updates.

New Tiny Project: Wink Chattiness Patch

Mon Sep 18 17:24:06 EDT 2023

Tags: domino

I've been using the Domino 14 betas for development for a while now, and one of the things that has driven me a little nuts is the way Wink spews a bunch of INFO-level logs to the server console when the XPages runtime initializes. You've probably seen it - this stuff:

Screenshot of a Domino console on Windows displaying Wink INFO logs from Verse

It goes on for a while like that.

This isn't new with 14 as such - it's just that 14 now ships with Verse by default, and Verse uses the Wink distribution that came along with the Extension Library, and so now everyone sees this.

I had encountered this before, back when one of my client projects used Wink before transitioning to what became the XPages JEE project. Back then, I wrote a shim in the project's Activator to reflectively insert replacements for each logger object. After seeing these messages from Verse for the millionth time, I decided to dust that off and turn it into a little project.

Thus was born the Wink Chattiness Patch, a project with an expected single release that has a simple purpose: when installed, it makes Wink less annoying.

This version is actually a bit more clever than the original from my client project. Part of that was born out of necessity: originally, the shim involved writing to final fields in classes, but that's no longer possible (normally) since Java 12, so that path was out. Instead, now I pre-populate the internal Logger cache with my shim objects. I also made them a bit better: rather than just raising the logging threshold from INFO to WARN, they redirect to java.util.logging, which then will log to error-log-*.xml as appropriate, like other parts of the XPages stack.

Ideally, HCL will improve this themselves (I recommend they look at replacing slf4j-simple in the Wink bundle with slf4j-jdk14, which is probably the most-expedient path), but, failing that, this patch should make your Domino console just a bit less hairy.

Kicking the Tires on Domino 14 and Java 17

Fri Jun 02 13:50:53 EDT 2023

Tags: domino java

As promised, HCL launched the beta program for Domino 14 the other day. There's some neat stuff in there, but I still mostly care about the JVM update.

I quickly got to downloading the container image for this to see about making sure my projects work on it, particularly the XPages JEE Support project. As expected, I had a few hoops to jump through. I did some groundwork for this a while back, but there was some more to do. Before I get to some specifics, I'll mention that I put up a beta release of 2.13.0 that gets almost everything working, minus JSP.

Now, on to the notes and tips I've found so far.

AccessController and java.policy

One of the changes in Java 17 specifically is that our old friend AccessController and the whole SecurityManager framework are marked as deprecated for removal. Not a minute too soon, in my opinion - that was an old relic of the applet days and was never useful for app servers, and has always been a thorn in the side of XPages developers.

However, though it's deprecated, it's still present and active in Java 17, so we still have to deal with it. One important thing to note is that java.policy moved from the "jvm/lib/security" directory to "jvm/conf/security". A while back, I switched to putting .java.policy in the Domino user's home directory and this remains unchanged; I suggest going this route even with older versions of Domino, but editing java.policy in its new home will still work.

Extra JARs and jvm/lib/ext

Another change is that "jvm/lib/ext" is gone, as part of a general desire on Java's part to discourage installing system-wide libraries.

However, since so much Java stuff on Domino is older than dirt, having system-wide custom libraries is still necessary, so the JVM is configured to bring in everything from the "ndext" directory in the Domino program directory in the same way it used to use "jvm/lib/ext". This directory has actually always been treated this way, and so I started also using this for third-party libraries for older versions, and it'll work the same in 14. That said, you should ideally not do this too much if it's at all avoidable.

The Missing Java Compiler

I mentioned above that everything except JSP works on Domino 14, and the reason for this is that V14 as it is now doesn't ship with a Java compiler (JSP, for historical reasons, does something much like XPages in that it translates to Java and compiles, but this happens on the server).

I originally offhandedly referred to this as Domino no longer shipping with a JDK, but I think the real situation was that Domino always used a JRE, but then had tools.jar added in to provide the compiler, so it was something in between. That's why you don't see javac in the jvm/bin directory even on older releases. However, tools.jar there provided a compiler programmatically via javax.tools.ToolProvider.getSystemJavaCompiler() - on a normal JRE, this API call returns null, while on a JDK (or with tools.jar present) it'd return a usable compiler. tools.jar as such doesn't exist anymore, but the same functionality is bundled into the newer-era weird runtime archives for efficiency's sake.
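You can check this programmatically - a quick sketch; on the current Domino 14 beta, this would take the warning branch:

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CompilerCheck {
	public static void main(String[] args) {
		// Returns null on a plain JRE-style runtime; non-null when a compiler
		// is bundled (a full JDK, or an old JRE with tools.jar on the path)
		JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
		if (compiler == null) {
			System.out.println("No system Java compiler available - JSP and the like will fail");
		} else {
			System.out.println("System compiler present: " + compiler.getClass().getName());
		}
	}
}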

So this is something of a showstopper for me at the moment. JSP uses this system compiler, and so does the XPages Bazaar. The NSF ODP Tooling project uses the Bazaar to do its XSP -> Java -> bytecode compilation of XPages, and so the missing compiler will break server-based compilation (which is often the most reliable compilation currently). And actually, NSF ODP Tooling is kind of double broken, since importing non-raw Java agents via DXL is currently broken on Domino 14 for the same reason.

I'm pondering my options here. Ideally, Domino will regain its compiler - in theory, HCL could switch to a JDK and call it good. I think this is the best route both because it's the most convenient for me but also because it's fairly expected that app servers run on a JDK anyway, hence JSP relying on it in the first place.

If Domino doesn't regain a compiler, I'll have to look into something like including ECJ, Eclipse's redistributable Java compiler. I'd actually looked into using that for NSF ODP Tooling on macOS, since Mac Notes lost its Java compiler a long time ago. The trouble is that its mechanics aren't quite the same as the standard one, and it makes some more assumptions about working with the local filesystem. Still, it'd probably be possible to work around that... it might require a lot more extracting of files to temporary directories, which is always fiddly, but it should be doable.

Still, returning the built-in one would be better, since I'm sure I'm not the only one with code assuming that that exists.

Overall

Despite the compiler bit, I've found things to hold together really well so far. Admittedly, I've so far only been using the container for the integration tests in the XPages JEE project, so I haven't installed Designer or run larger apps on it - it's possible I'll hit limitations and gotchas with other libraries as I go.

For now, though, I'm mostly pleased. I'm quite excited about the possibility of making more-dramatic updates to basically all of my projects. I have a branch of the XPages JEE project where I've bumped almost everything up to their Jakarta EE 10 versions, and there are some big improvements there. Then, of course, I'm interested in all the actual language updates. It's been a very long time since Java 8, and so there's a lot to work with: better switch expressions, text blocks, some pattern matching, records, the much-better HTTP client, and so forth. I'll have a lot of code improvement to do, and I'm looking forward to it.

The Loose Roadmap for XPages Jakarta EE Support

Thu May 04 10:29:44 EDT 2023

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

At Engage, HCL officially announced Java 17 in Domino 14 (I'm sure they announced other things too, but I have my priorities). This will allow me to do a lot in pretty much all of my projects, but it's particularly pertinent to XPages JEE.

Currently, the project generally targets Jakarta EE 9, which came out in late 2020 and was "just" a switch from javax.* to jakarta.*, with no official new features. However, Jakarta EE 10 came out a year ago - in addition to bringing a raft of new features, it also bumped the minimum Java version to Java 11, pushing it outside of Domino's realm. Accordingly, I've had to hold off on a lot of major- and minor-version bumps in the XPages JEE project as new releases started being compiled for Java 11. Once V14 is out, though, I'll be able to move to the current JEE platform... at least until JEE 11 comes out next year and requires Java 21, anyway.

So I've been working on how I'm going to approach this, and what I'm thinking is that I'll do it in two phases: first, a final 2.x release that provides Java 17/Domino 14 compatibility for existing components, and then a new 3.x breaking-changes release to bring in Jakarta EE 10 components.

The Final 2.x Release

I currently have this penciled in for the next release, 2.12.0, but that may change if I decide I want to get a real 2.12.0 release out before Domino 14 is at least in stable beta form. Let's call it "2.99.0" for now.

The idea here will be that I'll want to make sure all existing code in NSFs continues to work unchanged: upgrade your server to V14, install 2.99.0, and your apps keep working. In theory, this shouldn't be too complex. There's some shimming needed for Weld (the CDI implementation) to account for changes from Project Jigsaw in Java 9 and later, and there might be some stuff around AccessController, but in general I expect it'll just be some tweaks here and there. Time will tell, of course.

Once that's out, I plan to not look back (unless there's demand, I suppose). The switch to Java 17 is a huge deal, and I don't think it'll be worth spending much more time with Java 8 once it's no longer required. The 2.x branch is already, I feel, in a pretty good place, so I'll feel comfortable having a stable final version.

The Breaking 3.0 Release

Then, the plan will be to start down the path of 3.x with breaking changes - not everything, but some. For one, JEE 10 has a handful of backwards-incompatible changes. Those are mostly for legacy true-JEE code, though, and the main ones that XPages JEE code will likely want to be aware of will be the switch of XML namespaces to shorter representations. That will affect JSP and JSF code, but the old URIs (the jcp.org ones) will continue to work, at least for a while.

Most of the breaking changes will probably happen internally. I've talked for a long while now about my desire to do some reorganization of the project. The big one is wrangling the proliferation of Eclipse Features and XPages Libraries. Anyone who has installed the project in Designer is well aware of just how many times you have to click "yes, I want to install the thing I'm installing", and that alone is enough to warrant a reorganization. Beyond that, though, I've had to take care to try to make it so that the individual components don't depend on each other unnecessarily. There's a certain amount of good discipline that provides, but it eventually wears a bit thin.

I'm not quite sure what form the consolidation will take, but it'll probably be something like three features: "core", "extended", and "MicroProfile". "Core" would probably roughly map to the actual Jakarta Core Profile, plus things that I find essentially obligatory like Bean Validation. "Extended" would be all the things like JSP and JSF, the "leaves" on the dependency tree: they depend on core features, but nothing depends on them. Then "MicroProfile" would be, well, MicroProfile features. The only thing still giving me pause is that there's not too much case for not installing all of these all the time anyway - if you don't want to use, say, JSF, you don't have to; additionally, it's not like Domino is a svelte cloud-native mini server meant to be deployed a thousand times in a cluster, so having the extra bundles sitting there isn't really onerous. We'll see. I hem and haw a lot on this, but eventually I'll have to make a decision.

Regardless of what form that takes, I expect that the changes to in-NSF code will be either minimal or none - for users of the project, it'll mostly be a matter of making sure to fully uninstall the old plugins before an upgrade and then tweaking Xsp Properties to select whatever the new form of the XPages Libraries ends up being.

Side Note: Jakarta NoSQL and Data

One interesting aspect of this move will be the path Jakarta NoSQL has been on. Though I've included it in the XPages JEE project for a little while now (and continue to heavily expand on it), it's always been technically a beta release. It's clearly proven itself stable even in its beta form, but it's going through a shift in the run-up to Jakarta EE 11. Specifically, the higher abstraction levels - the Repository interface and friends - are moving to a new project, Jakarta Data. The idea is that that project will be able to sit on top of Jakarta NoSQL and other storage types, namely JPA.

It's going to be very neat, but it's created a bit of a pickle for me. Since it's targeting Jakarta EE 11, that means the releases of it and NoSQL are going to require at least Java 21, and there's no word on when Domino will support that.

One option would be to stick with what I have now for the foreseeable future: a mildly-forked version of Jakarta NoSQL 1.0.0-b4. It's a workhorse and has been doing a good job, and it'd mean that app code wouldn't have to change. I'm not crazy about this for obvious reasons: I don't want to have one component stuck way behind while all the other parts get a nice jump forward, even if it works.

The other main option I'm considering is sliding forward to another beta release and landing there until Java 21 support shows up. The current development versions of the Data spec and JNoSQL with its implementation target Java 17, so I'll probably go with whatever the last beta is before the official switch to 21. Though it's tough to predict the future, that will probably end up being API-wise similar enough to the release forms of them that future jumps won't be difficult. We shall see, anyway.

Timeline

Anyway, the timeline for this is a little vague, and will mostly depend on when the Domino 14 betas come out and whether they contain anything show-stopping. My hope is to be able to have something that passes all the test cases ASAP with betas and then to have it continue to be stable through to the actual release.

I'm looking forward to leaving Java 8 behind for good, though, that much is certain.

JPA in the XPages Jakarta EE Project

Sat Mar 18 11:55:36 EDT 2023

For a little while now, I'd had an issue open to implement Jakarta Persistence (JPA) in the project.

JPA is the long-standing API for working with relational-database data in JEE and is one of the bedrocks of the platform, used by presumably most normal apps. That said, it's been a pretty low priority here, since the desire to write applications based on a SQL database but running on Domino could be charitably described as "specialized". Still, the spec has been staring me in the face, maybe it'd be useful, and I could pull a neat trick with it.

The Neat Trick

When possible, I like to make the XPages JEE project act as a friendly participant in the underlying stack, building on good use of the ComponentModule system, the existing app lifecycle, and so forth. This is another one of those areas: XPages (re-)gained support for relational data over a decade ago and I could use this.

Tucked away in the slide deck that ships with the old ExtLib is this tidbit:

Screenshot of a slide, highlighting 'Available using JNDI'

JNDI is a common, albeit creaky, mechanism used by app servers to provide resources to apps running on them. If you've done LDAP from Java, you've probably run into it via InitialContext and whatnot, but it's used for all sorts of things, DB connections included. What this meant is that I could piggyback on the existing mechanism, including its connection pooling. Given its age and lack of attention, I imagine that it's not necessarily the absolute best option, but it has the advantage of being built in to the platform, limiting the work I'd need to do and the scope of bugs I'd be responsible for.
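In code terms, that means app code can get a pooled connection with a plain JNDI lookup - a minimal sketch, where "jdbc/postgresql" is a hypothetical name matching an XPages JDBC config file:

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class JndiConnections {
	public static Connection openConnection() throws NamingException, SQLException {
		// Hypothetical JNDI name; the XPages relational support exposes its
		// configured connections as JNDI DataSources
		DataSource ds = InitialContext.doLookup("jdbc/postgresql");
		return ds.getConnection(); // pooled by the underlying provider
	}
}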

Implementation

With one piece of the puzzle taken care for me, my next step was to actually get a JPA implementation working. The big, go-to name in this area is Hibernate (which, incidentally, I remember Toby Samples getting running in XPages long ago). However, it looks like Hibernate kind of skipped over the Jakarta EE 9 target with its official releases: the 5.x series uses the javax.persistence namespace, while the 6.x series uses jakarta.persistence but requires Java 11, matching Jakarta EE 10. Until Domino updates its creaky JVM, I can't use that.

Fortunately, while I might be able to transform it, Hibernate isn't the only game in town. There's also EclipseLink, another well-established implementation that has the benefits of having an official release series targeting JEE 9 and also using a preferable license.

And actually, there's not much more to add on that front. Other than writing a library to provide it to the NSF and a resolver to account for OSGi's separation, I didn't have to write a lot of code.

Most of what I did write was the necessary code and configuration for normal JPA use. There's a persistence.xml file in the normal format (referencing the source made by the XPages JDBC config file), a model class, and then access using the normal API.
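To give a flavor of that normal API (the entity and persistence-unit names here are illustrative, not the project's exact code):

// Company.java - a minimal entity matching a "companies" table
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;

@Entity
public class Company {
	@Id
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	public long id;
	public String name;
}

// Elsewhere: querying via a unit defined in persistence.xml
import jakarta.persistence.EntityManager;
import jakarta.persistence.EntityManagerFactory;
import jakarta.persistence.Persistence;
import java.util.List;

class CompanyAccess {
	static List<Company> findAll() {
		// "jakarta" is a hypothetical persistence-unit name
		EntityManagerFactory emf = Persistence.createEntityManagerFactory("jakarta");
		EntityManager em = emf.createEntityManager();
		try {
			return em.createQuery("SELECT c FROM Company c", Company.class).getResultList();
		} finally {
			em.close();
		}
	}
}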

In a normal full app server, the container would take care of some of the dirty work done by the REST resource there, and that's something I'm considering for the future, but this will do for now.

Writing Tests

One of the neat side effects is that, when I went to write the test case for this, I got to make better use of Testcontainers. I'm a huge fan of Testcontainers and I've used it for a good while for my IT suites, but I've always lost a bit by not getting to use the scaffolding it provides for common open-source projects. Now, though, I could add a PostgreSQL container alongside the Domino one:

postgres = new PostgreSQLContainer<>("postgres:15.2")
	.withUsername("postgres")
	.withPassword("postgres")
	.withDatabaseName("jakarta")
	.withNetwork(network)
	.withNetworkAliases("postgresql");

Here, I configure a basic Postgres container, and the wrapper class provides methods to specify the extremely-secure username and password to use, as well as the default database name. Here, I pass it a network object that lets it share the same container network space as the Domino server, which will then be able to refer to it via TCP/IP as the bare name "postgresql".

The remaining task was to write a method in the test suite to make sure the table exists. You can do this in other ways - Testcontainers lets you run init scripts via URL, for example - but for one table this suits me well. In the test class where I want to access the REST service I wrote, I made a @BeforeAll method to create the table:

@BeforeAll
public static void createTable() throws SQLException {
	PostgreSQLContainer<?> container = JakartaTestContainers.instance.postgres;
		
	try(Connection conn = container.createConnection(""); Statement stmt = conn.createStatement()) {
		stmt.executeUpdate("CREATE TABLE IF NOT EXISTS public.companies (\n"
				+ "	id BIGSERIAL PRIMARY KEY,\n"
				+ "	name character varying(255) NOT NULL\n"
				+ ");");
	}
}

Testcontainers takes care of some of the dirty work of figuring out and initializing the JDBC connection for me. That's not particularly-onerous work, but it's one of the small benefits you get when you're doing the same sort of thing other users of the tool are doing.

With that, everything went swimmingly. Domino saw the Postgres container (thanks to copying the JDBC driver to the classpath) and the JPA access worked just the same as it does in my real environment.

Like with the implementation, there's not much there beyond "yep, do the things the docs say and it works". Though there were the usual hurdles that I've gotten used to with adding things like this to Domino, this all went pleasantly smoothly. I may build on this in the future - such as the aforementioned server-managed JPA bits - but that will depend on whether I or others have need. Regardless, I'm glad it's in there.

Per-NSF-Scoped JWT Authorization With JavaSapi

Sat Jun 04 10:35:05 EDT 2022

Tags: domino dsapi java
  1. Poking Around With JavaSapi
  2. Per-NSF-Scoped JWT Authorization With JavaSapi
  3. WebAuthn/Passkey Login With JavaSapi

In the spirit of not leaving well enough alone, I decided the other day to tinker a bit more with JavaSapi, the DSAPI peer tucked away undocumented in Domino. While I still maintain that this is too far from supported for even me to put into production, I think it's valuable to demonstrate the sort of thing that this capability - if made official - would make easy to implement.

JWT

I've talked about JWT a bit before, and it was in a similar context: I wanted to be able to access a third-party API that used JWT to handle authorization, so I wrote a basic library that could work with LS2J. While JWT isn't inherently tied to authorization like this, it's certainly where it's found a tremendous amount of purchase.

JWT has a couple neat characteristics, and the ones that come in handy most frequently are a) that you can enumerate specific "claims" in the token to restrict what the token allows the user to do and b) if you use a symmetric signature key, you can generate legal tokens on the client side without the server having to generate them. "b" there is optional, but makes JWT a handy way to do a quick shared secret between servers to allow for trusted authentication.

It's a larger topic than that, for sure, but that's the quick and dirty of it.
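
To make "b" concrete, here's a minimal sketch of token generation using the same auth0 java-jwt library that shows up below. The ISSUER and CLAIM_USER values are whatever your two sides agree on - they just have to mirror the constants in the verification code later in this post:

import java.io.UnsupportedEncodingException;

import com.auth0.jwt.JWT;
import com.auth0.jwt.algorithms.Algorithm;

public class TokenGenerator {
	// These must match what the verifying side expects
	public static final String ISSUER = "ExampleIssuer"; //$NON-NLS-1$
	public static final String CLAIM_USER = "user"; //$NON-NLS-1$

	public static String createToken(String secret, String userName) throws UnsupportedEncodingException {
		Algorithm algorithm = Algorithm.HMAC256(secret);
		return JWT.create()
			.withIssuer(ISSUER)
			.withClaim(CLAIM_USER, userName)
			.sign(algorithm);
	}
}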

Mixing It With An NSF

Normally on Domino, you're either authenticated for the whole server or you're not. That's usually fine - if you want to have a restricted account, you can specifically grant it access to only a few NSFs. However, it's good to be able to go more fine-grained, restricting even powerful accounts to only do certain things in some contexts.

So I had the notion to take the JWT capability and mix it with JavaSapi to allow you to do just that. The idea is this:

  1. You make a file resource (hidden from the web) named "jwt.txt" that contains your per-NSF secret.
  2. A remote client makes a request with an Authorization header in the form of Bearer Some.JWT.Here
  3. The JavaSapi interceptor sees this, checks the target NSF, loads the secret, verifies the token against it, and authorizes the user if it's legal

As it turns out, this was actually not that difficult in practice at all.

The main core of the code is:

public int authenticate(IJavaSapiHttpContextAdapter context) {
    IJavaSapiHttpRequestAdapter req = context.getRequest();

    // In the form of "/foo.nsf/bar"
    String uri = req.getRequestURI();
    String secret = getJwtSecret(uri);
    if(StringUtil.isNotEmpty(secret)) {
        try {
            String auth = req.getHeader("Authorization"); //$NON-NLS-1$
            if(StringUtil.isNotEmpty(auth) && auth.startsWith("Bearer ")) { //$NON-NLS-1$
                String token = auth.substring("Bearer ".length()); //$NON-NLS-1$
                Optional<String> user = decodeAuthenticationToken(token, secret);
                if(user.isPresent()) {
                    req.setAuthenticatedUserName(user.get(), "JWT"); //$NON-NLS-1$
                    return HTEXTENSION_REQUEST_AUTHENTICATED;
                }
            }
        } catch(Throwable t) {
            t.printStackTrace();
        }
    }

    return HTEXTENSION_EVENT_DECLINED;
}

To read the JWT secret, I used IBM's NAPI:

private String getJwtSecret(String uri) {
    int nsfIndex = uri.toLowerCase().indexOf(".nsf"); //$NON-NLS-1$
    if(nsfIndex > -1) {
        String nsfPath = uri.substring(1, nsfIndex+4);
        
        try {
            NotesSession session = new NotesSession();
            try {
                if(session.databaseExists(nsfPath)) {
                    // TODO cache lookups and check mod time
                    NotesDatabase database = session.getDatabase(nsfPath);
                    database.open();
                    NotesNote note = FileAccess.getFileByPath(database, SECRET_NAME);
                    if(note != null) {
                        return FileAccess.readFileContentAsString(note);
                    }
                }
            } finally {
                session.recycle();
            }
        } catch(Exception e) {
            e.printStackTrace();
        }
    }
    return null;
}

And then, for the actual JWT handling, I use the auth0 java-jwt library:

public static Optional<String> decodeAuthenticationToken(final String token, final String secret) {
	if(token == null || token.isEmpty()) {
		return Optional.empty();
	}
	
	try {
		Algorithm algorithm = Algorithm.HMAC256(secret);
		JWTVerifier verifier = JWT.require(algorithm)
		        .withIssuer(ISSUER)
		        .build();
		DecodedJWT jwt = verifier.verify(token);
		Claim claim = jwt.getClaim(CLAIM_USER);
		if(claim != null) {
			return Optional.of(claim.asString());
		} else {
			return Optional.empty();
		}
	} catch (IllegalArgumentException | UnsupportedEncodingException e) {
		throw new RuntimeException(e);
	}
}

And, with that in place, it works:

JWT authentication in action

That text is coming from a LotusScript agent - as I mentioned in my original JavaSapi post, this authentication is trusted the same way DSAPI authentication is, and so all elements, classic or XPages, will treat the name as canon.

Because the token is based on the secret specifically from the NSF, using the same token against a different NSF (with no JWT secret or a different one) won't authenticate the user:

JWT ignored by a different endpoint

If we want to be fancy, we can call this scoped access.

This is the sort of thing that makes me want JavaSapi to be officially supported. Custom authentication and request filtering are much, much harder on Domino than on many other app servers, and JavaSapi dramatically reduces the friction.

Video Series On The XPages Jakarta EE Project

Mon Feb 07 15:54:15 EST 2022

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

Over the last two weeks, Graham Acres and I recorded a video series for OpenNTF about my XPages Jakarta EE Support project, which has seen a flurry of development in the last few months. The 15-part series is up on YouTube.

The project itself saw the release of version 2.3.0 today, which is the first release with the Jakarta NoSQL driver I blogged about recently.

I think the project has turned into a pretty-interesting "platform update" for XPages, and I hope the video series captures that a bit. I'm still mulling over a sort of "thesis statement" about the whole thing, but for now describing the various new capabilities and how they interact will have to suffice.

Implementing a Basic JNoSQL Driver for Domino

Tue Jan 25 13:36:06 EST 2022

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

A few weeks back, I talked about my use of DQL and QRP in writing a JNoSQL driver for Domino. In that, I left the specifics of the JNoSQL side out and focused on the Domino side, but that former part certainly warrants some expansion as well.

Background

As a quick overview, Jakarta NoSQL is an approaching-finalization spec for working with NoSQL databases of various stripes in a Jakarta EE app. This is as opposed to the venerable JPA, which is a long-standing API for working with RDBMSes in JEE.

JNoSQL is the implementation of the Jakarta NoSQL spec, and is also an Eclipse project. As a historical note, the individual components of the implementation used to have Greek-mythological names, which is why older drivers like my Darwino driver or original Domino driver are sprinkled with references to "Diana" and "Artemis". The "JNoSQL" name also pre-dates its reification into a Jakarta spec - normally, spec names and implementations aren't quite so similarly named.

The specification is broken up into two main categories. The README for the implementation describes this well, but the summary is:

  1. "Communication" handles interpreting JNoSQL CRUD operations and actually applying them to the database.
  2. "Mapping" handles what the app developer interacts with: annotating classes to relate them to the back-end database and querying object repositories.

An individual driver may include code for both sides of this, but only the Communication side is obligatory to implement. A driver would contribute to the Mapping side as well if they want to provide database-specific higher-level concepts. For example, the Darwino driver does this to provide explicit annotations for its full-text search, stored-cursor, and JSQL capabilities. I may do similarly in the Domino driver to expose FT search, view operations, or DQL queries directly.
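
For a sense of what the Mapping side looks like to an app developer, here's a hypothetical sketch using the jakarta.nosql.mapping annotations of this era. The entity and repository names are made up for illustration; in the Domino driver's case, the entity name corresponds to the document's Form:

import jakarta.nosql.mapping.Column;
import jakarta.nosql.mapping.Entity;
import jakarta.nosql.mapping.Id;

@Entity("Person")
public class Person {
	@Id
	private String unid;
	@Column("FirstName")
	private String firstName;
	@Column("LastName")
	private String lastName;
	// getters/setters omitted
}

// In another source file: queries are derived from method names at runtime
import java.util.List;
import jakarta.nosql.mapping.Repository;

public interface PersonRepository extends Repository<Person, String> {
	List<Person> findByLastName(String lastName);
}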

Jakarta NoSQL handles Key-Value, Column, Graph, and Document data stores, but we only care about the last category for now.

Implementation Overview

Now, on to the actual implementation in question. The handful of classes in the implementation fall into a few categories, covered in the sections below.

Implementation Details

The core entrypoint for data operations is DefaultDominoDocumentCollectionManager, and JNoSQL specifies a few main operations to implement, basically CRUD plus total count:

  • insert and its overrides handle taking an abstract DocumentEntity from JNoSQL and turning it into a new lotus.domino.Document in the target database.
  • update does similarly, but with the assumption that the incoming entity represents a modification to an existing document.
  • delete takes an incoming abstract query and deletes all documents matching it.
  • select takes an incoming abstract query, finds matching documents, converts them to a neutral format, and returns them to JNoSQL. This is what my earlier post was all about.
  • count retrieves a count for all documents in a "collection". "Collection" here is a MongoDB-ism and the most-practical Domino equivalent is "documents with a specific Form name".

Entity Conversion

The insert, update, and select methods have as part of their jobs the task of translating between Domino's storage and JNoSQL's intermediate representation, and this happens in EntityConverter.

Now, this point of the code has some... nomenclature-based issues. There's lotus.domino.Document, our legacy representation of a Domino document handle. Then, there's jakarta.nosql.document.Document: this oddly-named interface actually represents a single key-value pair within the conceptual document - roughly, this corresponds to lotus.domino.Item. Finally, there's jakarta.nosql.document.DocumentEntity, which is the higher-level representation of a conceptual document on the JNoSQL side, and this contains many jakarta.nosql.document.Documents. This all works out in practice, but it's important to know about when you look into the implementation code.

The first couple methods in this utility class handle converting query results of different types: QRP result JSON, QRP result views, and generic DocumentCollections. Strictly speaking, I could remove the first one now that it's unused, but there's a non-zero chance that I'll return to it if it ends up being efficient down the line.

Each of those methods will eventually call toDocuments, which converts a lotus.domino.Document object to an equivalent List of JNoSQL Documents (i.e. the individual key-value pairs). Due to the way JNoSQL works, this method has no way to know which fields the higher level will actually want, so it attempts to convert all items in the document to more-common Java types. There's much more work to do here, some of it based on just needing to add other types (like improving rich text handling) and some of it based on needing a better Notes API (like proper conversion from Notes times to java.time).

In the other direction, there's the method that converts from a JNoSQL DocumentEntity to a Domino document, which is used by the insert and update methods. This converts some known common incoming types and converts them to Domino item values. Like the earlier methods, this could use some work in translating types, but that's also something that a better Notes API could handle for me.

Query Conversion

The QueryConverter class has a slightly-simpler job: taking JNoSQL's concept of a query and translating it to a document selection.

The jakarta.nosql.document.DocumentQuery type does a bit of double duty: it's used both for arbitrary queries (Foo='Bar'-type stuff) as well as for selecting documents by UNID. The select method covers that, producing a QueryConverterResult object to ferry the important information back to DefaultDominoDocumentCollectionManager.

The core work of QueryConverter is in getCondition, where it performs an AST-to-DQL conversion. JNoSQL has a couple mechanisms for querying entities: explicit Java-based queries, implicit queries based on repository definition, or a SQL-like query language. Regardless of what the higher level does to query, though, it comes to the Communication driver as this tree of objects (technically, the driver can handle the last specially, but by default it arrives parsed).

Fortunately, this sort of work follows a common idiom. You start with the top node of the tree, handle it based on its type and, as needed, recurse down into the next node. So if, for example, the top node is an EQUALS, all this converter needs to do is return the DQL representation of "this field equals that value", and so forth for other comparison operators. If it encounters AND, OR, or NOT, then the job changes to making a composite query of that operator plus the results of converting whatever the operator is applied to - which is where the recursion back into the same method comes in.
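
In schematic form, that conversion looks something like the following. This sketch uses hypothetical node types rather than JNoSQL's actual condition classes, but the recursion idiom is the same:

import java.util.List;
import java.util.stream.Collectors;

public class DqlConverterSketch {
	enum Op { EQUALS, GREATER_THAN, AND, OR, NOT }

	// Hypothetical stand-in for the incoming query AST
	static class Node {
		Op op; String field; Object value; List<Node> children;
	}

	static String toDql(Node n) {
		switch(n.op) {
		case EQUALS:
			return n.field + " = " + literal(n.value);
		case GREATER_THAN:
			return n.field + " > " + literal(n.value);
		case AND:
			// Composite operators recurse back into this method per child
			return n.children.stream().map(c -> "(" + toDql(c) + ")").collect(Collectors.joining(" and "));
		case OR:
			return n.children.stream().map(c -> "(" + toDql(c) + ")").collect(Collectors.joining(" or "));
		case NOT:
			return "not (" + toDql(n.children.get(0)) + ")";
		default:
			throw new UnsupportedOperationException("Unhandled operator: " + n.op);
		}
	}

	static String literal(Object value) {
		// DQL uses single-quoted strings; numbers pass through as-is
		return value instanceof Number ? String.valueOf(value) : "'" + value + "'";
	}
}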

Future Work

The main immediate work to do here is enhancing the data conversion: handling more outgoing Domino item types and incoming Java object types. A good deal of this can be done as-is, but doing some other parts reliably will be best done by changing out the specific Notes API in use. I used lotus.domino because it's present already, but it's a placeholder for sure.

There are also a bunch of efficiency tweaks I can make: more lazy loading in conversion, optimizing data fetching for specific queries, and logging DQL explain results for developers.

Beyond that, I'll have to consider if it's worth adding extensions to the mapping side. As I mentioned, the Darwino driver has some extensions for its JSQL language and similar concepts, and it's possible that it'd be worth adding similar things for Domino, in particular direct FT searching. That said, DQL does a pretty good job being the all-consuming target, and so translating JNoSQL queries to DQL may suffice to extract what performance Domino can provide.

So we'll see. A lot of this will be based on what I need when I actually put this into real use, since right now it's partly hypothetical. In any event, I'm looking forward to finding places where I can use this instead of explicitly coding to Notes API objects for sure.

Building a Full Domino Image for JUnit Tests

Sun Jan 23 15:57:51 EST 2022

Tags: docker domino
  1. Tinkering With Testcontainers for Domino-based Web Apps
  2. Adding Selenium Browser Tests to My Testcontainers Setup
  3. Building a Full Domino Image for JUnit Tests

Last year, I wrote about how I built images to use Testcontainers to run tests against a Liberty app that uses a Domino runtime. In that situation, I used the Domino Docker image from Flexnet but then pulled out the program files and stock data, mixed with pre-configured server support files from the repository.

Recently, though, I've had a need to have a similar setup, but altered in four ways:

  1. It should fully run Domino, not just use the data and program files for the runtime in Liberty
  2. It should not require pre-populating a server ID, notes.ini, and names.nsf from outside - that is, it should be self-configuring
  3. It should also have an extra component installed, one that must be installed after Domino is configured but before the image is fully built
  4. On the next launch, I need a post-install agent to run for final configuration, and the tests need to wait on that

Additionally, it should still be runnable with the same basic tools - the image should be built and the container started and destroyed by automated tests. This stricture comes into play in weird ways.

One-Touch Setup

The second requirement - that the container be self-configuring - is handled adroitly by One-Touch Setup, the feature in Domino 12 and above that lets you specify a configuration JSON file. I used this here in basically the same way as I described in that post: the script sets up and certifies a throwaway domain with a known admin username + password, and also deploys a few NTFs on first proper launch. Since this server intentionally doesn't communicate with the outside world, I don't need to provide any external files like a certificate authority or server ID.
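
For reference, an abbreviated sketch of what such a configuration file looks like. This is from memory and trimmed down, so treat HCL's One-Touch Setup documentation as the authority on the schema; the names and passwords here are placeholders:

{
    "serverSetup": {
        "server": {
            "type": "first",
            "name": "testserver/TestOrg",
            "domainName": "TestDomain"
        },
        "network": {
            "hostName": "domino.example.com"
        },
        "org": {
            "orgName": "TestOrg",
            "certifierPassword": "certifier-password-here"
        },
        "admin": {
            "firstName": "Test",
            "lastName": "Admin",
            "password": "admin-password-here"
        }
    },
    "appConfiguration": {
        "databases": [
            {
                "action": "create",
                "filePath": "example.nsf",
                "title": "Example Data",
                "templatePath": "/local/notesdatatemp/exampledata.ntf"
            }
        ]
    }
}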

Switching the Baseline

Initially, I continued to use the official image from Flexnet. However, I had run into some trouble with multi-stage builds using this earlier. I lack enough Docker knowledge to be sure, but my suspicion is that it declares /local/notesdata as an expected externally-provided volume. This on its own is fine, but it interacts oddly with automated container building. The trouble comes in when you do something like this:

FROM domino-docker:V1201_11222021prod
ENV SetupAutoConfigure "1"
ENV SetupAutoConfigureParams "/local/domino-config.json"
COPY --chown=notes:wheel domino-config.json /local/
RUN /local/start.sh

RUN /some/script/that/uses/notesdata

By the time it hits that second RUN line using automated build mechanisms, /local/notesdata is depopulated. I'm not sure why this is different from a command-line docker build, but it is what it is.

Fortunately, the "community" version of the image builder from https://github.com/IBM/domino-docker doesn't exhibit this behavior. I had been considering switching over already, so this made the decision all the easier.

Breaking Up Setup Into Stages

With this change in hand, I was able to more-properly break up the setup into multiple stages. This is important both for requirement #3 above and because it allows Docker to cache intermediate results. Though I want the server to auto-configure itself when building, I don't need the results of that to be different every run, and thus I can save tons of time in subsequent launches if I handle these caches well.

So my Dockerfile started to look something like this:

FROM hclcom/domino:12.0.1
# Relay Domino output to the container logs
ENV DOMINO_DOCKER_STDOUT "yes"
ENV SetupAutoConfigure "1"
ENV SetupAutoConfigureParams "/local/DominoAutoConfig.json"

COPY --chown=notes:wheel domino-config.json /local/DominoAutoConfig.json
COPY --chown=notes:wheel notesdata/* /local/notesdatatemp/

RUN /domino_docker_entrypoint.sh

# Copy the app executable and support scripts
USER root
COPY appinstall /local/appinstall
RUN chmod +x /local/appinstall/install.sh && /local/appinstall/install.sh

# Back to notes user for the Domino entrypoint
USER notes

But here I hit another distinction between docker build and the automated mechanisms: that RUN /domino_docker_entrypoint.sh line would execute and get to the point where it emits "Application configuration completed successfully", but then would not actually exit properly. Again, I'm not sure why: the JSON file tells the server to not launch after configuration, and indeed it doesn't, but the script just doesn't relinquish control - but does when built from the command line.

So I rolled up my sleeves and wrote a wrapper script to kill the process when it's known to be done:

#!/usr/bin/env bash
# Run the one-touch setup in the background, redirecting its output to a temp file
/domino_docker_entrypoint.sh > /tmp/domsetup &
# Watch that file until setup reports success, then let the script (and
# thus the RUN step) exit
until tail -f /tmp/domsetup | grep -q "Application configuration completed successfully"; do sleep 1; done

This runs the setup process, redirecting the output to a temp file, in the background. Then, it watches that file for the known ending text and exits when observed. A little janky in multiple ways for multiple reasons, but it works. That allows image build to progress normally to the next step in all environments, caching the results of the initial server setup. I replaced the RUN /domino_docker_entrypoint.sh line above with copying in and executing this script, and all was well.

Post-Install Agent

After the "appinstall" step, I have the peculiar need to then run code in a Notes context to fiddle with some components that aren't configured earlier. For now, I've settled on writing an agent that runs on server start, then signing and enabling it in the domino-config.json file:

{
    "action": "create",
    "filePath": "postinstall.nsf",
    "title": "Post Install",
    "templatePath": "/local/notesdatatemp/postinstall.ntf",
    "signUsingAdminp": true,
    "agents": [
        {
            "name": "PostInstall",
            "action": "sign"
        },
        {
            "name": "PostInstall",
            "action": "enable"
        }
    ]
}

Originally, I had this agent emit the text "postinstall done" when it finished, so that the Testcontainers runtime could look for that to know when it's safe to execute tests. However, this added a good while to the launch stage: at this point, launching the container has to wait on final post-install tasks from Domino, then signing the DB with adminp, then actually executing the agent. This added about a minute to the test pre-run time, and thus was a prime target for further caching.

So I altered the agent to check to see if it actually needs to work and then, if so, shuts down the server when it's done:

Session session = getSession();

if(needToDoWork()) {
    doWork();
    session.sendConsoleCommand(session.getServerName(), "q");
}

Then, I altered my Dockerfile to amend the end bit:

# ...snip
# Back to notes user for the Domino entrypoint
USER notes

# Run the server once to wait for postinstall to execute, then shut down
WORKDIR /local/notesdata
RUN /opt/hcl/domino/bin/server

# Now let the entrypoint launch again, without requiring further configuration

Again: janky, but it works. Now, all of the setup stages are cached on subsequent runs.

Building the Image in Testcontainers

In my original post, I showed using dockerfile-maven-plugin to build the image just before executing the tests. This works, but it complicated the pom.xml a bit and meant that running the tests in an IDE meant first running a Maven build. Not the end of the world, but not ideal.

Fortunately, Testcontainers can also build images. That meant that, rather than building the image in Maven and then re-using the same-named container in Java, I could do it all Java-side. To do this, I created a subclass of GenericContainer to centralize the configuration of the container:

package example;

import java.io.IOException;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.HttpWaitStrategy;
import org.testcontainers.images.builder.ImageFromDockerfile;

public class DominoAppContainer extends GenericContainer<DominoAppContainer> {
    public static class DominoAppImage extends ImageFromDockerfile {
        public DominoAppImage() {
            super("example-app-it-testcontainers:1.0.0", false);
            withFileFromClasspath("Dockerfile", "/docker/Dockerfile");
            withFileFromClasspath("domino-config.json", "/docker/domino-config.json");
            withFileFromClasspath("firstrun.sh", "/docker/firstrun.sh");
            withFileFromClasspath("notesdata/exampledata.ntf", "/docker/notesdata/exampledata.ntf");
            withFileFromClasspath("notesdata/postinstall.ntf", "/docker/notesdata/postinstall.ntf");
            withFileFromClasspath("appinstall/install.sh", "/docker/appinstall/install.sh");
            withFileFromClasspath("appinstall/appinstall.jar", "/docker/appinstall/appinstall.jar");
        }
    }

    public DominoAppContainer() {
        super(new DominoAppImage());

        withImagePullPolicy(imageName -> false);
        withExposedPorts(80);
        waitingFor(
            new HttpWaitStrategy()
            .forPort(80)
            .forStatusCodeMatching(code -> code >= 200 && code < 400)
        );
        withLogConsumer(frame -> {
            switch (frame.getType()) {
                case STDERR:
                    try {
                        System.err.write(frame.getBytes());
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                    break;
                case STDOUT:
                    try {
                        System.out.write(frame.getBytes());
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                    break;
                default:
                case END:
                    break;
            }
        });
    }
}

It's a little fiddlier in that now I need to enumerate each classpath resource to copy in, but that also makes it all the more portable. I removed the dockerfile-maven-plugin execution from the pom.xml and switched to this instead. Since I name the image and tell it to not auto-delete on completion, this retained the desired caching behavior.

Conclusion

Overall, this whole process brought the test pre-launch time (after the first run) down from 3-5 minutes to about 20 seconds while also reducing the split between the Maven config and Java. Much more bearable when making small tweaks and re-running the suite, and it makes it a bit more explicable what's going on.

PSA: Reverse-Proxy Regression in Domino 12.0.1

Wed Jan 19 17:08:42 EST 2022

  1. Putting Apache in Front of Domino
  2. Better Living Through Reverse Proxies
  3. Domino's Server-Side User Security
  4. A Partially-Successful Venture Into Improving Reverse Proxies With Domino
  5. PSA: Reverse-Proxy Regression in Domino 12.0.1

For a good while now, I've been making use of the HTTPEnableConnectorHeaders notes.ini property in Domino to allow my reverse proxies to have Domino "see" the real remote system. Though this feature is coarse-grained and is best paired with some tempering, it's served me well when used on appropriately-configured servers.

Unfortunately, HCL saw fit to remove this feature in 12.0.1, declaring it a security vulnerability. I don't think this made it into the release notes as such, but did eventually get patched into the "Components no longer included in this release" page for V12.

Obviously, the true problem here is that it makes my blog entries retroactively less useful. However, a secondary issue is that it will damage your applications in two main ways if you were making use of these headers:

Authentication

The $WSRU header allowed you to specify a username to act as for the duration of the web request, and this would be honored from the core: all legacy and Servlet components would see that as the true authenticated user. This header is essentially a quick-and-dirty SSO mechanism, and works similarly in that way to an LTPA token.

Fortunately, this header has some workarounds, though proper ones would likely require diving into some C:

  1. If you have a simple situation where you were using it to log in as a single known user, you could switch to sending HTTP Basic auth with that user's credentials
  2. If the server in front of Domino happens to have been written by IBM, you could potentially spin your own LTPA tokens on that side and have them trusted by Domino
  3. You could establish your own exchange between the proxy server and Domino to have Domino generate an SSO token for an arbitrary name for you and then pass that along (note: if you do that, use Domino JNA to do the heavy lifting)
  4. You could write a DSAPI filter to implement whatever type of authentication you like

With any of those, Domino will trust the user you provide it to the same extent it previously would trust the $WSRU header when configured to do so.

Note: if you do implement your own authentication, don't just re-use $WSRU. It appears that part (all?) of the change on the Domino side is to hard-strip the $WS* headers from the incoming request.

Remote IP Address, etc.

The other use, which I've used in a good many deployed apps, is to use $WSRA and friends to tell Domino what the true remote IP address is. When you do this, CGI item values like REMOTE_ADDR will reflect the external user's IP address instead of the proxy server's.

Since 9.0.1FP8, there's been a notes.ini property - HTTP_LOG_ACCESS_XFORWARDED_FOR - that will tell Domino to pay a little attention to the de-facto standard X-Forwarded-For header that proxies often use to tip an app server off to the proxy hops a request took to get to its destination. From what I can tell, this support remains limited to just writing to a new field in log entries in domlog.nsf. The in-app CGI variables will still reflect the reverse proxy's address.

Unfortunately, as I found when I was trying to make a DSAPI filter to properly trust that header, these request values are not writable in such filters. It might hypothetically be possible to get some of this with EM triggers juggling fields around for classic use and futzing with the low levels of the XPages stack, but even I wouldn't resort to that.

The upshot is that if, for example, you're storing REMOTE_ADDR in created documents or making use of ServletRequest#getRemoteHost, you'll have to alter your applications to also consider X-Forwarded-For, and the same goes for any other such fields you were using.
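
A minimal sketch of that sort of fallback logic, assuming you only run it behind a trusted proxy (the class and method names here are illustrative):

import javax.servlet.http.HttpServletRequest;

public class RemoteAddrUtil {
	public static String getRemoteAddr(HttpServletRequest req) {
		String xff = req.getHeader("X-Forwarded-For"); //$NON-NLS-1$
		if(xff != null && !xff.isEmpty()) {
			// The header may be a comma-separated chain of proxy hops;
			// the left-most entry is the originating client
			return xff.split(",")[0].trim(); //$NON-NLS-1$
		}
		return req.getRemoteAddr();
	}
}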

As above, if you do this, you can't use $WSRA et al, as I don't think they'll be visible in your app.

DQL, QueryResultsProcessor, and JNoSQL

Thu Jan 13 14:32:04 EST 2022

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

As I've been adding new technologies to and talking about the XPages Jakarta EE project, I've kind of danced around a major missing layer: data access.

Technically, the toolchain has provided Domino data access all along, by way of having the same contextual sessions and database as XPages. You could use those to access whatever data you want, and they'd do the job as well as they ever do (c'est-à-dire: poorly). Beyond that, though, there's no equivalent to the (questionable) xp:dominoDocument and xp:dominoView components of XPages, and definitely no pre-provided object-to-database mapper.

The answer is pretty clear: Jakarta NoSQL. This API isn't quite finalized, but it's been usable for a long time: I wrote a Darwino driver for it years ago, and that driver is powering this very blog. I also wrote a Domino driver years ago, but it was very much a proof-of-concept: since it pre-dated DQL, it used formula queries for everything, and thus would scale extremely poorly. It was a nice exercise, but not anything useful.

For XPages JEE, I decided to take another swing at that. The implementation of the driver will warrant a tale on its own, but for now I'd like to focus on the DQL side of it.

DQL

I talked a bit about DQL when it came out, back when it wasn't well-understood, but since then I haven't actually had much occasion to put it to use. For the times I've needed complex Domino data access since then, it's been built on pre-existing operations on top of views. While adding DQL has been something I've considered from time to time, it'd never hit the threshold of being worth it: our needs involve extracting tons of data to bulk send it to service clients, and so views have remained necessary. While we could in theory alter our querying and filtering to select documents and project those selections onto the views, it'd have been a lot of work for partial benefits.

DQL itself has gotten more capable in the intervening years, and just on its own it's a perfect match for JNoSQL needs. Since all JNoSQL operations are sent to the driver as either individual doc IDs or an arbitrary query, something like DQL is required, and it's up to the task now.

It's half of the story, though. What DQL (by way of the DominoQuery object) gives you is a DocumentCollection, effectively just the list of note IDs. You can, as I'd hypothesized about doing, apply that against a view to extract data, but that still requires you to separate out the act of view management from the act of doing queries. If you want to have data sorted or categorized, you would still have to create an equivalent or superset view.

QueryResultsProcessor

So that's where the addition of QueryResultsProcessor comes in. QRP is technically distinct from DQL - you can use it to process arbitrary document collections, for example - but they're certainly a conceptual match. If you're comparing it to a SQL statement, DQL is the "FROM foo" and "WHERE x" parts, while QRP is the "SELECT a,b,c", "ORDER BY y", and "GROUP BY z" parts.
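
To put a hypothetical example to that analogy, a single SQL statement like this:

SELECT FirstName, LastName FROM Person WHERE LastName = 'Smith' ORDER BY FirstName

...splits apart on Domino, with DQL handling the selection:

Form = 'Person' and LastName = 'Smith'

...and the QueryResultsProcessor handling the FirstName/LastName columns and the sorting on top of that result. The form and field names here are made up for illustration.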

The general way it works is that you:

  1. Create a QueryResultsProcessor from a Database instance (as opposed to Session - this distinction becomes important later)
  2. Feed it sources of documents: DQL queries or arbitrary document collections
  3. Add any desired columns to extract data. These are Domino-style columns, and you can also specify sorting and categorization here, as you would when building a view
  4. Since data may come from multiple databases, you can also customize column formulas to account for that
  5. Execute the process and retrieve the results, currently either as JSON or as a "view". More on these "views" later

When I first heard about QRPs, I had a concern with step 2: I'd thought that you could only pass a built DocumentCollection to the processor, which would significantly limit the room for Domino to add behind-the-scenes efficiencies. However, my fears were unfounded; the ability to pass in a DominoQuery object and the DQL directly and let the QRP execute it means that HCL is free to do whatever they want to make it fast. That's the sort of thing that makes SQL queries potentially so stupidly efficient: because you're just asking the database for results, the DB is free to optimize the heck out of them. This pairing potentially brings that to Domino, and that's what makes it important.

JSON Output

The executeToJson method is pretty straightforward if a somewhat-peculiar choice. It has no parameters, and returns the results of your query as reasonably-formatted JSON. It's unfortunate that this returns a String and not an InputStream, which adds some inherent inefficiency to dealing with it on the Java side, but that will only really hurt with very-large data sets.

Along with the requested fields, formula results, and aggregations, the document entries include the note ID (oddly in "formula" format) and the database file path, so you can use that to open up the document.

Anyway, this is a workmanlike format and can be potentially just sent to REST clients directly, though it'd be good form to at least strip out the DB paths and note IDs.

View Output

Now here's the spicy one. The executeToView method stores the results in a very-weird type of view. This has a few big advantages over the JSON mechanism:

  • The view persists in the database, up to a number of hours you specify programmatically. This allows you to essentially offload some extra caching to the database, which is ideal
  • You can use ViewNavigator and other efficient mechanisms to work with the view data, meaning you don't have the "here's a big result blob in memory" problem you have with the JSON format
  • Since it's a "view", anything that works with view data can work with it. This is presumably the reason it's implemented this way at all, rather than as some new kind of entity - building on existing mechanisms
  • The "anything that works with view data" doesn't just mean things like ViewNavigator: it also means the Notes client and view data sources

Now, these "views" have a lot of weird characteristics. It's useful to see the specifics listed out like that, but they all derive from a core lesson to ingest:

This is not a stored query; it is a cached result.

These views are not auto-updated, nor is there any mechanism I know of to refresh them outside of deleting and re-creating them. They're equivalent in concept to if you took the JSON from the first type and stored it in a document somewhere: it'll only change if you change it. The only way Domino will act on them is to delete them when they're expired.

Anyway, the data in these views is the same data that would go to the JSON format, just stored as Notes collation data instead of a string. It contains columns, potentially categorized and aggregated, for the data you requested, as well as hidden "$DBPath" and "$NoteID" columns at the end. The entry-level note ID (the one from entry.getNoteID()) is arbitrary and intended to not represent an actual document - since, after all, the documents may come from distinct databases. I've found the value of entry.getUniversalID() to be the doc's original UNID, but this is best treated as not a guarantee and so should not be used.

Designer Rights

So here's a fun catch: though any Reader can perform a query, you need Designer access to create a view. This seemed like a problem to me at first, since I'd want the generated results to be from a specific user for reader-field purposes, but it's not really an impediment, at least when you're in an environment like XPages.

Above, I mentioned that the fact that you create a QueryResultsProcessor object from a Database is important, and this is one of the reasons why. Though traditionally you wouldn't mix descendants of session and sessionAsSigner together, there's no actual rule against it. You can re-open your context database with sessionAsSigner, make a QRP object from that, and then feed it a DominoQuery object created from the non-signer database:

Database database = ExtLibUtil.getCurrentDatabase();
Session sessionAsSigner = ExtLibUtil.getCurrentSessionAsSigner();
Database databaseAsSigner = sessionAsSigner.getDatabase(database.getServer(), database.getFilePath());

DominoQuery dominoQuery = database.createDominoQuery();
QueryResultsProcessor qrp = databaseAsSigner.createQueryResultsProcessor();
qrp.addDominoQuery(dominoQuery, "some DQL", null);
View result = qrp.executeToView("some view name");

Because the QueryResultsProcessor uses the provided DominoQuery object as the "engine" for the DQL search, the query will use the normal user's rights while the processing will use the signer rights.

Naming and Expiring Results

As seen there, you have to name your views. While you could in theory use this mechanism to kind of manage your own views for general use and name them things like "People By First Name" or whatever, you'll likely want to work with them programmatically and name them based on your query input.

In the case of this JNoSQL driver, I compute a predictable-from-input hash-based name from the name of the creating class, the current user, and the sort/skip/limit attributes of the incoming query. You could really do whatever you want here, but having at least some sort of hash like this is likely the way to go.
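
A sketch of that kind of name generation - the exact inputs are whatever distinguishes your queries, and the parameter names here are made up for the example:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ViewNames {
	public static String viewName(String creatingClass, String userName, String queryAttrs) {
		try {
			MessageDigest digest = MessageDigest.getInstance("SHA-256"); //$NON-NLS-1$
			digest.update(creatingClass.getBytes(StandardCharsets.UTF_8));
			digest.update(userName.getBytes(StandardCharsets.UTF_8));
			digest.update(queryAttrs.getBytes(StandardCharsets.UTF_8));
			StringBuilder hex = new StringBuilder();
			for(byte b : digest.digest()) {
				hex.append(String.format("%02x", b)); //$NON-NLS-1$
			}
			// Parentheses keep the view out of default UI listings
			return "(" + hex + ")"; //$NON-NLS-1$ //$NON-NLS-2$
		} catch(NoSuchAlgorithmException e) {
			throw new RuntimeException(e);
		}
	}
}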

Now there's the matter of detecting when you need to refresh the data. In some applications, it may suffice to go with the "expire in X hours" parameter when creating the view, though that's extremely coarse and only really useful on its own for specific needs (like a daily report).

The tack I took here was to try to do an efficient check of view creation time compared to the last data modification time from the source database. The Database class only has a "last modified" time in general, but I can't very well use that when my results caches are being added as design elements: a second distinct query would "invalidate" the first even when the data hasn't changed. There might be a proper way to get this in lotus.domino, but in any event the NAPI has a wrapper for NSFDbModifiedTimeByName: NotesSession.getLastDataModificationDateByName. That lets you get the last data-mod time in epoch seconds, and you can then compare that to the creation time of the view.

While it's unfortunate that you have to remove the view outright to refresh it instead of doing a delta update like NIF would do, I get it, and it's generally fast enough. Plus, there's enough hand-wavy stuff going on with feeding the DQL query to the QRP that Domino would be free to secretly retain results for a bit and do deltas internally if it so desires.

Storing Result Views

The other interesting aspect of creating a QRP object from a Database and not a Session is that that DB serves as the destination to house the views. While in a single-DB environment it would seem very natural to just store the views in the same place as the data, there's no particular requirement to do so. Moreover, if you're querying multiple databases, you're naturally not going to do this for every source database, so you'll be forced to conceptualize this separation one way or another.

Now, while personally I'm fine with a bunch of temporary machine-named views hanging out in the NSF (especially since the names are wrapped in parentheses to hide them from default UI listings), I can see why it could be annoying. For one, these views sync to an ODP in Designer, which I put in as an Aha idea to change, but might actually rightly be called a bug. Beyond that, while these views won't meaningfully contribute to NIF's workload (since NIF will skip them), they're unsightly and would get in the way if you're trying to tend to the design of your NSF like a garden.

So you might want to have a side database to store these views, and this could also be a way to get around the "needing Designer access" requirement if you're in an environment where you don't have a signer session. In the Notes client, you could store the results in a local NSF; on the server, you could make a "scratch" NSF somewhere to house them, and then add readers to the view design note when doing so to prevent leaking data across users and apps.

Conclusion

Anyway, this is all pretty neat. Reusing view design elements to just be static containers for collation data is weird, but I get the practical reasons why it makes sense. Importantly, this pairing solves some very-real problems with querying and extracting data from Domino. For example, if you do all of your querying via this route, you can use DQL's "EXPLAIN" capability to actually get some insight into database performance for once. You could imagine having an optional mode where you log the EXPLAIN results and execution times for all queries your app is performing, and then manually create "index" views to fix hotspots. It's quite satisfying to finally get that kind of ability in Domino. It'd be neat if that also came to QueryResultsProcessor.

I'm looking forward to expanding the JNoSQL driver further and then either using that directly in client work or adapting the code I use there. I'll definitely add such a logging capability, which will go a long way to put some numbers to the "feels slow" problem that can crop up. Beyond that, barring any show stoppers, I'm thoroughly excited by the prospect of moving away from fetching explicitly-named views in code and switching to an idiom of querying the pool of documents and letting the database make it work for me.

Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue

Tue Dec 14 16:41:59 EST 2021

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

I think it's been a little while since I talked about the XPages Jakarta EE Support project of mine. The goal of that is sort of the inverse of the XPages Runtime project: rather than bringing XPages to a proper modern app server, the JEE Support project brings a handful of current Jakarta EE specs to XPages. It started out a few years ago as a sort of proof-of-concept, but I've since been using it for client work to do things like use newer Jakarta REST (née JAX-RS), CDI, and newer EL in XPages and OSGi bundles.

The Specification Move

Originally, I targeted a set of specifications from Java/Jakarta EE 8. Some of these were new to Domino outright, while some (such as JAX-RS) existed in the XPages stack already but in very old forms. I implemented those and for a good while just used the project as a source of parts for client work, tweaking it here and there as needed.

However, the long-prophesized package-name switch from javax.* to jakarta.* came to fruition in Jakarta EE 9, released a bit over a year ago. In the intervening year, most implementations of the specs made the switch, and the versions I was using started to show their age (for example, I was using RESTEasy 3, which was already old when I adopted it, and it's going to 6 now). Beyond just the philosophical sadness of my project being behind, I started to grow specific needs to upgrade components: we switched to JSON-B a while ago, but then some new bug fixes in Yasson were coming only to post-jakarta.* builds.

The Initial Work

I first gave a shot to this in August, initially planning to move only JSON-P and JSON-B over to the new namespace. However, I quickly hit the limits of that, since a lot of these specs are interdependent. JAX-RS uses JSON-P and JSON-B to emit JSON content, Yasson has some ties to CDI, and so forth. I realized that it was going to have to be all-or-nothing.

So I rolled up my sleeves and assessed the task ahead of me. At a basic level, there was the job of updating my dependencies, which immediately had some good aspects and bad aspects:

  • Previously, I was using a hodgepodge of spec packages like the JBoss bundling of JAX-RS in order to get something that would work and be license-friendly. Now that it was all over at Eclipse, I could switch to the nice, clean official versions and have no license worries.
  • I also used to have all sorts of OSGi rule overrides to account for Domino-specific oddities like ancient versions of various specs being supplied by the default classpath or other, conflicting bundles, all with no versioning. Once I was looking for e.g. jakarta.annotation instead of javax.annotation, I was no longer bound to that particular nightmare.
  • Not all of my dependencies were ready. When I first started, RESTEasy (my JAX-RS provider of choice) had not yet uploaded a JEE-9-compatible version. My main choices were to try using Eclipse Transformer, which would add a whole new layer to the task, or to switch to another provider.

Then there's the elephant in the room: the freaking Servlet API, which much of this depends on. Since the Servlet API is the job of the web container, I can't realistically upgrade it. Fortunately, that's only half true: I can't give it new capabilities (like Web Sockets), but I can wrap the old stuff with the new. And, like the other specs, the switch of the package name was a tremendous blessing, allowing me to deploy the official Servlet 5 API unchanged. Then, I did the tedious work of writing a slew of adapter classes, half wrapping a javax.servlet component and pretending it's jakarta.servlet and half going the other direction. Since the methods are either direct analogs, optional features, or can be emulated, this was actually much easier than I thought it would be. And there: Servlet 5 on Domino! Kind of!
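
As a tiny taste of what those adapters look like, here's a hedged sketch of the idiom for one of the simplest cases - this isn't the project's actual code, and the real layer wraps the whole API surface in both directions:

public class ServletAdapters {
	public static jakarta.servlet.http.Cookie toJakarta(javax.servlet.http.Cookie cookie) {
		jakarta.servlet.http.Cookie result = new jakarta.servlet.http.Cookie(cookie.getName(), cookie.getValue());
		// Guard against nulls, which the newer API rejects
		if(cookie.getDomain() != null) {
			result.setDomain(cookie.getDomain());
		}
		if(cookie.getPath() != null) {
			result.setPath(cookie.getPath());
		}
		result.setMaxAge(cookie.getMaxAge());
		result.setSecure(cookie.getSecure());
		return result;
	}
}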

The Showstopper

However, I soon hit what seemed to be a show-stopper: a LinkageError problem when using CDI that didn't show up previously. My search for the topic found only one hit: an issue in Open Liberty referencing almost exactly the same problem. My heart sank when I read that their fix was to upgrade the Equinox runtime - something that's outside my powers on Domino (probably).

So, disheartened, I set it aside for a couple months. I figured there was a small chance that Weld (the CDI implementation at the heart of the trouble) would put out an update that fixed it - after all, an older version worked.

Resuming Work

After setting it aside, it kept eating away at the back of my mind, and two things kept pushing me to go back to it:

  • I'll need to do it eventually. I (and my client projects) can't just be stuck at the old style forever.
  • I really didn't want to admit defeat and switch back to Gson for JSON processing.

So I went back to it. My initial hope - that a new version of Weld would magically fix the problem - had not come to fruition. Still, though, I wasn't sure that it was the exact same problem Liberty encountered. For one, my use of CDI studiously avoids actually telling it about OSGi, since I've had little luck making use of that with Domino's OSGi stack. That was enough cause to make me think I could work around it.

And work around it I did! The trouble turned out to be, unsurprisingly, a bit esoteric, but boiled down to the runtime re-registering proxy classes for the same core components. My guess is that, somewhere along the line, Weld changed some sort of internal cache in a way that would break when using a bunch of ephemeral per-NSF containers as I do. I implemented my own (since it's an intended extension point) and added a bit of a cache, and I was back to the races.

As a convenient blessing, RESTEasy released 6.0.0.Beta1 just days before I got back to it, a major release targeted at JEE 9. That meant that I could save a ton of work by not having to re-work everything for another JAX-RS implementation. I had been looking into Jersey, which I'm sure would have done the job, but it's fiddly work trying to put all these pieces together on Domino, and I was all the happier to not have to re-do it all.

JavaMail

But then I hit a new problem: the javax.mail API, now jakarta.mail. The first part of this is easy enough: bring in the new spec bundle and everything will point to it. Great! I hit an immediate problem, and one I had been dreading dealing with. Though the spec changed package names, the implementation didn't. That brought me face-to-face again with an old nemesis of mine, sitting there in Domino's classpath, corrupting it:

A screenshot of Domino's ndext directory

The way the Mail API works is that there's a file, called "mailcap", that lists implementations for common data types, like:

text/plain;;		x-java-content-handler=com.sun.mail.handlers.text_plain
text/html;;		x-java-content-handler=com.sun.mail.handlers.text_html
text/xml;;		x-java-content-handler=com.sun.mail.handlers.text_xml
multipart/*;;		x-java-content-handler=com.sun.mail.handlers.multipart_mixed; x-java-fallback-entry=true

So, while all the entrypoint classes are jakarta.mail.* now, the implementations remain com.sun.mail.*, all with the same class names. And, since this little jerk of a JAR is sitting in the system classpath, it has a way of showing up all the time, complaining that com.sun.mail.handlers.text_plain is incompatible with jakarta.activation.DataContentHandler.

This is extremely fiddly to deal with, potentially involving writing a special ClassLoader implementation that blocks calls down to the lower-level JAR. While maybe possible, I'm not sure it'd be possible in a way that would be practical for normal use in an app.

And so, with a heavy heart, I forked the thing and added an "org.openntf" in front of all the package names. And that... works! It works just fine. It means that I'm on the hook for manually integrating any upstream changes, but at least it works without having to worry about any conflicts.

That wasn't the end of my trouble with this spec, though. The spec package itself, in jakarta.mail.Session uses ServiceLoader to look for services, and it uses it in the form that looks them up with the current thread's ClassLoader. Because I'm working in OSGi, that ClassLoader - the XPage app's loader - won't know about the implementation classes directly, and this call fails. And, while there's a whole sub-spec in OSGi for dealing with this, I've never had success actually getting it working in Domino.

So I forked that freaking thing too and modified the calls to use its own ClassLoader, which could find the implementation by way of it being a fragment bundle attached to it.
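
The gist of that modification is the difference between these two lookups, shown here with a hypothetical service interface standing in for the real ones:

import java.util.ServiceLoader;

public class LoaderExample {
	interface SomeService { /* hypothetical stand-in */ }

	public static void demonstrate() {
		// Stock behavior: consults the current thread's context ClassLoader,
		// which in an XPages app has no knowledge of the implementation bundle
		ServiceLoader<SomeService> viaContext = ServiceLoader.load(SomeService.class);

		// Modified behavior: consults the API class's own ClassLoader, which
		// can see implementations attached to it as OSGi fragment bundles
		ServiceLoader<SomeService> viaOwnLoader =
			ServiceLoader.load(SomeService.class, SomeService.class.getClassLoader());
	}
}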

And, with that, finally, I had Jakarta Mail properly hooked up and working without having to jump through too many hoops. I'd still prefer to not have forked the source, but it was the best of a bad lot of choices.

The Final Tally

That brings the specs updated/added in this project to:

  • Expression Language 4.0
  • Contexts and Dependency Injection 3.0
    • Annotations 2.0
    • Interceptors 2.0
    • Dependency Injection 2.0
  • RESTful Web Services (JAX-RS) 3.0
  • Bean Validation 3.0
  • JSON Processing 2.0
  • JSON Binding 2.0
  • XML Binding 3.0
  • Mail 2.1
    • Activation 2.1

Not too shabby, if I say so myself. Technically, Servlet 5.0 is in there, but it doesn't actually bring any newer-than-2.4 powers to the Servlet container, so it's really just infrastructural details.

Now I'll just have the work of updating my client project and finally getting to use whatever that Yasson bug fix was that prompted this in the first place.

Tinkering With Testcontainers for Domino-based Web Apps

Mon Jul 19 12:46:48 EDT 2021

  1. Tinkering With Testcontainers for Domino-based Web Apps
  2. Adding Selenium Browser Tests to My Testcontainers Setup
  3. Building a Full Domino Image for JUnit Tests

(Fair warning: this post is not about testing, say, a normal XPages app via Testcontainers. One could get there on this path, but this has a lot of prerequisites that are almost specific to me alone.)

For a while now, I've seen the Testcontainers project hanging around in my periphery. The idea of the project is that it uses Docker to allow you to programmatically load services needed by your automated test suites, rather than having to have the servers running separately. This is a clean match for something like a WAR-based Java webapp that uses, say, Postgres as its backend database: with this, you can spin up a Postgres image from the public repository, fill it with test data, run the suite, and tear it down cleanly.

However, this is generally not a proper match for Domino. Since the code you're testing almost always directly uses Domino API calls (from Notes.jar or another source) and that means having a local Notes runtime initialized in the test code, it's no help to have a separate container somewhere. So, instead, I've been left watching from afar, seeing all the kids having fun in a playground I never got to go to.

The Change

This situation has shifted a bit for my needs, though, thanks to secondary effects of changes I've made in one of my client projects. This is the one where I do all the bells and whistles of my tinkering over the years: XPages outside Domino, building a bunch of NSFs with Jenkins, and so forth.

For a while, I had been building test suites run using tycho-surefire-plugin, but somewhat recently moved the project to maven-bundle-plugin to reap the benefits of that. One drawback, though, was that the test suites became much more difficult to run, in large part due to the restrictions on environment propagation in macOS.

Initially, I just let them wither, but eventually I started to rebuild the test suites. The app had REST services for a while, but they've grown in prominence since we've started gradually replacing XPages-based components with Angular apps. And REST services, fortunately, are best tested at a remove.

First Pass: liberty-maven-plugin

The first way I started writing test suites for the REST services was by using liberty-maven-plugin, which is a general Swiss army knife for working with Liberty during Maven builds, but has particular support for starting a server before tests and terminating it after them. So I set up a config that boots up a Liberty server that can then initialize using a configured Notes runtime, and I started writing tests against it using the Jakarta REST client API and a bit of HtmlUnit.

To its credit, this setup did its job swimmingly. It still has the down side that you have to balance teacups to get a Notes or Domino runtime configured, but, once you do, it'll work nicely.
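The shape of that configuration is the plugin's standard start/stop binding around the integration-test phases - a sketch, with the Notes-runtime environment specifics omitted:

<plugin>
	<groupId>io.openliberty.tools</groupId>
	<artifactId>liberty-maven-plugin</artifactId>
	<version>3.3.4</version>
	<executions>
		<execution>
			<id>start-liberty</id>
			<phase>pre-integration-test</phase>
			<goals>
				<goal>create</goal>
				<goal>deploy</goal>
				<goal>start</goal>
			</goals>
		</execution>
		<execution>
			<id>stop-liberty</id>
			<phase>post-integration-test</phase>
			<goals>
				<goal>stop</goal>
			</goals>
		</execution>
	</executions>
</plugin>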

Next Pass: Testcontainers

Still, it'd be all the better to avoid the need to have a local Notes or Domino setup to run these tests. There's still going to be some weirdness due to things like having to have the non-public Domino Docker image pre-loaded and having an ID file and notes.ini somewhere, but that can be overcome. Plus, I've already overcome those for the CI servers I have set up with each build: I have some dev IDs in the repository and, for each build, Jenkins constructs a Docker image housing the webapp and starts a container using a technique similar to what I described a few months back to run a Liberty app with Domino stuff brought in for support.

So I decided to try adapting that to work with Testcontainers. Instead of my Maven config constructing and launching a Liberty server, I would instead build a Docker image that would then be loaded in Java with the Testcontainers library. In the case of the CI server scripts, I used Bash to copy files into a scratch directory to avoid having to include the whole repo in the Docker build context (prohibitive on the Mac particularly), and so I sought to mirror that in Maven as well.

Building the App Image in Maven

To accomplish this goal, I used maven-resources-plugin to copy the app and support files to a scratch directory, and then com.spotify:dockerfile-maven-plugin to build the Docker image:

<!-- snip -->
    <!-- Copy Docker support resources into scratch space -->
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-resources-plugin</artifactId>
        <version>3.2.0</version>
        <executions>
            <execution>
                <?m2e ignore?>
                <id>prepare-docker-scratch</id>
                <goals>
                    <goal>copy-resources</goal>
                </goals>
                <phase>pre-integration-test</phase>
                <configuration>
                    <outputDirectory>${project.build.directory}/dockerscratch</outputDirectory>
                    <resources>
                        <!-- Dockerfile to build -->
                        <resource>
                            <directory>${project.basedir}</directory>
                            <includes>
                                <include>testcontainer.Dockerfile</include>
                            </includes>
                        </resource>
                        <!-- The just-built WAR -->
                        <resource>
                            <directory>${project.build.directory}</directory>
                            <includes>
                                <include>client-webapp.war</include>
                            </includes>
                        </resource>
                        <!-- Support files from the main repo Docker config -->
                        <resource>
                            <directory>${project.basedir}/../../../docker/support</directory>
                            <includes>
                                <!-- Contains Liberty server.xml, etc. -->
                                <include>liberty/client-app-a/**</include>
                                <!-- Contains a Domino server.id, names.nsf, and notes.ini -->
                                <include>notesdata-ciserver/**</include>
                            </includes>
                        </resource>
                    </resources>
                </configuration>
            </execution>
        </executions>
    </plugin>
    <!-- Build a Docker image to be used by Testcontainers -->
    <plugin>
        <groupId>com.spotify</groupId>
        <artifactId>dockerfile-maven-plugin</artifactId>
        <version>1.4.13</version>
        <executions>
            <execution>
                <?m2e ignore?>
                <id>build-webapp-image</id>
                <goals>
                    <goal>build</goal>
                </goals>
                <phase>pre-integration-test</phase>
                <configuration>
                    <repository>client-webapp-test</repository>
                    <tag>${project.version}</tag>
                    <dockerfile>${project.build.directory}/dockerscratch/testcontainer.Dockerfile</dockerfile>
                    <contextDirectory>${project.build.directory}/dockerscratch</contextDirectory>
                    <!-- Don't attempt to pull Domino images -->
                    <pullNewerImage>false</pullNewerImage>
                </configuration>
            </execution>
        </executions>
    </plugin>
<!-- snip -->

The Dockerfile itself is basically what I had in the afore-linked post, minus the special ENTRYPOINT stuff.

Of note in this config is <pullNewerImage>false</pullNewerImage> in the dockerfile-maven-plugin configuration. Without that set, the plugin would attempt to look for a Domino image on the public Dockerhub and then fail because it's unavailable. With that behavior disabled, it will just use the one locally loaded.

Configuring the Tests

Now that I had that configured, it was time to adjust the tests to suit. Previously, I had been using system properties passed from the Maven environment into the test runner to identify the Liberty server, but now the container initialization will happen in code. Since this app is pretty heavyweight, I didn't want to do what most of the Testcontainers examples show, which is to let the Testcontainers JUnit hooks spawn and terminate containers for each test. Instead, I set up a centralized class to launch the container once:

package it.com.example;

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

public enum AppTestContainers {
    instance;
    
    public final GenericContainer<?> webapp;
    
    @SuppressWarnings("resource")
    private AppTestContainers() {
        webapp = new GenericContainer<>(DockerImageName.parse("client-webapp-test:1.0.0-SNAPSHOT")) //$NON-NLS-1$
                .withExposedPorts(8080);
        webapp.start();
    }
}

With this setup, there will only be one instance of the container launched for the whole test suite, and then Testcontainers will shut it down for me at the end. I can also use the normal mechanisms from the Testcontainers docs to get the actual name and port it ended up mapped to:

    public String getServicesBaseUrl() {
        String host = AppTestContainers.instance.webapp.getHost();
        int port = AppTestContainers.instance.webapp.getFirstMappedPort();
        String context = "clientapp";
        return AppPathUtil.concat("http://" + host + ":" + port, context, ServicesUtil.DEFAULT_JAXRS_ROOT);
    }

Once I did that, all the tests that had previously been running against a liberty-maven-plugin-run server now worked against the Docker container, and I no longer have any dependency on the local environment actually having Notes or Domino fully installed. Neat!
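For illustration, a test in this style ends up looking something like this - the resource path and base class here are hypothetical, but the client usage is the stock Jakarta REST client API:

import static org.junit.jupiter.api.Assertions.assertFalse;

import org.junit.jupiter.api.Test;

import jakarta.ws.rs.client.Client;
import jakarta.ws.rs.client.ClientBuilder;

public class TestExampleService extends AbstractWebClientTest {
	@Test
	public void testExampleEndpoint() {
		Client client = ClientBuilder.newClient();
		try {
			// getServicesBaseUrl() resolves the Testcontainers-mapped host and port
			String body = client.target(getServicesBaseUrl() + "/example") //$NON-NLS-1$
				.request()
				.get(String.class);
			assertFalse(body.isEmpty());
		} finally {
			client.close();
		}
	}
}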

A Catch: Running on my Jenkins Server

Since the whole point of Docker is to make things reproducible across environments, I was flush with confidence when I checked these changes in and pushed them up to the repo. I watched with bated breath as Jenkins picked up the change and started to build. My heart sank, though, when it got to the integration test suite and it failed with a bunch of:

Jul 19, 2021 11:02:10 AM org.testcontainers.utility.ResourceReaper lambda$null$1
WARNING: Can not connect to Ryuk at localhost:49158
java.net.ConnectException: Connection refused (Connection refused)
    at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
    at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
    at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
    at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.base/java.net.Socket.connect(Socket.java:609)
    at org.testcontainers.utility.ResourceReaper.lambda$null$1(ResourceReaper.java:163)
    at org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
    at org.testcontainers.utility.ResourceReaper.lambda$start$2(ResourceReaper.java:159)
    at java.base/java.lang.Thread.run(Thread.java:829)

What the heck? Well, I had noticed in my prep that "Ryuk" is the name of something Testcontainers uses in its orchestration work, and is what allowed me to spawn the container manually above without explicitly terminating it. I looked around for a while and saw that a lot of people had reported similar trouble over the years, but usually it was due to some quirk in a specific version of Docker on Windows or macOS, which was not the case here. I did, though, find that Bitbucket Pipelines tripped over this at one point, and it seemed to be due to their switch of using safer user namespaces. Though it sounds like newer versions of Testcontainers fixed that, I figured it's pretty likely that I was hitting a variant of it, as I do indeed use namespace remapping.

So I tweaked my maven-failsafe-plugin configuration to set the TESTCONTAINERS_RYUK_DISABLED environment variable to true and, to be safe, added a shutdown hook at the end of my AppTestContainers init method:

Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    webapp.close();
}));
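The environment-variable side of that is just the plugin's stock configuration block - roughly:

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-failsafe-plugin</artifactId>
	<configuration>
		<environmentVariables>
			<!-- Skip the Ryuk reaper, which chokes on user-namespace remapping -->
			<TESTCONTAINERS_RYUK_DISABLED>true</TESTCONTAINERS_RYUK_DISABLED>
		</environmentVariables>
	</configuration>
</plugin>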

Now, Testcontainers doesn't use its Ryuk container, but the actual app container loads up just fine and is destroyed at the end of the suite. Perfect! If all continues to go well, this will mean that it'll be one step easier for other devs to run the test suites regardless of their local setup, which is always a thorn in the side of Domino-realm testing.

Closing: What About Testing Domino Apps?

I mentioned in my disclaimer at the start that this is specifically about testing apps that use a Domino runtime, not apps on Domino. Still, I bet you could do this to test a Domino app that you deploy as an NSF and/or OSGi plugins, and I may do that myself down the line so that the test suite even-more-closely matches what is actually running in production. You could adjust the maven-resources-plugin config above (or use maven-dependency-plugin) to bring in NSFs built earlier in the build with NSF ODP Tooling as well as OSGi update sites and then have your Dockerfile copy those into the Domino data directory and the workspace/applications/eclipse directory. Similarly, if you had a Domino addin that you launch as a task and which then itself listens on a port, you could do the same there.

It's still not as convenient as being able to just easily run Domino API tests without all the scaffolding, and it implies a lot of structure that makes these more firmly "integration" than "unit" tests, but that's still a powerful capability to have.

Tinkering With Cross-Container Domino Addins

Sun May 16 13:35:54 EDT 2021

Tags: docker domino

A good chunk of my work lately involves running distinct processes with a Domino runtime, either run from Domino or standalone for development or CI use. Something that had been percolating in the back of my mind was another step in this: running these "addin-ish" programs in Docker in a separate container from Domino, but participating in that active Domino runtime.

Domino addins in general are really just separate processes and, while they gain some special properties when run via load foo on the console or OSLoadProgram in the C API, that's not a hard requirement to getting a lot of things working.

I figured I could get this working and, armed with basically no knowledge about how this would work, I set out to try it.

Scaffolding

My working project at hand is a webapp run with the standard open-liberty Docker images. Though I'm using that as a starting point, I had to bring in the Notes runtime. Whether you use the official Domino Docker images from Flexnet or build your own, the only true requirement is that it match the version used in the running server, since libnotes does a version check on init. My Dockerfile looks like this:

FROM --platform=linux/amd64 open-liberty:beta

USER root
RUN useradd -u 1000 notes

RUN chown -R notes /opt/ol
RUN chown -R notes /logs

# Bring in the Domino runtime
COPY --from=domino-docker:V1200_03252021prod /opt/hcl/domino/notes/latest/linux /opt/hcl/domino/notes/latest/linux
COPY --from=domino-docker:V1200_03252021prod /local/notesdata /local/notesdata

# Bring in the Liberty app and configuration
COPY --chown=notes:users /target/jnx-example-webapp.war /apps/
COPY --chown=notes:users config/* /config/
COPY --chown=notes:users exec.sh /opt/
RUN chmod +x /opt/exec.sh

USER notes

ENV LD_LIBRARY_PATH "/opt/hcl/domino/notes/latest/linux"
ENV NotesINI "/local/notesdata/notes.ini"
ENV Notes_ExecDirectory "/opt/hcl/domino/notes/latest/linux"
ENV Directory "/local/notesdata"
ENV PATH="${PATH}:/opt/hcl/domino/notes/latest/linux:/opt/hcl/domino/notes/latest/linux/res/C"

EXPOSE 8080 8443

ENTRYPOINT ["/opt/exec.sh"]

I'll get to the "exec.sh" business later, but the pertinent parts now are:

  • Adding a notes user (to avoid permissions trouble with the data dir, if it comes up)
  • Tweaking the Liberty container's ownership to account for this
  • Bringing in the Domino runtime
  • Copying in my WAR file from the project and associated config files (common for Liberty containers)
  • Setting environment variables to tell the app how to init

So far, that's largely the same as how I run standalone Notes-runtime-enabled apps that don't talk to Domino. The only main difference is that, instead of copying in an ID and notes.ini, I instead mount the data volume to this container as I do with the main Domino one.

Shared Memory

The big new hurdle here is getting the separate apps to participate in Domino's shared memory pool. Now, going in, I had a very vague notion of what shared memory is and an even vaguer one of how it works. Certainly, the name is straightforward, and I know it in Domino's case mostly as "the thing that stops Notes from launching after a crash sometimes", but I'd need to figure out some more to get this working. Is it entirely a filesystem thing, as the Notes problem implies? Is it an OS-level thing with true memory? Well, both, apparently.

Fortunately, Docker has this covered: the --ipc flag for docker run. It has two main modes: you can participate in the host's IPC pool (essentially like what a normal, non-contained process does) or join another container specifically. I opted for the latter, which involved changing the launch arguments for both the Domino container and the app container.

For Domino, I added --ipc=shareable to the argument list, basically registering it as an available host for other containers to glom on to.

For the separate app, I added --ipc=container:domino, where "domino" is the name of the Domino container.

With those in place, the "addin" process was able to see Domino and do addin-type stuff, like adding a status line and calling AddinLogMessageText to display a message on the server's console.

Great: this proved that it's possible. However, there were still a few show-stopping problems to overcome.

PIDs

From what I gather, Notes keeps track of processes sharing its memory by their reported process IDs. If you have a process that joins the pool and then exits (maybe only if it exits abruptly; I'm not sure) and then tries to rejoin with the same PID, it will fail on init with a complaint that the PID is already registered.

Normally, this isn't a problem, as the OS hands out distinct PIDs all the time. This is trouble with Docker, though: by default, the main process in a Docker container sees itself as PID 1, and will start as such each time. In the Domino container, its PID 1 is "start.sh", and that's still going, and it's not going to hear otherwise from some other process calling itself the same.

Fortunately, this was a quick fix: Docker's --pid option. Though the documentation for this is uncharacteristically slight, it turns out that the syntax for my needs is the same as for --ipc. Thus: --pid=container:domino. Once I set that, the running app got a distinct PID from the pool. That was pleasantly simple.

SIGTERM

And now we come to the toughest problem. As it turns out, dealing with SIGTERM - the signal sent by docker stop - is a whole big deal in the Java world. I banged my head at this for a while, with most of the posts I've found being not quite applicable, not working at all for me, or technically working but only in an unsustainable way.

For whatever reason, the Open Liberty Docker image doesn't handle this terribly well - when given a SIGTERM order, it doesn't stop the servlet context before dying, which means the contextDestroyed method in my ServletContextListener (such as this one) doesn't fire.

In many webapp cases, this is fine, but Domino is extremely finicky when it comes to memory-sharing processes needing to exit cleanly. If a process calls NotesInit but doesn't properly call NotesTerm (and close all its Notes-enabled threads), the server panics and dies. This is... not great behavior, but it is what it is, and I needed to figure out how to work with it. Unfortunately, the Liberty Docker container wasn't doing me any favors.

One option is to use Runtime.getRuntime().addShutdownHook(...). This lets you specify a Thread to execute when a SIGTERM is received, and it can work in some cases. It's a little shaky sometimes, though, and it's bad form to riddle otherwise-normal webapps with such things: ideally, even webapps that you intend to run in a container should be written such that they can participate in a normal multi-app environment.

What I ended up settling on was based on this blog post, which (like a number of others) uses a shell script as the main entrypoint. That's a common idiom in general, and Open Liberty's image does it, but its script doesn't account for this, apparently. I tweaked that post's shell script to use the Liberty start/stop commands and ended up with this:

#!/usr/bin/env bash
set -x

term_handler() {
  /opt/ol/helpers/runtime/docker-server.sh /opt/ol/wlp/bin/server stop
  exit 143; # 128 + 15 -- SIGTERM
}

trap 'kill ${!}; term_handler' SIGTERM

/opt/ol/helpers/runtime/docker-server.sh /opt/ol/wlp/bin/server start defaultServer

# echo the Liberty console
tail -f /logs/console.log &

while true
do
  tail -f /dev/null & wait ${!}
done

Now, when I issue a docker stop to the container, the script issues an orderly shutdown of the Liberty instance, which properly calls the contextDestroyed method and allows my code to close down its ExecutorService and call NotesTerm. Better still, Domino keeps running without crashing!

Conclusion

My final docker run scripts ended up being:

Domino

docker run --name domino \
	-d \
	-p 1352:1352 \
	-v notesdata:/local/notesdata \
	-v notesmisc:/local/notesmisc \
	--cap-add=SYS_PTRACE \
	--ipc=shareable \
	--restart=always \
	iksg-domino-12beta3

Webapp

docker build . -t example-webapp
docker run --name example-webapp \
	-it \
	--rm \
	-p 8080:8080 \
	-v notesdata:/local/notesdata \
	-v notesmisc:/local/notesmisc \
	--ipc=container:domino \
	--pid=container:domino \
	example-webapp

(Here, the webapp is run to be temporary and tied to the console, hence -it, --rm, and no -d)

One thing to note is that there's nothing webapp- or Java-specific here. A big part of Docker's appeal is that it removes a lot of the hurdles to running whatever-the-heck type of program you want, so long as there's a Linux Docker image for it. I just happen to default to Java webapps for basically everything nowadays. The above script could be tweaked to work with most anything: the original post had it working with a Node app.

Now, considering that I was starting from nearly scratch here, I certainly can't say whether this is a bulletproof setup or even a reasonable idea in general. Still, it seems to work, and that's good enough for me for now.

Domino HttpService and the NSF Router Project

Thu Mar 18 15:27:04 EDT 2021

Tags: domino java

In my last post and its predecessor, I talked about my tinkering at the XspCmdManager level of Domino's HTTP stack and then more specifically about the com.ibm.designer.runtime.domino.adapter.HttpService class.

The Stack

Now, HttpService is about as generic a name as you can get for this sort of thing, and it doesn't really tell you what it represents. You can think of Domino's HTTP stack since at least the 8.5 era as having two cooperating parts: the core native portion that handles HTTP requests in basically the same way as Domino always did, plus the Java layer as organized by XspCmdManager. The Java layer gets "right of first refusal" for any incoming request that wasn't handled by a DSAPI plugin: before routing the request to the legacy HTTP code, Domino asks XspCmdManager if it'd like to handle it, and only takes care of it at the native layer if Java says no.

XspCmdManager on its own doesn't do much. It accepts the JNI calls from the native side, but otherwise quickly passes the buck to LCDEnvironment (I assume the "LCD" here stands for "Lotus Component Designer"). LCDEnvironment, in turn, really just aggregates registered handlers and dispatches requests. It does a little work to handle exception cases more cleanly than XspCmdManager would, but it's mostly just a dispatcher.

The things that it dispatches to, though, are the HttpServices. These are registered by using the com.ibm.xsp.adapter.serviceFactory IBM Commons extension point, such as here in the plugin.xml form:

<extension point="com.ibm.commons.Extension">
  <service type="com.ibm.xsp.adapter.serviceFactory" class="org.openntf.nsfrouter.NSFRouterServiceFactory" />
</extension>

The class you register there is an implementation of IServiceFactory, which supplies zero or more HttpService implementations on request.

As a side note, I've been using this extension point for years and years, but never before to actually handle HTTP requests. It's extremely convenient in that it's something you can register that is loaded up immediately when the HTTP task starts and is notified as it's terminating, giving you a useful lifecycle without having to wait for a request to come in. I learned about it from the OpenNTF Domino API team and it's been a regular part of my toolkit since.

The HttpService

So that brings us to the HttpService implementation classes themselves. Once LCDEnvironment has gathered them all together, it asks each one in turn (via #isXspUrl) if it can handle a given URL. If any of them say that they can, then it calls the #doService method on each in turn (based on the #getPriority method's return value) until one says that it handled it.
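Put together, a skeletal service and its factory look roughly like this - I'm going from the signatures as they appear in the adapter classes, so treat the specifics as approximate:

package org.openntf.example;

import java.io.IOException;
import java.util.List;

import javax.servlet.ServletException;

import com.ibm.designer.runtime.domino.adapter.ComponentModule;
import com.ibm.designer.runtime.domino.adapter.HttpService;
import com.ibm.designer.runtime.domino.adapter.IServiceFactory;
import com.ibm.designer.runtime.domino.adapter.LCDEnvironment;
import com.ibm.designer.runtime.domino.bootstrap.adapter.HttpServletRequestAdapter;
import com.ibm.designer.runtime.domino.bootstrap.adapter.HttpServletResponseAdapter;
import com.ibm.designer.runtime.domino.bootstrap.adapter.HttpSessionAdapter;

public class ExampleServiceFactory implements IServiceFactory {
	@Override
	public HttpService[] getServices(LCDEnvironment env) {
		return new HttpService[] { new ExampleService(env) };
	}
}

class ExampleService extends HttpService {
	public ExampleService(LCDEnvironment env) {
		super(env);
	}

	@Override
	public boolean isXspUrl(String fullPath, boolean arg1) {
		// Claim any URL you want this service to handle
		return fullPath != null && fullPath.contains(".nsf"); //$NON-NLS-1$
	}

	@Override
	public boolean doService(String contextPath, String path, HttpSessionAdapter session,
			HttpServletRequestAdapter req, HttpServletResponseAdapter resp) throws ServletException, IOException {
		// Return true if this service handled the request, false to pass it on
		return false;
	}

	@Override
	public void getModules(List<ComponentModule> modules) {
		// No ComponentModules to contribute
	}
}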

There are a few main HttpService implementations in action on Domino:

  • com.ibm.domino.xsp.module.nsf.NSFService, which handles in-NSF XPages and resources
  • com.ibm.domino.xsp.adapter.osgi.OSGIService, which handles OSGi-registered servlets and webapps
  • com.ibm.domino.xsp.module.nsf.StaticResourcesService, which helps serve static resources

These services also tend to go another layer deeper, passing actual requests off to ComponentModule implementations like NSFComponentModule. That's beyond the scope of what I'm talking about today, but it's interesting to see just how much the Domino stack is basically one giant webapp that contains progressively smaller bounded webapps, like a Matryoshka doll.

For those keeping track, we're about here on a typical XPages call stack:

     at com.ibm.domino.xsp.module.nsf.NSFComponentModule.doService(NSFComponentModule.java:1336)
     at com.ibm.domino.xsp.module.nsf.NSFService.doServiceInternal(NSFService.java:662)
     at com.ibm.domino.xsp.module.nsf.NSFService.doService(NSFService.java:482)
     at com.ibm.designer.runtime.domino.adapter.LCDEnvironment.doService(LCDEnvironment.java:357)
     at com.ibm.designer.runtime.domino.adapter.LCDEnvironment.service(LCDEnvironment.java:313)
     at com.ibm.domino.xsp.bridge.http.engine.XspCmdManager.service(XspCmdManager.java:272)

For our purposes this week, the #isXspUrl and #doService methods on HttpService are our stopping points.

NSF Router Service

In a Twitter conversation yesterday, Per Lausten gave me the idea of using this low level of access to implement improved in-NSF routing. That is to say, if you want "foo.nsf/some/nice/url/here" to actually load up "index.xsp?path=nice/url/here" or the like. Generally, if you want to do this, you either have to set up Web Site rules in names.nsf or settle for next-best options like "index.xsp/nice/url/here".

Since an HttpService comes in at a low-enough level to tackle this, though, it's entirely doable to improve this situation there. So, this morning, I did just that. This new project is a pretty simple one, with all of the action going on in one class.

The way it works is that it looks for a ".nsf" URL and, when it finds one, attempts to load a file or classpath resource named "nsfrouter.properties". That resource is a Java Properties file enumerating the regex-based routing rules you'd like. For example:

foo/(\\w+)=somepage.xsp?bit=bar
baz=somepage.xsp

When found, the class loads up the rules and then uses them to check incoming URLs.

The #doService method then picks up that URL, does a String#replaceAll call to map it to the target, and then redirects the browser over:

NSF Router in action

The user still ends up at the "uglier" URL, but that's the safest way to do it without breaking on-page references.
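Stripped of the HttpService scaffolding, the core matching-and-mapping step amounts to something like this sketch (names hypothetical):

import java.util.Properties;
import java.util.regex.Pattern;

public class RouterRules {
	/**
	 * Returns the redirect target for the first rule whose regex matches the
	 * in-NSF path, or null if no rule applies.
	 */
	public static String findTarget(Properties rules, String pathInfo) {
		for(String regex : rules.stringPropertyNames()) {
			if(Pattern.matches(regex, pathInfo)) {
				return pathInfo.replaceAll(regex, rules.getProperty(regex));
			}
		}
		return null;
	}
}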

I felt like that was a neat little exercise, and one that's not only potentially useful on its own but also serves as a good way to play around with these somewhat-lower-level Domino components.

Notes From A Week in Administration-Land

Tue Jan 26 20:06:42 EST 2021

Tags: admin domino

This past week, I've been delving back into the world of Domino administration for a client and taking the opportunity to brush up on the niceties we've gotten in the past few releases. A few things struck me as I went along, so I'll list them here in no particular order.

Containers are the Way to Go

This isn't too much of a surprise, I suppose, but I imagine the only reason I'll set up a server the "installer" way again is for dev servers on Windows. Using Docker just skips over that installation phase completely and makes things so much quicker and more consistent.

It also essentially forces you to make an install-support script in the form of a Dockerfile. I started out using the default one from FlexNet, but then had a need to install fontconfig to avoid this delightful little gotcha that crops up with Poi. Since the program container is intended to be ephemeral, this meant that I had to make a Dockerfile to make a proper image for it, and now there's inherently an artifact for future admins to use.

Cluster Symmetry is a Delight

Years ago, I wrote a "Generic Replicator" agent that I would configure per-server to tell it to do the work of mirroring all NSFs. It's done yeoman's work since then, but I'm all the happier to use a built-in capability. So, tip-of-the-hat to the team that added that one in.

It'd be nice if it didn't also require notes.ini settings, but I suppose that's the way of things.

DBMT is Still Great

I know it's years and years old at this point, but I can never help but gush over DBMT. It's great and should be promoted to an on-by-default setting instead of being something you have to know to configure via a Program document.

It Still Sucks to Configure Every Server Doc

Every time I make a new server document, there's this pile of obligatory "fix the defaults" work: filling in all the stuff on the security tab, enabling web site documents, changing all the fiddly Ports tab options (including having to enable enforcing access settings (?!)), and so forth. That's on top of the giant tower of notes.ini settings in the Configuration document, but at least those can be applied to a server group and are less tedious once you know them.

I put an idea in for that last year and it sounds like it's in the works, so... that'll be nice.

The Server Doc Could Use Lots More Settings

I took the opportunity of re-laying-out servers to move as much as I can out of the data directory - namely, DAOS, transaction logs, FT indexes, and view indexes. The first two of these are configurable in the server doc, which is nice, but the latter two require specification via notes.ini properties. Since they're server-specific, it feels like a leaky abstraction to put them in a Configuration document - while it would work, and I could remove them from the doc once applied, that's just gross.

It would also be good to have a way to properly share filesystem-bound files and have them auto-deployed. For example, I have a notes.ini property in the Configuration doc for JavaUserOptionsFile=jvm.properties. The property is set automatically, but I have to create the file manually per-server. I could certainly write an agent to do that, and it'd work, but it's server configuration and belongs in the Directory.

Ideally, I'd like to be able to obliterate the container and data image, recreate them with the ID and location info, and have the server reconstitute itself after that entirely from NSF-based configuration.

HTTP is Better Than It Used to Be, But Still Needs Work

I'd love to replace my use of the WebSphere connector headers with X-Forwarded-For, but it doesn't work like that, and I'm not about to write a DSAPI filter to do it. Ideally, that'd be supported and promoted to the server config.

Same goes for Java-related settings that you just have to kind of magically know, like HTTPJVMMaxHeapSize+HTTPJVMMaxHeapSizeSet and ENABLE_SNI (I don't know why you wouldn't want SNI enabled by default).

The SSL cert manager in V12 can't come soon enough.

HTTP's better off than it was for a while, and it's nice that the TLS stack isn't dangerous now, but knowing the right way to configure it is still essentially playground lore.

Domino Configuration Tuner Deserves a New Life

I remember discovering DCT back at my old company in the 7.x days, but it unfortunately looks like it hasn't been updated since not long after that, and now doesn't even parse the current Domino version correctly. If it was brought up to date and produced reliable suggestions, it'd be huge.

As it is, my server configuration docs have all sorts of notes.ini properties like NLCACHE_SIZE=67108864 and UPDATE_NOTE_MINIMUM=40 that I saw recommended somewhere once years ago, but I have no idea whether they're still good or appropriately-sized. I want the computer to tell me that (and, in a lot of cases, just do the right thing without configuration).

Conclusion

Anyway, those are the things that came to me as I was working on this. The last few major releases have had some huge server-side improvements, and I like that the pace is continuing. Good work, server core team.

The Intricate Work of OSGi Dependencies on Domino

Wed Dec 02 15:03:06 EST 2020

Tags: domino osgi
  1. Converting Tycho Projects to maven-bundle-plugin, Initial Phase
  2. Winter Project #2: Maven P2 Repository Resolver
  3. OpenNTF Fork of p2-maven-plugin
  4. The Intricate Work of OSGi Dependencies on Domino

One of the main goals of OSGi is proper runtime dependency management, not only allowing a bundle to declare what its dependencies are to ensure that they're there, but even to select one of multiple available versions. For example, you might have one bundle that expects Guava 15 but not higher and another that expects 18 or above, and both can be loaded successfully if you have multiple Guava versions installed. The goal is that you're supposed to be able to just throw a whole bunch of bundles into a pot and the system will figure it out.

Before I get into why this system is hobbled on Domino, I'll add a little more background.

Mechanisms

For our purposes here, bundles have two main ways to declare what they are (there are more than this, but they're not relevant right now):

  • The bundle's symbolic name combined with its bundle version. For example, the core bundle for ODA is named "org.openntf.domino" and has a version like "11.0.1.202006091416".
  • The bundle's exported packages, which are Java class packages and might also individually have versions. Using ODA again as an example, it exports a slew of packages, such as org.openntf.domino and org.openntf.domino.nsfdata, though it doesn't specify versions for any of these.

The versions of the bundle and the exported packages don't need to be the same, nor do all the exported packages need to have the same version. This shows up a lot in "spec bundles", where a vendor will wrap a standard spec API in their own bundle for various reasons. For example, Apache Geronimo has a bundle called "org.apache.geronimo.specs.geronimo-jaxrs_2.1_spec", which provides the JAX-RS 2.1 spec. The bundle itself has a version of "1.1.0", while the exported packages are all version "2.1".

To require a bundle by name, you use the Require-Bundle header, like Require-Bundle: org.openntf.domino;bundle-version="11.0.0", which requires specifically the ODA bundle, version 11 or above. To require a package, you use Import-Package, like Import-Package: javax.ws.rs;version="2.1.0". The two methods have some implications when it comes to how the ClassLoaders work, which I touched on a bit earlier this year.
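Seen in a MANIFEST.MF, the two styles look like this (note the significant leading space on continuation lines):

Bundle-SymbolicName: com.example.consumer
Require-Bundle: org.openntf.domino;bundle-version="11.0.0"
Import-Package: javax.ws.rs;version="2.1.0",
 javax.ws.rs.core;version="2.1.0"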

Culture Differences

The package-based mechanism is generally preferred nowadays over the bundle-based one, largely for the kind of flexibility and division-of-responsibilities it provides. For example, if I want version 2.1.0 of javax.ws.rs, my code shouldn't care at all whether it comes from Geronimo's bundle, JBoss's, or anywhere else - it's all the same thing (in theory). This is extremely common for projects created with bnd and tools based on it, which can generate imported- and exported-package listings automatically based on your code and some small configuration. For example, the bundles that make up Open Liberty use bnd config files that have some loose configuration, which is then processed out to full listings of packages with version information. This is also often paired with OSGi's "Capability" system, but that's one of the "not important for now" things.

Domino - and I believe this is inherited from Eclipse, which does the same thing - is largely based on bundle requirements. For example, the org.eclipse.jdt.ui bundle (part of the Java development tools in Eclipse and Designer) requires a bevy of bundles by name and version and doesn't tag any of its exported packages with versions. This is similarly reinforced in the tools. Eclipse, unsurprisingly, uses Tycho to bring the "Eclipse PDE" style to the table, where the MANIFEST.MF file is more hand-crafted, and the tools don't do as much for you automatically. You can go full Import-Package and attach versions to your exported packages with Eclipse, but the tooling doesn't encourage it.

Conflicts in Practice

That brings us back to how this all contributes to making working with OSGi a little extra annoying on Domino. This is something that comes up constantly in my XPages Jakarta EE Support project: I want to bring in an implementation component for a JEE spec, but it will either already have or will have generated for it OSGi rules that lean towards the "package-style" of doing things, versions and all. Because there are common packages (like, say, javax.activation) that are supplied by bundles present in the Domino runtime, I want to use those. However, since the packages from Domino don't have versions specified, I need to re-wrap the bundles to import the packages without versions. Thus begins this ongoing nightmare.

Another approach I could take would be to bring my own, nicely-OSGi-ified versions of the afflicted packages, and options abound. However, that leads to a sneaky other trouble: because some of the Domino bundles are multi-spec monsters - with com.ibm.designer.lib.javamail being the absolute worst culprit - some other bundle on the system might casually import javax.activation and javax.mail. If I have this other spec implementation floating around, then it could get matched to my bundle for the former and IBM's for the latter... and crash right into the "exposed to a package via two dependency chains" problem. That wouldn't necessarily be an issue if everything used versions on packages, but leaving them off means it's kind of up to the container to match bundles to each other, and it's entirely happy creating impossible conflicts.

Ways Around It

When working through this sort of trouble, I've found a few ways around the things I've run into. The first is what I mentioned above: using p2-maven-plugin to re-wrap bundles with instructions that make them more Domino-friendly - chiefly, importing the Domino-supplied packages without version constraints so they can match the unversioned exports.
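As an example of the shape of that re-wrapping with the OpenNTF fork of p2-maven-plugin - the artifact and instruction here are illustrative, not from a real config:

<plugin>
	<groupId>org.openntf.maven</groupId>
	<artifactId>p2-maven-plugin</artifactId>
	<executions>
		<execution>
			<id>generate-p2-site</id>
			<phase>generate-resources</phase>
			<goals>
				<goal>site</goal>
			</goals>
			<configuration>
				<artifacts>
					<artifact>
						<id>org.glassfish:jakarta.json:2.0.0</id>
						<override>true</override>
						<instructions>
							<!-- Import the Domino-supplied packages without version constraints -->
							<Import-Package>javax.activation,*</Import-Package>
						</instructions>
					</artifact>
				</artifacts>
			</configuration>
		</execution>
	</executions>
</plugin>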

Aside from reworking existing bundles, I have a few times ended up creating fragment bundles that glom onto one bundle to tie it to another one after the fact, usually for the components that bridge different JEE specs.

The Wildcard: The System Bundle

In general, with OSGi, you can expect the packages you need to be provided in bundles, but it also allows for the core JDK to be implicitly available - that is, things like java.lang don't need to be imported, and are always present. That's fine, since it's generally well-enough-defined what makes up the JDK and what you'd need to instead bring in.

However, the Domino core classpath doesn't contain just the JDK, but also anything in jvm/lib/ext and (as a curveball) ndext from the Domino directory. Above, I specifically pointed out the javax.activation package, and that's because it suffers the most from both the javamail bundle as well as this. If you run tell http osgi packages javax.activation on Domino's console, you should see that bundles get this from two places: some use com.ibm.designer.lib.javamail, while others use the system bundle, org.eclipse.osgi. That "system bundle" concept is not only the thing in charge, but it also passively provides access to classes found in the lower-level classpath. In a fully OSGi environment, that system classpath will be pretty clean, but Domino's isn't that.

The good news here is that it isn't usually a big problem. If you import packages by a non-zero version, they'll never match from the system bundle this way. Still, it's always there, lurking, and the problems increase if you start adding more JAR files to jvm/lib/ext to avoid policy or amgr-memory issues. We've seen this periodically with ODA, where we used to support the use case of putting the core files there and using just a shim at the XSP layer, before it became too much of a hassle to manage. I ran into it again more recently when Guava showed up there: because I hadn't specified a version, it was able to match from the system bundle, and ended up with classes available at compile time but missing at runtime.

The Upshot

The upshot here is that there's no simple advice for dealing with this. Cultural and implementation factors make bringing third-party code into Domino unusually difficult, but it can generally be dealt with via various patching mechanisms. The XPages JEE project's dependency module ended up turning into a trove of such workarounds, so perhaps it can be useful if you ever run into this sort of thing yourself.

Writing Domino Server Addins With GraalVM Native Image

Sun Sep 27 15:35:56 EDT 2020

Tags: graalvm domino

I was thinking the other day about the task of writing a Domino server addin, the kind that you run by typing load foo on the server console. The way this is generally done is via C or the like: you write a program using your dusty old copy of the C API Toolkit and have an AddinMain function as the entrypoint. That's fine enough if you want to write in C, but, even beyond the language, it carries the tremendous overhead of a fiddly compilation chain that differs per-platform.

I got to thinking, then, about GraalVM, and specifically its Native Image capability. Before I get into what I did, I figure this warrants some background.

What is GraalVM?

GraalVM is a project from Oracle that is, roughly, an alternative core Java Virtual Machine. It's designed to serve a number of goals, but the main way that I've seen it used is to improve the speed and efficiency of Java-based programs. It also has some neat-looking capabilities for running multiple languages in one app space, but I have yet to look into that.

The Native Image capability is a way to compile Java applications to native executables for a given platform. So, instead of having a JAR file that you then run with an installed JVM, you'd have an executable that you run directly, and which effectively acts as its own "VM". This means you end up with just "some executable" on your system, and the lack of bootstrapping needed to run it opens up some possibilities.

Domino Server Addins

Though Domino server addins have their own set of functions within the Notes C API, they're really just executables that Domino launches as sub-processes. If you have a basic executable named foo in your Domino program directory, you can type load foo and it'll run it, whether or not the executable does anything with the Notes API at all. It won't necessarily be useful if it doesn't use the Notes API, but it'll run.

It's this "just an executable" bit, though, that was a contributing factor to making Java not a practical language for this. That's also where RunJava fit in: the runjava executable just initializes a JVM and loads the named class, which is afterward responsible for everything, but that bootstrapping was nonetheless obligatory work to get a Java app loaded this way.

The Combination

Once I realized these things, it wasn't a far reach to try implementing an addin this way. One of my initial concerns was the way addins use AddinMain as a C-type entrypoint - my knowledge of how that sort of thing works is limited enough that I wasn't sure if GraalVM's annotations would suffice. However, the C API documentation relieved my worry: using that function name is just a convenience that handles some of the bootstrapping for you. If you just use a normal main(...) entrypoint, the only difference is that you're on the hook for managing your status line more (the thing that shows up when you do show tasks).

Fortunately, the addin-related methods in the lotus.notes.addin.JavaServerAddin class in Notes.jar are extremely-thin wrappers around native calls and aren't actually specific to RunJava in any way. You can subclass it and use it in essentially the same way as in a RunJava addin:

package frostillicus.graalvm;

import lotus.domino.NotesException;
import lotus.notes.addins.JavaServerAddin;

public class Main extends JavaServerAddin {
	static {
		System.setProperty("java.library.path", "/opt/hcl/domino/notes/11000100/linux"); //$NON-NLS-1$ //$NON-NLS-2$
		System.loadLibrary("notes"); //$NON-NLS-1$
		System.loadLibrary("lsxbe"); //$NON-NLS-1$
	}
	
	public static void main(String[] args) {
		new Main().start();
	}
	
	public Main() {
		setName("GraalVM Test");
	}
	
	@Override
	public void runNotes() throws NotesException {
		AddInLogMessageText("GraalVM Test initialized");
		int taskId = AddInCreateStatusLine(getName());
		try {

			// Do your work here

		} catch(Throwable t) {
			t.printStackTrace();
		} finally {
			AddInDeleteStatusLine(taskId);
		}
	}

}

GraalVM-specific configuration

The GraalVM project provides a Maven plugin to do native compilation for you, and I make use of that in the project's pom.xml:

<plugin>
	<groupId>org.graalvm.nativeimage</groupId>
	<artifactId>native-image-maven-plugin</artifactId>
	<version>20.2.0</version>
	<configuration>
		<imageName>${project.name}</imageName>
		<mainClass>frostillicus.graalvm.Main</mainClass>
		<!-- snip <buildArgs> -->
	</configuration>
	<executions>
		<execution>
			<goals>
				<goal>native-image</goal>
			</goals>
			<phase>package</phase>
		</execution>
	</executions>
</plugin>

Including that in your project will produce a native executable for your current platform in the target folder, alongside the normal JAR file.

The bit I snipped out, though, ends up being important. In a similar way to what happens during Android "Java" compilation, the GraalVM native compiler builds a map of all of the code used in your project to create its native representation. Additionally, it doesn't support reflection as casually as a normal JVM does, and doing a compilation like this shows just how common reflection is in Java.

Reflection and JNI Configuration

What reflection (and JNI) in Java generally needs is a mapping table of class/method/field names to their class representations, and GraalVM doesn't build this for everything by default. Instead, it does its best guess based on your actual code, but then it's up to you to explicitly specify the parts you'll be accessing dynamically.

For the normal case, Oracle wrote a tool that will monitor an actively-running app in Java for such calls. You build your app and run it non-native with this agent, and then it will spit out a configuration file based on the actually-called reflective methods.
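With a normal app, that amounts to running it once with the agent attached and pointing the agent at an output directory (the JAR name here is a stand-in):

java -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/native-image \
  -jar target/my-addin.jar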

However, as with everything else to do with Domino, it's not the normal case: since what I'm running only reasonably exists when launched explicitly from a server, I had to do it the "hard" way. Fortunately, it's actually mostly just tedious: build the app, launch the Domino Docker container, watch for a NoClassDefFoundError or related problem, add that to the config file, and repeat until it stops yelling. Some cases are a little fiddlier, like how JNA's native component misrepresents the class name it was trying to find, but overall it's just time-consuming.

Practicality

So, this is possible, but is it worth doing? Depending on what you want to do, maybe. It's mildly less unsupported than RunJava, and has the huge advantage of not polluting the server's classpath with all of your application code. It should also be pretty zippy, as GraalVM boasts some impressive performance numbers. And, at least for Java developers, it's much, much easier to use the native-image-maven-plugin than it is to set up cmake or manual makefiles for a C/etc. project.

However, it can also be a real PITA to get working, especially for a reflection-heavy project. Additionally, though you're technically using Addin* functions with a native executable, it's not like HCL would take your call if you run into trouble with a monstrosity like this (I assume). Most importantly, it's restricted to the sort of thing that would make sense as a server addin to begin with - for example, this wouldn't help with building web apps unless you were planning to use it to (again, just as an example) run a web server that's written in Java.

Future Tinkering

I think that this warrants some more investigation. I'd be curious if this process would work for writing other native components, such as DSAPI filters and ExtMgr addins. In those cases, it absolutely would be important to have the right entrypoints, so it wouldn't be quite so easy. Still, it'd be neat if that worked.

And GraalVM and the Native Image component are definitely worth some time even aside from anything Domino-related. I'm curious about what you can do with the "polyglot" features, for example.

Example Project

I've put an example project up on GitHub, which is a basic example that just accepts strings via tell graalvm-test foo and echoes them back. It also includes a Dockerfile for running via HCL's official Domino 11.0.1 image. I haven't actually tested it any other way, so that's the best way to give it a shot.

Weekend Domino-Apps-in-Docker Experimentation

Sun Jun 28 18:37:19 EDT 2020

  1. Weekend Domino-Apps-in-Docker Experimentation
  2. Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker
  3. Getting to Appreciate the Idioms of Docker

For a couple of years now, first IBM and then HCL have worked on and adapted community work to get Domino running in Docker. I've observed this for a while, but haven't had a particular need: while it's nice and all to be able to spin up a Domino server in Docker, it's primarily an "admin" thing. I have my suite of development Domino servers in VMs, and they're chugging along fine.

However, a thought has always gnawed at the back of my mind: a big pitch of Docker is that it makes not just deployment consistent, but also development, taking away a chunk of the hassle of setting up all sorts of associated tools around development. It's never been difficult, per se, to install a Postgres server, but it's all the better to be able to just say that your app expects to have one around and let the tooling handle the specifics for you. Domino isn't quite as Docker-friendly as Postgres or other tools, but the work done to get the official image going with 11.0.1 brought it closer to practicality. This weekend, I figured I'd give it a shot.

The Problem

It's worth taking a moment to explain why it'd be worth bothering with this sort of setup at all. The core trouble is that running an app with a Notes runtime is extremely annoying. You have to make sure that you're pointing at the right libraries, they're all in the right place to be available in their internal dependency tree, you have to set a bunch of environment variables, and you have to make sure that you provide specialized contextual info, like an ID file. You actually have the easiest time on Windows, though it's still a bit of a hurdle. Linux and macOS have their own impediments, though, some of which can be showstoppers for certain tasks. They're impediments worth overcoming to avoid having to use Windows, but they're impediments nonetheless.

The Setup

But back to Docker.

For a little while now, the Eclipse Marketplace has had a prominent spot for Codewind, an IBM-led Eclipse Foundation project to improve the experience of development with Docker containers. The project supplies plugins for Eclipse, IntelliJ, and VS Code / Eclipse Che, but I still spend most of my time in Eclipse, so I went with the former.

To begin with, I started with the default "Open Liberty" project you get when you create a new project with the tooling. As I looked at it, I realized with a bit of relief that there's not too much special about the project itself: it's a normal Maven project with war packaging that brings in some common dependencies. There's no Maven build step that expects Docker at all. The specialized behavior comes (unsurprisingly, if you use Docker already) in the Dockerfile, which goes through the process of building the app, extracting the important build results into a container based on the open-liberty runtime image, bringing in support files from the project, and launching Liberty. Nothing crazy, and the vast majority of the code more shows off MicroProfile features than anything about Docker specifically.

Bringing in Domino

The Docker image that HCL provides is a fully-fledged server, but I don't really care about that: all I really need is the sweet, sweet libnotes.so and associated support libraries. Still, the easiest way to accomplish that is to just copy in the whole /opt/hcl/domino/notes/11000100/linux directory. It's a little wasteful, and I plan to find just what's needed later, but it works to do that.

Once you have that, you need to do the "user side" of it: the ID file and configuration. With a fully-installed Domino server, the data directory balloons in size rapidly, but you don't actually need the vast majority of it if you just want to use the runtime. In fact, all you really need is an ID file, a notes.ini, and a names.nsf - and the latter two can even be massively trimmed down. They do need to be custom for your environment, unfortunately, but at least it's much easier to provide just a few files than spin up and maintain a whole server or run the Notes client locally.

Then, after you've extracted the juicy innards of the Domino image and provided your local resources, you can call NotesInitExtended pointing to your data directory (/local/notesdata in the HCL Docker image convention) and the notes.ini, and voila: you have a running app that can make local and remote Notes native API calls.

Example Project

I uploaded a tiny project to demonstrate this to GitHub: https://github.com/jesse-gallagher/domino-docker-war-example. All it does is provide one JAX-RS resource that emits the server ID, but that shows the Notes API working. In this case, I used the Darwino Domino NAPI (which I really need to refresh from upstream), but Domino JNA would also work. Notes.jar would too, but I think you'll need one of those projects to do the NotesInitExtended call with arguments.

The Dockerfile for the project goes through the steps enumerated above, based on how the original example image does it, and was tweaked to bring in the Domino runtime and support files. I stripped the Liberty-specific stuff out of the pom.xml - I think that the original route the example did of packaging up the whole server and then pulling it apart in Docker image creation has its uses, but isn't needed here.

Much like the pom.xml, the code itself is slim and doesn't explicitly refer to Docker at all. I have a ServletContextListener to init and term the Notes runtime, as well as a Filter implementation to init/term the request thread, but otherwise it just calls the Notes API with no fuss.
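
The filter portion is a good illustration of how small these bits are - something along these lines, give or take error handling, and assuming a Servlet 4-level API where Filter's init and destroy have default implementations:

import java.io.IOException;
import javax.servlet.*;
import lotus.domino.NotesThread;

public class NotesRequestFilter implements Filter {
	@Override
	public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
			throws IOException, ServletException {
		NotesThread.sinitThread(); // attach this thread to the Notes runtime
		try {
			chain.doFilter(req, res);
		} finally {
			NotesThread.stermThread(); // always detach, even on exceptions
		}
	}
}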

Larger Projects

I haven't yet tried this with larger projects, but there's no reason it shouldn't work. The build-deploy-run cycle takes a bit more time with Docker than with just a Liberty server embedded in Eclipse normally, but the consistency may be worth it. I've gotten used to running a killall -KILL java whenever an errant process gloms on to my Notes ID file and causes the server to stop being able to init the runtime, but I'd be glad to be done with that forever. And, for my largest project - the one with the hundreds of XPages and CCs - I don't see why that wouldn't work here too.

Normal Domino Projects

Another route that I've considered for Domino in Docker is to use it to deploy NSFs and OSGi projects. This would involve using the Domino image for its intended purpose of running a full server, but configuring the INI to just serve HTTP, and having the Dockerfile place the built OSGi plugins and NSFs in their right places. This would certainly be much faster than the build-deploy-run cycle of replacing NSF designs and deploying the plugins to an Update Site NSF, though there would be a few hurdles to get over. Not impossible, though.


I figure I'll kick the tires on this some more this week - maybe try deploying the aforementioned giant XPages .war project to it - to see if it will fit into my workflow. There's a chance that the increased deployment times won't be worth it, and I won't really gain the "consistent with production" advantages of Docker when the way I'm developing the app is already a wildly-unsupported configuration. It might be worth it if I try the remote mode of Codewind, though: I have some Liberty servers that Jenkins deploys to, but it'd be even better to be able to show my running app to co-developers to work on something immediately, instead of waiting for the full build. It's worth some investigation, anyway.

The Lay of the Java Land, Early 2020

Wed Mar 04 14:34:38 EST 2020

Tags: domino

I was musing earlier (somewhat incorrectly) about the weird state of Java versions, and it got me thinking about how odd the landscape looks in general lately, even as things are progressing splendidly. And, since not everyone follows a dozen Java-related Twitter feeds to keep up on all this stuff - and because Domino has ignored so much of this for so long - I figured it'd be useful to have a summary.

Java Itself

For starters, there's Java-the-language, which has gone through some changes both in how it's licensed and how it evolves recently, largely for the better.

Release Cadence

The most notable change is how the version pace has picked up. Originally, Java was updated pretty regularly, but things slowed down after Java 6's release in late 2006. For whatever reasons, Sun took an IE-6-style break for a while and didn't come out with 7 until 2011 and then 8 in 2014. Starting with 9 in September 2017, though, some big changes came in:

  • New integer Java releases come out every six months
  • Starting with 11, "long-term support" releases will come out every three years, or every six integer versions

So Java 10 came out in March 2018 and 11 came out in September 2018, but 11 is the next "real" release after 8. Similarly, 12 and 13 have come out in the intervening time, but it won't be until Java 17 in September 2021 that the next LTS will arrive. Similar to other rapid+LTS release systems like Ubuntu's, the "intermediate" releases are expected to be stable and ready for production, but you can only expect to get commercial support for the current intermediate release plus any LTS releases still in their support window.

Along those lines, one could reasonably expect Java server vendors to stick to the LTS releases, even ones like HCL that aren't licensing supported builds from IBM or Oracle.

Preview Features

One of the main points of the faster release cadence is to allow quicker delivery of new features, and paired with that is Oracle's new willingness to put "preview" features in a release before they're ready to call them officially solid, which gives them cover if they decide they want to break the syntax. These are enabled with special flags at compile- and run-time, and currently include a handful of nifty features like yield in switch blocks and multi-line strings (a type of "heredocs") that will almost definitely become "official" features by the time the next LTS rolls around.
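
To give a taste, here's a toy example of those two features roughly as they stood in the preview era (both later became standard; at the time, this needed compiling and running with --enable-preview):

public class PreviewDemo {
	public static void main(String[] args) {
		int day = 6;
		// switch expression, using yield from a block-bodied case
		String kind = switch (day) {
			case 6, 7 -> "weekend";
			default -> {
				yield "weekday";
			}
		};
		// text block - Java's take on heredocs
		String message = """
				Day %d is a %s.""";
		System.out.printf(message + "%n", day, kind);
	}
}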

Removal of Features

They also gave themselves permission to remove features from the core JRE distribution, in particular a handful of specs that have gotten less used over time or otherwise make more sense to be part of Java/Jakarta EE specifically. Of note for Domino developers is the CORBA API, which we almost never use directly but which is required to load Notes.jar. That's gone in Java 11, and so (for now at least) using Notes.jar on Java 11+ requires including a replacement implementation.
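
For example, the GlassFish project's CORBA artifacts on Maven Central can serve as such a replacement - the specific artifact and version here are just one illustrative option, not a definitive choice:

<!-- Stand-in for the javax CORBA classes removed from the JDK in Java 11+ -->
<dependency>
	<groupId>org.glassfish.corba</groupId>
	<artifactId>glassfish-corba-omgapi</artifactId>
	<version>4.2.1</version>
</dependency>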

AdoptOpenJDK

I mentioned it in a Java grab bag before, but it's worth reiterating: Java (the language and the platform) is free and open-source for all to use, but Oracle's specific JDK/JRE builds are not. For all intents and purposes, AdoptOpenJDK is the sole go-to place to get Java for private and commercial use now, unless you specifically want to pay Oracle or IBM for support.

Additionally, there are now two core JVMs to choose from when downloading Java: HotSpot and OpenJ9. HotSpot is essentially the "normal" one, the Java core that Sun/Oracle have been shipping forever. OpenJ9 is an open-source version of the J9 JVM that IBM has long maintained and put into all of their products (Domino included). IBM open-sourced it and contributed it to the Eclipse Foundation (they'll come up again shortly), and it's making a name for itself as a solid, lean, and quick-to-start runtime for containerized systems in particular.

Servers

The world of Java server technology, separate from the language, has also been going through some churn with positive results. In this case, I mostly mean Java/Jakarta EE - Spring has been chugging along without too much apparent turmoil. Additionally, I don't know too much about Spring, so I won't cover it here.

Over the last decade, Oracle seemed to generally lose a lot of interest in developing Java EE - while they still put out new releases, they started getting further apart and the overall sense was that Oracle would be much happier if they just didn't have to deal with it anymore.

Microprofile

It was around this time that the community outside of Oracle's JEE team got a little antsy about this slowdown and lack of focus and started the Eclipse Microprofile project. It started out as a thin subset of JEE technologies - just JAX-RS, JSON-P, and CDI - tailored for the purposes of making it easier to write microservices with an extremely-low footprint. Quickly, though, it grew beyond just a selection of existing specs and started to grow new specifications that JEE didn't have to support the mission. Beyond just microservices-specific improvements, these specs also bring along tools that are handy in any old application, like an improved REST client and an annotation-based Configuration API. Microprofile turned into the go-to place for new development in the JEE world while Oracle was slowing down.

Jakarta EE

Oracle managed to get the (splendid) Java EE 8 release out the door eventually, but decided they had had enough of shepherding the platform. Fortunately, instead of consigning it to a slow death, they handed the reins over to Eclipse, which formed the awkwardly-named EE4J project to oversee it. Since Oracle didn't give them the rights to use the name "Java", the actual platform itself was rebranded as "Jakarta EE", which had its own "Eclipse-washed" Jakarta EE 8 release.

That transition has gone very well, though there's one hurdle on the horizon: Oracle also didn't grant the rights to use the javax.* namespace for any new specifications, and so Eclipse decided to make the jump in Jakarta EE 9 to switch all of the specifications over to jakarta.*. What this means in practice is that all of those javax classes (like Servlet) we use now will, in their JEE 9 incarnations, be renamed in the style of jakarta.servlet.Servlet. There will likely be tools in various IDEs and toolchains to help with the conversion, and I suspect that app servers like Liberty will still support the old names for a good while, but it'll be a weird time.
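
In code, the change is mechanical, though it touches nearly everything - by way of illustration:

// Under Java/Jakarta EE 8, this import was:
//   import javax.servlet.http.HttpServlet;
// Under Jakarta EE 9, it becomes:
import jakarta.servlet.http.HttpServlet;

public class MigratedServlet extends HttpServlet {
	// the business logic is unchanged; only the package names move
}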

Eclipse

It's also a bit of a weird time for the pairing of Jakarta EE and Microprofile. The latter came about when the former was moribund, but now they're both active and within the same open-source organization. Microprofile's remit isn't exactly the same as Jakarta's, so they're both going to continue as-is for at least a good while. Still, it feels a bit odd to have two Java server frameworks in the same place, and so it's possible that Microprofile will either be subsumed into JEE (like how JEE already has "web" and "full" profiles) or will be something of an innovation area to push new technology faster before sending it back upstream to JEE. That'll be interesting to watch.

GraalVM and Quarkus

Finally, I'll mention a couple "miscellaneous" technologies that have been developing in the background while all of this was happening.

GraalVM (which is presumably like the French word for "grail" and not pronounced like something a goblin would say) is a variant of the JVM core that Oracle has been working on for a few years. I think it's meant to be roughly equivalent to LLVM but for the Java world: higher performance than before, multi-language support, and compilation down to native binaries. It's an open-source (GPL) project and, while Oracle offers a paid Enterprise Edition, the Community Edition is legal for production use.

Alongside this has come Quarkus, which is a Microprofile implementation laser-focused on speed and resource usage. It doesn't require GraalVM's native compilation, but the pairing of the two is a prime part of Quarkus's message. Quarkus isn't a full Jakarta EE server, but it's a very-intriguing purpose-built stack for developing speedy Java server apps primarily for containers. It's also a good conceptual example of allowing developers to write fairly-dynamic code (like CDI injection) but then turning that into concrete bindings at compilation time instead of deferring all lookups to runtime.

It's been on my list to kick the tires on these things specifically for a little while now. They're both hitting their stride lately, so I suspect that they'll have interesting effects over time.

Writing Domino JEE Apps Outside Domino

Fri Nov 08 13:47:13 EST 2019

In my last post, I mentioned that I decided to write the File Store app as a Jakarta EE application running in a web server alongside Domino, instead of as an app running on the server itself. In the comments, Fredrik Norling asked the natural question of whether it'd have been difficult to run this on the server, which in turn implies a lot of questions about deployment, toolkits, and other aspects.

Why Not

My immediate answer is that it would certainly be doable to run this on Domino, but the targeted nature of the app meant that I had some leeway in how I structured it. There are a couple reasons why I went this route, but most of them just boil down to not having to deal with all the gotchas, big and small, of doing development on top of Domino.

As a minor example, I wanted to add some configuration parameters to the app, and for this I used MicroProfile Config. MicroProfile Config is a small spec that standardizes the process of doing key-value-based configuration, allowing me to just say that I want a named value and let the runtime pick it up from the environment, system properties, or a config file as necessary. It wouldn't be difficult to write a configuration checker, but why bother reinventing the wheel when it's right there?
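
For reference, using it amounts to very little code - a sketch with a made-up property name:

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class FileStoreConfig {
	// Resolved from system properties, environment variables, or a
	// microprofile-config.properties file, per the spec's source ordering
	@Inject
	@ConfigProperty(name = "sftp.port", defaultValue = "8022")
	int port;

	public int getPort() {
		return port;
	}
}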

Same goes for having the SFTP server load at startup and be running consistently. If I did this on Domino, I could either put it in an NSF-based XPages app with an ApplicationListener and depend on the preload notes.ini parameter, or I could go my usual route and use an HttpService that manages the lifecycle. Neither route is difficult either, but they're both weirder and more fiddly than using servlet context listeners in a normal JEE app.
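
That JEE route looks about like this - a sketch with the actual server start/stop elided, since those details belong to the app:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
import lotus.domino.NotesThread;

@WebListener
public class SshServerListener implements ServletContextListener {
	@Override
	public void contextInitialized(ServletContextEvent sce) {
		try {
			NotesThread.sinitThread(); // Notes init for the startup thread
			// ... spin up the SFTP server here ...
		} finally {
			NotesThread.stermThread();
		}
	}

	@Override
	public void contextDestroyed(ServletContextEvent sce) {
		// ... stop the SFTP server here ...
	}
}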

And then there's just the death by a thousand cuts: needing to use Tycho to build, having to deal with Eclipse Target Platforms, making sure anything using reflection is wrapped in an AccessController block to avoid Java policy issues, the nightmare of dependency management, the requirement to use either Designer or the okay-but-still-cumbersome Domino HTTP restart development cycle, and on and on. With a JEE app, all of those problems disappear into thin air.

The Main Hurdle

All that said, there's still the hurdle of actually implementing Notes native API access outside of Domino. At its core, what I'm doing is the same as what has been possible for years and years, initializing a Notes/Domino runtime in a secondary process. The setup for this varies platform by platform but generally involves either setting a couple environment variables for your process or (as is the case with the Domino Open Liberty Runtime) spawning the process from Domino itself.

Beyond setting up your process's environment, there's also the matter of initializing each thread of your app. On Domino, all threads are inherently NotesThreads, but outside of that there's some specific management to be done. You can call NotesThread.sinitThread() and NotesThread.stermThread() manually or run your code in specifically-spawned NotesThreads. I largely do the latter, making use of an ExecutorService to handle maintaining a thread pool for me. I then added some supporting code on top of that to let me run blocks of code as an arbitrary named user while retaining a cached set of sessions. That part wouldn't really be necessary if you're writing an app that doesn't need to act as different user names, but it's handy for something like this, where incoming connections must run as a user to enforce security fields properly.
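
The thread-pool part of that is pleasantly small, since a NotesThread constructed around a Runnable handles the init/term bookkeeping itself - a simplified sketch, not the project's actual code:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import lotus.domino.NotesThread;

public class NotesExecutor {
	// Every pool thread is a NotesThread, which wraps its work in
	// the appropriate Notes thread initialization and termination
	private static final ExecutorService EXEC =
		Executors.newCachedThreadPool(NotesThread::new);

	public static <T> Future<T> submit(Callable<T> task) {
		return EXEC.submit(task);
	}
}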

Philosophical Advantages

Beyond my desire to avoid hassles and get to use modern Jakarta EE goodies, I think there are also some more philosophical advantages to writing applications this way, and specifically advantages that line up with some of HCL's stated long-term goals as well. Domino has long been a monolith, and that has largely served it well, but keeping everything from the DB all the way up through to the app-dev stack in the same bag means you're often constrained in your toolkits and deployment choices. By moving things just outside of the main Domino tower, you're freed up to use different languages and techniques that don't have to be integrated and maintained in the core. This could be a much larger jump than I'm doing here, and that's just what HCL has been pushing with the AppDev Pack and the associated Node.js domino-db module.

I think it's beneficial to picture Domino more as a dominant central core with ancillary servers and apps running just adjacent to it - not a full-on Microservices architecture, but just a little decentralized to keep areas of concern nice and separate. Done well, this setup is a lot more flexible and fault-tolerant, while still being fairly straightforward and performant. It's also a perfect match for a project like this that's geared towards implementing a new protocol - it doesn't even have to worry about HTTP SSO or reverse proxies yet.

So I think that this is where things are heading anyway, and it's just a nice cherry on top that it also happens to be a much (much, much) more pleasant way to write Java apps than OSGi plugins.

Another Side Project: NSF SFTP File Store

Tue Nov 05 16:12:58 EST 2019

When I Know Some Guys kicked off, we bought a couple of Transporter devices to handle our file-syncing needs without having to rely on Dropbox or another hosted service. Unfortunately, Nexsan killed off Transporters a couple years back, and, though they still kind of work, it's been a back-burnered project for us to find a good replacement.

Ideally, we'd find something that would handle syncing data from our various locations transparently while also allowing for normal file access through some common protocol. Aside from the various hosted commercial services, there are various software packages you can run locally, like OwnCloud and NextCloud. I even got a Raspberry Pi with a USB hard drive to tinker with those, though I never got around to actually doing so.

Yesterday, though, I realized that we already have a fleet of privately-owned servers that replicate seamlessly in the form of our Domino domain. They also, conveniently, have nice capabilities for blob storage, shared user authentication, and fine-grained access control. What they didn't have, though, was any good form of file protocol. I'm pretty sure that Domino still has WebDAV built-in, but that's just for design elements. Years ago, Stephan Wissel created a project that works with file attachments, but that didn't cover all the bases I wanted and I didn't want to adopt the code base to extend it myself. There's also Karsten Lehmann's Mindoo FTP Server from around the same time, but that was non-SSH FTP and targeted at the local filesystem.

So that meant it was time for a new project!

The Plan

I initially looked at WebDAV, since it's commonly supported, but it's also very long in the tooth, and that has led to the projects implementing it being pretty old and cumbersome as well.

Then, I found the Apache Mina project, which implements a number of server protocols, SSH included, and is actively maintained. Looking into how its SFTP support works, I found that it's shockingly simple and well-designed. All of the filesystem access is based on the Java NIO packages added in Java 7, which is a pluggable system for making arbitrary filesystems.
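
To illustrate just how little code Mina needs (package names shift between Mina SSHD releases, so treat the imports as approximate; the NSF-backed filesystem is the part the project actually implements):

import java.nio.file.Paths;
import java.util.Collections;
import org.apache.sshd.server.SshServer;
import org.apache.sshd.server.keyprovider.SimpleGeneratorHostKeyProvider;
import org.apache.sshd.server.subsystem.sftp.SftpSubsystemFactory;

public class SftpLauncher {
	public static void main(String[] args) throws Exception {
		SshServer sshd = SshServer.setUpDefaultServer();
		sshd.setPort(8022);
		sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider(Paths.get("hostkey.ser")));
		// SFTP rides on top of the SSH server as a subsystem
		sshd.setSubsystemFactories(Collections.singletonList(new SftpSubsystemFactory()));
		// A custom java.nio.file FileSystem - say, one backed by an NSF -
		// plugs in via sshd.setFileSystemFactory(...)
		sshd.start();
		Thread.sleep(Long.MAX_VALUE); // keep the process alive for the demo
	}
}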

Using SFTP and SCP means that it'll work with common tools like Transmit and - critically - rsync. That means that, even in the absence of a custom app like Dropbox, mobile access and syncing with a local filesystem come "for free".

The Project

So out of this was born a new project, NSF File Server. Thanks to how good Mina is, I was able to get a NIO filesystem implementation and SFTP+SCP server up and running in very little time:

Screen shot of the SFTP server in Transmit

In its current form, there aren't a lot of tricks: the files are stored as attachments to normal documents in a "filestore.nsf" database with two views, which allow for directory-contents and individual-file lookup while also being pretty self-explanatory to a Notes client. I have some ideas about other ways to structure this, but there's an advantage to having it be pretty basic:

Screen shot of the File Store NSF

Similarly simple are the authentication mechanisms, which allow for both password and public-key authentication based on the HTTPPassword and sshPublicKey fields in a person document, respectively (and maybe via LDAP in directory assistance? I never remember the mechanics of @NameLookup).

The App

Because this is a scratch-our-itch project and I'm personally tired of dealing with Domino's OSGi environment, the app itself is implemented as a WAR file, expected to be deployed to a modern Jakarta EE server like my precious Open Liberty. Conveniently, I have just the project for that as well, making deploying NSF-accessing WARs to Domino a bit more reasonable.

For now, the app is faceless: the only "web app" bits are some listeners that initialize the Notes environment and then spawn the SSH server. I plan to add at least a basic web UI, though.

Future Plans

My immediate plan is to kick the tires on this enough to get it to a point where it can serve as its original goal of a syncing SFTP server. I do have other potential ideas in mind for the future, though, if I feel so inspired. Most of my current logged issues are optional enhancements like POSIX attribute support, more-efficient handling with the C API, and better security handling.

It's also a good foundation for any number of other interfaces. A normal web UI is the natural next step, but it could easily provide, for example, S3 API compatibility.

For now, though I haven't gotten around to uploading a build to OpenNTF yet, feel free to poke around the code and let me know if any ideas strike your fancy.

Java With Domino After XPages

Thu Mar 14 15:10:15 EDT 2019

IBM and HCL held a webcast today to detail some plans for Notes/Domino V11. There were some interesting tidbits elaborating on things like the pub/sub support, and it'll be worth tracking down a recording of the event when it's available.

What's important for this series, though, is that this event served as the long-promised "roadmap" announcement for XPages. The roadmap is, in effect, option three: HCL plans to look into ways to reuse some existing XPages code, but in general you should be aiming to write your UIs in something else, either consuming REST services from an XPages container or accessing Domino data via another route (like the domino-db Node.js module and hypothetical Java gRPC client).

So we know the end of the path: not XPages. However, it's not like we're all just going to throw away our existing apps, so there's work to do determining how we're going to get there. The options remain pretty much what they were after CollabSphere last year, albeit now with the doubt removed. The first two options - returning to LotusScript or going to Node - have their advantages and disadvantages, and you could make a reasonable case for either. Personally, I'm not interested in going down those roads, though, and I think it's better for any app of reasonable complexity to dive into Java. Other members of the community and I have developed tools over the years to make it easier, and now's the time to take some of these steps if you haven't already.

Do Not Use Server JavaScript

Server JavaScript was always something of a trap for app architecture. There's nothing inherently wrong with having a scripting language on your UI pages, and it certainly helped bridge some gaps, but the way it and Designer intertwined encouraged developers to create non-portable messes. If you're still writing SSJS, stop immediately.

Learn Proper Java

Java has been around for a long time, and the way to write "good" Java code has changed over time and varies greatly by your environment. Some aspects, though, apply generally, and it's useful to stay up-to-date on current practices. I don't know a better resource for this than Effective Java, which has been updated for Java 7-9 since I last read it.

Speaking of which, you should learn about Java 8 streams and lambdas - they're great. Julian Robichaux did a presentation on this topic back at Connect 2017, and the slide deck is very elucidating.
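
If you haven't touched them yet, here's the flavor of it - a trivial example:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamDemo {
	public static void main(String[] args) {
		List<String> names = Arrays.asList("Anna", "bob", "Charlie", "dave");
		List<String> result = names.stream()
			.filter(n -> n.length() > 3)   // lambda as a predicate
			.map(String::toUpperCase)      // method reference
			.sorted()
			.collect(Collectors.toList());
		System.out.println(result); // [ANNA, CHARLIE, DAVE]
	}
}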

Adopt Standard Java Technologies

Last year, I created a project to bring some modern JEE technologies to XPages. These are some of the same technologies I've been talking about in my "XPages to Java EE" series and, while that project can't bring the full JEE development experience to XPages, using those tools will help you write code that, in some cases, could be directly dropped into a Java EE app with no modifications at all. There's a big asterisk when it comes to actually accessing Domino data, but that's a solvable problem as well (with some more development).

In particular, you should start writing JAX-RS services. Not only is JAX-RS an excellent and very-capable spec, but REST services are portable to absolutely any front end.
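
A resource class can be as small as this (using the javax.ws.rs-era package names current on most servers today; the path and payload are made up for the example):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/hello")
public class HelloResource {
	@GET
	@Produces(MediaType.APPLICATION_JSON)
	public String hello() {
		return "{\"message\":\"Hello from JAX-RS\"}";
	}
}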

Adopt Automated Builds

Maven has been something of a bugaboo for XPages developers for a while, but doesn't have to be. Node development (server- or client-side) revolves around npm and various build plugins, and Maven is much the same thing. One of the biggest improvements I've made lately to all of my active XPages apps is to wrap the on-disk project for them inside a Maven artifact, using the NSF ODP Tooling. That project allows you to automatically build your NSFs alongside other parts of the project (such as OSGi plugins) without having Designer involved.

Check the example project in that repo, and stay tuned for a 2.0 release (probably) imminently.

Learn Other Toolkits

If you're just starting the process of figuring out what to do after XPages, it doesn't particularly matter which other toolkit you learn, as long as it's reasonably modern. If you take some time to learn how to make, say, a React app but end up going with something else down the line, the lessons you learn will apply very closely. A particularly-comfortable option could be to learn JSF, which has a common ancestry with XPages but has up-to-date capabilities.

Whatever it is, though, just learn some other toolkit.

Follow Channels and Accounts for Other Tech

Over the last couple months, I've started following a lot of Jakarta-related blogs and Twitter luminaries. This applies elsewhere - even if you're not using other toolkits yet, it's very helpful to start immersing yourself in the news and culture.

Don't Stay Still

The primary thing to take to heart is the importance of doing something. Unless you're planning to change careers or retire in the short term, you'll have to make one decision or another. XPages is not going to get meaningfully better, and even existing apps will get worse with time as browsers and technology change.

Other environments, though, are already leagues ahead and are constantly improving. Dive in; the water's fine!

Domino 10 for Developers

Tue Oct 09 13:17:06 EDT 2018

Tags: domino

So Domino 10 is upon us, marking the first time in a good while that Domino has had an honest-to-goodness version bump.

More than anything, I think V10 is about that sort of mark. Its primary role in the world is to state "Domino isn't dead" - not exactly coming from a position of strength for the platform, but it's the critical message that HCL has to sell if they're going to be viewed as anything but coroners.

Still, in addition to merely existing, V10 brings some changes that will help developers, particularly those - sadly - maintaining large legacy applications.

DQL

The addition that will have the largest immediate impact on developing codebases is, I think, DQL. I went into a bit of detail on this before and I think that that post is largely accurate, but the general gist of it is that DQL can be thought of as "database.search(...) but good", bringing practical arbitrary queries of non-FT data to Domino.

In its current form, it feels like a long-back-burnered passion project that's implemented in an effective way, bringing some of the benefits of arbitrary queries in SQL and new-era NoSQL databases without having to rewrite NIF or NSF storage.

Update: As Karsten Lehmann kindly pointed out, DQL is slated for addition to the LS/Java classes in 10.0.1, at an as-yet-unspecified time.

HTTP Methods in LotusScript

I just let out a heavy sigh after writing that header, but I get why they added these. A lot of Domino developers never left the desiccated-but-comforting embrace of LotusScript or are employed primarily to maintain Notes client apps, where using Java is possible but involves jumping over hurdles.

Network operations have been possible for a long time via OLE (on Windows) or LS2J, and I'm sure I'm not the only one who's had a "Network" LS script library sitting around for over a decade, but having baked-in methods is preferable. Moreover, neither of those mechanisms would work on iOS without a lot of additional work.

This is being billed as enabling all sorts of integrations, which I suppose is strictly true in that it's a bit easier to call HTTP methods in old code now. In practice, I think it will be mostly helpful for the little one-off situations where you have to call some web service to integrate with a product tracking app or the like.

iPad Notes Client

This definitely seems like another back-burner project that was brought to the fore in the HCL transition. They had iOS references in the Mac 64-bit C SDK years ago, and it only makes sense, since the existence of the Mac port at all meant the job was (sort of) half done. It's not out today, but it's logically tied to V10, and they've been expanding a beta over the summer.

Like the additions to LotusScript, it makes sense. I can't imagine that running existing Notes apps on an iPad will be a good experience, but it should be a cheap one, and it'll probably be good enough for at least some cases. They've intimated that there will be affordances in app design to improve the experience specifically for this client, though I don't envy the engineers who have to go in and implement those.

Node.js Support

Dubbed the "App Dev Pack", Node support will be coming in an Upgrade-Pack-like additional download, in the form of a Domino server addon to add a gRPC server combined with a domino-db Node module that I gather is designed to be familiar for Node+MongoDB stack users.

When this intention was first announced, I think that a lot of Domino developers figured it would be like XPages: a new design element or two added to the NSF, plus another runtime crammed into Domino's aging HTTP stack. The other big potential option was essentially a codification of the ExtLib DAS REST services into a wrapper package to be used in standalone Node apps.

The App Dev Pack is more the latter than the former, but the use of gRPC should make it more performant and flexible than just wrapping the existing HTTP services. I'll be curious to see how this shakes out in practice. XPages has been with us for a decade, but it still only captured a slice of the Domino development market, and it carried the advantage of being bundled right into the stack. Node is a very different beast, entirely unlike traditional Domino development, and I'm not sure how many existing Domino developers will make the transition. Ostensibly, one of the main benefits is to also attract new blood, which - well, maybe.

Having this is much better than not, and the notion of having a new RPC connection that doesn't have the local runtime requirements of NRPC is tantalizing.

Overall

Overall, this release definitely feels like a very pragmatic release. Just by virtue of its existence, it covers the base of "Domino isn't dead" in a way that's much better than the older mealy-mouthed messaging of "well, we don't have specific plans to cancel it". Additionally, though, the fact that most of the developer-facing improvements are for "old world" design elements is an acknowledgement that XPages didn't capture the Domino development world (and, probably, that HCL didn't hire the XPages team). The prospect of the community crawling back into the LotusScript cradle isn't great, but there's no avoiding the fact that there are a great many developers who never had a reason to do anything different. Not many cost-cutting IT departments let their developers re-learn their entire skillset when other departments are just asking for a new button on a form.

In an alternate universe, this would have made for a fine "Domino 9.5" release, but the wringer that the 9.0.1 era put us through demanded a full major version bump. I'll be curious to see how Domino 11 and so on shape up. If the "not dead" push works and it turns Domino's fortunes in the market around at all, it would give HCL room to turn it into a real platform again. That's a big "if", since it's a lot easier to get existing Domino developers excited than it is to get IT purchasers to sign the licensing checks, but time will tell.

DGQF and DQL as I Understand Them

Thu Jul 26 12:27:59 EDT 2018

Tags: domino dgqf

At CollabSphere this year, the big information coming from HCL was detail about the Domino General Query Facility (DGQF) and its associated language, Domino Query Language (DQL). They originally announced this a few weeks ago, but it was good to have had some time to let the dust settle and to see the specifics.

Because it was discussed alongside the domino-db Node.js package and because it's one of the first real new ways we'll interact with data in a Domino DB in a while, it's a bit difficult to identify just what it is and what it is not. Here's how I understood it:

What DGQF Is

DGQF is, at least conceptually, a "meta" layer on top of the existing NIF indexing facility. It doesn't provide a core change to the actual storage of documents, but instead treats existing view indexes as (roughly) analogous to both SQL table indexes and SQL views. It trawls through the design elements of a database to analyze their selection formulae and columns to use applicable ones as implicit indexes and also to allow access to arbitrary collections within queries.

Implicit Indexes

Other than the design collection and the "optimize document table" option in a DB, an NSF doesn't really have much in the way of indexing note contents by default. So, if you have a query asking for all documents where FirstName is Bob, a program has no choice but to look through every document for that key/value match. If, however, you create a view that has a column showing the FirstName field, you now have a much-faster index you can use. It's this sort of view that DGQF picks up on implicitly, using them to accelerate queries: views showing all documents, with either a default sort or "click to sort" column that explicitly shows a field (and not a formula).

Access to Arbitrary Collection Data

For those qualifying views, plus others, you can reference a view by name or alias in a query and compare against a column value by its programmatic name (often either the field name for simple columns or something like $4 by default for formulas).

"In" clauses

Additionally, you can use view (and folder, I think) names to refine queries for documents that are in one or more of these collections, equivalent to an "in" subquery or view reference in SQL.

What DQL Is

In short, DQL is the human-readable query language used to access DGQF. It's reasonably SQL-like (though it is not SQL) and tends to look like FirstName='Bob' and in all ('Managers', 'Active Users'). This is the language you will use, and so "DGQF" and "DQL" will generally refer to the same thing in practice.

In practice, this is implemented as a new method on the Database class in each high-level language supported by Domino, plus a Node-styled variant in domino-db.
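
As a sketch of how that's shaping up on the Java side, based on the announced API for 10.0.1 - treat the names as provisional:

import lotus.domino.Database;
import lotus.domino.DocumentCollection;
import lotus.domino.DominoQuery;
import lotus.domino.NotesException;

public class DqlExample {
	public static DocumentCollection activeBobs(Database database) throws NotesException {
		// createDominoQuery()/execute() per the announced API shape
		DominoQuery query = database.createDominoQuery();
		return query.execute("FirstName = 'Bob' and in all ('Managers', 'Active Users')");
	}
}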

What DGQF and DQL Are Not

Since DGQF sits on top of NIF (and probably the FT index eventually), it's not a core change to data storage. Essentially, the same abilities and limits of Domino remain as they are.

Additionally, DQL is, I believe, a query language only: it does not provide a mechanism for creating, modifying, or deleting existing documents. Instead, it is essentially a super-powered and much-smarter version of database.search(): you can use it to find documents, and the processing of them is up to your program.

That last point was a bit muddied by its pairing with the domino-db Node.js package: the Node.js package provides bulk operations that are paired with DQL queries, but that is a function of that library specifically, not DQL or DGQF.

Why It's Cool

Though it's not a reworking of the core NSF, what DGQF does do is abstract away a lot of the manual looping and lookups that we've always had to do, and it allows the system to optimize and do things more efficiently than when written out procedurally. So, while there's theoretically nothing that DGQF does that we couldn't do before, it allows us to do those things with far, far less code and with automatic optimization.

This brings Domino something that SQL servers have enjoyed for a long time. With a SQL statement, you can analyze the trouble spots of a slow-running query and add indexes to improve the speed, with the tooling helping to explain what's going on. DGQF+DQL brings this along for the ride: when you execute a DQL query, you have the option to dump out this "explain" output to see what specifically the facility did, which views it used, and how long each step took. So, if you have a long-running query, you can look to see if you can add an "index" view to automatically speed it up without having to change your code. And, since the language is an abstraction over the task of querying and not the sort of "burned in" process of a normal getNextDocument loop, it can be optimized and short-circuited by the underlying system without the developer having to know the decades of built-up knowledge of how to efficiently search a DB.

All in all, this is a very welcome addition to the server, and it certainly should improve a lot of common tasks.

A (Java-Centric) Domino Wish List

Thu Jul 12 12:04:41 EDT 2018

Tags: domino java

Seeing the information come out of this week's HCL "Golden Ticket" event has got me thinking about some of my wish-list items for Domino development, mostly in the form of enhancements for existing capabilities and entirely around Java (since that's what I do).

Quality of Life

Javadoc

For some reason, the lotus.domino classes ship without Javadoc or even variable-name information, leaving autocomplete a trainwreck of unexplained methods and arg0-style parameter names.

Designer has its built-in help, which is also on the web, but that's quite a few steps down. This is table stakes for a Java API and always has been.

Updated p2 Repository

Back in 2014, the XPages team uploaded a clean p2 repository of the XPages artifacts to OpenNTF, corresponding with the 9.0.1 release. This repository saves a ton of hassle when building Tycho-based projects or just setting up an Eclipse workspace. However, it's quite long in the tooth, as there have been several Notes.jar additions not included in there and, in FP10, a significant upgrade to the undergirding OSGi framework.

I ended up writing a script to generate an updated version, but I don't have the legal ability to publish the results anywhere for easy consumption, meaning it has to be done manually and configured for each build environment. It would be a great convenience if there were an official package (ideally including the Designer plugins as well) and, even better, hosted on OpenNTF so that we could reference it by URL as we do for Eclipse releases (and require users to accept a license first).

Mavenized Repository

The p2 repository is good for Tycho-based projects, but, especially when targeting Domino is only one part of a project, it can be much more convenient to use "normal" Maven projects with maven-bundle-plugin. However, those projects can't use p2 repositories as such. For Darwino's needs, I ended up writing a tool in the (available-for-free) Darwino Studio plugins to Maven-ize a Domino p2 repository, but that hits the same snag as above of requiring manual setup in each instance.

This is another case where my preference would be on the OpenNTF Maven repository (plus Javadoc Jars, naturally).

Extension Library Source

The latest Extension Library release on OpenNTF is from FP9, while the latest on GitHub is from the FP7 era. FP10 shipped with a newer version of indeterminate nature. It'd be good to have this on both of those sites and, like with Javadoc, have source bundles shipped with the product in a way that is picked up automatically by Designer and Eclipse.

Source Bundles for Third-Party Components

The source for the undergirding Equinox stack is available, but it would be best to have, as an adjunct to the updated p2 repositories, the source bundles for the actual versions used so that we don't have to cobble together a platform from Eclipse's repositories.

Open-Source the Rest of the Stack

Having XPages, the Expeditor husk, and the other miscellaneous doodads that make up the proprietary layer as open source with an Apache-compatible license would cover a lot of the above and also be of tremendous use for XPages and non-XPages apps alike that run on or with Domino. I have a hard time imagining that it would lead to a lot of community-driven improvement, but it may do some (I'd have a few words to share with the file-download control, for example), and even just as a static release would be a significant boon.

Domino Connectivity in Eclipse

An idea I've been toying with lately is to make an Eclipse plugin that allows you to add Domino servers to the "Servers" view and control them to some extent. The basics would be to start/stop/restart HTTP, but the stretch goals would be to open a console view, get a list of running modules, integrate with the existing "load bundles from PDE" support, and, ideally, an outright "Run on Server" command for OSGi bundles and NSFs. However, I have so much on my plate that I'm not sure that I'll get to this any time soon unless I get a real itch some weekend.

Longevity

Refreshed JVMs

Feature Pack 8 brought Java 8, a vital step forward. However, since then, Oracle moved to a faster release cycle for Java and the JRE is now at version 10. Domino uses IBM's JVM variant, J9, which they recently moved to the Eclipse Foundation as OpenJ9, where it has... sort of been keeping up, I think?

In any event, this increased pace of change has meant that the Java 8 honeymoon is over, and Domino development again requires special consideration when using current tools. I have no idea how complex the integration between Domino's tasks and the underlying JVM is, but my ideal would be to have constant or near-constant parity.

Servlet API 4.x

After the JRE version, the most important foundational element of a Java web app is the servlet API release. The current version is 4.0.1, while Domino supports 2.5 (or 2.4, maybe?). The good news is that the Java/Jakarta EE world seems to be used to lagging versions here, and 2.5 is a minimum version for a lot of current tech in much the same way that Java 6 was until somewhat recently, but there has been quite a bit added in recent years.

Presumably, a reason for the lag is the implied requirements of newer versions, such as WebSockets and HTTP/2 support, that would require heavy modifications to the core Domino HTTP code. Honestly, the more practical route is almost definitely to just use a different JEE server paired with CrossWorlds, some Java wrapper for the GRPC stuff HCL has been talking about, or (best of all) a Darwino app replicating with Domino, but still. WebSphere Liberty is actually really nice, by the way.

Refreshed Equinox

Like with the underlying JVM, the Feature Pack 10 update to a Neon-based OSGi/Equinox framework was a critical shot in the arm for the platform, but it is now also two major versions behind. This is a little less critical, since Equinox brings a bit less to the table for our needs and Neon is "new enough" for now, particularly on the server side, but it'd still be proper to keep pace.

Odds and Ends

Non-OSGi JEE Support

The Equinox framework that Domino uses is quite capable, but there's no pretending that OSGi-targeted development has its share of headaches. Most Java apps just target plain-old .war files and don't impose any particular requirements on the build process. Java development is a much more pleasant experience when you can just toss in any Maven dependency and not have to think about building a target platform for Eclipse or jumping through bundle-resolution hoops. I really like OSGi in theory, but I can't pretend that non-OSGi development isn't much smoother.

Domino technically supports "regular" servlets currently, but, uh, here's a snippet from the current documentation on that:

Hrm.

Full Extension Manager Support for Java

JAVADDIN/DOTS added a lot of EM hooks, but it doesn't cover the full suite of capabilities that a C addin can provide, such as authentication handling. Having this be fully accessible from Java would be useful even when treating Domino just as a data store and not as an app server.

I'm sure I could come up with more, but that's probably good for now. All easy, right?

First Steps to Code Coverage Analysis in Domino Plugins

Thu Nov 09 08:53:04 EST 2017

Tags: maven domino java

I'm always interested in getting the computer to tell me how to tell it what to do more successfully, and, to further that pursuit, I've started taking an interest in code coverage.

If you're not familiar with the term, "code coverage" refers to reporting on which lines of code were actually executed during runtime, most commonly in association with unit tests. Eclipse (and presumably other IDEs) has support for this, and I've decided to give it a shot.

Since I'm starting this out in the context of Domino plugins, there are more wrinkles than in most tutorials. Namely, the test suites I've written run exclusively through Maven instead of the Eclipse UI due to all the Notes environment setup, so I can't just use the normal UI tools to gather the data. Fortunately, Eclipse's EclEmma will work just fine with the output from a Maven project, as long as you configure it properly. I looked around for a while to find the right combination of tools to use, but it ended up being fairly simple to configure basic output that can be consumed in Eclipse's Coverage view.

There are two main additions. First, add the jacoco-maven-plugin to your root project's project.build.plugins block:

<plugin>
	<groupId>org.jacoco</groupId>
	<artifactId>jacoco-maven-plugin</artifactId>
	<version>0.7.8</version>
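	<!-- prepare-agent exposes the JaCoCo runtime agent's VM argument as a build property for the test runner -->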
	<executions>
		<execution>
			<goals>
				<goal>prepare-agent</goal>
			</goals>
		</execution>
	</executions>
</plugin>

In normal cases, that would suffice. However, since the test configuration I have for Notes overrides the argLine property of the Tycho test runner, there's another step - add the tycho.testArgLine property manually into those blocks, such as in the Windows profile:

<profile>
	<activation>
		<os>
			<family>Windows</family>
		</os>
		<property>
			<name>notes-program</name>
		</property>
	</activation>

	<build>
		<plugins>
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>tycho-surefire-plugin</artifactId>
				<version>${tycho-version}</version>
 
				<configuration>
					<skip>false</skip>
 
					<argLine>${tycho.testArgLine} -Dfile.encoding=UTF-8 -Djava.library.path="${notes-program}"</argLine>
					<environmentVariables>
						<PATH>${notes-program}${path.separator}${env.PATH}</PATH>
					</environmentVariables>
				</configuration>
			</plugin>
		</plugins>
	</build>
</profile>

Once that's configured, running the test suite via Maven will create a new file in the target folder of the test plugin: jacoco.exec. This file can then be consumed in Eclipse by opening the "Coverage" view:

Eclipse's Show View window

In that view, right click and choose "Import Session..." and point to the data file. Click "Next" and check the projects+source folders from your workspace you're interested in analyzing. When you click "Finish", it'll do two things. First, it'll fill the Coverage view with statistics from your run:

Code Coverage stats

(We have a lot of work to do fleshing out our test suites for this one)

Secondly, it'll start highlighting your code to show you what code is executed, which branches are only partially covered, and which lines are skipped entirely. For example (ignore the sickly color scheme - I need to work on that):

Code Coverage example

This shows how several of the if branches are only tested in one direction, while the "Faces" block is skipped entirely. That also shows some of the trouble with testing XPages-run code: the Tycho environment can't reproduce the XPages environment fully, so some branches aren't testable in that way. I haven't looked into the possibility of gathering similar data from JUnit for XPages, so perhaps that's possible.

For now, though, this will have to do. And, like with these other "code improvement" techniques I've integrated lately, there's a lot of potential tedium - juggling when to write a test to cover some code that will obviously always work just to improve the highlighting vs. just focusing on the low-hanging fruit - but I expect that it will be a nice addition to my workflow over time.

Change Is In The Air

Fri Aug 26 17:41:50 EDT 2016

Tags: xpages domino

During last week’s MWLUG, there was a clear sense that things are a little different this year. Dave Navarre dubbed the technical implications “platform agnosticism”, while I geared my presentation towards the feeling that change is in the air.

This is not totally new. Red Pill Now cast aside the XPages UI layer and most of the assumptions of Domino development to move to a new level; PSC's presentations have long developed a polyglot tone, and this ramped up this year; and people like Paul Withers have been growing with tools like Vaadin.

It's not too important to dive into the specifics of market forces or the shifting sands of technology, and the platform defensiveness that we tend to wear as a cozy blanket doesn't serve anyone properly. Our beloved Domino app-dev platform has grown pretty long in the tooth and it doesn’t seem like it’s in for a revitalization.

The situation is, fortunately, an opportunity.

One of the things I hoped to convey in my presentation is that, though the prospect of learning some of the ever-changing array of modern development tools is daunting, it is also exhilarating and profitable both professionally and personally. As insane as the tangled web of server frameworks, JS optimizers, language transpilers, automation tools, dependency managers, and so forth may seem, particularly compared to the simple days of Notes client development, there is a great deal of good news.

The popular tools are awash in documentation, with clear examples for doing basically whatever you will want to do. There's also a lot of conceptual overlap and familiarity: if your tool of choice loses favor, you won't be starting from square one to learn the next. And it's not required to learn every single thing that comes along. If you build yourself a Java web stack using, say, Spring that does the job, it's not also necessary to learn every single new client-side JS app framework that comes along.

And, in the mean time, there's a lot of great work left to do with Domino. There are XPages applications in use and development, and these will go a very long way. Domino remains a very capable platform, and the path through XPages can be a very natural lead-in to other technologies, especially if you focus on the aspects that carry on: Java, data separation, REST services, and so forth.

For my part, I, too, still have great work to do on Domino and XPages, but I'm also expanding beyond it with eyes open. As I mention frequently, I believe that Darwino is the best path for a number of reasons. When I have the opportunity, I plan to start getting into the meat-and-potatoes reasons why and examples of how to actually use the thing. Time permitting, I hope to have a series at some point for converting my blog over to a Darwino+JEE application, and I'll share my process of picking my tools and having it replicate with its current NSF form as I go. If all goes well, it should serve as an example of taking an existing XPages app and transforming it into something new.

This is an opportunity, and it's an exciting time.


Postscript: This is the optimistic take, granted, and some people’s situations are a bit more dire. Admins, I imagine, are in a strange spot (I hope you’ve been brushing up on ancillary tools!), and a lot of companies are doing a lot of hand-wringing about the future for app dev and maintenance as well, particularly those with a heavy Notes-client dependency. My point is that it’s not necessary to get too mired in the doom and gloom.

Change Bitterness and Accidents of History

Fri Jun 17 15:16:10 EDT 2016

Tags: domino

It's pretty easy to see that change is in the air for Domino types. It's been taking a number of forms for a while now - the long delay since the release of 9.0.1 and associated aging of the tools and infrastructure have led to a series of forced adaptations for developers and administrators. Developers, for example, have had to keep light on their feet to adapt to new browsers and devices that the framework doesn't automatically support, as well as a shift toward manually including jQuery and other tools that have a bit more wind at their back than Dojo. Administrators, for their part, have had a series of heart attacks related to SSL and other security matters, usually involving a lot of noise followed by (unfortunately, I feel) a good-enough patch from IBM.

That sort of thing isn't likely to get any smoother. In large part, that's entirely distinct from anything IBM does: the genie's out of the bottle when it comes to fast-moving platforms, and the best we can hope for is a sort of still-moving lingua franca that can be sort-of-stable on the majority of targets. But then part of that is Domino: for all its virtues, it hasn't adapted for the modern world, no matter how much some of us would have liked it to. And that's had some negative side effects on us as a customer base, side effects that manifest as a gut-reaction rejection of the modern ways of doing things.

Take the SSL thing, one of my soap boxes: though the immediate problem could be summarized as "IBM should update their SSL stack", the larger issue it exposed was that our beloved monolith is mortally vulnerable to a single component falling behind. And, in fact, it made clear that we're spoiled by the approach: many of the reactions were basically that we shouldn't have to worry about things like reverse proxies or the general notion of distinct systems for web front-ends and the app server. And that initial rejection of the hassle has implications, limiting the average Domino installation's capacity to scale for load balancing or failover in a smooth way, things that come almost "for free" with a reverse-proxied setup common among almost all other app servers.

Developers have it worse: the "power user turned developer" history of Notes and the particular peculiarities of the platform* have left us desperately behind the baseline for modern development. The tooling has coddled us into preferring inline scripts and procedural programming to structured code organization, into viewing a thoroughly-staid language like Java as something outrageously complex, into only begrudgingly adopting SCM due to the platform-induced hassle, and into almost entirely ignoring automated testing. And similarly to the SSL discussion, the historic "one tool for every job" nature of Domino leads to natural pushback when faced with other platforms.

And I get why! And I feel it too. It's a real PITA to now always have to be on the lookout for some new point release of iOS or Android to break drop-down boxes or something, as opposed to years of deploying Notes apps that looked and worked identically across every version forever (pretty much). It's also a drag to run into situations where the problem is "easy" on Domino but more cumbersome elsewhere, like platforms that farm out their FT indexes to distinct servers, or don't include document/record-based security. That makes it very easy to become blinded to the tradeoffs, though. It may be nice that Domino is a one-stop-shop for so much, but it's a shop that requires Designer, that makes it very difficult to use third-party-dependency systems like Maven (even within Maven projects), that lags in DB features found elsewhere, that is only awkwardly accessible from other app-dev frameworks, that has an API that's a bit older than Windows Me, and that essentially never showed up for the modern development conversation.

Domino has always had a lot to recommend it, and XPages has carried us very far. And hey, this is enterprise software - even if there's never a major new version, there'll be paying work forever. It just may not be the kind of work you want to do, and it is almost definitely not truly healthy for the companies paying for it. So what I recommend is that you have a plan. The good news is that there are a great many next steps that build smoothly on existing Domino knowledge and, potentially, infrastructure. Certainly, I'm thoroughly biased in the direction of Darwino, but that's one of many. You could also do worse than learning a mature platform like Ruby on Rails (heck, you could run that on JRuby on Tomcat or WebSphere). Take some time to learn about reverse proxies and modern web-server setups. Basically: something. Just do it with an open mind, and don't balk at the first thing that's more complicated than the Domino equivalent. I think a bit of that will serve you very well.


* And, to be fair, of the overly-conservative nature of enterprise programming.

Other than, of course, being one of the progenitor NoSQL databases.

Platform Defensiveness

Fri Jul 18 20:50:50 EDT 2014

Tags: domino

If you were a Mac user in the 90s and early 2000s, life could be tough. Though you loved your platform of choice and could see its advantages plain as day, you were in a distinct minority. The world was full of people ready to line up to explain why you're an idiot on a sinking ship. And the part that stung was that they weren't always wrong: for every person talking out of their ass about how great having a menu bar in every window was or the joys of running ancient line-of-business DOS apps, someone else pointed out the very real stability and performance issues that plagued classic Mac OS and OS X respectively, the fifth-class-citizen status among game developers, or the very-real possibility that Apple would join Netscape and Be under the treads of Microsoft's then-indomitable war machine.

The recourse for users was reflexive defensiveness. Mac market share is down to 2%? Well, that's the good 2%. Windows 2000 is actually stable and fast now? Well, the UI is still second-rate. Schools are switching en masse to PCs? Boy, they'll regret that when their cheap Dells give out! Much like the slings and arrows thrown at the Mac community, these defenses contained just enough truth to soothe the wounds, but it didn't matter. Maybe the community circling the wagons staunched the bleeding a bit, but the thing that really mattered was Apple breaking its own malaise and making great products again.

Unsurprisingly, what has me thinking about this is Domino. Specifically, the popularity of a post on Chris Miller's blog about Denny's apparently switching away from Notes. Now, I don't have a gripe with the post or the specifics of the situation - for all I know, Denny's is making the worst decision of its corporate-IT life. What bothered me is the paint-by-numbers way the story echoes through the Domino community. We've seen these things go around before, and the components are familiar when Company X decides to ditch Notes/Domino for Competitor Y:

  • "Just wait until they see the REAL cost of Y! Then they'll be sorry!"
  • "They must not be thinking about all their crucial apps! Y doesn't do that!"
  • "Their stated reasons are wrong! They said Domino doesn't do Feature Z, but Domino is actually the BEST at Feature Z!" (nowadays, replace with "Use some other IBM product to get Feature Z!")
  • "This is the work of some clueless IT manager who's just following the trends!"
  • "They must be using an ancient version of Notes! Modern Notes clients are exemplars of memory efficiency and clean design!" (alternative for server admins: "They must not have configured their mail environment properly!")
  • "I betcha they'll still be using Notes for their crucial apps in ten years, 'migration' or no!"

Much like the defenses of classic Mac OS, these all contain kernels of truth, and maybe Company X would indeed be better off sticking with Domino. But are these really claims you want to hang your hat on? The last one in particular is telling. Does it really fill your heart with pride that companies are going to be saddled with ancient, un-migratable code until the sun goes nova? There's a reason why "modernization" is such a big topic for Domino developers: the unfathomable mass of legacy Notes client apps is a severe issue. Whether you attempt to deal with it via clever en-masse approaches, by using remote-desktop workarounds, or by dragging apps one-by-one into the present, there's no getting around the fact that depending on Notes client apps is now an undesirable condition for a company.

So what's to be done about it? Well, for the most part, that's IBM's job. But for an individual developer, it's better to focus on why you should build a Domino app today, not decades ago. It builds on the past, sure - an XPages app I've been building for a client brings together data from dozens of their existing apps built over years in ways that would be much more difficult on other platforms. But even better is to acknowledge reality with clear eyes. Google Apps are really good, and they work on everything! Exchange provides a better mail/contacts/calendar experience for varied clients than Domino does. Non-XPages web-dev environments are brimming with surprising features and deployment has gotten good in recent years. The advantages of other platforms are not necessarily enough to be worth a switch, but they still exist.

Personally, yes, I care whether Domino ends up flourishing, but I profit more from addressing reality as it is rather than immediately dismissing bad news as illegitimate.

The Trouble With Developing on Domino

Tue Jul 08 10:57:51 EDT 2014

Tags: domino

The core trouble in my eyes with developing on Domino is that it is unloved for what it is. Not so much by customers (I have no interest in whether companies use Exchange or SharePoint or whatever), but more by IBM. The situation reminds me of Apple with products like WebObjects, Xserve, and Aperture: there's clearly a dedicated and talented team behind it, but the organization's heart isn't in it. The analogy is inexact: first, Apple's overarching motivations and strategies are much easier to grok than IBM's; second, enterprise software never really dies - it just goes all COBOL and lingers in maintenance somewhere forever (did you know OS/2 is still a thing?).

So Domino is still around, and still exists, but is generally positioned as something for large companies to use for mail, or continue using if they happened to start doing so decades ago, or as an adjunct to Connections. But it's like with Lotusphere's rebranding: when (dev conference) + (biz conference) = (biz conference), the algebra to figure out the perceived value of (dev conference) isn't difficult. To a certain extent, this is just how IBM does business: they talk to large organizations' higher-ups and make their sales that way, not by being outwardly compelling. However, on this front, Bluemix's entry has provided a telling counter-example: though there's still the stock over-puffed business page, the bluemix.net site talks directly to developers using a layout and writing style that an actual human being might enjoy. They even have honest-to-goodness Developer Evangelists!

It leaves us, as Domino developers, in an awkward position. We have a full-fledged NoSQL server with an integrated Java dev stack, albeit one without a guiding soul. We have an easy-to-install, flawlessly-clustering, cross-platform server that is perpetually 20% away from being perfect for the modern world. Large portions can be charitably characterized as having been in maintenance mode for a long time: the core API, SSL in all protocols, calendar/contacts connectivity, indexing, the admin client, and so forth. The bad news is easy to perceive: less vendor interest and customers driven primarily by inertia make for a poor career path. But the point of this post isn't doom and gloom! The way I figure it, there are a number of bright sides to explain why I and others continue to develop on this platform:

  • There's always a chance that Domino will be less "Xserve" and more "Mac Pro", a product seemingly at death's door for years until it was given a fresh breath of life.
  • As Paul Graham pointed out years ago, when you're writing server-run software, you can use whichever platform you'd like. Though it's nicer, all else being equal, to have a popular platform, if the tool lets you do greater work with your time than you would elsewhere, it's worthwhile.
  • Due to Domino's nature, almost all of its (technical) problems are solvable even if IBM never touches it again. Some of them, like the API, are things where the community has already built a workaround. Some, like SSL for HTTP, have seen IBM package their own workaround with the server. And the rest, like NIF and CalDAV/CardDAV support, linger as perfect "if I had time" projects.

The last one is crucial for our position: though a totally unsupported product would eventually fall prey to incompatibility with newer operating systems and machines, Domino has reached a point in the last couple years (thanks to the extensibility API in 8.5.2 and the vital bug-fixes in 8.5.3 and 9) where it's an agglomeration of replaceable parts. There are enough hooks and APIs to - with varying degrees of difficulty - take advantage of the core strengths of the platform while dramatically improving the components on top. That's not enough to last forever, but it doesn't have to be. Apps built in the meantime will still run and improve, and any time spent programming on the platform is valuable experience that can be applied elsewhere, whether directly or more generally.

Just don't ask about licensing.

Another Round of URL Fiddling

Thu Jun 19 17:59:51 EDT 2014

Tags: domino urls

I was working on another installment of "Be a Better Programmer", but the result isn't where I want it to be yet, so it'll have to wait. Instead, I'll share a bit about my latest futzing around when it comes to improving the URLs used in an XPages app. This has been a long-running saga, with the latest installment seeing me stumbling across a capability that had been right under my nose: the ViewHandler's ability to manipulate the URLs generated for a page. That method worked, but had the limitation that it didn't affect navigation between pages.

As it turns out, there's a slightly-better option that covers everything I've tried other than navigation rules specified in faces-config.xml (but who uses those in XPages anyway?): the RequestCustomizerFactory. This is one of the various classes buried in the Extensibility API and present in the XSP Starter Kit, but otherwise little-used. In the Starter Kit stub (and in a similar commented-out stub in the ExtLib), it can be used to add resources to a page without needing to specify them in code - say, for a plugin to add JS or CSS libraries without being specified in a theme.

However, there are other methods in the RequestParameters object that would appear to let you shim in all sorts of settings on a per-request basis - different Dojo implementations, different theme IDs, whatever "debugAgent" is, and so forth. The one that interests me at the moment is the UrlProcessor. This is aptly named: when a URL is generated by the XPage, it is passed through this processor before being sent to the browser. This appears to cover everything other than the aforementioned stock-JSF-style navigation rules: link value properties, form actions, normal navigation rules, view panel generated links, etc.

Following my previous desire, I want to make it so that all links inside my task-tracking app Milo use the "/m/*" substitution rule I set up for the server. The method for doing that via a UrlCustomizer is pretty similar to the ViewHandler:

package config;

import java.util.regex.Pattern;

import javax.faces.context.FacesContext;

import com.ibm.xsp.context.RequestCustomizerFactory;
import com.ibm.xsp.context.RequestParameters;

public class ConfigRequestCustomizerFactory extends RequestCustomizerFactory {

	@Override
	public void initializeParameters(final FacesContext facesContext, final RequestParameters parameters) {
		// Called per request: install our UrlProcessor so it sees every URL the page generates
		parameters.setUrlProcessor(ConfigUrlProcessor.instance);
	}

	public static class ConfigUrlProcessor implements RequestParameters.UrlProcessor {
		public static ConfigUrlProcessor instance = new ConfigUrlProcessor();

		public String processActionUrl(final String url) {
			// Swap the app's real context path (e.g. "/foldername/app.nsf") for the substitution prefix
			String contextPath = FacesContext.getCurrentInstance().getExternalContext().getRequestContextPath();
			String pathPattern = "^" + Pattern.quote(contextPath);
			return url.replaceAll(pathPattern, "/m");
		}

		public String processGlobalUrl(final String url) {
			// External URLs pass through untouched
			return url;
		}

		public String processResourceUrl(final String url) {
			String contextPath = FacesContext.getCurrentInstance().getExternalContext().getRequestContextPath();
			String pathPattern = "^" + Pattern.quote(contextPath);
			return url.replaceAll(pathPattern, "/m");
		}

	}
}

The method of adding this customizer to your app directly is, however, different: instead of being in the faces-config file, this is added as a "service", as mentioned in this handy list. For an in-NSF service, you accomplish this by adding a folder called "META-INF/services" to your app's classpath (you can add it in Code/Java) and a file in there named "com.ibm.xsp.RequestParameters":

In this file, you type the full name of your class, such as "config.ConfigRequestCustomizerFactory" from my example. You'll likely have to Clean your project after doing this to have it take effect (and possibly in between each change to the code).
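
For clarity, the entire contents of that file is a single line giving the factory class name - with the example class above, it would be:

config.ConfigRequestCustomizerFactory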

Now, my current use is very limited; the UrlProcessor has free rein for its transformations and is passed every URL generated by the XPage (except hard-coded HTML, unsurprisingly) - you could change URLs like "/post.xsp?documentId=foo" to "/blog/posts/foo" without having to code that explicitly in your XPage (see the sketch below). You could write a customizer that looks up all Substitution rules for the current Web Site from names.nsf and automatically transforms URLs to match. You could go further than that, trapping external links to process in some way in the "processGlobalUrl" method. The possibilities with the other methods of the customizer go much further (and are largely undocumented, to make them more fun), but for now just fiddling with the URLs is a nice boon.
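
As a sketch of that first idea - with the "/post.xsp?documentId=foo" pattern and "/blog/posts" scheme being purely hypothetical stand-ins for your app's own routing - the processActionUrl method above might become:

public String processActionUrl(final String url) {
	// Hypothetical rewrite: "/post.xsp?documentId=foo" -> "/blog/posts/foo"
	java.util.regex.Matcher m = java.util.regex.Pattern.compile("/post\\.xsp\\?documentId=(\\w+)").matcher(url);
	if(m.find()) {
		return "/blog/posts/" + m.group(1);
	}
	return url;
}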

Pretty URLs: A Step in the Right Direction

Fri May 09 20:20:07 EDT 2014

Tags: domino

A long time ago, I mused a bit about the URL problem in XPages. The core problem then, as it is today, was that XPage URLs are ugly as sin. Domino URLs were never pretty, but XPages took them back a few steps. Just look at this normal-case monstrosity:

https://someserver.com/foldername/app.nsf/somepage.xsp?action=openDocument&documentId=E0023409DE744F8085257CD3007006A3

Barely any part of that has anything to do with the task at hand! Some parts - like the folder path and NSF name - are only tangentially related, while others - like "action=openDocument" - are little more than an implementation detail. The inhuman UNID gets a half-pass by virtue of its value as a cluster-friendly identifier, but it would be better as a human-readable unique key.

What you'd REALLY want would be a WordPress-and-others-style URL like this:

https://someserver.com/blog/2014/5/9/pretty-urls

Now THAT'S a functional URL: every part of it is both explicable to humans and useful to computers as a unique identifier. More importantly, it establishes a clear hierarchy: any part of it can be traversed to get a conceptually-useful URL (assuming your app implements it). "/blog/2014/5" has a clear meaning: "give me all entries in the 'blog' app for May 2014".

Domino provides little assistance in getting to this point. There are web rules, yes, but that leaves two crucial problems: 1) your app needs to know about these rules, so it generates nice URLs and not eldritch horrors, and 2) you need to set up a server-level config for every web site + function combination. You can't just say "send all requests for /blog to so-and-so app" and then have that app handle everything past that. This is something of an impassable brick wall: Domino's original URL router is still in full effect even in the most modern of apps, and so the XPages side of an app doesn't even hear about a URL request unless it's in the form of "app.nsf/somepage.xsp" or "app.nsf/xsp/whatever" (and the latter form is something of a Wild West full of resource providers, servlets, and who-knows-what else).

But there's a twinkle of hope: because you specify all URLs in an XPage as relative to the app, that means that there's a post-processing service in there that translates a URL like "/foo.xsp" into a full server-relative URL like "/somedir/app.nsf/foo.xsp". That service has a name: ViewHandler. ViewHandlers have been one of my preferred tools in XPages ever since I discovered their utility in instantiating my soon-to-be-renamed "controller" classes. In addition to their role in creating pages and potentially intercepting page requests, ViewHandlers serve a crucial role: providing resource and action URLs. In XPages/JSF parlance, an "action" URL is what shows up in the form tag on the HTML result, while a "resource" URL is, well, basically everything else. When a control on an XPage specifies a URL, it passes it through the resource method to get the "real" URL. Normally, these are turned into the normal ".nsf" paths, but there's no reason they have to be. Here's an example of the two methods in my project-tracking app (named "Milo"):

@Override
public String getResourceURL(final FacesContext context, final String resource) {
	if(!isGlobalResource(context, resource)) {
		// Then switch it to "/m/whatever"
		return "/m" + resource;
	}
	return super.getResourceURL(context, resource);
}
@Override
public String getActionURL(final FacesContext context, final String pageName) {
	return "/m" + pageName + ".xsp";
}

I set up a substitution web rule on the server to translate "/m/*" to the full path of the DB. Because of this change to the view handler, now every <xp:link/>, <xp:image/>, theme resource, etc. uses the "pretty" path. Assuming I don't run into any major down sides, this is a huge piece in the puzzle! Now I can write my apps normally - referencing XPages, resources, etc. using the same portable, app-relative syntax as usual - and then let my ViewHandler determine if it should clean up the URL. This is a nice step above my second solution, which required either a special EL pattern or a custom control to translate the URLs; now that job can be passed up a layer of abstraction.

There are still two problems I know of standing in between me and using this everywhere: automatic detection of server support (e.g. reading names.nsf for matching web rules) and navigation between pages. Though the ViewHandler covers normal links and form action URLs, it doesn't handle navigation between pages based on navigation rules. I don't expect that to be an eternal problem, though; JSF has hooks for this sort of thing, so it should be a matter of figuring out how to use it there.

Overall, it's still something of an ugly solution: where other platforms have clean routing configs, XPages has a hodgepodge of server-level settings and in-app shims. However, I hope to turn it into a worthwhile combination of automatic configuration (by reading names.nsf) and clean declarative settings that I can build into the Scaffolding project.

Domino the Identity Server

Tue Feb 11 09:10:16 EST 2014

Tags: domino

As seems to happen a lot lately, my fancy was struck earlier by a Twitter conversation, this time about the use of Domino as a personal mail server. Not only do I think there's potential there, but it should go further and be a drop-in replacement for personal mail, calendar, and contacts storage.

I think there's tremendous value in controlling your own domain and the services on it, without being permanently attached to someone else's name for an email address (your school, your company, your ISP, Google). This is good not only for personal freedom, since it lets you pack up and move at will, but also for security, since a large third-party mail service is a particularly juicy target.

Unlike its inherited-but-abandoned place as the preeminent NoSQL server and replicating app-dev platform, Domino is just barely shy of being this server. You could already hit the three main services reasonably well by using the Notes client or iNotes, but actual humans shouldn't have to do that. It's already (more or less) there for mail with IMAP, while support for CardDAV for contacts and CalDAV+public iCalendar feeds would round it out for the other two pillars. Technically, the only things standing in between the existing open-source WebDAV plugin for Domino and this imagined future are the complexities of plugin development and the RFC.

The other main aspects that could make this great are further refinements to Domino's existing capabilities: a bundled spam filter (say, one of the open-source tools that can do the job already) and a strong configuration focus on creating an SSL-secured public-facing server. Non-SSL variants of IMAP, POP (if you must - that could be removed entirely), and HTTP should be off by default and the configuration should encourage you to acquire SSL certificates with your own private keys (the Server Certificate Admin would need a revamp for proper key size and ciphers), as well as S/MIME certificates to tie with each user for signed/encrypted mail outside the Notes client. Though Domino's history with the NSA is... checkered, it's still remarkably well-positioned to provide a secure foundation to deter snooping eyes. Certainly, running a Domino server on your own hardware or a rented/virtual server is leagues better than a fully-managed service in this respect.

Having those features in place and smoothly integrated with a nice setup assistant would make for a very compelling product: an all-in-one, easy-to-install server that runs on several modern OSes and handles secure replication across physical locations at already-actually-affordable prices. Admittedly, I don't know how compelling that product would be for IBM's accountants, but it's certainly compelling for me, and the world could use more decentralization like this.

How I Want To Use Domino, Take 2

Wed Oct 30 16:10:50 EDT 2013

  1. How I Want To Use Domino
  2. How I Want To Use Domino, Take 2

A while back, I wrote a post about how I wanted to use Domino. The gist of that was that I was enamored with the idea of using Domino as a back-end database, but not necessarily as an app-dev platform on its own - basically, how you would use a SQL or NoSQL database. Since then, I've doubled down on my use of XPages as an app-dev platform with many advantages, but I still find it very useful to imagine Domino not as "Notes apps on the web, now with a modern coat of paint", but as a collection of related but not mutually-required components competing with other web-dev stacks.

I started working on an updated take on that post, inspired by some recent posts and chat conversations I've had (and I may return to it), but then I ran across Grand Decentral Station, which is a vision of an OS/app-dev platform taking the best of the lessons of the last decade and turning them into a coherent platform. It's a compelling vision, and reading it made me realize something: most of those goals are describing Domino, or, more accurately, the Domino that could be. Take a look down the list and see how many points Domino goes about 80% towards:

  • App Installer & Updater: though Domino doesn't really handle app versions, the deployment strategy is nonetheless quite good, with all app code contained in a distinct NSF, not a bunch of files strewn in a couple directories.
  • Sandboxed apps: again, Domino doesn't quite sandbox apps, but appropriate use of ACLs can bring you close.
  • Mail server: I hear tell that Domino is capable of acting as a mail server.
  • Calendar server: with proper CalDAV support, Domino could act as a real calendar server for non-Notes clients like OS X and iOS.
  • Addressbook server: like with the calendar server, this is just a matter of adding CardDAV support.
  • Asset handling: CSS and JS optimization in XPages is a huge step in this direction.
  • Avatar server: the Directory can already act as a profile-picture server for Sametime, and something like mypic could standardize this use.
  • vCard server: well, it already serves LDAP.
  • Unified sessions: done.

And to cap this off, the prescribed per-app database is CouchDB, which is already modeled on Domino. And, of course, it already has a standard API for email, which conveniently doubles as a method of cross-app messaging in some cases, and its replication and clustering are top-notch. It's not, itself, an OS, but its fairly-cross-platform nature means that that problem is already "solved": install Domino on the server platform you're most comfortable with and it acts basically the same.

Of course, that final "20%" is, as always, the crux of the problem. Domino is only really a fully-integrated mail/contacts/calendar server when you use Notes or iNotes, XPages and legacy Domino dev are really the only games in town if you want to maintain the benefits of the NSF package, agents aren't integrated with the XSP environment and DOTS isn't a real replacement yet, there are still a number of items on the list not at all touched on by the existing stack, and licensing basically removes Domino from consideration for app development for anyone not already mentally invested in it. But hey, one can dream, no?

Domino Wishlist, September 2013

Fri Sep 20 13:06:48 EDT 2013

I joined in a tweet from Paul Hannan earlier about desired Domino features (that's a very worthwhile thread to read - lots of great ideas from others) and it's had me thinking about more on my list. Fortunately, I have a blog for this kind of thing! I'm leaving off a couple big-ticket items like Eclipse-4.x-based Designer on the Mac because IBM may not be required for that.

  1. Make $WSIS not trigger crummy SSL behavior (really, first-class support for proxies in front of Domino, not just IHS/Windows)
  2. Direct access to note item data via Java
  3. Newer JRE
  4. Direct creation/update of NSFs from git server-side
  5. New XSP model-object framework
  6. New views, though that may be our job
  7. Aggressive, small-company-friendly licensing to compete with OSS stacks
  8. NSF-based SSL keychains
  9. NSF-based jvm/lib/ext distribution
  10. "Sensible defaults" mode for clean-install servers to get appropriate memory/cache/performance settings on modern setups without learning dozens of secret notes.ini settings
  11. apt-get-based distribution and updates for Domino on Debian/Ubuntu
  12. Improved XPages file-upload and -download controls to handle modern file uploading and provide better hooks for non-dominoDocument bindings
  13. Bootstrap theme for XPages. Done - thanks!
  14. Built-in default support for authentication fields in LDAP
  15. init.d startup script installed with Domino, because seriously, who ships server software without one?
  16. OS X build of Domino 64-bit
  17. Built-in Firefox sync server, because that would be a nice overture to general open-source use
  18. Support for HTTP authentication filters in Java
  19. Built-in beer database to compete with Couchbase's substantial lead in the area
  20. Easier configuration of multiple TCP/IP adapters on a server and assignment of services to bind to them
  21. Full control over URL routing inside a database (e.g. "no legacy" mode where a Java class can map requests to resources)
  22. Keep up the ODS improvements - 8.x was a great set of releases on this front
  23. CalDAV and CardDAV support - Domino should be a close-to-drop-in replacement for Gmail on OS X and iOS
  24. Along the lines of #7, a general push to establish Domino as a top-flight NoSQL database and app-dev platform, not just "your old company's mail system"
  25. A real focus on standard-API performance, with less XSP-specific cheating

I'm sure this list is not exhaustive.

The Greatest Domino Poster of All Time

Fri Apr 20 19:23:00 EDT 2012

My company is moving out of the offices it has inhabited for... longer than I've been alive, basically. During this move, we've had plenty of opportunities to come across relics of its past form as an instructor-led training location: old courseware, ancient versions of Windows, NetWare, and Notes/Domino (from before it was Domino), and some priceless marketing materials. Some of the posters we've found are worth a good chuckle or two (a NotesMail poster boasting "Finally, a mail system you will never have to exchange"), but this one is my absolute favorite thing related to Domino:

Amazing Domino poster

It's amazing, like a Domino ad out of Yellow Submarine. What I wouldn't give for IBM's marketing to be more "Then, suddenly, bursting from a magic lotus blossom" and less "The Social server software for Socially Socializing with Social people in a Social way - just ask this guy in a tie."

Fun fact: the hooded "executioner" guy in the top right being punched out by Domino is labeled "SuiteSpot". SuiteSpot, it appears, was some sort of Groupware/web/mail/directory offering from Netscape. I guess Domino won that battle.

File Under "Man, I Hope Designer Still Works After This"

Sun Apr 08 18:15:00 EDT 2012

Tags: domino

Well, I think the new blog has gone relatively smoothly, other than my accidental re-posting of my SQL-migration post (which oddly seems more popular than its first run, just one day earlier). That means it's time to get started on the next phases.

Other than the mundane setting-up-a-blog stuff like implementing search and proper draft posts, I have a lot of work to do surrounding my Ruby bindings. Since I'm going to eventually want Ruby "script libraries" and other handy non-inline uses, I'm going to need a proper Ruby editor. TextMate-via-WebDAV is OKAY, but not exactly elegant, and it involves too much overhead and awkwardness to be a great solution. Better would be to get DLTK - which has Ruby syntax highlighting - working in Designer.

On my first attempts, I ran into problems where using DLTK's update site demands an updated version of Eclipse, which could mean disaster for Designer. However, it turns out that DLTK has been part of mainline Eclipse for a while, so I figured I'd check to see if I could get to it via another route. Evidently, Designer 8.5.3 is based on Eclipse 3.4 (Ganymede), so I added Ganymede's update site to Designer's plugin installation list and, lo and behold, Ruby was there in the "Programming Languages" section. It still required the 3.4 version of org.eclipse.core, but, having backed up my Windows VM for this purpose, I felt daring. I clicked "Select Required" to select all applicable dependencies and hit Yes as many times as it took to finish the installation. After rebooting Designer, I found that Ruby scripts are now all fancily colored:

Ruby in Designer

If you'd like to try this yourself, I strongly suggest you do as I did and make a backup copy of your Windows VM first (or do whatever it is people who use Windows as their host OS do when they're about to ruin their system). Designer can be very picky about what plugins you install.

Making the Dogfooding Switch

Sat Apr 07 23:55:00 EDT 2012

Tags: domino blog

I've finally done it: I've switched my blog over to Domino. I did it for a couple reasons:

  • To silence the voice in the back of my head constantly saying "why are you using WordPress? You're a freaking web programmer! Write your own!"
  • To put my Ruby-in-XPages code through its paces in the way only a live site can.

I've already had to fix a couple holes in my Ruby adapter, mostly revolving around the fact that I haven't bothered to properly handle serialization and JSF's StateHolder interface. For now, I've patched the worst problems, but this gives me a great reason to come up with a proper solution. To test it out, I'm making sure to avoid "#{javascript: ...}" code blocks in favor of using plain-EL and "#{ruby: ... }" exclusively. So far, the only really awkward parts are the giant swaths of yellow-squiggly "I don't understand this" underlining in Designer and having to put xp:repeat values on xp:dataContexts rather than writing the computation inline. Not too shabby.

The whole thing's a bit shoot-from-the-hip at the moment, since it's only existed for a day. There's no search (though that'll be easy), the site design is from my old college-era blog, and I have to write the posts' HTML by hand in the Notes client (you know, like on all modern blogging platforms). But hey, I have the old posts in there, plus an archives list and, as long as it doesn't mysteriously die again like it did a minute ago, Akismet-backed commenting.

So let's see how this thing works! Don't be surprised to see the WordPress version again if things go catastrophically wrong.

Import from SQL to NSF: It's So Easy!

Fri Apr 06 19:57:00 EDT 2012

Tags: domino sql

I decided I should probably finally get around to moving this blog from WordPress to Domino, if for no other reason than to have a perfect testbed for the weird stuff I've been doing lately. The first task is to write an importer, so I decided to just do a straight SQL rows -> Domino documents import. This couldn't be easier if you follow this simple guide:

  1. Write a Java agent that uses the appropriate JDBC connector for your database. In my case, it's MySQL, so I had it do a "show tables" to get a list of tables, then loop over those to do "select * from whatever" statements to get all documents.
  2. Since it's Domino, you can just do doc.replaceItemValue(column, rs.getObject(column))!
  3. Oh wait, you can't all the time. Make sure to handle a couple cases, like converting BigIntegers to longs and Timestamps to DateTimes.
  4. Wait, Domino doesn't even handle Boolean? For frack's sake, FINE: doc.replaceItemValue(column, cell.equals(Boolean.TRUE) ? 1 : 0);
  5. Oh hey, that import went really smoothly! Now I'll just make some views based on the table name and... crap. Forgot to include the table name! Maybe I shouldn't have written this importer at four in the morning.
  6. Better delete all the documents and start over.
  7. What's this, an error when I went to delete all the documents in the All view? "Field is too large (32k) or..." oh no.
  8. Oh crap.
  9. Crap!
  10. Ah, I know - I'll just write an agent to do session.CurrentDatabase.AllDocuments.RemoveAll(True) and that'll fix that.
  11. Hmm, nope, that didn't work. Alright, based on the documents created around the same time, I can guess that it's the "option_value" field that's too big. Why did it even cram that data into summary if it completely breaks the document? Well, no point in dealing with that now. It's time for StampAll to take care of that!
  12. Nope? Okay, how about session.CurrentDatabase.GetView("All").AllEntries.StampAll?
  13. Not all categorized? What kind of error message is THAT?
  14. Time to delete the database and start over!
  15. Alright, THIS TIME, check to see if the String value is over, let's say, 16000 bytes and, if so, store it in a RichTextItem instead.
  16. Oh nice, it worked.
  17. Oh crap, I forgot to include the table name again.

And that's all there is to it!
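
In all seriousness, the coercion logic from steps 2-4 and 15 boils down to a small helper. This is only a sketch - "copyCell" is a hypothetical method inside the Java agent, with lotus.domino imports assumed and numbers stored as doubles (which is what Domino uses internally anyway):

// Hypothetical helper inside the agent class
private void copyCell(final Session session, final Document doc, final String column, final Object cell) throws NotesException {
	if(cell instanceof java.math.BigInteger) {
		// Domino numeric items are doubles under the hood
		doc.replaceItemValue(column, ((java.math.BigInteger)cell).doubleValue());
	} else if(cell instanceof java.sql.Timestamp) {
		doc.replaceItemValue(column, session.createDateTime((java.sql.Timestamp)cell));
	} else if(cell instanceof Boolean) {
		doc.replaceItemValue(column, Boolean.TRUE.equals(cell) ? 1 : 0);
	} else if(cell instanceof String && ((String)cell).getBytes().length > 16000) {
		// Dodge the 32k summary limit by storing large values as rich text
		doc.createRichTextItem(column).appendText((String)cell);
	} else {
		doc.replaceItemValue(column, cell);
	}
}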

#{ruby: 'it\'s a start'}

Tue Apr 03 19:55:10 EDT 2012

Tags: domino ruby

Oh man, I think this might actually work. Feeling adventurous this evening, I decided to look into the XPage runtime's expression language handler. After poring through tons of methods, interfaces, implementation classes, EXTENDED implementation classes, and disparate JARs, I narrowed the prefix handler down to the "FactoryLookup" property of the IBM-specific variant of facesContext's Application. With that, which is basically a hash, you map a prefix to a handler factory (it's always factories with Java, isn't it?). Once there's a handler registered, you can then bind expressions with that prefix.

This actually means what it implies:

<xp:text value="#{ruby: 'hi from ruby'}" />

And it works! Better still, the show-stopper in Designer seems to have been resolved in one of the recent versions: rather than erroring out when it sees an unrecognized EL prefix, it nags you with a warning but otherwise proceeds without problem, saving the script as-is in the resultant Java.

Now, this is VERY much a first pass at the idea - all I did was write extremely skeletal implementations of the Factory and Binding classes and then I register them in the beforePageLoad event of a page:

// Grab the application's IBM-specific factory lookup
var app = facesContext.getApplication()
var facts = app.getFactoryLookup()

// Register the Ruby binding factory under its prefix
var rfac = new mtc.ruby.RubyBindingFactory()
facts.setFactory(rfac.getPrefix(), rfac)

I can't even begin to stress how not the right way this is. It doesn't work in all controls (repeats, for example, seem to do an extra syntax check), it doesn't yet have any context from the surrounding environment, and, most importantly, who knows what horrible things it's doing to the HTTP stack?

Still, the crucial point is that it really, really looks like it can work.

The two classes I wrote, which no one should, under any circumstances, use, are here: https://github.com/jesse-gallagher/Domino-One-Offs/tree/master/mcl/ruby

Putting the Domain Catalog to a Bit of Use

Tue Apr 03 13:33:52 EDT 2012

Tags: domino

Since I kind of backed my way into Domino development and administration, there are a number of areas of the server's functionality that I'm either unfamiliar with or casually brushed off as unreliable or not overly useful.

The Domain Catalog is one such area: I've been vaguely familiar with it, but have never bothered to tend to it or use it to solve problems. Fortunately, a problem it's perfectly suited to fell into my lap. In an overarching administration database, I want to get from a document with a database doclink to the database itself as quickly as possible. The easiest way, programmatically, is to use the .ServerHint and .DBReplicaID properties in a call to an unconnected NotesDatabase object's .openByReplicaID(...) method.

This works well, but doesn't take into account the existence of local replicas - the admin database is running on one server, while the databases are created and linked to on the primary production server. Given that these servers aren't even in the same physical state, it's significantly faster to use a local replica when available, and only fall back when it hasn't been created yet. My first attempt at getting around this was direct: I first call .openByReplicaID(...) with a blank first parameter and, if it's not open, try the real server. Again, this worked, but is not ideal. It's programmatically ugly and, besides, there are more than just the two servers. .openWithFailover(...) looked promising, but seems to only accept real file paths, not replica IDs, which ruled it out for this purpose.
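
In Java terms (the LotusScript version is equivalent), that first attempt is roughly the following sketch, assuming "serverHint" and "replicaId" have already been read from the doclink document:

// getDatabase(null, null) returns an unopened Database object to call openByReplicaID on
Database db = session.getDatabase(null, null);
if(!db.openByReplicaID("", replicaId)) {
	// No local replica - fall back to the server named in the document
	db.openByReplicaID(serverHint, replicaId);
}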

This is where the Domain Catalog comes in. It already contains a (hopefully-up-to-date) list of all of the replicas of every database in the domain along with everything I could need to connect to them. Furthermore, I realized I could use this information with a dash of extra intelligence: since I know which servers are physically closest, I could use that to find the "best" replica in each situation. So I made a view for this admin server ("Ganymede") of all of the database stubs, sorted first by replica ID and then by a column with this formula:

name := @Name([CN]; Server);
@If(
     name="Ganymede"; 1;
     name="Demeter"; 2;
     name="Invidia"; 3;
     name="Dionysus"; 4;
     1000
)

With replica ID in hand, this means that I can just do a @DbLookup() in the view and the first result will always be the best, regardless of which server it is. I have another column with the full "Server!!FilePath" path, so I can just pull that value and do it in one function. Nice, easy, and it lets Domino take care of the nitty-gritty details for me.

What Makes the Hassle Worthwhile

Mon Apr 02 22:34:03 EDT 2012

Tags: domino ruby

I've been toying with my Ruby servlet a bit this evening and it didn't take long to start having some fun. For example, here's a snippet from a page I'm building with Markaby, which is an aging little library that makes building HTML pages declaratively a cinch:

$database.views.sort { |a, b| a.name <=> b.name }.each do |view|
  li { a view.name } unless view.name =~ /^\(.*\)$/
end

That prints out the names of all the non-hidden views in the database, sorted alphabetically, inside an HTML list. That's barely scratching the surface and not often useful, of course, but it's a proof of concept. Server JavaScript can do some cool things, but the required syntax makes it a classless language by comparison.

I also realized earlier today that the same idea here can be translated to writing agents in Ruby by creating them as Java agents and putting the Ruby code in attached Resource files. It's a BIT of a hassle, in that I can't use the WebDAV trick to edit the scripts with TextMate, and it seems like I have to manually rebuild the agent whenever I change the script file, but that's not too bad. I'll probably try it out on some non-critical agents on my dev server to see if there are any critical memory issues.

Next step: figuring out this newfangled OSGi doohickey.

My Recurring Ruby/Domino Dream

Sun Apr 01 18:54:17 EDT 2012

Tags: domino ruby

As is no doubt clear by now, one of my obsessions when it comes to Domino is trying to make my programming life better, and one of the best ways I can think of accomplishing that is if I can make it so I can program in Ruby instead of one of the godforsaken languages natively supported.

In general, my attempts towards this goal have fallen into three categories:

  1. Accessing Domino from Ruby as one might a normal database. The very-much-in-progress fruit of this is my Domino API for Ruby project.
  2. Using Ruby as an alternative scripting language for XPages, like "#{ruby: blah blah blah}". This would be nice, but I haven't made any progress towards it.
  3. Running Ruby scripts out of an NSF via the web, as one might a non-scheduled agent.

The last one has been the one at which I've taken the most swings, and I actually made some progress along those lines today. I decided to give a shot to using Domino's ancient servlet support combined with JRuby to accomplish this, and I think I've done it.

Since this is Domino, there were a couple hurdles:

  1. Servlets aren't automatically authenticated as the current Domino user. When I was looking into this, I read a lot about using NotesFactory.createSession(null, token), where "token" is the value of the LtpaToken cookie in an SSO configuration, which I use. However, this didn't work for me, possibly due to using site documents. Fortunately, there appears to be a better way: NotesFactory.createSession(null, req), where "req" is the HttpServletRequest object passed in to the servlet's method. This works splendidly, giving me a session using the current user without having to worry about figuring out their password or dealing with tokens manually (see the sketch after this list).
  2. Finding the context database and script. The servlet is triggered properly by ".rb" extensions in file resources, but that still left the problem of actually opening that resource. Fortunately, the Session class has a resolve(String) method that can be used to get a Domino product object from a Notes URL. For file resources, this uselessly gives back a Form object, but you can get its UNID from the getURL() method.
  3. Reading the contents of the script. File resources appear to store their data in a $FileData rich text field, but that doesn't make it easy to deal with. For now, at least, I've gone the DXL route: I use the Document's generateXML() method, find the BASE64-encoded value in the <filedata> element, and decode that with sun.misc.BASE64Decoder.
  4. Actually executing the script. I banged my head against this for a long time: I kept getting NoClassDefFound exceptions for org.jruby.util.JRubyClassLoader when trying to run the script. I assumed at first that this was a problem with the class loader not finding this secondary class, so I tried tons of stuff with moving the JAR around, repackaging my servlet, and even trying manual class loading.  However, the exception was a red herring: it was really a permissions thing, so, not wanting to take any guff from a JVM, I granted myself all permissions and it worked great.
  5. Editing the Ruby script. This part wasn't really a crucial flaw, but the fact that Designer doesn't know about Ruby meant that editing the file resource involved no syntax highlighting or other fancy features. DLTK doesn't seem to play nicely with the Eclipse version in Designer and I've ruined my installation by messing with that stuff too much in the past, so I decided to try WebDAV. Unfortunately, Domino runs the servlet on WebDAV requests too, so, rather than trying to figure out how to handle that nicely, I just switched to accessing the database via a clustered server that doesn't run the servlet.
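
To illustrate hurdle #1, here's a minimal sketch of the authentication piece - the class name and output are hypothetical, and error handling is pared down:

import javax.servlet.http.*;
import lotus.domino.*;

public class RubyServlet extends HttpServlet {
	@Override
	protected void doGet(final HttpServletRequest req, final HttpServletResponse res) throws javax.servlet.ServletException, java.io.IOException {
		try {
			// The request-based factory method picks up the already-authenticated Domino user
			Session session = NotesFactory.createSession(null, req);
			res.getWriter().println("Hello, " + session.getEffectiveUserName());
		} catch(NotesException e) {
			throw new javax.servlet.ServletException(e);
		}
	}
}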

It was a lot of hassles, but it seems to work! The cleaned-up version of the method I used is:

  1. Put the "complete" JRuby JAR into Domino's jvm/lib/ext folder.
  2. Put the compiled class file for my servlet into Domino's data/domino/servlets folder.
  3. Grant all permissions to the JVM.
  4. Enable servlets for the server in question and specify "rb" as one of the file extensions.
  5. Enable the servlet in servlets.properties, as in my example.
  6. Restart HTTP/the server.
  7. Add .rb Ruby scripts as file resources to a database. I made one that makes a list of some documents from a view in my DB.
  8. Open the resource in a web browser and see it work! Hopefully!

One caveat: though I set the error and output streams of the Ruby environment to be the HTTP output, this seems to not always work, so I've found it best so far to use "$response.writer.println" instead of "puts" for writing text to the stream. I'll see if I can fix that. (Update: it turns out that writing to the HTTP output should be done by calling setWriter(Writer) rather than setOutput(Writer).)

Additionally, that "$response" there isn't a built-in feature of JRuby - it's a variable I put into the runtime. I do this along with "$request", "$session", and "$database". Think of them like the equivalent variables in XPages.

Some questions you may have are "is it stable?" and "is it fast?" and to those I have a simple answer: beats me! I just got it to work a bit ago and I don't know if I'll ever even use it. It's quite exciting, though, and that's what really matters.

https://github.com/jesse-gallagher/Domino-One-Offs

In Between My Project and XPages

Thu Mar 15 16:11:56 EDT 2012

Tags: domino xpages

Despite my grousing about the state of programming for Domino in general and Designer in particular, I'm still mostly a fan of XPages. I use it for my guild's web site and pretty much every new project at work. However, I haven't been able to crack migrating my main work database template over.

Without getting too much into it, the point of the template is to create one database per project to act as a project web site listing online events with arbitrary registration forms and exit evaluations (among other things). Except for custom changes, everything is done via the normal Notes client, not Designer - pages exist as documents with a Body rich text field and associated data, and register forms/exit evals, crucially, contain a set of response documents describing their fields. It's important to keep everything as visual as possible, because the users (the people at my company that set up these web sites) are not programmers.

I scrupulously try to avoid modifying the design of the database, so I go out of my way to do custom work via the existing mechanisms as much as possible, and I make pretty good use of rich-text-isms like embedded views, tabbed/computed tables, attachments, buttons, and computed text. In the case of the generated forms, fields can be placed either automatically (with a surrounding template of HTML per-field) or directly into the rich text body via <%FieldName%> placeholders (creating an experience like form design in Designer). Additionally, I have "WebQueryOpen" and "WebQuerySave" fields in the documents for formulas that I pass on into @Eval() in the equivalent places in the actual forms; most of the time, I use these for running agents.

Computed Text

Of the various potential show-stoppers, I think computed text fares the best. For one, it mostly worked - the computed formulas are indeed evaluated and the result is put into place correctly. However, they don't have the same environment you get with the "classic" design elements. The big one that I ran into immediately was @UrlQueryString(...) - it appears that the rich text renderer doesn't inform the rich text about its web environment completely. Prior to 8.5.3, "Display XPage instead" pages didn't know about the real URL, so they couldn't get query parameters at all, but 8.5.3 appears to pass that information along properly. So that means it MIGHT be fixable, if I find a way to properly set fields in the document before the rich text is rendered, so I can set QUERY_STRING - if I can do that, either @UrlQueryString(...) will work or I can manually parse the string as needed.

Embedded Views

They work! ...ish. It looks like icon columns get their URLs a bit messed up, but I can't think of a time when I actually used them, particularly in a situation where I couldn't just write out the URL myself.

Tabbed/Computed Tables

From a cursory glance, I think I'd be SOL when it comes to these. Tabbed tables render flattened out and computed tables don't seem to work. The former can be improved on easily with Dojo, but the latter would mean replacing server-side business logic with client-side JavaScript, which would lead to headaches.

Attachments

These show up as "(See attached file: foo.jpg)". So... not functional. I might be able to fix it by using a filter on the text to replace the HTML with a link to the document, but I don't know if I could get the attachment image properly. Attachment images aren't amazing, but sometimes they do the job.

Buttons

Nope.

WebQueryOpen/WebQuerySave

In some cases, I could run the formulas through session.evaluate(), though I'm not even sure queryOpenDocument/postOpenDocument let me hook into the right spots (sometimes I set fields on document open). I don't think that would run agents, though, so I'd be stuck trying to parse out the text to look for @Command([ToolsRunMacro]; "...").
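
As a hedged sketch of that formula idea in Java - "WebQueryOpen" being the item name from my template, and with the caveat that this wouldn't cover the agent-running commands:

// Sketch: evaluate the stored formula in the context of the current document
String formula = doc.getItemValueString("WebQueryOpen");
if(formula != null && !formula.isEmpty()) {
	java.util.Vector<?> result = session.evaluate(formula, doc);
}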

User-generated Forms

This is the toughest one. I've given this thought a number of times, and I can't think of a great solution. I can't just re-use the subforms I have already, I don't think I can generate and import an XPage via DXL (though I haven't given that a significant shot), and I can't just do a <xp:repeat/> to create all the fields, since some are in the body area. The main routes I can think of to try are a) arranging the fields on the page after the fact via JavaScript and b) generating the page components on the fly with Java or Server JavaScript. I'll probably give the latter a shot eventually, but I expect it to be a bag of hurt.


It's a lot of hurdles! I'd love to switch over to XPages, particularly since, for everything that's more difficult than using classic elements, there are 10 things that are way easier. It'd just be quite an investment of time to merely get up to par with the functionality I already have, if it's even possible in all cases.

Quality of Life: Eclipse Color Themes

Sat Mar 10 18:21:20 EST 2012

Tags: domino

Well, this is a nice quality-of-life improvement: the plugin Eclipse Color Themes, which (so far) works just fine in Designer (8.5.3). Having to go through every single editor type and manually pick each color for each code element every time I wanted to change a color theme was always a huge annoyance, particularly compared to other editors. Fortunately, that plugin handles it pretty well, though it unsurprisingly doesn't support LotusScript, so that's still manual. I installed it without problem, found its settings in General\Appearance\Color Themes, installed a port of my preferred theme, and saved myself tons of hassle.

How I Want To Use Domino

Wed Mar 07 12:43:00 EST 2012

Tags: domino
  1. How I Want To Use Domino
  2. How I Want To Use Domino, Take 2

From my perspective, there are three main problems with Domino: the limits, the client, and the server. Now, that's a lot of stuff... most of the product, in fact. However, the facts that I'm still programming for it and that my company's 16GB project-tracking database is as snappy as it was when it was empty attest to the core quality of the product. Off the top of my head, I can think of a number of things that make Domino salvageable:

  • Reader fields. These are hard to beat and hard to find elsewhere. While you can do everything without them that you can do with them, you'd likely spend a lot of time worrying about edge cases or limiting yourself to coarse security. If a user doesn't have Reader access to a document, it doesn't exist (well, except for some shadows in view indexes).
  • Solid, straightforward directory integration. Going along with reader fields, it's very helpful that the directory system is part of the overall product and works well enough with LDAP that you don't have to worry too much about it. User names, group memberships, and roles all make sense.
  • Full-text searching. It works and it's quick.
  • View indexing, more or less. As long as you don't rock the boat too much and don't think about what you used to do with SQL tables and views, Domino does a fine job of maintaining views for you.
  • File attachments. They work well and can even be full-text searched if they're the right type.
  • ID files. I used to hate these, and they're often more hassle than they're worth (as my C API programming is demonstrating), but they have some distinct advantages. If you squint, you can see them as really long passwords that may themselves have another level of password. They're also essentially the same concept as SSH keys, and they tie into handy encryption and certificate abilities that I don't usually have much use for.
  • Potential capability as a blob store. I saw a great post a while ago talking about the ability to just cram arbitrary data into a Domino document, and I think this is a very valuable line of thinking. If your data fits into the model of "a couple queryable bits and then a block of arbitrary data" or the "upside-down" method of memory-storage-first from the post, such a setup could work extraordinarily well.
  • Replication and clustering. Sync is not hard... unless you let your deletion stubs expire.
  • Agents, more or less. The language choices and lack of immediacy make agents kind of a pain sometimes, but it's still handy to have code directly associated with the database with schedule and event triggers.
  • Read marks. It's nice to have the option to let the server handle this for you when it fits your needs.

That's on top of the various benefits you get from document databases generally, like multi-value fields and the lack of schemas. Note, though, that none of these are (directly) related to Domino's chops as a mail platform, web server, or GUI app environment, its choice in programming languages, the value of its standard array of templates (though they can be handy), or pretty much any virtue extolled by its marketing materials. The guy writing on the invisible whiteboard on IBM's page doesn't care about reader fields or blob storage.

Though most of my Domino work is for the web and XPages are a better way to do that than the legacy elements, it's still a drag. Java as a language is a hassle (the platform has its charms), Server JavaScript isn't a full replacement in the way that Groovy and JRuby can be, and working on top of a giant Jenga tower of Java classes and abstractions can be hazardous to your health.

I really just want to treat Domino like MongoDB, CouchDB/Couchbase, and the rest: an environment-independent document database. This is probably not worth the effort, particularly since IBM seems passively hostile to the notion, but I find it to be a compelling idea. It doesn't have to fit into, say, Rails, but grinding Domino down to its solid NoSQL core could open up a lot of possibilities.

Some Niceties of Implementing a Notes API

Mon Feb 20 07:56:43 EST 2012

Tags: domino ruby

There are a couple things about writing my Ruby wrapper for the C API that make it particularly fun, mostly related to getting to add abilities that I desperately wish were there in the normal APIs.

  1. Ruby-style (forall) looping. Anyone who has iterated over a NotesDocumentCollection knows the drill: set a variable to the first element, start a while loop, and make sure to set the variable to the next one at the end. Writing it one time isn't so bad. Writing it hundreds of times, though? It gets to be a drag. Getting to write docs.each { |doc| ... } is a breath of fresh air (see the sketch after this list).
  2. Easier design-elements-as-notes access. The normal API lets you get a NotesDocument version of a NotesView via its UniversalID property, but for everything else you need a NotesNoteCollection, which is a hassle. Since all design elements are Documents anyway, I've just made their wrapper objects subclasses of Document (though I may change that to just a #document method if it gets hairy) and I've put a #get_design_note method on Database that lets you find a design note by name and flag class (NIFFindDesignNoteExt).
  3. HTML and DXL everywhere. I use DXL fairly constantly (mostly for design elements), and for the most part it's a hassle. Not only do you have to create a NotesDXLExporter, but you also have to get the Document version of the design note you're dealing with and run through its process. Not impossible, but sort of a hassle. Now, though, I just put a #to_dxl method on everything - this is like the .generateXML() method in Java, but consistently applied. Similarly, I can't count the number of times when it would have been handy to get an HTML representation of some element, even if it was just the dated stuff that the legacy renderer puts out. Since that's all there in the C API, I just put a #to_html method on everything and a #get_item_html method on Document for very easy access to web-friendly versions of MIME and Rich Text items.
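
Here's that classic pattern in the Java flavor of the API - a minimal sketch with placeholder names, though the getFirstDocument()/getNextDocument() dance itself is the real thing:

import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.DocumentCollection;
import lotus.domino.NotesException;

public class CollectionLoop {
    public static void process(Database database) throws NotesException {
        DocumentCollection docs = database.getAllDocuments();
        Document doc = docs.getFirstDocument();
        while (doc != null) {
            // ... do the actual work with doc here ...

            // Fetch the next document and recycle the current one - forgetting
            // either line is a classic source of bugs and memory leaks
            Document next = docs.getNextDocument(doc);
            doc.recycle();
            doc = next;
        }
    }
}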

It's just a shame that I probably won't be able to use this: what I'd really want to do would be to use it like a database driver on a Ruby-driven web site, but the fact that it's so tied to ID files makes that tough. Still, it's great exercise to write it, and maybe I'll cave and set up some sort of multi-process hydra beast to suit my needs.

Started Work on a Ruby Wrapper for the C API

Wed Feb 15 19:28:43 EST 2012

Tags: ruby domino

As I had mentioned before, I've been tinkering about with the Domino C API, specifically with Ruby. Although I'm not sure I'll actually have a use for it (from what I can tell, the C API is very tied to ID files and threads, which would make a multi-user web server thing cumbersome), I've decided to go for it and write a wrapper for the API generally based on the Java/LotusScript API. This is serving a number of purposes:

  1. It gets me off my (metaphorical) duff and in front of a text editor during my off hours.
  2. It lets me work with Ruby in a real capacity. And moreover, it'll expose me to how to write a Ruby library.
  3. It's introducing me to proper version control, something I absolutely need to do better.
  4. It will eventually force me to write structured documentation, which I haven't had to do in the past.
  5. It should fill a hole currently only partially filled with OLE-based libraries, though the ID-file thing will make it awkward.
  6. It brings me back to the world of C structures and pointers, albeit cushioned by FFI.
  7. By the time I'm done, I'll know Domino inside and out.

So far, I have some of the basics working: creating a "Session" by passing in the program directory and the path to an INI file, fetching databases, traversing and full-text-searching views, reading view entries, and reading basic and simple MIME data from documents. Obviously, there's plenty more work to do, and I don't know if I'll bother implementing the more esoteric classes like NotesRegistration, but it's off to a fun start so far:

https://github.com/jesse-gallagher/Domino-API-for-Ruby

Point 7 on that list has proven surprisingly interesting. It's fun to see the bits that apparently never fully made it into the product (like number ranges), the little optimized paths where the documentation goes out of its way to point out that it's particularly efficient, and all the data structure sizes that create the various limitations that make Domino programming such an... experience.

This Dynamic View Customizer Is Getting Into Shape

Mon Feb 13 10:08:12 EST 2012

Since last week, I've made a few nice improvements to my dynamic view customizer:

  1. I added some support for twistie images when the referenced DB is on the same server. The code assumes that the referenced images are image wells with at least two entries, but I can't imagine why that wouldn't be the case in practice.
  2. I vastly improved my handling of color columns. Previously, I had been resorting to hacky methods like hidden <div>s read by JavaScript or surrounding <div>s styled to take up the whole cell, but those were terrible and easy to break. Now, though, I'm doing it right: the code adds a value binding for the column's style attribute to create the CSS for each cell, which is ideal.
  3. Empty categories are now translated to "(Not Categorized)", as in the client.

Now that my code is presentable (albeit oddly structured and uncommented), I figured I may as well toss it up on GitHub:

https://github.com/jesse-gallagher/Domino-One-Offs/blob/master/mcl/reports/DynamicViewCustomizer.java

A note if you want to use this in your own project: it references the "mcl.JSFUtil" object, which started as the Mindoo object of the same name and has since turned into my bin for common functions. The methods used here are getSession(), which just gets the value of the JSF "session" variable, and the xmlEncode() and specialTextDecode() functions from my string utils. Additionally, it references "com.raidomatic.xml.*", which are quick wrapper classes I made to ease basic XML access, and which are also available on GitHub:

https://github.com/jesse-gallagher/Domino-One-Offs/tree/master/com/raidomatic/xml

Enhancing xe:dynamicViewPanel For My Own Purposes

Thu Feb 09 08:45:30 EST 2012

I think I have my view rendering problem licked. To recap, I've been working on a way to show views in XPages that met a couple requirements:

  1. Entirely dynamic. Since this will be for a combined reporting site that will show views from customized project databases of wildly varying needs, I couldn't make any assumptions about view layout, categorization, or content. It should pull as much display information from the view design as possible.
  2. Fast. Some of these views have thousands of rows, so it should be as fast as possible to load from the database and render for the browser.
  3. Pagination. Though I don't really like pagination for the average case (who wants to look at a 40-row view 30 rows at a time?), sending thousands of table rows to a browser causes a miserable experience even when that browser isn't IE.
  4. Full-text searching and other Domino goodies. I don't want it to be a pain to just be able to search through a view like you would in the Notes client.
  5. Support for "second tier" Domino view features. I use special text, column hide-when formulas, "show values in this column as links", Notes-style [<span>pass-through-HTML</span>], and color columns constantly.

Of these, 1 and 5 are the most important.

I originally wrote my own code to generate a big HTML blob for the view, which was perfect on points 1 and 5 (and looked great, since I converted categories to OneUI sections), but wasn't as hot on the other points. The Extension Library's xe:dynamicViewPanel control hits points 1 through 4 with aplomb, but misses the crucial point 5. For one crazy moment, I toyed with the idea of trying to get the output of the "legacy" HTML view renderer (which is perfect on all points) via client-side JavaScript or some other hack, but decided against it as being too rickety.

I considered changing my custom HTML code to instead create a bunch of Java objects to make pagination easier, but performance would be an issue - it's tough to beat built-in controls for raw speed. As anyone who has looked at the code for the mail template knows, IBM/Lotus cheats like crazy, and often the only thing you can do is to just go with what they've done and beat it into shape. Thus, the answer came to me: customizer beans for the ExtLib's dynamic view panel.

The dynamic view panel has a "customizerBean" property that takes the string class name of a bean for it to instantiate, which can hook into a handful of overridable methods and events:

  • ViewFactory getViewFactory() - this lets you override how the panel pulls in all of its information about the view.
  • boolean createColumns(FacesContext context, UIDynamicViewPanel panel, ViewFactory f) - this lets you override how the panel uses the gathered view information to generate the list of data table columns (or simply run code before that happens, if it returns false)
  • IControl createColumn(FacesContext context, UIDynamicViewPanel panel, int index, ColumnDef colDef) - similarly, this lets you change the way each individual column is created based on the ColumnDef (an object generated by your ViewFactory)
  • afterCreateColumn(FacesContext context, int index, ColumnDef colDef, IControl column) - this is an event fired after each column is generated, allowing you to customize the generated column without having to build it from scratch
  • afterCreateColumns(FacesContext context, UIDynamicViewPanel panel) - similarly, this lets you run code after the column creation is done

These events provided just the hooks I needed to get the dynamic panel to do what I wanted.
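
As a sketch, the shell of such a bean looks something like the following. Fair warning: the base class name and the packages for ViewFactory, ColumnDef, and IControl are my assumptions and vary by Extension Library version, so treat this as a map rather than copy-paste material:

// Skeleton only: DominoViewCustomizer as the parent type is an assumption,
// as are the ExtLib imports - check your ExtLib version for the real names
public class MyViewCustomizer extends DominoViewCustomizer {
    @Override
    public ViewFactory getViewFactory() {
        // Return a DefaultViewFactory subclass to expose extra view design info
        return new MyViewFactory(); // hypothetical factory subclass
    }

    @Override
    public void afterCreateColumn(FacesContext context, int index, ColumnDef colDef, IControl column) {
        // Tweak each generated column: swap converters, adjust styles, etc.
    }

    @Override
    public void afterCreateColumns(FacesContext context, UIDynamicViewPanel panel) {
        // Run any final cleanup once all the columns exist
    }
}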

Special Text

To add special text support, I added an implementation of afterCreateColumn(...) that replaced the column's default ViewColumnConverter with my own subclass that overrides getValueAsString(...). With that, I can walk up the current component's ancestors to find the panel, find the individual ViewEntry's variable name from there, and use that to process the special text properly.
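
Sketched out, the swap looks something like this - I'm assuming a standard JSF-style (context, component, value) signature for getValueAsString, and specialTextDecode stands in for the string-utils routine mentioned elsewhere on this site:

// A converter subclass that post-processes each rendered column value
public class SpecialTextConverter extends ViewColumnConverter {
    @Override
    public String getValueAsString(FacesContext context, UIComponent component, Object value) {
        String raw = super.getValueAsString(context, component, value);
        // Walking up from `component` to find the panel and the row's
        // ViewEntry variable is elided here
        return specialTextDecode(raw); // hypothetical helper
    }
}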

Column Hide-When Formulas

These are a bit trickier. The getViewFactory() hook lets me provide my own subclass of DefaultViewFactory, but there's a reason that the dynamic view panel doesn't support these formulas by default: they're not exposed by the NotesViewColumn class. The two ways I can think of to get at this information are the C API (or otherwise reading the design information from the view note directly) and DXL. The former is no doubt significantly faster, but DXL is much, much easier and is good enough for my needs at the moment. So I export the view as DXL and process its column nodes and their children. Then, for each with a hide-when formula, I evaluate the formula in the context of a new document in the project's database and use the result to set the column-hidden flag.
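
The plumbing for that, roughly, with the plain lotus.domino API - the XML traversal is elided, and the DXL element names reflect my reading of the schema (the formulas live under <column><code event="hidewhen">):

import java.util.Vector;
import lotus.domino.*;

public class HideWhenSketch {
    // Export the view note as DXL; parse out the hide-when formulas from there
    public static String exportViewDxl(Session session, Database database, View view) throws NotesException {
        DxlExporter exporter = session.createDxlExporter();
        Document viewNote = database.getDocumentByUNID(view.getUniversalID());
        return exporter.exportDxl(viewNote);
    }

    // Evaluate one hide-when formula in the context of a new document
    // in the target database; a non-zero result means "hide the column"
    public static boolean isHidden(Session session, Database database, String hideWhenFormula) throws NotesException {
        Document scratch = database.createDocument();
        Vector<?> result = session.evaluate(hideWhenFormula, scratch);
        return !result.isEmpty() && !Double.valueOf(0).equals(result.get(0));
    }
}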

"Show values in this column as links"

This goes hand-in-hand with the hide-when formulas. The column nodes in the DXL have a showaslinks attribute that I can look for to set this flag.

Pass-through-HTML

Much like special text, once I have the custom converter attached to the column, I can break apart and reassemble text values on [< and >] and XML-encode the non-HTML bits. While I'm attaching the converter, I set the column's content type to "html", since I'll be handling the encoding.
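
The string handling is simple enough to sketch in full - text outside the bracket pairs gets XML-encoded, while the markup between them passes through with its angle brackets restored (xmlEncode here is the utility function from my string utils):

public static String encodeWithPassThrough(String value) {
    StringBuilder result = new StringBuilder();
    String remaining = value;
    int start;
    while ((start = remaining.indexOf("[<")) >= 0) {
        int end = remaining.indexOf(">]", start);
        if (end < 0) { break; }
        // Encode the plain text leading up to the pass-through chunk
        result.append(xmlEncode(remaining.substring(0, start)));
        // Restore the angle brackets that the delimiters consumed
        result.append('<').append(remaining.substring(start + 2, end)).append('>');
        remaining = remaining.substring(end + 2);
    }
    result.append(xmlEncode(remaining));
    return result.toString();
}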

Color Columns

These are... tricky, and I'm not yet proud of my fix. There are two problems: getting each cell to know about its color information from previous cells and then actually applying that color information. I dealt with the former by adding an "activeColorColumn" property to the ColumnDef object and setting it to the programmatic name of the most recent color column before each column. The latter is the ugly part. You can't just set the column's style, since that will apply to the entire column and make a fine mess of everything. What you really want to do is assign it to the current <td> element, but I don't know a good way to do that. In the meantime, I've hacked around it by making a hidden <div> in each cell with the style tucked into a data-cell-style attribute. Then, I have some client JavaScript look for these divs and apply their styling to the parent table cell. It's not pretty, but it works for now.

If I get that last bit of code cleaned up to a point where I'm comfortable letting people see it, I'll post my example code. In the meantime, I'll count this whole thing as a win for only writing the code you need to and letting the platform take care of the rest.

Formatting View Content on the Web

Wed Feb 01 10:06:31 EST 2012

Tags: domino

My current work project involves displaying arbitrary views from various databases into a combined reporting site (written in XPages). This has presented me with two hurdles: pulling in the data completely and accurately and then figuring out a good way to format it.

The former problem is one of the few areas where it seems like "classic" Domino development has an edge: views render rapidly and, as long as you've configured the server to display lots of rows or add in pager controls, completely. And with the recent "enhanced HTML" property, they generally get tagged with some useful CSS classes. The default XPages xp:viewPanel control, by contrast, needs the column definitions at design time, which makes it tough to work with. You can sort of roll your own with nested xp:repeat controls, but you quickly run into problems with categories and other Notes-y features.

Fortunately, the Extension Library, as usual, comes to the rescue: the xe:dynamicViewPanel does pretty much what you want: you give it a view and it renders it out nicely. Using it, you don't have to worry about handling categories, you can add in standard pagers, and you can include all the standard filtering and searching options on the view. It's not perfect (yet), though: it doesn't handle fixed-value columns (like a formula of "hi"), use color-column information, follow column hide-when formulas (though it supports the static "hide this column" checkbox), do Notes-style bracketed pass-through-HTML, process special text, or honor "show values in this column as links." These are all understandable limitations, being as they are either rarely encountered or technically difficult. Unfortunately, I've run into all of them rapidly, so I don't know if I can use xe:dynamicViewPanel as-is. The best thing to do would be for me to roll up my sleeves, implement these features in it, and submit my changes, and maybe I'll do that down the line, but there'd be a lot of learning overhead before that point.

What I've taken to in the meantime is a Java class to write out the view contents as HTML. I add the color-column info via inline style="..." attributes, look up the hide-whens by exporting the DXL and then running them through session.evaluate (read: not fast), and split and recombine the strings to handle pass-through-HTML. Unfortunately, I lose speed (because I'm presumably not as good at this as IBM's programmers) and the flexibility to easily use searching, filtering, and paging. The first two will be relatively easy to implement, but paging will be a problem, since the XPage just gets back a big string blob. I could return Java data structures, but then I'd run into problems with categories. I could write classes to dynamically add XSP elements to the page, but at that point I'd be best off cracking open the ExtLib source code anyway.

The other problem is that, regardless of how I render the view, it can be tough to actually display all the data well on a web page. For normal views, with a handful of columns, it's fine to just plop it right on the page and be done with it. However, it doesn't take much to make the table look like crap: headers that are way too long, large strings in the row data, or just having lots of rows all very quickly stretch or break the design.

I think I'll try to handle the headers by adding some CSS text-overflow rules to keep them to a fixed width, which will also mean adding in a title="" attribute so the user can hover over them to see the full value. Still, not a big problem. Large data cell values are a bit tougher. Theoretically, I could do the same thing: have the data overflow and add a tooltip. That would work fine if it's a rare occurrence, but what if the whole point of the view is to be able to easily look at that data? I may try to pull out the row-height property from the view and use that to provide a max-height to each cell via inline CSS, so that way I could solve the problem on a view-by-view basis. I could consider doing the same with width as well, rather than going the default route of letting the content width determine the column width.

Having a lot of rows can be partially solved with pagination (unless I'm going the "dump out a blob of HTML" route), but is that really a good user experience if the goal is to see all of the data? Paging on the web is pretty awkward, with the number of rows never exactly matching the height of the viewport, worrying about scrolling back to the top when going to the next page, and properly dealing with categories. Via CSS, I've made it so that the primary content area has its own scrollbars via absolute positioning and overflow: auto, and that works pretty well for the most part, but I still run into trouble with views containing thousands of rows: with a lot of data, it simply takes a long time to download the HTML and render the table. There are a couple options provided by Dojo and the Extension Library that provide fixed-size grids to store data that I've tinkered around with a bit, and maybe I'll settle on one of them, but in the meantime every solution has some serious problem that I quickly run into with my example data.

Getting My Feet Wet With Ruby and the Domino C API

Thu Jan 26 16:07:26 EST 2012

Tags: domino

My search for a useful way to use Domino as a database back-end for a Ruby front-end has continued and, more specifically, has continued to be difficult.

I gave a shot to the Java CORBA API. Initially, that went well: after grabbing NCSO.jar from my Designer installation, enabling DIIOP on the server, and setting up JRuby, I was able to connect to the server using the host name, canonical user name, and password. Great! However, once I exported my app's Java model classes (modified to use NotesFactory instead of the JSF runtime), I immediately ran into a problem: NotesView.resortView() is not implemented in CORBA. Guh. Theoretically, I could rework the app to use a different view for every index, but it's a canary in the coal mine: no doubt plenty of other new things (FTSearchSorted, for example) have been left unimplemented as well. Plus, this is Domino, so "new" probably refers to anything added after, say, R5.

So that's out, at least for now. That leaves me with the C API. Oh, the C API... so much promise, such a PITA to use. I haven't used C itself much since college, and my experience with the C API is limited to progress bars in the Notes client and, more usefully, my authentication filter for my server. However, I'll be programming in Ruby, not C directly, so it won't be so bad, right?

There are three main ways to work with C libraries in Ruby, I've learned: writing a native extension in C, DL, and FFI. In the first case, I'd basically have to write my API in C and then just use those objects from Ruby. That would be the best performance, but it's just asking for a nightmare. DL is programmed in pure Ruby, but it's so sketchy that just loading the library causes your program to emit a warning telling you you probably shouldn't be using it. FFI, though, is pretty great. Much like DL (or declaring external functions in LotusScript), you declare which functions you want to import, along with their signatures, and then it more or less just works.

That "more or less" is, naturally, tricky. I ran into tremendous trouble just getting it to load up libnotes.so, but most of that was due to using a 64-bit OS and a 32-bit Domino installation. I may make a post about it some time in case anyone else is crazy enough to try the same thing, but the upshot is that I ended up compiling my own 32-bit Ruby environment, which started to work.

Since then, I've been wrestling with the inherent traits and idioms of C (using string buffer arguments to return values instead of just returning a "string", for example) and the specifics of Notes (needing to be pointed to the executable directory as well as needing an ID file, an INI file, and a names.nsf to find the servers). After thrashing around for a while, though, I have it working to the point where I can point the initialization routine to an arbitrary INI file and it can pick up on everything it needs from there (I'm using a password-less ID file - I don't know if you can automate passwords).

Now that I have my foot in the door, it's "just" a matter of learning the entire API. I'm looking forward to that part, though - I've demonstrated that I can open a database on a server and get information out of it, so it seems like I have the right environment. If all goes well, I should be able to create a proper Ruby API for Domino, and a very fast one at that.

 

For reference, here's the script I've written. Don't expect clean code, reusability, comments, or proper methodology - this file is the result of random mauling during the course of development and just barely works to demonstrate the idea. It DOES demonstrate it, though, and works to ping a (hard-coded) server and get some info about its names database.

test.rb

A Couple Handy Domino Java String Utils

Wed Jan 25 11:08:30 EST 2012

As most developers probably do, I have a grab bag of "utility" functions/methods I use across various projects, and I figured I'd post a few of the handier ones.

The first is a basic XML-encoding function that just takes a string and returns something suitable for putting into an XML or HTML file. I've been too lazy to figure out if the standard Java library has an equivalent that doesn't involve creating an actual XML document in memory, so I just took the one I use for LotusScript and ported it over. It passes standard letter and number characters through as-is and then just encodes all others as Unicode entities. The string size increases accordingly, but it's presumably fast and it gets the job done, even if you have oddball characters.
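
The gist of it, sketched from that description (the full version is in the linked StringUtils.java below):

// Letters and digits pass through untouched; everything else becomes a
// numeric character reference, so oddball characters can't break the output
public static String xmlEncode(String text) {
    StringBuilder result = new StringBuilder();
    for (int i = 0; i < text.length(); i++) {
        char c = text.charAt(i);
        if (Character.isLetterOrDigit(c)) {
            result.append(c);
        } else {
            result.append("&#x").append(Integer.toHexString(c)).append(';');
        }
    }
    return result.toString();
}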

The next set are just quick-and-dirty Java equivalents to the StrLeft, StrRight, etc. functions from LotusScript. They do more or less the same thing as in LS and make it so you don't have to worry about string indexes all the time.
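
For illustration, the semantics go something like this - everything before or after the first occurrence of a delimiter, with LotusScript's convention of returning an empty string when the delimiter is absent (method names here mirror the LotusScript originals; the real versions are in the linked file):

public static String strLeft(String source, String delimiter) {
    int pos = source.indexOf(delimiter);
    return pos < 0 ? "" : source.substring(0, pos);
}

public static String strRight(String source, String delimiter) {
    int pos = source.indexOf(delimiter);
    return pos < 0 ? "" : source.substring(pos + delimiter.length());
}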

The last is potentially the most useful, since it's the most esoteric: a special-text decoder. I've had a couple occasions where I want to spit out the contents of a view for the web, but also want to preserve things like child counts on categories. Fortunately, special text is stored in the view column in a workable format: a special non-alphanumeric character, a single letter indicating which function it corresponds to, a number representing the parameter count for the function, and then a delimited list of those parameters with character counts. Using that, I wrote a routine to process all of the special text functions I could think of other than @IsExpandable (since that's meant for the Notes UI). It even reproduces the weirdo behavior you get when you pass a multi-character string to @DocLevel.

StringUtils.java

Confound It; That's Two APIs Down

Sun Jan 22 23:02:37 EST 2012

Tags: domino

In the interests of getting crap done, as soon as I was finished with my previous post, I fired up TextMate and a couple Terminal windows to start writing a Ruby wrapper for the DAS API.

It started out great! Wanting to avoid the minor hassle I ran into before with Ruby's built-in Net::HTTP library, I did a quick search for Ruby HTTP/REST libraries and picked one that worked well, named HTTParty. Before long, I had the rudimentary elements of a Notes API working, with classes to represent the server connection, NotesDatabase, and NotesView.

I was about to start working on the NotesViewEntry class when I decided I'd need a way to page through the view, since the call to the view "collection" (wisely) doesn't return all of the data at once. The API developers cleverly store this information in an HTTP header:

Content-Range: items 0-9/35

Perfect! But, um... I was getting the list of available forums as Anonymous, which can only see two, not all 35. Sure enough, the results only contained the two Reader-visible entries (phew), but that left a showstopping problem: I have fairly frequent need for the count of entries, and I don't relish the notion of having to fetch all of the data to get that. Crap.

So DAS is out for now. I remembered, though, about ?ReadViewEntries, an older and thus clearly more mature data-access API. I could probably do everything view-related I need with that and then use DAS to get any non-view document data. So I hit the view with ?ReadViewEntries&OutputFormat=JSON (the results are the same with the XML version) and, lo and behold, the top-level object fields (with data snipped) revealed the same lack of Reader knowledge:

{ "@timestamp": "20120123T034916,85Z", "@toplevelentries": "35" }

Crap!

My quest to treat Domino like a standalone database is off to a rocky start. I can think of a few remaining options:

  1. COM: I don't deploy on Windows, so... that's out.
  2. C API: seems like overkill and just asking for all sorts of new bugs.
  3. Web Services: MIGHT work, but I can't imagine the performance would be anywhere near acceptable.
  4. Some sort of crazy thing with servlets: seems too crazy.
  5. JRuby and the Domino Java CORBA API.

I think the last is the most plausible. My forum app already has Java classes in between the XPages front-end and the Domino database, so I could possibly take those wholesale and just change the methods I use to get the session and current database to instead initialize a CORBA session. I'm a bit nervous about speed and running across some of the various "not implemented in CORBA" landmines hidden across the API, but it might just work. If it does, my next task will be deciding on a web server and a Ruby web framework that's not terribly SQL-centric.

Keychain DB: Very Rough First Script

Sun Jan 15 16:20:31 EST 2012

Tags: domino

I've had a chance to start on my Keychain project from last week, enough to put together a thoroughly rough and unmaintainable script to do the uploading. It has all kinds of horrible properties: it doesn't do the Keychain dump itself (instead reading from a hard-coded file containing a keychain dump from the "security" tool), it doesn't abstract away any of the Domino DAS access, it doesn't check for existing versions of the items, it doesn't handle field data types properly, and it even writes to the filesystem. But hey, it's a start:

keychain-upload.rb (requires the json rubygem and uses auth information from ~/.netrc)

In spite of all its faults, it DOES push the data up to the database, and that's something. It stores the keychain file name in a field named "keychain", the "class" in one called "entry_class", the data in "data", and the rest of the arbitrary fields from the item in fields prefixed with "attr_". Once it was in the database, I set up a view called "Passwords" with the following formula:

SELECT Form="Item" & entry_class="genp" & attr_type != "\"note\""

The extra quotes around the type field's value are an artifact of the non-existent type handling in the original script - it just stores the values exactly as they appear in the dump file, rather than converting them to null, numbers, or strings.

It's not pretty, but I now have a way to view my Keychain entries on the web. I'll probably give a shot to doing this the "right" way via a Cocoa app and structured classes down the line, but for now it'll already be useful to me.

A New Personal Project With Keychain and DAS

Wed Jan 11 19:18:52 EST 2012

Tags: domino

With Apple's transition to iCloud, they're getting rid of Keychain sync. This is too bad, since that was one of the MobileMe features I actually used, loved, and never had problems with. Fortunately, all is not lost: worst comes to worst, I can pick up a copy of 1Password, which has the advantage over MobileMe of being cross-platform.

But I'm a programmer, right? Why buy something - especially for $50+ - when I can just write something myself? My needs at the moment are fairly simple - I only REALLY want a way to back up my Keychain and view it on the web, for when I'm in Windows or on my phone and want to check a password. I only use the one Mac currently, so I don't need proper sync, and I don't need to pull new passwords back down into the Keychain. Maybe down the line, but one step at a time.

The Keychain itself takes the form of a file housing a collection of records with a body of essentially arbitrary data (usually a string, but it can be different for certificates or notes), with a composite key consisting of, I think, the type, server, account, and maybe some other fields. The only thing that would make this a better test case for Domino 8.5.3's shiny new Data Access Services would be a 32-digit hexadecimal unique identifier, but it's pretty darn close as it is.

The main problem I have in getting started is actually getting at the data. I think there are two main ways: the C-based security/Keychain API or using the "security" command-line tool, which is a more-or-less friendly wrapper for much of the same functionality. Being the lazy type that I am that only deals with C when I have to, I'm starting with the "security" tool.

It has a handy "dump-keychain" command, so I can type something like "security dump-keychain login.keychain" and get a formatted list of all entries in that keychain, minus the "data" field. I can add the "-d" switch after "dump-keychain" to include that field, but that comes with a big caveat: you have to click an "Allow" button for every single item. Given that I have about 1,100 items, I'd rather avoid this fate. I've tried running it through sudo, but to no avail. I've tried fiddling with the "authorize" command in the tool, but also with no luck - I'm not sure if I'm just doing it wrong or if it's related to something else, which may or may not be what I want. I could use the "-r" switch instead of "-d" to get the encrypted data, which would satisfy my backup desire, but not my "check it from the web" desire. Absolute worst case, I could put on a movie or podcast and click "Always Allow" a thousand times.

However, once I get past that hurdle - whether it be via the C API or a lot of clicking - the rest should be easy. I should be able to create a basic database, grant DAS access to it, create a view sorted by the various components of the items' composite keys, and do a basic "view lookup, then update or create the doc" routine, pushing up the field values pretty much as-is. That should act as a perfect basic-case project for trying DAS out while also solving an actual problem I have.

Pondering RSS Syncing

Wed Nov 23 13:10:28 EST 2011

Tags: domino

I was listening to the latest episode of Build and Analyze on the way home from work yesterday and, as I am wont to do, I started yelling at my iPhone when they started talking about Google Reader and the difficulty of syncing. Admittedly, at the end, they got to the fact that, even if you could do it technically, it'd be tough to make money off of providing an RSS sync server. That part is fair enough, but I still can't let the technical difficulties stand, and I've been thinking more about how it would be done in Domino.

In the basic form, the problem in question is pretty much exactly what NSF and the Notes/Domino relationship is designed to do: seamless replication, deletion stubs, unread marks, and so forth. In fact, RSS syncing is a better fit for the model than mail, since mail required adding all kinds of extra (but useful) functionality, while RSS syncing would just be data and an agent to fetch the feeds periodically.

The way I figure it, there would only be a couple technical hurdles, both related to scaling: storing large volumes of data and fetching new feed content periodically.

Storing large volumes of data might not be too bad. There are a couple ways you could do it. One would be to store the user's list of subscribed feeds and "read" stubs in one database per user, and then store the feeds and feed content in another database (or databases), and do all data access via agents or web services that would pull the data from each distinct location. Another way could be to store the feeds and entries in each user's database, keeping the feed content as attached HTML documents and letting DAOS handle efficient storage. The latter route would let you take advantage of the Domino Data Service and read marks (which DAS conveniently supports).

Updating the feeds would be rough for a single server, but the job could be farmed out across the servers in a cluster. You could write agents that would determine the server they're on and, based on, say, its name, pick a slice of the feeds to check, so if you have 10 servers, each would update 10% of the feeds.

I'm sure there'd be other roadblocks during actual implementation (it IS Domino, after all), but I think that'd be basically all you'd need on the server side. The client would be a little tougher, since you couldn't just use NSF and selective replication, but that wouldn't be terribly difficult to handle.

It's too bad it's likely not profitable, between licensing, hosting, and bandwidth costs - it'd be a fun project to try out.

That Counts as Progress

Tue Nov 22 16:28:21 EST 2011

Tags: domino xpages

A while back, I described the problem I'm having in my guild-forums XPages app, which is that it very easily gets its environment out of whack, to the point where changing any design or data note from outside the XPages environment caused an "X is incompatible with X" ClassCastException. This improved gradually over time. At some point in 8.5.2, I started being able to modify data documents again without the problem. When 8.5.3 came out, it improved again: I can now replicate over changes from my development server to the production one without having to bounce HTTP on the production one. At that point, it became much less of a hassle - sure, I still had to re-save a Java class file every time I modified an XPage, but that was only in development, so I could deal.

However, I think I've found the proper solution. In 8.5.3 (I think), IBM added a custom editor for the xsp.properties file (which you can reach using the "Package Explorer" Eclipse view, in whatever.nsf/WebContent/WEB-INF). Many of the options are duplicates of what you see in the normal "Application Properties" editor (since they edit the same properties), but there are some nifty additional goodies. I won't go into them all, but I urge you to take a look. The important one here is in the "Timeouts" section of the "General" tab: "Refresh entire application when design changes". That sounded perfect and, lo and behold, it seems to do what I want: once I turned that on, re-saving an XPage stopped causing the ClassCastException. I haven't given it a proper testing, so it might not be everything I'm looking for, but I'm thrilled at the initial results. Having to re-save a Java file for each change wasn't a HUGE problem, but it was a niggling one, and not having to jump through the hoop helps make the whole environment seem less rickety.

That's Weird

Wed Nov 09 09:35:21 EST 2011

Tags: xpages domino

Yesterday, I started working on a small sidebar widget app using an XPage, after finding out that XPages can now (as of 8.5.3) be used in the Notes sidebar in the same way that Forms could before. It's quite a simple page, very Twitter-like: one text input field and then a list of posts. However, even though it's very simple, I ran into two annoying bugs quickly.

The first of them is a Schrödinbug. I set up some code in the "onClientLoad" event to start a setInterval to do a partialRefreshGet on the list of posts, so it would keep itself up to date:

<xp:eventHandler event="onClientLoad" submit="false">
    <xp:this.script><![CDATA[
        setInterval(function() {
            XSP.partialRefreshGet("#{id:messagesPanel}")
        }, 2000)
    ]]></xp:this.script>
</xp:eventHandler>

Simple, right? But I started getting errors in the status bar, along the lines of somethingorother not being defined. That's... odd. So I swapped over to Safari, figuring it'd be some XPiNC-specific thing, but I started getting similar (albeit more specific) errors there, along the lines of:

TypeError: 'undefined' is not an object (evaluating '_166.formId')

Huh. So I tracked it down and the problem was in the code called by partialRefreshGet, where "_166" is the name given to the second (optional) parameter. It makes sense when seeing it - though the code is supposed to fail over when it doesn't work, I can see why the browser would throw up its hands when you try to get a property of an un-provided parameter. But this has worked before! Just looking around on the web, you run into plenty of examples that leave out the second parameter. But sure enough, when I changed the line to XSP.partialRefreshGet("#{id:messagesPanel}", {}), it started working great. I can only think that this is either some 8.5.3-specific bug or some weird thing I managed to do in my code... but there's not even really enough code to mess up.

The second bug is still annoying, and it's somewhere in between a bug and a security feature of Firefox/Gecko. Basically, while you can call the .click() method on a button in client-side JavaScript in Gecko, it doesn't behave exactly like clicking on the button. Specifically, I have a CSS-hidden button that executes the actual action of creating the message document from the text you type in, and I want it to happen when you hit enter. The button itself works great - if I have it show up, I can type, click, and it executes the action and partial-refreshes the list of posts. However, if I hit enter, which uses the .click() method, I get an error about not being able to refresh that part of the page, but then it forces a full-page refresh and still works. So it's KIND OF clicking it, but not quite.

I'm not sure what to do about this one. I looked up a couple things online about trying to emulate the actual click event, but with no better results. I could do a REST service on the page, but I don't want to have to roll out the Extension Library to everyone in the company. Maybe I'll look into sending along the data in a XSP.partialRefreshPost call and eliminate the button entirely. We shall see.

Trying To Escape From Designer

Tue Nov 08 11:14:30 EST 2011

Though I've grown to more or less enjoy writing Domino applications, I always feel like this is in spite of the tools, namely Designer. In a lot of ways, Designer has improved significantly over the last couple versions: as long as you ignore the speed, the Eclipse-ified Java and LotusScript editors are miles ahead of the antiquated previous ones, and it's handy to be able to switch to the Java perspective. However, so much else makes it a drag:

  • It's a Windows app. I use a Mac, so there's simply a big hurdle to using Designer. Parallels has smoothed the process greatly - running Notes in Coherence mode means I can use my preferred OS while still getting work done. However, it means that it's a big to-do whenever I want to make any tiny change in the code. For day-to-day work stuff, I can get a lot done in the Mac Notes client, since I put in a lot of work to make managing client web sites doable without going to the design side, and the Mac client still lets you edit views and agents. However, there's no XPages editor, so I can't use that for serious work.
  • It has a mind of its own. For some reason, a new clean install of Designer I made the other day got it into its head that opening any database should involve recompiling every XPage and Java class. Sometimes, it turns off "Build Automatically" for no reason, and then turning that back on also requires a full recompilation. Sometimes, it just holds up all user actions while it does... something for five minutes. What's it doing? Beats me.
  • How many times have I seen this window? A billion?
    Removing a database from the project list has about a 30% chance of causing a crash and quitting Designer has about a 50% chance. Sometimes, saving a form will do it. Sometimes, walking away from the computer to get a drink is enough. And every time it crashes, I have to dismiss the dialogs and check the task manager to get rid of any residual processes, such as an instance of nsd pegging a processor, then relaunch Notes and get back to the environment I had, which is not a speedy process.
  • I'm not that crazy about Eclipse. DDE is better than previous Designers, yes, but Eclipse is still a giant beast with the same sense of style and simplicity as the monstrous Java language that spawned it. On the plus side, the blue look that IBM came up with is actually rather attractive compared to the standard Eclipse UI, but little quality-of-life things are a drag. For example, how do you specify how many spaces a tab should take up visually? It's in a couple places in the preferences and you have to set them all, including buried inside a weird sub-preferences dialog for messing with your Java formatter. And how about changing your code syntax coloring? You have to go to a dozen places, one for each syntax type. Compare to a text editor like TextMate, where you can swap between packaged color schemes with a drop-down.

That's enough for the rant. So what is there to do about it? I give this problem a thought from time to time, and I don't think there's really a great option, but there are some places to start. The real key to any alternative scheme is DXL - using that, you can (more or less) view and modify design elements freely. I've toyed with this notion before - maybe a web UI that lets me pick the DB and design element I want so I can tweak the DXL manually for when I want to make a quick change but don't already have Windows or Designer open. It would mostly work, but it would take a lot of work to make it practical.

There's a big sticking point, too: XPages. If you export an XPage to DXL, you can see that the exporter basically punts on it - it's exported as a generic "note" type with Base64-encoded binary fields. I haven't run the field data through a decoder, so maybe one of them is the XML "source" of the page, and maybe it could be made to work, but that just raises more questions. Would that require updating the other binary fields in some way? Would importing it with a DXL importer cause it to generate and compile the Java representations? What about generic Java classes? Would those work with DXL and would they be compiled?

I'm starting to get an idea of what would be ideal and almost practical, though. One could write a WebDAV server (possibly with a complex servlet, a small web server to the side, or other trickery) that represents the design elements as editable files in a folder structure similar to the Java perspective view of the database. Traditional files and image resources could be edited as-is (which I think the built-in WebDAV server does), but design elements could be represented just as DXL and then re-imported when modified. Even if it doesn't support XPages, such a scheme might have a lot of promise and wouldn't be reliant on Designer or any other IDE.

If I ever get brave enough to delve into WebDAV or frustrated enough with Designer, I might just look into it myself.

So Here's Why I Hate LotusScript

Sat Oct 29 13:44:43 EDT 2011

Tags: domino

For the most part, writing agents in LotusScript is the best way to go (at least when it can't be done in formula language), mostly for smoothness of interaction with the built-in libraries (no .recycle()) and because it's less prone to running into memory problems when other agents go wonky than Java agents are. That doesn't mean I have to like it, though.

If there's one thing that drives me nuts about LotusScript more than any other aspect, it's its handling of arrays. This came to the fore with one of my recent projects, which involves spitting out the contents of a view, which in turn entails lots of use of entry.ColumnValues. My first, quick-and-dirty draft ran into an early performance issue, which is that every call of .ColumnValues on a NotesViewEntry seems to be as expensive as the first, meaning that the class doesn't do any internal caching and has to re-fetch it every time. Ugh, fine - I'll just assign the value to a new variable at the start:

Dim colValues as Variant
colValues = entry.ColumnValues

Not too bad - two extra lines at the start of a loop is a small price to pay for a significant speed improvement. Unfortunately, it doesn't work - it throws a Type Mismatch error at runtime. After doing a TypeName() on entry.ColumnValues, I saw that it considers it "VARIANT( )", an array of Variants, which makes sense. It's a bit weird, since I've stored arrays in Variants before, but whatever - with a quick code adjustment, we're off to the races:

Dim colValues() as Variant
colValues = entry.ColumnValues

Great! Now hit Ctrl-S and... compiler error. You're not allowed to assign to the entirety of an array like that. Argh! So I guess I'm going to have to make my new array manually and loop through the original to copy each entry over individually. If you're familiar with another scripting language, that probably sounds like a simple task, but LotusScript's array annoyances continue. Because this is like BASIC, you can't just do "colValues.add(something)" - you have to ReDim the array to the right size. Here's the code I ended up with:

Dim columnValues() As Variant
ReDim columnValues(-1 To -1)
ForAll columnValue In entry.ColumnValues
    If UBound(columnValues) = -1 Then
        ReDim columnValues(0 To 0)
    Else
        ReDim Preserve columnValues(0 To UBound(columnValues)+1)
    End If
    columnValues(UBound(columnValues)) = columnValue
End ForAll

Before you look at that "-1 to -1" crap and deem me insane, hear me out. Though the NotesViewEntry doesn't cache its property value, the ForAll loop does, meaning that, according to the profiler, ColumnValues is only called once for each view entry, which is about as efficient as it gets. All that extra crap about ReDim'ing the array over and over instead of just once is essentially "free" compared to the expense of the product object call, so it ends up being completely worth it.

Sync

Tue Oct 25 21:52:39 EDT 2011

So there's a new round of talk lately about syncing and the trouble involved, thanks to some changes in Google Reader's behavior and the desire to find a new safe haven for RSS syncing. The best example is, unsurprisingly, from Brent Simmons:

Google Reader and Mac/iOS RSS readers that sync

However, the whole time I was reading this article, my brain kept yelling at me, louder and louder as time passed:

This is Lotus Notes! The system you're describing is Lotus Notes! It does syncing and deletion stubs and read marks! IT'S LOTUS NOTES!

This kind of thing would indeed be really easy in Notes/Domino, particularly if you were actually using the Notes client (though it wouldn't be much to look at). Subsets of data, managing deleted elements, timed refreshes from the source, storing each feed entry as its own entity, and offline access that can have its changes synced back to the master are all things that Notes has done since its conception - the only problem is that it's so ugly and arcane that mass-market appeal is nigh-impossible.

Nonetheless, it got me thinking about the viability of using Domino as a syncing server for this. You wouldn't be able to use NSF files in your RSS clients, which would make the job a bit tougher, but the new "XWork" licensing model would fit into this nicely. Scalability would be a serious concern, but the simple nature of the data would keep view updates quick, and it'd just be a bit of cleverness in the database layout to direct users to the correct place. Toss a couple clustered servers in there and you should have some good load balancing, too. The Domino Data Services API might be enough to handle data access from the client, but, if it's not, a couple simple agents would do it.

I'm sort of tempted to try hashing something out.

The Domino Data Service

Wed Oct 05 10:22:32 EDT 2011

Tags: domino

Though I don't have a use for it currently, I can't help but get kind of excited about the Domino Data Services in 8.5.3 and the Extension Library. If you're writing a normal Domino application - using either legacy elements or XPages - you probably won't have terribly much use for it.

However, the really cool aspect of it is that it significantly smooths the process of using Domino as a backing data store for another front end written in PHP, Ruby, or anything else. This has always been sort of possible - you could use the Java API or a combination of ?ReadViewEntries, ?CreateDocument, and ?SaveDocument URL commands to access Domino data without actually being in Domino, but it wasn't exactly a smooth process. With the Data Services, now Domino is very similar to, say, CouchDB, but with reader fields and impenetrable licensing terms for non-vendors.
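
As a hedged example of how low the barrier is - server, database, and view names here are placeholders, and the /api/data/collections URL shape follows the 8.5.3 documentation:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class DasExample {
    public static void main(String[] args) throws Exception {
        // Fetch the entries of a view as JSON, no Notes libraries required
        URL url = new URL("http://server.example.com/forums.nsf/api/data/collections/name/Forums");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");
        BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // a JSON array of view entries
            }
        } finally {
            reader.close();
        }
    }
}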

One nice little side effect of the fact that it uses the HTTP stack is that DSAPI modules work. When I was testing around, I was able to get a list of available forums as Anonymous, resulting in only the two visible ones. When I included my user authentication filter cookie, it started showing me the rest of the forums that the user could access, exactly like you'd want. While you could presumably just pass the username and password in each request using normal HTTP authentication, it's pretty cool that any alternate methods like this work as well.

It's all pretty exciting, and I'm itching to find a use for it.

My next two favorite features of 8.5.3

Wed Oct 05 08:07:30 EDT 2011

Tags: domino

Since 8.5.3 has been out for about 24 hours now, I naturally rolled it out on both my development and production servers. Fortunately, my irresponsibility was greatly rewarded: the largest problems I've had so far were a change in the way Java classes are accessed in JavaScript (I could no longer just call methods on non-public classes defined in the same file as a public one, so I had to split them out into their own files... which is what you're supposed to do anyway) and a minor CSS change where the top borders of my Dojo tabbed tables are now back to a light grey color, so I need to find the new CSS rule to change them back to brown.

I'm rather happy so far about two minor things in particular: CSS/JavaScript aggregation and OSGi auto-loading.

The CSS/JavaScript aggregation is almost a freebie: once you have Designer 8.5.3, you get a new option in the database properties sheet to turn this on and then 8.5.3 servers will happily obey it. I immediately noticed a decent load-speed increase of about 1/3 and one non-technical guild member said that the odd problem of dog-slowness that they (and not other people) had has been fixed. My favorite aspect of this is that it's a smart feature: due to the way you define Dojo modules in an XPage as <xp:dojoModule/> elements and not just text on a page like normal HTML, Domino knows ahead of time what you're using and can thus feel free to optimize it in transparent ways like this. It feels good seeing the same code go from one form to a more efficient one just by virtue of having done the "right thing" when writing it to begin with.

The OSGi plugin auto-loading was mentioned briefly on Dec's Dom Blog back in June and I hadn't seen much reference to it since, so I was afraid it wouldn't necessarily make it in. Fortunately, it has: I created a new Update Site with the template from an 8.5.3 server, imported the latest Extension Library, ran the "Sign All" agent, and set the notes.ini parameter to look there. And lo and behold: it properly loads up the extensions from the NSF, so I was finally able to delete the filesystem versions that were previously necessary. This makes managing the Extension Library much smoother and it's one less potential gotcha when I upgrade my dev server first and then want to deploy it - since the Update Site has a replica on both servers, the upgrade is handled with the normal replication process and I don't have to remember to copy any files over from server to server. And the fact that it's an NSF theoretically gives you all kinds of other, more complicated deployment options, like server-based Reader field control or partial replication to control which servers see which plugins if you're so inclined. Very cool - I approve, IBM.

My Favorite Minor Feature in 8.5.3

Wed Sep 28 18:23:55 EDT 2011

Tags: domino

I don't have access to the beta versions of new Notes/Domino versions, so I haven't been able to tinker around with all the cool new things that are slated to appear in 8.5.3, but that hasn't stopped me from getting pretty excited about some of them. The big-ticket items are clear: the new Domino Data Services and relational database access (through the Extension Library, which may as well be standard) could make practical very different ways of using Domino either as a standalone data source with a different front end or as a standalone front end with a relational data source. While both types of setups are theoretically possible now, they're such a hassle that they're not worth it - but, with 8.5.3, they're almost top-tier choices for system architectures.

However, since I don't plan to rewrite my entire architecture, those new features probably won't affect my day-to-day life for a while yet. What has me most excited in a practical sense is much more lowly: being able to do a full-text search with sorted results. I've found that one of the big bottlenecks in my guild-forums app is the sheer size of the views, particularly the Posts one. I used to stuff pretty much all of the summary data into the view, but then I found that removing the non-sorted columns sped up responsiveness dramatically. That whetted my appetite for clearing out unneeded sorted columns - since each sorted column contains a full view index, having even a handful can increase the total index size dramatically. Since it appears that FTSearch's performance is almost (but not quite) as good as getting all entries by a key, I'll be able to remove the rarely-used sorted columns, speeding up all the common operations in exchange for a very minor hit in the rare case. Plus, it'll just feel good to put Domino's searching capabilities to proper use.
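
The call itself is about as simple as it gets - a sketch with placeholder view and query names; the point is that FTSearchSorted filters the view while preserving its sort order, where plain FTSearch re-sorts by relevance:

import lotus.domino.Database;
import lotus.domino.NotesException;
import lotus.domino.View;
import lotus.domino.ViewEntryCollection;

public class SortedSearch {
    public static ViewEntryCollection searchPosts(Database database, String query) throws NotesException {
        View view = database.getView("Posts"); // placeholder view name
        // Restricts the view to matching entries, keeping the view's own order
        view.FTSearchSorted(query);
        return view.getAllEntries();
    }
}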

Getting Domino LDAP to Work for Authentication

Thu Aug 25 16:23:59 EDT 2011

Tags: domino

Recently, I've been toying with the idea of setting up a couple extra services on my guild's Domino server - voice chat, non-Sametime chat, what have you - and I figured I should give a shot to LDAP authentication with the Domino directory for these. However, this is something I've never done, and the documentation is a little rough - most LDAP info on the web refers to non-Domino servers, while most Domino-specific information was written in about 1996.

I'll leave out the depressing details of the various things I tried in my quest to get LDAP working as an authentication mechanism for my Linux server (as a relatively simple test case) and point you instead to this dead-but-still-archived page: http://web.archive.org/web/20040614140723/http://www.dominux.co.uk/ldap.html. The key information on that page is the list of fields that you have to add to your user documents to use them for this purpose. During my harried testing, all /var/log/auth.log was telling me was "Invalid credentials", but what it really meant was that the user account it found didn't have the right attributes. Thanks, Linux!

Freaking Reader Fields

Thu Jul 14 15:02:42 EDT 2011

Tags: domino

Periodically (read: every day), I wonder about switching from Domino to a SQL server for data storage on my guild web site. The primary reason for this is speed: I'm doing primarily relational things, so I've had to wrangle Domino quite a bit to do this with any amount of speed and code cleanliness. Additionally, while most of my documents are entirely distinct from each other, I had to make concessions here and there, such as storing the latest Post date in Topic documents so I can sort them that way, and, each time I have to do that, there's another little bit of code maintenance and clustering-unsafety.

However, my ideas always come to a screeching halt when I remember Reader fields. They're simply too good, and the replacements I've found on the open source SQL databases have been, to put it kindly, lacking in comparison. They generally involve having some access level field or, best case, a multi-value field of names that are allowed to see the document, and then making sure that all of your queries or views honor that. Every method has some severe downside, ranging from inflexibility (access level) to nightmarish piles of code everywhere (multi-value name/group/role fields). Everywhere I accessed the database, I'd have to worry about security and document access, bloating up the code and just asking for data-leak bugs.

Domino, for all of its faults, makes this something you just don't have to worry about. If you have a Reader field, you can toss names, groups, and roles in there with impunity, and the server will handle the rest like you'd want. You don't have to do your own directory lookups, security checks, or nested queries. If the current user isn't on the list, the document may as well not exist. Even if the user had the UNID of the document and designer access to the database, it'd be beyond their reach. This is enormously comforting. And even though it's just a guild web site and not a giant corporate database, I'd still rather deal with a bit of tricky code for performance than the headaches and drama involved with people seeing what they're not allowed to see.

So, until I either get entirely fed up with Designer or I find an equivalent to Reader fields in a free SQL server, I'll be sticking with Domino.

Starting work on a DSAPI filter for Domino 8.5.2 on 64-bit Ubuntu

Thu Apr 14 16:33:28 EDT 2011

Tags: domino

For my main project, which contains a forum, one of the problems I ran into was Domino's session handling. Namely, it's designed such that HTTP user sessions last for the duration of the browser session and time out after 30 minutes. That's fine for, say, a corporate app, where you don't want a client logged in indefinitely. However, having to re-log-in to a forum every time you visit it would be a hassle.

The session timeout is easy to fix - you can just up the timeout period in the web site config. The cookie took a little more trickery, but wasn't too bad either. I set up some code in the beforePageLoad event to look for a DomAuthSessId cookie and, if present, create a persistent cookie. The server can't tell the difference between the session and persistent cookies, so this works.
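
The gist of that beforePageLoad code, written out here as Java rather than the SSJS I actually used (the 30-day lifetime is an arbitrary choice):

import javax.faces.context.FacesContext;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AuthCookiePersister {
    // If Domino issued a browser-session DomAuthSessId cookie, re-send it
    // with an expiration date so the browser keeps it across restarts
    public void persistAuthCookie() {
        FacesContext ctx = FacesContext.getCurrentInstance();
        HttpServletRequest req = (HttpServletRequest) ctx.getExternalContext().getRequest();
        HttpServletResponse res = (HttpServletResponse) ctx.getExternalContext().getResponse();

        Cookie[] cookies = req.getCookies();
        if (cookies == null) { return; }
        for (Cookie cookie : cookies) {
            if ("DomAuthSessId".equals(cookie.getName())) {
                Cookie persistent = new Cookie("DomAuthSessId", cookie.getValue());
                persistent.setPath("/");
                persistent.setMaxAge(60 * 60 * 24 * 30); // ~30 days, in seconds
                res.addCookie(persistent);
                break;
            }
        }
    }
}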

However, it's sort of ugly. While I absolutely don't want to have people re-log-in every time they restart their browser, I also don't really want tons of session-scoped managed beans floating around for months, and this kind of trick defeats any use of the Internet Users list in Domino Administrator (a minor quibble, but a quibble nonetheless). Ideally, I'd be able to automatically log people in on subsequent visits, but not have to actually maintain server sessions for them. From all I've read, this calls for a DSAPI filter. Since that involves C/C++, there's bound to be a lot of setup and tribulation.

First off, I had to grab the Notes/Domino C API toolkit. Fortunately, Google led me directly to the IBM download page and, after re-confirming that I don't want them to send me email, I was given a download link for a quaintly compress'ed file and some installation instructions. I dropped the libraries into /opt/ibm/lotus/notesapi and set up a symlink at /opt/lotus/notesapi just in case. The samples in the toolkit do indeed include a DSAPI filter, and I found another one doing something very much like what I want here:

http://www-10.lotus.com/ldd/46dom.nsf/55c38d716d632d9b8525689b005ba1c0/dd8cc7c9ad12887a85256bca003476bf?OpenDocument

To keep things simple, I started with that code and commented/deleted everything that actually does something - I just wanted the simplest possible filter to see if it works at all. All it does is announce that it's alive and then proceed to not handle any requests:

dsapilogin.c

Now came the issues. First off, the DSAPI example in the toolkit only came with a handful of Makefiles, and none for Linux. The Solaris one was close, so I borrowed and modified it with some tips from http://www-10.lotus.com/ldd/nd6forum.nsf/55c38d716d632d9b8525689b005ba1c0/fbd4bb196ce4146685256df10038a0b9?OpenDocument. The most vital point: since I'm running in a 64-bit environment but using a 32-bit Domino server, I had to go out of my way to compile a 32-bit library, which is done by adding "-m32" to the CCOPTS line (and possibly the LIBS line as well). The Makefile I ended up using is:

Makefile

Along the same lines as the "-m32" flag, I had to install some additional development libraries to support 32-bit compiling. In my travails, I ended up installing all sorts of apt packages before it worked, so it's possible that only the last one (or some other non-C++ library) is actually necessary:

g++
lib32stdc++6
libc6-dev-i386
g++-multilib

Unfortunately, even once the library builds properly, it's tough to get it to load into Domino, and Domino isn't very helpful about why: all it says is "Failed to load DSAPI filter:" and then the file name. It gives the same error if you point it to a non-existent file, so there's no way of knowing WHY the load failed. It's all trial and error (unless you actually know what you're doing, presumably).

Once all that was in line, though, I could build the library with make, copy it to /opt/ibm/lotus/notes/latest/linux, add "libdsapilogin.so" to the Configuration tab of my Web Site document in the Directory, and restart HTTP. When HTTP is loading, it will print out "HTTP Server: DSAPI DSAPIDLL Loaded successfully" (or whatever message you put in there), so you know it's working.

Now comes the task of actually writing the filter. Considering that I haven't written any C or C++ since college, I expect lots of fun with buffer overflows and mysterious crashes. Exciting!

Installing Domino 8.5.2FP2 on an Ubuntu 10.10 server

Tue Apr 05 19:19:24 EDT 2011

Tags: domino

Because I am insane, I decided to install Domino on my newly-minted 768MB slice from Slicehost. That's just above the 512MB RAM minimum that Domino demands and half the recommended 1.5GB, so we'll see how it goes.

The Linux installation I went with for my slice was Ubuntu 10.10 64-bit, which firmly puts it into the realm of unsupported, as far as Domino is concerned. Fortunately, three pages I found made the process relatively painless:

  1. http://www.collaborationmatters.com/blog/cmblog.nsf/dx/installing-lotus-domino-8.5.2-on-ubuntu-server?opendocument&comments
  2. http://www.danilodellaquila.com/blog/how-to-install-lotus-domino-8.5-on-ubuntu-part-ii
  3. http://stackoverflow.com/questions/3747789/how-to-install-sun-jdk-on-ubuntu-10-10-maverick-meerkat

The first wrinkle you run across is that Domino adores Java, but stock Linux installs do not. Fortunately, link #3 there explains how to cleanly install Java on your server. Namely, add this line to /etc/apt/sources.list:

deb http://archive.canonical.com/ubuntu maverick partner

Once you do that, you can run apt-get install sun-java6-jre, wince at the huge number of dependencies it requires, and let it do its thing.

After that, you're almost ready to start installing. If you're running on a 64-bit installation, you'll need the standard 32-bit libraries, since I don't think there's yet a 64-bit Domino server for any-old-Linux. That one's easy, though: apt-get install ia32-libs and you're all set.

The Domino installation proper was relatively straightforward. I created a "notes" user and associated group beforehand, created some directories owned by that user for the program files and data, and went through the command-line wizard via ./install -console. For the final step, I told it to expect a remote installation and fired up the Remote Server Setup tool on Windows, which worked flawlessly.

The next step was the FP2 installer. As is the case with the Windows version, the installer for the fix packs is totally different from the normal installer for some reason. It initially complained about the size of my Terminal window and, rather than trying to convince it that my GUI terminal was plenty good enough for its installer, I took its advice and did export LOTUS_NOROWCOLCHECK=1. After that, it complained about needing to know where the data directory was, but provided a similar instruction for setting that variable. Once that was in order, the installation went smoothly.

The next tough part was getting Domino to run at startup. I've dealt with various *nixes and their service setups from time to time, but the conventions seem to change every couple of years, so I wasn't sure where to go with this. Fortunately, link #2 above did all the work for me. Since I found it long after installation, I didn't do the "customized distribution" bit, but I DID grab the Domino init script. Once I put that in place, modified the variables to point to my directories and user, and followed the other instructions, I was all set. Now I can control Domino via "service domino (start|stop|restart)" like you'd expect.

Due to the nature of Linux, the question of where to keep the program and data directories is a weird one. The defaults are something like "/opt/ibm/lotus" and "/local/notesdata", respectively, which would certainly WORK, but don't fit in with much else. I've fiddled with my placement a bit and settled on the same thing for the former, but "/var/lib/domino/data" for the latter. That allowed me to set up "/var/lib/domino/daos" and "/var/lib/domino/logdir" (for transaction logging) and keep things relatively clean. One thing to note: if you switch around the directories after the fact, make sure to change both the init script and the notes.ini in your program directory to reflect the new locations.

All told, it seems to be working pretty well. When you're using it like a normal Notes server, there's not much distinction other than the lack of OLE. It has all the same bugs (like the aggravating XPages Java classloader "X is not compatible with X" errors I run into constantly) and same features, which is really the idea.

Pretty URLs

Mon Mar 21 20:09:36 EDT 2011

Tags: domino

Way back when I wrote my own blog back-end, I went out of my way to make decent-looking URLs that didn't betray the use of PHP for the code. It's not that using PHP was inherently bad, but the ".php" in the URLs was ugly and would have made it tough to move to any other back-ends. It's just one of those good-idea cleanliness things. Back then, I did it with Apache MultiViews and just had names like "archive.php", so I could make URLs like "/archive/2002/12" for that month's posts.

Since I've started using Domino, however, it's gotten a bit tougher. The parts of the URL within a Notes database can actually be pretty great - something like "/People/Foo+Fooson" is about as good as you'd want. However, it's the database path that kills me. You can set up URL substitution rules, but that's a weird combination of setting up a Directory Web Rule plus making sure that every place you make a URL in your code uses that name. For something like a view column, that'd mean either hard-coding your server-specific setting or sucking it up and using @WebDbName. For the most part, I've given up and gone with the latter, for the sake of portability.
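
For reference, @WebDbName's behavior is simple enough to reproduce from Java too - a sketch, with the "Posts" view as a made-up example: flip backslashes to forward slashes and escape spaces so the file path is URL-safe.

import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.NotesException;

public class WebUrls {
    // Equivalent of "/" + @WebDbName + ...: convert the OS file path into
    // a URL-safe database path, then point into a view by UNID
    public String postUrl(Database database, Document doc) throws NotesException {
        String webDbName = database.getFilePath().replace('\\', '/').replace(" ", "+");
        return "/" + webDbName + "/Posts/" + doc.getUniversalID() + "?OpenDocument";
    }
}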

XPages bring good news and bad news.

The good news: your application almost never needs to care about the database file path, ".nsf" extensions, or anything of the like. There may be edge cases where you do, but for the most part you can just start your URLs with "/" and the database path is filled in for you (at the cost of starting non-DB paths with "/.ibmxspres/domino").

The bad news: even though the database path reference is now handled by the runtime and not the programmer's code, I don't know of a way to override the behavior. I'm sure it's there SOMEWHERE, buried in the hordes of Java classes and method calls spawned for each page request, but I don't see it. What I'd really want is to tell the XPages runtime that, while "/wow/forums.nsf" is the "right" way to reference the database, I'd rather it just start with something like "/forums". It seems like something that must be handled in the xsp.properties file, but I haven't found anything about it yet. Admittedly, it's tough to search for - most pages with the keywords I want are talking about @WebDbName replacements in SSJS and how to reference external Domino resources. If it's not present already, maybe it's something they'll add in a future point release. One can hope.

Tivoli

Fri Feb 13 00:40:04 EST 2009

Tags: domino

My company is currently dealing with a client that wants us to look into Single-Sign-On options to integrate with their portal. Since this client has a lot of money, I've been doing just that lately. The keywords are straightforward: they want single-sign-on using SAML. Gotcha!

Step one: search for the keywords. There are a couple promising links:

The first one is an overview article that has two followups, but the only one that mentioned SAML is the first, and only in passing. Right. The second one is a notes.net forum posting asking if it's possible to use SAML to authenticate with a Domino server. No replies.

My further searching led me to Tivoli Federated Identity Manager. I've known of Tivoli primarily as that "other" group of files when I go to download something on IBM's site, part of the sea of giant "IBM Application Services for Application Server Version 6.1, for IBM Application Services 6.1 Enterprise for e-business for Windows XP, Windows 2003 Multilingual (1 of 3)" names.

But the page I found, part of a help DB for the program, contained a kernel of promise: Domino is listed as one of the usable directory stores. Now, I'm not sure if that means this thing will allow you to sign INTO Domino, or if it will just let you sign into OTHER stuff using names FROM Domino, but it's worth investigating, right?

So skip ahead a while and I'm looking at an installer for Tivoli Access Manager. Well, I THINK I'm looking at an installer. I'm also looking at a bunch of other potential installers:

Installers?

So I take a stab at it. "install_ammgr.exe" must stand for "Install Access", uh, "Mmanager", right? Double-click and... nothing. I try some of the others. Double-click and... nothing.

So I put it aside for now. I find some other downloads, such as one that seems to be what I was originally looking for, the Federated Identity Manager. I had a horrifying time with this, including a foray into WebSphere, before I decided to come back and take a whack at Access Manager again. I found the problem this time! You see, since IBM decided that useful file names are for wusses, their downloads are named things like "C1AV9ML", and I, as I always do with Domino, renamed it something useful, "Tivoli Access Manager Base - C1AV9ML", and extracted it into a folder of the same name.

If you're familiar with IBM, or if you have arrived at the present day via a time machine from circa 1990, you may see the problem immediately: the spaces. I mean, sure, Windows has had spaces-in-the-filename support since at least 1995, so not accounting for them was an amateur mistake fourteen years ago, but here we are. I renamed the folder, clearing out spaces, and voila! The installer launched!

It was very promising, too, asking me what directory I wanted, what the Domino server name was, and everything. But when I installed it, it failed, telling me only that it failed and that I should probably check a log file (which I guess it was unable to do itself). When I did, I noticed this suspicious text:

HPDHZ0021E   This file could not be found:  M:\Desktop\IBM\SAML\TAM-Base-C1AV9ML\windows\TivSecUtl\Disk

Oh god. What is in the "TivSecUtl" folder, exactly? Surprise surprise:

Disk Images

Excuse me? There's a folder with a space in its name, inside the zip file that IBM itself made, that IBM's own installer simply can't use because they don't know how to escape spaces? Let me reiterate:

IBM's installer can't work out of the box.

There's already the whole issue that they use these horrible Java-based installers for their "Enterprise" apps. It's odd enough that Lotus Notes uses a platform-native installer while Lotus Domino uses something that looks like it was mugged by X11. But when the JVM initialization takes up as much time as the entire rest of the installation, and you have to sit through it half a dozen times because you keep having to make more changes to the installer script, it's like IBM is punching you in the face.

The only theory I can come up with is that IBM doesn't actually want you to use their software. What other explanation is there for an installer package that simply doesn't work? It's not an OS version issue or a weirdo setup issue; it just doesn't work.

This is even beneath the callous explanation that they do it so they can charge for support contracts. The RIGHT way to lock customers into painful support is to at least make the program INSTALL and provide a basic level of functionality that is ALMOST what people want, but then make getting from there to the goal like wading through an ocean of steel briars. Like Notes, for example. At least that shows a basic level of respect, rather than just letting the programmer zip up whatever in-development folder he had and upload it to the software catalog.

Bah!

It makes all the stupid crap like mysterious line breaks in rich text on the web seem almost user-friendly.