The Loose Roadmap for XPages Jakarta EE Support

Thu May 04 10:29:44 EDT 2023

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

At Engage, HCL officially announced Java 17 in Domino 14 (I'm sure they announced other things too, but I have my priorities). This will allow me to do a lot in pretty much all of my projects, but it's particularly pertinent to XPages JEE.

Currently, the project generally targets Jakarta EE 9, which came out in late 2020 and was "just" a switch from javax.* to jakarta.*, with no official new features. However, Jakarta EE 10 came out a year ago - in addition to bringing a raft of new features, it also bumped the minimum Java version to Java 11, pushing it outside of Domino's realm. Accordingly, I've had to hold off on a lot of major- and minor-version bumps in the XPages JEE project as new releases started being compiled for Java 11. Once V14 is out, though, I'll be able to move to the current JEE platform... at least until JEE 11 comes out next year and requires Java 21, anyway.

So I've been working on how I'm going to approach this, and what I'm thinking is that I'll do it in two phases: first, a final 2.x release that provides Java 17/Domino 14 compatibility for existing components, and then a new 3.x breaking-changes release to bring in Jakarta EE 10 components.

The Final 2.x Release

I currently have this penciled in for the next release, 2.12.0, but that may change if I decide I want to get a real 2.12.0 release out before Domino 14 is at least in stable beta form. Let's call it "2.99.0" for now.

The idea here will be that I'll want to make sure all existing code in NSFs continues to work unchanged: upgrade your server to V14, install 2.99.0, and your apps keep working. In theory, this shouldn't be too complex. There's some shimming needed for Weld (the CDI implementation) to account for changes from Project Jigsaw in Java 9 and later, and there might be some stuff around AccessController, but in general I expect it'll just be some tweaks here and there. Time will tell, of course.

Once that's out, I plan to not look back (unless there's demand, I suppose). The switch to Java 17 is a huge deal, and I don't think it'll be worth spending much more time with Java 8 once it's no longer required. The 2.x branch is already, I feel, in a pretty good place, so I'll feel comfortable having a stable final version.

The Breaking 3.0 Release

Then, the plan will be to start down the path of 3.x with breaking changes - not everything, but some. For one, JEE 10 has a handful of backwards-incompatible changes. Those are mostly for legacy true-JEE code, though, and the main one that XPages JEE code will likely want to be aware of is the switch of XML namespaces to shorter representations. That will affect JSP and JSF code, but the old URIs (the jcp.org ones) will continue to work, at least for a while.

Most of the breaking changes will probably happen internally. I've talked for a long while now about my desire to do some reorganization of the project. The big one is wrangling the proliferation of Eclipse Features and XPages Libraries. Anyone who has installed the project in Designer is well aware of just how many times you have to click "yes, I want to install the thing I'm installing", and that alone is enough to warrant a reorganization. Beyond that, though, I've had to take care to try to make it so that the individual components don't depend on each other unnecessarily. There's a certain amount of good discipline that provides, but it eventually wears a bit thin.

I'm not quite sure what form the consolidation will take, but it'll probably be something like three features: "core", "extended", and "MicroProfile". "Core" would probably roughly map to the actual Jakarta Core Profile, plus things that I find essentially obligatory like Bean Validation. "Extended" would be all the things like JSP and JSF, the "leaves" on the dependency tree: they depend on core features, but nothing depends on them. Then "MicroProfile" would be, well, MicroProfile features. The only thing still giving me pause is that there's not much of a case for not installing all of them all the time anyway - if you don't want to use, say, JSF, you don't have to. Additionally, it's not like Domino is a svelte cloud-native mini server meant to be deployed a thousand times in a cluster, so having the extra bundles sitting there isn't really onerous. We'll see. I hem and haw a lot on this, but eventually I'll have to make a decision.

Regardless of what form that takes, I expect that the changes to in-NSF code will be either minimal or none - for users of the project, it'll mostly be a matter of making sure to fully uninstall the old plugins before an upgrade and then tweaking Xsp Properties to select whatever the new form of the XPages Libraries ends up being.

Side Note: Jakarta NoSQL and Data

One interesting aspect of this move will be the path Jakarta NoSQL has been on. Though I've included it in the XPages JEE project for a little while now (and continue to heavily expand on it), it's always been technically a beta release. It's clearly proven itself stable even in its beta form, but it's going through a shift in the run-up to Jakarta EE 11. Specifically, the higher abstraction levels - the Repository interface and friends - are moving to a new project, Jakarta Data. The idea is that Jakarta Data will be able to sit on top of Jakarta NoSQL and other storage types, namely JPA.

It's going to be very neat, but it's created a bit of a pickle for me. Since it's targeting Jakarta EE 11, the releases of it and NoSQL are going to require at least Java 21, and there's no word on when Domino will support that.

One option would be to stick with what I have now for the foreseeable future: a mildly-forked version of Jakarta NoSQL 1.0.0-b4. It's a workhorse and has been doing a good job, and it'd mean that app code wouldn't have to change. I'm not crazy about this for obvious reasons: I don't want to have one component stuck way behind while all the other parts get a nice jump forward, even if it works.

The other main option I'm considering is sliding forward to another beta release and landing there until Java 21 support shows up. The current development versions of the Data spec and JNoSQL with its implementation target Java 17, so I'll probably go with whatever the last beta is before the official switch to 21. Though it's tough to predict the future, that will probably end up being API-wise similar enough to the release forms of them that future jumps won't be difficult. We shall see, anyway.

Timeline

Anyway, the timeline for this is a little vague, and will mostly depend on when the Domino 14 betas come out and whether they contain anything show-stopping. My hope is to be able to have something that passes all the test cases ASAP with betas and then to have it continue to be stable through to the actual release.

I'm looking forward to leaving Java 8 behind for good, though, that much is certain.

Integrating External Java Apps With Keep And Keycloak

Wed May 03 09:43:59 EDT 2023

Last year, I wrote a post describing some early work on a Jakarta NoSQL driver for the Domino REST API (hereafter referred to as "Keep" to avoid ambiguity with the various other Domino REST APIs).

I've since picked back up on the project and similar aspects, and I figured it'd be useful to return to provide some more details.

OpenAPI

For starters, I mentioned in passing my configuration of the delightful openapi-generator tool, but didn't actually detail my configuration. It's changed a little since my first work, since I found where you can specify using the jakarta.* namespace.

I use a config.yaml file like:

additionalProperties:
  library: microprofile
  dateLibrary: java8
  apiPackage: org.openntf.xsp.nosql.communication.driver.keep.client.api
  invokerPackage: org.openntf.xsp.nosql.communication.driver.keep.client
  modelPackage: org.openntf.xsp.nosql.communication.driver.keep.client.model
  useBeanValidation: true
  useRuntimeException: true
  openApiNullable: false
  microprofileRestClientVersion: "3.0"
  useJakartaEe: true

That will generate client interfaces that will mostly compile in a plain Jakarta EE project. The files have some references to an implementation-specific MIME class to work around JAX-RS's historical lack of one, but those imports can be safely deleted.
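
For reference, the generated output comes from the normal CLI invocation pointed at that file - the spec and output names here are hypothetical:

openapi-generator generate -g java -c config.yaml -i keep-openapi.json -o generated-client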

Keycloak/OIDC in Keep

I also mentioned only in passing that you could configure Keep to trust the Keycloak server's public keys with a link to the documentation. Things on the Keep side have expanded since then, and you can now configure Keep to reference Keycloak using Vert.x's internal OIDC support, and also skip the step of creating special fields in your person docs to house the Notes-format DN. For example, in a Keep JSON config file:

{
	"oidc": {
		"my-keycloak": {
			"active": true,
			"providerUrl": "https://my.keycloak.server/auth/realms/myrealm",
			"clientId": "keep-app",
			"clientSecret": "<my secret>",
			"userIdentifierInLdapFormat": true
		}
	}
}

That will cause Keep to fetch much of the configuration information from the well-known endpoint Keycloak exposes, and also to map names from Keycloak's LDAP-style format of "cn=Foo Fooson,o=SomeOrg" to the Domino-style "CN=Foo Fooson/O=SomeOrg". This is useful even when using Domino as the Keycloak LDAP backend, since Domino does the translation in the other direction first.

Keycloak/OIDC in Jakarta EE

In the original post in the series, talking about configuring app authentication for the AppDev Pack, I talked about Open Liberty's openidConnectClient feature, which lets you configure OIDC at the server level. That's neat, and I remain partial to putting authentication at the server level when it makes sense, but it's no longer the only game in town. The version of Jakarta Security that comes with Jakarta EE 10 supports OIDC inside the app in a neat way, and so I've switched to using that.

To do that, you make a CDI bean that defines your OIDC configuration - this can actually be on a class that does other things as well, but I like putting it in its own place:

package config;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.security.enterprise.authentication.mechanism.http.OpenIdAuthenticationMechanismDefinition;
import jakarta.security.enterprise.authentication.mechanism.http.openid.ClaimsDefinition;

@ApplicationScoped
@OpenIdAuthenticationMechanismDefinition(
	clientId="${oidc.clientId}",
	clientSecret="${oidc.clientSecret}",
	redirectURI="${baseURL}/app/",
	providerURI="${oidc.domain}",
	claimsDefinition = @ClaimsDefinition(
		callerGroupsClaim = "groups"
	)
)
public class AppSecurity {
}

There are a couple EL references here. baseURL is provided for "free" by the framework, allowing you to say "wherever the app is hosted" without having to hard-code it. oidc here refers to a bean I made that's annotated with @Named("oidc") and has getters like getClientId() and so forth. You can make a class like that to pull in your OIDC config and secrets from outside, such as a resource file, environment variables, or so forth. providerURI should be the same base URL as Keep uses above.
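
For illustration, a minimal version of that bean - with hypothetical environment-variable names standing in for wherever your real configuration lives - could look like:

package config;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Named;

@ApplicationScoped
@Named("oidc")
public class OidcConfig {
	// These variable names are made up for the example - read from
	// a resource file, the environment, or wherever suits your deployment
	public String getClientId() {
		return System.getenv("OIDC_CLIENT_ID");
	}

	public String getClientSecret() {
		return System.getenv("OIDC_CLIENT_SECRET");
	}

	public String getDomain() {
		return System.getenv("OIDC_DOMAIN");
	}
}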

Once you do that, you can start putting @RolesAllowed annotations on resources you want protected. So far, I've been using @RolesAllowed("users"), since my Keycloak puts all authenticated users in that group, but you could mix it up with "admin" or other meaningful roles per endpoint. For example, inside a JAX-RS class:

@Path("superSecure")
@GET
@Produces(MediaType.TEXT_PLAIN)
@RolesAllowed("users")
public String getSuperSecure() {
	return "You're allowed in!";
}

When accessing that endpoint, the app will redirect the user to Keycloak (or your OIDC provider) automatically if they're not already logged in.

Accessing the Token

In my previous posts, I mentioned that I was able to access the OIDC token that the server used by setting accessTokenInLtpaCookie in the Liberty config, and then getting oidc_access_token from the Servlet request object's attributes, and that that only showed up on requests after the first.

The good news is that, with the latest Jakarta Security, there's a standardized way to do this. In a CDI bean, you can inject an OpenIdContext object to get the current user's token:

package bean;

import jakarta.enterprise.context.RequestScoped;
import jakarta.inject.Inject;
import jakarta.security.enterprise.identitystore.openid.OpenIdContext;

@RequestScoped
public class OidcContextBean {
  
	@Inject
	private OpenIdContext context;
  
	public String getToken() {
		// Note: if you don't restrict everything in your app, do a null check here
		return context.getAccessToken().getToken();
	}
}

There are other methods on that OpenIdContext object, providing access to specific claims and information from the token, which would be useful in other situations. Here, I only really care about the token as a string, since that's what I'll send to Keep.

With that token in hand, you can build a MicroProfile Rest Client using the generated API interfaces. For example:

public class SomeClass {
	/* snip */
	@Inject
	private OidcContextBean oidcContext;

	/* snip */

	private DataApi getDataApi() {
		return RestClientBuilder.newBuilder()
			.baseUri("http://your.keep.server:8880/api/v1/")
			.register((ClientRequestFilter) (ctx) -> {
				ctx.getHeaders().add(HttpHeaders.AUTHORIZATION, "Bearer " + oidcContext.getToken()); //$NON-NLS-1$
			})
			.build(DataApi.class);
	}
}

That will cascade the OIDC token used for your app login over to Keep, allowing your app to access data on behalf of the logged-in user smoothly.

I've been kicking the tires on some example apps and fleshing out the Jakarta NoSQL driver using this, and it's been going really smoothly so far. Eventually, my goal will be to make it so that you can take code using the JNoSQL driver for Domino inside an NSF using the XPages JEE project and move it with minimal changes over to a "normal" JEE app using Keep for access. There'll be a bit of rockiness in that the upstream JNoSQL API is changing a bit to adapt to Jakarta Data and will do so in time for JEE to require Java 21, but at least it won't be too painful a transition.

In Development: Containerized Builds in NSF ODP

Sun Apr 30 11:46:46 EDT 2023

Most of my active development happens macOS-side - I'll periodically use Designer in Windows when necessary, but otherwise I'll jump through a tremendous number of hoops to keep things in the Mac realm. The biggest example of this is the NSF ODP Tooling, born from my annoyance with syncing ODPs in Designer and expanded to add some pleasantries for working with ODPs directly in normal Eclipse.

Over the last few years, though, the process of compiling NSFs on macOS has gotten kind of... melty. Apple's progressive locking-down of traditional native loading mechanisms and the general weirdness of the Notes package and its embedded non-JDK JVM have made things get a little weird. I always end up with a configuration that can work, but it's rough going for sure.

Switching to Remote

The switch to ARM for my workspace and the lack of an ARM-native macOS Notes client threw another wrench into the works, and I decided it'd be simpler to switch to remote compilation. Remote operations were actually the first mechanism I added in, since it was a lot easier to have a pre-made Domino+OSGi environment than spinning one up locally, and I've kept things up since.

My first pass at this was to install the NSF ODP bundles on my main dev server whenever I needed them. This worked, but it was annoying: I'd frequently need to uninstall whatever other bundles I was using for normal work, install NSF ODP, do my compilation/export, and then swap back. Not the best.

Basic Container

Since I had already gotten in the habit of using a remote x64 Docker host, I decided it'd make sense to make a container specifically to handle NSF ODP operations. Since I would just be feeding it ODPs and NSFs, it could be almost entirely faceless, listening only via HTTP and using an auto-generated server ID.

The tack I took for this was to piggyback on the work I had already done to make an IT-suite container for the XPages JEE project. I start with the baseline Domino container from the community script, feed it some basic auto-configure params to relax the HTTP upload-size limits, and add a current build of the NSF ODP OSGi plugins to the Domino server via the filesystem. Leaving out the specifics of the auto-config script, the Dockerfile looks like:

FROM hclcom/domino:12.0.2

ENV LANG="en_US.UTF-8"
ENV SetupAutoConfigure="1"
ENV SetupAutoConfigureParams="/local/runner/domino-config.json"
ENV DOMINO_DOCKER_STDOUT="yes"

RUN mkdir -p /local/runner && mkdir -p /local/eclipse/eclipse/plugins

COPY --chown=notes:notes domino-config.json /local/runner/
COPY --chown=notes:notes container.link /opt/hcl/domino/notes/latest/linux/osgi/rcp/eclipse/links/container.link
COPY --chown=notes:notes staging/plugins/* /local/eclipse/eclipse/plugins/

The runner script copies the current NSF ODP build to "staging/plugins" and it all holds together nicely. Technically, I could skip the container.link bit - that's mostly an affectation because I prefer to take as light a touch as possible when modifying the Domino program directory in a container image.

Automating The Process

While this server has worked splendidly for me, it got me thinking about an idea I've been kicking around for a little while. Since the needs of NSF ODP are very predictable, there's no reason I couldn't automate the whole process in a Maven build, adding a third option beyond local and remote operations where the plugin spins up a temporary container to do the work. That would dramatically lower the requirements on the local environment, making it so that you just need a Docker-compatible environment with a Domino image.

And, as above, my experience writing integration tests with Testcontainers paid off. In fact, it paid off directly: though Testcontainers is clearly meant for testing, the work it does is exactly what I need, so I'm re-using it here. It has exactly the sort of API I want for this: I can specify that I want a container from a Dockerfile, I can add in resources from the current project and generate them on the fly, and the library's scaffolding will ensure that the container is removed when the process is complete.
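
To give a feel for the shape of that, here's a sketch with hypothetical resource names - the real version feeds in the current plugin build and the ODP to compile:

package example;

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.images.builder.ImageFromDockerfile;
import org.testcontainers.images.builder.Transferable;

public class CompilerContainerSketch {
	public static void main(String[] args) {
		// Define an image from a Dockerfile plus resources generated on the fly
		ImageFromDockerfile image = new ImageFromDockerfile()
			.withFileFromClasspath("Dockerfile", "/docker/Dockerfile")
			.withFileFromTransferable("staging/plugins.zip", Transferable.of(new byte[0]));

		try(GenericContainer<?> domino = new GenericContainer<>(image).withExposedPorts(80)) {
			domino.start();
			// ... send the ODP over HTTP and retrieve the compiled NSF ...
		} // closing the container removes it; Ryuk cleans up even after crashes
	}
}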

The path I've taken so far is to start up a true Domino server and communicate with it via HTTP, piggybacking on the existing weird little line-delimited-JSON format I made. This is working really well, and I have it successfully building my most-complex NSFs nicely. I'm not fully comfortable with the HTTP approach, though, since it requires that you can contact the Docker host on an arbitrary port. That's fine for a local Docker runtime or, in my case, a VM on the same local network, where you don't have to worry about firewalls blocking off the random port it opens. I think I could do this by executing CLI commands in the container and copying a file out, which would happen all via the Docker socket, but that'll take some work to make sure I can reliably monitor the status. I have some ideas for that, but I may just ship it using HTTP for the first version so I can have a solid baseline.

Overall, I'm pleased with the experience, and continue to be very happy with Testcontainers even when I'm using it outside its remit. My plan for the short term is to clean the experience up a bit and ship it as part of 3.11.0.

XPages JEE 2.11.0 and the Javadoc Provider

Thu Apr 20 09:47:58 EDT 2023

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

Yesterday, I put two releases up on OpenNTF, and I figure it'd be worth mentioning them here.

XPages Jakarta EE Support

The first is a new version of the XPages Jakarta EE Support project. As with the last few, this one is mostly iterative, focusing on consolidation and bug fixes, but it added a couple neat features.

The largest of those is the JPA support I blogged about the other week, where you can build on the JDBC support in XPages to add JPA entities. This is probably a limited-need thing, but it'd be pretty cool if put into practice. This will also pay off all the more down the line if I'm able to add in Jakarta Data support in future versions, which expands the Repository idiom currently in the NoSQL build I use to cover both NoSQL and RDBMS databases.

I also added the ability to specify a custom JsonbConfig object via CDI to customize the output of JSON in REST services. That is, if you have a service like this:

@GET
@Produces(MediaType.APPLICATION_JSON)
public SomeCustomObject get() {
	return findSomeObject();
}

In this case, the REST framework uses JSON-B to turn SomeCustomObject into JSON. The defaults are usually fine, but sometimes (either for personal preference or for migration needs) you'll want to customize it, particularly changing the behavior from using bean getters for properties to instead use object fields directly as Gson does.
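
As a sketch of the sort of bean involved (assuming a standard CDI producer; check the project docs for the exact expectations), switching to Gson-style field access would look something like:

package config;

import java.lang.reflect.Field;
import java.lang.reflect.Method;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.inject.Produces;
import jakarta.json.bind.JsonbConfig;
import jakarta.json.bind.config.PropertyVisibilityStrategy;

@ApplicationScoped
public class JsonConfigBean {
	@Produces
	public JsonbConfig getJsonbConfig() {
		return new JsonbConfig()
			.withPropertyVisibilityStrategy(new PropertyVisibilityStrategy() {
				@Override
				public boolean isVisible(Field field) {
					return true; // serialize object fields directly, as Gson does
				}

				@Override
				public boolean isVisible(Method method) {
					return false; // skip bean getters
				}
			});
	}
}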

I also expanded view support in NoSQL by adding a mechanism for querying views with full-text searches. This is done via the ViewQuery object that you can pass to a repository method. For example, you could have a repository like this:

public interface EmployeeRepository extends DominoRepository<Employee, String> {
	@ViewEntries("SomeView")
	Stream<Employee> listFromSomeView(Sorts sorts, ViewQuery query);
}

Then, you could perform a full-text query and retrieve only the matching entries:

Stream<Employee> result = repo.listFromSomeView(
	Sorts.sorts().asc("lastName"),
	ViewQuery.query()
		.ftSearch("Department = 'HR'", Collections.singleton(FTSearchOption.EXACT))
);

Down the line, I plan to add this capability for whole-DB queries, but (kind of counter-intuitively) that would get a bit fiddlier than doing it for views.

XPages Javadoc Provider

The second one is a new project, the XPages Javadoc Provider. This is a teeny-tiny project, though, not even containing any Java code. This is a plugin for either Designer or normal Eclipse and it provides Javadoc for some standard XPages classes - specifically, those covered in the official Javadoc for Designer and the XPages Extensibility APIs. This covers things like com.ibm.commons and the core stuff from com.ibm.xsp, but doesn't cover things like javax.faces.* or lotus.domino.

The way this works is that it uses Eclipse's Javadoc extension point to tell Designer/Eclipse that it can find Javadoc for a couple bundles via the hosted version, really just linking the IDE to the public HTML. I went this route (as opposed to embedding the Javadoc in the plugin) because the docs don't explicitly say they're redistributable, so I have to treat them as not. Interestingly, the docs are actually still hosted at public.dhe.ibm.com. If HCL publishes them on their site or makes them officially redistributable, I'll be able to update the project, but for now it's relying on nobody at IBM remembering that they're up there.
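
The contribution itself is tiny - roughly this sort of plugin.xml snippet, where the URL is a placeholder and com.ibm.xsp.core stands in for the full list of covered bundles:

<extension point="org.eclipse.pde.core.javadoc">
	<!-- "path" points to the hosted Javadoc; the URL here is illustrative only -->
	<javadoc path="https://example.com/xpages-javadoc/">
		<plugin id="com.ibm.xsp.core"/>
	</javadoc>
</extension>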

In any event, it's not a huge deal, but it's actually kind of nice. Being able to have Javadoc for things like XspLibrary removes a bit of the guesswork in using the API and makes the experience feel just a bit better.

Dipping My Feet Into DKIM and DMARC

Mon Apr 10 10:56:13 EDT 2023

Tags: admin

For a very long time now, I've had my mail set up in a grandfathered-in free Google Whatever-It's-Called-Now account, which, despite its creepiness, serves me well. It's readily supported by everything and it takes almost all of the mail-hosting hassle out of my hands.

Not all of the hassle, though, and over the past couple weeks I decided that I should look into configuring DKIM and DMARC, first for my personal mail and (if it doesn't blow up) for my company mail. I had set up SPF a couple years back, and I figured it was high time to finish the rest.

As with any admin-related post, keep in mind that I'm just tinkering with this stuff. I Am Not A Lawyer, and so forth.

The Standards

DKIM is a neat little standard. It's sort of like S/MIME's mail-signing capabilities, except less hierarchical and more commonly enforced on the server than on the client. That "sort of" does some heavy lifting, but it should suit to think of it like that. What you do is have your server generate a keypair (Google has a system for this), take the public key from that, and stick it in your DNS configuration. The sending server will then add a header to outgoing messages with a signature and a lookup key - in turn, the receiving server can choose to look up the key in the claimed DNS to verify it. If the key exists in DNS and the signature is valid, then the receiver can at least be fairly confident that the sender is who they say they are (in the sense of having control of a sending server and DNS, anyway). Since this signing is server-based, it requires a lot less setup than S/MIME or GPG for mail users, though it also doesn't confer all the benefits. Neat, though.
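
To make that concrete, the published key ends up as a TXT record at a "selector" name, something like this (with a made-up selector and the key itself elided):

selector1._domainkey.foo.com.  IN  TXT  "v=DKIM1; k=rsa; p=<base64-encoded public key>"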

DMARC is an interesting thing. It kind of sits on top of SPF and DKIM and allows an admin to define some requested handling of mail for their domain. You can explicitly state that you expect your SPF and DKIM records to be enforced and provide some guidance for recipient servers to do so. For example, you might own "foo.com" and go whole-hog: declare that your definitions are complete and that remote servers should outright reject 100% of email claiming to be from "foo.com" but either didn't come from a server enumerated in your SPF record or lack a valid DKIM signature. Most likely, at least when rolling it out, you'll start softer, maybe saying to not reject anything, or to quarantine some percentage of failing messages. It's a whole process, but it's good that gradual adoption is built in.
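
As a concrete sketch, a gentle starting policy for "foo.com" would be a TXT record along these lines, quarantining a quarter of failing messages and asking receivers to send reports:

_dmarc.foo.com.  IN  TXT  "v=DMARC1; p=quarantine; pct=25; rua=mailto:dmarc-reports@foo.com"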

Interestingly, DMARC also lets you request that servers that received mail from "you" email you summaries from time to time. These generally (always?) take the form of a ZIP attachment containing an XML file. In there, you'll get a list of servers that contacted them claiming to be you and a summary of the pass/fail state of SPF and DKIM for them. This has been useful - I found that I had to do a little tweaking to SPF for known-good servers. This is vital for a slow roll-out, since it's very difficult to be completely sure you got everything when you first start setting this stuff up, and you don't want to too-eagerly poison your outgoing mail.

Configuring

Really, configuring this stuff wasn't bad. I mostly followed Google's guides for DKIM and DMARC, which are pretty clear and give you a good plan for a slow rollout.

Though Google is my main sender, I still have some older agents that might send out mail for my old ID from time to time from Domino, so I wanted to make sure that was covered too. Fortunately, Domino supports DKIM as well. Admittedly, the process is a little more "raw" than with Google's admin site, but it's not too bad: it's not like I'm uncomfortable with a CLI-based approach, and it's in line with other recent-era security additions using the keymgmt tool, like shared DAOS encryption.

It just came down to following the instructions in HCL's docs and it worked swimmingly. If you have a document in your cred store that matches an INI-configured "domain to ID" value for outgoing mail, Domino will use it. Like how DMARC has a slow-roll-out system built in, Domino lets you choose between signing mail just when available or being harsher about it, and refusing to send out any mail it doesn't know how to sign. I'll probably switch to the second option eventually, since it sounds like a good way to ensure that your server is being a good citizen across the board.

Conclusion

In any event, this is all pretty neat. It's outside my bailiwick, but it's good to know about it, and it also helps reinforce a pub-key mental model similar to things like OIDC. It also, as always, just feels good to check a couple more boxes for being a good modern server.

Quick Tip: Stashing Log Files From Domino Testcontainers

Tue Mar 28 11:36:53 EDT 2023

Tags: docker

I've been doing a little future-proofing in the XPages Jakarta EE project lately and bumped against a common pitfall in my test setup: since I create a fresh Domino Testcontainer with each run, diagnostic information like the XPages log files are destroyed at the end of each test-suite execution.

Historically, I've combatted this manually: if I make sure to not close the container and I kill the Ryuk watcher container the framework spawns before testing is over, then the Domino container will linger around. That's fine and all, but it's obviously a bit crude. Plus, other than when I want to make subsequent HTTP calls against it, I generally want the same stuff: IBM_TECHNICAL_SUPPORT and the Equinox logs dir.

Building on a hint from a GitHub issue reply, I modified my test container to add a hook to its close event to copy the log files into the IT module's target directory.

In my DominoContainer class, which builds up the container from my settings, I added an implementation of containerIsStopping:

@SuppressWarnings("nls")
@Override
protected void containerIsStopping(InspectContainerResponse containerInfo) {
	super.containerIsStopping(containerInfo);
		
	try {
		// If we can see the target dir, copy log files
		Path target = Paths.get(".").resolve("target"); //$NON-NLS-1$ //$NON-NLS-2$
		if(Files.isDirectory(target)) {
			this.execInContainer("tar", "-czvf", "/tmp/IBM_TECHNICAL_SUPPORT.tar.gz", "/local/notesdata/IBM_TECHNICAL_SUPPORT");
			this.copyFileFromContainer("/tmp/IBM_TECHNICAL_SUPPORT.tar.gz", target.resolve("IBM_TECHNICAL_SUPPORT.tar.gz").toString());
				
			this.execInContainer("tar", "-czvf", "/tmp/workspace-logs.tar.gz", "/local/notesdata/domino/workspace/logs");
			this.copyFileFromContainer("/tmp/workspace-logs.tar.gz", target.resolve("workspace-logs.tar.gz").toString());
		}
	} catch(IOException | UnsupportedOperationException | InterruptedException e) {
		e.printStackTrace();
	}
}

This will tar/gzip up the logs en masse and drop them in my project's output:

Screenshot of the target directory with logs copied

Having this happen automatically should save me a ton of hassle in the cases where I need this, and I figured it was worth sharing in case it's useful to others.

JPA in the XPages Jakarta EE Project

Sat Mar 18 11:55:36 EDT 2023

For a little while now, I'd had an issue open to implement Jakarta Persistence (JPA) in the project.

JPA is the long-standing API for working with relational-database data in JEE and is one of the bedrocks of the platform, used by presumably most normal apps. That said, it's been a pretty low priority here, since the desire to write applications based on a SQL database but running on Domino could be charitably described as "specialized". Still, the spec has been staring me in the face, maybe it'd be useful, and I could pull a neat trick with it.

The Neat Trick

When possible, I like to make the XPages JEE project act as a friendly participant in the underlying stack, building on good use of the ComponentModule system, the existing app lifecycle, and so forth. This is another one of those areas: XPages (re-)gained support for relational data over a decade ago and I could use this.

Tucked away in the slide deck that ships with the old ExtLib is this tidbit:

Screenshot of a slide, highlighting 'Available using JNDI'

JNDI is a common, albeit creaky, mechanism used by app servers to provide resources to apps running on them. If you've done LDAP from Java, you've probably run into it via InitialContext and whatnot, but it's used for all sorts of things, DB connections included. What this meant is that I could piggyback on the existing mechanism, including its connection pooling. Given its age and lack of attention, I imagine that it's not necessarily the absolute best option, but it has the advantage of being built in to the platform, limiting the work I'd need to do and the scope of bugs I'd be responsible for.
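
As a sketch of that pattern - where "jdbc/somedb" is a hypothetical name matching whatever connection the app defines - the lookup side looks like:

package example;

import java.sql.Connection;
import java.sql.SQLException;

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class JndiLookupSketch {
	public void useConnection() throws NamingException, SQLException {
		// Look up the pooled data source registered under a JNDI name
		DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/somedb");
		try(Connection conn = ds.getConnection()) {
			// ... query away; closing hands the connection back to the pool
		}
	}
}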

Implementation

With one piece of the puzzle taken care of for me, my next step was to actually get a JPA implementation working. The big, go-to name in this area is Hibernate (which, incidentally, I remember Toby Samples getting running in XPages long ago). However, it looks like Hibernate kind of skipped over the Jakarta EE 9 target with its official releases: the 5.x series uses the javax.persistence namespace, while the 6.x series uses jakarta.persistence but requires Java 11, matching Jakarta EE 10. Until Domino updates its creaky JVM, I can't use that.

Fortunately, while I might be able to transform it, Hibernate isn't the only game in town. There's also EclipseLink, another well-established implementation that has the benefits of having an official release series targeting JEE 9 and also using a preferable license.

And actually, there's not much more to add on that front. Other than writing a library to provide it to the NSF and a resolver to account for OSGi's separation, I didn't have to write a lot of code.

Most of what I did write was the necessary code and configuration for normal JPA use. There's a persistence.xml file in the normal format (referencing the source made by the XPages JDBC config file), a model class, and then access using the normal API.
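
For a sense of what that looks like - the unit name, JNDI name, and entity class here are hypothetical - the persistence.xml is the stock format, with EclipseLink as the provider:

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="https://jakarta.ee/xml/ns/persistence" version="3.0">
	<persistence-unit name="somedb">
		<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
		<!-- Points at the data source made by the XPages JDBC config file -->
		<non-jta-data-source>java:comp/env/jdbc/somedb</non-jta-data-source>
		<class>model.Company</class>
	</persistence-unit>
</persistence>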

In a normal full app server, the container would take care of some of the dirty work done by the REST resource there, and that's something I'm considering for the future, but this will do for now.

Writing Tests

One of the neat side effects is that, when I went to write the test case for this, I got to make better use of Testcontainers. I'm a huge fan of Testcontainers and I've used it for a good while for my IT suites, but I've always lost a bit by not getting to use the scaffolding it provides for common open-source projects. Now, though, I could add a PostgreSQL container alongside the Domino one:

postgres = new PostgreSQLContainer<>("postgres:15.2")
	.withUsername("postgres")
	.withPassword("postgres")
	.withDatabaseName("jakarta")
	.withNetwork(network)
	.withNetworkAliases("postgresql");

Here, I configure a basic Postgres container, and the wrapper class provides methods to specify the extremely-secure username and password to use, as well as the default database name. Here, I pass it a network object that lets it share the same container network space as the Domino server, which will then be able to refer to it via TCP/IP as the bare name "postgresql".
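
That network object, for reference, is just Testcontainers' shared-network helper, created once and handed to both containers:

// org.testcontainers.containers.Network
Network network = Network.newNetwork();
// both the Domino and Postgres containers then join via .withNetwork(network)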

The remaining task was to write a method in the test suite to make sure the table exists. You can do this in other ways - Testcontainers lets you run init scripts via URL, for example - but for one table this suits me well. In the test class where I want to access the REST service I wrote, I made a @BeforeAll method to create the table:

@BeforeAll
public static void createTable() throws SQLException {
	PostgreSQLContainer<?> container = JakartaTestContainers.instance.postgres;
		
	try(Connection conn = container.createConnection(""); Statement stmt = conn.createStatement()) {
		stmt.executeUpdate("CREATE TABLE IF NOT EXISTS public.companies (\n"
				+ "	id BIGSERIAL PRIMARY KEY,\n"
				+ "	name character varying(255) NOT NULL\n"
				+ ");");
	}
}

Testcontainers takes care of some of the dirty work of figuring out and initializing the JDBC connection for me. That's not particularly-onerous work, but it's one of the small benefits you get when you're doing the same sort of thing other users of the tool are doing.

With that, everything went swimmingly. Domino saw the Postgres container (thanks to copying the JDBC driver to the classpath) and the JPA access worked just the same as it does in my real environment.

Like with the implementation, there's not much there beyond "yep, do the things the docs say and it works". Though there were the usual hurdles that I've gotten used to with adding things like this to Domino, this all went pleasantly smoothly. I may build on this in the future - such as the aforementioned server-managed JPA bits - but that will depend on whether I or others have need. Regardless, I'm glad it's in there.

Moving Relative Date Text Client-Side

Sun Mar 12 10:57:29 EDT 2023

One of my main goals in the slow-moving OpenNTF home-page revamp project I'm doing (which I recently moved to a public repo, by the way) is to, like on this blog, keep things extremely simple. There's almost no JavaScript - just Hotwire Turbo so far - and the UI is done with very-low-key JSP pages and tags.

The Library

This also extends to the server side, where I use the XPages Jakarta EE project as the baseline but otherwise don't yet have any other dependencies to distribute. I have had a few Java dependencies kept as JARs within the project, though, and they're always a bit annoying. The other day, when Designer died again trying to sync one of them to the ODP, I decided to try to remove PrettyTime in favor of something slimmer.

PrettyTime is a handy little library that does the job of turning a moment in time into something human-friendly relative to the current time. OpenNTF uses this for, for example, the list of recent releases, where it'll say things like "19 hours ago" or "5 months ago". Technically the same information as if it showed the raw date, but it's a little nicer. It's also svelte, with the JAR coming in at about 160k. Still, nice as it is, it's a binary blob and a third-party dependency, so it was on the chopping block.

The Strategy

My antipathy for JavaScript isn't so much an objection to the language itself as to the sort of bloated monstrosities it gives rise to. Using it in the "old" way - progressive enhancement - is great. Same goes for Web Components: I think they're the right tool a lot less frequently than a lot of JavaScript UI toolkits do, but they have their place, and this is one such place.

What I want is a way to send a "raw" ISO time value to the client and have it actually display something nice. Conveniently, just the other week, Stephan Wissel tipped me off to the existence of the Intl object in JavaScript, which handles the fiddly business of time formatting, pluralization, and other locale-related needs on behalf of the user. Using this, I could write code that translates an ISO date into something friendly without also then being on the hook for coming up with translations for any future non-English languages that the OpenNTF app may support.
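
To give a quick taste of what it handles (output shown for an English locale):

new Intl.RelativeTimeFormat("en").format(-5, "month"); // "5 months ago"
new Intl.RelativeTimeFormat("en").format(3, "day");    // "in 3 days"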

The Component

In the original form, when I was using PrettyTime, the code in JSP to emit relative times tended to look like this:

<c:out value="${temporalBean.timeAgo(release.releaseDate)}"/>

I probably could have turned that into a JSP tag like <t:timeAgo value="${release.releaseDate}"/>, but it was already svelte enough in the normal case. Either way, this already looks very close to what it would be with a web component, just happening server-side instead of on the client.

As it happens, Web Components aren't actually terribly difficult to write. A lot of JS toolkits optimize for the complex case - Shadow DOM and all that - but something like this can be readily written by hand. It can start with some normal JavaScript (since I'm writing this in Designer, I made this a File resource so that the JS editor doesn't die on modern syntax):

class TimeAgo extends HTMLElement {
	constructor() {
		super();
	}

	connectedCallback() {
		/* Do the formatting */
	}
}

customElements.define("time-ago", TimeAgo);

Put that in a .js file included in the page and then you can use <time-ago/> at will! It won't do anything yet, but it'll be legal.

While the Intl library will help with this task, it doesn't inherently have all the same functionality as PrettyTime. Fortunately, there's a pretty reasonable idiom that basically everybody who has sought to solve this problem has ended up on. Plugging that into our component, we can get this:

class TimeAgo extends HTMLElement {
	static units = {
		year: 24 * 60 * 60 * 1000 * 365,
		month: 24 * 60 * 60 * 1000 * 365 / 12,
		day: 24 * 60 * 60 * 1000,
		hour: 60 * 60 * 1000,
		minute: 60 * 1000,
		second: 1000
	};
	static relativeFormat = new Intl.RelativeTimeFormat(document.documentElement.lang);
	static dateTimeFormat = new Intl.DateTimeFormat(document.documentElement.lang, { dateStyle: 'short', timeStyle: 'short' });
	static dateFormat = new Intl.DateTimeFormat(document.documentElement.lang, { dateStyle: 'medium' });

	constructor() {
		super();
	}

	connectedCallback() {
		let date = new Date(this.getAttribute("value"));

		this.innerText = this._relativize(date);
		if (this.getAttribute("value").indexOf("T") > -1) {
			this.title = TimeAgo.dateTimeFormat.format(date);
		} else {
			this.title = TimeAgo.dateFormat.format(date)
		}
	}

	_relativize(d1) {
		var elapsed = d1 - new Date();

		for (var u in TimeAgo.units) {
			if (Math.abs(elapsed) > TimeAgo.units[u] || u == 'second') {
				return TimeAgo.relativeFormat.format(Math.round(elapsed / TimeAgo.units[u]), u)
			}
		}
		return TimeAgo.dateTimeFormat.format(d1);
	}
}

customElements.define("time-ago", TimeAgo);

The document.documentElement.lang bit there means that it'll hew to being the professed language of the page - that could also use the default language of the user, but it'd probably be disconcerting if only the times were in one's native language.

With that in place, I could replace the server-side version with the Web Component:

<time-ago value="${release.releaseDate}" />

When that loads on the page, the browser will display basically the same thing as before, but the client side is doing the lifting here.

It's not perfect yet, though. Ideally, I'd want this to extend a standard element like <time/>, so you'd be able to write something like <time is="time-ago" datetime="${release.releaseDate}">${release.releaseDate}</time> - that way, if the browser doesn't have JavaScript enabled or something goes awry, there'd at least be an ISO date visible on the page. That'd be the real progressive-enhancement path. When I tried that route, I wasn't able to properly remove the original text from the DOM like I'd like, but that's presumably something that I could do with some more investigation.

In the mean time, I'm pretty pleased with the switch. One fewer binary blob in the NSF, a bit of new knowledge of a browser technology, and the app's code remains clean and meaningful. I'll take it.

The Big Apple Silicon Switch, Followup

Thu Feb 16 13:17:49 EST 2023

Tags: docker java

Yesterday, I wrote a post detailing my recent switch to Apple Silicon, and in it I noted that the main remaining problem I was butting heads with was running Domino in Docker containers, specifically in the context of working with Java.

While my general solution of connecting to Docker running remotely is good, it's all the better to have this stuff working locally. I gave it another shot today and found that the solution was already in yesterday's post: disable the JIT.

Java uses the JIT - its just-in-time compiler - to speed up execution dynamically when running a program, and it's part of what makes Java surprisingly fast in practice. It's a whole rabbit hole of performance implications and comparisons, but it will suffice to say "JIT = good". However, as I found when running test cases that load libnotes.dylib directly, J9's JIT doesn't play well with the x64-to-ARM JIT that these Macs use, either Rosetta's or qemu's.

So, long story short, the solution was to modify my testing container to disable the JIT when in such an environment (the referenced commit has since been followed up a few times).

First off, I added a bit in my Domino One-Touch Config file to point to a Java options file in the notes.ini:

{
	/* ... */
	"notesINI": {
		/* ... */
		"JavaUserOptionsFile": "/local/JavaOptionsFile.txt"
	}
	/* ... */
}

I modified the Dockerfile to bring this in from the build environment:

# ...
COPY --chown=notes:notes staging/JavaOptionsFile.txt /local/

Finally, I modified the class I use to build the container to optionally populate this file with the flag to disable the JIT:

String arch = DockerClientFactory.instance().getInfo().getArchitecture();
if(!"x86_64".equals(arch)) {
	withFileFromTransferable("staging/JavaOptionsFile.txt", Transferable.of("-Djava.compiler=NONE"));
} else {
	withFileFromTransferable("staging/JavaOptionsFile.txt", Transferable.of(""));
}

(Transferable is a Testcontainers-ism for any available source of a file to put into the build environment. Often, this will be something from the project classpath, but it can also be arbitrary data like this.)

With that in place, the test suite works! Eventually!

The thing to know about this is that it's slow. It's going to be naturally slow due to emulation, and I imagine the lack of Java's JIT doesn't help anything. Running the XPages JEE test suite takes about 520 seconds in this setup, as compared to 180 seconds when run via my remote native VM. Such a dramatic drop in speed is enough that I'm likely to continue using the remote VM for most things, and only use local emulation for things that significantly benefit from filesystem binds. For what it's worth, it seems like multithreading is what really kills it: most tests are slower by roughly 50%, while a big multithread test balloons from 20s to 160s. That's consistent with my experience with local unit tests, where the multithread ones were the ones that crashed the process.

For the record, I found that this currently only works at all with Docker Desktop's default (qemu-based) emulation. When I switch the emulator to Rosetta, I hit a StackOverflowError from the HTTP JVM during init, without any other useful information. That's fair enough, since that emulation type is still flagged as Experimental in Docker Desktop anyway.

Anyway, it's good to know that there's a way to make it work. It's still no replacement for a native container, but at least it works at all. Similar techniques should work with other tasks, like setting the above option in the JAVA_OPTS environment variable.

The Big Apple Silicon Switch

Wed Feb 15 12:01:19 EST 2023

Tags: docker java

Almost exactly two years ago now, I wrote a post describing using Apple Silicon for work while my iMac was in the shop. That was an interesting experience, but too thorny for me to want to really stick with. Some of that was due to the limitations of the hardware - it was a pretty low-end MacBook Air - but most of that was due to the very-much-in-progress world of Java tooling for ARM Macs.

Since then, I'd been itching to find a machine to replace the iMac Pro and the recently-released Mac mini finally fit the bill. It, now dubbed Tethys, arrived on Friday, and I've been going through the process of re-carving-out my working environment, this time starting clean.

Eclipse

Compared to my first dive into ARM Macs, things have gotten a lot smoother. The biggest one for me is the stability of Java generally and Eclipse specifically. Last time, I was on the bleeding edge of that, and I was fortunate enough to at least help diagnose and report some of the things barring the port from being complete. The experience was very janky but the sheer snappiness of Eclipse on ARM really stuck with me, and it's still there in spades.

And, really, there's not even a lot of special things to know. If you download the aarch64 build of Eclipse, it just works like you'd want it to. I had to make sure to download some x64 builds of Java for when I'm running legacy native code, but that wasn't really different from my current collection of different Java versions anyway.

This is all a huge distinction from before, where I was going to weird lengths to try to run Eclipse via X11 remotely just to get something responsive.

Notes and libnotes

For my needs, Notes works fine. It's a shame that it's not native, but, as long as I run my test suites with an x64 JVM, things work about as well as they do on x64.

One immediate problem I ran into, though, was that some test cases that push multithreading would hard crash with a native error to do with the x64-to-ARM translation and JIT. Running with an OpenJ9-based JVM, I found that specifying -Djava.compiler=NONE in the launch args avoided this trouble. I'm sure that the tests run a bit slower now, but they were already emulated anyway, so it's not noticeable.

Domino and Docker

One of the sticking points with my half-transition years ago was the need to run my Domino Docker containers in emulation. Though HCL will have to port Domino to ARM eventually, they haven't yet released such a thing, and so that situation remains the same.

And, unfortunately, the segfault I encounter during my use remains. I had hoped that changes in qemu would fix it, but it appears to be unchanged. Recent versions of Docker can also take advantage of Apple's support for Rosetta in Linux, but for my needs that just creates a different segfault earlier in the process.

So, on this front, I'm doing the same thing I started doing before: running Docker on a VM on actual x64 hardware and using the DOCKER_HOST environment variable to point to it. Fortunately, I've taken steps in the intervening years to make this more practical. The main thing to work around with such a setup is the inability to use filesystem binds, and so I've changed a lot of my Dockerfiles and Testcontainers setups to instead copy more into the image at build. I still haven't solved every problem there, in particular a huge in-container build I'd like to do that points at a giant pool of dependencies that would be onerous to put into the image, but I'm working on that.

Conclusion

Overall, I'm pleased as punch with this thing. It's zippy all around, I haven't once heard the fan even when really hitting the CPU and GPU, and the state of the software ecosystem is such that I only use a very few x64-compiled processes during my daily work. I'd expected to have more of a checklist to work down to clean them all up, but it's really just the for-now-required bits. Not bad at all.