Replicating Domino to Azure Data Lake With Darwino

Mon May 10 10:06:09 EDT 2021

Tags: darwino

Though Darwino is an app-dev platform, one of its main ongoing uses has been reporting on Domino data. By default, when replicating with Domino, Darwino uses its own table layout in its backing SQL database, making use of the various vendors' native JSON capabilities to store documents in a form analogous to Domino's. Once it's there, even if you don't actually build any apps on top of it, it's immediately useful for querying at scale with various reporting tools: BIRT, Power BI, Crystal Reports, what have you. Add in some views and, as needed, extra indexes, and you have an extraordinarily-speedy way to report on the data.

Generic Replication

But one of the neat other uses comes in with that "by default" above. Darwino's replication engine is designed to be thoroughly generic, and that's how it replicates with Domino at all: since an adapter only needs to implement a handful of classes to expose the data in a Darwino-friendly way, the source or target's actual storage mechanism doesn't matter overmuch. Darwino's own storage is just "first among equals" as far as replication is concerned, and the protocol it uses to replicate with Domino is the same as it uses to replicate among multiple Darwino app servers.

From time to time, we get a call to make use of this adaptability to target a different backend. In this case, a customer wanted to be able to push data to Azure Data Lake, which is a large-scale filesystem-like storage service Microsoft offers. The idea is that you get your gobs of data into Data Lake one way or another, and then they have a suite of tools to let you report on and analyze it to your heart's content. It's the sort of thing that lets businesspeople make charts to point to during meetings, so that's nice.

This customer had already been using Azure's SQL Server services for "normal" Darwino replication from Domino, but wanted to avoid doing something like having a script to transform the SQL data into Data-Lake-friendly formats after the fact. So that's where the custom adapter came in.

The Implementation

Since the requirement here is just going one way from Domino to Data Lake, that took a bit of the work off our plate. It wouldn't be particularly conceptually onerous to write a mechanism to go the other way - mostly, it'd be finding an efficient way to identify changed documents - but the "loose" filesystem concept of Data Lake would make identifying changes and dealing with arbitrary user-modified data weird.

The only real requirements for a Darwino replication target are that you can represent the data in JSON in some way and that you are able to identify changes for delta replication. That latter one is technically a soft requirement, since an adapter could in theory re-replicate the entire thing every time, but it's worlds better to be able to do continuous small replications rather than nightly or weekly data dumps. In Darwino's own storage, this is handled by indexed columns to find changes, while with Domino it uses normal NSFSearch-type capabilities to find modifications since a specific date.
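
To make that concrete, the obligations of a one-way target boil down to something like this hypothetical interface - illustrative names only, and emphatically not Darwino's actual adapter API:

import java.io.IOException;
import java.time.Instant;

import jakarta.json.JsonObject;

public interface OneWayReplicationTarget {
    // Persist a document's converted JSON content, keyed by its UNID
    void saveDocument(String unid, JsonObject doc) throws IOException;

    // Remove a document that was deleted on the source side
    void deleteDocument(String unid) throws IOException;

    // Track the last successful replication, to allow for delta replication
    Instant getLastReplicated() throws IOException;
    void setLastReplicated(Instant lastReplicated) throws IOException;
}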

Data Lake is a little "metadata light" in this way, since it represents itself primarily as a filesystem, but the lack of need to replicate changes back meant I didn't have to worry about searching around for an efficient call. I settled on a basic layout:

Data Lake layout

Within the directory for NSFs, there are a few entities:

  • darwino.json keeps track of the last time the target was replicated to, so I can pick up on that for delta replication in the future
  • docs houses the documents themselves, named like "(unid).json" and containing the converted JSON content of the Domino documents
  • attachments has subfolders named for the UNIDs of the documents referenced, containing the attachments and embedded images, with names prefixed by the fields they're from (see the sketch below)
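
For a hypothetical sales.nsf, that shakes out to a tree along these lines (file and field names illustrative):

sales.nsf/
    darwino.json
    docs/
        <unid>.json
        ...
    attachments/
        <unid>/
            Body-report.pdf
            ...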

Back on the Domino side, I can set this up in the Sync Admin database the same way I do for traditional Darwino targets, where the Data Lake extension is picked up:

Data Lake in Sync Admin

Once it's set up, I can turn on the replication schedule and let it do its thing in the background and Data Lake will stay in step with the NSFs.

Conclusion

Now, in the immediate term, this is really of interest just to the client who wanted it, but I felt like it was a neat-enough trick to warrant an overview. It's also satisfying seeing the layers working together: the side that reads the Domino data needed no changes to work with this entirely-new target, and similarly the core replication engine needed no tweaks even though it's pointing at a fully-custom destination.

Implementing Custom Token-Based Auth on Liberty With Domino

Sat Apr 24 12:31:00 EDT 2021

This weekend, I decided to embark on a small personal side project: implementing an RSS sync server I can use with NetNewsWire. It's the delightful sort of side project where the stakes are low and so I feel no pressure to actually complete it (I already have what I want with iCloud-based syncing), but it's a great learning exercise.

Fair warning: this post is essentially a travelogue of not-currently-public code for an incomplete side app of mine, and not necessarily useful as a tutorial. I may make a proper example project out of these ideas one day, but for the moment I'm just excited about how smoothly this process has gone.

The Idea

NetNewsWire syncs with a number of services, and one of them is FreshRSS, a self-hosted sync tool that uses PHP backed by an RDBMS. The implementation doesn't matter, though: what matters is that NNW can thereby point at any server at an arbitrary URL implementing the same protocol.

As for the protocol itself, it turns out it's just the old Google Reader protocol. Like Rome, Reader rose, transformed the entire RSS ecosystem, and then crumbled, leaving its monuments across the landscape like scars. Many RSS sync services have stuck with that language ever since - it's a bit gangly, but it does the job fine, and it lowers the implementation toll on the clients.

So I figured I could find some adequate documentation and make a little webapp implementing it.

Authentication

My starting point (and all I've done so far) was to get authentication working. These servers mimic the (I assume antiquated) Google ClientLogin endpoint, where you POST "Email" and "Passwd" and get back a token in a weird little properties-ish format:

POST /accounts/ClientLogin HTTP/1.1
Content-Type: application/x-www-form-urlencoded

Email=ffooson&Passwd=secretpassword

Followed by:

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8

SID=null
LSID=null
Auth=somename/8e6845e089457af25303abc6f53356eb60bdb5f8

The format of the "Auth" token doesn't matter, I gather. I originally saw it in that "name/token" pattern, but other cases are just a token. That makes sense, since there's no need for the client to parse it - it just needs to send it back. In practice, it shouldn't have any "=" in it, since NNW parses the format expecting only one "=", but otherwise it should be up to you. Specifically, it will send it along in future requests as the Authorization header:

GET /reader/api/0/stream/items/ids?n=1000&output=json&s=user/-/state/com.google/starred HTTP/1.1
Authorization: GoogleLogin auth=somename/8e6845e089457af25303abc6f53356eb60bdb5f8

This is pretty standard stuff for any number of authentication schemes: often it'll start with "Bearer" instead of "GoogleLogin", but the idea is the same.

Implementing This

So how would one go about implementing this? Well, fortunately, the Jakarta EE spec includes a Security API that allows you to abstract the specifics of how the container authenticates a user, providing custom user identity stores and authentication mechanisms instead of or in addition to the ones provided by the container itself. This is as distinct from a container like Domino, where the HTTP stack handles authentication for all apps, and the only way to extend how that works is by writing a native library with the C-based DSAPI. Possible, but cumbersome.

Identity Store

We'll start with the identity store. Often, a container will be configured with its own concept of what the pool of users is and how they can be authenticated. On Domino, that's generally the names.nsf plus anything configured in a Directory Assistance database. On Liberty or another JEE container, that might be a static user list, an LDAP server, or any number of other options. With the Security API, you can implement your own. I've been ferrying around classes that look like this for a couple of years now:

/* snip */

import javax.security.enterprise.credential.Credential;
import javax.security.enterprise.credential.UsernamePasswordCredential;
import javax.security.enterprise.identitystore.CredentialValidationResult;
import javax.security.enterprise.identitystore.IdentityStore;

@ApplicationScoped
public class NotesDirectoryIdentityStore implements IdentityStore {
    @Inject AppConfig appConfig;

    @Override public int priority() { return 70; }
    @Override public Set<ValidationType> validationTypes() { return DEFAULT_VALIDATION_TYPES; }

    public CredentialValidationResult validate(UsernamePasswordCredential credential) {
        try {
            try(DominoClient client = DominoClientBuilder.newDominoClient().build()) {
                String dn = client.validateCredentials(appConfig.getAuthServer(), credential.getCaller(), credential.getPasswordAsString());
                return new CredentialValidationResult(null, dn, dn, dn, getGroups(dn));
            }
        } catch (NameNotFoundException e) {
            return CredentialValidationResult.NOT_VALIDATED_RESULT;
        } catch (AuthenticationException | AuthenticationNotSupportedException e) {
            return CredentialValidationResult.INVALID_RESULT;
        }
    }

    @Override
    public Set<String> getCallerGroups(CredentialValidationResult validationResult) {
        String dn = validationResult.getCallerDn();
        return getGroups(dn);
    }

    /* snip */
}

There's a lot going on here. To start with, the Security API goes hand-in-hand with CDI. That @ApplicationScoped annotation on the class means that this IdentityStore is an app-wide bean - Liberty picks up on that and registers it as a provider for authentication. The AppConfig is another CDI bean, this one housing the Domino server I want to authenticate against if not the local runtime (handy for development).

The IdentityStore interface definition does a little magic for identifying how to authenticate. The way it works is that the system uses objects that implement Credential, an extremely-generic interface to represent any sort of credential. When the default implementation is called, it looks through your implementation class for any methods that can handle the specific credential class that came in. You can see above that validate(UsernamePasswordCredential credential) isn't tagged with @Override - that's because it's not implementing an existing method. Instead, the interface's default validate implementation looks for other methods named validate that accept the incoming credential class. UsernamePasswordCredential is one of the few stock implementations that come with the API and is how the container will likely ask for authentication if using e.g. HTTP Basic auth.

Here, I use some Domino API to check the username+password combination against the Domino directory and inform the caller whether the credentials match and, if so, what the user's distinguished name and group memberships are (with some implementation removed for clarity).

Token Authentication

That's all well and good, and will allow a user to log into the app with HTTP Basic authentication with a Domino username and password, but I'd also like the aforementioned GoogleLogin tokens to count as "real" users in the system.

To start doing that, I created a JAX-RS resource for the expected login URL:

@Path("accounts")
public class AccountsResource {
    @Inject TokenBean tokens;
    @Inject IdentityStore identityStore;

    @PermitAll
    @Path("ClientLogin")
    @POST
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    @Produces(MediaType.TEXT_HTML)
    public String post(@FormParam("Email") @NotEmpty String email, @FormParam("Passwd") String password) {
        CredentialValidationResult result = identityStore.validate(new UsernamePasswordCredential(email, password));
        switch(result.getStatus()) {
        case VALID:
            Token token = tokens.createToken(result.getCallerDn());
            String mangledDn = result.getCallerDn().replace('=', '_').replace('/', '_');
            return MessageFormat.format("SID=null\nLSID=null\nAuth={0}\n", mangledDn + "/" + token.token()); //$NON-NLS-1$ //$NON-NLS-2$
        default:
            // TODO find a better exception
            throw new RuntimeException("Invalid credentials");
        }
    }

}

Here, I make use of the IdentityStore implementation above to check the incoming username/password pair. Since I can @Inject it based on just the interface, the fact that it's authenticating against Domino isn't relevant, and this class can remain blissfully unaware of the actual user directory. All it needs to know is whether the credentials are good. In any event, if they are, it returns the weird little format in the response and the RSS client can then use it in the future.

The TokenBean class there is another custom CDI bean, and its job is to create and look up tokens in the storage NSF. The pertinent part is:

@ApplicationScoped
public class TokenBean {
    @Inject @AdminUser
    Database adminDatabase;

    public Token createToken(String userName) {
        Token token = new Token(UUID.randomUUID().toString().replace("-", ""), userName); //$NON-NLS-1$ //$NON-NLS-2$
        adminDatabase.createDocument()
            .replaceItemValue("Form", "Token") //$NON-NLS-1$ //$NON-NLS-2$
            .replaceItemValue("Token", token.token()) //$NON-NLS-1$
            .replaceItemValue("User", token.user()) //$NON-NLS-1$
            .save();
        return token;
    }

    /* snip */
}

Nothing too special there: it just creates a random token string value and saves it in a document. The token could be anything; I could have easily gone with the document's UNID, since it's basically the same sort of value.

I'll save the @Inject @AdminUser bit for another day, since we're already far enough into the CDI weeds here. Suffice it to say, it injects a Database object for the backing data DB for the designated admin user - basically, like opening the current DB with sessionAsSigner in XPages. The @AdminUser is a custom annotation in the app to convey this meaning.

Okay, so great, now we have a way for a client to log in with a username and password and get a token to then use in the future. That leaves the next step: having the app accept the token as an equivalent authentication for the user.

Intercepting the incoming request and analyzing the token is done via another Jakarta Security API interface: HttpAuthenticationMechanism. Creating a bean of this type allows you to look at an incoming request, see if it's part of your custom authentication, and handle it any way you want. In mine, I look for the "GoogleLogin" authorization header:

@ApplicationScoped
public class TokenAuthentication implements HttpAuthenticationMechanism {
    @Inject IdentityStore identityStore;
    
    @Override
    public AuthenticationStatus validateRequest(HttpServletRequest request, HttpServletResponse response,
            HttpMessageContext httpMessageContext) throws AuthenticationException {
        
        String authHeader = request.getHeader("Authorization"); //$NON-NLS-1$
        if(StringUtil.isNotEmpty(authHeader) && authHeader.startsWith(GoogleAccountTokenHandler.AUTH_PREFIX)) {
            CredentialValidationResult result = identityStore.validate(new GoogleAccountTokenHeaderCredential(authHeader));
            switch(result.getStatus()) {
            case VALID:
                httpMessageContext.notifyContainerAboutLogin(result);
                return AuthenticationStatus.SUCCESS;
            default:
                return AuthenticationStatus.SEND_FAILURE;
            }
        }
        
        return AuthenticationStatus.NOT_DONE;
    }

}

Here, I look for the "Authorization" header and, if it starts with "GoogleLogin auth=", then I parse it for the token, create an instance of an app-custom GoogleAccountTokenHeaderCredential object (implementing Credential) and ask the app's IdentityStore to authorize it.

Returning to the IdentityStore implementation, that meant adding another validate override:

@ApplicationScoped
public class NotesDirectoryIdentityStore implements IdentityStore {
    /* snip */

    public CredentialValidationResult validate(GoogleAccountTokenHeaderCredential credential) {
        try {
            try(DominoClient client = DominoClientBuilder.newDominoClient().build()) {
                String dn = client.validateCredentialsWithToken(appConfig.getAuthServer(), credential.headerValue());
                return new CredentialValidationResult(null, dn, dn, dn, getGroups(dn));
            }
        } catch (NameNotFoundException e) {
            return CredentialValidationResult.NOT_VALIDATED_RESULT;
        } catch (AuthenticationException | AuthenticationNotSupportedException e) {
            return CredentialValidationResult.INVALID_RESULT;
        }
    }
}

This one looks similar to the UsernamePasswordCredential one above, but takes instances of my custom Credential class - automatically picked up by the default implementation. I decided to be a little extra-fancy here: the particular Domino API in question supports custom token-based authentication to look up a distinguished name, and I made use of that here. That takes us one level deeper:

public class GoogleAccountTokenHandler implements CredentialValidationTokenHandler<String> {
    public static final String AUTH_PREFIX = "GoogleLogin auth="; //$NON-NLS-1$
    
    @Override
    public boolean canProcess(Object token) {
        if(token instanceof String authHeader) {
            return authHeader.startsWith(AUTH_PREFIX);
        }
        return false;
    }

    @Override
    public String getUserDn(String token, String serverName) throws NameNotFoundException, AuthenticationException, AuthenticationNotSupportedException {
        String userTokenPair = token.substring(AUTH_PREFIX.length());
        int slashIndex = userTokenPair.indexOf('/');
        if(slashIndex >= 0) {
            String tokenVal = userTokenPair.substring(slashIndex+1);
            Token authToken = CDI.current().select(TokenBean.class).get().getToken(tokenVal)
                .orElseThrow(() -> new AuthenticationException(MessageFormat.format("Unable to find token \"{0}\"", token)));
            return authToken.user();
        }
        throw new AuthenticationNotSupportedException("Malformed token");
    }

}

This is the Domino-specific one, inspired by the Jakarta Security API. I could also have done this lookup in the previous class, but this way allows me to reuse this same custom authentication in any API use.

Anyway, this class uses another method on TokenBean:

@ApplicationScoped
public class TokenBean {    
    @Inject @AdminUser
    Database adminDatabase;

    /* snip */

    public Optional<Token> getToken(String tokenValue) {
        return adminDatabase.openCollection("Tokens") //$NON-NLS-1$
            .orElseThrow(() -> new IllegalStateException("Unable to open view \"Tokens\""))
            .query()
            .readColumnValues()
            .selectByKey(tokenValue, true)
            .firstEntry()
            .map(entry -> new Token(entry.get("Token", String.class, ""), entry.get("User", String.class, ""))); //$NON-NLS-1$ //$NON-NLS-2$ //$NON-NLS-3$ //$NON-NLS-4$
    }
}

There, it looks up the requested token in the "Tokens" view and, if present, returns a record indicating that token and the user it was created for. The latter is then returned by the above Domino-custom GoogleAccountTokenHandler as the authoritative validated user. In turn, the JEE NotesDirectoryIdentityStore considers the credential validation successful and returns it back to the auth mechanism. Finally, the TokenAuthentication up there sees the successful validation and notifies the container about the user that the token mapped to.

Summary

So that turned into something of a long walk at the end there, but the result is really neat: as far as my app is concerned, the "GoogleLogin" tokens - as looked up in an NSF - are just as good as username/password authentication. Anything that calls httpServletRequest.getUserPrincipal() will see the username from the token, and I also use this result to spawn the Domino session object for each request.

Once all these pieces are in place, none of the rest of the app has to have any knowledge of it at all. When I implement the API to return the actual RSS feed entries, I'll be able to just use the current user, knowing that it's guaranteed to be properly handled by the rest of the system beforehand.

Bonus: Java 16

This last bit isn't really related to the above, but I just want to gush a bit about newer techs. My plan is to deploy this app using my Open Liberty Runtime, which means I can use any Open Liberty and Java version I want. Java 16 came out recently, so I figured I'd give that a shot. Though I don't think Liberty is officially supported on it yet, it's worked out just fine for my needs so far.

This lets me use the features that have come into Java in the last few years, a couple of which moved from experimental/incubating into finalized forms in 16 specifically. For example, I can use records, a specialized type of Java class intended for immutable data. Token is a perfect case for this:

public record Token(String token, String user) {
}

That's the entirety of the class. Because it's a record, it gets a constructor with those two properties, plus accessor methods named after the properties (as used in the examples above). Neat!
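
For instance, with the token value from the earlier HTTP example and a hypothetical distinguished name:

Token token = new Token("8e6845e089457af25303abc6f53356eb60bdb5f8", "CN=Foo Fooson/O=SomeOrg");
String tokenValue = token.token(); // "8e6845e089457af25303abc6f53356eb60bdb5f8"
String user = token.user();        // "CN=Foo Fooson/O=SomeOrg"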

Another handy new feature is pattern matching for instanceof. This allows you to simplify the common idiom where you check if an object is a particular type, then cast it to that type afterwards to do something. With this new syntax, you can compress that into the actual test, as seen above:

@Override
public boolean canProcess(Object token) {
    if(token instanceof String authHeader) {
        return authHeader.startsWith(AUTH_PREFIX);
    }
    return false;
}

Using this allows me to check the incoming value's type while also immediately creating a variable to treat it as such. It's essentially the same thing you could do before, but cleaner and more explicit now. There's more of this kind of thing on the way, and I'm looking forward to the future additions eagerly.

The Joyful Utility of Optionals in Java

Fri Apr 23 11:11:22 EDT 2021

Tags: java
  1. The Cleansing Flame of Null Analysis
  2. Quick Tip: JDK Null Annotations for Eclipse
  3. The Joyful Utility of Optionals in Java

A while back, I talked about how I had embraced nullness annotations in several of my projects. However, that post predated Domino's laggardly move to Java 8 and so didn't discuss one of the tools that came to Java core in that version: java.util.Optional.

The Concept

Java's Optional is a passable implementation of the Option type concept that's been floating around programming circles for a good long time. It's really come to the fore with the proliferation of large-platform languages like Swift and Kotlin that have the concept built into the syntax.

Java's implementation doesn't go that far - there's no special syntax for them, at least not yet - but the concept remains the same. The idea is that, when you embrace Optional use, your code will no longer return null, with the goal of cutting down on the pernicious NullPointerException. While you may still return an empty value, you will be doing so in a way that allows (and forces) the downstream programmer to check for that case more cleanly and adapt it into their code.

In Practice

For an example of where this sort of thing is well-suited, take a look at this snippet of code, which is likely to be pretty universal in Domino code:

View users = db.getView("Users");
Document user = users.getDocumentByKey(userName, true);
if(user != null) {
	// ...
}

Most of the time, this will run fine. However, if, say, the "Users" view is unavailable (if it was replaced in a design refresh, or another developer removed it, or it's reader-inaccessible to the current user), you'll end up with a NullPointerException in the second line. When you have the code in front of you, the problem is obvious quickly, but that will require you to crack open the app and look into what's going on before you can even start actually fixing the trouble. That's also the "good" version of the case - if you're using code that separates the #getView from the call to #getDocumentByKey with a bunch of other code, it'll be harder to track down.

Imagine instead if the Domino API used Optional, and returned an Optional<View> in #getView and similar for #getDocumentByKey. That could look more like this:

View users = db.getView("Users")
	.orElseThrow(() -> new IllegalStateException("Unable to open view \"Users\""));
users.getDocumentByKey(userName, true).ifPresent(user -> {
	// ...
});

The idea is the same, and you'll still get an exception if "Users" is unavailable, but it will be immediately obvious in the error message what it is that you need to fix.

This also forces the programmer to conceptualize that case in a way that they wouldn't necessarily have without the need to "unwrap" the Optional. Maybe it's actually okay if "Users" doesn't exist - in that case, you could just return early and not even run the risk of an exception at all. Or maybe there's a way to recover from that - maybe look up the user another way, or create the view on the fly.

Implementing It

When spreading Optional across an existing codebase or writing a new one around it, I've found that there are some important things to keep in mind.

First, since Optional is implemented as just a class and not special syntax, I've found that the best way to implement it in your code is to go all or nothing: if you decide you want to use Optional, do it everywhere. The trouble if you mix-and-match is that you'll run into some cases where you still do if(foo != null) { ... }; since an empty Optional is non-null, that habit will bite you.
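
For example, a leftover guard like this compiles without complaint but no longer protects anything:

Optional<String> name = Optional.empty();
if(name != null) { // always true: an empty Optional is still a non-null object
    String value = name.get(); // throws NoSuchElementException at runtime
}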

Usually, though, that's not too much trouble: when you start to change your code, you'll run into tons of type-related problems around code like that, so you'll be cued in to change it while you're working anyway. Just make sure to not leave yourself null-returning method traps elsewhere.

Another fun gotcha you'll hit early is that Optional.of(foo) will throw a NullPointerException if foo is null. That's the JDK being (reasonably) pedantic: if you want to wrap a potentially-null value, you have to instead do Optional.ofNullable(foo). While irksome at first, it drives home the point that one of the virtues of Optional is that it forces you, the programmer, to consider the null case much more than you did previously.
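
The distinction looks like this:

String value = null;
Optional<String> safe = Optional.ofNullable(value); // an empty Optional
Optional<String> boom = Optional.of(value);         // throws NullPointerException here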

Unwrapping

Optional also provides a number of ways to "unwrap" the value or deal with it, and it's useful to know about them for different situations.

The first one is just someOptional.get(), which will return the contained value or immediately throw a NoSuchElementException if it's empty. I've found that this is best for when you're very confident that the value is present, either because you already checked with isPresent or because it being empty is a sign that the system is so fubar already that there's no virtue in even customizing the exception.

Somewhat safer than that, though, is what I had above: someOptional.orElseThrow(() -> ...), which will either return the wrapped value or throw a customized exception if it's empty. This is ideal either for halting execution with a message for how the developer/admin can fix it or for throwing a useful exception declared in your documentation for a downstream programmer to catch.

There's also someOptional.orElse(someOtherValue). For example, take this case where you have a configuration API that returns an Optional<String> for a given lookup key:

String emailTarget = getConfig("EmailTarget").orElse("admin@company.com");

That's essentially the "get-or-default" idiom. Now, you can actually do .orElse(null) if you want, though that's often not the best idea. Still, that can be handy if you're adapting existing code that does a null check immediately already, or if you're writing to another existing API that does use null.

Optional also has a #map method, which may seem a bit weird at first, but can be useful on its own, and is particularly well-suited to use with results from the Stream API like #findFirst. It lets you transform a non-null value into something else and thereby change your Optional to then be an Optional of whatever the transformed value type is. For example:

String userName = findLoggedInUser()
	.map(UserObj::getUserName)
	.orElse("Anonymous");

In this case, findLoggedInUser returns an Optional<UserObj>. Then, the #map call gets the username if present and returns an Optional<String>, which is thereby either unwrapped or turned into "Anonymous".

Optional As A Parameter

Up until this point, I've been talking about Optional as a method return value, but what about using it as a parameter? Well, you can, but the consensus is that you probably shouldn't.

At first, I chafed against this advice - after all, Optional is finally an in-JDK way to express an optional or nullable parameter, so why not use it? The arguments about how it's less efficient for the compiler, while true, didn't sway me much - after all, compilers improve, and it's generally better to do something correct than something microscopically faster.

However, the real thing that convinced me was realizing that, if you have Optional as a method parameter, you're still going to have to null-check it anyway, since you don't control who might be calling your code. So, not only will you have to check and unwrap the Optional, you'll still have to have a null guard for the Optional itself, defeating much of the point.

I do think that there's some utility in Optional parameters in your own in-implementation code, not exposed to the outside. It can be useful to indicate that a parameter value is intended to be ignored outright, for example. As one comment on that SO thread mentions, an Optional parameter allows for three states: null, an empty value, or a present value. You could use null to indicate that you don't want that value checked at all, and then have an empty value mean something particular in the code. But that kind of fiddly hair-splitting is exactly why it should be of limited use and even then very-clearly documented for yourself.

If Java ever gets syntax sugar for Optional (say, declaring the parameter as Object? and having calls passing null auto-wrap them into Optional or something), then this could change.

Interaction With Null Analysis

Finally, I'll mention how using Optional interacts with null annotation analysis.

To begin with, if you're using Eclipse, the immediate answer will be "poorly". Because Eclipse doesn't ship with nullness hints for the core JDK, it won't know that Optional.of returns a non-null value, defeating the entire point. For that, you'll want the lastNPE.org nullness annotations, which will provide such hints. I've found that they're still not perfect here, causing Eclipse to frequently complain about someOptional.orElse(null), but the experience becomes good enough.

IntelliJ, for the record, has such hints built-in, so you don't need to worry there.

Once you have that sorted out, though, they go together really well. For example, pairing the two can help you find cases where, in your Optional translation journey, you're checking whether the Optional itself is null: with null-annotated code, the compiler can see that it will never be null as such and will tell you to change it.

So I advise pairing the two: use Optional for return values everywhere, especially if you're making an API for downstream consumption, and then pair them with null annotations to make sure your own implementation code is correct and to provide hints for opting-in users.

Goodbye, Nathan

Mon Apr 12 21:26:07 EDT 2021

It would be difficult to overstate Nathan T. Freeman's impact on the Domino community. I can only imagine he was similarly impactful in other parts of his life, but that's how I knew him.

Back when I was first getting involved in Domino, he was one of the main people to follow. I'd read blogs and comments and know that with his name came something very much worth reading. His assertiveness came across clearly, but was immediately followed by the justification.

When I started to get involved in the community, he was the one who brought me into the XPages Skype channel. Over the years, whether it'd be something I did for one of my own projects or something as part of our collaboration on the OpenNTF Domino API, there were plenty of times when I wrote something and thought "man, I bet Nathan will like this". Generally, it was followed by a - quieter - "at least, I hope so".

A couple years back, there was an opening on the OpenNTF board, and I remember that I was talking to Nathan about it at some user-group soirée, and he pitched me on the idea of running for the seat on the premise of "world domination". Though the OpenNTF server-infrastructure plans in question at the time didn't quite rise to that level, you can't fault the ambition.

He aggressively sought to be ahead of the curve, and was more often right than not. He was deep into XPages before most of us, and was out of it similarly early. It's a thoroughly-admirable trait, particularly when paired with the technical acumen to back it up.

I consider myself fortunate to have had the chance to work with him on several occasions, and it's a shame that it was only those few. The community will be lessened for his loss, but we're all a lot better for him having been here.

But there are brass tacks involved here too: his family can use whatever help you can give, and there is a GoFundMe page set up for this purpose. Please, if you can, donate something.

Using Server-Sent Events on Domino

Tue Mar 30 08:57:20 EDT 2021

Tags: jakartaee java

Though Domino's HTTP stack infamously doesn't support WebSocket, WebSocket isn't the only game in town when it comes to getting push-type information to HTTP clients. HTML5 also brought with it the less-famous Server-Sent Events standard, which is basically half of WebSocket: it allows the server to push events to the client, but it's still a one-way communication channel.

The Standard

The technique that SSE uses is almost ludicrously simple: the client makes a request and the server replies that it will provide text/event-stream content and keeps the connection open. Then, it starts emitting events delimited by blank lines:

HTTP/1.1 200 OK
Content-Type: text/event-stream;charset=UTF-8

event: timeline
data: hello

event: timeline
data: hello

Unlike WebSocket, there's no Upgrade header, no two-way communication, and thereby no special requirements on the server. It's so simple that you don't even really need a server-side library to use it, though it still helps.
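
To underline that point, a bare Servlet on a contemporary container could emit a compliant stream with nothing but the standard API - a minimal sketch, not the approach used below:

import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/events")
public class PlainSseServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/event-stream");
        resp.setCharacterEncoding("UTF-8");
        PrintWriter w = resp.getWriter();
        // Emit a couple of events, flushing so the client sees each immediately
        for(int i = 0; i < 2; i++) {
            w.print("event: timeline\n");
            w.print("data: hello\n\n");
            w.flush();
        }
    }
}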

In Practice

I've found that, though SSE is intentionally far less capable than WebSocket, it actually provides what I want in almost all cases: the client can receive messages instantaneously from the server, while the server can receive messages from the client by traditional means like POST requests. Though this is less efficient and flexible than WebSocket, it suits perfectly the needs of apps like server monitors, chat rooms, and so forth.

Using SSE on Domino

JAX-RS, the Java REST service framework, provides a mechanism for working with server-sent events pretty nicely. Baeldung, as usual, has a splendid tutorial covering the API, and a chunk of what I say here will be essentially rehashing that.

However, though Domino ships with JAX-RS by way of the ExtLib, the library only implements JAX-RS 1.x, which predates SSE support. Fortunately, newer JAX-RS implementations work pretty well on Domino, as long as you bring them in in a compatible way. In my XPages Jakarta EE Support project, I did this by way of RESTEasy, and there did the legwork to make it work in Domino's OSGi environment. For our example today, though, I'm going to skip that and build a small webapp using the com.ibm.pvc.webcontainer.application extension point. In theory, this should also work XPages-side with my project, though I haven't tested that; it might require messing with the Servlet response cache.

The Example

I've uploaded my example to GitHub, so the code is available there. I've aimed to make it pretty simple, though there's always some extra scaffolding to get this stuff working on Domino. The bulk of the "pom.xml" file is devoted to two main things: packaging an app as an OSGi bundle (with RESTEasy embedded) and generating an update site with site.xml to import into Domino.

Server Side

The real work happens in TimeStreamResource, the JAX-RS resource that manages client connections and also, in this case, happens to emit the messages as well.

This resource, when constructed, spawns two threads. The first one monitors a BlockingQueue for new messages and passes them along to the SseBroadcaster:

try {
    String message;
    while((message = messageQueue.take()) != null) {
        // The producer below may send a message before setSse is called the first time
        if(this.sseBroadcaster != null) {
            this.sseBroadcaster.broadcast(this.sse.newEvent("timeline", message)); //$NON-NLS-1$
        }
    }
} catch(InterruptedException e) {
    // Then we're shutting down
} finally {
    this.sseBroadcaster.close();
}

Here, I'm using the Sse#newEvent convenience method to send a basic text message. In practice, you'll likely want to use the builder you get from Sse#newEventBuilder to construct more-complicated events with IDs and structured data types (usually JSON).

A BlockingQueue implementation (such as LinkedBlockingDeque) is ideal for this task, as it provides a simple API to add objects to the queue and then wait for new ones to arrive.
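
In this app, that's just a shared field, matching the messageQueue used in the snippets here:

// take() blocks until a message arrives; offer() adds one without blocking
private final BlockingQueue<String> messageQueue = new LinkedBlockingDeque<>();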

The second one emits a new message every 10 seconds. This is just for the example's sake, and would normally be actually looking something up or would itself be a listener for events it would like to broadcast.

try {
    while(true) {
        String eventContent = "- At the tone, the Domino time will be " + OffsetDateTime.now();
        messageQueue.offer(eventContent);

        // Note: any sleeping should be short enough that it doesn't block HTTP restart
        TimeUnit.SECONDS.sleep(10);
    }
} catch(InterruptedException e) {
    // Then we're shutting down
}

Browsers can register as listeners just by issuing a GET request to the API endpoint:

@GET
@Produces(MediaType.SERVER_SENT_EVENTS)
public void get(@Context SseEventSink sseEventSink) {
    this.sseBroadcaster.register(sseEventSink);
}

That will register them as an available listener when broadcast events are sent out.

Additionally, to simulate something like a chat room, I added a POST endpoint to send new messages beyond the periodic ten-second broadcast:

@POST
@Produces(MediaType.TEXT_PLAIN)
public String sendMessage(String message) throws InterruptedException {
    messageQueue.offer(message);
    return "Received message";
}

That's really all there is to it as far as "business logic" goes. There's some scaffolding in the Servlet implementation to get RESTEasy working nicely and manage the ExecutorService, plus the obligatory "plugin.xml" to register the app with Domino and "web.xml" to account for Domino's old Servlet spec, but that's about it.

Client Side

On the client side, everything you need is built into every modern browser. In fact, the bulk of "index.html" is CSS and basic HTML. The JavaScript involved is blessedly slight:

function sendMessage() {
    const cmd = document.getElementById("message").value;
    document.getElementById("message").value = "";
    fetch("api/time", {
        method: "POST",
        body: cmd
    });
    return false;
}
function appendLogLine(line) {
    const output = document.getElementById("output");
    output.innerText += line + "\n";
    output.scrollTop = output.scrollHeight;
}
function subscribe() {
    const eventSource = new EventSource("api/time");
    eventSource.addEventListener("timeline",  (event) => {
        appendLogLine(event.data);
    });
    eventSource.onerror = function (err) {
        console.error("EventSource failed:", err);
    };
}

window.addEventListener("load", () => subscribe());

The EventSource object is the core of it and is a standard browser component. You give it a path to watch and then listen for events and errors. fetch is also standard and is a much-nicer API for dealing with HTTP requests. In a real app, things might get a bit more complicated if you want to pass along credentials and the like, but this is really it.

Gotchas

The biggest thing to keep in mind when working with this is that you have to be very careful to not block Domino's HTTP task from restarting. If you don't keep everything in an ExecutorService and account for InterruptedExceptions as I do here, you're highly likely to run into a situation where a thread will keep chugging along indefinitely, leading to the dreaded "waiting for session to finish" loop. The ExecutorService's shutdownNow method helps you manage this - as long as your threads have escape hatches for the InterruptedException they'll receive, you should be good.
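
A minimal sketch of that shutdown path, assuming the worker threads were submitted to an ExecutorService named exec:

exec.shutdownNow(); // delivers InterruptedException to the blocked take() and sleep() calls
try {
    if(!exec.awaitTermination(10, TimeUnit.SECONDS)) {
        System.err.println("SSE worker threads did not terminate in time");
    }
} catch(InterruptedException e) {
    Thread.currentThread().interrupt();
}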

I also, admittedly, have not yet tested this at scale. I've tried it out here and there for clients, but haven't pulled the trigger on actually shipping anything with it. It should work fine, since it's using standard JAX-RS stuff, but there's always the chance that, say, the broadcaster registry will fill up with never-ending requests and will eventually bloat up. The stack should handle that properly, but you never know.

Beyond any worries about the web container, it's also just a minefield of potential threading and duplicated-work trouble. For example, when I first wrote the example, I found that messages weren't shared, and then that the time messages could get doubled up. That's because JAX-RS, by default, creates a new instance of the resource class for each request. Moving the declaration from the Application class's getClasses() method (which creates new objects) to getSingletons() (which reuses single objects) fixed the first problem. After that, I found that the setSse method was called multiple times even for the singleton, and so I moved the thread spawning to the constructor to ensure that they're only launched once.
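
Concretely, that first fix amounts to this in the JAX-RS Application subclass (a simplified sketch; the class name is illustrative):

import java.util.Collections;
import java.util.Set;

import javax.ws.rs.core.Application;

public class TimeApplication extends Application {
    @Override
    public Set<Object> getSingletons() {
        // One shared resource instance for all requests, instead of returning
        // TimeStreamResource.class from getClasses() and getting a fresh
        // instance - and fresh threads - per request
        return Collections.singleton(new TimeStreamResource());
    }
}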

Once you have the threading sorted out, though, this ends up being a pretty-practical path to accomplishing the bulk of what you would normally do with WebSocket, even with an aging HTTP stack like Domino's.

What To Do With All This XSP Markup?

Mon Mar 29 14:51:29 EDT 2021

Tags: xpages

In some previous posts, I've started talking about some steps one can take to make a complicated XPages app more platform-independent. There's a lot to be done there, refactoring code to bridge differences between runtime environments and to lessen dependencies on XPages-specific things, but there's a huge elephant in the room: all that XSP markup.

Even if you have a cleanly-structured application where all of your logic is in Java and all of that code doesn't make expectations about the UI, there's still bound to be a big pile of XPages XML markup around, and that's not going anywhere. That's the best case, too: most XPages apps, even Java-based ones, are riddled with all sorts of expectations about the UI, from FacesContext to ExtLibUtil to the DominoDocument model layer.

This is a sticky problem, made all the more so by the fact that, although XPages is a fork of JSF underneath, the XSP layer is its own special language and isn't really how stock JSF pages were ever written.

There's no really-great answer, but I've never been one to shy away from writing a list of possibilities. These range from actual things one can do right now to hypothetical speculation about what one could build to deal with it. This all starts from the assumption that you want to do something to lessen or remove your XPages dependency. You can instead choose, I suppose, to keep chugging along with it.

Practical Steps

Throw It All Out

This scenario is pretty straightforward: dump your XPages code and never look back. While this could take the form of dumping the stack entirely, I think in practice it will generally take the form of first refactoring your logic (if you haven't already) and then exposing it with REST services. Then, you let your XPages app chug along as-is while you build a new app in whatever else you want, and then swap over when your new app is complete enough.

Throw It All Out, But Slowly

This is similar to above, but you rebuild your app piecemeal, either in place or by sending users to a different app for some parts. This is a very-practical route for large, sprawling applications, and it's what we're doing with one of my clients.

The way it's specifically taking form there is that, when it comes time to write a new module or rewrite an existing one, we build that individual component as an Angular app using REST services and served from an OSGi bundle, and then host it in an <iframe> inside the XPage. So the app continues on as it is, but every once in a while a big chunk of it is deleted and replaced. The use of an <iframe> means that the JS app doesn't have to worry about clashes with the surrounding JavaScript libraries included on the XPage, but gets to share the authentication session. Over time, the XPages app will become essentially a master of ceremonies for the individual modules, and then one day we'll probably swap out that shell too.

Run It In A Webapp

This is a path that would really best be combined with something else, and, admittedly, is essentially specific to me personally. In this case, you use the xpages-runtime project to run your XPages inside a normal WAR container on a good server, and then use that as your base of operations for rebuilding.

My instinct with this project is always to say "well, it's really just an experimental thing", but I use it as my primary means of XPages development and as part of my client's CI chain to host testing builds deployed by Jenkins. There are some minor down sides involved in that you have to really know the innards of the stack inside and out if something goes wrong, and it's also absolutely unsupported by anyone. So... your mileage may vary.

That all said, it makes transforming your XPages app into a modern Java app a dream. You get the full Maven experience for dependencies, and you can use newer technologies without the hassle inherent in trying to cram them onto Domino. And, practicality-wise, it'd really just take a small amount of "abetted but not supported" tweaks on HCL's side to make it less me-specific.

Hypothetical Projects

These hypothetical approaches are naturally on a much-larger scale, and aren't really the sort of thing that one would do to solve their dilemma for an individual project. Really, they'd be HCL-led product decisions, and I'm spitballing even more than usual here.

Transform It To JSF

So I mentioned earlier that JSF markup isn't the same as XSP. The immediate difference between the two is the starting conceit: where an XPage is a fully-composed entity starting at xp:view, JSF syntax evolved from JSP and takes an "embedded in XHTML" approach, like this "hello world":

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" 
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:f="http://java.sun.com/jsf/core"      
      xmlns:h="http://java.sun.com/jsf/html">
    
    <h:head>
        <title>JSF 2.0 Hello World</title>
    </h:head>
    <h:body>
    	<h2>JSF 2.0 Hello World Example - hello.xhtml</h2>
    	<h:form>
    	   <h:inputText value="#{helloBean.name}"></h:inputText>
    	   <h:commandButton value="Welcome Me" action="welcome"></h:commandButton>
    	</h:form>
    </h:body>
</html>

If you're starting from an equivalent XPage, it wouldn't be too difficult to get here, and you might even be able to do it with XSLT. Take the xp:view pageTitle and move it to the <title> element, swap out xp:inputText for h:inputText, and so forth, and you're good to go.

That is... not what your average XPage looks like, though, and it doesn't take long for the notion of a clean transformation to crash and burn. SSJS aside, there are all sorts of gotchas: themes, custom controls, xp:eventHandler, any component outside of the core, on-page data sources, and so forth. You'd constantly hit things that are either too different in JSF or don't have equivalents at all.

Though I'm not a JSF master, I expect that it'd be essentially impractical to transform the source fully in this way. That said, you could use it as a starting point: auto-convert what you can and leave commented-out versions of the rest as TODOs for the developer.

Write A Driver For JSF

The other route would be to essentially re-implement XSP on top of JSF. All the XSP is there to do is to describe the XPage as a tree of components, and something could certainly interpret the XML into components slightly more easily than a source translation.

Still, though, this would essentially be equivalent in effort to the "update JSF" requests that the community has been making for years. That's easy to say, but much harder to actually do. Additionally, it'd be more implementation work than the above: while components like h:inputText and xp:inputText share a common ancestor, they're not perfectly compatible, and so there'd have to be a parallel component tree in the JSF runtime.

A Mix of Both

By this, I mean that you could take the "transform the XML to normal JSF" approach as above, converting compatible components over to their stock equivalents, but then re-basing the truly-XPages-specific parts into jakarta.faces classes and including them as a component package so that they'd coexist. This is essentially the "dominoFaces" idea.

While I'm skeptical of the value that this would provide to the larger world, it would be a practical hybrid approach, limiting the amount of code that would break to the stuff that really gets into the weeds of XPages-specific assumptions.

And maybe this is how I'd do it if I was tasked with the job. This would run into more-explicable edge cases than trying to transform the source and wouldn't implicitly encourage writing more pure XSP markup like the second option would.

Transform It To Something Else

Of course, JSF isn't the only game in town, so one could hypothetically try to convert these apps to something else entirely. I'm a little skeptical of the options here, admittedly. An approach that would try to split it to be more client-side than XPages is now would essentially require running the stack on the server anyway to handle all the server-side bindings, so I'm not sure what you'd gain. Moving it to a non-JSF server-side framework would avoid some of that trouble, but I'm not sure what you'd gain that would be worth the nightmare of edge cases.

Still, I want to give the option a mention, since it wouldn't be impossible to do something very clever and functional in this way. I just have my doubts about how worth it it would be. In my mind, moving back to mainline JSF on a good app container would be simpler to do while also leaving the door fully open for working with other tools alongside it much more easily than Domino has offered to date.

The Rest of the Work

This is all musing about the task of dealing with XSP markup specifically, and presupposes that you're willing to at least rewrite a bunch of logic as REST services or (more enjoyably) move to a non-Domino app server. While I have my various projects to make this sort of thing easier, I recognize that (for some reason) there's a big difference between "Jesse said this is possible" and "my company is investing heavily into doing this". Just getting a viable, supportable deployment environment that isn't another dead end would be a project of its own.

One big chunk of the work outside of the XSP markup itself and its relation to JSF is the way that "XPages" as such really represents a whole application stack, not just a UI framework. While there is a slice of it that remains essentially a distinct UI kit, there's a tremendous amount of stuff that lives nebulously in the realm between a root web server and the application framework. The HttpService stuff that I've talked about recently is one such part, sitting below the "web container" portion but being (at this point) an XPages-specific thing. Not all of that would need to come along for the ride, but some of it would, or at least some apps would have to account for it going missing.

Anyway, it's admittedly all a big ball of wax, and no option is really perfect. Still, I think it's important to consider and, ideally, execute on something.

Domino HttpService and the NSF Router Project

Thu Mar 18 15:27:04 EDT 2021

Tags: domino java

In my last post and its predecessor, I talked about my tinkering at the XspCmdManager level of Domino's HTTP stack and then more specifically about the com.ibm.designer.runtime.domino.adapter.HttpService class.

The Stack

Now, HttpService is about as generic a name as you can get for this sort of thing, and it doesn't really tell you what it represents. You can think of Domino's HTTP stack since at least the 8.5 era as having two cooperating parts: the core native portion that handles HTTP requests in basically the same way as Domino always did, plus the Java layer as organized by XspCmdManager. The Java layer gets "right of first refusal" for any incoming request that wasn't handled by a DSAPI plugin: before routing the request to the legacy HTTP code, Domino asks XspCmdManager if it'd like to handle it, and only takes care of it at the native layer if Java says no.

XspCmdManager on its own doesn't do much. It accepts the JNI calls from the native side, but otherwise quickly passes the buck to LCDEnvironment (I assume the "LCD" here stands for "Lotus Component Designer"). LCDEnvironment, in turn, really just aggregates registered handlers and dispatches requests. It does a little work to handle exception cases more cleanly than XspCmdManager would, but it's mostly just a dispatcher.

The things that it dispatches to, though, are the HttpServices. These are registered by using the com.ibm.xsp.adapter.serviceFactory IBM Commons extension point, such as here in the plugin.xml form:

<extension point="com.ibm.commons.Extension">
  <service type="com.ibm.xsp.adapter.serviceFactory" class="org.openntf.nsfrouter.NSFRouterServiceFactory" />
</extension>

The class you register there is an implementation of IServiceFactory, which supplies zero or more HttpService implementations on request.

As a side note, I've been using this extension point for years and years, but never before to actually handle HTTP requests. It's extremely convenient in that it's something you can register that is loaded up immediately when the HTTP task starts and is notified as it's terminating, giving you a useful lifecycle without having to wait for a request to come in. I learned about it from the OpenNTF Domino API team and it's been a regular part of my toolkit since.

The HttpService

So that brings us to the HttpService implementation classes themselves. Once LCDEnvironment has gathered them all together, it asks each one in turn (via #isXspUrl) if it can handle a given URL. If any of them say that they can, then it calls the #doService method on each in turn (based on the #getPriority method's return value) until one says that it handled it.
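In code, a bare-bones service looks something like this (again a sketch from memory; I believe the method signatures are close to this, but treat the parameter lists and adapter-class imports as approximate):

import java.io.IOException;
import java.util.List;

import javax.servlet.ServletException;

import com.ibm.designer.runtime.domino.adapter.ComponentModule;
import com.ibm.designer.runtime.domino.adapter.HttpService;
import com.ibm.designer.runtime.domino.adapter.LCDEnvironment;
import com.ibm.designer.runtime.domino.bootstrap.adapter.HttpServletRequestAdapter;
import com.ibm.designer.runtime.domino.bootstrap.adapter.HttpServletResponseAdapter;
import com.ibm.designer.runtime.domino.bootstrap.adapter.HttpSessionAdapter;

public class ExampleHttpService extends HttpService {
    public ExampleHttpService(LCDEnvironment env) {
        super(env);
    }

    @Override
    public int getPriority() {
        // Orders this service relative to the others in the dispatch loop
        return 99;
    }

    @Override
    public boolean isXspUrl(String fullPath, boolean arg1) {
        // Claim any URL this service wants a shot at handling
        return fullPath != null && fullPath.startsWith("/example");
    }

    @Override
    public boolean doService(String contextPath, String path, HttpSessionAdapter httpSession,
            HttpServletRequestAdapter httpRequest, HttpServletResponseAdapter httpResponse)
            throws ServletException, IOException {
        // Return true when handled; false lets the next service (and
        // eventually nHTTP) take over
        httpResponse.setStatus(200);
        httpResponse.getWriter().write("Hello from an HttpService");
        return true;
    }

    @Override
    public void getModules(List<ComponentModule> modules) {
        // No ComponentModules to contribute
    }
}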

There are a few main HttpService implementations in action on Domino:

  • com.ibm.domino.xsp.module.nsf.NSFService, which handles in-NSF XPages and resources
  • com.ibm.domino.xsp.adapter.osgi.OSGIService, which handles OSGi-registered servlets and webapps
  • com.ibm.domino.xsp.module.nsf.StaticResourcesService, which helps serve static resources

These services also tend to go another layer deeper, passing actual requests off to ComponentModule implementations like NSFComponentModule. That's beyond the scope of what I'm talking about today, but it's interesting to see just how much the Domino stack is basically one giant webapp that contains progressively smaller bounded webapps, like a Matryoshka doll.

For those keeping track, we're about here on a typical XPages call stack:

     at com.ibm.domino.xsp.module.nsf.NSFComponentModule.doService(NSFComponentModule.java:1336)
     at com.ibm.domino.xsp.module.nsf.NSFService.doServiceInternal(NSFService.java:662)
     at com.ibm.domino.xsp.module.nsf.NSFService.doService(NSFService.java:482)
     at com.ibm.designer.runtime.domino.adapter.LCDEnvironment.doService(LCDEnvironment.java:357)
     at com.ibm.designer.runtime.domino.adapter.LCDEnvironment.service(LCDEnvironment.java:313)
     at com.ibm.domino.xsp.bridge.http.engine.XspCmdManager.service(XspCmdManager.java:272)

For our purposes this week, the #isXspUrl and #doService methods on HttpService are our stopping points.

NSF Router Service

In a Twitter conversation yesterday, Per Lausten gave me the idea of using this low level of access to implement improved in-NSF routing. That is to say, if you want "foo.nsf/some/nice/url/here" to actually load up "index.xsp?path=nice/url/here" or the like. Generally, if you want to do this, you either have to set up Web Site rules in names.nsf or settle for next-best options like "index.xsp/nice/url/here".

Since an HttpService comes in at a low-enough level to tackle this, though, it's entirely doable to improve this situation there. So, this morning, I did just that. This new project is a pretty simple one, with all of the action going on in one class.

The way it works is that it looks for a ".nsf" URL and, when it finds one, attempts to load a file or classpath resource named "nsfrouter.properties". This is a Java Properties file enumerating the regex-based routes you'd like. For example:

foo/(\\w+)=somepage.xsp?bit=bar
baz=somepage.xsp

When found, the class loads up the rules and then uses them to check incoming URLs.

The #doService method then picks up that URL, does a String#replaceAll call to map it to the target, and then redirects the browser over:

NSF Router in action

The user still ends up at the "uglier" URL, but that's the safest way to do it without breaking on-page references.
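The rule handling itself is plain java.util.regex work. Here's a self-contained approximation of the load-and-rewrite step (illustrative, not the project's literal code):

import java.io.IOException;
import java.io.StringReader;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;
import java.util.regex.Pattern;

public class RouterRuleDemo {
    public static void main(String[] args) throws IOException {
        // Stand-in for the nsfrouter.properties resource; after Java-source and
        // Properties escaping, the first key comes out as foo/(\w+)
        Properties props = new Properties();
        props.load(new StringReader("foo/(\\\\w+)=somepage.xsp?bit=bar\nbaz=somepage.xsp"));

        // Compile each key to a Pattern once, up front
        Map<Pattern, String> rules = new LinkedHashMap<>();
        for (String key : props.stringPropertyNames()) {
            rules.put(Pattern.compile(key), props.getProperty(key));
        }

        // Rewrite an incoming in-NSF path via the first matching rule
        String incoming = "foo/hello";
        for (Map.Entry<Pattern, String> rule : rules.entrySet()) {
            if (rule.getKey().matcher(incoming).matches()) {
                String target = rule.getKey().matcher(incoming).replaceAll(rule.getValue());
                System.out.println(incoming + " -> " + target); // foo/hello -> somepage.xsp?bit=bar
                // A real HttpService would then send a redirect to the target
                break;
            }
        }
    }
}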

I felt like that was a neat little exercise, and one that's not only potentially useful on its own but also serves as a good way to play around with these somewhat-lower-level Domino components.

Rapid Progress in Open-Liberty-Runtime Land

Tue Mar 16 18:07:30 EDT 2021

Tags: liberty
  1. Options for the Future of the Domino Open Liberty Runtime
  2. Next Steps With the Open Liberty Runtime
  3. Rapid Progress in Open-Liberty-Runtime Land

After my work implementing a reverse proxy the other day, my mental gears kept churning, and I've made some great progress on some new ideas and some that had been kicking around for a while.

Domino-Hosted Reverse Proxy

In my last post, I described the new auto-configuring reverse proxy I added, which uses Undertow on a separate port, supporting HTTP/2 and WebSocket. This gives you a unified layout that points to your configured webapps first and then, for all other URLs, points to Domino.

After that, though, I realized that there'd be some convenience value in doing that kind of thing in Domino's HTTP stack itself. The HttpService classes that hook into the XspCmdManager class are designed for just this sort of purpose: listen for designated URLs and handle them in a custom way. I realized that I could watch for incoming requests in the webapps' context roots and direct to them from Domino itself. So that's just what I did. When enabled, you can go to a URL for a configured webapp path (say, "/exampleapp") right on Domino's HTTP/HTTPS port like normal and it'll proxy transparently to the backing app. Better still, it picks up on the mechanisms that Liberty provides to work with X-Forwarded-* headers and $WS* headers to pass along incoming request information and authenticated-user context.
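To illustrate the handoff (not the runtime's actual code), here's roughly what the forwarded request looks like, sketched with java.net.http; the $WS* names follow WebSphere/Liberty's trusted-header convention, but verify the exact set against Liberty's documentation:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProxyHeaderSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Forward a request for /exampleapp to the backing Liberty server,
        // carrying along the original client and authenticated-user context
        HttpRequest proxied = HttpRequest.newBuilder(URI.create("http://localhost:9080/exampleapp"))
            .header("X-Forwarded-For", "203.0.113.10")   // original client IP
            .header("X-Forwarded-Proto", "https")        // original scheme
            .header("$WSRA", "203.0.113.10")             // WebSphere-style remote address
            .header("$WSRU", "CN=Jane Doe/O=SomeOrg")    // authenticated Domino user
            .build();

        HttpResponse<String> resp = client.send(proxied, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode());
    }
}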

The way I'm describing this may sound a bit dry and abstract, but I think this has a lot of potential, at least when you don't need HTTP/2. With this setup, you can attach fully-modern WAR files in an NSF, configure a server with the very latest Java server technologies and any Java version of your choosing, and have it appear like any other web app on Domino. /foo.nsf goes to your NSF, /fancyapp goes to a modern Java app. Proper webapps, no OSGi dependency nightmare, no Domino-toolchain miasma (well, less of one), deployed seamlessly via NSF - I think it's pretty neat.

Mix-and-Match Runtimes and Java Versions

Historically, the project has had a single configuration document where you specify the version of Open Liberty and your Java version and flavor of choice for all configured apps. Now, though, I've added the ability to pick those on a per-server basis. This can come in handy if you want to use Java 11 (the current LTS version) for complicated apps while trying out the just-released Java 16 for a new app.

Progress on Genericizing the Tooling

Though the project is named after Open Liberty, there's not really anything about the concept that's specific to Liberty as such. Liberty is extremely good and it's particularly well-suited to this purpose, but there's no reason I couldn't adapt this to run any app server, or really any generic process.

Actually supporting anything else is a big task - every server or task would have its own concept of what an "app" is, how configuration is done, how to monitor logs, how to identify open ports, etc. - but the first step is to at least lay the groundwork. So that I did: I've embarked on the path of separating the core runtime loop (start/stop/restart/refresh/etc.) from the specifics of Liberty.
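To give a sense of the seam I'm carving, the contract looks conceptually like this (a purely hypothetical shape, not the project's actual API):

import java.nio.file.Path;
import java.util.Collection;

// Hypothetical contract separating the generic start/stop/refresh loop
// from server-specific knowledge like app formats and log locations
public interface ServerRuntime {
    void deployApp(Path appArchive);   // e.g. a WAR file for Liberty

    void start();
    void stop();
    void restart();

    void refreshConfiguration();       // re-read config docs and apply changes

    Collection<Path> getLogFiles();    // for log monitoring
    Collection<Integer> getListeningPorts();
}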

There's still tons of work to do there, and I'm not fully convinced that it'd be worth it (since you should really be writing Java webapps anyway), but that potential future path is smoother now.

Overall

I think this is getting close to the point where it'll be a proper 3.0 release, and it's also getting to a point where the "why is this good?" pitch should be an easier sell for people who aren't already me. I still have vague plans to do a video or webcast on this, and this should make for a less-arcane time of that. So, we'll see! In the mean time, this should all make my own uses all the better.

Next Steps With the Open Liberty Runtime

Fri Mar 12 11:37:39 EST 2021

Tags: liberty
  1. Options for the Future of the Domino Open Liberty Runtime
  2. Next Steps With the Open Liberty Runtime
  3. Rapid Progress in Open-Liberty-Runtime Land

About a year and a half ago, I wrote a post musing about my options with the future of the Domino Open Liberty Runtime project. It's been serving me well - I still use it here and with a client CI setup - but it hasn't quite hit its full potential yet.

Its short-term goal was easy enough to accomplish: I wanted a good way to run modern Servlet apps using an active Domino runtime, and that works great. Its long-term goal takes more work, though: becoming the clear best way to do "web stuff" on Domino. There are a lot of definitions for what that might be, and that "on Domino" bit may not even be the way one would want to go about doing it. Still, I think there's potential there.

So, this week, I decided to go back in and see if I could spruce it up a bit.

The Core of Domino Java HTTP

This started with me musing a bit on Twitter about what the true lowest-level entrypoint in Domino's HTTP stack is, and where the border between native and Java lies. After overthinking it a bit, I found that the answer was obvious from any stack trace: XspCmdManager.

From what I gather, the native HTTP task (which is much more opaque than the Java part) loads up its JVM, uses the code in xsp.http.bootstrap.jar to initialize the OSGi environment, asks that environment for the com.ibm.domino.xsp.bridge.http bundle, and uses XspCmdManager in there to handle the layering.

That class has a couple of public methods, but two are of immediate interest: isXspUrl and service. isXspUrl is called for each incoming HTTP request. If that returns false, then nHTTP goes about its normal business like it always did; if it returns true, then nHTTP calls service with a bunch of handle parameters and lets the Java runtime take it from there.

That got me to tinkering. Since that class is in an OSGi bundle, you can readily "outrank" it by having another bundle with the same name and a higher version available. Then, since the class and its methods are just called by strings (more or less), you can have other classes with the same names and APIs in place to do whatever you want. And, such as it is, that works well: you can pretty readily inject whatever code you want into the isXspUrl and service methods and have it take over.
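The "outranking" itself is just OSGi bundle resolution: the shadowing bundle's MANIFEST.MF needs little more than the same symbolic name and a higher version (the version here is arbitrary), plus your replacement classes inside the bundle:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.ibm.domino.xsp.bridge.http
Bundle-Version: 99.0.0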

However, that doesn't actually buy you much. What I'd really want to do would be to improve on the actual HTTP server - HTTP/2 support, web sockets, all that - and the Java layer only comes into play after nHTTP has received and started interpreting the connection as an HTTP request. You're not given the raw incoming stream. Additionally, there's not actually any real need to override this low level: the HttpService classes you can register via the com.ibm.xsp.adapter.serviceFactory extension point can choose to handle any incoming URL directly at essentially the same low level as XspCmdManager.

So, while that was fun to poke around with, I don't think there's anything really to be gained there.

Reverse Proxy Improvements

So I went back to an older idea I had kicking around for spawning an all-encompassing reverse proxy. The project has had a lesser version of this for a good while, originally as a WAR file you could add to a Liberty server and then later as a lower-level Liberty feature. However, the way that worked was limited: it would allow you to proxy non-matched requests to a Liberty server to Domino, but didn't do anything to coordinate multiple servers beyond that. Additionally, being a Liberty feature, it limited my future options, such as genericizing the project to work with other app servers.

For my next swing at the problem, I went with Undertow, which is an embeddable Java web server in many ways similar to Jetty, and which is (I gather) the core HTTP part of WildFly. What made Undertow appealing to me was its modern standards compliance, its relatively-low dependency footprint, and its built-in reverse proxy handler. Additionally, since it's Java, that meant I could embed it in the running JVM without spawning yet another process, hopefully making things all the more reliable.

To go with this, the config DB sprouted some more configuration options:

Reverse proxy config

Along with configuration you explicitly set there and in the individual Liberty server configurations, I have the proxy pick up Domino connection information from names.nsf, allowing it to avoid inconvenient extra environment variables or flags.

And, so far, this has been working splendidly. Undertow's configuration is pretty straightforward, and it wasn't too bad to configure it with prefix matching for the context roots of opted-in apps.
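For the curious, the shape of that setup is roughly like this (a minimal standalone sketch against Undertow's API; the hosts, ports, and "/exampleapp" context root are placeholders):

import java.net.URI;

import io.undertow.Handlers;
import io.undertow.Undertow;
import io.undertow.server.HttpHandler;
import io.undertow.server.handlers.proxy.LoadBalancingProxyClient;
import io.undertow.server.handlers.proxy.ProxyHandler;

public class ProxySketch {
    public static void main(String[] args) throws Exception {
        // Anything not otherwise matched falls through to Domino's HTTP port
        HttpHandler dominoProxy = ProxyHandler.builder()
            .setProxyClient(new LoadBalancingProxyClient().addHost(new URI("http://localhost:80")))
            .build();

        // Opted-in Liberty apps are matched by context-root prefix
        HttpHandler libertyProxy = ProxyHandler.builder()
            .setProxyClient(new LoadBalancingProxyClient().addHost(new URI("http://localhost:9080")))
            .build();

        Undertow server = Undertow.builder()
            .addHttpListener(8080, "0.0.0.0")
            .setHandler(Handlers.path(dominoProxy)
                .addPrefixPath("/exampleapp", libertyProxy))
            .build();
        server.start();
    }
}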

The Next Overall Goal

There's more work to do, beyond just finishing the basic implementation here. I'd really like to get it to a point where you can use this to deploy (at least) WAR-based apps "to Domino" without having to think too much about it, like how you don't have to think about deploying an NSF-based app. It should be thoroughly doable to have the reverse proxy pick up its certificate chain from Domino if desired (especially with the revamped capabilities coming in V12), and some recent changes I made make app deployment noticeably smoother than previously.

Certainly, this sort of project has some inherent limitations compared to nHTTP, but this feels like it's getting a lot closer to a direct upgrade and less like a janky proof-of-concept.

Carving Out A Workspace On Apple Silicon

Wed Feb 17 11:24:19 EST 2021

Last month, I mentioned my particular computer trouble, in that my trusty iMac Pro has been afflicted by an ever-worsening fan noise problem. I'd just been toughing it out, since there's never a good time to lose your main machine for a week or two, and my traveler MacBook Escape wasn't up to the task of being a full replacement.

After about a month's delay, my fresh new M1 MacBook Air arrived a few weeks ago and I've been putting it through its paces.

The Basics

As pretty much anyone who has one of these computers has said, the performance is outstanding. For the most part, even with emulation, most of the tasks I do during the day feel the same as they did on my wildly-more-expensive iMac Pro. On top of that, the fact that this thing doesn't even have a fan is both a technical marvel and a godsend as far as ambient room noise is concerned.

For continuity's sake, I used Migration Assistant to bring over my iMac's environment, and everything there went swimmingly. The good-citizen apps I use like MarsEdit and Tower were already ported to ARM, while the laggards (unsurprisingly, the ones made by larger companies with more resources) remain Intel-only but run just fine in emulation.

Hardware

For a good while now, I've had the iMac screen flanked by a pair of similarly-sized but far-inferior Asus screens. With the iMac's lovely screen out of the setup for now, I've switched to using those two Asus screens as my primary ones, with the pretty-but-tiny laptop screen sitting beneath them. It works well enough, though I do miss the retina resolution and general brightness of the iMac.

The second external screen itself was a bit of an issue. On their own, these M1 Macs, either for good reason or to mark them as low end, support only two displays total, the laptop screen included. So I ended up ordering one of the StarTech DisplayLink adapters. I expected it to be a crappy experience overall, with noticeable lag, but it actually works much more smoothly than I'd have expected. Other than the fact that it doesn't support Night Shift and some wake-from-sleep slowness that I attribute to it, it feels just like a normally-attached monitor.

I also, in order to regain my precious Ethernet connection and (sort of) clean up the dongle situation, got one of these Anker USB-C docks. I've only had it for a day, but it seems to be working like you'd want so far. So that's nice.

Eclipse and Java

Here's where I've hit my first bout of jankiness, though it's not too surprising. In general, Eclipse and Java work just fine through emulation, and I can even keep running tests and web servers using the libnotes.dylib from the Notes client as I want.

I've found times where tests lag or fail now when they didn't before, though, and that's a little ominous. Compiling locally with NSF ODP, which spawns a sub-process that loads the Notes libraries, usually works, though now I've set up another Domino server on my network to handle that reliably.

I've also noticed some trouble in one of my Eclipse workspaces where it periodically spends a long time (10+ minutes) "Building" without explaining what exactly it's doing, and this is new behavior since the switch. I can't say what the core trouble is there. It's my largest active workspace, so it could be that file polling or other system-call-intensive work is just slower, or it could be an artifact of moving it from machine to machine. I'll probably scrap it and make a new workspace with the same projects to see if that alleviates the problem.

This all should improve in time, though, when Eclipse, AdoptOpenJDK, and HCL all release macOS ARM ports. IntelliJ has an experimental ARM port out, and I'm curious how that does its thing. I'll probably spend some time kicking the tires on that, though I still find Eclipse's UI much more conducive to the "lots of semi-related projects" working style I have. Visual Studio Code is in a similar boat, so that'll be good for the JavaScript development I do (under protest).

In the mean time, I've done some tinkering with how I could get a fully-native Eclipse environment running and showing up on my Mac, including firing up the venerable XQuartz to run Eclipse as an X client from a Linux VM in the basement. While that technically works, the experience is... well, I'll charitably call it "not Mac-like". Still, it's kind of neat and would in theory push aside any number of concerns.

Docker

Here's the real trouble I'm butting my head against. I've taken to using Docker more and more for various reasons: running app servers with a Domino runtime, running Domino outright, and (where my trouble is now) performing cross-compilation and other native-specific compilation tasks. For example, for one of my clients, I have a script that mounts the project directory to a Docker container to perform a full Maven build with NSF compilation and compile-time tests, without having to worry about the user's particular Notes or Domino installation.

However, while Docker is doing Herculean work to smooth the process, most of the work I do ends up hitting one of the crashing snags in poor qemu, which crop up particularly with Java compilation tasks. Since compiling Java is basically all I do all day, that leaves me hoping either for improvements in future versions or a Linux/aarch64 port of Domino (or at least libnotes.so).

In the mean time, I'm making use of Docker's network transparency to run Docker on an x64 VM and set DOCKER_HOST locally to point to it. For about half of what I need, this works great: I can run Domino servers and Notes-enabled webapps this way, and I just change which address I point to in order to interact with them. However, it naturally removes the possibility of connecting with the local filesystem, at least without pairing it with some file-share jankiness, so it's not a replacement all around. It also topples quickly into the bizarre inner Docker world: for example, I wanted to set up Codewind to work remotely, but the instructions I found for getting started with your own server were not helpful.

Future Use

Still, despite the warts, I'd say this laptop is performing admirably, and better than one would normally expect. Plus, it's a useful exercise in finding more ways to make my workflow less machine-specific. Though I still bristle at the thought of going full Eclipse Che and working out of a web browser, at least moving some more aspects of my workspace to float above the rough seas is just good practice.

I'll probably go back to using the iMac Pro as my main machine once I get it fixed, even if only for the display, but this humble, low-end M1 has planted its flag more firmly than a MacBook Air normally has any right to.