Jakarta NoSQL Driver for the AppDev Pack, Part 2

Mon Sep 26 09:55:05 EDT 2022

Tags: java jnosql
  1. Jakarta NoSQL Driver for the AppDev Pack, Part 1
  2. Jakarta NoSQL Driver for the AppDev Pack, Part 2

In my last post, I talked about how I implemented a partial Jakarta NoSQL driver using the AppDev Pack as a back end instead of the Notes.jar classes used by the primary implementation. Though the limitations in the ADP mean that it lacks a number of useful features compared to the primary one, it was still an interesting experiment and has the nice side effect of working with essentially any Java app server and Java version 8 or above.

Beyond the Proton API calls, the driver brought up the interesting topic of handling authentication. Proton has three ways of working in this regard:

  • Anonymous, which is what you might expect based on how that works elsewhere in Domino. This is easy but not particularly useful except in specific circumstances.
  • Client certificate authentication, where you create a TLS keychain for a given user and associate it with a Directory user (e.g. CN=My Proton App/O=MyOrg), and then your app performs all operations as that user. This is basically like if you ran a remote app with NRPC using a client Notes ID.
  • Act-as-User, which builds on the above authentication by configuring an OAuth broker service that can hand out OIDC tokens on behalf of named users. This is sort of like server-to-server communication with the "Trusted Servers" config field in the server doc, but different in key ways.

Client Certificate Authentication

When doing app development, the middle route makes sense as your starting point, since most of your actual work will likely be the same regardless of whether you later add on Act-as-User support. For that, you'll follow the guide to set up your TLS keychain and then feed those files to the com.hcl.domino.db.model.Server object:

Path base = Paths.get(BASE_PATH);
		
File ca = base.resolve("rootcrt.pem").toFile();
File cert = base.resolve("clientcrt.pem").toFile();
File key = base.resolve("clientkey.pem").toFile();

Server server = new Server("ceres.frostillic.us", 3003, ca, cert, key, null, null, Executors.newSingleThreadExecutor());

The use of java.io.File classes here is a bit of a shame, but not the end of the world. In practice, you'd likely store your keychain somewhere on the filesystem anyway and then feed the BASE_PATH property to your app via an environment variable. Otherwise, if the files were pulled from some other source, you could use Files.createTempFile to store them on the filesystem while your app is running. Those nulls are for the certificates' passphrases, so they may need to be populated in your setup. For the last parameter, making a new executor is fine, but you might want to hand it a ManagedExecutorService.

You can initialize this connection basically anywhere - I put it in a ServletRequestListener to init and term the object per-request, but I think it would actually be fine to do it in a ServletContextListener and keep it app-wide. I did it out of habit from Notes.jar and its heavy requirements on threads and the way Session objects are different per-user, but that's not how Proton connections work.
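
If you do go the app-wide route, a minimal sketch of that might look like the following. This is hypothetical and untested: the BASE_PATH environment variable, the attribute name, and the jakarta.servlet namespace are all assumptions here, and I haven't verified whether the Proton client object needs explicit cleanup.

import java.io.File;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.Executors;

import com.hcl.domino.db.model.Server;

import jakarta.servlet.ServletContextEvent;
import jakarta.servlet.ServletContextListener;
import jakarta.servlet.annotation.WebListener;

// Sketch of app-wide initialization, reading the keychain path from an environment variable
@WebListener
public class ProtonServerListener implements ServletContextListener {
	public static final String ATTR_SERVER = "protonServer"; // hypothetical attribute name

	@Override
	public void contextInitialized(ServletContextEvent sce) {
		Path base = Paths.get(System.getenv("BASE_PATH"));
		File ca = base.resolve("rootcrt.pem").toFile();
		File cert = base.resolve("clientcrt.pem").toFile();
		File key = base.resolve("clientkey.pem").toFile();
		Server server = new Server("ceres.frostillic.us", 3003, ca, cert, key, null, null,
			Executors.newSingleThreadExecutor());
		sce.getServletContext().setAttribute(ATTR_SERVER, server);
	}

	@Override
	public void contextDestroyed(ServletContextEvent sce) {
		// If the Proton client exposes a shutdown/close method, this would be the place
		// to call it; omitted here since I haven't verified that part of the API
		sce.getServletContext().removeAttribute(ATTR_SERVER);
	}
}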

This form of authentication is a prerequisite for Act-as-User below, but it might suffice for your needs anyway. For example, if you have a "utility" app, like a bot that looks up data and posts messages to Slack or something, you can call it good here.

Act-as-User

But though the above may suffice sometimes, the integration of user identity with data access is one of the hallmarks of Domino, so Domino apps that wouldn't need Act-as-User support are few and far between.

Act-as-User is a bit daunting to set up, though. In traditional server-to-server communication in Domino, it suffices to just add a server's name or group to the "Trusted Servers" list in the server doc of the server being accessed. Then, it will trust any old name that the app-housing server sends along. Generally, this will be a user that was already authenticated with Domino, like using an XPages-supplied Session object to access a remote server, but it doesn't have to be.

Act-as-User, though, uses OpenID Connect with an authentication server to do the actual authentication, and then the Proton task is told to accept those tokens as legal for acting on behalf of a given user. While you could in theory write your own OIDC server that dispenses tokens for any user name willy-nilly, in practice you'll almost definitely use an existing implementation. In the default case, that implementation will in turn almost definitely be IAM, an OAuth broker service included with the AppDev Pack that stores its configuration data in an NSF and reads users from Domino (or elsewhere) via LDAP.

IAM, though, isn't special in this regard. It's packaged with the ADP, sure, but the way it deals with tokens is entirely standards-based. That means that any compatible implementation can fill this role, and, since I'd heard great things about Keycloak, I figured I'd give that a shot. With some gracious assistance from Heiko Voigt, I was able to get this working - I don't want to steal his future thunder by going into too much detail, but honestly the main hurdles for me were just around learning how Keycloak works. Once you have the concepts down, you basically plug the Keycloak client details in for the IAM ones in the same configuration.

With that set up, you can feed your user token from the web app into your Proton API calls, and then your actions will be running as your user in the same way as if you were authenticated in an on-server XPages or other app. The way this manifests in the Java API is a little weird, but it works well enough: almost all Proton calls have a varargs portion at the end of their method signature that takes OptionalArg instances. One such type is OptionalAccessToken, which takes your auth token as a String. I have a method that will stitch in an access token when present. That gets passed in when making calls, such as to read documents:

OptionalItemNames itemNamesArg = new OptionalItemNames(itemNames);
OptionalStart startArg = new OptionalStart((int)skip);
OptionalCount countArg = new OptionalCount(limit < 1 ? Integer.MAX_VALUE : (int)limit);
			
List<Document> docs = database.readDocuments(
	dql,
	composeArgs(
		itemNamesArg,
		startArg,
		countArg
	)
).get();
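
For reference, composeArgs there isn't part of the Proton API - it's a small utility in the driver. A rough sketch of the idea (not the actual implementation) might look like this, assuming an injected AccessTokenSupplier along the lines of the CDI bean shown later:

// Hypothetical helper: pass through the given args and append an OptionalAccessToken
// when a token is available
private OptionalArg[] composeArgs(OptionalArg... args) {
	List<OptionalArg> result = new ArrayList<>(Arrays.asList(args));
	String token = this.accessTokenSupplier.get(); // assumed supplier field
	if(token != null && !token.isEmpty()) {
		result.add(new OptionalAccessToken(token));
	}
	return result.toArray(new OptionalArg[result.size()]);
}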

App Authentication

Okay, so that's what you do when you have a setup and a token, but that leaves the process of the user actually acquiring the token. From the user's perspective, this will generally take the form of doing an "OAuth dance" where, when the user tries to access a protected resource, they're sent over to Keycloak to authenticate, which then sends them back to the app with token in hand.

There are a lot of ways one might accomplish this, varying language-to-language, framework-to-framework, and server-to-server. You will be shocked to learn that I'm using Open Liberty for my app here, and that comes with built-in support for OIDC.

Before I go further, I should put forth a big caveat: I'm really muddling through with this one for the time being. The setup I have only kind of works, and is clearly not the ideal one, but it was enough to make the connection happen. I'm not sure if the right path long-term is to keep using this built-in feature or to switch to either a different built-in option or another library entirely. So... absolutely do not take anything here as advice in the correct way to do this.

Anyway, with that out of the way, you can configure your Liberty server to talk to your Keycloak server (or IAM, probably, but I didn't do that):

<openidConnectClient id="client01"
    clientId="liberty-tester"
    clientSecret="some-client-secret"
    discoveryEndpointUrl="https://some.keycloak.server/auth/realms/master/.well-known/openid-configuration"
    signatureAlgorithm="RS256"
    sslRef="httpSsl"
    accessTokenInLtpaCookie="true"
    userIdentifier="preferred_username"
    groupIdentifier="groups">
</openidConnectClient>
<ssl id="httpSsl" trustDefaultCerts="true" keyStoreRef="myKeyStore" trustStoreRef="OIDCTrustStore" />
<keyStore id="myKeyStore" password="super-secure-password1" type="PKCS12" location="${server.config.dir}/BasicKeyStore.p12" />
<keyStore id="OIDCTrustStore" password="super-secure-password2" type="PKCS12" location="${server.config.dir}/OIDCTrustStore.p12" />

The keyStores there contain appropriate certificate chains for the TLS connection to your Keycloak server, while the clientId and clientSecret match what you configure/generate on Keycloak for this new client app.

What got me able to actually use this token for downstream access was the accessTokenInLtpaCookie property. If you set that, then your HttpServletRequest objects after the initial one will have an oidc_access_token property on them containing your token in the format that Proton needs. So that's where the ContextDatabaseSupplier in the previous post got it:

@Produces
public AccessTokenSupplier getAccessToken() {
	return () -> {
		HttpServletRequest request = CDI.current().select(HttpServletRequest.class).get();
		return (String)request.getAttribute("oidc_access_token");
	};
}

This is also one of the parts that makes me think I'm not quite doing this ideally. It's weird that the token shows up only in requests after the first, though that wouldn't be an impediment in a lot of app types. It's also very unfortunate to have the app use a server-specific property like that.

Fortunately, Jakarta Security 3.0 sprouted official OIDC support, though the build of Open Liberty I was using didn't quite have all the pieces in place for that - reasonable, considering Jakarta EE 10 only officially came out yesterday and this was weeks ago. It looks like that may provide the token in a contextual object, so I'll have to give that a shot once support settles in.
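
For what it's worth, my reading of the Jakarta Security 3.0 spec is that it would look roughly like the sketch below. I haven't run this on Liberty yet, and the provider and client values are placeholders, so treat it as a guess at the shape rather than working code:

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.context.RequestScoped;
import jakarta.inject.Inject;
import jakarta.security.enterprise.authentication.mechanism.http.OpenIdAuthenticationMechanismDefinition;
import jakarta.security.enterprise.identitystore.openid.OpenIdContext;

// Rough, untested sketch of Jakarta Security 3.0's OIDC support
@OpenIdAuthenticationMechanismDefinition(
	providerURI = "https://some.keycloak.server/auth/realms/master",
	clientId = "liberty-tester",
	clientSecret = "some-client-secret"
)
@ApplicationScoped
public class SecurityConfig {
}

// Elsewhere, in a separate class: the access token should then be available from a
// contextual object rather than a server-specific request attribute
@RequestScoped
public class TokenHolder {
	@Inject
	private OpenIdContext openIdContext;

	public String getAccessToken() {
		return openIdContext.getAccessToken().getToken();
	}
}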

With my setup in place, janky as it may be, I'm able to access a resource (e.g. a JAX-RS endpoint) marked with @RolesAllowed("uma_authorization") and the server will automatically kick me over to Keycloak and then accept the token when I get back. Then, I can pick that up from the request attributes and use it for Domino data access. Keycloak is getting its user directory from Domino via LDAP in the same way as IAM usually would, but, like IAM with AD, it could be configured to use different user directories. I don't know that I'll want to do that, but it's good to know.

Conclusion

Like the original driver itself, this was mostly an educational exercise for me. I don't currently have any requirements to use the AppDev Pack or OIDC/Keycloak, but I'd wanted to dip my toes in both for a while now, and I'm pleased that I came out successful. I imagine that I'll have an occasion to implement something like this eventually. It may not be the same specific parts, but the core concepts are common, like in Keep's JWT and OAuth support. It's a neat setup, and it's definitely worth doing something similar if you have some experimentation time on your hands.

Jakarta NoSQL Driver for the AppDev Pack, Part 1

Fri Sep 09 10:20:28 EDT 2022

Tags: java jnosql
  1. Jakarta NoSQL Driver for the AppDev Pack, Part 1
  2. Jakarta NoSQL Driver for the AppDev Pack, Part 2

Though the bulk of the work I've been doing for the XPages Jakarta EE project is to bring JEE technologies to Domino, the NoSQL driver has been designed to lead a double life: it works in an XPages context, but it's written to not have any XPages dependencies. One reason for this is that I want it to be usable if you use, for example, the Open Liberty runtime project to side-car apps on a Domino server but still use Notes.jar for data access.

Another reason for its organization, though, is that I intended for the driver to be portable across implementations. The driver itself is split into a main bundle and a ".lsxbe" implementation bundle. My original thought was to make that ready for a JNX or Domino JNA implementation, but it's pretty flexible.

Proton

And that brings me to the topic of this post: the AppDev Pack. I'd actually not used the AppDev Pack at all until I started in on this little project. While I like the idea, it at first only had a JS client, and then even with a Java client lacked some important capabilities I'd need for most of what I do. That's still the case, unfortunately, but my general annoyance with the problems of developing on top of a local Notes runtime tipped the scales to get me to investigate it.

In essence, the core part of the AppDev Pack - the Proton addin - is kind of like a modern take on the CORBA driver for Notes.jar, in that it provides a remote way to make API calls on the server that are "low-level-ish" and ideally higher-performance than REST APIs. Since it's remote, it imposes no requirements on the environment of the app making the calls, and could in theory work with any language that has a gRPC library (though in practice HCL has only ever shipped JS and Java clients without providing the gRPC spec to generate others).

The API that Proton provides is heavily geared around batch operations, focusing on using DQL for querying. This suits my needs well, as Jakarta NoSQL essentially assumes you'll have something like DQL available to do arbitrary document queries. It's also geared around requesting specific items from documents to return, which further suits me well - for efficiency purposes, I wrote code into the LSXBE driver to only read desired items anyway, so that adapted naturally.

The Driver

So, over the long weekend here, I set out to write a driver suitable for use in standalone JEE applications to access Domino via Proton. The goal here is to make it so that code written to target the LSXBE driver will work the same way with this Proton-based one, with the only differences being limitations in what Proton provides and a switch from providing a lotus.domino.Database context to providing a com.hcl.domino.db.model.Database one.

And, though the limitation list there is a bit lengthy, I accomplished the main part of my goal there. Since it shares a lot of code in common with the LSXBE driver, the implementation here is pretty small. The bulk of the code happens in ProtonDocumentCollectionManager, which is the implementation of all the raw JNoSQL primitives - insert, update, select, etc. - and then ProtonEntityConverter, which is the utility class that translates between Proton's concept of Document and JNoSQL's concept of DocumentEntity.

The Proton API is... very weird from a Java perspective. Some of it bears a little similarity to the NIO Files API if you squint, but otherwise it's just an odd duck. Fortunately, a consumer of this driver doesn't have to care about that beyond the initial connection to the server. The specifics of actually connecting to a server and opening a database are the same as with normal Proton use - you do that and then provide a CDI bean to hand off the Database object to the driver:

@RequestScoped
public class ContextDatabaseSupplier {
	// This type of supplier is required
	@Produces
	public DatabaseSupplier get() {
		// Here, the `Database` object is set in a `ServletRequestListener`, though it likely could be app-wide
		return () -> {
			HttpServletRequest request = CDI.current().select(HttpServletRequest.class).get();
			return (Database)request.getAttribute(DatabaseContextListener.ATTR_DB);
		};
	}
	
	// This supplier is optional - leaving it out or returning null will cause the driver to skip performing Act-As-User
	@Produces
	public AccessTokenSupplier getAccessToken() {
		return () -> {
			HttpServletRequest request = CDI.current().select(HttpServletRequest.class).get();
			return (String)request.getAttribute("oidc_access_token");
		};
	}
}

Next Steps

Now, admittedly, I'm not sure how much more time I'm going to put into this driver. Though there are a few enhancements that could be made - attachments and rich text, namely - I'm probably not going to use this myself in the near future. The immediate cases I have where I'd want to use this would end up involving things like views that aren't currently accessible over Proton, so it'd really just be speculative development. Still, I wanted to make this to kick the tires a bit and see what's possible, and it's got my gears turning.

There'll also be a bit more I want to go over in the next post. In the above code block, I mention Act-As-User tokens; the driver will use those when provided, and the driver itself doesn't care how you acquire them, but it's an interesting topic on its own. During my testing, I was able to get a Liberty-hosted app performing OIDC authentication for its own login and then using that token for access to Proton, and I think that will deserve some expansion on its own.

In the mean time, if you're curious, the driver is available up on GitHub:

https://github.com/OpenNTF/jnosql-driver-proton

The artifact is published to OpenNTF's Maven repository, though you'll need to follow the instructions in the README there to add the non-redistributable Proton client library to your local Maven repo.

Code-First REST APIs Followup: OpenAPI

Fri Aug 26 11:04:08 EDT 2022

Tags: jakartaee
  1. Code-First REST APIs With XPages Jakarta EE Support
  2. Code-First REST APIs Followup: OpenAPI
  3. XAgents to Jakarta REST Services

In yesterday's post, I gave a two-file example of writing a basic CRUD REST API for NSF documents. In that post, I casually mentioned that one of the side benefits of this approach would have to wait until I fixed an open bug.

Well, I fixed that bug not long after making that post, so now I can detail what that is.

But just before I do that, I should mention that I added an "examples" directory to the project repository, where I plan to put examples like this in on-disk-project form, without the baggage of the test-suite example NSFs in the main tree: https://github.com/OpenNTF/org.openntf.xsp.jakartaee/tree/develop/examples. Anyway, back to what I fixed up here.

One of the neat little side features that the framework brings in is MicroProfile OpenAPI, which automatically generates OpenAPI specifications for your REST services based on your code. Depending on your workflow, this can be tremendously convenient. OpenAPI, being a widely-supported spec, has tons of tools available, and you can use this when integrating other applications with yours, or when working in a multi-tiered development team. For example, if the UI portion of your app is being developed separately from the back end, you could hand off the OpenAPI file to the other developer(s) and they will have the information they need to write against your services. And, since it's generated from the code and not manually, it has the benefit of being inherently consistent with the current design of the app.

Default Output

By default, based on how the app worked when we left it yesterday, going to "foo.nsf/xsp/app/openapi.yaml" will get you this output:

---
openapi: 3.0.3
info:
  title: Jakarta Code-First REST
servers:
- url: http://some.server/foo.nsf/xsp/app
paths:
  /employees:
    get:
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Employee'
    post:
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Employee'
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Employee'
  /employees/{id}:
    get:
      parameters:
      - name: id
        in: path
        required: true
        schema:
          type: string
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Employee'
    put:
      parameters:
      - name: id
        in: path
        required: true
        schema:
          type: string
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Employee'
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Employee'
    delete:
      parameters:
      - name: id
        in: path
        required: true
        schema:
          type: string
      responses:
        "204":
          description: No Content
components:
  schemas:
    Employee:
      required:
      - name
      - title
      - department
      type: object
      properties:
        id:
          type: string
        name:
          minLength: 1
          type: string
          nullable: false
        title:
          minLength: 1
          type: string
          nullable: false
        department:
          minLength: 1
          type: string
          nullable: false
        age:
          format: int32
          minimum: 1
          type: integer

This includes all of the operations we defined in the rest.EmployeesResource class as well as the definition of the model.Employee entity class. Additionally, it picked up on our Bean Validation annotations, and so all of the @NotEmpty properties are marked as being non-null and non-empty strings, while the age has a minimum of 1, as coded.

Expanding the Definition

That, on its own, is pretty useful, and it will automatically adapt to any code changes you make. However, you can go further.

Versions

For example, while having the file as it is will work for development, you'll want to give it a version when it goes into production, so that any API consumers can know when it's expected that the API changed. There are two ways to do this with this project. If you have a $TemplateBuild shared field with a template version, then the code will pick up on that. Alternatively, you can specify configuration properties via MicroProfile Config. To do that, create a new file in the "Code/Java" directory of the project in the Package Explorer view in Designer named "microprofile-config.properties" within a directory named "META-INF":

Creating a microprofile-config.properties file

If you don't have a Package Explorer pane, you can add it by going to Window and then either switching to the "XPages" perspective or going to "Show Eclipse Views" and picking it from there. To add the folder and then the file, you can right-click the "Code/Java" folder there, go to "New" - "Other...", and pick each in turn.

Once you have that file open, add a line like this:

mp.openapi.extensions.smallrye.info.version=1.0.1

("SmallRye" is the name of several MicroProfile spec implementations)

Then save. Now, when you open the OpenAPI spec, it'll start like this:

---
openapi: 3.0.3
info:
  title: Jakarta Code-First REST
  version: 1.0.1

Now, as long as you update this for API changes or use a $TemplateBuild field, your OpenAPI will be nicely versioned. As a nice bonus, if you build your NSF using the NSF ODP Tooling project, it can add the Maven version to $TemplateBuild by default, so you don't have to worry about manual updates.

Endpoint Descriptions

Next, while all of our endpoints are listed, it'd be good to add some additional detail. While they're more-or-less clear now, they'll get less so as the app grows. This is generally done via annotations - there are a bunch of them, but we'll focus on just a few for now.

We'll start with a basic one: adding a description to the endpoint that lists all Employees. To do this, go back to rest.EmployeeResource and add an annotation of type org.eclipse.microprofile.openapi.annotations.Operation:

@GET
@Produces(MediaType.APPLICATION_JSON)
@Operation(description="Retrieves a list of all employee entities in the data store")
public List<Employee> get() {
	return employees.findAll(Sorts.sorts().asc("name")).collect(Collectors.toList());
}

Once you add that, then that part of the OpenAPI spec will read:

paths:
  /employees:
    get:
      description: Retrieves a list of all employee entities in the data store
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Employee'

Better by one step. The @Operation annotation itself has a couple more properties, which let you specify an operationId (very useful for code generated from the spec, so I advise doing it in a fully-fledged app) or mark the operation as "hidden", so it won't show up in the output at all.
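
For example, a version of the method with an explicit operationId (the "listEmployees" name here is just an illustrative choice) would look like:

@GET
@Produces(MediaType.APPLICATION_JSON)
@Operation(
	operationId = "listEmployees",
	description = "Retrieves a list of all employee entities in the data store"
	// hidden = true would instead remove this operation from the generated spec entirely
)
public List<Employee> get() {
	return employees.findAll(Sorts.sorts().asc("name")).collect(Collectors.toList());
}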

Model Annotations

Next up, we'll add some descriptive information to the Employee model itself. For this, we'll go back to model.Employee and start adding annotations of type org.eclipse.microprofile.openapi.annotations.media.Schema:

@Schema(description = "Represents an individual employee within the system")
@Entity
public class Employee {
	/* snip */
	
	private @Id String id;
	@Schema(description="The employee's full name", example="Foo Fooson")
	private @Column @NotEmpty String name;
	@Schema(description="The employee's job title", example="CTO")
	private @Column @NotEmpty String title;
	@Schema(description="The name of the employee's current department within the company", example="IT")
	private @Column @NotEmpty String department;
	@Schema(description="The employee's current age", example="80")
	private @Column @Min(1) int age;

	/* snip */
}

The @Schema annotation is usable in a lot of situations and has a lot of options, but these will suffice for now. Once we add these, our OpenAPI spec expands in the components section to this:

components:
  schemas:
    Employee:
      description: Represents an individual employee within the system
      required:
      - name
      - title
      - department
      type: object
      properties:
        id:
          type: string
        name:
          description: The employee's full name
          minLength: 1
          type: string
          example: Foo Fooson
          nullable: false
        title:
          description: The employee's job title
          minLength: 1
          type: string
          example: CTO
          nullable: false
        department:
          description: The name of the employee's current department within the company
          minLength: 1
          type: string
          example: IT
          nullable: false
        age:
          format: int32
          description: The employee's current age
          minimum: 1
          type: integer
          example: 80

Now, anyone reading this (or interpreting it with a tool) will have just a bit more information about it. In this case, the descriptions don't add much, but you can imagine expanding this to cover your specific business rules for formatting, internal codes, etc.

Swagger

Though there's a lot more that you can do to expand the OpenAPI generation, I'll leave it there for now. I'll finish up here with one of the more straightforward benefits you get from this: using Swagger UI. Swagger UI is a tremendously-popular tool for visualizing (and, to an extent, working with) OpenAPI specifications. You can download Swagger UI yourself (to run locally or put in your NSF) or use the live demo, which runs in your browser.

If you want to use the live demo, you can enable CORS in the microprofile-config.properties file created earlier, setting rest.cors.enable to true and rest.cors.allowedOrigins to *.
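
Those two lines in microprofile-config.properties would look like this (the wide-open "*" origin is fine for testing, but you'd likely want to narrow it for production):

rest.cors.enable=true
rest.cors.allowedOrigins=*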

Once you have it accessible, you can point Swagger UI to your URL, like "http://some.server/foo.nsf/xsp/app/openapi.yaml", and it'll generate a nice summary:

Screenshot of Swagger UI pointing at our app

You can imagine either handing that off to your front-end developer or using it yourself when working on the client part of your system. As you expand your spec - say, adding @Tag to categorize your resources - the UI will expand to reflect it as well.

Conclusion

I'm quite fond of the MicroProfile OpenAPI spec here - it's easy to use and you don't have to worry about the fiddly work of actually generating the spec. Additionally, it's an excellent example of the kind of benefits you get from building on top of Jakarta and MP specifications: because they're built with the active involvement of many companies and with an eye towards interoperability, you automatically get to use tools like Swagger UI or OpenAPI Generator that have no knowledge of Domino. You're rowing in the same direction as lots of others.

In the short term, I plan to update the examples section of the project Git repo with the newer version of these classes, and then follow up by putting the "GitHub issues" client code I wrote for my recent OpenNTF presentation in as another example. Once I do the latter, I'll make sure to post about it as well.

Code-First REST APIs With XPages Jakarta EE Support

Thu Aug 25 11:43:50 EDT 2022

Tags: jakartaee
  1. Code-First REST APIs With XPages Jakarta EE Support
  2. Code-First REST APIs Followup: OpenAPI
  3. XAgents to Jakarta REST Services

Today, I'd like to do a bit of a demonstration post. Specifically, I'd like to demonstrate the basics of making a basic CRUD (Create, Read, Update, Delete) REST API using the XPages Jakarta EE Support project, storing data in the NSF of the app. This will kind of act like a condensed version of the longer series on rewriting the OpenNTF site.

I think it will be a good example of how you can design an API starting from the data level up, with all of the pieces fitting together the whole time in a cohesive whole. There are some bugs I need to address before it can also be the source of an OpenAPI spec you could give to front-end developers, but that will come along for the ride once I fix them.

In any event, the core here will be simple: it will be an NSF that will have one document type - "Employee" - and the ability to manipulate those documents in a type-safe way from a REST client. I won't be going over how to actually use this in a browser or remote app, just because that's essentially an infinite rabbit hole. As it is, the code involved is written entirely in an NSF using Designer. This assumes you have a recent build of the project installed and that your NSF has all of the libraries from it checked in the Xsp Properties.

The Data Model

We'll be starting with defining the data model. While you could make a Form design element for this too, you don't need to. Our model will be pretty bare-bones: an ID and four scalar properties, without worrying in this exercise about relationships with other model objects. The class (with getters/setters snipped) looks like this:

package model;

import java.util.stream.Stream;

import org.openntf.xsp.nosql.mapping.extension.DominoRepository;

import jakarta.nosql.mapping.Column;
import jakarta.nosql.mapping.Entity;
import jakarta.nosql.mapping.Id;
import jakarta.nosql.mapping.Sorts;
import jakarta.validation.constraints.Min;
import jakarta.validation.constraints.NotEmpty;

@Entity
public class Employee {
	public interface Repository extends DominoRepository<Employee, String> {
		Stream<Employee> findAll(Sorts sorts);
	}
	
	private @Id String id;
	private @Column @NotEmpty String name;
	private @Column @NotEmpty String title;
	private @Column @NotEmpty String department;
	private @Column @Min(1) int age;
	
	/* (snip) "Source" -> "Generate Getters and Setters..." */
}

This will cover all of our data-access needs. The Repository interface there is a Jakarta NoSQL repository that has built-in knowledge for CRUD and query operations that we'll need. In a larger app, you might also add some view-backed sources or other complexities, but we don't need it here.

Beyond the NoSQL annotations - @Entity, @Id, and @Column - this model also uses Jakarta Bean Validation annotations to ensure that the data being stored meets our requirements. The strings all have to be non-empty and the age has to be an integer greater than zero (labor laws are lax in this imagined country, apparently). Those annotations will be enforced by Jakarta NoSQL, and will also be used when we get to the REST services. Having this sort of thing is a huge relief: since this is the only way our app will deal with data storage, there's inherently no path in the codebase that can store invalid data.
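
As a small illustration of what those constraints mean in practice, the snippet below exercises them with the standard Validator API directly. You don't need to do this in the app itself - Jakarta NoSQL and the REST layer trigger validation for you - so this is just a demonstration of the behavior:

// Standalone illustration of the Bean Validation constraints on Employee
Validator validator = Validation.buildDefaultValidatorFactory().getValidator();

Employee employee = new Employee();
employee.setName("");        // violates @NotEmpty
employee.setTitle("CTO");
employee.setDepartment("IT");
employee.setAge(0);          // violates @Min(1)

Set<ConstraintViolation<Employee>> violations = validator.validate(employee);
// violations now contains entries for the "name" and "age" properties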

REST Services

Next, we'll start on the REST services. For a basic CRUD app like this, we'll have a few to define: listing all of them, creating a new one, and then reading, updating, and deleting an individual Employee. We'll start with the "list all" operation. This is in a second class:

package rest;

import java.util.List;
import java.util.stream.Collectors;

import jakarta.inject.Inject;
import jakarta.nosql.mapping.Sorts;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import model.Employee;

@Path("employees")
public class EmployeeResource {
	
	@Inject
	private Employee.Repository employees;
	
	@GET
	@Produces(MediaType.APPLICATION_JSON)
	public List<Employee> get() {
		return employees.findAll(Sorts.sorts().asc("name")).collect(Collectors.toList());
	}
}

This class will listen at foo.nsf/xsp/app/employees for a GET and provide back a JSON array of Employee objects. Behold, in all its glory:

Call to the employees list with no entries returned

Okay, well, we haven't created anything, so the fact that the JSON result over on the right is empty is correct. It's returning JSON - that JSON just happens to be [].

Create

So we'd better add a method to actually create a new Employee document. REST-idiom-wise, this should be POST to the same path that gets the list of employees, to create a new entity:

@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Employee create(@Valid Employee employee) {
	employee.setId(null);
	return employees.save(employee);
}

Now we're getting somewhere. Compared to the previous method, this one sprouts a @Consumes annotation to indicate that it expects a valid JSON form of an Employee, which it then takes as a method parameter. That parameter is annotated with jakarta.validation.Valid, which tells JAX-RS that it should perform bean validation on the incoming object before even calling the method. This isn't strictly necessary, but it's nice to have: without it, the method call would still fail, but the failure would show up as a stack trace from Jakarta NoSQL's innards and would have a 500 status code. We'll see in a bit what it looks like instead with this.

But first, for the normal case:

Call to create a valid employee entity

Shown here, the REST client POSTs valid JSON to this new endpoint (which is the same URL as previously) and receives back the new state of the entity with a 200 OK response. Because we don't have any extra computation going on here, it's just the same value but with the UNID filled in from having saved the document to the NSF.

If I mangle the data - say, by removing a property or, in this case, making an invalid age - I'll instead get back a 400 Bad Request response with some descriptive text:

Trying to create an invalid Employee entity

There are two minor things of note here. The first is that the response isn't in JSON. While this isn't wholly wrong per se, it's not ideal. There's an open issue to improve this. The second is something that can be changed readily within the app: the method parameter name here is arg0 instead of employee. While this again isn't wrong, since the important information is still conveyed, it'd be nice to improve this. Fortunately, we can: in Package Explorer, right-click the NSF project and go to properties. There, you can enable custom compilation settings to store the method parameter names.

Setting custom project compiler settings

I don't know why this is disabled by default.

Once you set that, the message will say employee instead of arg0, which is a bit nicer, and the better name will come along in other content types when that improves in the project too.

Query (redux) and Get

Now that we've created a document, we can re-run the base GET request and see a single-entry array:

Call to the employees list with one entry returned

That's more like it. We'll also want the ability to retrieve an individual entry by UNID, though, so we'll go back and add the method to our EmployeeResource class:

@Path("{id}")
@GET
@Produces(MediaType.APPLICATION_JSON)
public Employee getEmployee(@PathParam("id") String id) {
	return employees.findById(id)
		.orElseThrow(() -> new NotFoundException(MessageFormat.format("Could not find employee for ID {0}", id)));
}

Compared to our previous method, this adds a few new tricks:

  • The @Path("{id}") bit specifies a next level of path below employees, and the brackets indicate that it's an arbitrary value that can be picked up as a parameter.
  • The @PathParam("id") annotation indicates that the id method argument will be populated with the variable part of the path.
  • The orElseThrow(() -> new NotFoundException(...)) bit uses the orElseThrow method of Optional to handle the case where no document can be found with that UNID, and then throws the JAX-RS-specific NotFoundException to trigger a proper 404 Not Found response to the client.

The results of calling this are what you might expect, returning a single JSON object representing the Employee:

Call to get a single entity

Modification

Next up is the "U" part of CRUD: updating an existing document. This method essentially composes the "create new" and "read single" methods above. In REST verbiage, this should be a PUT to the same URL as the individual GET:

@Path("{id}")
@PUT
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Employee update(@PathParam("id") String id, @Valid Employee employee) {
	employee.setId(id);
	return employees.save(employee);
}

The only new concept here is the @PUT annotation - the rest is a re-composition of earlier operations. Defining this allows the caller to send a new version of an Employee entity to replace the existing one:

Call to update an existing entity

With some more work, you could also make a PATCH method that would take an unvalidated Employee and update only changed fields, but that's out of scope for this for now. That'd be a good addition for a fully-fleshed-out REST endpoint, though.
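
If you did want to sketch that out, it might look something like the following. This is hypothetical, not part of the example app, and the hand-rolled field merging is simplistic - a real version would want something more systematic (or proper JSON Merge Patch handling):

// Hypothetical PATCH handler: merges only the provided fields into the stored entity
@Path("{id}")
@PATCH
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Employee patch(@PathParam("id") String id, Employee employee) {
	Employee existing = employees.findById(id)
		.orElseThrow(() -> new NotFoundException(MessageFormat.format("Could not find employee for ID {0}", id)));
	if(employee.getName() != null) { existing.setName(employee.getName()); }
	if(employee.getTitle() != null) { existing.setTitle(employee.getTitle()); }
	if(employee.getDepartment() != null) { existing.setDepartment(employee.getDepartment()); }
	if(employee.getAge() > 0) { existing.setAge(employee.getAge()); }
	return employees.save(existing);
}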

Deletion

Finally, we'll get to the last part of CRUD: deleting documents. This actually ends up being the simplest method of all:

@Path("{id}")
@DELETE
public void delete(@PathParam("id") String id) {
	employees.deleteById(id);
}

This listens at the same path as the last two, but for DELETE verbs. Then, all it does is delete the entity and return no content. If you chose, you could return JSON like {"success":true} or something - it's always kind of arbitrary what you respond with on DELETE beyond the success status code.

Call to delete an entity

In that screenshot, you can see that it returns 204 No Content, which is the HTTP way to say "yep, that worked, and I don't have anything else to tell you".

Conclusion

This was two classes (and a nested interface) in total, and it allowed us to create a type- and validation-safe REST API for NSF documents. Beyond just the relatively-small amount of code, there are a few things that make this foundation important.

First of all, the code is (as long as you're comfortable with Java and some of the concepts) eminently readable. This is code that you could hand off to another team member or come back to in five years and very quickly comprehend. This is a critical distinction from less-declarative frameworks like traditional XPages.

Secondly, this is a capable basis for future development. You can come back in and expand this app - more entities, additional methods, etc. - and this original code will still hold strong. You can scale the app up to medium-sized (like the OpenNTF site) or all the way to monstrosity and your framework will be consistent the whole time. This contrasts with frameworks that are either too limited to scale up or (like XPages) turn into an unmaintainable mess above a basic level.

I could go on, but I'll leave it there for now. I continue to find this environment quite pleasant to develop for, and it's always satisfying to see how several of the specs tie together like this.

August OpenNTF Webinar - XPages Jakarta EE Support In Practice

Tue Aug 16 08:16:56 EDT 2022

This Thursday (two days from now), I'll be presenting for OpenNTF's webinar series on the topic of the XPages Jakarta EE Support project. From our summary:

The XPages Jakarta EE Support project on OpenNTF adds an array of modern capabilities to NSF-based Java development. These improvements can be used for wholly-new applications or added incrementally to existing ones.

In this webinar Jesse Gallagher will demonstrate how to use this project to perform common tasks in better ways, such as creating and consuming REST services, writing managed beans with CDI, and using new EL features in XPages. Though these examples will largely use Java, they do not require any knowledge of OSGi or extension library development, nor any tools other than Designer.

This webinar will take place on August 18, 2022 from 11:00 AM to 12:30 PM (New York time).

Register for this webinar at: https://register.gotowebinar.com/register/6878765070462193675

My intent for this is to show the most-common components used with some examples of how I'm using them in practice. I hope it will also be an opportunity for anyone who (reasonably) balks at the opaque monolith to ask questions and get a better idea for whether it'd be helpful for them.

Adding Transactions to the XPages Jakarta EE Support Project

Wed Jul 20 16:03:55 EDT 2022

Tags: jakartaee
  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

As my work of going down the list of JEE specs is yielding diminishing returns, I decided to give a shot to implementing the Jakarta Transactions spec.

Implementation Oddities

This one's a little spicy for a couple of reasons, one of which is that it's really a codified implementation of another spec, the X/Open XA standard, which is an old standard for transaction processing. As is often the case, "old" here also means "fiddly", but fortunately it's not too bad for this need.

Another reason is that, unlike with a lot of the specs I've implemented, all of the existing implementations seem a bit too heavyweight for me to adapt. I may look around again later: I could have missed one, and eventually GlassFish's implementation may spin off. In the mean time, I wrote a from-scratch implementation for the XPages JEE project: it doesn't cover everything in the spec, and in particular doesn't support suspend/resume for transactions, but it'll work for normal cases in an NSF.

The Spec In Use

Fortunately, while implementations are generally complex (as befits the problem space), the actual spec is delightfully simple for users. There are two main things for an app developer to know about: the jakarta.transaction.UserTransaction interface and the @jakarta.transaction.Transactional annotation. These can be used separately or combined to add transactional behavior. For example:

@Transactional
public void createExampleDocAndPerson() {
	Person person = new Person();
	/* do some business logic */
	person = personRepository.save(person);
	
	ExampleDoc exampleDoc = new ExampleDoc();
	/* do some business logic */
	exampleDoc = repository.save(exampleDoc);
}

The @Transactional annotation here means that the intent is that everything in this method either completes or none of it does. If something to do with ExampleDoc fails, then the creation of the Person doc shouldn't take effect, even though code was already executed to create and save it.

You can also use UserTransaction directly:

@Inject
private UserTransaction transaction;

public void createExampleDocAndPerson() throws Exception {
	transaction.begin();
	try {
		Person person = new Person();
		/* do some business logic */
		person = personRepository.save(person);
	
		ExampleDoc exampleDoc = new ExampleDoc();
		/* do some business logic */
		exampleDoc = repository.save(exampleDoc);

		transaction.commit();
	} catch(Exception e) {
		transaction.rollback();
	}
}

There's no realistic way to implement this in a general way, so the way the Transactions API takes shape is that specific resources - databases, namely - can opt in to transaction processing. While this won't necessarily save you from, say, sending out an errant email, it can make sure that related records are only ever updated together.

Domino Implementation

This support is fairly common among SQL databases and other established resources, and, fortunately for us, Domino's transaction support became official in 12.0.

So I modified the JNoSQL driver to hook into transactions when appropriate. When it's able to find an open transaction, it begins a native Domino transaction and then registers itself as a javax.transaction.xa.XAResource (a standard Java SE interface) to either commit or roll back that DB transaction as appropriate.
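
To give a rough idea of the shape of that, here's a simplified sketch - this is not the actual driver code, and the transactionBegin/transactionCommit/transactionRollback calls are, to the best of my knowledge, the methods Domino 12 added to lotus.domino.Database:

import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

import lotus.domino.Database;
import lotus.domino.NotesException;

// Simplified sketch: wrap a native Domino transaction in an XAResource so the
// Transactions implementation can commit or roll it back along with everything else
public class DominoXAResource implements XAResource {
	private final Database database;

	public DominoXAResource(Database database) {
		this.database = database;
	}

	@Override
	public void start(Xid xid, int flags) throws XAException {
		try {
			database.transactionBegin();
		} catch(NotesException e) {
			throw new XAException(XAException.XAER_RMERR);
		}
	}

	@Override
	public void commit(Xid xid, boolean onePhase) throws XAException {
		try {
			database.transactionCommit();
		} catch(NotesException e) {
			throw new XAException(XAException.XAER_RMERR);
		}
	}

	@Override
	public void rollback(Xid xid) throws XAException {
		try {
			database.transactionRollback();
		} catch(NotesException e) {
			throw new XAException(XAException.XAER_RMERR);
		}
	}

	// The remaining XAResource methods can be minimal for this sketch
	@Override public void end(Xid xid, int flags) { }
	@Override public void forget(Xid xid) { }
	@Override public int prepare(Xid xid) { return XA_OK; }
	@Override public Xid[] recover(int flag) { return new Xid[0]; }
	@Override public boolean isSameRM(XAResource other) { return this == other; }
	@Override public int getTransactionTimeout() { return 0; }
	@Override public boolean setTransactionTimeout(int seconds) { return false; }
}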

From what I can tell, Domino's transaction support is a bit limited compared to what the spec can do. Apparently, Domino's support is based around thread-specific stacks - while this has the nice attribute of supporting nested transactions, it doesn't look to support suspending transactions or propagating them across threads.

Fortunately, the normal case will likely not need those more-arcane capabilities. Considering that few if any Domino applications currently make use of transactions at all, this should be a significant boost.

Next Steps

As I mentioned above, I'll likely take another look around for svelte implementations of the spec, since I'd be more than happy to cast my partial implementation to the wayside. While my version seems to check out in practice, I'd much sooner rely on a proven implementation from elsewhere.

Beyond that, any next steps may come in the form of supporting other databases via JPA. While JDBC drivers may hook into this as it is, I haven't tried that, and this could be a good impetus to adding JPA to the supported stack. That'll come post-2.7.0, though - this release has enough big-ticket items and should now be in a cool-off period before I call it solid and move to the next one.

Adding Concurrency to the XPages Jakarta EE Support Project

Mon Jul 11 13:37:46 EDT 2022

Tags: jakartaee
  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

For a little while, I've had a task open for me to investigate the Jakarta Concurrency and MP Context Propagation specs, and this weekend I decided to dive into that. While I've shelved the MicroProfile part for now, I was successful in implementing Concurrency, at least for the most part.

The Spec

The Jakarta Concurrency spec deals with extending Java's default multithreading services - Threads, ExecutorServices, and ScheduledExecutorServices - in a couple ways that make them more capable in Jakarta EE applications. The spec provides Managed variants of these executors, though they extend the base Java interfaces and can be treated the same way by user code.

While the extra methods here and there for task monitoring are nice, and I may work with them eventually, the big-ticket item for my needs is propagating context from the initializer to the thread. By "context" here I mean things like knowledge of the running NSF, its CDI environment, the user making the HTTP request, and so forth. As it shakes out, this is no small task, but the spec makes it workable.

Examples

In its basic form, an ExecutorService lets you submit a task and then either let it run on its own time or use get() to synchronously wait for its execution - sort of like async/await but less built-in. For example:

ExecutorService exec = /* get an ExecutorService */;
String basic = exec.submit(() -> "Hello from executor").get();

A ScheduledExecutorService extends this a bit to allow for future-scheduled and repeating tasks:

String[] val = new String[1];
ScheduledExecutorService exec = /* get a ScheduledExecutorService */;
exec.schedule(() -> { val[0] = "hello from scheduler"; }, 250, TimeUnit.MILLISECONDS);
Thread.sleep(300);
// Now val[0] is "hello from scheduler"

Those examples aren't exactly useful, but hopefully you can get some further ideas. With an ExecutorService, you can spin up multiple concurrent tasks and then wait for them all - in a client project, I do this to speed up large view reading by divvying it up into chunks, for example. Alternatively, you could accept an interactive request from a user and then kick off a thread to do hefty work while returning a response before it's done.
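
For example, a sketch of that "divide up and wait for everything" pattern with plain invokeAll looks like this - readViewChunk is a hypothetical worker method standing in for whatever the expensive per-chunk work is:

// Fan out several chunks of work and wait for all of them to finish
ExecutorService exec = /* get a ManagedExecutorService */;
List<Callable<List<String>>> tasks = Arrays.asList(
	() -> readViewChunk(0, 500),
	() -> readViewChunk(500, 500),
	() -> readViewChunk(1000, 500)
);
List<String> allEntries = new ArrayList<>();
for(Future<List<String>> future : exec.invokeAll(tasks)) {
	allEntries.addAll(future.get());
}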

There's an example of this sort of thing in XPages up on OpenNTF from about a decade ago. It uses Eclipse Jobs as its concurrency tool of choice, but the idea is largely the same.

The Basic Implementation

For the core implementation code, I grabbed the GlassFish implementation, which was the Reference Implementation back when JEE had Reference Implementations. With that as the baseline, I was responsible for just a few integration tasks.

The devil was in the details, but the core lifecycle wasn't too bad.

JNDI

One intriguing and slightly vexing thing about this API is that the official way to access these executors is to use JNDI, the "Java Naming and Directory Interface", which is something of an old and weird spec. It's also one of the specs that remains in standard Java while being in the javax.* namespace, and those always feel weird nowadays.

Anyway, JNDI is used for a bunch of things (like ruining Thanksgiving), but one of them is to provide named objects from a container to an application - kind of like managed beans.

One common use for this is to allow an app container (such as Open Liberty) to manage a JDBC connection to a relational database, allowing the app to just reference it by name and not have to manage the driver and connection specifics. I use that for this blog, in fact.

XPages apps don't really do this, but Domino does include a com.ibm.pvc.jndi.provider.java OSGi bundle that handles JNDI basics. I'm sure there are some proper ways to go about registering services with this, but I couldn't be bothered: in practice, I just call context.rebind(...) and call it a day.
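
In code form, that registration and the matching lookup are about as simple as it sounds - something like the sketch below, assuming the standard java:comp default names from the Concurrency spec and glossing over how the executors themselves are constructed:

// Registering the executors with the container's JNDI provider
InitialContext jndi = new InitialContext();
jndi.rebind("java:comp/DefaultManagedExecutorService", managedExecutor);
jndi.rebind("java:comp/DefaultManagedScheduledExecutorService", managedScheduledExecutor);

// ...and then application code can look them up by name
ManagedExecutorService exec = (ManagedExecutorService)new InitialContext()
	.lookup("java:comp/DefaultManagedExecutorService");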

Ferrying the Context

The core workhorse of this is the ContextSetupProvider implementation. It's the part that's responsible for being notified when context is going to be shunted around and then doing the work of grabbing what's needed in a portable way and setting it up for threaded code. For my needs, I set up an extension interface that can be used to register different participants, so that the Concurrency bundle doesn't have to retain knowledge about everything.

So far, there are a few of these.

Notes Context

The first of these is the NSFNotesContextParticipant, which takes on the job of identifying the current Notes/XPages environment and preparing an equivalent one in the worker thread. The "Threads and Jobs" project above does something like this using the ThreadSessionExecutor class provided with the runtime, but that didn't really suit my needs.

What this class does is grab the current com.ibm.domino.xsp.module.nsf.NotesContext, pulls the NSFComponentModule and HttpServletRequest from it, and then uses that information to set up the new thread before tearing it down when the task is done.

This process initializes the thread like a NotesThread and also sets appropriate Session and Database objects in the context, so code running in an NSF can use those in their threaded tasks without having to think about the threading:

String userName = exec.submit(() -> "Username is: " + NotesContext.getCurrent().getCurrentSession().getEffectiveUserName()).get();

This class does some checking to make sure it's in an NSF-specific request. I may end up also writing an equivalent one for OSGi Servlet/Web Container requests as well.

CDI

Outside of Notes-runtime specifics, the most important context to retain is the CDI context. Fortunately, that one's not too difficult:

@Override
public void saveContext(ContextHandle contextHandle) {
	if(contextHandle instanceof AttributedContextHandle) {
		if(LibraryUtil.isLibraryActive(CDILibrary.LIBRARY_ID)) {
			((AttributedContextHandle)contextHandle).setAttribute(ATTR_CDI, CDI.current());
		}
	}
}

@Override
public void setup(ContextHandle contextHandle) throws IllegalStateException {
	if(contextHandle instanceof AttributedContextHandle) {
		CDI<Object> cdi = ((AttributedContextHandle)contextHandle).getAttribute(ATTR_CDI);
		ConcurrencyCDIContainerLocator.setCdi(cdi);
	}
}

@Override
public void reset(ContextHandle contextHandle) {
	if(contextHandle instanceof AttributedContextHandle) {
		ConcurrencyCDIContainerLocator.setCdi(null);
	}
}

This checks to make sure CDI is enabled for the current app and, if so, uses the standard CDI.current() method to find the container. This is the same method used everywhere and ends up falling to the NSFCDIProvider class to actually locate it. That bounces back to a locator service that returns the thread-set one, and thus the executor is able to find it. Then, threaded code is able to use CDI.current().select(...) to find beans from the application:

String databasePath = exec.submit(() -> "Database is: " + CDI.current().select(Database.class).get()).get();
String applicationGuy = exec.submit(() -> "applicationGuy is: " + CDI.current().select(ApplicationGuy.class).get().getMessage()).get();

Next Steps

To flesh this out, I have some other immediate work to do. For one, I'll want to see if I can ferry over the current JAX-RS application context - that will be needed for using the MicroProfile Rest Client, for example.

Beyond that, I'm considering implementing the MicroProfile Context Propagation spec, which provides some alternate capabilities to go along with this functionality. It may be a bit more work than it's worth for NSF use, but I like to check as many boxes as I can. Those concepts look to have made it into Concurrency 3.0, but, as that version is targeted for Jakarta EE 10, it's almost certain that final implementation builds will require Java 11.

Finally, though, and along similar lines, I'm pondering backporting the @Asynchronous annotation from Concurrency 3.0, which is a CDI extension that allows you to implicitly make a method asynchronous:

@Asynchronous
public CompletableFuture<Object> someExpensiveOperation() {
	Object result = new Object(); // stand-in for doing something expensive
	return Asynchronous.Result.complete(result);
}

With that, the effect is similar to submitting the task explicitly, but CDI does all the work of actually making the call asynchronous. We'll see - I've avoided backporting much, but that one is tempting.
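
For a sense of the calling side, here's a hypothetical consumer - SomeService is an illustrative CDI bean exposing the method above, not a real class:

import jakarta.inject.Inject;

public class AsyncCaller {
	@Inject
	SomeService someService; // hypothetical bean containing someExpensiveOperation()

	public void doWork() {
		// The call returns a CompletableFuture immediately; CDI runs the method body on a managed executor
		someService.someExpensiveOperation()
			.thenAccept(result -> System.out.println("Finished with: " + result));
	}
}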

WebAuthn/Passkey Login With JavaSapi

Tue Jul 05 14:05:23 EDT 2022

Tags: java webauthn
  1. Poking Around With JavaSapi
  2. Per-NSF-Scoped JWT Authorization With JavaSapi
  3. WebAuthn/Passkey Login With JavaSapi

Over the weekend, I got a wild hair to try something that had been percolating in my mind recently: get Passkeys, Apple's term for WebAuthn/FIDO technologies, working with the new Safari against a Domino server. And, though some aspects were pretty non-obvious (it turns out there's a LOT of binary-data stuff in JavaScript now), I got it working thanks to a few tools:

  1. JavaSapi, the "yeah, I know it's not supported" DSAPI Java peer
  2. webauthn.guide, a handy resource
  3. java-webauthn-server, a Java implementation of the WebAuthn server components

Definition

First off: what is all this? All the terms - Passkey, FIDO, WebAuthn - for our purposes here deal with a mechanism for doing public/private key authentication with a remote server. In a sense, it's essentially a web version of what you do with Notes IDs or SSH keys, where the only actual user-provided authentication (a password, biometric input, or the like) happens locally, and then the local machine uses its private key combined with the server's knowledge of the public key to prove its identity.

WebAuthn accomplishes this with the navigator.credentials object in the browser, which handles dealing with the local security storage. You pass objects between this store and the server, first to register your local key and then subsequently to log in with it. There are a lot of details I'm glossing over here, in large part because I don't know the specifics myself, but that covers the gist.

While browsers have technically had similar cryptographic authentication for a very long time by way of client SSL certificates, this set of technologies makes great strides across the board - enough to make it actually practical to deploy for normal use. With Safari, anything with Touch ID or Face ID can participate in this, while on other platforms and browsers you can use either a security key or similar cryptographic storage. Since I have a little MacBook Air with Touch ID, I went with that.

The Flow

Before getting too much into the specifics, I'll talk about the flow. The general idea with WebAuthn is that the user gets to a point where they're creating a new account (where a password doesn't yet matter) or are logged in via another mechanism, and then they have their browser generate the keypair. In this case, I logged in using the normal Domino login form.

Once in, I have a button on the page that will request that the browser create the keypair. This first makes a request to the server for a challenge object appropriate for the user, then creates the keypair locally, and then POSTs the results back to the server for verification. To the user, that looks like:

WebAuthn key creation in Safari

That keypair will remain in the local device's storage - in this case, Keychain, synced via iCloud in upcoming versions.

Then, I have another button that performs a challenge. In practice, this challenge would be configured on the login page, but it's on the same page for this proof-of-concept. The button causes the browser to request from the server a list of acceptable public keys to go with a given username, and then prompts the user to verify that they want to use a matching one:

WebAuthn assertion in Safari

The implementation details of what to do on success and failure are kind of up to you. Here, I ended up storing active authentication sessions on the server in a Map keyed by the SessionID cookie value and pointing to the user name, but all options are open.
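
As a sketch of what that could look like - assuming the backing store is just a concurrent map behind methods along the lines of the registerAuthenticatedUser/getAuthenticatedUser pair used later - something like this would do:

import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

public class AuthenticatedSessionRegistry {
	// SessionID cookie value -> authenticated Domino user name
	private final Map<String, String> sessions = new ConcurrentHashMap<>();

	public void registerAuthenticatedUser(String sessionId, String userName) {
		sessions.put(sessionId, userName);
	}

	public Optional<String> getAuthenticatedUser(String sessionId) {
		return Optional.ofNullable(sessions.get(sessionId));
	}
}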

Dance Implementation

As I mentioned above, my main tool on the server side was java-webauthn-server, which handles a lot of the details of the process. For this experiment, I cribbed their InMemoryRegistrationStorage class, but a real implementation would presumably store this information in an NSF.
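
For context, configuring the library's RelyingParty object looks roughly like this - the identity values here are placeholders, and the storage object stands in for whatever CredentialRepository implementation backs the registrations:

import com.yubico.webauthn.CredentialRepository;
import com.yubico.webauthn.RelyingParty;
import com.yubico.webauthn.data.RelyingPartyIdentity;

// The relying-party identity ties credentials to this server's domain (placeholder values)
RelyingPartyIdentity identity = RelyingPartyIdentity.builder()
	.id("ceres.frostillic.us") //$NON-NLS-1$
	.name("Domino WebAuthn Demo") //$NON-NLS-1$
	.build();

// Assumed to implement CredentialRepository, as the cribbed demo class does
CredentialRepository storage = new InMemoryRegistrationStorage();

RelyingParty rp = RelyingParty.builder()
	.identity(identity)
	.credentialRepository(storage)
	.build();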

For the client side, there's an npm module to handle the specifics, but I was doing this all on a single HTML page, so I just borrowed from it in pieces (particularly the Base64/ByteArray stuff).

On the server, I created an OSGi plugin that contained a com.ibm.pvc.webcontainer.application webapp as well as the JavaSapi service: since they're both in the same OSGi bundle, that meant I could share the same classes and memory space, without having to worry about coordination between two very-distinct parts (as would be the case with DSAPI).

The webapp part itself actually does more of the work, and does so by way of four Servlets: one for the "create keys" options, one for registering a created key, one for the "I want to start a login" pre-flight, and finally one for handling the actual authentication. As an implementation note, getting this working involved temporarily removing guava.jar from the classpath, as the library in question makes heavy use of a Guava version slightly newer than the one Domino ships with.

Key Creation

The first Servlet assumes that the user is logged in and then provides a JSON object to the client describing what sort of keypair it should create:

Session session = ContextInfo.getUserSession();
			
PublicKeyCredentialCreationOptions request = WebauthnManager.instance.getRelyingParty()
	.startRegistration(
		StartRegistrationOptions.builder()
			// Creates or retrieves an in-memory object with an associated random "handle" value
		    .user(WebauthnManager.instance.locateUser(session))
		    .build()
	);

// Store in the HTTP session for later verification. Could also be done via cookie or other pairing
req.getSession(true).setAttribute(WebauthnManager.REQUEST_KEY, request);
	
String json = request.toCredentialsCreateJson();
resp.setStatus(200);
resp.setHeader("Content-Type", "application/json"); //$NON-NLS-1$ //$NON-NLS-2$
resp.getOutputStream().write(String.valueOf(json).getBytes());

The client side retrieves these options, parses some Base64'd parts to binary arrays (this is what the npm module would do), and then sends that back to the server to create the registration:

fetch("/webauthn/creationOptions", { "credentials": "same-origin" })
	.then(res => res.json())
	.then(json => {
		json.publicKey.challenge = base64urlToBuffer(json.publicKey.challenge, c => c.charCodeAt(0));
		json.publicKey.user.id = base64urlToBuffer(json.publicKey.user.id, c => c.charCodeAt(0));
		if(json.publicKey.excludeCredentials) {
			for(var i = 0; i < json.publicKey.excludeCredentials.length; i++) {
				var cred = json.publicKey.excludeCredentials[i];
				cred.id = base64urlToBuffer(cred.id);
			}
		}
		navigator.credentials.create(json)
			.then(credential => {
				// Create a JSON-friendly payload to send to the server
				const payload = {
					type: credential.type,
					id: credential.id,
					response: {
						attestationObject: bufferToBase64url(credential.response.attestationObject),
						clientDataJSON: bufferToBase64url(credential.response.clientDataJSON)
					},
					clientExtensionResults: credential.getClientExtensionResults()
				}
				fetch("/webauthn/create", {
					method: "POST",
					body: JSON.stringify(payload)
				})
			})
	})

The code for the second call on the server parses out the POST'd JSON and stores the registration in the in-memory storage (which would properly be an NSF):

String json = StreamUtil.readString(req.getReader());
PublicKeyCredential<AuthenticatorAttestationResponse, ClientRegistrationExtensionOutputs> pkc = PublicKeyCredential
	.parseRegistrationResponseJson(json);

// Retrieve the request we had stored in the session earlier
PublicKeyCredentialCreationOptions request = (PublicKeyCredentialCreationOptions) req.getSession(true)
	.getAttribute(WebauthnManager.REQUEST_KEY);

// Perform registration, which verifies that the incoming JSON matches the initiated request
RelyingParty rp = WebauthnManager.instance.getRelyingParty();
RegistrationResult result = rp
	.finishRegistration(FinishRegistrationOptions.builder().request(request).response(pkc).build());

// Gather the registration information to store in the server's credential repository
DominoRegistrationStorage repo = WebauthnManager.instance.getRepository();
CredentialRegistration reg = new CredentialRegistration();
reg.setAttestationMetadata(Optional.ofNullable(pkc.getResponse().getAttestation()));
reg.setUserIdentity(request.getUser());
reg.setRegistrationTime(Instant.now());
RegisteredCredential credential = RegisteredCredential.builder()
	.credentialId(result.getKeyId().getId())
	.userHandle(request.getUser().getId())
	.publicKeyCose(result.getPublicKeyCose())
	.signatureCount(result.getSignatureCount())
	.build();
reg.setCredential(credential);
reg.setTransports(pkc.getResponse().getTransports());

Session session = ContextInfo.getUserSession();
repo.addRegistrationByUsername(session.getEffectiveUserName(), reg);

Login/Assertion

Once the client has a keypair and the server knows about the public key, the client can ask the server what it would need in order to log in as a given name, and then use that information to make a second call. The dance on the client side looks like:

var un = /* fetch the username from somewhere, such as a login form */
fetch("/webauthn/assertionRequest?un=" + encodeURIComponent(un))
	.then(res => res.json())
	.then(json => {
		json.publicKey.challenge = base64urlToBuffer(json.publicKey.challenge, c => c.charCodeAt(0));
		if(json.publicKey.allowCredentials) {
			for(var i = 0; i < json.publicKey.allowCredentials.length; i++) {
				var cred = json.publicKey.allowCredentials[i];
				cred.id = base64urlToBuffer(cred.id);
			}
		}
		navigator.credentials.get(json)
			.then(credential => {
				const payload = {
					type: credential.type,
					id: credential.id,
					response: {
						authenticatorData: bufferToBase64url(credential.response.authenticatorData),
						clientDataJSON: bufferToBase64url(credential.response.clientDataJSON),
						signature: bufferToBase64url(credential.response.signature),
						userHandle: bufferToBase64url(credential.response.userHandle)
					},
					clientExtensionResults: credential.getClientExtensionResults()
				}
				fetch("/webauthn/assertion", {
					method: "POST",
					body: JSON.stringify(payload),
					credentials: "same-origin"
				})
			})
	})

That's pretty similar to the middle code block above, really, and contains the same sort of ferrying to and from transport-friendly JSON objects and native credential objects.

On the server side, the first Servlet - which looks up the available public keys for a user name - is comparatively simple:

String userName = req.getParameter("un"); //$NON-NLS-1$
AssertionRequest request = WebauthnManager.instance.getRelyingParty()
	.startAssertion(
		StartAssertionOptions.builder()
			.username(userName)
			.build()
	);

// Stash the current assertion request
req.getSession(true).setAttribute(WebauthnManager.ASSERTION_REQUEST_KEY, request);

String json = request.toCredentialsGetJson();
resp.setStatus(200);
resp.setHeader("Content-Type", "application/json"); //$NON-NLS-1$ //$NON-NLS-2$
resp.getOutputStream().write(String.valueOf(json).getBytes());

The final Servlet handles parsing out the incoming assertion (login) and stashing it in memory as associated with the "SessionID" cookie. That value could be anything that the browser will send with its requests, but "SessionID" works here.

String json = StreamUtil.readString(req.getReader());
PublicKeyCredential<AuthenticatorAssertionResponse, ClientAssertionExtensionOutputs> pkc =
	    PublicKeyCredential.parseAssertionResponseJson(json);

// Retrieve the request we had stored in the session earlier
AssertionRequest request = (AssertionRequest) req.getSession(true).getAttribute(WebauthnManager.ASSERTION_REQUEST_KEY);

// Perform verification, which will ensure that the signed value matches the public key and challenge
RelyingParty rp = WebauthnManager.instance.getRelyingParty();
AssertionResult result = rp.finishAssertion(FinishAssertionOptions.builder()
	.request(request)
	.response(pkc)
	.build());

if(result.isSuccess()) {
	// Keep track of logins
	WebauthnManager.instance.getRepository().updateSignatureCount(result);
	
	// Find the session cookie, which they will have by now
	String sessionId = Arrays.stream(req.getCookies())
		.filter(c -> "SessionID".equalsIgnoreCase(c.getName()))
		.map(Cookie::getValue)
		.findFirst()
		.get();
	WebauthnManager.instance.registerAuthenticatedUser(sessionId, result.getUsername());
}

Trusting the Key

At this point, there's a dance between the client and server that leaves the client able to perform secure, password-less authentication and the server aware of the association, so the remaining job is just getting the server to actually trust this assertion. That's where JavaSapi comes in.

Above, I used the "SessionID" cookie as a mechanism to store an in-memory association between a browser cookie (which is independent of authentication) and a trusted user. Then, I made a JavaSapi service that looks for this cookie and tries to find an authenticated user in its authenticate method:

@Override
public int authenticate(IJavaSapiHttpContextAdapter context) {
	Cookie[] cookies = context.getRequest().getCookies();
	if(cookies != null) {
		Optional<Cookie> cookie = Arrays.stream(context.getRequest().getCookies())
			.filter(c -> "SessionID".equalsIgnoreCase(c.getName())) //$NON-NLS-1$
			.findFirst();
		if(cookie.isPresent()) {
			String sessionId = cookie.get().getValue();
			Optional<String> user = WebauthnManager.instance.getAuthenticatedUser(sessionId);
			if(user.isPresent()) {
				context.getRequest().setAuthenticatedUserName(user.get(), "WebAuthn"); //$NON-NLS-1$
				return HTEXTENSION_REQUEST_AUTHENTICATED;
			}
		}
	}
	
	return HTEXTENSION_EVENT_DECLINED;
}

And that's all there is to it on the JavaSapi side. Because it shares the same active memory space as the webapp doing the dance, it can use the same WebauthnManager instance to read the in-memory association. You could in theory do this another way with DSAPI - storing the values in an NSF or some other mechanism that can be shared - but this is much, much simpler to do.

Conclusion

This was a neat little project, and it was a good way to learn a bit about some of the native browser objects and data types that I haven't had occasion to work with before. I think this is also something that should be in the product; if you agree, go vote for the ideas from a few years ago.

Rewriting The OpenNTF Site With Jakarta EE: UI

Mon Jun 27 15:06:20 EDT 2022

  1. Rewriting The OpenNTF Site With Jakarta EE, Part 1
  2. Rewriting The OpenNTF Site With Jakarta EE: UI

In what may be the last in this series for a bit, I'll talk about the current approach I'm taking for the UI for the new OpenNTF web site. This post will also tread ground I've covered before, when talking about the Jakarta MVC framework and JSP, but it never hurts to reinforce the pertinent aspects.

MVC

The entrypoint for the UI is Jakarta MVC, which is a framework that sits on top of JAX-RS. Unlike JSF or XPages, it leaves most app-structure duties to other components. This is due both to its young age (JSF predates, and often gave rise to, several of the things we've discussed so far) and to its intent. It's "action-based": you define an endpoint that takes an incoming HTTP request and produces a response, and there's generally no server-side UI state. This is as opposed to JSF/XPages, where the core concept is the page you're working with and the page state generally persists across multiple requests.

Your starting point with MVC is a JAX-RS REST service marked with @Controller:

package webapp.controller;

import java.text.MessageFormat;

import bean.EncoderBean;
import jakarta.inject.Inject;
import jakarta.mvc.Controller;
import jakarta.mvc.Models;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.NotFoundException;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import model.home.Page;

@Path("/pages")
public class PagesController {
    
    @Inject
    Models models;
    
    @Inject
    Page.Repository pageRepository;
    
    @Inject
    EncoderBean encoderBean;

    @Path("{pageId}")
    @GET
    @Produces(MediaType.TEXT_HTML)
    @Controller
    public String get(@PathParam("pageId") String pageId) {
        String key = encoderBean.cleanPageId(pageId);
        Page page = pageRepository.findBySubject(key)
            .orElseThrow(() -> new NotFoundException(MessageFormat.format("Unable to find page for ID: {0}", key)));
        models.put("page", page); //$NON-NLS-1$
        return "page.jsp"; //$NON-NLS-1$
    }
}

In the NSF, this will respond to requests like /foo.nsf/xsp/app/pages/Some_Page_Name. Most of what is going on here is the same sort of thing we saw with normal REST services: the @Path, @GET, @Produces, and @PathParam are all normal JAX-RS, while @Inject uses the same CDI scaffolding I talked about in the last post.

MVC adds two things here: @Inject Models models and @Controller.

The Models object is conceptually a Map that houses variables you can populate to be accessible via EL on the rendered page. You can think of it like viewScope or requestScope in XPages, populated in something like the beforePageLoad phase. Here, I use the Models object to store the Page object I look up with JNoSQL.

The @Controller annotation marks a method or a class as participating in the MVC lifecycle. When placed on a class, it applies to all methods on the class, while placing it on a method specifically allows you to mix MVC and "normal" REST resources in the same class. Doing that would be useful if you want to, for example, provide HTML responses to browsers and JSON responses to API clients at the same resource URL.
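
To illustrate that mixing, here's a hypothetical pair of methods building on the controller above - the JSON variant is illustrative and not part of the actual app:

@Path("{pageId}")
@GET
@Produces(MediaType.TEXT_HTML)
@Controller
public String getHtml(@PathParam("pageId") String pageId) {
	// MVC: stash the model and return the name of the view to render
	models.put("page", pageRepository.findBySubject(encoderBean.cleanPageId(pageId)) //$NON-NLS-1$
		.orElseThrow(NotFoundException::new));
	return "page.jsp"; //$NON-NLS-1$
}

@Path("{pageId}")
@GET
@Produces(MediaType.APPLICATION_JSON)
public Page getJson(@PathParam("pageId") String pageId) {
	// Plain JAX-RS: return the entity and let the JSON provider serialize it
	return pageRepository.findBySubject(encoderBean.cleanPageId(pageId))
		.orElseThrow(NotFoundException::new);
}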

When a resource method is marked for MVC use, it can return a string that represents either a page to render or a redirection in the form "redirect:some/resource". Here, it's hard-coded to use "page.jsp", but in another situation it could programmatically switch between different pages based on the content of the request or state of the app.

While this looks fairly clean on its own, it's important to bear in mind both the strengths and weaknesses of this approach. I think it will work here, as it does for my blog, because the OpenNTF site isn't heavy on interactive forms. When dealing with forms in MVC, you'll have to have another endpoint to listen for @POST (or other verbs with a shim), process that request from scratch, and return a new page. For example, from the XPages JEE example app:

@Path("create")
@POST
@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
@Controller
public String createPerson(
        @FormParam("firstName") @NotEmpty String firstName,
        @FormParam("lastName") String lastName,
        @FormParam("birthday") String birthday,
        @FormParam("favoriteTime") String favoriteTime,
        @FormParam("added") String added,
        @FormParam("customProperty") String customProperty
) {
    Person person = new Person();
    composePerson(person, firstName, lastName, birthday, favoriteTime, added, customProperty);
    
    personRepository.save(person);
    return "redirect:nosql/list";
}

That's already fiddlier than the XPages version, where you'd bind fields right to bean/document properties, and it gets potentially more complicated from there. In general, the more form-based your app is, the better a fit XPages/JSF is.

JSP

While MVC isn't intrinsically tied to JSP (it ships with several view engine hooks and you can write your own), JSP has the advantage of being built in to all Java webapp servers and is very well fit to purpose. When writing JSPs for MVC, the default location is to put them in WEB-INF/views, which is beneath WebContent in an NSF project:

Screenshot of JSPs in an NSF

The "tags" there are the general equivalent of XPages Custom Controls, and their presence in WEB-INF/tags is convention. An example page (the one used above) will tend to look something like this:

<%@page contentType="text/html" pageEncoding="UTF-8" trimDirectiveWhitespaces="true" session="false" %>
<%@taglib prefix="t" tagdir="/WEB-INF/tags" %>
<%@taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<%@taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %>
<t:layout>
    <turbo-frame id="page-content-${page.linkId}">
        <div>
            ${page.html}
        </div>
        
        <c:if test="${not empty page.childPageIds}">
            <div class="tab-container">
                <c:forEach items="${page.cleanChildPageIds}" var="pageId" varStatus="pageLoop">
                    <input type="radio" id="tab${pageLoop.index}" name="tab-group" ${pageLoop.index == 0 ? 'checked="checked"' : ''} />
                    <label for="tab${pageLoop.index}">${fn:escapeXml(encoder.cleanPageId(pageId))}</label>
                </c:forEach>
                    
                <div class="tabs">
                    <c:forEach items="${page.cleanChildPageIds}" var="pageId">
                        <turbo-frame id="page-content-${pageId}" src="xsp/app/pages/${encoder.urlEncode(pageId)}" class="tab" loading="lazy">
                        </turbo-frame>
                    </c:forEach>
                </div>
            </div>
        </c:if>
    </turbo-frame>
</t:layout>

There are, by shared lineage and concept, a lot of similarities with an XPage here. The first four lines of preamble boilerplate are pretty similar to the kind of stuff you'd see in an <xp:view/> element to set up your namespaces and page options. The tag prefixing is the same idea, where <t:layout/> refers to the "layout" custom tag in the NSF and <c:forEach/> refers to a core control tag that ships with the standard tag library, JSTL. The <turbo-frame/> business isn't JSP - I'll deal with that later.

The bits of EL here - all wrapped in ${...} - are from Expression Language 4.0, the current version of the spec behind XPages's aging EL implementation. On this page, the expressions are able to resolve variables that we explicitly put in the Models object, such as page, as well as CDI beans with the @Named annotation, such as encoderBean. There are also a number of implicit objects like request, but they're not used here.
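
For illustration, the kind of bean that makes this work might look like the following - the @Named value and the method body are assumptions based on how it's used in the JSP, not the real implementation:

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Named;

@ApplicationScoped
@Named("encoder") // hypothetical: exposes the bean to EL as ${encoder}
public class EncoderBean {
	public String cleanPageId(String pageId) {
		// Illustrative only: normalize the page name for lookups and URLs
		return pageId == null ? "" : pageId.trim().replace(' ', '_'); //$NON-NLS-1$
	}
}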

In general, this is safely thought of as an XPage where you make everything load-time-bound and set viewState="nostate". The same sorts of concepts are all there, but there's no concept of a persistent component that you interact with. Any links, buttons, and scripts will all go to the server as a fresh request, not modifying an existing page. You can work with application and session scopes, but there's no "view" scope.

Hotwired Turbo

Though this app doesn't have much need for a lot of XPages's capabilities, I do like a few components even for a mostly "read-only" app. In particular, the <xe:djContentPane/> and <xe:djTabContainer/> controls have the delightful capability of deferring evaluation of their contents to later requests. This is a powerful way to speed up initial page load and, in the case of the tab container, skip needing to render parts of the page the user never uses.

For this and a couple other uses, I'm a fan of Hotwired Turbo, which is a library that grew out of 37 Signals's Rails-based development. The goal of Turbo and the other Hotwired components is to keep the benefits of server-based HTML rendering while mixing in a lot of the niceties of JS-run apps. There are two things that Turbo is doing so far in this app.

The first capability is dubbed "Turbo Drive", and it's sort of a freebie: you enable it for your app, tell it what is considered the app's base URL, and then it will turn any in-app links into "partial refresh" links: it downloads the page in the background and replaces just the changed part on the page. Though this is technically doing more work than a normal browser navigation, it ends up being faster for the user interface. And, since it also updates the URL to match the destination page and doesn't require manual modification of links, it's a drop-in upgrade that will also degrade gracefully if JavaScript isn't enabled.

The second capability is <turbo-frame/> up there, and it takes a bit more buy-in to the JS framework in your app design. The way I'm using Turbo Frames here is to support the page structure of OpenNTF, which is geared around a "primary" page as well as zero or more referenced pages that show up in tabs. Here, I'm buying in to Turbo Frames by surrounding the whole page in a <turbo-frame/> element with an id using the page's key, and then I reference each "sub-page" in a tab with that same ID. When loading the frame, Turbo makes a call to the src page, finds the element with the matching id value, and drops it in place inside the main document. The loading="lazy" parameter means that it defers loading until the frame is visible in the browser, which is handy when using the HTML/CSS-based tabs I have here.

I've been using this library for a while now, and I've been quite pleased. Though it was created for use with Rails, the design is independent of the server implementation, and the idioms fit perfectly with this sort of Java app too.

Conclusion

I think that wraps it up for now. As things progress, I may have more to add to this series, but my hope is that the app doesn't have to get much more complicated than the sort of stuff seen in this series. There are certainly big parts to tackle (like creating and managing projects), but I plan to do that by composing these elements. I remain delighted with this mode of NSF-based app development, and look forward to writing more clean, semi-declarative code in this vein.