Showing posts for tag "jnosql"

Jakarta NoSQL Driver for the AppDev Pack, Part 2

Mon Sep 26 09:55:05 EDT 2022

Tags: java jnosql
  1. Jakarta NoSQL Driver for the AppDev Pack, Part 1
  2. Jakarta NoSQL Driver for the AppDev Pack, Part 2

In my last post, I talked about how I implemented a partial Jakarta NoSQL driver using the AppDev Pack as a back end instead of the Notes.jar classes used by the primary implementation. Though the limitations in the ADP mean that it lacks a number of useful features compared to the primary one, it was still an interesting experiment and has the nice side effect of working with essentially any Java app server and Java version 8 or above.

Beyond the Proton API calls, the driver brought up the interesting topic of handling authentication. Proton has three ways of working in this regard:

  • Anonymous, which is what you might expect based on how that works elsewhere in Domino. This is easy but not particularly useful except in specific circumstances.
  • Client certificate authentication, where you create a TLS keychain for a given user and associate it with a Directory user (e.g. CN=My Proton App/O=MyOrg), and then your app performs all operations as that user. This is basically like if you ran a remote app with NRPC using a client Notes ID.
  • Act-as-User, which builds on the above authentication by configuring an OAuth broker service that can hand out OIDC tokens on behalf of named users. This is sort of like server-to-server communication with the "Trusted Servers" config field in the server doc, but different in key ways.

Client Certificate Authentication

When doing app development, the middle route makes sense as your starting point, since most of your actual work will likely be the same regardless of whether you later add on Act-as-User support. For that, you'll follow the guide to set up your TLS keychain and then feed those files to the com.hcl.domino.db.model.Server object:

Path base = Paths.get(BASE_PATH);
		
File ca = base.resolve("rootcrt.pem").toFile();
File cert = base.resolve("clientcrt.pem").toFile();
File key = base.resolve("clientkey.pem").toFile();

Server server = new Server("ceres.frostillic.us", 3003, ca, cert, key, null, null, Executors.newSingleThreadExecutor());

The use of java.io.File classes here is a bit of a shame, but not the end of the world. In practice, you'd likely store your keychain somewhere on the filesystem anyway and then feed the BASE_PATH property to your app via an environment variable. Otherwise, if they were pulled from some other source, you could use Files.createTempFile to store them on the filesystem while your app is running. Those nulls are for the passphrases for the certificates, so they might be populated for you. For the last parameter, making a new executor is fine, but you might want to hand it a ManagedExecutorService.
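
As a minimal sketch of that temp-file approach, assuming the PEM contents arrive in environment variables (the variable names here are illustrative):

Path caPath = Files.createTempFile("rootcrt", ".pem");
Files.write(caPath, System.getenv("PROTON_CA_PEM").getBytes(StandardCharsets.UTF_8));
caPath.toFile().deleteOnExit();
File ca = caPath.toFile();
// ...and the same for the client certificate and key files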

You can initialize this connection basically anywhere - I put it in a ServletRequestListener to init and term the object per-request, but I think it would actually be fine to do it in a ServletContextListener and keep it app-wide. I did it out of habit from Notes.jar and its heavy requirements on threads and the way Session objects are different per-user, but that's not how Proton connections work.
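
For illustration, a minimal sketch of that app-wide approach, assuming (per the above) that a single shared Server is safe across requests - the attribute name is arbitrary:

@WebListener
public class ProtonContextListener implements ServletContextListener {
	public static final String ATTR_SERVER = "protonServer";

	@Override
	public void contextInitialized(ServletContextEvent sce) {
		Path base = Paths.get(System.getenv("BASE_PATH"));
		File ca = base.resolve("rootcrt.pem").toFile();
		File cert = base.resolve("clientcrt.pem").toFile();
		File key = base.resolve("clientkey.pem").toFile();
		Server server = new Server("ceres.frostillic.us", 3003, ca, cert, key, null, null,
			Executors.newSingleThreadExecutor());
		sce.getServletContext().setAttribute(ATTR_SERVER, server);
	}

	@Override
	public void contextDestroyed(ServletContextEvent sce) {
		// Release the connection here, if the Server API exposes a close/term call
	}
}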

This form of authentication is a prerequisite for Act-as-User below, but it might suffice for your needs anyway. For example, if you have a "utility" app, like a bot that looks up data and posts messages to Slack or something, you can call it good here.

Act-as-User

But though the above may suffice sometimes, the integration of user identity with data access is one of the hallmarks of Domino, so Domino apps that wouldn't need Act-as-User support are few and far between.

Act-as-User is a bit daunting to set up, though. In traditional server-to-server communication in Domino, it suffices to just add a server's name or group to the "Trusted Servers" list in the server doc of the server being accessed. Then, it will trust any old name that the app-housing server sends along. Generally, this will be a user that was already authenticated with Domino, like using an XPages-supplied Session object to access a remote server, but it doesn't have to be.

Act-as-User, though, uses OpenID Connect with an authentication server to do the actual authentication, and then the Proton task is told to accept those tokens as legal for acting on behalf of a given user. While you could in theory write your own OIDC server that dispenses tokens for any user name willy-nilly, in practice you'll almost definitely use an existing implementation. In the default case, that implementation will in turn almost definitely be IAM, which is an OAuth broker service shipped with the AppDev Pack that stores its configuration data in an NSF and reads users from Domino (or elsewhere) via LDAP.

IAM, though, isn't special in this regard. It's packaged with the ADP, sure, but the way it deals with tokens is entirely standards-based. That means that any compatible implementation can fill this role, and, since I'd heard great things about Keycloak, I figured I'd give that a shot. With some gracious assistance from Heiko Voigt, I was able to get this working - I don't want to steal his future thunder by going into too much detail, but honestly the main hurdles for me were just around learning how Keycloak works. Once you have the concepts down, you basically plug in the Keycloak client details in for the IAM ones in the same configuration.

With that set up, you can feed your user token from the web app into your Proton API calls, and then your actions will be running as your user in the same way as if you were authenticated in an on-server XPages or other app. The way this manifests in the Java API is a little weird, but it works well enough: almost all Proton calls have a varargs portion at the end of their method signature that takes OptionalArg instances. One such type is OptionalAccessToken, which takes your auth token as a String. I have a method that will stitch in an access token when present. That gets passed in when making calls, such as to read documents:

OptionalItemNames itemNamesArg = new OptionalItemNames(itemNames);
OptionalStart startArg = new OptionalStart((int)skip);
OptionalCount countArg = new OptionalCount(limit < 1 ? Integer.MAX_VALUE : (int)limit);
			
List<Document> docs = database.readDocuments(
	dql,
	composeArgs(
		itemNamesArg,
		startArg,
		countArg
	)
).get();
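
For reference, that composeArgs method amounts to something like this sketch, where accessTokenSupplier is the CDI-provided supplier shown in the previous post:

private OptionalArg[] composeArgs(OptionalArg... args) {
	String token = this.accessTokenSupplier == null ? null : this.accessTokenSupplier.get();
	if (token == null || token.isEmpty()) {
		return args;
	}
	// Stitch in an OptionalAccessToken so Proton performs the call as the named user
	OptionalArg[] result = Arrays.copyOf(args, args.length + 1);
	result[args.length] = new OptionalAccessToken(token);
	return result;
}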

App Authentication

Okay, so that's what you do when you have a setup and a token, but that leaves the process of the user actually acquiring the token. From the user's perspective, this will generally take the form of doing an "OAuth dance" where, when the user tries to access a protected resource, they're sent over to Keycloak to authenticate, which then sends them back to the app with token in hand.

There are a lot of ways one might accomplish this, varying language-to-language, framework-to-framework, and server-to-server. You will be shocked to learn that I'm using Open Liberty for my app here, and that comes with built-in support for OIDC.

Before I go further, I should put forth a big caveat: I'm really muddling through with this one for the time being. The setup I have only kind of works, and is clearly not the ideal one, but it was enough to make the connection happen. I'm not sure if the right path long-term is to keep using this built-in feature or to switch to either a different built-in option or another library entirely. So... absolutely do not take anything here as advice in the correct way to do this.

Anyway, with that out of the way, you can configure your Liberty server to talk to your Keycloak server (or IAM, probably, but I didn't do that):

<openidConnectClient id="client01"
    clientId="liberty-tester"
    clientSecret="some-client-secret"
    discoveryEndpointUrl="https://some.keycloak.server/auth/realms/master/.well-known/openid-configuration"
    signatureAlgorithm="RS256"
    sslRef="httpSsl"
    accessTokenInLtpaCookie="true"
    userIdentifier="preferred_username"
    groupIdentifier="groups">
</openidConnectClient>
<ssl id="httpSsl" trustDefaultCerts="true" keyStoreRef="myKeyStore" trustStoreRef="OIDCTrustStore" />
<keyStore id="myKeyStore" password="super-secure-password1" type="PKCS12" location="${server.config.dir}/BasicKeyStore.p12" />
<keyStore id="OIDCTrustStore" password="super-secure-password2" type="PKCS12" location="${server.config.dir}/OIDCTrustStore.p12" />

The keyStores there contain appropriate certificate chains for the TLS connection to your Keycloak server, while the clientId and clientSecret match what you configure/generate on Keycloak for this new client app.

What got me able to actually use this token for downstream access was the accessTokenInLtpaCookie property. If you set that, then your HttpServletRequest objects after the initial one will have an oidc_access_token property on them containing your token in the format that Proton needs. So that's where the ContextDatabaseSupplier in the previous post got it:

@Produces
public AccessTokenSupplier getAccessToken() {
	return () -> {
		HttpServletRequest request = CDI.current().select(HttpServletRequest.class).get();
		return (String)request.getAttribute("oidc_access_token");
	};
}

This is also one of the parts that makes me think I'm not quite doing this ideally. It's weird that the token shows up only in requests after the first, though that wouldn't be an impediment in a lot of app types. It's also very unfortunate to have the app use a server-specific property like that.

Fortunately, Jakarta Security 3.0 sprouted official OIDC support, though the build of Open Liberty I was using didn't quite have all the pieces in place for that - reasonable, considering Jakarta EE 10 only officially came out yesterday and this was weeks ago. It looks like that may provide the token in a contextual object, so I'll have to give that a shot once support settles in.
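
Based on my reading of the spec, the supplier above could become something like this sketch using the new OpenIdContext type - untested on my end, so treat it as a direction rather than working code:

@RequestScoped
public class JakartaOidcTokenSupplier {
	@Inject
	private OpenIdContext openIdContext; // from jakarta.security.enterprise.identitystore.openid

	@Produces
	public AccessTokenSupplier getAccessToken() {
		return () -> openIdContext.getAccessToken().getToken();
	}
}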

With my setup in place, janky as it may be, I'm able to access a resource (e.g. a JAX-RS endpoint) marked with @RolesAllowed("uma_authorization") and the server will automatically kick me over to Keycloak and then accept the token when I get back. Then, I can pick that up from the request attributes and use it for Domino data access. Keycloak is getting its user directory from Domino via LDAP in the same way as IAM usually would, but, like IAM with AD, it could be configured to use different user directories. I don't know that I'll want to do that, but it's good to know.

Conclusion

Like the original driver itself, this was mostly an educational exercise for me. I don't currently have any requirements to use the AppDev Pack or OIDC/Keycloak, but I'd wanted to dip my toes in both for a while now, and I'm pleased that I came out successful. I imagine that I'll have an occasion to implement something like this eventually. It may not be the same specific parts, but the core concepts are common, like in Keep's JWT and OAuth support. It's a neat setup, and it's definitely worth doing something similar if you have some experimentation time on your hands.

Jakarta NoSQL Driver for the AppDev Pack, Part 1

Fri Sep 09 10:20:28 EDT 2022

Tags: java jnosql
  1. Jakarta NoSQL Driver for the AppDev Pack, Part 1
  2. Jakarta NoSQL Driver for the AppDev Pack, Part 2

Though the bulk of the work I've been doing for the XPages Jakarta EE project is to bring JEE technologies to Domino, the NoSQL driver has been designed to lead a double life: it works in an XPages context, but it's written to not have any XPages dependencies. One reason for this is that I want it to be usable if you use, for example, the Open Liberty runtime project to side-car apps on a Domino server but still use Notes.jar for data access.

Another reason for its organization, though, is that I intended for the driver to be portable across implementations. The driver itself is split into a main bundle and a ".lsxbe" implementation bundle. My original thought was to make that ready for a JNX or Domino JNA implementation, but it's pretty flexible.

Proton

And that brings me to the topic of this post: the AppDev Pack. I'd actually not used the AppDev Pack at all until I started in on this little project. While I liked the idea, it at first had only a JS client, and even the later Java client lacked some important capabilities I'd need for most of what I do. That's still the case, unfortunately, but my general annoyance with the problems of developing on top of a local Notes runtime tipped the scales and got me to investigate it.

In essence, the core part of the AppDev Pack - the Proton addin - is kind of like a modern take on the CORBA driver for Notes.jar, in that it provides a remote way to make API calls on the server that are "low-level-ish" and ideally higher-performance than REST APIs. Since it's remote, it imposes no requirements on the environment of the app making the calls, and could in theory work with any language that has a gRPC library (though in practice HCL has only ever shipped JS and Java clients without providing the gRPC spec to generate others).

The API that Proton provides is heavily geared around batch operations, focusing on using DQL for querying. This suits my needs well, as Jakarta NoSQL essentially assumes you'll have something like DQL available to do arbitrary document queries. It's also geared around requesting specific items from documents to return, which further suits me well - for efficiency purposes, I wrote code into the LSXBE driver to only read desired items anyway, so that adapted naturally.

The Driver

So, over the long weekend here, I set out to write a driver suitable for use in standalone JEE applications to access Domino via Proton. The goal here is to make it so that code written to target the LSXBE driver will work the same way with this Proton-based one, with the only differences being limitations in what Proton provides and a switch from providing a lotus.domino.Database context to providing a com.hcl.domino.db.model.Database one.

And, though the limitation list is a bit lengthy, I accomplished the main part of my goal. Since it shares a lot of code in common with the LSXBE driver, the implementation here is pretty small. The bulk of the code happens in ProtonDocumentCollectionManager, which is the implementation of all the raw JNoSQL primitives - insert, update, select, etc. - and then ProtonEntityConverter, which is the utility class that translates between Proton's concept of Document and JNoSQL's concept of DocumentEntity.

The Proton API is... very weird from a Java perspective. Some of it bears a little similarity to the NIO Files API if you squint, but otherwise it's just an odd duck. Fortunately, a consumer of this driver doesn't have to care about that beyond the initial connection to the server. The specifics of actually connecting to a server and opening a database are the same as with normal Proton use - you do that and then provide a CDI bean to hand off the Database object to the driver:

@RequestScoped
public class ContextDatabaseSupplier {
	// This type of supplier is required
	@Produces
	public DatabaseSupplier get() {
		// Here, the `Database` object is set in a `ServletRequestListener`, though it likely could be app-wide
		return () -> {
			HttpServletRequest request = CDI.current().select(HttpServletRequest.class).get();
			return (Database)request.getAttribute(DatabaseContextListener.ATTR_DB);
		};
	}
	
	// This supplier is optional - leaving it out or returning null will cause the driver to skip performing Act-As-User
	@Produces
	public AccessTokenSupplier getAccessToken() {
		return () -> {
			HttpServletRequest request = CDI.current().select(HttpServletRequest.class).get();
			return (String)request.getAttribute("oidc_access_token");
		};
	}
}

Next Steps

Now, admittedly, I'm not sure how much more time I'm going to put into this driver. Though there are a few enhancements that could be made - attachments and rich text, namely - I'm probably not going to use this myself in the near future. The immediate cases I have where I'd want to use this would end up involving things like views that aren't currently accessible over Proton, and so it'd really just be speculative development. Still, I wanted to make this to kick the tires a bit and see what's possible, and it's got my gears turning.

There'll also be a bit more I want to go over in the next post. In the above code block, I mention Act-As-User tokens; the driver will use those when provided, and the driver itself doesn't care how you acquire them, but it's an interesting topic on its own. During my testing, I was able to get a Liberty-hosted app performing OIDC authentication for its own login and then using that token for access to Proton, and I think that that will deserve some expansion on its own.

In the meantime, if you're curious, the driver is available up on GitHub:

https://github.com/OpenNTF/jnosql-driver-proton

The artifact is published to OpenNTF's Maven repository, though you'll need to follow the instructions in the README there to add the non-redistributable Proton client library to your local Maven repo.

Working Domino Views Into Jakarta NoSQL

Sun Jun 12 15:33:47 EDT 2022

A few versions ago, I added Jakarta NoSQL support to the XPages Jakarta EE Support project. For that, I used DQL and QueryResultsProcessor exclusively, since it's a near-exact match for the way JNoSQL normally does things and QRP brought the setup into the realm of "good enough for the normal case".

However, as I've been working on a project that puts this to use, the limitations have started to hold me back.

The Limitations

The first trouble I ran into was the need to list, for example, the most recent 20 of an entity. This is something that QRP took some steps to handle, but it still has to build the pseudo-view anew the first time and then any time documents change. This gets prohibitively expensive quickly. In theory, QRP has enough flexibility to use existing views for sorting, but it doesn't appear to do so yet. Additionally, its "max entries" and "max documents" values are purely execution limits and not something to use to give a subset report: they throw an exception when that many entries have been processed, not just stop execution. For some of this, one can deal with it when manually writing the DQL query, but the driver doesn't have a path to do so.

The second trouble I ran into was the need to get a list composed of multiple kinds of documents. This one is a limitation of the default idiom that JNoSQL uses, where you do queries on named types of documents - and, in the Domino driver, that "type" corresponds to Form field values.

The Uncomfortable Solution

Thus, hat in hand, I returned to the design element I had hoped to skim past: views. Views are an important tool, but they are way, way overused in Domino, and I've been trying over time to intentionally limit my use of them to break the habit. Still, they're obviously the correct tool for both of these jobs.

So I made myself an issue to track this and set about tinkering with some ways to make use of them in a way that would do what I need, be flexible for future needs, and yet not break the core conceit of JNoSQL too much. My goal is to make almost no calls to an explicit Domino API, and so doing this will be a major step in that direction.

Jakarta NoSQL's Extensibility

Fortunately for me, Jakarta NoSQL is explicitly intended to be extensible per driver, since NoSQL databases diverge more wildly in the basics than SQL databases tend to. I made use of this in the Darwino driver to provide support for stored cursors, full-text search, and JSQL, though all of those had the bent of still returning full documents and not "view entries" in the Domino sense.

Still, the idea is very similar. Jakarta NoSQL encourages a driver author to write custom annotations for repository methods to provide hints to the driver to customize behavior. This generally happens at the "mapping" layer of the framework, which is largely CDI-based and gives you a lot of room to intercept and customize requests from the app-developer level.

Implementation

To start out with, I added two annotations you can add to your repository methods: @ViewEntries and @ViewDocuments. For example:

@RepositoryProvider("blogRepository")
public interface BlogEntryRepository extends DominoRepository<BlogEntry, String> {
    public static final String VIEW_BLOGS = "vw_Content_Blogs"; //$NON-NLS-1$
    
    @ViewDocuments(value=VIEW_BLOGS, maxLevel=0)
    Stream<BlogEntry> findRecent(Pagination pagination);
    
    @ViewEntries(value=VIEW_BLOGS, maxLevel=0)
    Stream<BlogEntry> findAll();
}

The distinction here is one of the ways I slightly break the main JNoSQL idioms. JNoSQL was born from the types of databases where it's just as easy to retrieve the entire document as it is to retrieve part - this is absolutely the case in JSON-based systems like Couchbase (setting aside attachments). However, Domino doesn't quite work that way: it can be significantly faster to fetch only a portion of a document than the data from all items, namely when some of those items are rich text or MIME.

The @ViewEntries annotation causes the driver to consider only the item values found in the entries of the view it's referencing. In a lot of cases, this is all you'll need. When you set a column in Designer to show an item value from the documents directly, the column by default takes the item's name, and so a mapped entity pulled from view entries can have the same fields filled in as one pulled from a document. This does have the weird characteristic where objects pulled from one method may have different instance values from the "same" objects from another method, but the tradeoff is worth it.

@ViewDocuments, fortunately, doesn't have this oddity. With that annotation, documents are processed in the same way as with a normal query; they just are retrieved according to the selection and order from the backing view.

Using these capabilities allowed me to slightly break the JNoSQL idiom in the other way I needed: reading unrelated document types in one go. For this, I cheated a bit and made a "document" type with a form name that doesn't correspond to anything, and then made the mapped items based on the view name. So I created this entity class:

@Entity("ProjectActivity")
public class ProjectActivity {
    @Column("$10")
    private String projectName;
    @Column("Entry_Date")
    private OffsetDateTime date;
    @Column("$12")
    private String createdBy;
    @Column("Form")
    private String form;
    @Column("subject")
    private String subject;

    /* snip */
}

As you might expect, no form has a field named $10, but that is the name of the view column, and so the mapping layer happily populates these objects from the view when configured like so:

@RepositoryProvider("projectsRepository")
public interface ProjectActivityRepository extends DominoRepository<ProjectActivity, String> {
    @ViewEntries("AllbyDate")
    Stream<ProjectActivity> findByProjectName(@ViewCategory String projectName);
}

These are a little weird in that you wouldn't want to save such entities lest you break your data, but, as long as you keep that in mind, it's not a bad way to solve the problem.

Future Changes

Since this implementation was based on fulfilling just my immediate needs and isn't the result of careful consideration, it's likely to be something that I'll revisit as I go. For example, that last example shows the third custom annotation I introduced: @ViewCategory. I wanted to restrict entries to a category that is specified programmatically as part of the query, and so annotating the method parameter was a great way to do that. However, there are all sorts of things one might want to do dynamically when querying a view: setting the max level programmatically, specifying expand/collapse behavior, and so forth. I don't know yet whether I'll want to handle those by having a growing number of parameter annotations like that or if it would make more sense to consolidate them into a single ViewQueryOptions parameter or something.

I also haven't done anything special with category or total rows. While they should just show up in the list like any other entry, there's currently nothing special signifying them, and I don't have a way to get to the note ID either (just the UNID). I'll probably want to create special pseudo-items like @total or @category to indicate their status.

There'll also no doubt be a massive wave of work to do when I turn this on something that's not just a little side project. While I've made great strides in my oft-mentioned large client project to get it to be more platform-independent, it's unsurprisingly still riven with Domino API references top to bottom. While I don't plan on moving it anywhere else, writing so much code using explicit database-specific API calls is just bad practice in general, and getting this driver to a point where it can serve that project's needs would be a major sign of its maturity.

Video Series On The XPages Jakarta EE Project

Mon Feb 07 15:54:15 EST 2022

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

Over the last two weeks, Graham Acres and I recorded a video series for OpenNTF about my XPages Jakarta EE Support project, which has seen a flurry of development in the last few months. The 15-part series is up on YouTube.

The project itself saw the release of version 2.3.0 today, which is the first release with the Jakarta NoSQL driver I blogged about recently.

I think the project has turned into a pretty-interesting "platform update" for XPages, and I hope the video series captures that a bit. I'm still mulling over a sort of "thesis statement" about the whole thing, but for now describing the various new capabilities and how they interact will have to suffice.

Implementing a Basic JNoSQL Driver for Domino

Tue Jan 25 13:36:06 EST 2022

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

A few weeks back, I talked about my use of DQL and QRP in writing a JNoSQL driver for Domino. In that, I left the specifics of the JNoSQL side out and focused on the Domino side, but that former part certainly warrants some expansion as well.

Background

As a quick overview, Jakarta NoSQL is an approaching-finalization spec for working with NoSQL databases of various stripes in a Jakarta EE app. This is as opposed to the venerable JPA, which is a long-standing API for working with RDBMSes in JEE.

JNoSQL is the implementation of the Jakarta NoSQL spec, and is also an Eclipse project. As a historical note, the individual components of the implementation used to have Greek-mythological names, which is why older drivers like my Darwino driver or original Domino driver are sprinkled with references to "Diana" and "Artemis". The "JNoSQL" name also pre-dates its reification into a Jakarta spec - normally, spec names and implementations aren't quite so similarly named.

The specification is broken up into two main categories. The README for the implementation describes this well, but the summary is:

  1. "Communication" handles interpreting JNoSQL CRUD operations and actually applying them to the database.
  2. "Mapping" handles what the app developer interacts with: annotating classes to relate them to the back-end database and querying object repositories.

An individual driver may include code for both sides of this, but only the Communication side is obligatory to implement. A driver would also contribute to the Mapping side if it wants to provide database-specific higher-level concepts. For example, the Darwino driver does this to provide explicit annotations for its full-text search, stored-cursor, and JSQL capabilities. I may do similarly in the Domino driver to expose FT search, view operations, or DQL queries directly.

Jakarta NoSQL handles Key-Value, Column, Graph, and Document data stores, but we only care about the last category for now.

Implementation Overview

Now, on to the actual implementation in question. The handful of classes in the implementation fall into a few categories, covered in the sections below.

Implementation Details

The core entrypoint for data operations is DefaultDominoDocumentCollectionManager, and JNoSQL specifies a few main operations to implement, basically CRUD plus total count:

  • insert and its overloads handle taking an abstract DocumentEntity from JNoSQL and turning it into a new lotus.domino.Document in the target database.
  • update does similarly, but with the assumption that the incoming entity represents a modification to an existing document.
  • delete takes an incoming abstract query and deletes all documents matching it.
  • select takes an incoming abstract query, finds matching documents, converts them to a neutral format, and returns them to JNoSQL. This is what my earlier post was all about.
  • count retrieves a count for all documents in a "collection". "Collection" here is a MongoDB-ism and the most-practical Domino equivalent is "documents with a specific Form name".

Entity Conversion

The insert, update, and select methods have as part of their jobs the task of translating between Domino's storage and JNoSQL's intermediate representation, and this happens in EntityConverter.

Now, this point of the code has some... nomenclature-based issues. There's lotus.domino.Document, our legacy representation of a Domino document handle. Then, there's jakarta.nosql.document.Document: this oddly-named interface actually represents a single key-value pair within the conceptual document - roughly, this corresponds to lotus.domino.Item. Finally, there's jakarta.nosql.document.DocumentEntity, which is the higher-level representation of a conceptual document on the JNoSQL side, and this contains many jakarta.nosql.document.Documents. This all works out in practice, but it's important to know about when you look into the implementation code.
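
To put the naming in concrete terms, building an entity on the JNoSQL side goes something like this (the values are arbitrary examples):

// The DocumentEntity is the conceptual document...
DocumentEntity entity = DocumentEntity.of("BlogEntry");
// ...while each jakarta.nosql.document.Document is a single key-value pair, like an Item
entity.add(Document.of("Title", "My Post"));
entity.add(Document.of("Posted", Instant.now()));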

The first couple methods in this utility class handle converting query results of different types: QRP result JSON, QRP result views, and generic DocumentCollections. Strictly speaking, I could remove the first one now that it's unused, but there's a non-zero chance that I'll return to it if it ends up being efficient down the line.

Each of those methods will eventually call toDocuments, which converts a lotus.domino.Document object to an equivalent List of JNoSQL Documents (i.e. the individual key-value pairs). Due to the way JNoSQL works, this method has no way to know which fields the higher level will actually want, so it attempts to convert all items in the document to more-common Java types. There's much more work to do here, some of it based on just needing to add other types (like improving rich text handling) and some of it based on needing a better Notes API (like proper conversion from Notes times to java.time).

In the other direction, there's the method that converts from a JNoSQL DocumentEntity to a Domino document, which is used by the insert and update methods. This converts some known common incoming types and converts them to Domino item values. Like the earlier methods, this could use some work in translating types, but that's also something that a better Notes API could handle for me.

Query Conversion

The QueryConverter class has a slightly-simpler job: taking JNoSQL's concept of a query and translating it to a document selection.

The jakarta.nosql.document.DocumentQuery type does a bit of double duty: it's used both for arbitrary queries (Foo='Bar'-type stuff) as well as for selecting documents by UNID. The select method covers that, producing a QueryConverterResult object to ferry the important information back to DefaultDominoDocumentCollectionManager.

The core work of QueryConverter is in getCondition, where it performs an AST-to-DQL conversion. JNoSQL has a couple mechanisms for querying entities: explicit Java-based queries, implicit queries based on repository definition, or a SQL-like query language. Regardless of what the higher level does to query, though, it comes to the Communication driver as this tree of objects (technically, the driver can handle the last specially, but by default it arrives parsed).

Fortunately, this sort of work is a common idiom. You start with the top node of the tree, handle it based on its type and, as needed, recurse down into the next node. So if, for example, the top node is an EQUALS, all this converter needs to do is return the DQL representation of "this field equals that value", and so forth for other comparison operators. If it encounters AND, OR, or NOT, then the job changes to making a composite query of that operator plus the results of converting whatever the operator is applied to - which is where the recursion back into the same method comes in.
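
As a self-contained illustration of that idiom - using hypothetical node classes rather than JNoSQL's actual condition types:

import java.util.List;
import java.util.stream.Collectors;

public class DqlConversionSketch {
	interface Node { }
	record Equals(String item, Object value) implements Node { }
	record And(List<Node> children) implements Node { }
	record Not(Node child) implements Node { }

	static String toDql(Node node) {
		if (node instanceof Equals eq) {
			// Leaf nodes translate directly to a DQL comparison
			Object val = eq.value();
			String lit = val instanceof Number ? val.toString() : "'" + val + "'";
			return eq.item() + " = " + lit;
		} else if (node instanceof And and) {
			// Composite operators recurse into their children and join the results
			return and.children().stream()
				.map(DqlConversionSketch::toDql)
				.collect(Collectors.joining(" and ", "(", ")"));
		} else if (node instanceof Not not) {
			return "not (" + toDql(not.child()) + ")";
		}
		throw new IllegalArgumentException("Unhandled node type: " + node);
	}

	public static void main(String[] args) {
		Node query = new And(List.of(
			new Equals("Form", "BlogEntry"),
			new Not(new Equals("Draft", 1))
		));
		System.out.println(toDql(query)); // (Form = 'BlogEntry' and not (Draft = 1))
	}
}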

Future Work

The main immediate work to do here is enhancing the data conversion: handling more outgoing Domino item types and incoming Java object types. A good deal of this can be done as-is, but doing some other parts reliably will be best done by changing out the specific Notes API in use. I used lotus.domino because it's present already, but it's a placeholder for sure.

There are also a bunch of efficiency tweaks I can make: more lazy loading in conversion, optimizing data fetching for specific queries, and logging DQL explain results for developers.

Beyond that, I'll have to consider if it's worth adding extensions to the mapping side. As I mentioned, the Darwino driver has some extensions for its JSQL language and similar concepts, and it's possible that it'd be worth adding similar things for Domino, in particular direct FT searching. That said, DQL does a pretty good job being the all-consuming target, and so translating JNoSQL queries to DQL may suffice to extract what performance Domino can provide.

So we'll see. A lot of this will be based on what I need when I actually put this into real use, since right now it's partly hypothetical. In any event, I'm looking forward to finding places where I can use this instead of explicitly coding to Notes API objects for sure.

DQL, QueryResultsProcessor, and JNoSQL

Thu Jan 13 14:32:04 EST 2022

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

As I've been adding new technologies to and talking about the XPages Jakarta EE project, I've kind of danced around a major missing layer: data access.

Technically, the toolchain has provided Domino data access all along, by way of having the same contextual sessions and database as XPages. You could use those to access whatever data you want, and they'd do the job as well as they ever do (c'est-à-dire: poorly). Beyond that, though, there's no equivalent to the (questionable) xp:dominoDocument and xp:dominoView components of XPages, and definitely no pre-provided object-to-database mapper.

The answer is pretty clear: Jakarta NoSQL. This API isn't quite finalized, but it's been usable for a long time: I wrote a Darwino driver for it years ago, and that driver is powering this very blog. I also wrote a Domino driver years ago, but it was very much a proof-of-concept: since it pre-dated DQL, it used formula queries for everything, and thus would scale extremely poorly. It was a nice exercise, but not anything useful.

For XPages JEE, I decided to take another swing at that. The implementation of the driver will warrant a tale on its own, but for now I'd like to focus on the DQL side of it.

DQL

I talked a bit about DQL when it came out, back when it wasn't well-understood, but since then I haven't actually had much occasion to put it to use. For the times I've needed complex Domino data access since then, it's been built on pre-existing operations on top of views. While adding DQL has been something I've considered from time to time, it'd never hit the threshold of being worth it: our needs involve extracting tons of data to bulk send it to service clients, and so views have remained necessary. While we could in theory alter our querying and filtering to select documents and project those selections onto the views, it'd have been a lot of work for partial benefits.

DQL itself has gotten more capable in the intervening years, and just on its own it's a perfect match for JNoSQL needs. Since all JNoSQL operations are sent to the driver as either individual doc IDs or an arbitrary query, something like DQL is required, and it's up to the task now.

It's half of the story, though. What DQL (by way of the DominoQuery object) gives you is a DocumentCollection, effectively just the list of note IDs. You can, as I'd hypothesized about doing, apply that against a view to extract data, but that still requires you to separate out the act of view management from the act of doing queries. If you want to have data sorted or categorized, you would still have to create an equivalent or superset view.

QueryResultsProcessor

So that's where the addition of QueryResultsProcessor comes in. QRP is technically distinct from DQL - you can use it to process arbitrary document collections, for example - but they're certainly a conceptual match. If you're comparing it to a SQL statement, DQL is the "FROM foo" and "WHERE x" parts, while QRP is the "SELECT a,b,c", "ORDER BY y", and "GROUP BY z" parts.

The general way it works is that you (see the sketch after this list):

  1. Create a QueryResultsProcessor from a Database instance (as opposed to Session - this distinction becomes important later)
  2. Feed it sources of documents: DQL queries or arbitrary document collections
  3. Add any desired columns to extract data. These are Domino-style columns, and you can also specify sorting and categorization here, as you would when building a view
  4. Since data may come from multiple databases, you can also customize column formulas to account for that
  5. Execute the process and retrieve the results, currently either as JSON or as a "view". More on these "views" later
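
Roughly, in lotus.domino terms, that flow looks like the below - the addColumn signature and sort constant here are from memory, so verify them against the documentation:

QueryResultsProcessor qrp = database.createQueryResultsProcessor();          // step 1
qrp.addDominoQuery(database.createDominoQuery(), "Form = 'Contact'", null);  // step 2
// Step 3: a column with ascending sort; the title/formula/categorization arguments are assumed
qrp.addColumn("LastName", "LastName", null, QueryResultsProcessor.SORT_ASCENDING, false, false);
View result = qrp.executeToView("(Contacts by LastName)");                   // step 5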

When I first heard about QRPs, I had a concern with step 2: I'd thought that you could only pass a built DocumentCollection to the processor, which would significantly limit the room for Domino to add behind-the-scenes efficiencies. However, my fears were unfounded; the ability to pass in a DominoQuery object and the DQL directly and let the QRP execute it means that HCL is free to do whatever they want to make it fast. That's the sort of thing that makes SQL queries potentially so stupidly efficient: because you're just asking the database for results, the DB is free to optimize the heck out of them. This pairing potentially brings that to Domino, and that's what makes it important.

JSON Output

The executeToJson method is pretty straightforward, if a somewhat-peculiar choice. It has no parameters, and returns the results of your query as reasonably-formatted JSON. It's unfortunate that this returns a String and not an InputStream, which adds some inherent inefficiency to dealing with it on the Java side, but that will only really hurt with very-large data sets.

Along with the requested fields, formula results, and aggregations, the document entries include the note ID (oddly in "formula" format) and the database file path, so you can use that to open up the document.

Anyway, this is a workmanlike format and can be potentially just sent to REST clients directly, though it'd be good form to at least strip out the DB paths and note IDs.

View Output

Now here's the spicy one. The executeToView method stores the results in a very-weird type of view. This has a few big advantages over the JSON mechanism:

  • The view persists in the database, up to a number of hours you specify programmatically. This allows you to essentially offload some extra caching to the database, which is ideal
  • You can use ViewNavigator and other efficient mechanisms to work with the view data, meaning you don't have the "here's a big result blob in memory" problem you have with the JSON format
  • Since it's a "view", anything that works with view data can work with it. This is presumably the reason it's implemented this way at all, rather than as some new kind of entity - building on existing mechanisms
  • The "anything that works with view data" doesn't just mean things like ViewNavigator: it also means the Notes client and view data sources

Now, these "views" have a lot of weird characteristics. It's useful to see the specifics listed out like that, but they all derive from a core lesson to ingest:

This is not a stored query; it is a cached result.

These views are not auto-updated, nor is there any mechanism I know of to refresh them outside of deleting and re-creating them. They're equivalent in concept to if you took the JSON from the first type and stored it in a document somewhere: it'll only change if you change it. The only way Domino will act on them is to delete them when they're expired.

Anyway, the data in these views is the same data that would go to the JSON format, just stored as Notes collation data instead of a string. It contains columns, potentially categorized and aggregated, for the data you requested, as well as hidden "$DBPath" and "$NoteID" columns at the end. The entry-level note ID (the one from entry.getNoteID()) is arbitrary and intended to not represent an actual document - since, after all, the documents may come from distinct databases. I've found the value of entry.getUniversalID() to be the doc's original UNID, but this is best treated as not a guarantee and so should not be used.

Designer Rights

So here's a fun catch: though any Reader can perform a query, you need Designer access to create a view. This seemed like a problem to me at first, since I'd want the generated results to be from a specific user for reader-field purposes, but it's not really an impediment, at least when you're in an environment like XPages.

Above, I mentioned that the fact that you create a QueryResultsProcessor object from a Database is important, and this is one of the reasons why. Though traditionally you wouldn't mix descendants of session and sessionAsSigner together, there's no actual rule against it. You can re-open your context database with sessionAsSigner, make a QRP object from that, and then feed it a DominoQuery object created from the non-signer database:

Database database = ExtLibUtil.getCurrentDatabase();
Session sessionAsSigner = ExtLibUtil.getCurrentSessionAsSigner();
Database databaseAsSigner = sessionAsSigner.getDatabase(database.getServer(), database.getFilePath());

DominoQuery dominoQuery = database.createDominoQuery();
QueryResultsProcessor qrp = databaseAsSigner.createQueryResultsProcessor();
qrp.addDominoQuery(dominoQuery, "some DQL", null);
View result = qrp.executeToView("some view name");

Because the QueryResultsProcessor uses the provided DominoQuery object as the "engine" for the DQL search, the query will use the normal user's rights while the processing will use the signer rights.

Naming and Expiring Results

As seen there, you have to name your views. While you could in theory use this mechanism to kind of manage your own views for general use and name them things like "People By First Name" or whatever, you'll likely want to work with them programmatically and name them based on your query input.

In the case of this JNoSQL driver, I compute a predictable-from-input hash-based name from the name of the creating class, the current user, and the sort/skip/limit attributes of the incoming query. You could really do whatever you want here, but having at least some sort of hash like this is likely the way to go.
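
As a sketch, that name generation can be as simple as hashing the combined traits - the specific inputs and format here are illustrative, not the driver's exact code:

static String resultViewName(String creatingClass, String userName, String sortKey, long skip, long limit) {
	String key = creatingClass + '-' + userName + '-' + sortKey + '-' + skip + '-' + limit;
	try {
		byte[] digest = MessageDigest.getInstance("SHA-256").digest(key.getBytes(StandardCharsets.UTF_8));
		StringBuilder name = new StringBuilder("(QRP-"); // parens hide it from default UI listings
		for (byte b : digest) {
			name.append(String.format("%02x", b));
		}
		return name.append(')').toString();
	} catch (NoSuchAlgorithmException e) {
		throw new RuntimeException(e);
	}
}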

Now there's the matter of detecting when you need to refresh the data. In some applications, it may suffice to go with the "expire in X hours" parameter when creating the view, though that's extremely coarse and only really useful on its own for specific needs (like a daily report).

The tack I took here was to try to do an efficient check of view creation time compared to the last data modification time from the source database. The Database class only has a "last modified" time in general, but I can't very well use that when my results caches are being added as design elements: a second distinct query would "invalidate" the first even when the data hasn't changed. While there might be a proper way to get this in lotus.domino, the NAPI has a wrapper for NSFDbModifiedTimeByName: NotesSession.getLastDataModificationDateByName. That lets you get the last data-mod time in epoch seconds, and you can then compare that to the creation time of the view.
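
The comparison itself is then simple; here's a sketch, with the epoch-seconds value coming from that NAPI call:

// "lastDataMod" is the epoch-seconds value from getLastDataModificationDateByName
static boolean isViewStale(View cached, long lastDataMod) throws NotesException {
	long viewCreated = cached.getCreated().toJavaDate().getTime() / 1000;
	return lastDataMod > viewCreated;
}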

While it's unfortunate that you have to remove the view outright to refresh it instead of doing a delta update like NIF would do, I get it, and it's generally fast enough. Plus, there's enough hand-wavy stuff going on with feeding the DQL query to the QRP that Domino would be free to secretly retain results for a bit and do deltas internally if it so desires.

Storing Result Views

The other interesting aspect of creating a QRP object from a Database and not a Session is that that DB serves as the destination to house the views. While in a single-DB environment it would seem very natural to just store the views in the same place as the data, there's no particular requirement to do so. Moreover, if you're querying multiple databases, you're naturally not going to store the views alongside all of the source docs, so you'll be forced to think about where they should live regardless.

Now, personally, I'm fine with a bunch of temporary machine-named views hanging out in the NSF (especially since the names are wrapped in parentheses to hide them from default UI listings), but I can see why it could be annoying. For one, these views sync to an ODP in Designer, which I put in as an Aha idea to change, but might actually rightly be called a bug. Beyond that, while these views won't meaningfully contribute to NIF's workload (since NIF will skip them), they're unsightly and would get in the way if you're trying to tend to the design of your NSF like a garden.

So you might want to have a side database to store these views, and this could also be a way to get around the "needing Designer access" requirement if you're in an environment where you don't have a signer session. In the Notes client, you could store the results in a local NSF; on the server, you could make a "scratch" NSF somewhere to house them, and then add readers to the view design note when doing so to prevent leaking data across users and apps.

Conclusion

Anyway, this is all pretty neat. Reusing view design elements to just be static containers for collation data is weird, but I get the practical reasons why it makes sense. Importantly, this pairing solves some very-real problems with querying and extracting data from Domino. For example, if you do all of your querying via this route, you can use DQL's "EXPLAIN" capability to actually get some insight into database performance for once. You could imagine having an optional mode where you log the EXPLAIN results and execution times for all queries your app is performing, and then manually create "index" views to fix hotspots. It's quite satisfying to finally get that kind of ability in Domino. It'd be neat if that also came to QueryResultsProcessor.

I'm looking forward to expanding the JNoSQL driver further and then either using that directly in client work or adapting the code I use there. I'll definitely add such a logging capability, which will go a long way to put some numbers to the "feels slow" problem that can crop up. Beyond that, barring any show stoppers, I'm thoroughly excited by the prospect of moving away from fetching explicitly-named views in code and switching to an idiom of querying the pool of documents and letting the database make it work for me.

Another Project: XPages Jakarta EE Support

Sun Jun 03 16:40:11 EDT 2018

In my dealings with JNoSQL recently, I’ve been delving more into the world of modern Jakarta EE/Java EE/J2EE development, particularly the magic land of CDI.

The JEE stack tends to be organized as a collection of specs and implementations, many of which are really independent of each other and the underlying platform, making them pretty portable onto any reasonably-recent JVM. Now that Domino is actually on a reasonably-recent JVM, that makes it a workable target! So I decided to create a side project to bring some of JEE to XPages.

XPages has always been “sort of Java EE” - you don’t really have the full stack, and it’s far behind on the components that it does have, but a lot of the concepts are there. Of particular interest are managed beans and expression language.

CDI and Managed Beans

The XPages stack contains what amounts to a primordial version of CDI. Since the release of XPages, JSF improved on the original faces-config.xml declaration method to add annotation-based declarations, and then CDI is something of a codification and expansion of that into the full Java world.

My project uses the Weld reference implementation of CDI to create a CDI context for each XPages app that opts in, allowing it to use annotations on classes to declare beans and properties:

@ApplicationScoped
@Named // or @Named("applicationGuy")
public class ApplicationGuy {
    public String getFoo() {
        return "hello";
    }
}

These can then be used like normal managed beans in an XPage:

<xp:text value="#{applicationGuy.foo}"/>

The project’s README contains some further examples.

I went with the Java SE implementation of Weld instead of the pre-built servlet or OSGi packages since those are a little too smart for this use: they pick up on the fact that they’re in a JSF environment, but expect newer versions of the servlet spec and JSF.

Expression Language

Since its original release, EL went through a similar standardization process as CDI and is now at version 3.0 and is distinct from JSP and JSF. As anyone who has tried to call a method on a bean in EL has found out, the XPages EL implementation lags pretty far behind, at the JSF 1.0/1.1 level. Since that time, it sprouted parameters and “projection” and is essentially a tiny scripting language now.

My project uses GlassFish’s EL implementation to outright replace the stock EL interpreter for apps making use of it. I added some affordances to IBM’s customized data support, so it’s intended as a drop-in replacement:

<xp:text value="${dataObjectExample.calculateFoo('some arg')}"/>

<xp:text value="#{el:requestGuy.hello()}"/> 

Note the “el:” prefix in the runtime-bound expression: that’s to get around Designer’s validation of runtime EL expressions.

So… Why?

That’s a good question! The first two reasons are “because it’s fun” and “to learn more about JEE”, but there’s also practical value for this sort of thing.

XPages is moribund, and that leaves Domino developers with a few options:

  • Go back to LotusScript. The iPad Notes client makes this a terrifyingly-practical option, but it’s soul death.
  • Go to JavaScript (or another platform). This is another route HCL is pushing, and it’s entirely valid: Node is a great platform with excellent support and momentum.
  • Go to modern Java.

For anyone who has invested a lot of time and brainpower in XPages over the years, that last one is particularly appealing, and projects like this can help you get there. If you have a large XPages code base, as I do with one of my clients, it makes a lot more sense to work on that in such a way that it gradually becomes less XPage-dependent while avoiding the trap of a full rewrite in another language.

Many of us have already done something of this sort: JAX-RS is another JEE standard, and the Wink implementation in the Extension Library, though also aging, accomplishes this same sort of task. Especially if your services don’t reference Wink explicitly and write just to the spec, they are very portable.

That portability - of code and skillset - is critical. Say you have a class like this:

import java.util.List;
import java.util.stream.Collectors;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/issues")
public class IssuesResource {
    @Inject IssueRepository issueRepository;

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response get(@QueryParam("category") String category) {
        List<Object> result = issueRepository.find(category).stream()
            .map(this::doSomething)
            .skip(3)
            .collect(Collectors.toList());
        return Response.ok(result).build();
    }

    // ...
}

Which Java platform is that targeting? What’s the data storage mechanism? Who cares? This class certainly doesn’t. That could just as easily be Domino reading from an NSF or (as is actually the case in the example’s source) Tomcat with Darwino.

What’s Next?

Truthfully, maybe not much. Though JEE contains a whole raft of technologies, these two were the ones that scratch my immediate itch. We’ll see, though - the skill portability of erstwhile XPages developers is critically important, and I think that this is another one of the paths that can get us where we need to go.

Lessons From Writing a JNoSQL Driver

Sat Dec 30 11:12:24 EST 2017

The other day, I decided to start up a side project to write an app for my Stars Without Number game in Darwino. Like back when I wrote a forum/raiding app for my WoW guild, I like to use this kind of opportunity to try new technologies and flesh out my skills in existing ones.

One such tech I've had my eye on for a bit is JNoSQL, which is a framework for integrating with NoSQL databases in Java. It's along the lines of Hibernate OGM, but intended to avoid the pitfalls of the relational/NoSQL impedance mismatch that came with trying to adapt JPA directly to NoSQL databases. JNoSQL promised to be much easier to implement for a new database, so I decided to give it a shot.

JNoSQL

JNoSQL is split into two paired components, cleverly named Diana (the driver side) and Artemis (the model/integration side). The task of writing a driver for a new database is pretty well-contained: pick the database type(s) you want to implement (out of key/value, column, document, and graph) and implement about half a dozen interfaces. This is in stark contrast to when I took a swing at writing a Hibernate OGM driver, where the task was significantly more daunting. The final result is only ten Java files, with a chunk of them being utility classes for code organization.

It's a young project - young enough that the best version to run right now is 0.0.4-SNAPSHOT - but it functions well and it's been taken under the wing of the Eclipse foundation, which builds some confidence.

Implementation

Though the task was small, there were still a couple initial hurdles to getting going.

To begin with, I decided to start with the Couchbase driver - this certainly made the overall task easier, since Couchbase's semantics are very similar to Darwino's, but it also meant that I had to be wary of which parts of the codebase were really about implementing a Diana driver and which were Couchbase-isms. Fortunately, this was much easier than the equivalent work when I adapted the CouchDB Hibernate OGM driver, which was a sprawling codebase by comparison.

More significantly, though, it's always tough coming in to modify a codebase written by a single person or small team and learning as you go. The structure of the code is clean, but not quite my normal style (in part because Domino kept me from diving into Java 8 streams for so long), and I also had to ramp up quickly on the internal concepts of Diana. Fortunately, this was mostly easy, since the document-DB driver scaffolding is purpose-built, the hooks are straightforward and the query semantics were extremely easy to adapt. The largest impediment was getting used to the use of the term "Document", which internally refers to a key/value pair, while "DocumentEntity" is closer to the expected meaning.

Like the core implementation, the test suite I adapted from Couchbase was also pleasantly svelte, covering the bases without being an onerous nightmare to convert. Indeed, most of the code I added to it was the Darwino app scaffolding just for the test runtime.

Putting It Into Practice

Once the driver was written, I was hit by a bit of a personal curveball when I went to implement some actual data models. The model side, Artemis, is heavily wrapped together with CDI, which is a Java EE thing that, as I gather, handles managed beans, separation of implementation, and variable injection. This is a pretty normal thing for Java EE developers, but XPages's "don't call it Java EE" environment didn't introduce me to this aspect. As such, the fact that the documentation just kind of casually tossed around CDI terms and annotations threw me for a bit of a loop trying to determine what was required and what was just an idiom.

I eventually determined that I could use the reference implementation, Weld, without necessarily going whole-hog on Java-EE-everything. I'm a bit wary of what this bodes for whether I'll be able to use JNoSQL in Darwino on mobile devices, but I'll cross that bridge when I come to it. Once I got a bit of a handle on what Weld is and how to use it in unit tests (hint: make sure you have beans.xml files!), I was able to start writing my model objects and testing them.
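
For what it's worth, bootstrapping Weld SE in a test ends up being pleasantly small (GameRepository here is a hypothetical model class):

try (WeldContainer container = new Weld().initialize()) {
	GameRepository repo = container.select(GameRepository.class).get();
	// ...exercise the CDI-managed model objects
}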

Doing It Again

The fact that the bulk of my implementation work ended up being on the app side with CDI goes to show that the Diana driver model really shines. It got me thinking about how difficult it would be in the future, say, to write a driver for Domino. There'd be some hurdles - Domino's lack of nested objects and antiquated querying mechanisms would need replacing - but the core task wouldn't be too bad. I don't know if I'd have a need for it, but it's nice to keep in mind as a potential future small project.

All in all, I'm optimistic about the use of this. I'd love for Darwino to integrate as smoothly as possible into whatever standard environments it can, and this is one more step in that direction. I'll know as my side app takes shape how much this ingrains itself into my actual work.