Homelab Rework: Phase 2

Fri Sep 15 11:39:42 EDT 2023

Tags: homelab linux
  1. Planning a Homelab Rework
  2. Homelab Rework: Phase 1
  3. Homelab Rework: Phase 2
  4. Homelab Rework: Phase 3 - TrueNAS Core to Scale

CollabSphere 2023 came and went the other week, and I have some followup to do from that for sure, not the least of which being the open-sourcing of JNX, but that post will have to wait a bit longer. For now, I'm here to talk about my home servers.

When last I left the topic, I had installed Proxmox as my VM host of choice to replace Windows Server 2019, migrated my existing Hyper-V VMs, and set up a Windows 11 VM with PCIe passthrough for the video card. There were some hoops to jump through, but I got everything working.

Now, though, I've gone back on all of that, or close to it. Why?

Why Did I Go Back On All Of That, Or Close To It?

The core trouble that has dogged me for the last few months is performance. While the host I'm using isn't a top-of-the-line powerhouse (namely, it's using an i7-8700K and generally related-era consumer parts), things were running worse than I was sure they should have been. My backup-runner Linux VM, which should have been happy as a clam with a Linux host, suffered to the extent that it never actually successfully ran a backup. My Windows dev VMs worked fine, but would periodically just drag when trying to redraw window widgets in a way they hadn't previously. And, most importantly of all, Baldur's Gate 3 exhibited bizarre load-speed problems: the actual graphical performance was great even on the highest settings, but I'd get lags of 10 seconds or so when loading assets, much worse than the initial performance trouble reported by others in the release version of the game.

Some of this I chalked up to lack of optimized settings, like how the migrated VMs were using "compatibility" settings instead of all the finest-tuned VirtIO stuff. However, my gaming VM was decked out fully: VirtIO network and disk, highest-capability UEFI BIOS, and so forth. They were all sitting on ZFS across purely-NVMe drives, so they shouldn't have been lacking for disk speed. I tried a bunch of things, like dedicating a SATA SSD to the VM or passing through a USB 3 SSD, but the result was always the same. Between game updates and re-making the VM in a "lesser" way with Windows 10, I ended up getting okay performance, but the speed of the other VMs bothered me.

Now, I don't want to throw Proxmox specifically or KVM generally under the bus here. It's possible that I could have improved this situation - perhaps, despite my investigations and little tweaks, I had things configured poorly. And, again, this hardware isn't built for the purpose, but instead I was cramming server-type behavior into "prosumer"-at-best hardware. Still, Hyper-V didn't have this trouble, so it nagged at me.

But Also Containers

As I mentioned towards the bottom of the previous post, Proxmox natively uses Linux Containers and not Docker, but I wanted to see what I could do about that. I tried a few things, installing Docker inside an LXC container as well as on the main host OS, but ran into odd filesystem-related problems within Dockerfiles. I found ways to work around those by doing things like deleting just files instead of directory trees, but I didn't want to go and change all my project Dockerfiles just to account for an odd local system. I had previously used my backup-manager VM for Docker, but that VM's performance trouble led me to make a new, secondary one. That added overhead and RAM consumption, which defeated some of the potential benefits.

Little Things

Beyond that, there were little things that got to me. Though Proxmox is free, it still gives a little nag screen about being unlicensed the first time you visit the web UI after each reboot, which is a mild annoyance. Additionally, it doesn't have built-in support for suspending/resuming active VMs when you reboot the machine, as Hyper-V does - I found some people recommending systemd scripts for this, but that would introduce little timing problems that wouldn't arise if it were a standard capability.

There also ended up being a lot that was done solely via CLI and not the GUI. To an extent, that's fine - I'm good with using the CLI for quite a bit - but it did defeat some of the benefit of having a nice front-end app when I would regularly drop down to the CLI anyway for disk import/export, some device assignments, and so forth. That's not a bug or anything, but it made the experience feel a bit rickety.

The New Setup

So, in the end, I went crawling back to Windows and Hyper-V. I installed Windows 11 Pro and set up the NVMe drives in Storage Spaces... I was a little peeved that I couldn't use ReFS, since apparently "Pro" and "Pro for Workstations" are two separate versions of Windows somehow, but NTFS should still technically do the job (I'll just have to make sure my backup routine to my TrueNAS server is good). After I bashed at it for a little while to remove all the weird stupid ads that festoon Windows nowadays, I got things into good shape.

Hyper-V remains a champ here. I loaded up my re-converted VMs and their performance is great: my backup manager is back in business and my dev VMs are speedy like they used to be.

One of the reasons I wanted to move away from Server 2019 in the first place is that the server-with-desktop-components versions of Windows always lagged behind the client version in a number of ways, and one of them was WSL2. Now that I'm back to a client version, I was able to install that with a little Debian environment, and then configure Docker Desktop to make use of it. With some network fiddling, I got the Docker daemon listening on a local network port and usable for my Testcontainers suites. Weirdly, this means that my Windows-based setup for Docker is actually a bit more efficient than the previous Linux-based one, but I won't let that bother me.
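
For reference, the client side of that looks something like this - a minimal sketch assuming the daemon is exposed on TCP port 2375 without TLS, with the host name being a placeholder for wherever the Docker machine lives:

# In ~/.testcontainers.properties on the machine running the test suite
docker.host=tcp://terminus.local:2375

# Or, equivalently, as an environment variable honored by Testcontainers and the Docker CLI
DOCKER_HOST=tcp://terminus.local:2375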

As for games, well... it's native Windows. For better or for worse, that's the best way to run them, and they run great. Baldur's Gate 3 is noticeably snappier with its load times already, and everything else still runs fine.

So, overall, it kind of stings that I went back to Windows as the primary host, but I can't deny that I'm already deriving a lot of benefits from it. I'll miss some things from Proxmox, like the smooth handling of automatic mounting of network shares as opposed to Windows's scattershot approach, but I'm otherwise pleased with how it's working again.

CollabSphere Workshop Schedule Update and OpenNTF Webinar

Tue Aug 22 14:06:55 EDT 2023

As I mentioned last month, I'll be participating in a couple presentations, including a workshop on the XPages Jakarta EE project.

This workshop is scheduled for Tuesday, but its time has shifted. Originally, it was scheduled for 1 PM - 3 PM local time, but it's moved up to 9 AM - 11 AM to help with some coordination. Looks like it's also in the Pullman room and not the Linnaeus room now. Same idea, but you'll probably want to bring a cup of coffee with you.

In addition, and particularly so if you won't be attending CollabSphere, I'll be doing this month's OpenNTF webinar this week, on Thursday. The plan for that is to be like a mini/less-interactive version of the workshop, but covering the same general idea of the various ways to develop JEE apps on Domino with the project. If you're interested in that, you can register here.

Modes Of App Development With XPages Jakarta EE

Fri Jul 28 11:46:50 EDT 2023

I've been working on my workshop for this year's CollabSphere, and one of the main decisions I have to make is what I'm going to focus on. The idea of the workshop is to give a bit more brass-tacks information about how to use the project: rather than just a list of features, it'll be about the specific business of building an app using it.

But how does one build an app in it? There's certainly no lack of tools available, but that leads to the opposite problem: what's the right one for your project? What's likely to be the most common path people take?

The Types

As I've been working on it, I've grouped things into four main categories, and I figured it'd be useful to enumerate them here to coordinate my thoughts and provide some general information. There aren't hard lines between these: you can use any mixture of some or all of the parts in an app, and do different mixes in different apps. These are just what I expect to be the main groupings:

  • "XPages Plus", using some new capabilities in existing or new apps with XPages-based UIs
  • REST services, focusing on providing REST endpoints for JavaScript-based apps or other servers
  • MVC and JSP, focusing on clean, lightweight UIs for document-based apps, but less ideal for complex business logic
  • JSF, building the same sorts of apps XPages is adept at, but using newer technology

"XPages Plus"

The first route is how the project got started: you keep building XPages apps but sprinkle in a few new capabilities to improve them.

For example, you could replace your managed beans defined in faces-config.xml with CDI beans, allowing you to get the quick benefit of annotation-based definitions and then the bigger benefits of @Inject, producer methods, and interceptors.
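
As a minimal sketch of what that looks like (the class and the injected bean here are hypothetical), such a bean needs no faces-config.xml entry at all:

package beans;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import jakarta.inject.Named;

@ApplicationScoped
@Named("appSettings") // available to EL as #{appSettings}, like a faces-config managed bean
public class AppSettings {
	// Another CDI bean, injected rather than looked up manually
	@Inject
	private UserInfoBean userInfo;

	public String getAppTitle() {
		return "My App - " + userInfo.getUserName();
	}
}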

You could also start using newer EL features, like the long-desired ability to pass parameters to methods.

This path wouldn't necessarily require a lot of reworking of your app or changing the way you think about XPages development, but it would still be something of a minor development refresh and could set you up well for future improvements.

Your data access will likely still be through the traditional xp:dominoDocument and xp:dominoView components, but you could also write beans that access data with lotus.domino or ODA, or switch to using the NoSQL driver.

REST Services

Alternatively, you could decide to focus your apps around REST services, whether to back a JavaScript front end (in, for example, React) or to provide services to remote servers.

With this, you'd largely stop using XPages design elements entirely, instead defining your services in Java classes with JAX-RS annotations. This brings huge advantages over other ways to write REST services on Domino, with the JAX-RS annotations allowing for clear, logical definition of services, their parameters, and their output. Moreover, the ancillary tooling brings things like automatic OpenAPI definitions, which would be annoying to maintain using things like the XPages-side REST controls.
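
As a hedged sketch of the shape this takes (the class, path, and service bean are hypothetical), a resource class in the NSF might look like:

package rest;

import java.util.List;

import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/customers")
public class CustomerResource {
	// A hypothetical CDI bean that does the actual data access
	@Inject
	private CustomerService customers;

	@GET
	@Produces(MediaType.APPLICATION_JSON)
	public List<Customer> list() {
		// JSON-B converts the returned objects to JSON automatically
		return customers.findAll();
	}
}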

This path is good if you're specifically aiming to build a JavaScript-based app, either because you just like it, because your organization decided to go that route, or if you have a larger team that splits the duties of front-end and back-end developers. It can also naturally blend into the next one.

Your data access here won't be through the XPages components, but you could still use lotus.domino or ODA classes, or switch to the NoSQL driver. That actually goes for the next two, too, so we'll just count that as assumed.

MVC and JSP

I'll admit that part of the reason I want to consider this a top-tier route is because I just personally really like it. I've had a blast writing apps like this blog and the OpenNTF site using this path, with its much-cleaner code and back-to-basics approach to HTML.

Regardless of my personal enjoyment of it, though, this has some nice advantages. The fact that MVC builds on top of JAX-RS means that it melds well with the REST-services approach above. For example, you might primarily write REST services for a JS app, but then do a set of "admin" pages with MVC. Or you might use this as part of the prototype phase: structure your app the same way you will when you expand to a multi-tier team, but start out by doing a quick UI with MVC on top of the same or related endpoints.

With this path, your app will start with Java classes with JAX-RS annotations, and then you'd mix it with JSP files inside WebContent/WEB-INF. One downside to this approach is that Designer doesn't provide much help for writing JSP files. In the tooling, I bind .jsp and .tag files to the HTML editor, so you at least get normal HTML assistance, but that won't help you with specific JSP tags and EL. Fortunately, the set of tools you'll likely use in JSP is comparatively small, so you'll eventually memorize things like <c:forEach items="..." var="...">...</c:forEach> in much the same way that you could eventually write out an <xp:repeat/> in your sleep in XPages.
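
To make that concrete, here's a hedged sketch of an MVC controller to pair with such a JSP (the names and model contents are hypothetical; Jakarta MVC resolves view names against WEB-INF/views by default):

package controller;

import jakarta.inject.Inject;
import jakarta.mvc.Controller;
import jakarta.mvc.Models;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

@Path("/home")
@Controller
public class HomeController {
	// The MVC model map, exposed to the JSP via EL
	@Inject
	private Models models;

	@GET
	public String get() {
		models.put("message", "Hello from MVC");
		return "home.jsp"; // resolved relative to WEB-INF/views
	}
}

The JSP itself can then reference ${message} alongside the usual JSTL tags.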

JSF

This one, technically tricky though it may be, is conceptually straightforward: write the same sort of apps you do with XPages, but do it with modern JSF instead. This makes a lot of sense, since JSF shares XPages's acumen with complicated forms with partial refreshes and changing state data, but has benefited from some development that didn't happen on the XPages side.

It's not a direct replacement: in particular, JSF has no knowledge of Domino data sources, so there's no xp:dominoDocument or xp:dominoView. You'd still need to do your data access via beans, as in the previous two options, likely using either lotus.domino/ODA or the NoSQL driver. Additionally, Designer really doesn't help you here - again, I map .xhtml and .jsf files to the HTML editor, but JSF components have a lot of properties to set, and so you'll be spending a lot of time referencing documentation.

Still, it's clear why this is proving to be a popular path. The development model is the same as in XPages, while the JSF stack (especially including PrimeFaces) brings a lot of amenities that aren't in XPages and are also more portable to other environments.

Conclusion

So, for now, I'm thinking of splitting up the workshop to cover each of these paths a bit. That runs the risk of feeling like too much of a grab bag, but I don't want to give the opposite impression, that the project only allows for some specific path. It's a broad platform update, accommodating many development approaches, and I want to keep that clear. Fortunately, each path has a pretty-clean pitch, and the shared components (CDI, bean validation, the REST client, etc.) build on each other well, so the idea that it's a pool of features that you can swim in is, I think, compelling.

XPages JEE 2.13.0

Fri Jul 21 11:51:52 EDT 2023

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

Today, I released version 2.13.0 of the XPages Jakarta EE Support project. Though there's not a single big banner feature, this one brings a number of good enhancements in a bunch of areas.

Domino 14

The first thing it brings is compatibility with Domino 14 EAP1. The goal here is to just bring the same features to that version - it doesn't bump the individual components to their Jakarta EE 10 versions yet, since that will come with breaking changes and prevent use on 12.0.2 and before.

There remains a caveat here, which is that EAP1 doesn't include a Java compiler, and so JSP doesn't work unless you shim parts of a JDK into the Domino installation. If you're not using JSP, though, you should be able to run your apps on 14 using this new build.

JSF

It turns out that Faces support is a popular feature, which makes sense: it's the most direct analogue to writing XPages, while bringing in a lot of new features. While Faces has always been tricky to keep working, this build includes some fixes for stability and usability. I'd still consider this route to be the least-proven way to do UIs with this project, but it's shaping up really nicely.

JavaSapi

Speaking of experimental features, this release comes with a new feature that builds on the JavaSapi bridge I added a bit ago: you can now specify extensions within an NSF that will participate in JavaSapi pre-processing of requests.

To do this, you can make a file named META-INF/services/org.openntf.xsp.jakartaee.jasapi.JavaSapiExtension in your NSF's Java classpath (e.g. the Code/Java folder) and have it name a JavaSapiExtension class. For example:

package javasapi;

import org.openntf.xsp.jakartaee.jasapi.JavaSapiContext;
import org.openntf.xsp.jakartaee.jasapi.JavaSapiExtension;

public class TestJavaSapiExtension implements JavaSapiExtension {
	@Override
	public Result rawRequest(JavaSapiContext context) {
		// Add a custom header to all responses
		context.getResponse().setHeader("X-InNSFCustomHeader", "Hello from NSF");
		return Result.SUCCESS;
	}
	
	@Override
	public Result authenticate(JavaSapiContext context) {
		// Custom authentication mechanism. If you use this, do it more securely!
		String overrideName = context.getRequest().getHeader("X-OverrideName");
		if(overrideName != null && !overrideName.isEmpty()) {
			context.getRequest().setAuthenticatedUserName(overrideName, "Basic");
			return Result.REQUEST_AUTHENTICATED;
		}
		return JavaSapiExtension.super.authenticate(context);
	}
}
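
The services file itself is just plain text containing the fully-qualified class name (one per line, if you have several):

javasapi.TestJavaSapiExtension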

As with any time I do anything with JavaSapi, I can't stress enough how unsupported this is. It's not even officially a feature of Domino, and I've found it fairly easy to crash the server by doing the wrong thing here. On the other hand, it's neat and fun, so... feel free to tinker with it.

NoSQL

Finally, I added some methods to DominoRepository instances to access profile and named notes:

SomeEntity profile = repository.findProfileDocument("SomeProfile", "Your Username")
	.orElseThrow(() -> new NotFoundException("Could not find profile for user"));
SomeEntity named = repository.findNamedDocument("Some Name", "Your Username")
	.orElseThrow(() -> new NotFoundException("Could not find named doc for user"));

I made them return Optional for safety's sake - I think they'll in general create the documents if they don't exist, but I wanted to leave room in the API for a future ability to only return them if they haven't previously been explicitly created.

Anyway, that's one more step in making the driver useful as a general-purpose Domino access mechanism. My goal is to make it so that you'd only need lotus.domino, ODA, or another Domino-specific API in specific edge cases. I can already do almost everything I need to, and now I'm just working down the list of less-critical features.

Next Steps

As I've been working on 2.13.0, I've also been working on the 3.0 branch, including an early beta last month. That's the branch that breaks pre-Domino-14 compatibility and bumps most components up to their Jakarta EE 10 versions. Since I can't realistically have a proper release of that until Domino 14 is out, my plan is to keep tinkering with the side branch and releasing betas from time to time.

In the meantime, I wouldn't be surprised if there's a 2.14.0. There are some tweaks and efficiency improvements I want to make particularly for JSF, so I expect I'll have enough on my plate before Domino 14's release to get another current-line release out.

Upcoming Sessions at CollabSphere 2023

Wed Jul 19 10:07:57 EDT 2023

Tags: collabsphere

CollabSphere 2023 is coming up at the end of next month, from August 29th through the 31st. It's in-person again this year, and, after quite a bit of hemming and hawing on that point, I decided to go. Moreover, I'll be hosting and participating in a couple sessions, including a workshop on the 29th. Specifically, that one is:

Developing Applications With XPages Jakarta EE
Tuesday at 1 PM Central, Linnaeus Room
The XPages Jakarta EE project adds a swath of new capabilities to Domino development. This workshop will give an overview of some of the major features and will include a follow-along demonstration of developing a representative application. There will also be room for questions and discussion about the project and how to integrate it with existing apps and workflows.

Attendees who wish to follow along with development should bring a laptop with a local Domino server and Designer (12.0.2 recommended) with the XPages Jakarta EE project from openntf.org installed in both.

As with the other workshops, these require pre-registration, so I encourage you to register ahead of time. If you'd like to attend and end up hitting trouble with the pre-setup process, head over to the OpenNTF Discord - there's a thread in the #specific-projects channel for the project that's a perfect spot for questions.

Beyond the workshop, I'll be running a roundtable similar to the one I did at last year's online CollabSphere:

DEV107 - Java and Jakarta EE on Domino Roundtable
Wednesday at 4 PM Central, Burnstein Room
The story of Java on Domino can be complex, but there are tools and strategies available to keep your development maintainable and consistent with the wider world. This roundtable will focus on discussing the current array of best options for development on Domino, with a secondary focus towards discussing how the XPages Jakarta EE Support project can be used to improve your development experience and the capabilities of your apps.

Immediately afterward, in the same room, is OpenNTF's roundtable, with the theme of a "live Repair Café" in the vein of the events we've been hosting in Discord:

COL104 - OpenNTF Live Repair Café: REST Practices
Wednesday at 5 PM Central, Burnstein Room
Join us for an interactive discussion at our OpenNTF Live Repair Café as we troubleshoot and repair together. Building on the success of our previous virtual events, this live event offers a unique opportunity to engage with experts and peers in the field face-to-face. Don't miss out on this chance to learn, share and grow together.

Finally, I'll be running a session on Thursday to discuss the new features we'll have access to in the move from Java 8 to Java 17:

DEV106 - Java 8 to Java 17: The New Goodies
Thursday at 9 AM Central, Linnaeus Room
Domino 14 is slated to include Java 17, an upgrade of two Long-Term-Support (LTS) Java versions that brings a slew of improvements and new features. While most of the APIs we use will likely remain the same, recent versions of Java have focused heavily on developer-friendly niceties. This presentation will go over some of the more-useful ones for Domino developers, such as the new HttpClient, text blocks, records, convenience methods, and cleaner syntax.

That one should be neat. It's been a long time between releases we've used, and Java's been getting pretty good lately.

In any event, if you're attending CollabSphere, I hope to see you at these sessions.

Homelab Rework: Phase 1

Mon Jul 10 10:22:48 EDT 2023

Tags: homelab linux
  1. Planning a Homelab Rework
  2. Homelab Rework: Phase 1
  3. Homelab Rework: Phase 2
  4. Homelab Rework: Phase 3 - TrueNAS Core to Scale

I mentioned the other week that I've been pondering ways to rearrange the servers I have in the basement here, which I'm presumptuously calling a "homelab".

After some fits and starts getting the right parts, I took the main step over the weekend: reworking terminus, my gaming PC and work VM host. In my earlier post, my choice of OS was a bit up in the air, but I did indeed end up going with the frontrunner, Proxmox. While the other candidates had their charms, I figured it'd be the least hassle long-term to go with the VM-focused utility OS when my main goal was to run VMs. Moreover, Proxmox, while still in the category of single-vendor-run OSes, is nonetheless still open-source in a way that should be reliable.

The Base Drive Setup

terminus, being a Theseus-style evolution of the desktop PC I've had since high school, is composed generally of consumer-grade parts, so the brands don't matter too much for this purpose. The pertinent part is how I reworked the storage: previously, I had three NVMe drives - one 256GB drive for the system and then two distinct 2TB drives formatted NTFS - with one mounted in a folder on the other, which had grown too big for its britches. It was a very ad-hoc approach, having evolved from earlier setups with other drives, and it was due for a revamp.

For this task, I ended up getting two more 2TB NVMe drives (now that they're getting cheap) and some PCIe adapters to hold them beyond the base capacity of the motherboard. After installing Proxmox on the 256GB one previously housing Windows, I decided to join the other four in a RAID-Z with ZFS, allowing for one to crap out at a time. I hit a minor hitch here: though they're all 2TB on paper, one reported itself as being some tiny sliver larger than the other three, and so the command to create the pool failed in the Proxmox GUI. Fortunately, the fix is straightforward enough: the log entry in the UI shows the command, so I copied that, added "-f" to force creation based on the smallest common size, and ran the command in the system shell. That worked just fine. This was a useful pace-setting experience too: while other utility OSes like pfSense and TrueNAS allow you to use the command line, it seems to be more of a regular part of the experience with Proxmox. That's fine, and good to know.
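
For reference, the forced version of the command looks roughly like this - the pool name and device paths here are placeholders, since the real thing came straight from the GUI's log entry:

zpool create -f tank raidz /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1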

Quick Note On Repositories

Proxmox, like the other commercial+open utility OSes, has its "community"-type variant for free and the "enterprise" one for money. While I may end up subscribing to the latter one day, it'd be overkill for this use for now. By default, a Proxmox installation is configured to use the enterprise update repositories, which won't work if you don't set up a license key. To get on the community train, you can configure your apt sources. Specifically, I commented out the enterprise lines in the two pre-existing files in /etc/apt/sources.list.d/ and then added my own "pve-ce.list" file with the source from the wiki:

deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

Importing Old Windows VMs

My first task was to make sure I'd be able to do work on Monday, so I set out to import the Hyper-V Windows VMs I use for Designer for a couple clients. Before destroying Windows, I had copied the .vhdx files to my NAS, so I set up a CIFS connection to that in the "Storage" section of the Proxmox GUI, basically a Proxmox-managed variant of adding an automatic mount to fstab.

From what I can tell, there's not a quick "just import this Hyper-V drive to my VM" process in the Proxmox GUI, so I did some searching to find the right way. In general, the tack is this:

  • Make sure you have a local storage location set to house VM Disk Images
  • Create a new VM with a Windows type in Proxmox and general settings for what you'd like
  • On the tab where you can add disks, delete the one it auto-creates and leave it empty
  • On the command line, go to the directory housing your disk image and run a command in the format qm importdisk <VMID> <imagename>.vhdx <poolname> --format qcow2. For example: qm importdisk 101 Designer.vhdx images --format qcow2
  • Back in the GUI (or the command line if you're inclined - qm is a general tool for this), go to your VM, find the imported-but-unattached drive in "Hardware", and give it an index other than 0. I set mine to be ide1, since I had told the VM in Hyper-V that it was an IDE drive
  • In "Options", find the Boot Order and add your newly-attached disk to the list
  • Download the drivers ISO to attach to your VM. Depending on how old your Windows version is, you may have to go back a bit to find one that works (say, 0.1.141 for Windows 7). Upload that ISO to a local storage location set to house ISOs and attach it to your VM
  • Boot Windows (hopefully), let it do its thing to realize its new home, and install drivers from the "CD drive" as needed

If all goes well, you should be able to boot your VM and get cracking. If it doesn't go well, uh... search around online for your symptoms. This path is about the extent of my knowledge so far.

The Windows Gaming Side

My next task was to set up a Windows VM with PCIe passthrough for my video card. This one was a potential dealbreaker if it didn't work - based on my hardware and what I read, I figured it should work, but there's always a chance that consumer-grade stuff doesn't do what it hypothetically should.

The first step here was to make a normal Windows VM without passthrough, so that I'd have a baseline to work with. I decided to take the plunge and install Windows 11, so I made sure to use a UEFI BIOS for the VM and to enable TPM support. I ran into a minor hitch in the setup process in that I had picked the "virtio" network adapter type, which Windows doesn't have driver support for in the installer unless you slipstream it in (which I didn't). Windows is extremely annoyed by not having a network connection at launch, dropping me into a "Let's connect you to a network" screen with no options and no way to skip. Fortunately, there's a workaround: type Shift+F10 to get a command prompt, then run "OOBE\BYPASSNRO", which re-launches the installer and sprouts a "skip" button on this phase. Once I got through the installer, I was able to connect the driver ISO, install everything, and have Windows be as happy as Windows ever gets. I made sure to set up remote access at this point, since I won't be able to use the Proxmox console view with the real video card.

Then, I set about connecting the real video card. The documentation covers this well, but it's still kind of a fiddly process, sending you back to the command line for most of it. The general gist is that you have to explicitly enable IOMMU in general and opt in your device specifically. As a note, I had to enable the flags that the documentation says wouldn't be necessary in recent kernel versions, so keep an eye out for that. Before more specifics, I'll say that my GRUB_CMDLINE_LINUX_DEFAULT line ended up looking like this:

GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt intel_iommu=on pcie_acs_override=downstream"

This enables IOMMU in general and for Intel CPUs specifically (the part noted as obsolete in the docs). I'll get to that last bit later. In short, it's an unfortunate concession to my current hardware.

Anyway, back to the process. I went through the instructions, which involved locating the vendor:device identifier and PCI address using lspci. For my card (a GeForce 3060), those ended up being "10de:2414" and "01:00.0", respectively. I made a file named /etc/modprobe.d/geforce-passthrough.conf with the following lines (doing a "belt and suspenders" approach to pass through the device and block the drivers, an artifact of troubleshooting):

options vfio-pci ids=10de:2414,01:00.0
blacklist nvidiafb
blacklist nvidia
blacklist nouveau

The host graphics are the integrated Intel graphics on the CPU, so I didn't need to worry about needing the drivers otherwise.

With this set, I was able to reboot, run lspci -nnk again, and see that the GPU was set to use "vfio-pci" as the driver, exactly as needed.

So I went to the VM config, mapped this device, launched the VM, and... everything started crapping out. The OS was still up, but the VM never started, and then no VMs could start, nor could I do anything with the ZFS drive. Looking at the pool listing, I saw that two of the NVMe drives had disappeared from the listing, which was... alarming. I hard-rebooted the system, tried the same thing, and got the same results. I started to worry that the trouble was the PCIe->NVMe adapter I got: the two missing drives were attached to the same new card, and so I thought that it could be that it doesn't work well under pressure. Still, this was odd: booting the VM was far less taxing on those drives specifically than all the work I had done copying files over and working with them, and the fact that it consistently happened when starting it made me think that wasn't related.

That led me to that mildly-unfortunate workaround above. The specific trouble is that PCIe devices are controlled in groups, and my GPU is in the same group as the afflicted PCIe->NVMe adapter. The general fix for this is to move the cards around, so that they're no longer grouped together. However, I only have three PCIe ports of suitable size, two filled with NVMe adapters and one with a video card, so I'm SOL on that front.

This is where the "pcie_acs_override=downstream" kernel flag comes in. This is something that used to be a special kernel patch, but it's present in the stock kernel nowadays. From what I gather, it's a "don't do this, but it'll work" sort of thing, tricking the kernel into treating same-grouped PCIe devices separately. I think most of the trouble comes in when multiple grouped devices are performing similar tasks (such as two identical video cards), which could lead the two OSes to route conflicting commands to them. Since the two involved here are wholly distinct, it seems okay. But certainly, I don't love it, and it's something I'll look forward to doing without when it comes time to upgrade the motherboard in this thing.

As a small note, I initially noticed some odd audio trouble when switching away from an active game to a different app within Windows. This seems to be improved by adding a dummy audio device to the VM at the Proxmox level, but that could also be a placebo.

But, hackiness aside, it works! I was able to RDP into the VM and install the Nvidia drivers normally. Then, I set up Parsec for low-latency connections, installed some games, and was able to play with basically the same performance I had when Windows was the main OS. Neat! This was one of the main goals, demoting Windows to just VM duty.

Next Steps: Linux Containers and New VMs

Now that I have my vital VMs set up, I have some more work to do for other tasks. A few of these tasks should be doable with Linux Containers, like the VM I had previously used to coordinate cloud backups. Linux Containers - the proper noun - differ from Docker in implementation and idioms, and are closer to FreeBSD jails in practice. Rather than being "immutable image base + mutable volumes" in how you use them, they're more like setting up a lightweight VM, where you pick an OS base and its contents are persistent for the container. My plan is to use this for the backup coordinator and (hopefully) for a direct-install Domino installation I use for some work.

Beyond that, I plan to set up a Linux VM for Docker use. While I could probably hypothetically install Docker on the top-level OS, that gives me the willies for a utility OS. Yes, Proxmox is basically normal old Debian with some additions, but I still figure it's best to keep the installation light on bigger-ticket modifications and, while I'm not too worried about the security implications with my type of use, I don't need to push my luck. I tinkered a little with installing Docker inside a Linux Container, but ran into exactly the sort of hurdles that any search for the concept warn you about. So, sadly, a VM will be the best bet. Fortunately, a slim little Debian VM shouldn't be too much worse than a Container, especially with performance tweaks and memory ballooning.

So, in short: so far, so good. I'll be keeping an eye on the PCIe-passthrough hackiness, and I always have an option to give up and switch to a "Windows 11 host with Hyper-V VMs" setup. Hopefully I won't have to, though, and things seem promising so far. Plus, it's all good experience, regardless.

Planning a Homelab Rework

Sun Jun 25 20:39:37 EDT 2023

  1. Planning a Homelab Rework
  2. Homelab Rework: Phase 1
  3. Homelab Rework: Phase 2
  4. Homelab Rework: Phase 3 - TrueNAS Core to Scale

Over the years, I've shifted how I use my various computers here, gaining new ones and shifting roles around. My current setup is a mix of considered choices and coincidence, but it works reasonably well. "Homelab" may be a big term for it, but I have something approaching a reasonable array:

  • nemesis, a little Netgate-made device running pfSense and serving as my router and firewall
  • trantor, a first-edition Mac Pro running TrueNAS Core
  • terminus, my gaming PC that does double duty as my Hyper-V VM host for the Windows and Linux VMs I use for work
  • nephele, a 2013 MacBook Air I recently conscripted to be a Docker host running Debian
  • tethys, my current desktop workhorse, an M2 Mac mini running macOS
  • ione, an M1 MacBook Air with macOS that does miscellaneous and travel duty

While these things work pretty well for me, there are some rough edges. tethys and ione need no changes, but others are up in the air.

The NAS

trantor's Xeon is too old to run the bhyve hypervisor used by TrueNAS Core, so I can't use it for VM duty. It can run jails, which I've historically used for some apps like Plex and Postgres, and those are good. Still, as much as I like FreeBSD, I have to admit that running Linux there would get me a bit more, such as another Domino server (I got Domino running under Linux binary compatibility, but only in a very janky and unreliable way). I'm considering side-grading it to TrueNAS Scale, but the extremely-high likelihood of trouble with that switch on such an odd, old machine gives me pause. As a NAS, it's doing just fine as-is, and I may leave it like that.

The Router

nemesis is actually perfectly fine as it is, but I'm pondering giving it more work. While Linode is still doing well for me, its recent price hike left a bad taste in my mouth, and I wouldn't mind moving some less-essential services to my own computers here. If I do that, I could either map a port through the firewall to an internal host (as I've done previously) or, as I'm considering, run HAProxy directly on it and have it serve as a reverse proxy for all machines in the house. On the one hand, it seems incorrect and maybe unwise to have a router do double-duty like this. On the other, since I'm not going to have a full-fledged DMZ or anything, it's perfectly-positioned to do the job. Assuming its little ARM processor is up to the task, anyway.

Fortunately, this one is kind of speculative anyway. While I used to have some outwardly-available services I ran from in the house, those have fallen by the wayside, so this would be for a potential future case where I do something different.

The VM Host/Gaming PC

terminus, though, is where most of my thoughts have been spent lately. It's running Windows Server 2019, an artifact of a time when I thought I'd be doing more Server-specific stuff with it. Instead, though, the only server task I use it for is Hyper-V, and that runs just the same on client Windows. The straightforward thing to do with it would be to replace Windows Server with Windows 11 - that'd put me back on a normal release train, avoid some edge cases of programs complaining about OS compatibility, and generally be a bit more straightforward. However, the only reason I have Windows as the top-level OS on there at all is for gaming needs. Moreover, it just sits in the basement, and I don't even play games directly at its console anymore - it's all done through Parsec.

That has me thinking: why don't I demote Windows to a VM itself? The processor has a built-in video chipset for basic needs, and I could use PCIe passthrough to give full control of my video card to a Windows VM. I could pick a Linux variant to be the true OS for the machine, run VMs and containers as top-level peers, and still get full speed for games inside the Windows VM.

I have a few main contenders in mind for this job.

The frontrunner is Proxmox, a purpose-built Debian variant geared towards running VMs and Linux Containers as its bread-and-butter. Though I don't need a lot of the good features here, it's tough to argue with an OS that's specifically meant for the task I'm thinking of. However, it's a little sad that the containers I generally run aren't LXC-type containers, and so I'd have to make a VM or container to then run Docker. In the latter form, there wouldn't be too much overhead to it, but it'd be kind of ugly, and it'd feel weird to do a full revamp to then immediately do one of the main tasks only in a roundabout way.

That leads to my dark-horse candidate of TrueNAS Scale. This isn't as obvious a choice, since this machine isn't intended to be a NAS, but Scale bumps Kubernetes and Docker containers to the top level, and that's exactly how I'd plan to use it. I assume that the VM management in TrueNAS isn't as fleshed out as Proxmox, but it's on a known foundation and looks plenty suitable for my needs.

Finally, I could just skip the "utility" distributions entirely and just install normal Debian. It'd certainly be up to the task, though I'd miss out on some GUI conveniences that I'd otherwise use heavily. On the other hand, that'd be a good way to force myself to learn some more fundamentals of KVM, which is the sort of thing that usually pays off down the line.

For Now

Or, of course, I could leave everything as-is! It does generally work - Hyper-V is fine as a hypervisor, and it's only sort of annoying having an extra tier in there. The main thing that will force me to do something is that I'll want to ditch Windows Server regardless, even if I just do an in-place switch to Windows 11. We'll see!

Kicking the Tires on Domino 14 and Java 17

Fri Jun 02 13:50:53 EDT 2023

Tags: domino java

As promised, HCL launched the beta program for Domino 14 the other day. There's some neat stuff in there, but I still mostly care about the JVM update.

I quickly got to downloading the container image for this to see about making sure my projects work on it, particularly the XPages JEE Support project. As expected, I had a few hoops to jump through. I did some groundwork for this a while back, but there was some more to do. Before I get to some specifics, I'll mention that I put up a beta release of 2.13.0 that gets almost everything working, minus JSP.

Now, on to the notes and tips I've found so far.

AccessController and java.policy

One of the changes in Java 17 specifically is that our old friend AccessController and the whole SecurityManager framework are marked as deprecated for removal. Not a minute too soon, in my opinion - that was an old relic of the applet days and was never useful for app servers, and has always been a thorn in the side of XPages developers.

However, though it's deprecated, it's still present and active in Java 17, so we still have to deal with it. One important thing to note is that java.policy moved from the "jvm/lib/security" directory to "jvm/conf/security". A while back, I switched to putting .java.policy in the Domino user's home directory and this remains unchanged; I suggest going this route even with older versions of Domino, but editing java.policy in its new home will still work.
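
If you do go the home-directory route, the file format is unchanged - for example, the blunt-but-common grant-everything version is just:

grant {
	permission java.security.AllPermission;
};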

Extra JARs and jvm/lib/ext

Another change is that "jvm/lib/ext" is gone, as part of a general desire on Java's part to discourage installing system-wide libraries.

However, since so much Java stuff on Domino is older than dirt, having system-wide custom libraries is still necessary, so the JVM is configured to bring in everything from the "ndext" directory in the Domino program directory in the same way it used to use "jvm/lib/ext". This directory has actually always been treated this way, and so I started also using this for third-party libraries for older versions, and it'll work the same in 14. That said, you should ideally not do this too much if it's at all avoidable.

The Missing Java Compiler

I mentioned above that everything except JSP works on Domino 14, and the reason for this is that V14 as it is now doesn't ship with a Java compiler (JSP, for historical reasons, does something much like XPages in that it translates to Java and compiles, but this happens on the server).

I originally offhandedly referred to this as Domino no longer shipping with a JDK, but I think the real situation was that Domino always used a JRE, but then had tools.jar added in to provide the compiler, so it was something in between. That's why you don't see javac in the jvm/bin directory even on older releases. However, tools.jar there provided a compiler programmatically via javax.tools.ToolProvider.getSystemJavaCompiler() - on a normal JRE, this API call returns null, while on a JDK (or with tools.jar present) it'd return a usable compiler. tools.jar as such doesn't exist anymore, but the same functionality is bundled into the newer-era weird runtime archives for efficiency's sake.
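
If you want to check what a given runtime provides, the test is the standard API and nothing Domino-specific:

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CompilerCheck {
	public static void main(String[] args) {
		JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
		// Prints null on a JRE-style runtime, a compiler instance on a JDK
		System.out.println(compiler);
	}
}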

So this is something of a showstopper for me at the moment. JSP uses this system compiler, and so does the XPages Bazaar. The NSF ODP Tooling project uses the Bazaar to do its XSP -> Java -> bytecode compilation of XPages, and so the missing compiler will break server-based compilation (which is often the most reliable compilation currently). And actually, NSF ODP Tooling is kind of double broken, since importing non-raw Java agents via DXL is currently broken on Domino 14 for the same reason.

I'm pondering my options here. Ideally, Domino will regain its compiler - in theory, HCL could switch to a JDK and call it good. I think this is the best route both because it's the most convenient for me but also because it's fairly expected that app servers run on a JDK anyway, hence JSP relying on it in the first place.

If Domino doesn't regain a compiler, I'll have to look into something like including ECJ, Eclipse's redistributable Java compiler. I'd actually looked into using that for NSF ODP Tooling on macOS, since Mac Notes lost its Java compiler a long time ago. The trouble is that its mechanics aren't quite the same as the standard one, and it makes some more assumptions about working with the local filesystem. Still, it'd probably be possible to work around that... it might require a lot more of extracting files to temporary directories, which is always fiddly, but it should be doable.

Still, returning the built-in one would be better, since I'm sure I'm not the only one with code assuming that that exists.

Overall

Despite the compiler bit, I've found things to hold together really well so far. Admittedly, I've so far only been using the container for the integration tests in the XPages JEE project, so I haven't installed Designer or run larger apps on it - it's possible I'll hit limitations and gotchas with other libraries as I go.

For now, though, I'm mostly pleased. I'm quite excited about the possibility of making more-dramatic updates to basically all of my projects. I have a branch of the XPages JEE project where I've bumped almost everything up to its Jakarta EE 10 versions, and there are some big improvements there. Then, of course, I'm interested in all the actual language updates. It's been a very long time since Java 8, and so there's a lot to work with: better switch expressions, text blocks, some pattern matching, records, the much-better HTTP client, and so forth. I'll have a lot of code improvement to do, and I'm looking forward to it.

Weekend Tinkering With Traefik

Mon May 29 11:57:34 EDT 2023

Tags: docker

For my D&D group, we've been using the venerable Roll20 for a good long time. It's served us okay, but it's barely improved for our needs over the years and our eyes have been wandering. Specifically, our eyes wandered over to Foundry VTT. Foundry has a lot going for it: it's sharp-looking, it has tons of mods, and you can host it yourself.

So, a bit ago, I set up just such an instance, making a Docker container out of it on one of my Linode servers and configuring my nginx reverse proxy on another Linode to point to it. There was a little fiddling to be done to my usual setup to make sure it passes along the WebSocket stuff, but it worked.

However, when we put it to the test, the DM side seemed slow, in a way that could be readily attributable to the fact that there's an extra network hop between the reverse proxy and the WebSocket destination. To lessen that as a possibility, I decided I should point the DNS directly to the host running it, eliminating the hop.

My first plan was to do the same thing I had with the larger setup, but just locally: spin up nginx and pair it with certbot on a cron job to handle the HTTPS certificates. However, it's been a long time since I had developed my current standard setup and I figured there's probably a nicer way to do it, since this is a very normal case.

Traefik

And so my eyes turned to Traefik, a purpose-built tool for this sort of thing. It has a lot of nice fiddly options, but one of its cleanest uses is to deploy it as a Docker container and have it use the Docker socket for picking up configuration to route to other containers.

I ended up with a Compose configuration that's more-or-less right out of any tutorial you'd find for this:

version: "3.3"
services:
  traefik:
    image: "traefik:v2.10"
    privileged: true
    userns_mode: host
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.leresolver.acme.email=<my email>"
      - "--certificatesresolvers.leresolver.acme.httpchallenge.entrypoint=web"
      - "--certificatesresolvers.leresolver.acme.storage=/letsencrypt/acme.json"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "letsencrypt:/letsencrypt"
    networks:
      - traefiknet
networks:
  traefiknet:
    name: traefiknet
    external: true
volumes:
  letsencrypt: {}

You can configure Traefik with configuration files as well, but the route I'm taking is to pass the config I need in the command parameters, so the entire thing is specified in the Compose file. I have it configured here to use Docker for its configuration discovery, to listen on ports 80 and 443, and to enable a Let's Encrypt resolver. On that last point, it really handles basically everything automatically: if you have an app that declares itself as "app.foo.com" on the HTTPS endpoint, Traefik will pick up on that and automatically do the dance with Let's Encrypt to present the certificate.

I created a Docker network named "traefiknet" for this and all participating apps to sit in. You can also do this by using host networking, but I kind of like this way.
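
Since the Compose files declare that network as external, it has to exist before either stack comes up:

docker network create traefiknet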

Foundry

With that set up, my next step was to configure Foundry to participate in this. I tweaked the Foundry Compose config to remove the published port, join the common network, and to include Traefik information in its labels:

version: "3.8"

services:
  foundry:
    image: felddy/foundryvtt:release
    init: true
    restart: always
    volumes:
      - foundry_data:/data
    networks:
      - traefiknet
    environment:
      - "FOUNDRY_USERNAME=<my username>"
      - "FOUNDRY_PASSWORD=<my password>"
      - "FOUNDRY_ADMIN_KEY=secret-admin-key"
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=traefiknet"
      
      - "traefik.http.routers.vtt-example.rule=Host(`my.vtt.host`)"
      - "traefik.http.routers.vtt-example.entrypoints=websecure"
      - "traefik.http.services.vtt-example.loadbalancer.server.port=30000"
      - "traefik.http.routers.vtt-example.tls=true"
      - "traefik.http.routers.vtt-example.tls.certresolver=leresolver"
      - "traefik.http.routers.vtt-example.tls.domains[0].main=my.vtt.host"
volumes:
  foundry_data: {}
networks:
  traefiknet:
    name: traefiknet
    external: true

The labels are the meat of it here. I declare that the container participates in the Traefik configuration and will be accessible via the "traefiknet" network I created. Then, I have bits to describe the specific routing. Here, "vtt-example" is an arbitrary name that I picked for this routing config - mostly, it's important that it's distinct from other routing configurations, but otherwise you can pick whatever.

The .rule=Host(my.vtt.host) bit is enough to map all requests beneath that host name to this container. There are other ways to do this - by path, by headers, and other things, or a combination thereof - but this suffices for my needs. This handles the normal sensible defaults for such a thing, including passing WebSockets through nicely. With .entrypoints=websecure, I have it opt in to the HTTPS port (left out of this is that I have another container that configures blanket HTTP -> HTTPS redirection for all hosts). With .loadbalancer.server.port (under "services" instead of "router"), I can declare that the Foundry app is listening on port 30000 within the container.

The .tls bits declare that this should get a TLS certificate, that it should use the Let's Encrypt resolver (by the name I chose, "leresolver"), and that it should use the domain I specified for it. In theory, I think it should pick up on that domain from the Host rule, but in my setup that didn't work for me - it's possible that that was just due to teething problems in my config, though.

Conclusion

I haven't yet had the opportunity to see if this fixed the sluggishness problem, but I'm glad it gave me the impetus to tinker with this. While I'll probably keep using nginx for most of my configuration (some of my configs are a lot more fiddly than this), I really like this as a default for on-host routing. If you combine that with my overall move to figuring that all server software should be deployed in a container unless you have a good reason to do otherwise, this slots in very nicely. I really like how the configuration is distributed away from the reverse proxy and to the apps that are actually being proxied to. With that, you can see everything you need in one place: you know the proxy is out there somewhere, and now the app's Compose file has everything important right in it. So, if you have a need, I'd say give it a look - it's quite neat.

XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support

Thu May 25 15:08:31 EDT 2023

Tags: jakartaee
  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall
  11. Adding Concurrency to the XPages Jakarta EE Support Project
  12. Adding Transactions to the XPages Jakarta EE Support Project
  13. XPages Jakarta EE 2.9.0 and Next Steps
  14. XPages JEE 2.11.0 and the Javadoc Provider
  15. The Loose Roadmap for XPages Jakarta EE Support
  16. XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support
  17. XPages JEE 2.13.0
  18. XPages JEE 2.14.0
  19. XPages JEE 2.15.0 and Plans for JEE 10 and 11

Last week, I put up version 2.12.0 of the XPages JEE Support project. Beyond the usual fit-and-finish bits here and there, there are two main improvements in this release.

Jakarta NoSQL Views

A while back, I caved to the necessity of explicit view use in Domino by adding the @ViewEntries and @ViewDocuments annotations that you can use in DominoRepository instances to point to a view to read. In the normal case, this works well: you generally know what the view you want to read from is, and these are made for that purpose.

However, you don't always know the view or folder you want to read from. The classic case here is a mail file: a user can make a bunch of custom views and folders, and so, if you were to make a web UI for this, you'd need some way to read these arbitrarily. So, to account for that, I added two new methods available on all DominoRepository instances:

Stream<T> readViewEntries(
	String viewName,
	int maxLevel,
	boolean documentsOnly,
	ViewQuery viewQuery,
	Sorts sorts,
	Pagination pagination
);

Stream<T> readViewDocuments(
	String viewName,
	int maxLevel,
	boolean distinct,
	ViewQuery viewQuery,
	Sorts sorts,
	Pagination pagination
);

These work similarly to using the annotations: the first three parameters in each correspond to the properties you can set on the annotations, while the last three are the implicitly-supported optional parameters on such a method. The results of calling them are the same as if you had called an annotated method - it's just that the calling code is a bit more detailed.
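
As a hedged usage sketch (the entity type, folder name, and parameter values here are hypothetical, and I'm assuming null is acceptable for the optional trailing parameters):

// Read the documents from an arbitrary user-created folder
Stream<MailMessage> messages = repository.readViewDocuments(
	"Projects\\Active", // view or folder name
	-1,                 // maxLevel, as on the annotation
	false,              // distinct
	null, null, null    // ViewQuery, Sorts, Pagination - the optional parameters
);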

The other piece of this puzzle is that you'll need to know what views are available, say for a sidebar. To account for that, I added this method:

Stream<ViewInfo> getViewInfo();

This will return information about all the views and folders in the database referenced by the DominoRepository instance. It doesn't try to be too smart: it will query all views and folders, without trying to parse out selection formulas for references to the repository's form, since that would be error prone in the normal case and outright wrong in edge cases (like if you have "synthetic" entity types that don't reference a real form at all). The information you get here is what you'd likely expect: view name, whether it's a view or folder, selection formula, column info, and so forth.

Jakarta Faces and PrimeFaces

I'm calling this one "PrimeFaces" since that's the immediate goal of these changes, but it's really about allowing for third-party Faces (JSF) extensions and themes without having to jump through too many hoops.

The challenge with PrimeFaces and things like it is that, while the Java packages for JSF no longer conflict with XPages (javax.faces and jakarta.faces are clearly related, but Java considers them entirely distinct), not all of the implementation bits changed. The big one here is WEB-INF/faces-config.xml: that file goes by the same name for XPages and JSF, but any Faces lifecycle participants declared in there (ViewHandlers, PhaseListeners, etc.) are not at all compatible.

To account for this, I've carved out a subdirectory, WEB-INF/jakarta. Within that, you can put JARs in WEB-INF/jakarta/lib and make a file named WEB-INF/jakarta/faces-config.xml. When present, the new-JSF runtime will pick up on these libraries, while XPages won't, and the runtime will also redirect calls to WEB-INF/faces-config.xml from JSF to WEB-INF/jakarta/faces-config.xml. In this way, you're able to have advanced extensions for both frameworks in the same NSF.

This isn't without its necessary workarounds, though. The big one comes in if you want to reference classes from these JSF-specific libraries in Java design elements. Since Designer's classpath won't know about them, your safest bet is to access them reflectively. For example, I ported a JSF example app from rieckpil.de to an NSF. In this, almost all of the code is identical - other than removing some EJB bits (which is not part of the XPages JEE project), the majority of the code was unchanged. However, one of the classes, IndexBean, directly referenced PrimeFaces model classes in order to build the bar chart. Think of that as similar to when you use com.ibm.xsp.model.DataObject in XPages code: it's a UI-specific class that can help bridge the difference between your stuff and the UI. However, since Designer doesn't know about those classes at build time, I had to change the calls to stuff like barChartModelClass.getMethod("setSeriesColors", String.class).invoke(model, "007ad9");. Not unworkable, but definitely ungainly. In a cruel twist of fate, this is exactly the sort of time when a JVM scripting language like SSJS shines. Alas.

As a final note, I waffled a bit (and am still waffling) on whether it'd be worth wrapping libraries like PrimeFaces in OSGi bundles, potentially as an optional add-on project. The way it's done here - including the JARs in your "webapp" - is more or less the standard way to do it, but real current projects would use a dependency mechanism like Maven instead of manually adding the JAR. On the other hand, there's a distinct benefit this way in that you can pick your version without having to do anything server-wide, and the use of a side directory means you don't suffer from Designer's poor performance when using JARs on a non-local server. Still, I may at least add an extension point for JSF classpath extensions at some point, since it could be useful.

Next Versions

As I mentioned earlier this month, this project is in some ways waiting for the Domino 14 beta cycle to properly begin, which will allow me to make some significant long-desired changes.

Still, there'll probably be at least another release before 3.x, which is currently named 2.13.0. Beyond ideally having no-app-changes support for Java 17, I've been doing some tinkering with JavaSapi, with the idea of being able to have your app code participate in filtering and authenticating requests. As with anything related to JavaSapi, it's sort of inherently-treacherous territory, considering it's not an official feature of Domino, but I've had some promising (if crash-prone) success so far. I'll probably also want to consolidate some of my handling of individual components and how they're configured in the NSF. There'll be a bigger push for that in 3.x, but for now there's still definitely room for me to go back and clean up some of the ways I've gone about things. The specs I added early (CDI, JAX-RS, etc.) are a bit more ad-hoc than some of the newer ones, with the newer ones coalescing more around the ComponentModule part (Domino's Java conception of a running app, NSF or otherwise) and less around the XPages ApplicationEx part. There's an inherent amount of necessary grime with this stack, but I have some ideas for at least some cleaning.

Otherwise, I'm mostly champing at the bit to do my big revamps in 3.x: lowering the count of individual XPages Libraries that separate the features, bumping specs and implementations to their next major versions, improving the code with Java 9 through 17 enhancements, and so forth. That should be fun.