Planning a Homelab Rework

Sun Jun 25 20:39:37 EDT 2023

  1. Planning a Homelab Rework
  2. Homelab Rework: Phase 1
  3. Homelab Rework: Phase 2
  4. Homelab Rework: Phase 3 - TrueNAS Core to Scale

Over the years, I've shifted how I use my various computers here, gaining new ones and shifting roles around. My current setup is a mix of considered choices and coincidence, but it works reasonably well. "Homelab" may be a big term for it, but I have something approaching a reasonable array:

  • nemesis, a little Netgate-made device running pfSense and serving as my router and firewall
  • trantor, a first-edition Mac Pro running TrueNAS Core
  • terminus, my gaming PC that does double duty as my Hyper-V VM host for the Windows and Linux VMs I use for work
  • nephele, a 2013 MacBook Air I recently conscripted to be a Docker host running Debian
  • tethys, my current desktop workhorse, an M2 Mac mini running macOS
  • ione, an M1 MacBook Air with macOS that does miscellaneous and travel duty

While these things work pretty well for me, there are some rough edges. tethys and ione need no changes, but others are up in the air.

The NAS

trantor's Xeon is too old to run the bhyve hypervisor used by TrueNAS Core, so I can't use it for VM duty. It can run jails, which I've historically used for some apps like Plex and Postgres, and those work well. Still, as much as I like FreeBSD, I have to admit that running Linux there would get me a bit more, such as another Domino server (I got Domino running under Linux binary compatibility, but only in a very janky and unreliable way). I'm considering side-grading it to TrueNAS Scale, but the extremely high likelihood of trouble with that switch on such an odd, old machine gives me pause. As a NAS, it's doing just fine as-is, and I may leave it like that.

The Router

nemesis is actually perfectly fine as it is, but I'm pondering giving it more work. While Linode is still doing well for me, its recent price hike left a bad taste in my mouth, and I wouldn't mind moving some less-essential services to my own computers here. If I do that, I could either map a port through the firewall to an internal host (as I've done previously) or, as I'm considering, run HAProxy directly on it and have it serve as a reverse proxy for all machines in the house. On the one hand, it seems incorrect and maybe unwise to have a router do double duty like this. On the other, since I'm not going to have a full-fledged DMZ or anything, it's perfectly positioned to do the job. Assuming its little ARM processor is up to the task, anyway.
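If I went that route, the HAProxy side could be as simple as hostname-based routing along these lines (the hostnames and addresses here are entirely made up for illustration):

```
# Hypothetical haproxy.cfg fragment: route requests by hostname to
# machines inside the network. Names and addresses are illustrative.
frontend www-in
    mode http
    bind :80
    acl host_media hdr(host) -i media.example.com
    use_backend trantor_http if host_media
    default_backend nephele_http

backend trantor_http
    mode http
    server trantor 192.168.1.10:8080

backend nephele_http
    mode http
    server nephele 192.168.1.20:8080
```

A real config would also want TLS termination on the frontend, but the shape of the job is about this small.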

Fortunately, this one is kind of speculative anyway. While I used to run some outwardly-available services from home, those have fallen by the wayside, so this would be for a potential future case where I do something different.

The VM Host/Gaming PC

terminus, though, is where most of my thoughts have been spent lately. It's running Windows Server 2019, an artifact of a time when I thought I'd be doing more Server-specific stuff with it. Instead, though, the only server task I use it for is Hyper-V, and that runs just the same on client Windows. The straightforward thing to do with it would be to replace Windows Server with Windows 11 - that'd put me back on a normal release train, avoid some edge cases of programs complaining about OS compatibility, and generally be a bit more straightforward. However, the only reason I have Windows as the top-level OS on there at all is for gaming needs. Moreover, it just sits in the basement, and I don't even play games directly at its console anymore - it's all done through Parsec.

That has me thinking: why don't I demote Windows to a VM itself? The processor has a built-in video chipset for basic needs, and I could use PCIe passthrough to give full control of my video card to a Windows VM. I could pick a Linux variant to be the true OS for the machine, run VMs and containers as top-level peers, and still get full speed for games inside the Windows VM.
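The host-side prep for that kind of passthrough is fairly mechanical; a sketch, assuming an Intel CPU and discrete GPU (the PCI IDs below are placeholders, not my actual hardware):

```
# /etc/default/grub (hypothetical values): enable the IOMMU and reserve the
# GPU for the vfio-pci driver so a VM can claim it exclusively.
# Find the real vendor:device IDs for your card with `lspci -nn`.
GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt vfio-pci.ids=10de:1b81,10de:10f0"
```

After a grub update and reboot, the card is invisible to the host and available to hand to the Windows guest.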

I have a few main contenders in mind for this job.

The frontrunner is Proxmox, a purpose-built Debian variant geared towards running VMs and Linux Containers as its bread and butter. Though I don't need a lot of its features, it's tough to argue with an OS that's specifically meant for the task I'm thinking of. However, it's a little sad that the containers I generally run aren't LXC-type containers, and so I'd have to make a VM or container to then run Docker. In the latter form, there wouldn't be too much overhead to it, but it'd be kind of ugly, and it'd feel weird to do a full revamp and then immediately do one of the main tasks only in a roundabout way.

That leads to my dark-horse candidate of TrueNAS Scale. This isn't as obvious a choice, since this machine isn't intended to be a NAS, but Scale bumps Kubernetes and Docker containers to the top level, and that's exactly how I'd plan to use it. I assume that the VM management in TrueNAS isn't as fleshed out as Proxmox, but it's on a known foundation and looks plenty suitable for my needs.

Finally, I could just skip the "utility" distributions entirely and just install normal Debian. It'd certainly be up to the task, though I'd miss out on some GUI conveniences that I'd otherwise use heavily. On the other hand, that'd be a good way to force myself to learn some more fundamentals of KVM, which is the sort of thing that usually pays off down the line.
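As a taste of what the plain-Debian route looks like, creating the Windows guest with libvirt's tooling might be roughly this (the name, paths, sizes, and PCI address are all hypothetical):

```
# Hypothetical: define a Windows VM under libvirt, passing through the GPU
# at PCI address 01:00.0. Requires the virtinst package and a Windows ISO.
virt-install \
  --name win11-gaming \
  --memory 16384 --vcpus 8 \
  --cdrom /var/lib/libvirt/images/win11.iso \
  --disk size=128 \
  --os-variant win11 \
  --hostdev pci_0000_01_00_0
```

The GUI conveniences I'd miss are mostly wrappers around commands like this, which is part of the educational appeal.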

For Now

Or, of course, I could leave everything as-is! It does generally work - Hyper-V is fine as a hypervisor, and it's only sort of annoying having an extra tier in there. The main thing that will force me to do something is that I'll want to ditch Windows Server regardless, even if I just do an in-place switch to Windows 11. We'll see!

Kicking the Tires on Domino 14 and Java 17

Fri Jun 02 13:50:53 EDT 2023

Tags: domino java

As promised, HCL launched the beta program for Domino 14 the other day. There's some neat stuff in there, but I still mostly care about the JVM update.

I quickly got to downloading the container image for this to see about making sure my projects work on it, particularly the XPages JEE Support project. As expected, I had a few hoops to jump through. I did some groundwork for this a while back, but there was some more to do. Before I get to some specifics, I'll mention that I put up a beta release of 2.13.0 that gets almost everything working, minus JSP.

Now, on to the notes and tips I've found so far.

AccessController and java.policy

One of the changes in Java 17 specifically is that our old friend AccessController and the whole SecurityManager framework are marked as deprecated for removal. Not a minute too soon, in my opinion - that was an old relic of the applet days, was never useful for app servers, and has always been a thorn in the side of XPages developers.

However, though it's deprecated, it's still present and active in Java 17, so we still have to deal with it. One important thing to note is that java.policy moved from the "jvm/lib/security" directory to "jvm/conf/security". A while back, I switched to putting .java.policy in the Domino user's home directory, and this remains unchanged; I suggest going this route even with older versions of Domino, but editing java.policy in its new home will still work.
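For anyone going the home-directory route, the file itself can be tiny. A maximally-permissive example (broad, but it matches what many Domino installations effectively end up with; scope it down if you can) looks like:

```
/* ~/.java.policy for the Domino user: grant all permissions to all code.
   Tighten the grant blocks if you want finer-grained control. */
grant {
    permission java.security.AllPermission;
};
```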

Extra JARs and jvm/lib/ext

Another change is that "jvm/lib/ext" is gone, as part of a general desire on Java's part to discourage installing system-wide libraries.

However, since so much Java stuff on Domino is older than dirt, having system-wide custom libraries is still necessary, so the JVM is configured to bring in everything from the "ndext" directory in the Domino program directory in the same way it used to use "jvm/lib/ext". This directory has actually always been treated this way, and so I started also using this for third-party libraries for older versions, and it'll work the same in 14. That said, you should ideally not do this too much if it's at all avoidable.

The Missing Java Compiler

I mentioned above that everything except JSP works on Domino 14, and the reason for this is that V14 as it is now doesn't ship with a Java compiler (JSP, for historical reasons, does something much like XPages in that it translates to Java and compiles, but this happens on the server).

I originally offhandedly referred to this as Domino no longer shipping with a JDK, but I think the real situation was that Domino always used a JRE, but then had tools.jar added in to provide the compiler, so it was something in between. That's why you don't see javac in the jvm/bin directory even on older releases. However, tools.jar there provided a compiler programmatically via javax.tools.ToolProvider.getSystemJavaCompiler() - on a normal JRE, this API call returns null, while on a JDK (or with tools.jar present) it'd return a usable compiler. tools.jar as such doesn't exist anymore, but the same functionality is bundled into the newer-era weird runtime archives for efficiency's sake.
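That ToolProvider behavior is easy to check for yourself; this small sketch reports whether the current runtime bundles a compiler (on Domino 14 as it stands, the expectation is that it does not):

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CompilerCheck {
    // Returns true when the runtime bundles a Java compiler (a JDK, or an
    // old-style JRE with tools.jar grafted on); false on a plain JRE.
    public static boolean hasSystemCompiler() {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        return compiler != null;
    }

    public static void main(String[] args) {
        if (hasSystemCompiler()) {
            System.out.println("System compiler present");
        } else {
            System.out.println("No system compiler - JSP-style compilation will fail");
        }
    }
}
```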

So this is something of a showstopper for me at the moment. JSP uses this system compiler, and so does the XPages Bazaar. The NSF ODP Tooling project uses the Bazaar to do its XSP -> Java -> bytecode compilation of XPages, and so the missing compiler will break server-based compilation (which is often the most reliable compilation currently). And actually, NSF ODP Tooling is kind of double broken, since importing non-raw Java agents via DXL is currently broken on Domino 14 for the same reason.

I'm pondering my options here. Ideally, Domino will regain its compiler - in theory, HCL could switch to a JDK and call it good. I think this is the best route both because it's the most convenient for me but also because it's fairly expected that app servers run on a JDK anyway, hence JSP relying on it in the first place.

If Domino doesn't regain a compiler, I'll have to look into something like including ECJ, Eclipse's redistributable Java compiler. I'd actually looked into using that for NSF ODP Tooling on macOS, since Mac Notes lost its Java compiler a long time ago. The trouble is that its mechanics aren't quite the same as the standard compiler's, and it makes some more assumptions about working with the local filesystem. Still, it'd probably be possible to work around that... it might require a lot more extracting of files to temporary directories, which is always fiddly, but it should be doable.
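One wrinkle in ECJ's favor: its jar registers itself as a javax.tools.JavaCompiler service, so a fallback lookup could stay within the standard API. A hedged sketch of that pattern (assuming ecj is on the classpath when the system compiler is missing):

```java
import java.util.ServiceLoader;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CompilerLocator {
    // Prefer the runtime's own compiler; if this is a compiler-less JRE,
    // fall back to any JavaCompiler registered as a service (such as ECJ).
    public static JavaCompiler findCompiler() {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        if (compiler != null) {
            return compiler;
        }
        for (JavaCompiler candidate : ServiceLoader.load(JavaCompiler.class)) {
            return candidate;
        }
        return null;
    }
}
```

This wouldn't paper over ECJ's filesystem assumptions, but it would at least keep the calling code on the javax.tools interfaces.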

Still, returning the built-in one would be better, since I'm sure I'm not the only one with code assuming that that exists.

Overall

Despite the compiler bit, I've found things to hold together really well so far. Admittedly, I've so far only been using the container for the integration tests in the XPages JEE project, so I haven't installed Designer or run larger apps on it - it's possible I'll hit limitations and gotchas with other libraries as I go.

For now, though, I'm mostly pleased. I'm quite excited about the possibility of making more-dramatic updates to basically all of my projects. I have a branch of the XPages JEE project where I've bumped almost everything up to their Jakarta EE 10 versions, and there are some big improvements there. Then, of course, I'm interested in all the actual language updates. It's been a very long time since Java 8, and so there's a lot to work with: better switch expressions, text blocks, some pattern matching, records, the much-better HTTP client, and so forth. I'll have a lot of code improvement to do, and I'm looking forward to it.