Realmz

Sun Mar 31 11:35:14 EDT 2024

For a while now, I've wanted to just kind of gush about an old Mac game I played when I was a teenager, and the last day of Marchintosh for the year is as good a time as any.

Overview

Realmz is a game that ran on the classic Mac OS and, in later versions, Windows. It was shareware at the time - one of the few shareware games I ended up cobbling together the money for - but has long been made fully available for free, with my go-to source being the Macintosh Garden. If you have SheepShaver around, it works nicely there.

The game itself is quickly identified as a party-based fantasy RPG. I didn't really realize it at the time, but it's a full-on CRPG in the nerdiest sense. I mean, look at this freaking character sheet:

Screenshot of the Realmz character creation sheet

While it's not strictly D&D rules, it basically is. Older versions (which are also available on the Macintosh Garden) even used THAC0 before switching to an "Armor Rating" system.

CRPG

Looking back, I'm glad I had an experience with such a true-blood CRPG at the time. I didn't play D&D growing up, didn't play the Gold Box games, and was too busy playing pretty much exclusively Blizzard games to play the Infinity Engine games or Neverwinter Nights when they came out. It wasn't really until Dragon Age: Origins and then (especially) Pillars of Eternity that I realized the glory of the genre. But looking at Realmz, it's obvious that it's right in the same lineage.

Combat is strictly turn-based, takes place on a grid, and is suitably technical:

Screenshot of a combat situation in Realmz

It even does some of the weird stuff: for example, martial characters won't just get multiple attacks per round, but will also get "partial" steps like my rogue Hebs there, who gets three attacks every two rounds, as a stepping stone to 2 / 1.

Realmz also has its own mechanics-heavy take on the thing CRPGs always try to do: emulating the sort of open-ended experience, beyond just combat, that a DM might oversee. For example, early on, you meet a kid who wants you to help his dog, which is stuck in a well. When you get there, you're presented with the "encounter" screen, where you can try all sorts of things:

Screenshot of the Realmz encounter screen

There are a lot of ways to deal with these encounters. In this case, I might have Galba there do an Acrobatic Act, which has about even odds. My sorcerer Fenton there might use a Spider Climb (might not be the name) spell to make scaling the well effortless. Or, if I stocked up, I might just use a rope. You can easily fail this - if you do, the kid runs off crying and you have to wait for the guards to show up to help you, with no experience gain. Realmz has a bunch of these scenarios and they're pretty neat. Admittedly, they fall short in the ways that all non-DM-run games eventually do, where your actual options aren't truly limitless. The "Speak" option is available in other situations, but it's only ever really practical if you have, say, a magic word to open a door or something. It's not a true tabletop experience, but it's trying, bless its heart.

Mac-ness

One thing I really enjoy about games from the heyday of Mac shareware (by the way, read The Secret History of Mac Gaming if you haven't - it's great) is how thoroughly Mac-like they are. For both practical and cultural reasons, a lot of Mac games didn't necessarily take over the whole screen with their own interface the way DOS and Windows games usually did. While there are some Windows games that use the Windows UI, like another small classic, Castle of the Winds, the practice is much more common in Mac games. For example, there's Scarab of Ra:

Screenshot of Scarab of Ra from Macintosh Garden

As it happens, Scarab of Ra is another game where I didn't appreciate its lineage at the time: it's a true roguelike, albeit with a first-person perspective.

Realmz doesn't go quite as hard in using native widgets for everything, but you can see the menu bar in earlier screenshots - you use the normal Mac menu to access game commands, the bestiary, your ally list, your collected notes, and so forth. It's just neat. Also, like a lot of Mac software at the time, Realmz's program directory is just a delight to look at:

Screenshot of the Realmz 2.5 installation folder

My use of the "Drawing Board" Appearance Manager theme helps it too, but just check out those icons. That sort of thing wasn't strictly necessary, but it was the Mac way, and it was wonderful.

Versions

And this isn't exactly a Mac-like attribute, but I like that Realmz wasn't afraid of using version numbers. It went from version 1.x all the way up through 8.x, with minor and patch versions along the way. It was updated all the time, and it was always exciting to see a new major version and find out what the big changes were.

Mostly, the changes were things like adding classes: the old versions have the same sort of handful you'd find in basic D&D, while the later ones have so many that you can pick between "Archer" and "Marksman" or "Bard" and "Minstrel". Some of the changes were less like finding a D&D source book and more like the game gradually morphing into its own sequel, though.

For example, the original versions didn't have music of any kind, as was the style at the time. Somewhere along the line (version 5, I think), it gained music, and... boy, it's a doozy. Here's, for example, the camping music:

What I assume happened is that the developer wanted to add some music and then found some free or cheap module files and slapped them in there where they kind of work. The tone is absolutely bizarre, and it's kind of great for it.

It was just neat seeing the game progress, with the changes in systems and new features, even the "eh, not the best idea" stuff like parts of dungeons that switch to a first-person mode.

Scenarios

I have to admit that, though I played a ton of Realmz, I never even got that far into it. A big part of that was that the scenarios beyond the starting City of Bywater cost money on top of the core game, and it's a tall order for a cash-strapped teenager to cough up money at all, let alone add-on costs. So my characters probably always plateaued around level 10, as there's just not that much to do in the base scenario. That's good news for future me, though: there's a ton of stuff waiting for me whenever I want to get back into it.

If you have a taste for older games or CRPGs in general, I definitely suggest you give it a try. The Windows version may be easier to run than the Mac one, though it loses some of the appeal. Depending on your temperament, Realmz may be easier to get into from scratch than games like, say, the original Baldur's Gate, with the latter's janky RTwP combat and constant fourth-wall-breaking dweebiness. Definitely keep it in mind for a cozy-day game, I say.

Adding Code Coverage Reports To Domino-Container-Run Tests

Mon Mar 11 15:33:02 EDT 2024

Tags: docker testing

When you're writing test suites for your code, it can be very useful to use a tool to analyze the code coverage of your tests. While people can get a little obsessive about coverage percentages, there's certainly no denying that it's helpful to know how much of your code is actually run when testing, and to be able to drill down into the specifics of what is covered.

With Java, one of the preeminent tools for this is JaCoCo, a venerable open-source library that you can integrate with your test suites to give reports of your coverage. In a normal project, such as a build run via Maven, you can use the Maven plugin in tandem with the Maven Surefire and Failsafe plugins. However, things get more complicated if the code you're actually testing isn't in the Surefire JVM, but rather inside a container.
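For reference, that conventional setup is mostly just the jacoco-maven-plugin's prepare-agent goal, which wires the agent into the test JVM's argLine for Surefire and Failsafe to pick up, plus a report goal; roughly:

<plugin>
	<groupId>org.jacoco</groupId>
	<artifactId>jacoco-maven-plugin</artifactId>
	<version>0.8.11</version>
	<executions>
		<execution>
			<goals>
				<goal>prepare-agent</goal>
			</goals>
		</execution>
		<execution>
			<id>report</id>
			<phase>verify</phase>
			<goals>
				<goal>report</goal>
			</goals>
		</execution>
	</executions>
</plugin>

None of that helps when the code under test lives in a container, though, which is where the rest of this comes in.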

That's exactly the situation I have with the integration-test suite of the XPages Jakarta EE project, where it creates a Docker container with the current build of the project deployed as OSGi plugins, and then executes HTTP calls against OSGi bundles and NSFs. I figured this was a solvable problem, so I set out doing so.

I first came across this blog post, which describes the general idea well, but unfortunately references Gists that no longer seem to exist. Still, it gave me a good starting point.

Installing JaCoCo in Domino

The first thing I had to do was to get the JaCoCo Java agent into the container. I added it as a Maven dependency to the IT suite project:

<dependency>
	<groupId>org.jacoco</groupId>
	<artifactId>org.jacoco.agent</artifactId>
	<version>0.8.11</version>
	<scope>test</scope>
</dependency>

Conveniently, this dependency is itself a wrapper for the agent JAR and comes with a convenience method for accessing the JAR data. I used that to read it into memory and send it to the Docker runtime during the container build:

byte[] agentData;
try(InputStream is = AgentJar.getResourceAsStream()) {
	agentData = IOUtils.toByteArray(is);
}
// Add the agent JAR to the image build context via Testcontainers' Transferable support
withFileFromTransferable("staging/jacoco.jar", Transferable.of(agentData)); //$NON-NLS-1$

The use of Transferable here allows me to keep the process independent of whether Docker is running locally or remote - I run remotely almost all the time nowadays, due to Domino's continued lack of an ARM port.

With the file in place, I modified my Dockerfile to copy it to a known location in the container:

COPY --chown=notes:notes staging/jacoco.jar /local/
COPY --chown=notes:notes staging/JavaOptionsFile.txt /local/

The JavaOptionsFile.txt was already there for another ARM-related reason, but it's important to note for the next step. A file like this is how you pass extra options to the Domino JVM, which is what enables JaCoCo: I set JavaUserOptionsFile=/local/JavaOptionsFile.txt in the server's notes.ini, and the JVM reads its additional arguments from there. Following the instructions, I added -javaagent:/local/jacoco.jar=output=file,destfile=/tmp/jacoco.exec on its own line in this file. This causes JaCoCo to be loaded automatically with the HTTP JVM and to store its report in the named file on shutdown.
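Concretely, that means notes.ini has this line:

JavaUserOptionsFile=/local/JavaOptionsFile.txt

...and /local/JavaOptionsFile.txt itself ends up containing, among its other options, the line:

-javaagent:/local/jacoco.jar=output=file,destfile=/tmp/jacoco.exec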

Reading the Data

That said, this didn't work immediately. The file "/tmp/jacoco.exec" was created properly inside the container, so the agent was running, but the file content was always zero bytes. I realized that this was due to the merciless way in which the container is killed by my test suite: there's no proper shutdown step, and so JaCoCo's shutdown hook never fires.

Fortunately, writing to a file isn't the only way JaCoCo can do its reporting - you can also have it open up a TCP port to connect to and read. So I changed the Java option line to:

-javaagent:/local/jacoco.jar=output=tcpserver,address=*,port=6300

I modified the withExposedPorts(...) call inside the class that builds my Testcontainers container to also include 6300, and then used getMappedPort(6300) to identify the actual randomized port mapped by Docker.
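In Testcontainers terms, that part looks roughly like this - the dominoImage variable and the surrounding setup are stand-ins for however the project actually builds its container:

import org.testcontainers.containers.GenericContainer;

// Sketch only: dominoImage stands in for the project's real image definition
GenericContainer<?> domino = new GenericContainer<>(dominoImage)
	.withExposedPorts(80, 6300); // 6300 is the JaCoCo tcpserver port
domino.start();

// Docker maps 6300 to a random host port; ask Testcontainers which one it picked
String host = domino.getHost();
int jacocoPort = domino.getMappedPort(6300);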

The remaining task was to figure out the little protocol used by JaCoCo to signal that it should collect and return its data. I get the impression that it's not too complicated, but I still figured it'd be best to use an existing implementation. I found jacocotogo, a Maven plugin that reads the data, and it looked promising. However, it had two problems: being a Maven plugin, it came with a bunch of transitive dependencies I didn't want, and it's also 11 years old and thus a bit out of date.

I ended up forking the main utility class, trimming out the parts I didn't need (like JMX), switching it to NIO, and going from there.
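The result boils down to the same handful of calls that JaCoCo documents for talking to a tcpserver agent. A minimal sketch - based on JaCoCo's own example code and needing the org.jacoco.core artifact on the classpath, not my exact forked class - looks like this:

import java.io.FileOutputStream;
import java.net.Socket;

import org.jacoco.core.data.ExecutionDataWriter;
import org.jacoco.core.runtime.RemoteControlReader;
import org.jacoco.core.runtime.RemoteControlWriter;

public class JaCoCoDumpClient {
	public static void dump(String host, int port, String destFile) throws Exception {
		try(
			Socket socket = new Socket(host, port);
			FileOutputStream out = new FileOutputStream(destFile)
		) {
			// Everything the agent sends back is written straight into the local .exec file
			ExecutionDataWriter localWriter = new ExecutionDataWriter(out);
			RemoteControlWriter remoteWriter = new RemoteControlWriter(socket.getOutputStream());
			RemoteControlReader reader = new RemoteControlReader(socket.getInputStream());
			reader.setSessionInfoVisitor(localWriter);
			reader.setExecutionDataVisitor(localWriter);

			// Ask the agent to dump its current coverage data, without resetting it
			remoteWriter.visitDumpCommand(true, false);
			if(!reader.read()) {
				throw new IllegalStateException("Socket closed unexpectedly while reading coverage data");
			}
		}
	}
}

Calling something like dump(host, jacocoPort, "target/jacoco.exec") at the end of the test run produces the file used below.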

Using the Data

With that all in place, a test run will end up with a file named "jacoco.exec" inside the "target" directory. Using this file varies by IDE, but, in Eclipse, you can install the EclEmma tool, open the "Coverage" view, right-click in the table area, and choose "Import Session...". That will let you locate the file and then choose the projects from your workspace that you're looking to analyze.

When I did that, I got my results:

Screenshot of Eclipse's Coverage tool detailing my test suite's coverage of somewhere around 50-65%

This is surprisingly good for the project, especially when you consider that large chunks of the red bars are things like the servlet wrapper package, which includes a lot of delegating code that is obligatory to match the interface but not likely to actually be used in practice.

While this is currently the only project where I've needed to do this, it'll certainly be good to keep these techniques in mind. The TCP port thing in particular should be handy in future edge cases even without the Docker part.

Homelab Rework: Phase 3 - TrueNAS Core to Scale

Sat Mar 02 14:00:08 EST 2024

  1. Planning a Homelab Rework
  2. Homelab Rework: Phase 1
  3. Homelab Rework: Phase 2
  4. Homelab Rework: Phase 3 - TrueNAS Core to Scale

When I last talked about the ragtag fleet of computers I generously call a "homelab" now, I had converted my gaming/VM machine back from Proxmox to Windows, where it remains (successfully) to this day.

For a while, though, I've been eyeing converting my NAS from TrueNAS Core to Scale. While I really like FreeBSD technically and philosophically, running Linux was very appealing for a number of reasons. Still, it was a high-risk operation, even though the actual process of migration looked almost impossibly easy. For some reason, I decided to take the plunge this week.

The Setup

Before going into the actual process, I'll describe the setup a bit. The machine in question is a Mac Pro 1,1: two Xeon 5150s, four traditional HDDs for storage, and a handful of M.2 drives. The machine itself is far, far too old to have NVMe on the motherboard, but it does have PCIe, so I got a couple adapter cards. The boot volume is a SATA M.2 disk on one of them, while I have some actual NVMe ones serving as cache/log devices in the ZFS pool. Also, though everything says that the maximum RAM capacity is 32 GB, I actually have 64 in there and it's worked perfectly.

It's a bit of a weird beast this way, but those old Mac Pros were built to last, and it's holding up.

Also, if you're not familiar with TrueNAS and its different variants, it's worth a bit of explanation. TrueNAS Core (née FreeNAS) is a FreeBSD-based, NAS-focused OS. You primarily interact with it via a web-based GUI, and its various features heavily revolve around the use of ZFS, while its app system uses FreeBSD jails and its VM system uses bhyve. TrueNAS Scale is a related system, but based on Debian Linux instead of FreeBSD. It still uses ZFS, and its GUI is similar to Core's, but it implements its apps and VMs differently (more on this in a bit). For NAS/file-share uses, there's actually less of a difference than you might think based on their different underlying OSes, but the distinctions come into play once you go beyond the basics.

The Conversion

If anything, the above-linked documentation overstates the complexity of the operation. I didn't even need to go the "manual update" route: I went to the Update panel, switched from the TrueNAS Core train to the current non-beta TrueNAS Scale one, hit Update, and let it go. It took a long time, presumably due to the age of the machine, but it did its job and came back up on its own.

Well, mostly: for some reason, the actual data ZFS pool was sort of half-detached. The OS knew it was supposed to have a pool by its name, but didn't match it up to the existing disks. To fix this, I deleted the configuration for the pool (but did not delete the connected service configuration) and then went to Import Pool, where the real one existed. Once it was imported, everything lined back up without further issue.

Since Scale is basically a completely different OS, there are a number of features that Core supports but Scale doesn't. Of that list, the only one I was using was the plugin/jail system, but I had whittled my use down to just Postgres (containing only discardable dev data) and Plex. Both are readily available in Scale's app system, and it was quick enough to get Plex set back up with the same library data.

Apps

As I mentioned, TrueNAS Core uses a custom-built "plugin" system sitting on top of the venerable FreeBSD jail capabilities. Those jails are similar in concept to things like Docker containers, and work very similarly in practice to the Linux Containers system I experienced with Proxmox.

TrueNAS Scale, for its part, uses Kubernetes, specifically by way of K3s, and provides its own convenient UI on top of it. Good thing it does provide this UI, too, since Kubernetes is a whole freaking thing, and I've up until this point stayed away from learning it. I guess my time has come, though. Kubernetes is distinct from Docker - while older versions used Docker as a runtime of sorts, this was always an implementation detail, and the system in use in current TrueNAS Scale is containerd.

Setting aside the conceptual complexity of Kubernetes, this distinction from Core is handy: while not being Docker, Kubernetes can consume Docker-compatible images and run them, and that ecosystem is huge. Additionally, while TrueNAS ships with a set of common app "charts" (Plex included), there's a community project named TrueCharts that adds definitions for tons and tons more.

Domino

That brings me to our beloved Domino. I had actually kind of gotten Domino running in a jail on TrueNAS Core, but it was much more an exercise in seeing if I could do it than anything useful: the installer didn't run, so I had to copy an installation from elsewhere, and the JVM wouldn't even load up without crashing. Neat to see, but I didn't keep it around.

The prospect on Scale is better, though. For one, it's actually Linux and thus doesn't need a binary-compatibility shim like FreeBSD has, and the container runtime meant I could presumably just use the normal image-building process. I could also run it in a VM, since the Linux hypervisor works on this machine while bhyve did not, but I figured I'd give the container path a shot.

Before I go any further, I'll give a huge caveat: while this works better than running it on FreeBSD, I wouldn't recommend actually doing what I've done for production. While it'll presumably do what I want it to do here (be a local replica of all of my DBs without requiring a distinct VM), it's not ideal. For one, Domino plus Kubernetes is a weird mix: Kubernetes is all about building up and tearing down a swarm of containers dynamically, while Domino is much more of a single-server sort of thing. It works, certainly, but Kubernetes is always there to tempt you into doing weird things. Also, I know almost nothing about Kubernetes anyway, so don't take anything I say here as advice. It's good fun, though.

That said, on to the specifics!

Deploying the Container

The way the TrueNAS app UI works, you can go to "Custom App" and configure your container by referencing a Docker image from a repository. I don't normally actually host a Docker registry, instead manually loading the image into the runtime. It might be possible to do that here, but I took the opportunity to set up a quick local-network-only one on my other machine, both because I figured it'd be neat to learn to do that and because I forgot about the Harbor-hosted option on that link.
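If you've never stood one of those up, the stock registry image makes it easy; something like this on the hosting machine, with the hostname and image tag here being placeholders for my actual ones:

# Run a plain-HTTP registry on the local network
docker run -d --restart=always -p 5000:5000 --name registry registry:2

# Tag the locally-built Domino image for that registry and push it
docker tag domino-container:latest other-machine.local:5000/domino-container:latest
docker push other-machine.local:5000/domino-container:latest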

Since the local registry used HTTP and there's nowhere in the TrueNAS UI to tell it to not use HTTPS, I followed this suggestion to configure K3s to explicitly map it. With that in place, I was able to start pulling images from my registry.
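For reference, the K3s side of that ends up being a registries.yaml entry along these lines, again with a placeholder hostname:

# /etc/rancher/k3s/registries.yaml
mirrors:
  "other-machine.local:5000":
    endpoint:
      - "http://other-machine.local:5000"

K3s reads that file at startup, and from then on images can be referenced from that registry by name in the Custom App form.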

The Domino Version

One quirk I quickly ran into was that I can't use Domino 14 on here. The reason for this isn't an OS problem, but rather a hardware limitation: the new glibc that Domino 14 uses requires the "x86-64-v2" microarchitecture level and the Xeon 5150 just doesn't have that by dint of pre-dating it by two years.
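If you want to check whether a given machine clears that bar, one rough way on any reasonably recent Linux is to look for the CPU flags that make up the v2 level; the Xeon 5150 has ssse3 and cx16 but is missing sse4_1, sse4_2, and popcnt:

# x86-64-v2 needs (among others) cx16, lahf_lm, popcnt, sse4_1, sse4_2, and ssse3
grep -o -w -E 'cx16|lahf_lm|popcnt|sse4_1|sse4_2|ssse3' /proc/cpuinfo | sort -u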

That's fine, though: I really just want this to house data, not app development, and 12.0.2 will do that with aplomb.

Volume Configuration

The way I usually set up a Domino container when using e.g. Docker Compose is that I define a handful of volumes to go with it: for the normal data dir, for DAOS, for the Transaction Log, and so forth. This is a bit of an affectation, I suppose, since I could also just define one volume for everything and it's not like I actually host these volumes elsewhere, but... I don't know, I like it. It keeps things disciplined.

Anyway, I originally set this up equivalently in the Custom App UI in TrueNAS, creating a "Volume" entry for each of these. However, I found that, for some reason, Domino didn't have write access to the newly-created volumes. Maybe this is due to the uid the container is built to use or something, but I worked around it by using Host Path Volumes instead. The net effect is the same, since they're in the same ZFS pool, and this actually makes it easier to peek at the data anyway, since it can be in the SMB share.

Once I did that and made sure the container user could modify the data, all was well. Mostly, anyway.

Transaction Logs, ZFS, and Sector Size

Once Domino got going, I noticed a minor problem: it would crash quickly, every time. Specifically, it crashed when it started preparing the transaction log directory. I eventually remembered running into the same problem on Proxmox at one point, and it brought me back to this blog post by Ted Hardenburgh. Long story short, my ZFS pool uses 4K sectors and Domino's transaction logs can't deal with that, at least in 12.0.2 and below.

This put me in a bit of a sticky spot, since the way to change this is to re-create the entire pool and I really didn't want to do that.

I came up with a workaround, though, in the form of making a little disk image and formatting it ext4. You can use a loop device to mount a file like a disk, so the process looks like this:

dd if=/dev/zero of=tlog.img bs=1G count=1
# --find --show picks the first free loop device and prints it (here assumed to be /dev/loop0)
sudo /sbin/losetup --find --show tlog.img
sudo mkfs.ext4 /dev/loop0
sudo mount /dev/loop0 /mnt/tlog

That makes a 1GB disk image, formats it ext4, and mounts it at "/mnt/tlog". The loop device presents 512-byte logical sectors by default, which is what Domino wants, so I made a directory within it writable by the container user (more on this shortly), configured the Domino container to map the transaction log directory to that path, and all was well.

Normally, to get this mounted at boot, you'd likely put an entry in fstab. However, TrueNAS assumes control over system configuration files like that, and you shouldn't edit them directly. Instead, what I did was write a small script that does the losetup and mount lines above and added an entry in "System Settings" - "Advanced" - "Init/Shutdown Scripts" to run this at pre-init.
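The script itself is just the tail end of the earlier block; a minimal version, with a made-up path for where the image actually lives, looks something like:

#!/bin/sh
# Re-attach the transaction-log disk image and mount it before apps start.
# losetup --find --show prints whichever loop device it grabbed this boot.
LOOP_DEV=$(/sbin/losetup --find --show /mnt/pool/domino/tlog.img)
mount "${LOOP_DEV}" /mnt/tlog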

Networking

The next hurdle I wanted to get over was the networking side. You can map ports in apps in a similar way to what you'd do with Docker, but you have to map them to host ports 9000 or above. That would be an annoying issue in general, but especially for NRPC. Fortunately, the app configuration allows you to give the container its own IP address in the "Add external Interfaces" (sic) configuration section. Since the virtual MAC address changes each time the container is deployed, I gave it a static IP address matching a reservation I carved out on my DHCP server, pointed it to the DNS server, and all was well. All of Domino's open ports are available on that IP, and it's basically like a VM in that way.

Container User

Normally, containers in TrueNAS's app system run as the "apps" user, though this is configurable per-app. The way the Domino container launches, though, it runs as UID 1000, which is notes inside the container. Outside the container, on my setup, that ID maps to... my user account jesse.

Administration-wise, that's not exactly the best! In a less "for fun" situation, I'd change the container user or look into UID mapping as I've done with Docker in the past, but honestly it's fine here. This means it's easy for me to access and edit Domino data/config files over the share, and it made the volume mapping above work without incident. As long as no admins find out about this, it can be my secret shame.

Future Uses

So, at this point, the server is doing the jobs it was doing previously, plus acting as a nice extra replica server for Domino. It's positioned well now for me to do a lot of other tinkering.

For one, it'll be a good opportunity for me to finally learn about Kubernetes, which I've been dragging my feet on. I installed the Portainer chart from TrueCharts to give me a look into the K8s layer in a way that's less abstracted than the TrueNAS UI but, for now, more familiar and comfortable than the kubectl tool.

Additionally, since the hypervisor works on here, it'll be another good location for me to store utility VMs when I need them, rather than putting everything on the Windows machine (which has half as much RAM).

I could possibly use it to host servers for various games like Terraria, though I'm a bit wary of throwing such ancient processors at the task. We'll see about that.

In general, I want to try hosting more things from home when they're non-critical, and this will definitely give me the opportunity. It's also quite fun to tinker with, and that's the most important thing.