Showing posts for tag "admin"

Using Custom DNS Configurations With CertMgr

Thu Oct 24 18:51:20 EDT 2024

I expect the most common way people use Domino's CertMgr/certstore.nsf is with Let's Encrypt and the default HTTP-based validation. This is very common in other products too and usually works great, but there are cases where it's not what you want. I hit two recently:

  • My Domino servers are behind Traefik reverse proxies to blend them with other Docker-based services, and by default the HTTP challenge doesn't like it when there's already an HTTP->HTTPS redirect in place
  • I also have dev servers at home that aren't publicly visible, and so can't participate in the HTTP flow at all

The first hadn't been trouble until recently, since the reverse proxy handles TLS fine on its own, but now I want to have a proper certificate for other services like DRAPI. For the second, I've had a semi-manual process: my pfSense (https://www.pfsense.org/) router uses its ACME plugin to do the dns-01 challenge (since it knows about my DNS provider), and then, every three months, I would export that certificate and put it into certstore.nsf.

Re-Enter CertMgr

Domino's CertMgr can handle those DNS challenges just fine, though, and the HCL-TECH-SOFTWARE/domino-cert-manager project on GitHub contains configuration documents for several common providers/protocols.

For historical reasons (namely: I didn't like Network Solutions in 2000), I use joker.com as my registrar, and they're not in the default list. Indeed, it seems like their support for this process is very much an "oh geez, everyone's asking us for this, so let's hack something together" sort of thing. Fortunately, the configuration docs are adaptable with formula (and other methods) - I'll spare you the troubleshooting details and get to the specifics.

DNS Provider Configuration

In certstore.nsf, the "DNS Configuration" view lets you create configuration documents for custom providers. Before I go further, I'll mention that I put the DXL of mine in OpenNTF's Snippets collection - certstore.nsf has a generic "Import DXL" action that lets you point to a file and import it, made for exactly this sort of situation.

Anyway, the meat of the config document happens on the "Operations" tab. This tab has a bunch of options for various lookup/query actions that different providers may need (say, for pre-request authorization flows), but we won't be using most of those here.

Operations

Our type here is "HTTP Request" - there are options to shell out to a command or run an agent if you need even more flexibility, but that type should handle most cases.

The "Status formula" field controls what Domino considers the success/failure state of the request. It contains a formula that will be run in the context of a consistent document used across each phase. If your provider responds with JSON, the JSON will be broken down into JSONPath-ish item names, as you can see in the HCL-provided examples. You can then use that to determine success or failure. Joker replies in a sparse human-readable text format, but does set the HTTP status code nicely, so I set this to ret_AddStatus.

The "DNS provider delay" field indicates how long the challenge check will wait after the "Add" operation, and that latency will depend on your specific provider. I did 20 seconds to be safe, and it's proven fine.

During development, setting "HTTP request tracing" to "Enabled" is helpful for learning how things go, and then "Write trace on error" is likely the best choice once things look good.

HTTP Lookup Request

For Joker, you can leave this whole section blank - it's optional, and Joker's "API" doesn't support this operation anyway.

HTTP Add Request

This section is broken up into two parts, "Query request" and "Request". Set/leave the "Query request type" to "None" or blank, since it's not useful here.

Now we're back to the meat of the configuration. Set "Request type" to "POST".

"URL formula" should be cfg_URL, which represents the URL configured above. Other providers may have URL permutations for different operations, but Joker has only the one.

Joker is very picky about the Content-Type header, so set the "Header formula" field to "Content-Type: application/x-www-form-urlencoded", which will include that constant string in the upload.

Things get a bit more complicated when it comes to the "Post data formula". For this, we want to match Joker's example, but we also need to do a bit of processing based on the specific name you're asking for. Some DNS providers may want you to send a DNS key value like _acme-challenge.foo.example.com, while others (like Joker) want just _acme-challenge.foo. So we do a check here:

txtName := @If(@Ends(param_DnsTxtName; "."+cfg_DnsZone); @Left(param_DnsTxtName; "."+cfg_DnsZone); param_DnsTxtName);

"username=" + @UrlEncode("Domino"; cfg_UserName) + "&password=" + @UrlEncode("Domino"; cfg_Password) + "&zone=" + @UrlEncode("Domino"; cfg_DnsZone) + "&label=" + @UrlEncode("Domino";txtName) + "&type=TXT&value=" + @UrlEncode("Domino"; param_DnsTxtValue)

In my experience, this covers both single-host certificates and wildcard certificates.

HTTP Delete Request

This is for the cleanup step, so your DNS isn't littered with a bunch of useless TXT challenge records.

As before, make sure "Query request type" is "None" or blank.

Similarly, "Request type", "URL formula", and "Header formula" should all be the same as in the "Add" section.

Finally, the "Post data formula" is almost the same, but sets the value to nothing:

txtName := @If(@Ends(param_DnsTxtName; "."+cfg_DnsZone); @Left(param_DnsTxtName; "."+cfg_DnsZone); param_DnsTxtName);

"username=" + @UrlEncode("Domino"; cfg_UserName) + "&password=" + @UrlEncode("Domino"; cfg_Password) + "&zone=" + @UrlEncode("Domino"; cfg_DnsZone) + "&label=" + @UrlEncode("Domino";txtName) + "&type=TXT&value="

Putting It To Use

Once you have your generic provider configured, you can create a new Account document in the "DNS Providers" view.

In this document, set your "Registered domain" to your, uh, registered domain - in my case, "frostillic.us". This remains the case even if you want to register wildcard certificates for subdomains, like if I wanted "*.foo.frostillic.us": CertMgr uses this value to match your request, wildcards and subdomains included.

There's a lot of room for special tokens and keys here, but Joker only needs three fields:

"DNS zone" is again your domain name.

"User name" is the user name you get when you go to your DNS configuration and enable Dynamic DNS - it's not your normal account name. This is a good separation, and a lot of other providers will likely have similar "don't use your normal account" stuff.

Similarly, "Password" is the Dynamic-DNS-specific password.

Joker account configuration

TLS Credentials

Your last step is to actually put in the certificate request. This stage is pretty much identical to the normal process, with the added capability that you can now make use of wildcard certificates.

On this step, you can fill in your host name, servers with access, and then your ACME account. Even more than with the normal process, it's safest to start with "LetsEncryptStaging" instead of "LetsEncryptProduction", to avoid Let's Encrypt temporarily banning you if you make too many requests.

With a custom provider, I recommend opening up a server console for your CertMgr server before you hit "Submit Request", since then you can see its progress as it goes. You can potentially get more info by launching CertMgr as load certmgr -d for debug output. Anyway, with that open, you can click "Submit Request" and let it rip.

As it goes, you'll see a couple lines reading "CertMgr: Error parsing JSON Result" and so forth. This is normal and non-fatal - it comes from the default behavior of trying to parse the response as JSON and failing, but it should still put the unparsed response in the document. What you want is something at the end starting "CertMgr: Successfully processed ACME request", and for the document in certstore.nsf to get its nice little lock icon. If it fails, check the error message in the cert document as well as the document in the "DNS Trace Logs" view - that will contain logs of each step, and all of the contextual information written into the doc used by your formulas.

Wrapping Up

This process is, unfortunately, necessarily complicated - since each DNS provider does their own thing, there's not a one-config-fits-all option. But the nice thing is that, once it's configured, it should be good for a long while. You'll be able to generate certificates for non-public servers and wildcard certificates at will, and that makes a lot of things a lot more flexible.

Dipping My Feet Into DKIM and DMARC

Mon Apr 10 10:56:13 EDT 2023

Tags: admin

For a very long time now, I've had my mail set up in a grandfathered-in free Google Whatever-It's-Called-Now account, which, despite its creepiness, serves me well. It's readily supported by everything and it takes almost all of the mail-hosting hassle out of my hands.

Not all of the hassle, though, and over the past couple weeks I decided that I should look into configuring DKIM and DMARC, first for my personal mail and (if it doesn't blow up) for my company mail. I had set up SPF a couple years back, and I figured it was high time to finish the rest.

As with any admin-related post, keep in mind that I'm just tinkering with this stuff. I Am Not A Lawyer, and so forth.

The Standards

DKIM is a neat little standard. It's sort of like S/MIME's mail-signing capabilities, except less hierarchical and more commonly enforced on the server than on the client. That "sort of" does some heavy lifting, but it should suffice to think of it like that. What you do is have your server generate a keypair (Google has a system for this), take the public key from that, and stick it in your DNS configuration. The sending server will then add a header to outgoing messages with a signature and a lookup key - in turn, the receiving server can choose to look up the key in the claimed DNS to verify it. If the key exists in DNS and the signature is valid, then the receiver can at least be confident that the sender is who they say they are (in the sense of having control of a sending server and DNS, anyway). Since this signing is server-based, it requires a lot less setup than S/MIME or GPG for mail users, though it also doesn't confer all the benefits. Neat, though.
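As an illustration, the published public key ends up as a TXT record on a "selector" name beneath _domainkey - something like this hypothetical record, with the key data shortened:

selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq...AQAB"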

DMARC is an interesting thing. It kind of sits on top of SPF and DKIM and allows an admin to define some requested handling of mail for their domain. You can explicitly state that you expect your SPF and DKIM records to be enforced and provide some guidance for recipient servers to do so. For example, you might own "foo.com" and go whole-hog: declare that your definitions are complete and that remote servers should outright reject 100% of email claiming to be from "foo.com" that either doesn't come from a server enumerated in your SPF record or lacks a valid DKIM signature. Most likely, at least when rolling it out, you'll start softer, maybe saying to not reject anything, or to quarantine some percentage of failing messages. It's a whole process, but it's good that gradual adoption is built in.
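The policy itself is just another DNS TXT record, at _dmarc under your domain. A hypothetical gentle-rollout version might look like this, quarantining a quarter of failing messages:

_dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; pct=25; rua=mailto:dmarc-reports@example.com"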

Interestingly, DMARC also lets you request that servers that received mail from "you" email you summaries from time to time. These generally (always?) take the form of a ZIP attachment containing an XML file. In there, you'll get a list of servers that contacted them claiming to be you and a summary of the pass/fail state of SPF and DKIM for them. This has been useful - I found that I had to do a little tweaking to SPF for known-good servers. This is vital for a slow roll-out, since it's very difficult to be completely sure you got everything when you first start setting this stuff up, and you don't want to too-eagerly poison your outgoing mail.
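The shape of those reports comes from the DMARC spec's aggregate-report schema. A heavily-abridged sketch of one, with invented values, looks like:

<feedback>
  <policy_published>
    <domain>example.com</domain>
    <p>quarantine</p>
  </policy_published>
  <record>
    <row>
      <source_ip>203.0.113.10</source_ip>
      <count>3</count>
      <policy_evaluated>
        <disposition>none</disposition>
        <dkim>pass</dkim>
        <spf>fail</spf>
      </policy_evaluated>
    </row>
  </record>
</feedback>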

Configuring

Really, configuring this stuff wasn't bad. I mostly followed Google's guides for DKIM and DMARC, which are pretty clear and give you a good plan for a slow rollout.

Though Google is my main sender, I still have some older agents that might send out mail for my old ID from time to time from Domino, so I wanted to make sure that was covered too. Fortunately, Domino supports DKIM as well. Admittedly, the process is a little more "raw" than with Google's admin site, but it's not too bad: it's not like I'm uncomfortable with a CLI-based approach, and it's in line with other recent-era security additions using the keymgmt tool, like shared DAOS encryption.

It just came down to following the instructions in HCL's docs, and it worked swimmingly. If you have a document in your cred store that matches an INI-configured "domain to ID" value for outgoing mail, Domino will use it. Like how DMARC has a slow-roll-out system built in, Domino lets you choose between signing mail just when a key is available or being harsher about it and refusing to send out any mail it doesn't know how to sign. I'll probably switch to the second option eventually, since it sounds like a good way to ensure that your server is being a good citizen across the board.

Conclusion

In any event, this is all pretty neat. It's outside my bailiwick, but it's good to know about it, and it also helps reinforce a pub-key mental model similar to things like OIDC. It also, as always, just feels good to check a couple more boxes for being a good modern server.

Tinkering with Mastodon, Keycloak, and Domino

Thu Nov 10 13:01:00 EST 2022

Tags: admin keycloak

Because of what I'll euphemistically call the current historical moment on Twitter, I (like a lot of people) decided to give another look at Mastodon. The normal way one would use it would be to sign up at mastodon.social and be on one's merry way, treating it just like a slightly-different Twitter.

However, Mastodon is intentionally designed to be federated in a way similar to email, and the software is available on GitHub complete with scripts for Docker Compose, Vagrant, and so forth. So I went and did that, setting up my currently-barely-used account at @jesse@pub.frostillic.us.

That on its own isn't particularly notable, nor are the specifics of how I set it up (it was a hodgepodge of a couple posts you can find by looking for "mastodon docker compose"). What I found neat for our purposes here was the way I could piggyback authentication onto stuff I had recently done with Keycloak. Keycloak, incidentally, was the topic of today's OpenNTF webinar, so, if you didn't see it, check back there for the replay when it's posted.

Having done the legwork for setting up Keycloak backed by Domino LDAP for my earlier tinkering, the setup to work with Mastodon was pretty straightforward (as far as these things go). I did the professional thing and took the basic config from a StackOverflow post, tweaking it to suit my needs.

The main Domino-y thing I wanted to tweak here was the username that I ended up with on Mastodon. Internally, the Domino short name for my account is "jgallagh", but I like to go by "jesse" when in an environment small enough to get away with it. So I cracked open the names.nsf subform I had added years ago for POSIX and SSH pubkey purposes and added a Mastodon section:

Screenshot of a 'Mastodon Attributes' section in a names.nsf

(apologies for how bad the new-era fonts look in my poor old Windows VM)

Then, I told my Mastodon config about that field for the UID:

OIDC_ENABLED=true
OIDC_DISPLAY_NAME=frostillic.us Auth
OIDC_DISCOVERY=true
OIDC_ISSUER=https://<keycloak_url>/auth/realms/<realm>
OIDC_AUTH_ENDPOINT=https://<keycloak_url>/auth/realms/<realm>/.well-known/openid-configuration
OIDC_SCOPE=openid,profile,email
OIDC_UID_FIELD=mastodonusername
OIDC_CLIENT_ID=<client id>
OIDC_CLIENT_SECRET=<client secret>
OIDC_REDIRECT_URI=https://<mastodon URL>/auth/auth/openid_connect/callback
OIDC_SECURITY_ASSUME_EMAIL_IS_VERIFIED=true

On Keycloak, I made a new realm to cover this sort of "personal" setup to be able to split the user pool, then added a Client definition for Mastodon. I set its "Access Type" to "confidential", grabbed the client ID and secret for the config above, and configured the Redirect URI. To get the custom username field over from LDAP, I added a "user-attribute-ldap-mapper" Mapper in the LDAP User Federation definition to bring it in. Then, back in the Client definition, I added a "User attribute" token mapper to the config so that the field is included in the JWT as well.

That covered the auth config, and it's been working well since. When you have OIDC configured in your Mastodon config, the login page sprouts a button below the main form, mechanically labeled "OPENID_CONNECT":

Screenshot of a Mastodon login form with OIDC configured

Clicking that will send you to the configured Keycloak page to do the OIDC dance and, when all goes well, back to a freshly-configured Mastodon account.

Now, technically, this doesn't really gain me much that I couldn't have gotten by configuring the users separately in the Mastodon instance, but the experience is useful. I'm gradually getting really sold on the idea of having a multi-purpose Keycloak instance to handle authentication and authorization. Most of the time, it's a thin layer over what you could get by pointing to Domino LDAP from these disparate apps themselves. However, there are some benefits in that Keycloak is now the only one that has to deal with Domino's weird LDAP and also this gives me a lot of room for fine-grained control and federation with other providers. It's just neat.

New Adventures in Administration: Docker Compose and One-Touch Setup

Sat Dec 04 14:23:58 EST 2021

Tags: admin docker

As I do from time to time, this weekend I dipped a bit into the more server-admin-focused side of Domino development. This time, I had set out to improve the deployment experience for one of my client's apps. This is the sprawling multi-NSF-plus-OSGi one, and I've long been blessed to not have to worry about actually deploying anything. However, I knew in the back of my head the whole time that it must be fairly time-consuming between installing Domino, getting all the Java code in place, deploying the DBs, and configuring all the documents that associate them.

I had also had a chance this past week to give Docker Compose a swing. Though I'd certainly known of it for a good while, I hadn't actually used it for anything - all my Docker scripting involved really fiddly operations that I ended up using Bash scripts to launch a single container for anyway, so Compose didn't bring so much to the table. However, using it to tie together the process of launching a Postgres server with pre-populated user info and schema scripts whetted my appetite.

So today I set out to tinker with the Domino side of things.

Deploying To The Domino Data Directory

Some parts of this were the same as what I've done before: I wanted to deploy some JARs to jvm/lib/ext for legacy purposes and then drop ".java.policy" into the notes user's home directory. That was accomplished easily enough with some COPY operations in the Dockerfile:

COPY program/ /opt/hcl/domino/notes/latest/linux/
COPY --chown=notes java.policy /local/notes/.java.policy

What wouldn't be accomplished so easily, though, would be getting files into the data directory: the app's NTFs and the OSGi plugins. This is because of the way the Domino Docker image works, where it deploys the contents of a ZIP to /local/notesdata on launch, in order to let you work properly with mounted volumes. Because of this, I couldn't just copy the files there in the Dockerfile, since it would conflict with the volume mount; however, I still wanted to do this in an automated way.

This was my impetus to switch away from the official Docker images on FlexNet and over to the "community-ish" Domino-on-Docker build script maintained at https://github.com/IBM/domino-docker. This script is generally more feature-rich than the official one, and one feature in particular caught my eye: the ability to add your own ZIP file (or URL, I believe) to deploy to the data directory at first launch.

So I downloaded the repo, built the image, bundled the OSGi plugins and NTFs into a ZIP, and altered my Dockerfile:

FROM hclcom/domino:12.0.0

COPY program/ /opt/hcl/domino/notes/latest/linux/
COPY --chown=notes java.policy /local/notes/.java.policy
COPY --chown=notes data.zip /tmp/

Then, I set the environment variable in my "docker-compose.yaml" file: CustomNotesdataZip=/tmp/data.zip. That worked like a charm.

One-Touch Setup

Next up, I wanted to automate the initial server setup. I knew that Domino had been gaining some automated setup capabilities recently, and that they really came of age in V12. What I hadn't appreciated until today is how much capability this system has. I'd figured it would let you configure the server either as a new domain or as an additional server, and to create an admin user, but I hadn't noticed that it also has the ability to declaratively create and modify databases and documents. Looking over the sample file that Daniel Nashed put up, I realized that this would cover essentially all of my remaining needs.

The file there was most of what I needed: other than tweaking the server and user names, the main things I'd want to change in the basic config were to set HTTP_AllowAnonymous/HTTP_SSLAnonymous to "1" and also add a line to set OnBehalfOfInvokerLst to "LocalDomainAdmins" (which allows XPages to run properly).

Then, I got to the meat of the app deployment. That's all done in the $.appConfiguration.databases object near the bottom, and I set about adding entries to deploy each of the NTFs I'd copied to the data directory and adding the required documents to tie them together. This also went smoothly.
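To give a flavor of it, here's a hypothetical, abridged entry - the file paths, titles, and item values are invented, but the property names follow the one-touch JSON schema as I understand it:

{
  "appConfiguration": {
    "databases": [
      {
        "action": "create",
        "filePath": "clientapp.nsf",
        "title": "Client App",
        "templatePath": "clientapp.ntf",
        "documents": [
          {
            "action": "create",
            "computeWithForm": true,
            "items": {
              "Form": "Configuration",
              "AppName": "Client App"
            }
          }
        ]
      }
    ]
  }
}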

The Final Scripts

The final form of the setup is pretty clean. The Dockerfile is very similar to the above, with just an added line to copy in the config file:

FROM hclcom/domino:12.0.0

COPY program/ /opt/hcl/domino/notes/latest/linux/
COPY --chown=notes java.policy /local/notes/.java.policy
COPY --chown=notes domino-config.json /tmp/
COPY --chown=notes data.zip /tmp/

The docker-compose.yaml file is longer, but I think pretty explicable. It maps the ports, sets up some volumes for the various persistent-data portions of Domino, and configures the environment variables for setup:

services:
  clientapp:
    build: .
    ports:
      - "1352:1352"
      - "80:80"
      - "443:443"
    volumes:
      - data:/local/notesdata
      - ft:/local/ft
      - nif:/local/nif
      - translog:/local/translog
      - daos:/local/daos
    restart: always
    environment:
      - LANG=en_US.UTF-8
      - CustomNotesdataZip=/tmp/data.zip
      - SetupAutoConfigure=1
      - SetupAutoConfigureParams=/tmp/domino-config.json
volumes:
  data: {}
  ft: {}
  nif: {}
  translog: {}
  daos: {}

Miscellaneous Notes

In doing this, I came across a few things that are worth noting for anyone diving into it clean as I was:

  • Docker Compose (intentionally) doesn't rebuild images on docker compose up when they already exist. Since I was changing all sorts of stuff, I switched to docker compose build && docker compose up.
  • Errors during server autoconfig don't show up in the console output from docker compose up: if the server doesn't come up like you expect, check in "/local/notesdata/IBM_TECHNICAL_SUPPORT/autoconfigure.log". It's easy for a problem to gum up the whole works, such as when using "computeWithForm": true on a created document throws an exception.
  • Daniel's example autoconf file above places the admin user ID in "/local/notesdata/domino/html/admin.id", so it will be accessible via http://servername/admin.id after the server comes up. Alternatively, you could snag it by copying it out with Docker commands (see the sketch after this list).
  • This really drives home the desperate need for a full web-based admin app for Domino.
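For that copying option, something like this would do it - a sketch, with the container name depending on your Compose project:

docker cp myproject-clientapp-1:/local/notesdata/domino/html/admin.id ./admin.id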

All in all, this was a delight to work with. Next, I should be able to make a script that generates the config JSON for me based on all the app's NTFs, and then include that whole thing as part of the Maven build in a distribution ZIP. That will be pretty neat.

Notes From A Week in Administration-Land

Tue Jan 26 20:06:42 EST 2021

Tags: admin domino

This past week, I've been delving back into the world of Domino administration for a client and taking the opportunity to brush up on the niceties we've gotten in the past few releases. A few things struck me as I went along, so I'll list them here in no particular order.

Containers are the Way to Go

This isn't too much of a surprise, I suppose, but I imagine the only reason I'll set up a server the "installer" way again is for dev servers on Windows. Using Docker just skips over that installation phase completely and makes things so much quicker and more consistent.

It also essentially forces you to make an install-support script in the form of a Dockerfile. I started out using the default one from FlexNet, but then had a need to install fontconfig to avoid this delightful little gotcha that crops up with Poi. Since the program container is intended to be ephemeral, this meant that I had to make a Dockerfile to make a proper image for it, and now there's inherently an artifact for future admins to use.

Cluster Symmetry is a Delight

Years ago, I wrote a "Generic Replicator" agent that I would configure per-server to tell it to do the work of mirroring all NSFs. It's done yeoman's work since then, but I'm all the happier to use a built-in capability. So, tip-of-the-hat to the team that added that one in.

It'd be nice if it didn't also require notes.ini settings, but I suppose that's the way of things.

DBMT is Still Great

I know it's years and years old at this point, but I can never help but gush over DBMT. It's great and should be promoted to an on-by-default setting instead of being something you have to know to configure via a Program document.
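For reference, the Program document boils down to running dbmt with your preferred options as the command line - the console equivalent is something like this sketch, with arbitrary thread counts and time window:

load dbmt -compactThreads 4 -updallThreads 4 -compactNdays 5 -range 4:00AM 6:00AM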

It Still Sucks to Configure Every Server Doc

Every time I make a new server document, there's this pile of obligatory "fix the defaults" work: filling in all the stuff on the security tab, enabling web site documents, changing all the fiddly Ports tab options (including having to enable enforcing access settings (?!)), and so forth. That's on top of the giant tower of notes.ini settings in the Configuration document, but at least those can be applied to a server group and are less tedious once you know them.

I put an idea in for that last year and it sounds like it's in the works, so... that'll be nice.

The Server Doc Could Use Lots More Settings

I took the opportunity of re-laying-out servers to move as much as I can out of the data directory - namely, DAOS, transaction logs, FT indexes, and view indexes. The first two of these are configurable in the server doc, which is nice, but the latter two require specification via notes.ini properties. Since they're server-specific, it feels like a leaky abstraction to put them in a Configuration document - while it would work, and I could remove them from the doc once applied, that's just gross.
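Specifically, the properties in question are, as I understand them, these - shown here matching my directory layout, with NIFNSFEnable being the switch that lets view indexes live outside their NSFs at all:

FTBasePath=/local/ft
NIFNSFEnable=1
NIFBasePath=/local/nif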

It would also be good to have a way to properly share filesystem-bound files and have them auto-deployed. For example, I have a notes.ini property in the Configuration doc for JavaUserOptionsFile=jvm.properties. The property is set automatically, but I have to create the file manually per-server. I could certainly write an agent to do that, and it'd work, but it's server configuration and belongs in the Directory.

Ideally, I'd like to be able to obliterate the container and data image, recreate them with the ID and location info, and have the server reconstitute itself after that entirely from NSF-based configuration.

HTTP is Better Than It Used to Be, But Still Needs Work

I'd love to replace my use of the WebSphere connector headers with X-Forwarded-For, but it doesn't work like that, and I'm not about to write a DSAPI filter to do it. Ideally, that'd be supported and promoted to the server config.

Same goes for Java-related settings that you just have to kind of magically know, like HTTPJVMMaxHeapSize+HTTPJVMMaxHeapSizeSet and ENABLE_SNI (I don't know why you wouldn't want SNI enabled by default).
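For anyone searching, those look like this in notes.ini - the heap size is just an example value, and, as I understand it, the "...Set" flag is what keeps Domino from recalculating the size on its own:

HTTPJVMMaxHeapSize=1024M
HTTPJVMMaxHeapSizeSet=1
ENABLE_SNI=1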

The SSL cert manager in V12 can't come soon enough.

HTTP's better off than it was for a while, and it's nice that the TLS stack isn't dangerous now, but knowing the right way to configure it is still essentially playground lore.

Domino Configuration Tuner Deserves a New Life

I remember discovering DCT back at my old company in the 7.x days, but it unfortunately looks like it hasn't been updated since not long after that, and now doesn't even parse the current Domino version correctly. If it was brought up to date and produced reliable suggestions, it'd be huge.

As it is, my server configuration docs have all sorts of notes.ini properties like NLCACHE_SIZE=67108864 and UPDATE_NOTE_MINIMUM=40 that I saw recommended somewhere once years ago, but I have no idea whether they're still good or appropriately-sized. I want the computer to tell me that (and, in a lot of cases, just do the right thing without configuration).

Conclusion

Anyway, those are the things that came to me as I was working on this. The last few major releases have had some huge server-side improvements, and I like that the pace is continuing. Good work, server core team.

Putting Apache in Front of Domino

Sat Dec 08 13:59:11 EST 2012

Tags: apache admin
  1. Putting Apache in Front of Domino
  2. Better Living Through Reverse Proxies
  3. Domino's Server-Side User Security
  4. A Partially-Successful Venture Into Improving Reverse Proxies With Domino
  5. PSA: Reverse-Proxy Regression in Domino 12.0.1

The other day, for my side-project company, I wanted to set up hosting for a WordPress site, ideally without setting up another whole server. The first two ideas I had were pretty terrible:

  1. Hosting it on Domino directly with PHP via CGI. Even if this worked, I assume the performance would be pretty poor and I'd have no confidence in its general longevity.
  2. Hosting it on Apache on another port and using Domino to proxy through. While Domino does have some minor proxy capabilities, they didn't strike me as particularly thorough or suitable for the task.

Since the second option involved running two web servers anyway, I decided to flip it around and do the more-common thing: making Apache the main server and switching Domino to another port. Fortunately, even though it's been years since I ran an Apache server and I'd consider myself novice at best, the process has been exceptionally smooth and has brought with it a couple benefits:

  1. Apache's virtual-host support works perfectly, allowing me to host just the one named site and pass through all other requests to Domino.
  2. My crummy SSL setup works better with Apache, allowing for a poor-man's multi-host-name SSL with my one basic StartSSL certificate. Not only does Apache support SNI for down the line, but in the meantime I can use the same certificate for multiple names (with the usual "mis-matched name" warning) - since Apache handles all requests and funnels them over with the host name to Domino via HTTP, I don't run into the usual Domino problem of only the one SSL-enabled Web Site document being active.
  3. I'm now ready to add load-balancing servers at any time with just a one-line adjustment in my Apache config.

The actual configuration of Apache on my Linux machine was pretty straightforward, with the layout of the configuration directory making it fairly self-explanatory. I linked the modules I wanted from the /etc/apache2/mods-available directory to /etc/apache2/mods-enabled (namely, proxy, proxy_balancer, proxy_http, php5, rewrite, and ssl). Then, I set up a couple NameVirtualHost lines in ports.conf:

NameVirtualHost *:80
NameVirtualHost *:443
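As an aside, rather than making those module links by hand, Debian-flavored Apache installs ship helper scripts that do the same thing - a sketch, assuming the stock tooling:

a2enmod proxy proxy_balancer proxy_http php5 rewrite ssl
service apache2 restart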

Then, I set up a new default site in /etc/apache2/sites-available and linked it to /etc/apache2/sites-enabled/000-domino:

<VirtualHost *:80>
        <Proxy balancer://frostillicus-cluster>
                BalancerMember http://ceres.frostillic.us:8088
                ProxySet stickysession=SessionID
        </Proxy>
        ProxyPass / balancer://frostillicus-cluster/ nocanon
        ProxyPassReverse / balancer://frostillicus-cluster/
        ProxyPreserveHost On
        AllowEncodedSlashes On
</VirtualHost>

That last directive is important, and I missed it at first. The "optimize CSS and JS" option in 8.5.3 creates URLs with encoded slashes and, by default, Apache's proxy breaks them, leading to 404 errors in apps that use it. If you turn on AllowEncodedSlashes, though, all is well. Note also the ProxySet line: if that's working correctly (I haven't tested it yet, since I don't have a second host set up), it should make sure that browser sessions stick to the same server.

For SSL, I'm not sure what the absolute best way to do it is, but I set it up as another proxy just pointing to the HTTP version locally, so I don't have to set up multiple SSL sites for each virtual host (put into a new site file, 002-ssl):

<VirtualHost *:443>
        ProxyPass / http://127.0.0.1/
        ProxyPassReverse / http://127.0.0.1/
        ProxyPreserveHost On

        SSLEngine On
        SSLProtocol all -SSLv2
        SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM

        SSLCertificateFile /path/to/ssl.crt
        SSLCertificateKeyFile /path/to/ssl-decrypt.key
        SSLCertificateChainFile /path/to/sub.class1.server.ca.pem
        SSLCACertificateFile /path/to/ca.pem
</VirtualHost>

With that config, SSL seems to work exactly like I want: all my sites have an SSL counterpart that acts just like the normal one, much like with Domino normally.

It's only been running like this a couple days, so we'll see if I run into any more trouble, but so far this seems to be a solid win for my hosting, other than the increase in memory usage. I'm looking forward to having another clustermate in the same location so I can try out the load balancing down the line.