Showing posts for tag "admin"

New Adventures in Administration: Docker Compose and One-Touch Setup

Dec 4, 2021, 2:23 PM

Tags: admin docker

As I do from time to time, this weekend I dipped a bit into the more server-admin-focused side of Domino development. This time, I had set out to improve the deployment experience for one of my client's apps. This is the sprawling multi-NSF-plus-OSGi one, and I've long been blessed to not have to worry about actually deploying anything. However, I knew in the back of my head the whole time that it must be fairly time-consuming between installing Domino, getting all the Java code in place, deploying the DBs, and configuring all the documents that associate them.

I had also had a chance this past week to give Docker Compose a swing. Though I'd certainly known of it for a good while, I hadn't actually used it for anything - all my Docker scripting had involved fiddly operations to launch a single container, which I ended up wrapping in Bash scripts anyway, so Compose didn't bring much to the table. However, using it to tie together the process of launching a Postgres server with pre-populated user info and schema scripts whetted my appetite.

So today I set out to tinker with the Domino side of things.

Deploying To The Domino Data Directory

Some parts of this were the same as what I've done before: I wanted to deploy some JARs to jvm/lib/ext for legacy purposes and then drop ".java.policy" into the notes user's home directory. That was accomplished easily enough with some COPY operations in the Dockerfile:

COPY program/ /opt/hcl/domino/notes/latest/linux/
COPY --chown=notes java.policy /local/notes/.java.policy

What wouldn't be accomplished so easily, though, would be getting files into the data directory: the app's NTFs and the OSGi plugins. This is because of the way the Domino Docker image works, where it deploys the contents of a ZIP to /local/notesdata on launch, in order to let you work properly with mounted volumes. Because of this, I couldn't just copy the files there in the Dockerfile, since it would conflict with the volume mount; however, I still wanted to do this in an automated way.

This was my impetus to switch away from the official Docker images on FlexNet and over to the "community-ish" Domino-on-Docker build script maintained at https://github.com/IBM/domino-docker. This script is generally more feature-rich than the official one, and one feature in particular caught my eye: the ability to add your own ZIP file (or URL, I believe) to deploy to the data directory at first launch.

So I downloaded the repo, built the image, bundled the OSGi plugins and NTFs into a ZIP, and altered my Dockerfile:

FROM hclcom/domino:12.0.0

COPY program/ /opt/hcl/domino/notes/latest/linux/
COPY --chown=notes java.policy /local/notes/.java.policy
COPY --chown=notes data.zip /tmp/

Then, I set the environment variable in my "docker-compose.yaml" file: CustomNotesdataZip=/tmp/data.zip. That worked like a charm.

One-Touch Setup

Next up, I wanted to automate the initial server setup. I knew that Domino had been gaining some automated setup capabilities recently, and that they really came of age in V12. What I hadn't appreciated until today is how much capability this system has. I'd figured it would let you configure the server either as a new domain or as an additional server in an existing one and create an admin user, but I hadn't noticed that it also has the ability to declaratively create and modify databases and documents. Looking over the sample file that Daniel Nashed put up, I realized that this would cover essentially all of my remaining needs.

The file there was most of what I needed: other than tweaking the server and user names, the main things I'd want to change in the basic config were to set HTTP_AllowAnonymous/HTTP_SSLAnonymous to "1" and also add a line to set OnBehalfOfInvokerLst to "LocalDomainAdmins" (which allows XPages to run properly).

Then, I got to the meat of the app deployment. That's all done in the $.appConfiguration.databases object near the bottom, and I set about adding entries to deploy each of the NTFs I'd copied to the data directory and to create the required documents to tie them together. This also went smoothly.
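
For illustration's sake, the pertinent parts of my file ended up shaped roughly like this. The field names are as I remember them from Daniel's sample, the rest of the serverSetup block (server, org, and admin info) is omitted, and the database, form, and item names are stand-ins for the real ones - so treat this as a sketch to check against the actual schema rather than something to copy verbatim:

{
  "serverSetup": {
    "notesINI": {
      "HTTP_AllowAnonymous": "1",
      "HTTP_SSLAnonymous": "1",
      "OnBehalfOfInvokerLst": "LocalDomainAdmins"
    }
  },
  "appConfiguration": {
    "databases": [
      {
        "action": "create",
        "filePath": "clientapp.nsf",
        "title": "Client App",
        "templatePath": "clientapp.ntf",
        "documents": [
          {
            "action": "create",
            "computeWithForm": true,
            "items": {
              "Form": "Configuration",
              "AppDatabasePath": "clientapp.nsf"
            }
          }
        ]
      }
    ]
  }
}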

The Final Scripts

The final form of the setup is pretty clean. The Dockerfile is very similar to the above, with just an added line to copy in the config file:

FROM hclcom/domino:12.0.0

COPY program/ /opt/hcl/domino/notes/latest/linux/
COPY --chown=notes java.policy /local/notes/.java.policy
COPY --chown=notes domino-config.json /tmp/
COPY --chown=notes data.zip /tmp/

The docker-compose.yaml file is longer, but I think pretty explicable. It maps the ports, sets up some volumes for the various persistent-data portions of Domino, and configures the environment variables for setup:

services:
  clientapp:
    build: .
    ports:
      - "1352:1352"
      - "80:80"
      - "443:443"
    volumes:
      - data:/local/notesdata
      - ft:/local/ft
      - nif:/local/nif
      - translog:/local/translog
      - daos:/local/daos
    restart: always
    environment:
      - LANG=en_US.UTF-8
      - CustomNotesdataZip=/tmp/data.zip
      - SetupAutoConfigure=1
      - SetupAutoConfigureParams=/tmp/domino-config.json
volumes:
  data: {}
  ft: {}
  nif: {}
  translog: {}
  daos: {}

Miscellaneous Notes

In doing this, I came across a few things that are worth noting for anyone diving into it clean as I was:

  • Docker Compose (intentionally) doesn't rebuild images on docker compose up when they already exist. Since I was changing all sorts of stuff, I switched to docker compose build && docker compose up.
  • Errors during server autoconfig don't show up in the console output from docker compose up: if the server doesn't come up like you expect, check in "/local/notesdata/IBM_TECHNICAL_SUPPORT/autoconfigure.log". It's easy for a problem to gum up the whole works, such as when "computeWithForm": true on a created document throws an exception.
  • Daniel's example autoconf file above places the admin user ID in "/local/notesdata/domino/html/admin.id", so it will be accessible via http://servername/admin.id after the server comes up. Alternatively, you could snag it by copying it out with Docker commands, as in the snippet after this list.
  • This really drives home the desperate need for a full web-based admin app for Domino.
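
For that last alternative in the admin.id note, something like this should do the trick - it assumes the "clientapp" service name from the compose file above and the cp subcommand that comes with Compose V2:

docker compose cp clientapp:/local/notesdata/domino/html/admin.id ./admin.id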

All in all, this was a delight to work with. Next, I should be able to make a script that generates the config JSON for me based on all the app's NTFs, and then include that whole thing as part of the Maven build in a distribution ZIP. That will be pretty neat.

Notes From A Week in Administration-Land

Jan 26, 2021, 8:06 PM

Tags: admin domino

This past week, I've been delving back into the world of Domino administration for a client and taking the opportunity to brush up on the niceties we've gotten in the past few releases. A few things struck me as I went along, so I'll list them here in no particular order.

Containers are the Way to Go

This isn't too much of a surprise, I suppose, but I imagine the only reason I'll set up a server the "installer" way again is for dev servers on Windows. Using Docker just skips over that installation phase completely and makes things so much quicker and more consistent.

It also essentially forces you to make an install-support script in the form of a Dockerfile. I started out using the default one from FlexNet, but then had a need to install fontconfig to avoid this delightful little gotcha that crops up with POI. Since the program container is intended to be ephemeral, this meant that I had to make a Dockerfile to build a proper image for it, and now there's inherently an artifact for future admins to use.
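
For reference, the Dockerfile amounts to almost nothing - the base image name/tag and the package manager here are assumptions (adjust for whichever image and OS flavor you're actually starting from):

# The image name/tag is a placeholder for whatever you loaded from the FlexNet download
FROM domino-docker:latest

# Switch to root to install packages; the HCL images I've seen are Red Hat-based,
# so yum (or microdnf, depending on the variant) should be available
USER root
RUN yum install -y fontconfig && \
    yum clean all
USER notes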

Cluster Symmetry is a Delight

Years ago, I wrote a "Generic Replicator" agent that I would configure per-server to tell it to do the work of mirroring all NSFs. It's done yeoman's work since then, but I'm all the happier to use a built-in capability. So, tip-of-the-hat to the team that added that one in.

It'd be nice if it didn't also require notes.ini settings, but I suppose that's the way of things.

DBMT is Still Great

I know it's years and years old at this point, but I can never help but gush over DBMT. It's great and should be promoted to an on-by-default setting instead of being something you have to know to configure via a Program document.
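
For anyone who hasn't set it up, the Program document only takes a few fields - this is roughly the shape of the ones I create, though the thread counts, time window, and flags here are just examples and worth double-checking against the dbmt documentation:

Program name:      dbmt
Command line:      -compactThreads 4 -updallThreads 4 -range 1:00AM 5:00AM -compactNdays 7 -force 1
Server to run on:  YourServer/YourOrg
Schedule:          Enabled, run at 1:00 AM, every day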

It Still Sucks to Configure Every Server Doc

Every time I make a new server document, there's this pile of obligatory "fix the defaults" work: filling in all the stuff on the security tab, enabling web site documents, changing all the fiddly Ports tab options (including having to enable enforcing access settings (?!)), and so forth. That's on top of the giant tower of notes.ini settings in the Configuration document, but at least those can be applied to a server group and are less tedious once you know them.

I put an idea in for that last year and it sounds like it's in the works, so... that'll be nice.

The Server Doc Could Use Lots More Settings

I took the opportunity of re-laying-out servers to move as much as I can out of the data directory - namely, DAOS, transaction logs, FT indexes, and view indexes. The first two of these are configurable in the server doc, which is nice, but the latter two require specification via notes.ini properties. Since they're server-specific, it feels like a leaky abstraction to put them in a Configuration document - while it would work, and I could remove them from the doc once applied, that's just gross.
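
Specifically, the latter two end up as notes.ini lines along these lines - the setting names are from memory, so verify them before relying on this:

FTBasePath=/local/ft
NIFNSFEnable=1
NIFBasePath=/local/nif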

It would also be good to have a way to properly share filesystem-bound files and have them auto-deployed. For example, I have a notes.ini property in the Configuration doc for JavaUserOptionsFile=jvm.properties. The property is set automatically, but I have to create the file manually per-server. I could certainly write an agent to do that, and it'd work, but it's server configuration and belongs in the Directory.

Ideally, I'd like to be able to obliterate the container and data image, recreate them with the ID and location info, and have the server reconstitute itself after that entirely from NSF-based configuration.

HTTP is Better Than It Used to Be, But Still Needs Work

I'd love to replace my use of the WebSphere connector headers with X-Forwarded-For, but it doesn't work like that, and I'm not about to write a DSAPI filter to do it. Ideally, that'd be supported and promoted to the server config.

Same goes for Java-related settings that you just have to kind of magically know, like HTTPJVMMaxHeapSize+HTTPJVMMaxHeapSizeSet and ENABLE_SNI (I don't know why you wouldn't want SNI enabled by default).
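
For reference, that cluster ends up as notes.ini lines like these in a Configuration document (the heap size is just an example value):

HTTPJVMMaxHeapSize=1024M
HTTPJVMMaxHeapSizeSet=1
ENABLE_SNI=1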

The SSL cert manager in V12 can't come soon enough.

HTTP's better off than it was for a while, and it's nice that the TLS stack isn't dangerous now, but knowing the right way to configure it is still essentially playground lore.

Domino Configuration Tuner Deserves a New Life

I remember discovering DCT back at my old company in the 7.x days, but it unfortunately looks like it hasn't been updated since not long after that, and now doesn't even parse the current Domino version correctly. If it was brought up to date and produced reliable suggestions, it'd be huge.

As it is, my server configuration docs have all sorts of notes.ini properties like NLCACHE_SIZE=67108864 and UPDATE_NOTE_MINIMUM=40 that I saw recommended somewhere once years ago, but I have no idea whether they're still good or appropriately-sized. I want the computer to tell me that (and, in a lot of cases, just do the right thing without configuration).

Conclusion

Anyway, those are the things that came to me as I was working on this. The last few major releases have had some huge server-side improvements, and I like that the pace is continuing. Good work, server core team.

Putting Apache in Front of Domino

Dec 8, 2012, 1:59 PM

Tags: apache admin
  1. Putting Apache in Front of Domino
  2. Better Living Through Reverse Proxies

The other day, for my side-project company, I wanted to set up hosting for a WordPress site, ideally without setting up another whole server. The first two ideas I had were pretty terrible:

  1. Hosting it on Domino directly with PHP via CGI. Even if this worked, I assume the performance would be pretty poor and I'd have no confidence in its general longevity.
  2. Hosting it on Apache on another port and using Domino to proxy through. While Domino does have some minor proxy capabilities, they didn't strike me as particularly thorough or suitable for the task.

Since the second option involved running two web servers anyway, I decided to flip it around and do the more-common thing: making Apache the main server and switching Domino to another port. Fortunately, even though it's been years since I ran an Apache server and I'd consider myself a novice at best, the process has been exceptionally smooth and has brought with it a couple of benefits:

  1. Apache's virtual-host support works perfectly, allowing me to host just the one named site and pass through all other requests to Domino.
  2. My crummy SSL setup works better with Apache, allowing for a poor-man's multi-host-name SSL with my one basic StartSSL certificate. Not only does Apache support SNI for down the line, but in the meantime I can use the same certificate for multiple names (with the usual "mismatched name" warning) - since Apache handles all requests and funnels them over to Domino via HTTP with the host name intact, I don't run into the usual Domino problem of only the one SSL-enabled Web Site document being active.
  3. I'm now ready to add load-balancing servers at any time with just a one-line adjustment in my Apache config.

The actual configuration of Apache on my Linux machine was pretty straightforward, with the layout of the configuration directory making it fairly self-explanatory. I linked the modules I wanted from the /etc/apache2/mods-available directory to /etc/apache2/mods-enabled (namely, proxy, proxy_balancer, proxy_http, php5, rewrite, and ssl). Then, I set up a couple NameVirtualHost lines in ports.conf:

NameVirtualHost *:80
NameVirtualHost *:443
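
(As an aside on that module linking: the stock Debian-style packages ship a2enmod/a2ensite helpers that do the same symlinking if you'd rather not do it by hand - a sketch, assuming the site file in sites-available shares the 000-domino name used below:)

sudo a2enmod proxy proxy_balancer proxy_http php5 rewrite ssl
sudo a2ensite 000-domino
sudo service apache2 reload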

Then, I set up a new default site in /etc/apache2/sites-available and linked it to /etc/apache2/sites-enabled/000-domino:

<VirtualHost *:80>
        <Proxy balancer://frostillicus-cluster>
                BalancerMember http://ceres.frostillic.us:8088
                ProxySet stickysession=SessionID
        </Proxy>
        ProxyPass / balancer://frostillicus-cluster/ nocanon
        ProxyPassReverse / balancer://frostillicus-cluster/
        ProxyPreserveHost On
        AllowEncodedSlashes On
</VirtualHost>

That last directive is important, and I missed it at first. The "optimize CSS and JS" option in 8.5.3 creates URLs with encoded slashes and, by default, Apache's proxy breaks them, leading to 404 errors in apps that use it. If you turn on AllowEncodedSlashes, though, all is well. Note also the ProxySet line: if that's working correctly (I haven't tested it yet since I don't have a second host set up), that should make sure that browser sessions stick to the same server.
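
When that second host does exist, the "one-line adjustment" mentioned earlier should amount to just another BalancerMember inside the same Proxy block - the second host name here is made up:

<Proxy balancer://frostillicus-cluster>
        BalancerMember http://ceres.frostillic.us:8088
        BalancerMember http://second-server.frostillic.us:8088
        ProxySet stickysession=SessionID
</Proxy>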

For SSL, I'm not sure what the absolute best way to do it is, but I set it up as another proxy just pointing to the HTTP version locally, so I don't have to set up a separate SSL site for each virtual host (put into a new site file, 002-ssl):

<VirtualHost *:443>
        ProxyPass / http://127.0.0.1/
        ProxyPassReverse / http://127.0.0.1/
        ProxyPreserveHost On

        SSLEngine On
        SSLProtocol all -SSLv2
        SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM

        SSLCertificateFile /path/to/ssl.crt
        SSLCertificateKeyFile /path/to/ssl-decrypt.key
        SSLCertificateChainFile /path/to/sub.class1.server.ca.pem
        SSLCACertificateFile /path/to/ca.pem
</VirtualHost>

With that config, SSL seems to work exactly like I want: all my sites have an SSL counterpart that acts just like the normal one, much like with Domino normally.

It's only been running like this a couple days, so we'll see if I run into any more trouble, but so far this seems to be a solid win for my hosting, other than the increase in memory usage. I'm looking forward to having another clustermate in the same location so I can try out the load balancing down the line.