Showing posts for tag "nginx"

Domino's Server-Side User Security

Sun Nov 01 07:41:55 EST 2015

Tags: security nginx
  1. Putting Apache in Front of Domino
  2. Better Living Through Reverse Proxies
  3. Domino's Server-Side User Security
  4. A Partially-Successful Venture Into Improving Reverse Proxies With Domino
  5. PSA: Reverse-Proxy Regression in Domino 12.0.1

The other day, Jesper Kiaer wrote an interesting two-part series of blog posts talking about the security implications of HTTPEnableConnectorHeaders, something I mention in my setup example. In it, he describes the ability to specify an arbitrary username as a security hole, which I don't agree with, but I see where he's coming from. It certainly has, shall we say, implications.

The $WSRU header comes along for the ride when you enable the HTTPEnableConnectorHeaders notes.ini property (off by default) and allows you to specify any user name and have the Domino server trust it completely, as if it had generated it itself. The most straightforward use of this is as a sort of lightweight SSO: if you're already authenticated with the front-end web server (say, via whatever mechanism WebSphere used when it sat in front of Domino), that server can fill in the $WSRU field and have it "count" as a Domino login without forcing the user to log in again. This is sort of what I did to replace my DSAPI filter with a Lua script in nginx.

But what immediately comes to mind is that this allows anyone with HTTP access to the server to type in any ol' username and have Domino accept it. This is why binding Domino's HTTP stack to a local-only (or LAN-only, if you have a nicely-divided server infrastructure) address is important: you wouldn't want a server with this setting facing the public Internet.

The extra good news there is that, when it's fronted by a proxy like nginx, the defaults prevent these $-headers from being passed through from remote clients and all is well, whether or not you specifically account for them.
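That default is worth knowing by name: out of the box, nginx drops client-sent headers whose names contain characters outside letters, digits, and hyphens (the ignore_invalid_headers directive, on by default), which covers these $-prefixed connector headers. If you want a belt-and-suspenders safeguard anyway, you can blank the header at the proxy layer, since nginx won't pass along a header with an empty value. A minimal sketch, assuming the proxy setup from my nginx posts:

```nginx
location / {
	# An empty value tells nginx not to pass this field upstream at all,
	# so a client-supplied $WSRU can never reach Domino
	proxy_set_header $WSRU "";
	proxy_pass http://localhost:8088;
}
```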

Now, Jesper also pointed out that, even if you bind HTTP locally, you're still potentially vulnerable to this sort of privilege-escalation problem if you have code running on the server that can make local HTTP calls. And for that case, well... yep. That's true, and it's concerning. However, honestly, that boat already sailed.

Domino's security model has long been lauded by its proponents as thorough and strong, and it is - it's quite good. However, things have gotten a little murkier ever since server-side code has become the primary method of executing Domino apps. Between a Notes client and a Domino server, security is very locked down: the client authenticates via its cryptographic keypair and no lying about its username is going to get the server to treat it as another name (hence why "Run on behalf of" agents don't perform as such when run locally). Running on a server, or between a server and a trusted server ID "client", it becomes a matter of just making up and passing around strings. The permissions used for a given database or view are just whatever the server decided to use as the name when opening it - that's what the hNames parameter of NSFDbOpenExtended is for.

This is how all web user logins work for Domino: the server agrees for itself that, for high-level API calls during that web request, it's going to act like CN=Joe Schmoe/O=SomeOrg instead of CN=Super Trusted Server/O=SomeOrg. It's still the server, though, which is why session.getUserName() in a web-run agent or XPage returns the server name, but session.getEffectiveUserName() returns the current user or run-on-behalf-of value.

The thing is that the hooks to do this aren't any deeper than the HTTP call with $WSRU set - they're all over the place. The XPages runtime provides a very-convenient hook, but that's not new: the C API is where it all starts, and that's reachable readily from LotusScript and Java. Once you have the same sort of access you'd need to make an HTTP call from an agent, the doors are open pretty wide.

But this isn't even really a problem! To run into any situation like this, you have to have code executing on your server, whether it's inside Domino or not. And if you have that in the hands of a real attacker, then, honestly, this "run on behalf of" business isn't what should keep you up at night. Once you have a trusted developer executing malicious code on your server, you're already done.

Seriously, Though: Reverse Proxies

Wed Sep 16 17:28:38 EDT 2015

So, Domino administrators: what are your feelings about SSL lately? Do they include, perhaps, stress? It's "oh crap, my servers are broken" season again, and this time the culprit is a change in Apple's operating systems. Fortunately, in this case, the problem isn't as severe as an outright security vulnerability like POODLE and, better still, there is a definitive statement from IBM indicating that they are going to bring their security stack up to snuff almost in time.

But this isn't the first time we've been in this position, nor will it be the last. The focus on cracking and hardening TLS, particularly in the context of HTTPS, is not going to let up any time soon, nor will the basic movement towards encryption everywhere. So I would like to reiterate my stance: Domino is not suitable for direct external exposure via HTTP. The other protocols are problematic as well, but HTTP is the big one and, fortunately, the easiest to solve.

Whenever I've made this exhortation, part of the response I get is that administrators "should not" have to take this step. That Domino should be fully modern in its security stack or, at least, that IBM should handle this problem for them in one way or another. Or that one of Domino's traditional strengths is its all-on-one nature, with a single easy installation that takes care of everything, and that installing a separate web server is a complicated step that administrators shouldn't have to take.

Well... tough.

The promise of an integrated server system that took care of everything is a great promise, but it's always been extremely difficult to achieve, even for a platform firing on all cylinders. No matter the ideal, Domino does not perform at this level, and I still maintain that it should not need to. Outside of Domino and PHP, the application server is not generally expected to also be a full-fledged front-end web server, for exactly this sort of reason. Domino's job with respect to the web is to generate and serve up HTML, JSON, and other content; it's something else's job to make sure that that leaves your company's network securely.

If you still maintain that this should be Domino's job due to how much you pay for licensing, then that's a conversation between you and your IBM sales rep. I, though, am entirely fine with a paid-for app server not covering this ground, and that's in large part because the products that do perform this task are superb and often open-source.

These other products – nginx, Apache, HAProxy, and so forth – are made for this job. This flurry of SSL/TLS features and bugs you've been hearing about? These are all implemented or fixed in dedicated products, sometimes years before they come to your attention. And when new problems crop up, they're fixed and talked about immediately across the web, with guides for what to do appearing as soon as the problem arises.

Is it easier to continue using Domino HTTP directly than to set up a reverse proxy? Sure! Well, sort of, when there's not an active disaster to mitigate. And, much like how keeping an XPages (or other web) app up to spec and working on all target devices is more complicated than a legacy Notes app, sometimes that's just how the world goes. Deciding that it's complexity you don't want, or that your company's policy doesn't allow for an additional server, is not a tenable stance. Unless you're Apple, your company's policy will not bend the arc of the industry.

So, I implore you, at least give this kind of setup a real look and a trial run. I think you'll find that the basic setup is not dramatically more complicated than just Domino alone and will also open the door to new non-security features like improving page load speeds on the fly. If you want, with eyes open, to maintain an externally-facing Domino HTTP stack, that's fine, but I'll see you when the next security apocalypse comes around.

Domino and SSL: Come with Me If You Want to Live

Wed Sep 24 15:38:51 EDT 2014

Tags: nginx
  1. Setting up nginx in Front of a Domino Server
  2. Adding Load Balancing to the nginx Setup
  3. Arbitrary Authentication with an nginx Reverse Proxy
  4. Domino and SSL: Come with Me If You Want to Live

Looking at Planet Lotus and Twitter the last few weeks, it's impossible to not notice that the lack of SHA-2 support in Domino has become something of A Thing. There has been some grumbling about it for a while now, but it's kicked into high gear thanks to Google's announcement of imminent SHA-1 deprecation. While it's entirely possible that Google will give a stay of execution for SHA-1 when it comes to Chrome users (it wouldn't be the first bold announcement they quietly don't go through with), the fact remains that Domino's SSL stack is out-of-date.

Now, it could be that IBM will add SHA-2 support (and, ideally, other modern SSL/TLS features) before too long, or at least announce a plan to do so. This is the ideal case, since, as long as Domino ships with stated support for SSL on multiple protocols, it is important that that support be up-to-date. Still, if they do, it's only a matter of time until the next problem arises.

So I'd like to reiterate my position, as reinforced by my nginx series this week, that Domino is not a suitable web server in its own right. It is, however, a quite-nice app server, and there's a crucial difference: an app server happens to serve HTML-based applications over HTTP, but it is not intended to be a public-facing site. Outside of the Domino (and PHP) world, this is a common case: app/servlet servers like Tomcat, Passenger, and (presumably) WebSphere can act as web servers, but are best routed to by a proper server like Apache or nginx, which can better handle virtual hosts, SSL, static resources, caching, and multi-app integration.

In that light, IBM's behavior makes sense: SSL support is not in Domino's bailiwick, nor should it be. There are innumerable features that Domino should gain in the areas of app dev, messaging, and administration, and it would be best if the apparently-limited resources available were focused on those, not on patching things that are better solved externally.

I get that a lot of people are resistant to the notion of complicating their Domino installations, and that's reasonable: one of Domino's strengths over the years has been its all-in-one nature, combining everything you need for your domain in a single, simple installation. However, no matter the ideal, the case is that Domino is now unsuitable for the task of being a front-facing web server. Times change and the world advances; it also used to be a good idea to develop Notes client apps, after all. And much like with client apps, the legitimate benefits of using Domino for the purpose - ease of configuration, automatic replication of the config docs to all servers - are outweighed by the need to have modern SSL, load balancing/failover, HTML post-processing (there's some fun stuff on that subject coming in time), and multiple back-end app servers.

The last is important: Domino is neither exclusive nor eternal. At some point, it will be a good idea to use another kind of app server in your organization, such as a non-Domino Java server, Ruby, Node, or so on (in fact, it's a good idea to do that right now regardless). By learning the ropes of a reverse-proxy config now, you'll smooth that process. And from starting with HTTP, you can expand to improving the other protocols: there are proxies available for SMTP, IMAP, and LDAP that can add better SSL and load balancing in similar ways. nginx itself covers the first two, though there are other purpose-built utilities as well. I plan to learn more about those and post when I have time.
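For the curious, nginx's mail proxying lives in its own top-level block, separate from http. This is just a sketch of its shape under stated assumptions: the auth_http endpoint here is a hypothetical service you would have to provide, since that's how nginx validates mail credentials and learns which back-end to hand the connection to.

```nginx
mail {
	# auth_http is required: nginx calls this HTTP endpoint to check
	# credentials and discover the back-end server for the session
	auth_http 127.0.0.1:8080/mail-auth;

	server {
		listen   25;
		protocol smtp;
	}

	server {
		listen   143;
		protocol imap;
	}
}
```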

The basic case is easy: it can be done on the same server running Domino and costs no money. It doesn't even require nginx specifically: IHS (naturally) works fine, as does Apache, and Domino has had "sitting behind IIS" support for countless years. There is no need to stick with an outdated SSL stack, bizarre limitations, and terrible keychain tools when this problem has been solved with aplomb by the world at large.


Edit: as a note, this sort of setup definitely doesn't cover ALL of Domino's SSL tribulations. In addition to incoming IMAP/SMTP/LDAP access, which can be mitigated, there are still matters of outgoing SMTP and requests from the also-sorely-outdated Java VM. Those are in need of improvement, but the situation is a bit less dire there. Generally, anything that purports to support SSL either as a server or a client has no business including anything but the latest features. Anything that's not maximally secure is insecure.

Arbitrary Authentication with an nginx Reverse Proxy

Mon Sep 22 18:33:37 EDT 2014

  1. Setting up nginx in Front of a Domino Server
  2. Adding Load Balancing to the nginx Setup
  3. Arbitrary Authentication with an nginx Reverse Proxy
  4. Domino and SSL: Come with Me If You Want to Live

I had intended that this next part of my nginx thread would cover GeoIP, but that will have to wait: a comment by Tinus Riyanto on my previous post set my thoughts aflame. Specifically, the question was whether or not you can use nginx for authentication and then pass that value along to Domino, and the answer is yes. One of the aforementioned WebSphere connector headers is $WSRU - Domino will accept the value of this header as the authenticated username, no password required (it will also tack the pseudo-group "-WebPreAuthenticated-" onto the names list for identification).

Basic Use

So one way to do this would be to hard-code in a value - you could disallow Anonymous access but treat all traffic from nginx as "approved" by giving it some other username, like:

proxy_set_header    $WSRU    "CN=Web User/O=SomeOrg";

Which would get you something, I suppose, but not much. What you'd really want would be to base this on some external variable, such as the user that nginx currently thinks is accessing it. An extremely naive way to do that would be to just set the line like this:

proxy_set_header    $WSRU    $remote_user;

Because nginx doesn't actually do any authentication by default, what this will do is authenticate with Domino as whatever name the user happens to toss into the HTTP Basic authentication. So... never do that. However, nginx can do authentication, with the most straightforward mechanism being similar to Apache's method. There's a tutorial here on a basic setup:

http://www.howtoforge.com/basic-http-authentication-with-nginx

With such a config, you could make a password file where the usernames match something understandable to Domino and the password is whatever you want, and then use the $remote_user name to pass it along. You could expand this to use a different back-end, such as LDAP, and no doubt the options continue from there.
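As a sketch of that configuration (the realm name and file path here are arbitrary; the user file is in the standard htpasswd format):

```nginx
location / {
	auth_basic           "Domino";
	auth_basic_user_file /etc/nginx/domino-users.htpasswd;

	# $remote_user is only trustworthy here because auth_basic has
	# already validated the password before this header is set
	proxy_set_header     $WSRU $remote_user;
	proxy_pass           http://localhost:8088;
}
```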

Programmatic Use

What had me most interested is the possibility of replacing the DSAPI login filter I wrote years ago, which is still in use and always feels rickety. The way that authentication works is that I set a cookie containing a BASE64-encoded and XOR-key-encrypted version of the username on the XPages side and then the C code looks for that and, if present, sets that as the user for the HTTP request. This is exactly the sort of thing this header could be used for.

One of the common nginx modules (and one which is included in the nginx-extras package on Ubuntu) adds the ability to embed Lua code into nginx. If you're not familiar with it, Lua is a programming language primarily used for this sort of embedding. It's particularly common in games, and anyone who played WoW will recognize its error messages from misbehaved addons. But it fits just as well here: I want to run a small bit of code in the context of the nginx request. I won't post all of the code yet because I'm not confident it's particularly efficient, but the modification to the nginx site config document is enlightening.

First, I set up a directory for holding Lua scripts - normally, it shouldn't go in /etc, but I was in a hurry. This goes at the top of the nginx site doc:

lua_package_path "/etc/nginx/lua/?.lua;;";

Once I did that, I used a function from a script I wrote to set an nginx variable on the request to the decoded version of the username in the location / block:

set_by_lua $lua_user '
	local auth = require "raidomatic.auth"
	return auth.getRaidomaticAuthName()
';

Once you have that, you can use the variable just like any of the built-in ones. And thus:

proxy_set_header    $WSRU    $lua_user;

With that set, I've implemented my DSAPI login on the nginx site and I'm free to remove it from Domino. As a side benefit, I now have the username available for SSO when I want to include other app servers behind nginx as well (it works if you pass it LDAP-style comma-delimited names, to make integration easier).

Another Potential Use: OAuth

While doing this, I thought of another perfect use for this kind of thing: REST API access. When writing a REST API, you don't generally want to use session authentication - you could, by having users POST to ?login and then using that cookie, but that's ungainly and not in line with the rest of the world. You could also use Basic authentication, which works just fine - Domino seems to let you use Basic auth even when you've also enabled session auth, so it's okay. But the real way is to use OAuth. Along this line, Tim Tripcony had written oauth4domino.

In his implementation, you get a new session variable - sessionFromAuthToken - that represents the user matching the active OAuth token. Using this reverse-proxy header instead, you could inspect the request for an authentication token, access a local-only URL on the Domino server to convert the token to a username (say, a view URL where the form just displays the user and expiration date), and then pass that username (if valid and non-expired) along to Domino.

With such a setup, you wouldn't need sessionFromAuthToken anymore: the normal session variable would be the active user and the app will act the same way no matter how the user was authenticated. Moreover, this would apply to non-XSP artifacts as well and should work with reader/author fields... and can work all the way back to R6.
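A rough sketch of what that inspection could look like in Lua, run from something like access_by_lua - everything here is hypothetical, and the /token-lookup location in particular would be an internal-only nginx location proxying to that Domino lookup URL:

```lua
-- Hypothetical: pull a bearer token from the request and resolve it
-- to a Domino username via an internal lookup location
local auth_header = ngx.req.get_headers()["Authorization"]
if auth_header ~= nil then
	local token = auth_header:match("^Bearer%s+(.+)$")
	if token ~= nil then
		local res = ngx.location.capture(
			"/token-lookup?token=" .. ngx.escape_uri(token))
		if res.status == 200 and #res.body > 0 then
			-- Hand the resolved name to Domino via the connector header
			ngx.req.set_header("$WSRU", res.body)
		end
	end
end
```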

Now, I haven't actually done any of this, but the point is one could.


So add this onto the pile of reasons why you should put a proxy server (nginx or otherwise) in front of Domino. The improvements to server and app structure you can make continue to surprise me.

Adding Load Balancing to the nginx Setup

Sat Sep 20 11:00:45 EDT 2014

Tags: nginx
  1. Setting up nginx in Front of a Domino Server
  2. Adding Load Balancing to the nginx Setup
  3. Arbitrary Authentication with an nginx Reverse Proxy
  4. Domino and SSL: Come with Me If You Want to Live

In an earlier post, I went over the basic setup of installing nginx on a single Domino server to get the basic benefits (largely SSL). Next, it's time to expand the setup to have one nginx server in front of two Domino servers.

The concept is relatively straightforward: when an HTTP request comes in, nginx will pick one of the back-end servers it knows about and pass the request along to that. That allows for balancing the load between the two (since the act of processing the request is much more expensive than the proxying, you need far fewer proxies than app servers), as well as silent failover if a server goes down. The latter is very convenient from an administrative perspective: with this setup, you are free to bring down all but one Domino server at any time, say for maintenance or upgrades, without external users noticing.

The main complication in this phase is the handling of sessions. In the normal configuration, nginx's load balancing is non-sticky - that is to say, each request is handled on its own, and the default behavior is to pick a different server. In some cases - say, a REST API - this is just fine. However, XPages have server-side state data, so this would completely mess that up (unless you do something crazy). There is a basic way to deal with this built-in, but it's not ideal: it's "ip_hash", where nginx will send all requests from a given IP to the same back-end. That's more or less okay, but would run into problems if you have an inordinate number of requests from the same IP (or same-hashed sets of IPs). So to deal with this, I use another layer, a tool called "HAProxy". But first, there'll be a bit of a change to note in the Domino setup.
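For reference, that built-in option is just an upstream block you would then point proxy_pass at - this is a sketch using the server names from my setup below:

```nginx
# ip_hash: all requests from a given client IP land on the same back-end
upstream domino {
	ip_hash;
	server terminus-local:80;
	server trantor-local:80;
}
```

The location / block would then use "proxy_pass http://domino;" instead of a single host.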

Domino

Since we'll now be balancing between two Domino servers, the main change is to set up a second one. It's also often a good idea to split up the proxy and app servers, so that you have three machines total: two Domino servers and one proxy server in front. If you do that, it's important to change the HTTP port binding from "localhost" to an address that is available on the local LAN. In my previous screenshot, I did just that. You can also change HTTP back to port 80 for conceptual ease.

In my setup, I have two Domino servers named Terminus and Trantor, available on the local network as terminus-local and trantor-local, with Domino serving HTTP on LAN-only addresses on port 80.

HAProxy

HAProxy is a very lightweight and speedy load balancer, purpose-built for this task. In fact, with the latest versions, you could probably use HAProxy alone instead of nginx, since it supports SSL, but you would lose the other benefits of nginx that will be useful down the line. So my setup involves installing HAProxy on the same server as nginx, effectively putting it in place of the Domino server in the previous post.

Much like nginx, installing HAProxy on Ubuntu/Debian is easy:

# apt-get install haproxy

The configuration is done in /etc/haproxy/haproxy.cfg. Most of the defaults in the first couple blocks are fine; the main thing to do will be to set up the last block. Here is the full configuration file:

global
        maxconn 4096
        user haproxy
        group haproxy
        daemon

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        retries 3
        option redispatch
        maxconn 2000
        contimeout      5000
        clitimeout      500000
        srvtimeout      500000

listen balancer 127.0.0.1:8088
        mode http
        balance roundrobin
        cookie balance-target insert
        option httpchk HEAD /names.nsf?login HTTP/1.0
        option forwardfor
        option httpclose
        server trantor trantor-local:80 cookie trantor check
        server terminus terminus-local:80 cookie terminus check

The listen block has HAProxy listening on the localhost address (so external users can't hit it directly) on port 8088 - the same settings as the local Domino server used to be, so the nginx config from before still applies.

The session-based balancing comes in with the balance, cookie, and server lines. What they do is tell HAProxy to use round-robin balancing (choosing each back-end in turn for a new request), to add a cookie named "balance-target" (could be anything) to denote which back-end to use for future requests, and to know about the two servers. The server lines give the back-ends names, point them to the host/port combinations on the local network, tell them to use cookie values of "trantor" and "terminus" (again, could be anything), and, via the "check" keyword, to perform the health check defined by the httpchk line above.

The httpchk line is what HAProxy uses to determine whether the destination server is up - the idea is to pick something that each back-end server will respond to consistently and quickly. In Domino's case, doing a HEAD call on the login page is a safe bet (unless you disabled that method, I guess) - it should always be there and should be very quick to execute. When Domino crashes or is brought down for maintenance, HAProxy will notice based on this check and will silently redirect users to the next back-end target. Any current XPage sessions will be disrupted, but generally things will continue normally.


And that's actually basically it - by shimming in HAProxy instead of a local Domino server, you now have smooth load balancing and failover with an arbitrary number of back-end servers. The cookie-based sessions mean that your apps don't need any tweaking, and single-server authentication will continue to work (though it might be a good idea to use SSO for when one back-end server does go down).

There is actually a module for nginx that does cookie-based sticky sessions as well, but the stock Ubuntu distribution doesn't include it. If your distribution does include it - or you want to compile it in from source - you could forgo the HAProxy portion entirely, but in my experience there's no great pressure to do so. HAProxy is light enough and easy enough to configure that having the extra layer isn't a hassle or performance hit.
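If your nginx build does include that module, the upstream block would look something like the following - the sticky directive and its options come from the third-party nginx-sticky-module, so treat this as a sketch and check the docs for your version:

```nginx
upstream domino {
	# "sticky" issues its own routing cookie, much like HAProxy's
	# "cookie balance-target insert" line above
	sticky;
	server terminus-local:80;
	server trantor-local:80;
}
```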

Setting up nginx in Front of a Domino Server

Thu Sep 18 13:08:46 EDT 2014

Tags: nginx ssl
  1. Setting up nginx in Front of a Domino Server
  2. Adding Load Balancing to the nginx Setup
  3. Arbitrary Authentication with an nginx Reverse Proxy
  4. Domino and SSL: Come with Me If You Want to Live

As I've mentioned before and now presented on, I'm a big proponent of using a reverse proxy in front of Domino. There are numerous benefits to be gained, particularly when you expand your infrastructure to include multiple back-end servers. But even in the case of a single server, I've found it very worthwhile to set up, and not overly complicated. This example uses nginx and Domino on Ubuntu Linux, but the ideas and some configuration apply much the same way on other OSes and with other web servers.

Domino

The first step involves a bit of configuration on the Domino server: move Domino off the main port 80, disable SSL, and, ideally, bind it to a local-only IP address. The port setting is familiar - I picked port 8088 here, but it doesn't matter too much what you pick as long as it doesn't conflict with anything else on your server:

The next step is to bind Domino to a local-only adapter so external clients don't access its HTTP stack directly. In this example, I have a LAN-only adapter whose IP address I named "terminus-local" in /etc/hosts, but I imagine "localhost" would work just fine in this case:

Once that's set, the last stage of configuration is to enable the WebSphere connector headers by setting a notes.ini property:

HTTPEnableConnectorHeaders=1

Enabling these will allow us to send specialized headers from our reverse proxy to Domino to make Domino act as if the request is coming to it directly.

After that, restart Domino (or just the HTTP task).
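Restarting just the HTTP task from the Domino server console is enough to pick up the notes.ini and port changes:

```
> restart task http
```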

nginx

Next, it's on to setting up nginx. On Ubuntu/Debian, it's pretty straightforward:

# apt-get install nginx

The main config file /etc/nginx/nginx.conf should be good as-is. The way the Ubuntu config works, you set up individual web site files inside the /etc/nginx/sites-available directory and then create symlinks to them in the /etc/nginx/sites-enabled directory. Out of convention, I name them like "000-somesite" to keep the priority clear. The first file to create is a site to listen on port 80, which will serve entirely as a redirect to SSL. You don't have to do this - you could instead bring the content from the next file into this one in place of the redirection line - but redirecting all HTTP traffic to SSL is usually a good idea. This file is 001-http-redirect:

server {
	listen [::]:80;

	return https://$host$request_uri;
}

The only really oddball thing here is the "listen" line. Normally, that would just be "listen 80", but adding the brackets and colons allows it to work on IPv4 and IPv6 on all addresses.

The next file is the important one for doing the proxying, as well as SSL. It's 002-domino-ssl:

server {
        listen [::]:443;

        client_max_body_size 100m;

        ssl on;
        ssl_certificate /etc/nginx/ssl/ssl-unified-noanchor.pem;
        ssl_certificate_key /etc/nginx/ssl/ssl.key;

        location / {
                proxy_read_timeout 240;
                proxy_pass http://localhost:8088;
                proxy_redirect off;
                proxy_buffering off;

                proxy_set_header        Host               $host;
                proxy_set_header        X-Forwarded-For    $proxy_add_x_forwarded_for;
                proxy_set_header        $WSRA              $remote_addr;
                proxy_set_header        $WSRH              $remote_addr;
                proxy_set_header        $WSSN              $host;
                proxy_set_header        $WSIS              True;
        }
}

The client_max_body_size line is to allow uploads up to 100MB. One thing to be aware of when using proxies is that they can impose their own limits on request sizes just as Domino does, and nginx's default is relatively low.

nginx's keychain format is almost as simple as just pointing it to your certificate and private key, with one catch: to have intermediate signing certificates (like those from your SSL provider or registrar), you concatenate the certificates into a single file. This tutorial covers it (and this config) nicely.

The core of the reverse proxy comes in with that location / block. In a more-complicated setup, you might have several such blocks to point to different apps, app servers, or local directories, but in this case we're just passing everything directly through to Domino. The first four lines do just that: they set a long read timeout to account for slow-loading pages, point to Domino, and disable redirect rewriting and response buffering.

The proxy_set_header lines are the payoff for the connector headers we set up in Domino. The first is to pass the correct host name on to Domino so it knows which web site document to use, the second is a fairly standard-outside-of-Domino header for reverse proxies, and then the rest are a set of the available WebSphere (hence "$WS") headers, specifying what Domino should see as the remote address, the remote host name (I don't have nginx configured to do reverse DNS lookups, so it's the same value), the host name again, and whether or not it should act as being over SSL.

Once that's set, create symlinks to these files in the sites-enabled directory from the sites-available directory and restart nginx:

# cd /etc/nginx/sites-enabled
# ln -s ../sites-available/001-http-redirect
# ln -s ../sites-available/002-domino-ssl
# service nginx restart

Assuming all went well, you should be all set! This gets you a basic one-server proxy setup. The main advantage is the superior SSL handling - nginx's SSL stack is OpenSSL and thus supports all the modern features you'd expect, including SHA-2 certificates and the ability to serve up multiple distinct SSL certificates from the same IP address (this would be done with additional config files using the server_name directive). Once you have this basis, it's easy to expand into additional features: multiple back-end servers for load balancing and failover, better error messages when Domino crashes (which is more frequent than nginx crashing), and nifty plugins like GeoIP and mod_pagespeed.
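As a sketch of that multiple-certificate case, a second site file would look much like 002-domino-ssl with a server_name added and its own certificate pair (the host and file names here are placeholders):

```nginx
server {
	listen [::]:443;
	server_name example.com;   # hypothetical second host

	ssl on;
	ssl_certificate     /etc/nginx/ssl/example-unified.pem;
	ssl_certificate_key /etc/nginx/ssl/example.key;

	# ...then the same location / block as in 002-domino-ssl
}
```

Modern browsers pick the right certificate via SNI, so no extra IP addresses are needed.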

Edit 2015-09-16: In my original post, I left out mentioning what to do about those "sites-enabled" documents if you're not running on Ubuntu. There's nothing inherently special about those directories to nginx, so a differently-configured installation may not pick up on documents added there. To make them work in an installation that doesn't initially use this convention, you can add a line like this to the /etc/nginx/nginx.conf (or equivalent) file, at the end of the http code block:

http {
    # ... existing stuff here

    include /etc/nginx/sites-enabled/*;
}

Better Living Through Reverse Proxies

Thu May 30 15:59:02 EDT 2013

  1. Putting Apache in Front of Domino
  2. Better Living Through Reverse Proxies
  3. Domino's Server-Side User Security
  4. A Partially-Successful Venture Into Improving Reverse Proxies With Domino
  5. PSA: Reverse-Proxy Regression in Domino 12.0.1

A while ago, I set up Apache as a reverse proxy in front of Domino. Initially, that was in order to serve a WordPress site on the same server, but since then it has more than proven its worth in other ways, mostly in the areas of fault tolerance and SSL support.

IBM recognizes those benefits, which is why Domino 9 on Windows comes bundled with IHS; however, I am a responsible admin, so I don't run Windows on my main servers. Fortunately, stock Apache does the job nicely... or it does usually. Filled with enthusiasm for this kind of setup, I rolled out a small Linode VM dedicated entirely to running Apache as a proxy (sort of like their NodeBalancers, but manual). Unfortunately, I started running into a problem where sometimes sites fronted by it (such as this one) would not properly include their host information and would instead load the main I Know Some Guys site, which is the default on Domino. I wasn't able to find a fix that actually worked, so I decided to use that as an excuse to switch to a cute little number named Nginx.

So far, my experience with Nginx has been fantastic. The config files read like a cleaned-up version of Apache's, and it has matched Apache for every feature I've needed (namely, load balancing, easy config, and multiple SSL certificates). As a nice bonus, I didn't have to do any of the config massaging I had to for Apache in order to get XSP's funky resource-aggregation rules to work. If you have the means, I highly recommend it.

 

My latest foray into proxying also gave me an opportunity to look back into my main bugbears with the setup: Domino tracking the proxy server's IP instead of the original requester's and its lack of knowledge of SSL (which causes it to redirect from an SSL login page to a non-SSL one). Fortunately, it turns out that these problems have been sort-of-solved for years via a notes.ini setting added as part of Domino's terminal WebSphere infection: HTTPEnableConnectorHeaders.

By enabling that on my Domino server, I was able to start providing some of those headers. The remote-host headers ("$WSRA" and "$WSRH") work perfectly: setting them to the original client's address causes Domino to act just as if that client were the original requester, namely filling that in for the REMOTE_HOST field in classic and facesContext.getExternalContext().getRequest().getRemoteAddr() in XPages.

Unfortunately, I was stymied when I set "$WSIS" to True. Though it does indeed cause Domino to acknowledge that the incoming request is via SSL, it does it TOO well: Domino appears to revert to its behavior of only acknowledging a single SSL site, so it caused requests to essentially ignore the Host (and "$WSSN") headers. So that problem remains unsolved.
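One way to sidestep that bug, sketched here as an assumption rather than something I have in production, is to leave "$WSIS" alone and instead pass the conventional X-Forwarded-Proto header, which application code can consult when deciding how to build redirect URLs:

```nginx
server {
    listen 443 ssl;
    # ...ssl_certificate directives as usual...

    location / {
        proxy_pass http://arcturus;
        proxy_set_header Host              $host;
        # Tell the back end the original request was HTTPS, without
        # tripping Domino's single-SSL-site behavior the way $WSIS does
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

The trade-off is that Domino itself won't treat the request as SSL; only code that explicitly checks the header benefits.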

 

Still, I feel pretty good about my switch to Nginx and my abuse of the HTTP connector headers and look forward to tinkering some more. For reference, here is the config I use for the standard HTTP proxy ("arcturus" is the short name I gave for the main upstream target):

server {
    listen 80;
	
    location / {
        proxy_pass http://arcturus;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header    Host            $host;
        proxy_set_header    X-Real-IP       $remote_addr;
        proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header    $WSRA           $remote_addr;
        proxy_set_header    $WSRH           $remote_addr;
        proxy_set_header    $WSSN           $host;
    }
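The "arcturus" name in proxy_pass refers to an upstream block defined elsewhere in the config (in the http context). The addresses below are placeholders; a minimal sketch with a backup server for failover might look like:

```nginx
upstream arcturus {
    # Primary Domino back end; add more entries for load balancing
    server 10.0.0.2:80;
    # Only used if the primary is down
    server 10.0.0.3:80 backup;
}
```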
}