Sync

Tue Oct 25 21:52:39 EDT 2011

So there's a new round of talk lately about syncing and the trouble involved, thanks to some changes in Google Reader's behavior and the desire to find a new safe haven for RSS syncing. The best example is, unsurprisingly, from Brent Simmons:

Google Reader and Mac/iOS RSS readers that sync

However, the whole time I was reading this article, my brain kept yelling at me, louder and louder as time passed:

This is Lotus Notes! The system you're describing is Lotus Notes! It does syncing and deletion stubs and read marks! IT'S LOTUS NOTES!

This kind of thing would indeed be really easy in Notes/Domino, particularly if you were actually using the Notes client (though it wouldn't be much to look at). Subsets of data, managing deleted elements, timed refreshes from the source, storing each feed entry as its own entity, and offline access that can have its changes synced back to the master are all things that Notes has done since its inception - the only problem is that it's so ugly and arcane that mass-market appeal is nigh-impossible.

Nonetheless, it got me thinking about the viability of using Domino as a syncing server for this. You wouldn't be able to use NSF files in your RSS clients, which would make the job a bit tougher, but the new "XWork" licensing model would fit into this nicely. Scalability would be a serious concern, but the simple nature of the data would keep view updates quick, and it'd just be a bit of cleverness in the database layout to direct users to the correct place. Toss a couple clustered servers in there and you should have some good load balancing, too. The Domino Data Services API might be enough to handle data access from the client, but, if it's not, a couple simple agents would do it.

I'm sort of tempted to try hashing something out.

The Domino Data Service

Wed Oct 05 10:22:32 EDT 2011

Tags: domino

Though I don't have a use for it currently, I can't help but get kind of excited about the Domino Data Services in 8.5.3 and the Extension Library. If you're writing a normal Domino application - using either legacy elements or XPages - you probably won't have terribly much use for it.

However, the really cool aspect of it is that it significantly smooths the process of using Domino as a backing data store for another front end written in PHP, Ruby, or anything else. This has always been sort of possible - you could use the Java API or a combination of ?ReadViewEntries, ?CreateDocument, and ?SaveDocument URL commands to access Domino data without actually being in Domino, but it wasn't exactly a smooth process. With the Data Services, now Domino is very similar to, say, CouchDB, but with reader fields and impenetrable licensing terms for non-vendors.
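
As a sketch of what that access looks like from outside Domino - the server, database, and view names here are hypothetical, and this assumes the Data Service has been enabled on the server - fetching a view's entries is just a plain HTTP GET for JSON:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class DataServiceFetch {
    public static void main(String[] args) throws Exception {
        // Hypothetical server and database - substitute your own.
        // The view collection service returns the view's entries as JSON.
        URL url = new URL("http://server.example.com/forums.nsf/api/data/collections/name/Topics");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");

        // Print the raw JSON response to the console
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            reader.close();
            conn.disconnect();
        }
    }
}
```

Anything that can issue an HTTP request and parse JSON - PHP, Ruby, or whatever else - can do the same, which is the whole appeal.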

One nice little side effect of the fact that it uses the HTTP stack is that DSAPI modules work. When I was testing around, I was able to get a list of available forums as Anonymous, resulting in only the two visible ones. When I included my user authentication filter cookie, it started showing me the rest of the forums that the user could access, exactly like you'd want. While you could presumably just pass the username and password in each request using normal HTTP authentication, it's pretty cool that any alternate methods like this work as well.

It's all pretty exciting, and I'm itching to find a use for it.

My next two favorite features of 8.5.3

Wed Oct 05 08:07:30 EDT 2011

Tags: domino

Since 8.5.3 has been out for about 24 hours now, I naturally rolled it out on both my development and production servers. Fortunately, my irresponsibility was greatly rewarded: the largest problems I've had so far were a change in the way Java classes are accessed in JavaScript (I could no longer just call methods on non-public classes defined in the same file as a public one, so I had to split them out into their own files... which is what you're supposed to do anyway) and a minor CSS change where the top borders of my Dojo tabbed tables are now back to a light grey color, so I need to find the new CSS rule to change them back to brown.

I'm rather happy so far about two minor things in particular: CSS/JavaScript aggregation and OSGi auto-loading.

The CSS/JavaScript aggregation is almost a freebie: once you have Designer 8.5.3, you get a new option in the database properties sheet to turn it on, and then 8.5.3 servers will happily obey it. I immediately noticed a load-speed increase of about a third, and one non-technical guild member reported that the odd dog-slowness that they (and not other people) had been experiencing was fixed. My favorite aspect of this is that it's a smart feature: because you define Dojo modules in an XPage as <xp:dojoModule/> elements and not just text on a page like normal HTML, Domino knows ahead of time what you're using and can thus feel free to optimize it in transparent ways like this. It feels good seeing the same code go from one form to a more efficient one just by virtue of having done the "right thing" when writing it to begin with.

The OSGi plugin auto-loading was mentioned briefly on Dec's Dom Blog back in June and I hadn't seen much reference to it since, so I was afraid it wouldn't necessarily make it in. Fortunately, it has: I created a new Update Site with the template from an 8.5.3 server, imported the latest Extension Library, ran the "Sign All" agent, and set the notes.ini parameter to look there. And lo and behold: it properly loads up the extensions from the NSF, so I was finally able to delete the filesystem versions that were previously necessary. This makes managing the Extension Library much smoother and it's one less potential gotcha when I upgrade my dev server first and then want to deploy it - since the Update Site has a replica on both servers, the upgrade is handled with the normal replication process and I don't have to remember to copy any files over from server to server. And the fact that it's an NSF theoretically gives you all kinds of other, more complicated deployment options, like server-based Reader field control or partial replication to control which servers see which plugins if you're so inclined. Very cool - I approve, IBM.

My Favorite Minor Feature in 8.5.3

Wed Sep 28 18:23:55 EDT 2011

Tags: domino

I don't have access to the beta versions of new Notes/Domino versions, so I haven't been able to tinker around with all the cool new things that are slated to appear in 8.5.3, but that hasn't stopped me from getting pretty excited about some of them. The big-ticket items are clear: the new Domino Data Services and relational database access (through the Extension Library, which may as well be standard) could make practical very different ways of using Domino either as a standalone data source with a different front end or as a standalone front end with a relational data source. While both types of setups are theoretically possible now, they're such a hassle that they're not worth it - but, with 8.5.3, they're almost top-tier choices for system architectures.

However, since I don't plan to rewrite my entire architecture, those new features probably won't affect my day-to-day life for a while yet. What has me most excited in a practical sense is much more lowly: being able to do a full-text search with sorted results. I've found that one of the big bottlenecks in my guild-forums app is the sheer size of the views, particularly the Posts one. I used to stuff pretty much all of the summary data into the view, but then I found that removing the non-sorted columns sped up responsiveness dramatically. That whetted my appetite for clearing out unneeded sorted columns - since each sorted column contains a full view index, having even a handful can increase the total index size dramatically. Since it appears that FTSearch's performance is almost (but not quite) as good as getting all entries by a key, I'll be able to remove the rarely-used sorted columns, speeding up all the common operations in exchange for a very minor hit in the rare case. Plus, it'll just feel good to put Domino's searching capabilities to proper use.
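
As a sketch, assuming 8.5.3's new FTSearchSorted method on the Java View class, and with a hypothetical "Posts" view and query standing in for the real ones, that approach looks something like this:

```java
import lotus.domino.Database;
import lotus.domino.View;
import lotus.domino.ViewEntry;
import lotus.domino.ViewEntryCollection;

public class SortedSearch {
    // db is assumed to be an already-open Database handle; the view
    // name and query here are illustrative stand-ins.
    public static void printMatches(Database db) throws Exception {
        View posts = db.getView("Posts");
        // Unlike FTSearch, FTSearchSorted returns entries ordered by
        // the view's sorting rather than by relevance score
        ViewEntryCollection entries = posts.FTSearchSorted("[Subject] CONTAINS raid");
        ViewEntry entry = entries.getFirstEntry();
        while (entry != null) {
            System.out.println(entry.getColumnValues());
            ViewEntry next = entries.getNextEntry(entry);
            entry.recycle();
            entry = next;
        }
        entries.recycle();
        posts.recycle();
    }
}
```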

Getting Domino LDAP to Work for Authentication

Thu Aug 25 16:23:59 EDT 2011

Tags: domino

Recently, I've been toying with the idea of setting up a couple extra services on my guild's Domino server - voice chat, non-Sametime chat, what have you - and I figured I should give a shot to LDAP authentication with the Domino directory for these. However, this is something I've never done, and the documentation is a little rough - most LDAP info on the web refers to non-Domino servers, while most Domino-specific information was written in about 1996.

I'll leave out the depressing details of the various things I tried in my quest to get LDAP working as an authentication mechanism for my Linux server (as a relatively simple test case) and point you instead to this dead-but-still-archived page: http://web.archive.org/web/20040614140723/http://www.dominux.co.uk/ldap.html. The key information on that page is the list of fields that you have to add to your user documents to use them for this purpose. During my harried testing, all /var/log/auth.log was telling me was "Invalid credentials", but what it really meant was that the user account it found didn't have the right attributes. Thanks, Linux!
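
For reference, the gist of that list is the standard posixAccount attribute set that Linux's LDAP modules look for, exposed as extra fields on the Person documents. The attribute names below are the usual ones and every value is purely illustrative - check the archived page for the exact details:

```
dn: CN=Some User,O=MyOrg
objectClass: posixAccount
uid: someuser
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/someuser
loginShell: /bin/bash
```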

.recycle() in Back-end Java Classes in XPages

Tue Aug 02 14:12:59 EDT 2011

Though most of my Domino programming has been done in LotusScript (since it's one of the two "real" Domino languages), I had worked with Java here and there before diving into XPages, at least enough to know about recycle(). recycle() is a strange beast, a visitor from a non-memory-managed language popping up inside a famously memory-managed one. I get, conceptually, why it exists - since Lotus doesn't control the Java object lifecycle, Domino can never know when an object is garbage collected. And I'm sure there was some efficiency- or pragmatism-related reason why the back-end C++ objects were necessary when the Java API was created, but the result is that there's this weird anachronistic headache to deal with.

In the case of Java agents, it's not so bad - most of the time, the agent will be very procedural and so it's very easy to toss in some recycle()s in your loops and at the end of the code. However, it's much stickier with XPages, especially if you have a crazy back-end object system like I do that's meant to abstract away all of the implementation details of the Domino database. Once you reach that point, it's very hard to have code "know" when an object is no longer going to be needed. It becomes this balancing act between strict recycling on the one hand (resulting in many more round trips to the database than needed) and fast but leaky code on the other.

However, though each individual bit of code doesn't necessarily know if it's at the end of the lifecycle, there IS a well-defined set of lifecycle phases and a mechanism for hooking into that. Taking a note from how to implement a flashScope in XPages, I created a new view-scoped backing bean that inherits from a thread-safe Set containing lotus.domino.Base objects and adds a convenience method to call .recycle() on all of its contents. Then, I added a PhaseListener object to wait for after the "Render Response" phase and call that. Then, everywhere in my code that I create a Domino object, I add it to the Set before continuing along in the code. Since I'll definitely be done with all of those objects by the time the page has finished rendering, this should theoretically mean that all of my objects are recycled after each page load.
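
A minimal sketch of that arrangement, leaning on XPages' JSF underpinnings - the class, method, and bean names here are all illustrative, not the real implementation (and the two classes are shown together for brevity; they'd live in separate files):

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import javax.faces.event.PhaseEvent;
import javax.faces.event.PhaseId;
import javax.faces.event.PhaseListener;
import lotus.domino.Base;
import lotus.domino.NotesException;

// View-scoped "recycle bin" bean: code registers every Domino handle
// it creates, and everything gets recycled once the page has rendered.
public class DominoObjectBin implements java.io.Serializable {
    private final Set<Base> objects = Collections.synchronizedSet(new HashSet<Base>());

    public void register(Base obj) { objects.add(obj); }

    public void recycleAll() {
        synchronized (objects) {
            for (Base obj : objects) {
                try { obj.recycle(); } catch (NotesException e) { /* already recycled */ }
            }
            objects.clear();
        }
    }
}

// Registered in faces-config.xml; fires after Render Response.
class RecyclePhaseListener implements PhaseListener {
    public PhaseId getPhaseId() { return PhaseId.RENDER_RESPONSE; }
    public void beforePhase(PhaseEvent event) { }
    public void afterPhase(PhaseEvent event) {
        DominoObjectBin bin = (DominoObjectBin) event.getFacesContext()
            .getApplication().getVariableResolver()
            .resolveVariable(event.getFacesContext(), "dominoObjectBin");
        if (bin != null) {
            bin.recycleAll();
        }
    }
}
```

The bean would be declared as a viewScope managed bean and the listener registered under the lifecycle section of faces-config.xml; everywhere code creates a Domino handle, it calls register() on the bean instead of recycling inline.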

Hacking Together a Backup System

Sat Jul 23 17:10:40 EDT 2011

Tags: projects

Recently, my living room Mac mini's external drive, where it kept its iTunes library, met a horrible, clicking death. Naturally, I had devised a proper backup plan long before this happened - unfortunately, however, I had not yet implemented this plan. Crap.

So I decided to straighten out my backup system all around, ideally covering all of the machines in the apartment as well as my hosted Domino server. My ideal (at least for now) backup plan would fit within a couple attributes:

  • Cheap. I've been trying to spend less money overall lately, and picking up a new bill for hardware or services would counteract that somewhat. I wanted to come up with something that would work with the hardware I had on hand, plus one purchased replacement HD.
  • Automatic. I don't want to have to remember any manual backup process, since I most likely wouldn't.
  • Off-site backup isn't important. Sure, it would be nice to keep my data in the event of a physical catastrophe, but we're talking about TV shows and movies here, not anything vital.
  • Quick recovery or automatic failover aren't important. They'd be NICE, certainly, but I'm just looking for a basic "the data exists in at least one other place" setup. If a computer meltdown means that recovery will take a while or I'll have to rebuild the OS, that's fine.
  • Versioning isn't important. The occasions where I would want to restore intentionally-deleted files or modified documents are so few and far between that it's not worth going out of my way to achieve that.

My main laptop was far and away the easiest to set up. A while ago, my boss gave me an external USB drive which I've been using for Time Machine backups. Time Machine does pretty much everything I would want to, and even gets bonus points for ease of recovery and versioning. When I'm out of the office, it even kind of counts as off-site.

My Domino data was the next easiest, primarily since I work with it all the time. I set up a Parallels virtual machine to run a new Domino server, set up scheduled replication, and pointed my "create a replica of everything" agent at my production server. Voilà: up-to-the-hour backups without having to give it a second thought.

The hard drive I purchased to replace the failed one was a nice 2 TB one, giving me enough room to store my media library plus some Time Machine backups for the mini itself, the iMac in the bedroom, and the two other laptops floating around. It won't be enough space permanently, but it'll last me at least until I'm comfortable enough to buy another one. So that covers the other Macs themselves.

In addition to the media drive, the Mac mini also has a 750GB drive salvaged from my poor, video-card-exploded iMac. I cleaned off enough old crap from there that it will be able to serve as a mirror for the media files on its larger brother - again, at least for now. To implement that, I wrote a quick, two-line shell script:

rsync -aE --delete /Volumes/Tartaros/Movies /Volumes/Diaspar
rsync -aE --delete /Volumes/Tartaros/iTunes /Volumes/Diaspar

That mirrors the Movies and iTunes folders on Tartaros (the media drive) to equivalent folders on Diaspar (the iMac's old drive). The "-a" switch enables "archive" mode, which turns on a number of behaviors useful for this case (recursion and preservation of permissions, ownership, timestamps, and the like), "-E" enables support for HFS+ metadata such as ACLs, forks, and extended attributes, and "--delete" removes any files in the target directory that no longer exist in the source. I added this to my crontab to run at 3 AM each day.
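
For reference, assuming the two lines above are saved as an executable script (the path here is hypothetical), the crontab entry looks like:

```
# min hour day month weekday command
0 3 * * * /usr/local/bin/mirror-media.sh
```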

All in all, I think this setup should cover my basic needs pretty well. My next step, when I want to spend the money, will be to sign up for an online backup service like CrashPlan. There are some cheap options and they would hopefully be more reliable than my current scheme, which is still dependent on the fragile health of a handful of external USB drives.

Using PHP on a Domino Server via CGI

Sat Jul 16 21:34:21 EDT 2011

Mostly on a lark but partly for practical reasons, I wanted to set up PHP on my Linux-based Domino server. While an easy option would be to install Apache and have it use a non-80 port, I wanted to see if Domino's ancient CGI support still works in the modern world of today.

In short: yes. In fact, it's pretty easy; the bulk of the work I put into it was actually just because I refused to believe that the syntax on that IBM page and in the Administrator documentation wasn't a typo. On my Ubuntu-based server, the setup is quick:

  • Run "apt-get install php5-cgi"
  • Set "cgi.force_redirect = 0" in your php.ini file. In my case, this didn't exist, so I created it as /etc/php5/cgi/conf.d/php.ini with just that line. If you don't do this, you'll get errors about the CGI app not returning any output
  • Add a rule to your web site with the following setup (using the path to your PHP CGI binary as needed):
    • Type of rule: Directory
    • Incoming URL pattern: /*.php
    • Target server directory: /usr/bin/php5-cgi\*.php
    • Access level: Execute
  • Place your scripts in the domino/html directory in your server's data directory

That backslash is actually there and there actually isn't any space in front of it. It's really weird, I know.

If everything went well, you should be able to refresh HTTP and voilà: you can execute PHP scripts. I haven't investigated it super-thoroughly for edge-case gotchas, but it runs phpinfo() just fine, and you don't have to worry about making your PHP scripts executable or anything.

One important thing to bear in mind is that this is just CGI. Your PHP code isn't going to suddenly have access to the Domino API in any new way - this isn't a new way to write Notes apps on the web. If you want some sort of API access, on Windows you could use OLE and on other platforms you could presumably compile a version of PHP with Java support, but that's about it. That said, you DO get the benefit of Domino handing off any authenticated user name (even using some crazy DSAPI method), so it's not COMPLETELY isolated from the rest of the server.

Freaking Reader Fields

Thu Jul 14 15:02:42 EDT 2011

Tags: domino

Periodically (read: every day), I wonder about switching from Domino to a SQL server for data storage on my guild web site. The primary reason for this is speed: I'm doing primarily relational things, so I've had to wrangle Domino quite a bit to do this with any amount of speed and code cleanliness. Additionally, while most of my documents are entirely distinct from each other, I had to make concessions here and there, such as storing the latest Post date in Topic documents so I can sort them that way, and, each time I have to do that, there's another little bit of code maintenance and clustering-unsafety.

However, my ideas always come to a screeching halt when I remember Reader fields. They're simply too good, and the replacements I've found in the open-source SQL databases have been, to put it kindly, lacking in comparison. They generally involve having some access level field or, best case, a multi-value field of names that are allowed to see the document, and then making sure that all of your queries or views honor that. Every method has some severe downside, ranging from inflexibility (access level) to nightmarish piles of code everywhere (multi-value name/group/role fields). Everywhere I accessed the database, I'd have to worry about security and document access, bloating up the code and just asking for data-leak bugs.

Domino, for all of its faults, makes this something you just don't have to worry about. If you have a Reader field, you can toss names, groups, and roles in there with impunity, and the server will handle the rest like you'd want. You don't have to do your own directory lookups, security checks, or nested queries. If the current user isn't on the list, the document may as well not exist. Even if the user had the UNID of the document and designer access to the database, it'd be beyond their reach. This is enormously comforting. And even though it's just a guild web site and not a giant corporate database, I'd still rather deal with a bit of tricky code for performance than the headaches and drama involved with people seeing what they're not allowed to see.

So, until I either get entirely fed up with Designer or I find an equivalent to Reader fields in a free SQL server, I'll be sticking with Domino.

Java Takes Its DTDs Seriously

Wed Jun 29 10:24:54 EDT 2011

Around 8:45 PM last night, my main XPages app stopped responding. The browser would sit there waiting for the server for about 30 seconds or a minute until the server finally gave up and handed out a Command Not Handled Exception. When I first started looking into it, I saw a rogue process taking up the whole CPU, but after killing it and bouncing the server for good measure, the problem remained.

I'll leave out the hour's worth of hair-pulling and cut right to the chase: I had added a doctype line to my faces-config file but hadn't gotten around to removing it. This normally isn't too much of a problem, but java.sun.com became unavailable yesterday (and still is at the moment). Thus, the server was opening the faces-config.xml file as XML and, as XML parsers are supposed to do, it was attempting to fetch the DTD to validate it. However, after waiting 30 seconds or so, it would give up the ghost, spit a misleading error to the console along the lines of "Can't parse configuration file:xspnsf://server:0/database.nsf/WEB-INF/faces-config.xml", and declare that it couldn't handle the command. As soon as I removed the DTD, everything started working perfectly again.

I'm reasonably certain this is Larry Ellison's fault.