Quick Tip: Use Dojo Content Panes for Speedier Initial Page Loads

Mon Aug 18 19:12:57 EDT 2014

Tags: quick-tip

The XPages Extension Library is full of hidden gems and one I particularly like keeping in my back pocket is the <xe:djContentPane/> control. It's a fairly unassuming control; like the rest of the components in the "Dojo Layout" category, it takes its name and basic concept from the underlying Dijit. However, you don't have to use it in a full Dojo layout - and, in fact, all of the actual layout types have more-appropriate XSP components. Its intended use is presumably to load content from a specified URL (via the href attribute), but I find the partialRefresh attribute MUCH more interesting. By setting that attribute to true, you actually carve out a little sub-tree of the page that loads in a separate request from the rest of the page.

That's "loads" in the JSF sense: the properties - including ${...}-bound ones - of the components inside the content pane aren't evaluated until that second request. So what's the upshot of this? If the values inside are particularly expensive, such as heavy database access or a remote API call, the rest of the page will load first and the content pane's area will show a loading message until it finishes.

This tool comes in handy even if you don't touch a single other piece of Dojo UI. The content pane by default is just a <div/>, and you're free to replace the loading message. For example, to Bootstrap-ify it:

<xe:djContentPane id="timeEntriesPane" partialRefresh="true">
	<xp:this.loadingMessage><![CDATA[
		<div style="padding: 1em; text-align: center"><i class="icon-spinner icon-spin green bigger-200"></i></div>
	]]></xp:this.loadingMessage>
	
	...
</xe:djContentPane>

(That snippet is from a pane that loads open client time entries from FreshBooks, hence the id)

The nicest part is that using this technique is often just as simple as wrapping your content in the control and switching the attribute. There are only a few situations I've run into in regular use that require some consideration:

  • If you have some on-load client-side JavaScript that should operate on the contents, such as dojo.behavior. In that case, you can use the onDownloadEnd event (if I recall correctly) to trigger the behavior. You could likely set this via a theme (it's available as both a normal "Event" as well as an attribute on the tag).
  • If the code inside is dependent on query string parameters. Parameters are not included in the partial-refresh URL. I don't recommend the specific suggestion of dumping all query parameters into viewScope (that's asking for an injection vulnerability); instead, make sure that anything you get from the query string is either "baked into" the loading of the rest of the page (say, documentId="${param.documentId}" on a document data source instead of documentId="#{param.documentId}") or added via a data context (e.g. <xp:dataContext var="someParamHolder" value="${param.someParam}"/>).
  • If the content of the pane is error-prone. The partial refresh masks the error message (this may actually be preferable from a UI perspective): you just get an "error loading contents" message on the page. To see the normal stack trace, you have to either disable partial refresh or check the Network pane of your browser's debug tools.
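Putting the first two of those notes together, here's a sketch of what that can look like in practice. Treat the exact wiring as an assumption to verify against the Extension Library docs: dojo.behavior.apply() re-runs registered behaviors after the pane loads, and the ${...}-bound dataContext bakes the query parameter in during the initial page load so the partial-refresh request doesn't need it:

```xml
<xp:view xmlns:xp="http://www.ibm.com/xsp/core"
	xmlns:xe="http://www.ibm.com/xsp/coreex">
	<!-- Capture the query parameter during the initial page load, so the
	     partial-refresh request doesn't need access to it -->
	<xp:this.dataContexts>
		<xp:dataContext var="documentId" value="${param.documentId}"/>
	</xp:this.dataContexts>

	<xe:djContentPane id="detailPane" partialRefresh="true"
		onDownloadEnd="dojo.behavior.apply()">
		<!-- Expensive content goes here, referencing #{documentId} freely -->
	</xe:djContentPane>
</xp:view>
```

The control and variable names here are invented for illustration; the pattern is what matters.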

With those in mind: give it a try! Used in the right situation, you can get a nice little boost to initial page load with only a little cost in overall load time - it can make for a surprisingly better experience.

My Sessions at MWLUG This Year

Mon Aug 18 15:13:02 EDT 2014

Tags: mwlug

As I mentioned at the end of this morning's post about SSL and reverse proxies, I'm going to be giving a session on using a reverse proxy in front of Domino at this year's MWLUG next week. Specifically, it will be one of two sessions, both on Friday:

OS101: Load Balancing, Failover, and More With nginx

I'll be discussing the general reasons why you would use a reverse proxy - not just the aforementioned SSL benefit, but also load balancing, failover, multi-app integration, new features like GeoIP, config/upgrade management, and content rewriting. I'll also discuss a specific setup based on my own use of nginx + HAProxy as a proxying layer in front of two Domino servers on nearby machines. Though the practical example will be based on nginx on a Linode instance, the overall concepts are useful with other front-end servers or devices.

AD105: Building a Structured App With XPages Scaffolding

I'll be discussing the process and reasons for building an app using my XPages Scaffolding project and its further-advanced cousin the frostillic.us Framework. As with the ongoing series, I'll be demonstrating the structure of a basic app using my techniques, where they differ from common practice, where they are the same, and why it's worthwhile to adopt a similar structure. And much like my other session, though the specific example will be with my framework, the concepts and priorities can be applied without it.

Domino SSL and Reverse Proxies

Mon Aug 18 10:11:19 EDT 2014

Tags: ssl

Domino's SSL stack has been long in the tooth and awkward to deal with for a while. Until recently, this mostly just resulted in the stilted way you have to set up SSL keychains, using the Server Certificate Admin database initially and then "IKeyMan" more and more (specifically, an old version you need 32-bit Windows XP for, like a barbarian), but the job eventually got done.

However, as a post from Steve Pitcher points out, this is becoming rapidly impractical. While I generally second his point that Domino's SSL stack needs a revamp, I believe that the primary importance of that will be for the "secondary" protocols - SMTP, IMAP, LDAP - that require SSL for responsible use. For HTTP, however, I'm on board with the spirit of IBM's proposed workaround - IBM HTTP Server - though not the unsuitable reality of a Windows-only implementation.

The reason for this is my general infatuation with reverse proxying over the last few years. It started, actually, with the amazing article PHP: a fractal of bad design, which is required reading on its own, but the pertinent part is in the section on deployment. In it, the author dings PHP for its traditional deployment method of "just jam it directly into the web server" and instead proposes the "reverse proxy in front of an app server" scenario as generally preferable and easy to do. Having seen just that sort of setup with recommended Ruby on Rails deployments - where the Rails app has a small, local-only HTTP server that a "real" server proxies to - that percolated in my mind for a while. Domino is, after all, primarily an app server, and so this criticism of PHP applies just as much to it. A little while later, I had a reason to try that setup in order to host two distinct app types on the same server (ironically - or irresponsibly - deploying that very type of bad PHP setup in the process).

Over time, I improved my setup by using the WebSphere connector headers to cause Domino to view proxied requests as if they were coming directly, using SNI to allow multiple distinct SSL certificates on a single host, adding GeoIP headers at the nginx level, and setting up a sticky-session load balancer to share access between multiple servers and silently fail over when one goes down.
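As a rough sketch of the core of that setup: the $WS* names below are the WebSphere connector headers, which require HTTPEnableConnectorHeaders=1 in the Domino server's notes.ini. The paths, port, and hostname are placeholders, and you should verify the header details against current documentation before relying on them:

```nginx
# Hypothetical minimal nginx front end for a Domino server listening
# on a non-public port (8080 here); SSL terminates at nginx.
server {
	listen 443 ssl;
	server_name example.com;

	ssl_certificate     /etc/nginx/ssl/example.com.crt;
	ssl_certificate_key /etc/nginx/ssl/example.com.key;

	location / {
		proxy_pass http://localhost:8080;
		# WebSphere connector headers: Domino treats the proxied request
		# as if it arrived directly from the original client
		proxy_set_header $WSRA $remote_addr; # real client address
		proxy_set_header $WSRH $remote_addr; # client host
		proxy_set_header $WSSN $host;        # requested server name
		proxy_set_header $WSIS True;         # the connection used SSL
	}
}
```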

I'm now at the point where I pretty much consider it irresponsible to not use a reverse proxy in front of a production Domino deployment.

Is setting this up as easy as just giving ports 80 and 443 to Domino? Nope, not at all. Is it difficult, though? Not really. I managed to set it up readily, and I'm no admin. Other than a few rough edges IBM should shave off of Domino (such as the way the faux-SSL connector header switches it to the crummy single-Internet-Site behavior), I've had the stated benefits and more of an IHS deployment for a long time and without having to use Windows in production. I strongly recommend that everyone view Domino as not a web server - instead, it's a back-end HTTP app server that can serve as a makeshift web server for development, but is best deployed behind a proper front end.

And totally coincidentally, I'll be giving a session on this very topic at MWLUG later this month (in the Open Source track). I'll specifically be discussing nginx, but the same principles would apply with other viable choices like Apache, IHS, or IIS.

Quicker Tip: Lowering XPage Build Overhead When Using Jars

Thu Aug 14 15:27:57 EDT 2014

Tags: dde

(Caveat: I don't actually know if this matters with the Jar design element in 9.0+, since the only times I've needed it are with clients using obsolete versions)

Updated: See Below.

If you use Jar files stored in an NSF in your build path with XPages apps, you've likely noticed that it makes your build times interminable, particularly if it's a large library. From what I can tell, Designer seems convinced that it must download the entire Jar file during every build, in order to find out what's inside of it.

After waiting through enough of these ridiculous build times, I figured I'd try something: extract the .class files inside into a binary class folder. And, lo and behold, it works: if you follow this process, Designer no longer downloads everything, since it can see what classes are available using the NSF's VFS normally.

The process is straightforward if you're used to Eclipse-isms, and isn't too bad generally. All of this happens from Package Explorer:

  1. Put the Jar file on the local filesystem somewhere - if you just have it in the NSF, you can extract it via right-click -> Export... -> File System
  2. Create a folder in your NSF's VFS. WebContent/WEB-INF is a good spot for this. I tend to name the folders the same as the Jar file, minus ".jar", so that the same name and any in-name version info is retained
  3. Import the class files from the Jar into that folder. Do this by right-clicking the folder in Designer and going to Import... -> Archive File -> select the Jar from the filesystem -> make sure everything is checked
  4. Add the folder to the build path. You can do this by right-clicking basically anything in the NSF's VFS tree and going to Build Path -> Configure Build Path... -> Libraries tab -> Add Class Folder.... In that dialog, pick the newly-created folder in WebContent/WEB-INF

And with that, you should be able to delete the original Jar while still having the class files available, along with faster build times.


Update: It looks like there's a minor caveat: the class files are available for Designer, but not the server. Fortunately, it's not as bleak as that makes it sound: if you keep the Jar file in the NSF in the specific path WebContent/WEB-INF/lib (without adding it to the build path), then you can use the class-folder trick for Designer, but the server will pick up the classes from the attached Jar. So that's a drag from the cleanliness perspective, but still gets the job done. That may also mean things get muckier on 9.0+, since I believe that's the location it uses for Jar File resources.

Be a Better Programmer, Part 4

Thu Aug 14 09:44:26 EDT 2014

Tags: abstraction
  1. Be a Better Programmer, Part 1
  2. Be a Better Programmer, Part 2
  3. Be a Better Programmer, Part 3
  4. Be a Better Programmer, Part 4
  5. Be a Better Programmer, Part 5

This topic isn't, strictly speaking, in the same vein as the rest of this series; instead, it's more of a meandering "case study" sort of thing. But it has something of a unifying theme, if you'll bear with me:

Be Mindful of Your Layer of Abstraction

Basically everything about computers has to do with layers of abstraction, but what I have in mind at the moment is how it interacts with web programming. First, I'll dive into a bit of background.

The original premise of HTML was that it was an SGML-like markup language designed for representing textual documents, particularly academic ones. Thus, the early versions built up a bevy of elements for representing structured tabular data, abbreviations, block quotations, definition lists, ordered and unordered lists, and so forth. The ideal HTML document would be structurally sound, with each element meaning something that both a human and computer could understand and, presumably, format and put into a textbook.

However, since actual humans don't spend all their time writing academic papers and instead wanted to make layouts and colors and fonts, the HTML they put out wasn't beautifully structured - instead, tables went from representing tabular data to being used for layout, filled with little 1px-by-1px transparent GIFs and other semantically-meaningless crap.

Enter the Web Standards Project, which primarily advocated the use of cleaner (X)HTML and CSS. Among the goals of this advocacy was a desire to see cleaner, more meaningful markup again. This was phenomenally successful and, as browsers improved their support of CSS (grudgingly, in some cases), the miasma of nested tables, font tags, and other meaningless markup receded.

But then we get to today. Though modern HTML markup bears the trappings of "semantic" standards, the essence of it is hardly better than the old way (h/t David Leedy). The proliferation of framework-geared markup is a reflection of the fact that, even with the best of intentions, sticking to strictly meaningful markup is essentially impractical for an "app"-type web site (as opposed to a collection of documents). When you get into the HTML that various JavaScript tools generate at runtime, things just completely fly out the window. HTML5's new elements help a little, but don't solve the problem - is a dashboard page containing a grid of widgets really better represented by <article>s and <section>s than by <div>s? The equilibrium that "best practice" has reached now is that it's more or less okay to have a bunch of nonsense <div>s mucking up the works as long as you try to do a best match for actual content tags and throw in some nods to accessibility/inclusion.

So that brings me to my current views on how to structure the markup of web apps. I used to care a great deal, back when I wrote in PHP - since every tag on the page was something I wrote, either hard-coded or in HTML-generation code, I wanted to make sure that the result was pristine and would work properly even with CSS and JavaScript turned off. My early XPages attempts tried to continue with this, but it's not realistic. While you can (and should) still keep an eye on not spitting out too much HTML unnecessarily, there's no getting around the fact that your page is going to have more <div>s and <span>s than you'd like.

Over the past year, my view has shifted instead to the notion that the resultant HTML+CSS is basically compiled output - or, more accurately, is equivalent to the actual pixels and widgets pushed to the screen when you run a desktop app. In a desktop app, you write to the UI frameworks of the platform and then the frameworks do what they need to do to render the window - sometimes they'll use a different user-specified font, sometimes it'll be larger or smaller, sometimes it'll be high-contrast, etc. Similarly, my view of an XPage's HTML is that my job is to make the XSP code as clean and declarative as possible and then it's the server's job to decide how that's supposed to be represented in HTML. The ideal XPage wouldn't contain a single nod to the fact that it's going to eventually be output as HTML.

This point of view has come alongside (or because of) my getting religion on custom renderers. Now that my best-case development strategy revolves around renderers, it feels wildly inappropriate to include any references to HTML or CSS classes inside XSP markup (or even themeIds to indirectly reference Bootstrap layout concerns). All that stuff should be determined by the framework at runtime (by way of the renderers and component-generation code), and the XSP should focus on defining the grouping and ordering of the primary components of the page.

But the trouble is that all these abstractions - from pseudo-semantic HTML, through the CSS framework, through the XSP markup - are terribly leaky. No matter how clean I try to make my XSP, there's no getting around the fact that a well-crafted layout needs some knowledge of, say, Bootstrap's column-layout model, or that I want all the containers referencing Task properties to be red instead of the default color scheme. And no matter how clever the UI framework is with its CSS trickery, at some point you're going to have to put in some extra layers of <div>s to get the right sort of alignment (or, worse, include a set of empty <div>s just to get that hamburger icon). And who even knows how you're supposed to feel about abstraction once you start using JavaScript frameworks that generate the entire page - or go even further? Those tend to lead to clean data feeds, so that's good, but phew.

So it's all rules of thumb! The best tack I've found is to pick a layer of abstraction that makes sense with my mental model - standard XSP components + renderers in this case, but it applies to other areas - but to still know about the lower layers, because you're going to have to dive into them sooner or later. That's not, strictly speaking, practical - the XSP framework itself is built on so many other interlocking parts that it's effectively impossible to master every layer (XSP, HTML+CSS+JS, JSF, JSP's leftovers, Java servlets, OSGi, Java, the Domino API, Domino-the-platform, the OS itself, and so forth). But you can know enough generally, and taking the time to do so is crucial.

In short: the whole thing is a mess, so just try to keep your head clear and your code clean.

Quick Tip: facetName-less Callbacks in XPages

Wed Aug 13 19:49:59 EDT 2014

Tags: xpages

When you're setting up a Custom Control, you likely know by now that you can set up callback areas to add content to a specified place inside your CC content when it's rendered. They typically look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core">
	<h1>Content Above</h1>
	
	<div>
		<xp:callback facetName="content" id="callback1"/>
	</div>
	
	<h1>Content Below</h1>
</xp:view>

You then drop content in there like so in your XPage:

<xc:someControl>
	<xp:this.facets>
		<xp:text xp:key="content" value="foo"/>
	</xp:this.facets>
</xc:someControl>

That works fine. However, you can benefit greatly by learning that both properties of the callback are optional.

Leaving off the id isn't particularly interesting, but there's a small benefit: removing it means that the callback doesn't generate a <div/> element in the resultant HTML.

Leaving off the facetName, however, is very valuable indeed. When you do that, the callback becomes the target for any non-facet child controls of the CC, in JSF parlance. To wit:

<xc:someControl>
	<p>foo bar</p>
</xc:someControl>

Much cleaner. And you can continue to use other callbacks with names in the control, much like you normally would:

<xc:someControl>
	<xp:this.facets>
		<xp:text xp:key="header" value="Some header text"/>
	</xp:this.facets>
	
	<p>foo bar</p>
</xc:someControl>

The syntax ends up mirroring standard controls like <xp:repeat/>, where the primary content area has no special name, but there are still facets available like "header" and "footer". It also saves you a valuable indent level or two, which can go a very long way to keeping your code readable.


A footnote about my in-practice use of this:

The primary way I use this technique is my "layout" control. Like with most people's apps, that control contains the main page structure (which I used to do as applicationLayout, then switched to Bootstrap-friendly HTML, and then switched back to applicationLayout with custom renderers). Regardless of the implementation, the layouts always have a main content callback and then one or two "column" callbacks. The XSP for the actual control changes a bit per app, but the Design Definition (which you should use) is consistent:

<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core">

	<xp:table style="width:100.0%">
		<xp:tr>
			<xp:td colspan="3" style="background-color:rgb(225,225,225)">Header</xp:td>

		</xp:tr>
		<xp:tr>
			<xp:td style="width: 200px"><xp:callback facetName="LeftColumn" id="callback1"></xp:callback></xp:td>
			<xp:td>
				<p><xp:callback id="callback2"/></p>
			</xp:td>
			<xp:td style="width: 200px"><xp:callback facetName="RightColumn" id="callback3"></xp:callback></xp:td>
		</xp:tr>
	</xp:table>
</xp:view>

(Note: never write code that looks like that when it's not for a Design Definition)

With that, I get a nice, clean, performant table in the WYSIWYG editor with drop targets for each control. I don't use the WYSIWYG editor (and neither should you), but it helps to have that when I open the page initially and it cuts down on crashes. In any event, using it on a page is much like above:

<xc:layout navigationPath="/Home">
	<xp:this.facets>
		<xc:linksbar xp:key="LeftColumn" />
	</xp:this.facets>
	
	Stuff goes here
</xc:layout>

How I Learned to Stop Worrying and Love C Structs

Mon Aug 04 22:38:04 EDT 2014

Tags: c java

Since a large amount of on-site client time has put my framework work on hold for a bit, I figured I'd continue my dalliance in the world of raw item data.

Specifically, what I've been doing is filling out the collection of structs with Java wrappers, particularly in the "cd" subpackage. For someone like me who has only done bits and pieces of C over the years, this is a fruitful experience, particularly since I'm dealing with just the core concept of the Notes API data structures without the messy business of actually worrying about memory or crashing programs.

If you're not familiar with structs, what they are is effectively just the data portion of a class. They're like a blueprint of related pieces of data - ints, doubles, other structs, etc. - used both for creating new elements and for a contract for the type of data received from an API. The latter is what I'm dealing with: since the raw item data in DXL is presented as just a series of bytes, the struct documentation is vital in figuring out what to expect in which spot. Here is a representative example of a struct declaration from the Notes API, one which includes bonus concepts:

typedef struct{
	LSIG	Header;		/* Signature and Length */
	WORD 	FileExtLen;		/* Length of file extension */
	DWORD	FileDataSize;	/* Size (in bytes) of the file data */
	DWORD	SegCount;		/* Number of CDFILESEGMENT records expected to follow */
	DWORD	Flags;		/* Flags (currently unused) */
	DWORD	Reserved;		/* Reserved for future use */
	/*	Variable length string follows (not null terminated).
		This string is the file extension for the file. */
} CDFILEHEADER;

So... that's a thing, isn't it? It represents the start of a File Resource (or CSS file, Java class, XPage, etc.). I'll start from the top:

  1. LSIG Header: An "LSIG" is actually another struct, but a smaller one: its job is to let a processor (like my code) identify the following record, as well as providing the total size of the record. When processing the byte stream, my code looks to the first two bytes of each record to determine what to expect for the next block.
  2. WORD FileExtLen: To start with, "WORD" here is just an alias for an "unsigned short" - a 16-bit integer ranging from 0 to 65535 (64K). The name "word" itself comes from computer architecture, where it refers to a processor's natural unit of data. This field specifically denotes the number of bytes to expect after the record for the file extension of the file resource being defined.
    • The "unsigned" above refers to the way the numbers are stored in memory, which is to say a series of binary digits. By default, the highest-value bit is reserved for the "sign" - a 0 for positive and a 1 for negative. Normal "signed" numbers can store positive values only half the size of their unsigned counterparts, because going from "0111 1111" (+127) wraps around to "1000 0000" (-128). This is why the "half" values - 32K, 2G - are seen as commonly as their "full" counterparts. As to why negative numbers are represented with all those zeros, you'll either have to read up independently or just trust me for now.
  3. DWORD FileDataSize: A "DWORD" is double the size of a "WORD" (hence the "D", presumably): it's an unsigned 32-bit number, ranging from 0 to 4,294,967,295 (4G). This number represents the total size of the file attachment, and so also presumably represents the maximum size of a file resource in Domino.
  4. DWORD SegCount: This indicates the number of "segment" records expected to follow. Segment records are similar to this header we're looking at, but contain the actual file data, split across multiple chunks and across multiple items. That "multiple items" bit is due to the overall structure of rich-text items, and is why you'll often see rich text item names (like "Body" or "$FileData") repeated many times within a note.
  5. DWORD Flags: As the comment in the API indicates, this field is currently unused. However, it's representative of something that is used very commonly in other structures: a set of bits that isn't useful as a number, but instead has values set to correspond to traits of the entity in question. This is referred to as a bit field and is an efficient way to store a fixed number of flags. So if you had a four-bit flags field for a text item, the bits may indicate whether it's summary data, a names field, a readers field, and/or an authors field - for example "1100" for a field that's summary and names, but not readers or authors. That is, incidentally, basically how those flags are implemented, albeit with a larger bit field.
    • There is a secondary type of "flag" in Notes: the "$Flags" and "$FlagsExt" fields in design elements. Those flags are less efficient - 8 bits per flag instead of 1 - but are generally more extensible and somewhat conceptually easier.
    • These bit fields are what those oddball bitwise operators are for, by the way.
  6. DWORD Reserved: Many (most?) C API data structures contain at least one block of bits like this that is reserved for future use. These are so that IBM can add features without breaking all existing code: API users are expected to leave anything there intact and to create new structures with all zeros. You can see this in action in a couple places, such as the addition of rudimentary theme support to legacy design elements, which used up a byte out of the previously-reserved block.
  7. Variable data: This is the fun part. Many structures contain "variable" data following the "fixed" portion. Whereas the previous parts are a predictable size - a DWORD will always be four bytes - the parts following the block can be anywhere from 0 bytes to whatever is the max value of their referring entity. Fortunately, the "SIG" at the beginning of the record tells the API user the length ahead of time, so it's not required to read it all in when dealing with an API entity. Still, the code to read this can be complex and bug-prone, particularly when there are multiple variable parts packed together. In this case, though, it's relatively simple: we get the value of "FileExtLen" and read that many bytes into an array and convert that from LMBCS to a respectable Unicode string.
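To make the walkthrough above concrete, here's a hedged sketch of reading the fixed portion of this record in Java with a ByteBuffer. The class and method names are my own invention, it assumes the LSIG layout of a WORD signature followed by a DWORD length, and it cheats by treating the LMBCS extension as ASCII:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

public class CDFileHeaderReader {
	// Reads the file extension out of a CDFILEHEADER record's raw bytes.
	// Notes structures are little-endian; the & masks widen the unsigned
	// C types (WORD, DWORD) into Java ints/longs without sign damage.
	public static String readFileExt(byte[] recordData) {
		ByteBuffer buf = ByteBuffer.wrap(recordData).order(ByteOrder.LITTLE_ENDIAN);
		int signature = buf.getShort() & 0xFFFF;        // LSIG.Signature (WORD)
		long length = buf.getInt() & 0xFFFFFFFFL;       // LSIG.Length (DWORD)
		int fileExtLen = buf.getShort() & 0xFFFF;       // FileExtLen (WORD)
		long fileDataSize = buf.getInt() & 0xFFFFFFFFL; // FileDataSize (DWORD)
		long segCount = buf.getInt() & 0xFFFFFFFFL;     // SegCount (DWORD)
		long flags = buf.getInt() & 0xFFFFFFFFL;        // Flags (DWORD, unused)
		long reserved = buf.getInt() & 0xFFFFFFFFL;     // Reserved (DWORD)
		// Variable portion: FileExtLen bytes of file extension. Real code
		// would convert from LMBCS; ASCII suffices for plain extensions.
		byte[] ext = new byte[fileExtLen];
		buf.get(ext);
		return new String(ext, StandardCharsets.US_ASCII);
	}

	// The bit-field idea from the Flags discussion: a flag is "on" when
	// its bit is set, tested with a bitwise AND against a mask.
	public static boolean isFlagSet(long flags, long mask) {
		return (flags & mask) != 0;
	}
}
```

In real code you'd check the LSIG signature against the known CD record constants before trusting the rest of the layout, but the shape of the parsing is the same.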

Dealing with a stream of structs is... awkward at first, particularly when you're used to Java amenities like being able to just pour out and read back in serialized objects without even thinking twice about it. After a while, though, it gets easier, and you get an appreciation for a lower-level type of programming. And while you still may not be happy about Domino's 32K summary-data limit, you at least get an understanding for why it's there.

So if you're in a position to dive into this sort of thing once in a while, I recommend you do so. Though the value of what I'm writing specifically is mixed - I doubt the world needs, say, a Java wrapper for a CD record reflecting a DECS field association - the benefit to my brain is immense. Dealing with web and Java programming can cause you to become very disconnected from the fundamentals of programming, and something like this can bring you back to solid ground. Give it a shot!