Use Interfaces All The Time

Wed Jun 27 09:10:00 EDT 2012

Tags: java

In addition to being useful concepts generally, Java interfaces can be used literally in code as a way of keeping your code as clean and generic as possible. While you can't instantiate an interface directly, you CAN use interface names as the declared types of variables and parameters. For example, this is a legal way to create a Vector of Objects:

List<Object> someList = new Vector<Object>();

"That's great and all," you say, "but why bother doing that?" And indeed, in a simple case like a mostly-procedural Java agent, you won't get much benefit from doing that. But imagine a method like this:

public List<String> retrieveValuesFromDatabase();

Since all that method guarantees is that it's going to return some sort of List, it has a lot of discretion as to the form that list will take. Maybe the programmer starts out with Vector but then decides that ArrayList is better for the purpose - they're free to change it without troubling the user of the method one bit. Maybe they want to get a little crazier and make their own custom class that implements List and provides a live view of a database - again, that can happen with no change to the user's code.
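
To make it concrete, here's a minimal sketch of what such a method might look like behind the scenes - the body is made up, but the signature is the part that never changes:

public List<String> retrieveValuesFromDatabase() {
	// Today it's an ArrayList; tomorrow it could be a Vector or a custom
	// live-view List implementation - callers never know the difference
	List<String> values = new ArrayList<String>();
	values.add("some value fetched from the database");
	return values;
}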

While the primary benefit comes when large libraries use interfaces in their APIs, even moderately-sized structured programs written by one programmer can benefit. Take a (contrived and unsafe) class like this:

import java.util.ArrayList;
import java.util.List;

public class ExampleClass {
	private List<String> cache;

	public ExampleClass() {
		this.cache = new ArrayList<String>();
	}
	public List<String> getCache() {
		return this.cache;
	}
	public void setCache(List<String> cache) {
		this.cache = cache;
	}
}

The same benefits mentioned above apply here: because you're referring to your instance variable just as a List, you're free to change the actual implementing class by changing only one line. The benefits are only compounded when you add more and more-complicated methods to your class.

Additionally, using interfaces when they're not strictly necessary can help avoid bad habits and traps. Our go-to example, the Vector class, pre-dates the Java Collections Framework and was only retrofitted to the List interface when the framework came out in Java 1.2. It shows its age in a couple of ways, not the least of which is the inclusion of a couple of methods that are not part of List, such as elementAt(...) and addElement(...). These are best replaced entirely with get(...) and add(...), which are common to all Lists... and are easier on the eyes, to boot.
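
For comparison, here are the legacy calls next to their Collections Framework equivalents (a trivial made-up snippet):

Vector<String> legacy = new Vector<String>();
legacy.addElement("value");            // Vector-only holdover
String first = legacy.elementAt(0);    // Vector-only holdover

List<String> modern = new Vector<String>();
modern.add("value");                   // defined by List - works with any implementation
String second = modern.get(0);         // defined by List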

It's easy to run into problems with APIs that don't use interfaces extensively, like, say, the Domino API*. Presumably because the "new" lotus.domino classes were implemented against a pre-1.2 JRE, they use Vector extensively. Most of the time, this is fine - anything expecting a List will happily accept a Vector, but the reverse doesn't hold. You don't have to go far to create a problem. Multi-value controls in XPages, when bound to a Java object, prefer the use of ArrayList (but work great when pointed to an existing List of any stripe). As a result, this code will throw an exception:

<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core">
	<xp:this.dataContexts>
		<xp:dataContext var="newRegData" value="${javascript: new java.util.HashMap()}"/>
	</xp:this.dataContexts>
	
	<xp:checkBoxGroup value="#{newRegData.multi}">
		<xp:selectItems value="#{javascript:['one', 'two']}"/>
	</xp:checkBoxGroup>
	
	<xp:button id="createDoc" value="Create Doc">
		<xp:eventHandler event="onclick" submit="true"><xp:this.action><![CDATA[#{javascript:
			var doc = database.createDocument()
			doc.replaceItemValue("MultiValue", newRegData["multi"])
			doc.save()
		}]]></xp:this.action></xp:eventHandler>
	</xp:button>
</xp:view>

The problem is that, while ArrayList is still a List, replaceItemValue(...) doesn't give two hoots about interfaces, forcing you to do something like this:

var vec = new java.util.Vector()
vec.addAll(newRegData["multi"])
doc.replaceItemValue("MultiValue", vec)

Not pretty, and that's one that could be fixed without changing the outward API.
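
For instance, nothing would stop a small utility method from papering over the conversion in one place - this DocUtil helper is hypothetical, not part of any Domino API:

import java.util.List;
import java.util.Vector;
import lotus.domino.Document;
import lotus.domino.NotesException;

public class DocUtil {
	// Accepts any List and quietly converts it to the Vector that
	// replaceItemValue(...) insists on
	public static void replaceItemValue(Document doc, String itemName, List<?> values) throws NotesException {
		doc.replaceItemValue(itemName, new Vector<Object>(values));
	}
}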

The upshot of all this is that you can derive a lot of benefit from using interfaces extensively, even when they're not strictly necessary. I even have a habit of writing lines like "List<Object> values = (List<Object>)doc.getItemValue("SomeItem");", because I am a madman. You may not go as far as I do, but it's still usually a good idea to follow the policy of using interfaces whenever the methods you want are available that way.

* Technically, the lotus.domino.* "classes" are all interfaces and the implementing classes vary based on the type of connection you're using (e.g. local vs. CORBA). As usual, Domino provides both good and bad examples simultaneously.

Putting java.util to Use

Fri Jun 22 11:46:00 EDT 2012

Tags: java

I'm in the process of figuring out a good way to combine data from several sources into a single activity stream, which means that they should be categorized by date and then sorted by time. While that's a piece of cake with a single view, it gets hairy when you have several views, or perhaps several different types of sources entirely. Fortunately, abstract data types are here to help.

You're already using Lists and Maps, right? For this, I decided to use Maps and one of my personal favorites, the Set. If you're not familiar with them, Sets are like Lists, but contain at most one of each element and don't (normally) guarantee any specific order - DocumentCollections are conceptually a type of Set (albeit not actually implementing the interface).

I created the categories by using a Map with the Date as the key and Sets of entries as the value. That would work well enough using HashMaps and HashSets, but they would require manual sorting in the XPage to display them in the right order. Fortunately, Java includes some more-specific types for this purpose: SortedMap and SortedSet. These are used the same way as normal Maps and Sets, but automatically maintain an order (based on either your own Comparator or the "natural" ordering based on the objects' compareTo(...) methods). Better still, the specific TreeMap and TreeSet implementation classes have methods to get at the keys and values, respectively, in descending order.

Once I had my collection objects picked out, all I had to do was start filling them in. I used stock Date objects for the Map's keys and wrote a compareTo(...) method for the entries I'm keeping in the Set. Then, on the containing activity stream class, I just had to write a "serializer" method to write out the current state of the objects into a List for access.
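
Here's a minimal sketch of that structure, with StreamEntry standing in as a hypothetical version of my actual entry class:

import java.util.*;

public class ActivityStream {
	public static class StreamEntry implements Comparable<StreamEntry> {
		private final Date time;
		private final String title;

		public StreamEntry(Date time, String title) { this.time = time; this.title = title; }

		// "Natural" ordering by time; a real implementation would break ties
		// so that same-time entries aren't silently dropped by the Set
		public int compareTo(StreamEntry other) { return this.time.compareTo(other.time); }
	}

	// Dates map to that day's entries; both levels keep themselves sorted
	private final TreeMap<Date, SortedSet<StreamEntry>> stream = new TreeMap<Date, SortedSet<StreamEntry>>();

	public void add(Date day, StreamEntry entry) {
		if(!stream.containsKey(day)) {
			stream.put(day, new TreeSet<StreamEntry>());
		}
		stream.get(day).add(entry);
	}

	// The "serializer": flatten to a List, newest day first via descendingMap()
	public List<StreamEntry> toList() {
		List<StreamEntry> result = new ArrayList<StreamEntry>();
		for(SortedSet<StreamEntry> dayEntries : stream.descendingMap().values()) {
			result.addAll(dayEntries);
		}
		return result;
	}
}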

While I may change around the way I do this (I may end up putting the category headers inline so I just use one SortedSet for performance), it provides a pretty good example of when you can use some of Java's built-in classes to do some of the grunt work for you.

Re-using Classic Domino Outlines, Rough Draft

Tue Jun 19 17:34:00 EDT 2012

Tags: xpages

My current big project at work involves, among many other things, viewing individual project databases through a centralized "portal". These all use the same base template, but can be individually customized, usually with new or changed views. Additionally, I'm going to be granting web users varying roles that correspond to existing or planned access roles in the target database. The result is that I'm spending a good amount of time trying to dynamically adapt some classic Notes elements to show up on XPages (hence my DynamicViewCustomizer, which I've since updated and should post).

In addition to views, I wanted to use the existing Outline design elements in the databases, since they do a fine job and there's no reason to re-invent the wheel if it's not necessary. I looked around a bit and didn't see a way to readily re-use them (though I can't rule out the possibility that such a capability exists somewhere), so I decided I'd roll up my sleeves and write my own adapter.

The route I took was to use the xe:beanTreeNode ExtLib control inside the xe:navigator used in most OneUI apps. Though what I conceptually wanted to do was to feed it a tree structure made up of JavaScript arrays and nodes, that's not how it works: you pass it the name of either an already-created managed bean or the full class name of a bean-compatible (read: zero-argument constructor) class implementing ITreeNode. Presumably, the right way to do this would be to create a managed bean and pass in configuration parameters via managed-property values. I didn't do that for the time being, though, instead baking a bit of environmental awareness into the Java code - so sue me. I also print stack traces to the server console, which you shouldn't do.

In any event, what my code does is open the consistently-named outline design element (in this case, "TempOut" for "template outline"), read through its entries, and recursively build the node tree. There are a couple environmental assumptions in there, primarily in the constructor, in order to work with the surrounding app, but the core of it should be pretty transportable. One big note is that I haven't gotten around to adding in the "special" outline entries to show the remaining views and folders, nor do I handle "action" entries. Still, it may be a useful reference for using bean tree nodes.

package mcl;

import com.ibm.xsp.extlib.tree.ITreeNode;
import com.ibm.xsp.extlib.tree.impl.*;
import lotus.domino.*;

import javax.faces.context.FacesContext;
import java.net.URLEncoder;

public class ProjectOutline extends BasicNodeList {
	private static final long serialVersionUID = 2035066391725854740L;
	
	private String contextQueryString;
	private String dbURLPrefix;
	private String viewName;

	public ProjectOutline() throws NotesException {
		try {
			ProjectInfo projectDatabase = (ProjectInfo)JSFUtil.getVariableValue("projectDatabase");
			if(projectDatabase.isValidDB()) {
				Outline tempOut = projectDatabase.getOutline("TempOut");
			
				dbURLPrefix = "/__" + projectDatabase.getReplicaID() + ".nsf/";
				contextQueryString = String.valueOf(JSFUtil.getVariableValue("contextQueryString"));
				viewName = (String)FacesContext.getCurrentInstance().getExternalContext().getRequestParameterMap().get("viewName");
				
				OutlineEntry entry = tempOut.getFirst();
				while(entry != null) {
					entry = processNode(projectDatabase, tempOut, entry, null);
				}
			} else {
				addChild(new BasicLeafTreeNode());
			}
		} catch(NullPointerException npe) {
			addChild(new BasicLeafTreeNode());
		} catch(Exception e) {
			e.printStackTrace();
		}
	}
	
	
	private OutlineEntry processNode(ProjectInfo projectDatabase, Outline outline, OutlineEntry entry, BasicContainerTreeNode root) throws NotesException {
		int level = entry.getLevel();
		
		switch(entry.getType()) {
		case OutlineEntry.OUTLINE_OTHER_UNKNOWN_TYPE:
			// Must be a container
			
			BasicContainerTreeNode containerNode = createSectionNode(entry);
			if(root == null) {
				addChild(containerNode);
			} else {
				root.addChild(containerNode);
			}
			// Look for its children
			OutlineEntry nextEntry = outline.getNext(entry);
			while(nextEntry != null && nextEntry.getLevel() > level) {
				nextEntry = processNode(projectDatabase, outline, nextEntry, containerNode);
			}
			
			return nextEntry;
		case OutlineEntry.OUTLINE_TYPE_NAMEDELEMENT:
			View view = projectDatabase.getView(entry.getNamedElement());
			if(view != null) {
				ITreeNode leafNode = createViewNode(entry);
				if(root == null) {
					addChild(leafNode);
				} else {
					root.addChild(leafNode);
				}
				view.recycle();
			}
			
			return outline.getNext(entry);
		}
		
		return outline.getNext(entry);
	}
	
	private BasicContainerTreeNode createSectionNode(OutlineEntry entry) throws NotesException {
		BasicContainerTreeNode node = new BasicContainerTreeNode();
		node.setLabel(entry.getLabel());
		node.setImage(dbURLPrefix + urlEncode(entry.getImagesText()) + "?Open&ImgIndex=1");
		
		return node;
	}
	private ITreeNode createViewNode(OutlineEntry entry) throws NotesException {
		BasicLeafTreeNode node = new BasicLeafTreeNode();
		node.setLabel(entry.getLabel());
		node.setImage(dbURLPrefix + urlEncode(entry.getImagesText()) + "?Open&ImgIndex=1");
		node.setHref("/Project_View.xsp?" + contextQueryString + "&viewName=" + urlEncode(entry.getNamedElement()));
		
		node.setSelected(entry.getNamedElement().equals(viewName));
		node.setRendered(resolveHideFormula(entry));
		
		return node;
	}

	private String urlEncode(String value) {
		try {
			return URLEncoder.encode(value, "UTF-8");
		} catch(Exception e) { return value; }
	}
	private boolean resolveHideFormula(OutlineEntry entry) throws NotesException {
		String hideFormula = entry.getHideFormula();
		if(hideFormula != null && hideFormula.length() > 0) {
			Session session = JSFUtil.getSession();
			Database database = entry.getParent().getParentDatabase();
			Document contextDoc = database.createDocument();
			// @UserAccess gave me trouble, so I just did a simple string replacement, since I know that's the only way I used it
			hideFormula = hideFormula.replace("@UserAccess(@DbName; [AccessLevel])", "\"" + String.valueOf(database.getCurrentAccessLevel()) + "\"");
			double result = (Double)session.evaluate(hideFormula, contextDoc).get(0);
			contextDoc.recycle();
			return result != 1;
		}
		return true;
	}
}

Reverse-Engineering File Formats for Fun

Mon Jun 11 09:00:00 EDT 2012

Tags: java

Well, okay, it wasn't for fun; it was for work. About a year and a half ago, I had occasion to parse the contents of shared object files from a Flash Media Server ("*.fso" files), and I figured it might be interesting to go back over the kind of things I had to do to accomplish that.

The first thing I did was to search around to see if anyone else had solved the same problem. However, while there are plenty of parsers for "Flash shared objects", not the least of which is in Flash itself, it seemed that, unless I was doing it wrong, the format those parsers handle is different from the one used by servers, so I couldn't use any of them. So I was left with a folder full of binary files and a task to accomplish. Sounds like a job for programming!

Fortunately, I knew the shape of the data inside the files: they were essentially arrays of maps. I could see the keys and some of the values in there, so the files were binary but not compressed, which gave me hope. However, since the data was variable-length, I couldn't just find the record delimiter and use offsets - I had to go through and find the various byte-value delimiters for each aspect and write a proper parser.

I opened up a couple of the files in vi sessions to compare them and, using the handy ga command, checked the byte values of the non-ASCII characters. I found that the file always starts with 0 then 3, and there appeared to be a "general" delimiter of 0, 0, 0 to break up header sections of the file and each record. Then I saw a number that was different between each file - ah-hah, the record count!

After another delimiter, I found a number followed by the name of the file (for some reason). It turned out that the number corresponded to the length of the file name, indicating that the format uses Pascal-style length-prefixed Strings. Good to know! After the name came a second instance of the record count for some reason, leaving the rest of the file to the actual records.
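
Reading one of those length-prefixed strings is simple once you know the convention. Here's a rough sketch, assuming a single-byte length prefix and the read-the-file-as-an-int-array approach I mention below:

// Read a Pascal-style length-prefixed string starting at the given offset.
// The single-byte prefix is an assumption for this sketch; the real format
// may well use a wider length field.
private String readPrefixedString(int[] data, int offset) {
	int length = data[offset];
	StringBuilder result = new StringBuilder();
	for(int i = 0; i < length; i++) {
		result.append((char)data[offset + 1 + i]);
	}
	return result.toString();
}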

Conceptually, the records are easy: there should be about $record_count instances of some sort of record delimiter, with the data packed in as, essentially, key-value pairs. That's ALMOST how it is, but it got a bit weird: all records ended with the same "general" delimiter used in the header, but they ALSO had their own two-byte record delimiter. That is, except when they used a three-byte variant for some reason. And there are a couple unknown values in there - they were different in each record and I'm sure they serve some use, but I couldn't figure out what it was. Once I figured out the hairy bits, though, the data was pretty much as I expected: there was a record index (for some reason), then the expected key-value pairs. Most of them were strings, but not all - there was an "urgent" key in some entries that had no corresponding value (though now that I look back at the code, I suspect that it was a boolean value of \1 to indicate "true").

The end result of all this was by far the most C-like Java I've ever written:

FSOParser.java

I felt pretty good after finishing that. High-level object trees are fun and all, but it's good to, from time to time, get your hands dirty with lower-level stuff like reading files as integer arrays.

PS: I don't know why my FSORecord class didn't just extend HashMap. I blame the brain haze from staring at binary files for too long. Conversely, I think I did have a reason to use arrays of int instead of byte: Java's byte is always signed, while the data in the file is unsigned.
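
That signed-ness problem is real: Java's byte runs from -128 to 127, so a raw 0xFF byte reads back as -1. Here's a sketch of the usual masking fix, which is presumably what the int arrays were sidestepping (FileUtil and readUnsignedBytes are invented names):

import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class FileUtil {
	// Read a whole file into an int array of unsigned byte values
	public static int[] readUnsignedBytes(File file) throws IOException {
		byte[] raw = new byte[(int)file.length()];
		DataInputStream in = new DataInputStream(new FileInputStream(file));
		try {
			in.readFully(raw);
		} finally {
			in.close();
		}
		int[] result = new int[raw.length];
		for(int i = 0; i < raw.length; i++) {
			result[i] = raw[i] & 0xFF; // mask off sign extension: (byte)-1 becomes 255
		}
		return result;
	}
}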

Ruby Builder First Draft: Intriguing Failure

Sat Jun 09 09:56:00 EDT 2012

Tags: ruby

For a while now, I've been fiddling with trying to make an Eclipse builder that smoothly translates Ruby files into Java classes and adds them to the project to be compiled, the idea being that, rather than only using Ruby inline in XPages or via "script libraries", you'd be able to write all of your supporting Java classes in it as well.

I'd been giving it about an hour or so of frustration every couple of weeks, but yesterday I decided to hunker down and make it work. After patching the standard "jrubyc" code to work without a real filesystem, wrestling with Ant builder class paths and runtime environments, and with a great deal of assistance from the handy XPages Log File Reader template, I got it working! Now you can write Ruby files in a ruby-src folder and build the project; the builder reads those files and plants "compiled" Java versions into a folder on the build path, which then causes Eclipse to build them into proper classes, usable on an XPage. Whoo!

However, they're not really usable yet. The way the "compiler" works is that it generates a class that extends RubyObject and, in a static block, loads up the global Ruby runtime and pre-compiles a giant string of Ruby. Then, each method you exposed to Java in the Ruby code calls the equivalent method in the Ruby runtime. It makes sense, but leads to some specific problems. For one, the global Ruby runtime's classloader doesn't know about the classes available in the XPages environment, such as the javax.faces.* classes and any of your own NSF-hosted ones. Moreover, because the object extends RubyObject and Java doesn't do multiple inheritance, it can't extend any OTHER Java classes. The internal Ruby class can extend whatever it wants and works well, but that doesn't help when you want to use the generated classes in Java code. It can implement Interfaces, but then you have to actually have Java-facing versions of every method, which can get hairy.

I have some ideas, though. The "jrubyc" compiler is handy, but it doesn't work any special magic - it just reads through the Ruby source for key elements like java_implements and java_signature and uses those to build a wrapper class that just executes a big string of Ruby code. There's nothing there I couldn't do myself, hooking into the existing parser where necessary and otherwise writing the rest myself. That way, I could generate a standard Java class on the outside but make it create a proxy object internally that's the actual RubyObject that handles all of the method calls. It could be a bit more difficult than I'm thinking, but it'd probably be possible, and it'd let me use the runtime classloader from FacesContext and extend classes properly at will.
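
Very roughly, I'm picturing something like this - the Greeter class and all the names here are invented, but the JRuby embedding calls are real:

import org.jruby.Ruby;
import org.jruby.javasupport.JavaEmbedUtils;
import org.jruby.runtime.builtin.IRubyObject;

// A plain Java class on the outside, delegating to an internal Ruby object.
// Because the outer class is ordinary Java, it's free to extend whatever it wants.
public class RubyBackedGreeter {
	private static final Ruby runtime = Ruby.newInstance();
	private final IRubyObject delegate;

	public RubyBackedGreeter() {
		runtime.evalScriptlet("class Greeter; def greet(name); \"Hello, #{name}\"; end; end");
		this.delegate = runtime.evalScriptlet("Greeter.new");
	}

	public String greet(String name) {
		return (String)JavaEmbedUtils.invokeMethod(runtime, delegate, "greet", new Object[] { name }, String.class);
	}
}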

XPages MVC: Experiment II, Part 4

Mon Jun 04 19:57:00 EDT 2012

Tags: xpages mvc
  1. XPages MVC: Experiment I
  2. XPages MVC: Experiment II, Part 1
  3. XPages MVC: Experiment II, Part 2
  4. XPages MVC: Experiment II, Part 3
  5. XPages MVC: Experiment II, Part 4

To finish up my series on the infrastructure of my guild forums app, I'd like to mention a couple of the down sides I see with its current implementation, which I'd generally want to fix or avoid if re-implementing it today.

Roll-Your-Own

One of the strengths of this kind of MVC setup is that it works to separate the front-end code from the data source. It would be (relatively) easy for me to replace the model and collection classes with versions that use a SQL database or non-Notes document storage should I so choose. That's a double-edged sword, though: because I'm not using any of the built-in data sources and controls with Domino knowledge, I had to do everything myself. This means a loss of both some nice UI features - like the rich text editor's ability to upload images inline - and the XPage data sources' persistence and caching features.

The collection code deals with View and ViewEntryCollection classes directly, but they can't be serialized, so I had to write my own methods to detect when the object is no longer valid (say, when doing a partial refresh) and re-fetch the collection. This was good in the sense that I learned more about what is and is not efficient in Domino. For example, getNthEntry(...) on a ViewEntryCollection grabbed via getAllEntriesByKey(...) is fast. Conversely, while retrieving data in a view is usually significantly faster than getting the same data from a document, there's a point where the view index size is large enough that, provided you're fetching only a few documents at a time, it's better to use the document. With a lot of work (and a LOT of collection caching), I ended up with something that's quite fast... but since I wrote all the code myself as part of a side project, it was also pretty bug-prone for the first couple weeks after deployment.
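
The detection itself boils down to a pattern like this - a simplified sketch with invented names, not my actual collection code:

import lotus.domino.*;

// Domino handles can't survive serialization, so probe the cached
// collection and rebuild it whenever the probe fails
public class TopicCollection {
	private transient ViewEntryCollection entries;
	private final String categoryKey; // invented field for the sketch

	public TopicCollection(String categoryKey) {
		this.categoryKey = categoryKey;
	}

	private ViewEntryCollection getEntries(Database database) throws NotesException {
		if(entries != null) {
			try {
				entries.getCount(); // cheap probe; throws if the handle went stale
			} catch(NotesException stale) {
				entries = null;
			}
		}
		if(entries == null) {
			View topics = database.getView("Topics"); // invented view name
			entries = topics.getAllEntriesByKey(categoryKey, true);
		}
		return entries;
	}
}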

Maybe I'd be able to find ways to piggyback more on the built-in functionality if I re-did it, but, as it stands, it's kind of hairy.

Inconsistent Separation

This isn't a TERRIBLE problem, but I'm kind of annoyed with some of the inconsistent choices I made about where DB-specific code goes. For example, collection managers have very little Domino-specific code... except when they need to know about sorting and searching. Similarly, almost all of the code in the model objects deals with pure Java objects and the clean collections API... except the save() methods, which are big blobs of Domino API work. That makes some sense, but the model classes don't handle creation from the database - that's in the collection classes. Like I said, it's not the end of the world, and it's consistent within its own madness, but there are some weird aspects and internal leaks I wouldn't mind cleaning up.

Lack of Data Sources

This is sort of the flip side to my first problem. All of my collection managers are just objects and the collections are just Lists. This is good in the sense that these things work well in Server JavaScript and with all of the built-in UI elements, but it'd be really nice to write my own custom data sources so I can explicitly declare what data I want on the page - using a xp:dataContext or bit of JavaScript in a value property just feels "dirty", like I'm not embracing it completely. However, this problem isn't as much an architectural one as it is a lack of education - I haven't bothered to learn to write my own data sources yet (even though I expect it's more or less straightforward, for Java), so I could remedy that easily enough.

No Proper Controller

This is just another manifestation of the root cause that has me looking into all this MVC stuff to begin with. Even though my collections and models are better than dealing with raw Domino objects, there's still too much of a tie between the UI and the back-end representation, as well as the requisite dependence on the Domino HTTP stack to handle routing requests. Of course, I'm still working on the correct solution to this.

 

Overall, my structure as written has been serving me well. New data elements are pretty easy to set up - I wouldn't mind not having to write three classes per, but hey, it's Java - and working with them is a breeze. Though it took a while to get everything working, now that it is, I can make tons of UI changes without worrying much about the actual data representation. I can change the way data is stored or add on-load or on-save computation beyond the capabilities of Formula language without changing anything in the XPages themselves. So in those senses, my forum back-end code is a huge step up from using xp:dominoDocuments directly, but it still doesn't feel completely "right".