Showing posts for tag "sntt"

Quick-Tip Thursday: Avoid Future Base64 Trouble In Java

Thu Nov 14 09:35:21 EST 2019

Tags: java sntt

TL;DR: Don't use com.ibm.misc.BASE64Encoder or com.ibm.misc.BASE64Decoder.

If you've been doing Java development for a while, you've likely run into a situation where you need to Base64-encode or -decode something. It's used commonly in MIME documents, data URIs, and various other areas typically related to data transfer.

Unfortunately, using Base64 in Java was awkward for a long time, with no natural choice in the core JDK until Java 8. The trouble over the years, though, is that previous releases did have choices, and it became somewhat common in the community to use sun.misc.BASE64Encoder for this need. The problem with that is that sun.* packages are officially not part of the JDK and can't be relied upon to exist in every Java implementation. However, a combination of lax enforcement of this in the tooling and the fact that those classes are in practice pretty universal has let them creep into Java code.

On the Domino side, we have a second problem: com.ibm.misc.BASE64Encoder.

Sidebar: The Different Flavors of Java

For the most part, setting aside Android, it's safe to think of "Java" as a monolithic entity, where having a JVM of a given version number on one system will be equivalent to the same on another. There are subtle differences, though, and several vendors have long maintained their own variants of Sun/Oracle's JVM (which is named HotSpot).

IBM has maintained one such variant, called J9, and infused all of their Java-using products, Domino included, with it. J9 has since been open-sourced and donated to Eclipse and, renamed to OpenJ9, has carved out a niche for itself by being particularly lean and speedy to spin up.

The Snag We Hit With J9

For the most part, the differences in JVM flavors don't matter to developers, but IBM played a bit of a trick on us by making duplicates of some sun.* classes under the com.ibm.* package. Unlike sun.*, which at least has a little community knowledge of being internal-only (and also just looks odd compared to standard classes), com.ibm.* has no such connotation, and there's nothing in Designer or the classes themselves to indicate that com.ibm.misc.BASE64Encoder (internal and non-portable) is any different from, say, com.ibm.commons.util.io.base64.Base64 (portable via IBM Commons).

So, over the years, use of that class has crept into XPages and Java agent code - it does the job and it's always been safe to assume that Domino-the-IBM-product would use J9-the-IBM-JVM. Domino isn't an IBM product anymore, however, and, to add to potential future trouble, even OpenJ9 builds from AdoptOpenJDK don't come with those com.ibm.misc.* classes.

The upshot is that, if your Java code were to run on a JVM other than a fully-IBM-style one (whether intentionally or otherwise), you'd be liable to run into trouble with code that casually used those JVM-internal classes.

The Options

If you're running a version of Domino using Java 8 (9.0.1FP8+), which you'd darn well better be at this point, you're in luck: the built-in java.util.Base64 class has a nice API and is guaranteed to be present on every JVM. This is your best bet, since it doesn't require any other dependencies beyond the core JDK and it has some nice features like variant encoders for MIME- and URL-safe use.
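
To illustrate, here's a minimal sketch of that API (the data and variable names here are just examples):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

byte[] data = "hello".getBytes(StandardCharsets.UTF_8);

// Standard encoding and decoding
String encoded = Base64.getEncoder().encodeToString(data);
byte[] decoded = Base64.getDecoder().decode(encoded);

// Variant encoders for specific uses
String mime = Base64.getMimeEncoder().encodeToString(data); // wraps long output lines, MIME-style
String url = Base64.getUrlEncoder().encodeToString(data);   // uses '-' and '_' instead of '+' and '/'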

If you're targeting a version of Domino that still uses Java 6 (don't), you do have one other safe-enough option that's available in both XPages and Java agents: javax.xml.bind.DatatypeConverter. This class is technically part of the JAXB specification but is included in JVMs from version 6 through version 10. It's a less clean choice than java.util.Base64, both because the API isn't as nice and because using it on Java 11+ will require bringing in an external dependency. Still, if you're stuck on an old release, it'll at least get you through any potential rough patches in the near future.
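
As a sketch of that route (again, the data is just an example):

import javax.xml.bind.DatatypeConverter;

byte[] data = "hello".getBytes(); // example data; specify a charset explicitly in real code
String encoded = DatatypeConverter.printBase64Binary(data);
byte[] decoded = DatatypeConverter.parseBase64Binary(encoded);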

Finally, in XPages, you have the aforementioned com.ibm.commons.util.io.base64.Base64 class, which will continue to be present due to being included in a base library of the XPages stack, but which is rendered obsolete by java.util.Base64.

So I recommend taking some time to look through any running Java code you have for this potential hiccup and making the switch. It's admittedly a little awkward to do, but it's still a good idea.

SNTT(uesday): Stepping Up My Tycho Game

Tue Jan 15 11:21:45 EST 2019

Tags: sntt tycho

I'm always on the lookout for ways to improve my projects' build process: getting more convenient results, cutting down on IDE/compiler complaints, or generally reducing the amount of manual work.

In the last couple weeks, I've figured out two changes to make that clean up my setup nicely: better source bundles and easier update sites.

Source Bundles

In OSGi parlance, a "source bundle" is a companion bundle/plugin for a normal bundle that contains the source code associated with it - for example, org.openntf.domino is paired with org.openntf.domino.source. With a bundle like this present, an IDE (Designer included) can pick up on the presence of the source code and use it for Javadoc and showing the original source of a class. It's extraordinarily convenient, rather than having to reference the source online or in another project (or not at all).

For a while, I've configured my Tycho projects to automatically generate these source bundles during build, and then I have ".source" features that reference them, which are then included in the final update site. This works very well, but it leaves the nagging problem that Eclipse complains about not being able to find the auto-vivified source bundles, and it also requires either putting the source bundles in the main features (which is a bit inefficient in e.g. a server deployment) or maintaining a separate ".source" feature.

It turns out that the answer has been in Tycho all along: instead of just generating source bundles, you can tell it to generate entire source features on the fly. You can do this by using the aptly-named tycho-source-feature-plugin:

<plugin>
	<!-- Generate a companion ".source" feature for each built feature during packaging -->
	<groupId>org.eclipse.tycho.extras</groupId>
	<artifactId>tycho-source-feature-plugin</artifactId>
	<version>${tycho-version}</version>
	<executions>
		<execution>
			<id>source-feature</id>
			<phase>package</phase>
			<goals>
				<goal>source-feature</goal>
			</goals>
		</execution>
	</executions>
	<configuration>
		<!-- Don't embed the binary feature itself inside the generated source feature -->
		<includeBinaryFeature>false</includeBinaryFeature>
	</configuration>
</plugin>

With this, the build will auto-create the features as it goes, including pulling in the source of any referenced third-party bundles, and then you can include them in the final update site. For example, if the feature you're building is com.example.foo.feature, you can include com.example.foo.feature.source in your output.

Eclipse Repositories

Historically, Domino-targeted update sites have been built with the eclipse-update-site project type, which takes a site.xml and turns it into the final update site. This works well enough, but it has a couple problems. For one, it's deprecated and ostensibly slated for removal down the line, and it's best not to rely on anything like that. But even when it works, it's fiddly: if you want to, for example, bring in a third-party feature, you have to explicitly specify the version of the feature you're bringing in, rather than letting the build environment pick up on what it is. This can turn into a drag over time, and it's always felt like unnecessary maintenance.
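
For example, a site.xml feature entry has to pin the exact version - something along these lines, with hypothetical IDs and versions:

<feature url="features/com.example.foo.feature_1.2.3.jar" id="com.example.foo.feature" version="1.2.3">
	<category name="Example"/>
</feature>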

The immediate replacement for eclipse-update-site is eclipse-repository, which is very similar (you can "convert" by just changing the project type and renaming site.xml to category.xml) and solves the second problem. In a category.xml file, you can specify just the feature ID, leaving the version out or specified as 0.0.0, and it'll figure it out during the build.
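
As a minimal sketch (with the same hypothetical IDs, and including the generated source feature from earlier):

<site>
	<feature id="com.example.foo.feature" version="0.0.0">
		<category name="Example"/>
	</feature>
	<feature id="com.example.foo.feature.source" version="0.0.0">
		<category name="Example"/>
	</feature>
	<category-def name="Example" label="Example Features"/>
</site>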

However, this has a minor downside: though Designer can deal with these repositories without issue, the NSF Update Site template doesn't know about the generated artifacts.jar and content.jar files. You can use "Import Features", but that loses the feature categories, which are very useful when maintaining a large update site.

Fortunately, the site.xml format is extremely basic, so I created a Maven plugin a while ago to auto-generate one of these files. I improved it yesterday to pick up on the categories specified in the original category.xml file. This let me tweak the eclipse-repository project to shim in this generation before the final packaging:

<build>
	<plugins>
		<plugin>
			<!-- Generate a site.xml (with categories) from category.xml so the
				NSF Update Site template can import the repository -->
			<groupId>org.darwino</groupId>
			<artifactId>p2sitexml-maven-plugin</artifactId>
			<version>1.1.0</version>
			<executions>
				<execution>
					<id>generate-sitexml</id>
					<goals>
						<goal>generate-site-xml</goal>
					</goals>
					<phase>package</phase>
				</execution>
			</executions>
		</plugin>
		<plugin>
			<!-- Archive the repository during "package", after the site.xml generation above -->
			<groupId>org.eclipse.tycho</groupId>
			<artifactId>tycho-p2-repository-plugin</artifactId>
			<executions>
				<execution>
					<id>archive-repository</id>
					<goals>
						<goal>archive-repository</goal>
					</goals>
					<phase>package</phase>
				</execution>
			</executions>
		</plugin>
	</plugins>
</build>

Now it's sort of a "best of both worlds" deal: I can use the non-deprecated form of the repository and its improved features, while still using the stock NSF Update Site.

This Maven plugin is in OpenNTF's Maven repository, so you can add it in by adding the repo to your root project's pom:

<pluginRepositories>
	<pluginRepository>
		<id>artifactory.openntf.org</id>
		<name>artifactory.openntf.org</name>
		<url>https://artifactory.openntf.org/openntf</url>
	</pluginRepository>
</pluginRepositories>

SNTT: Reconstructing DateRanges

Thu Feb 27 09:53:46 EST 2014

Tags: sntt

In the OpenNTF API chat, I was reminded yesterday of how terribly broken DateRange retrieval is in Java. Specifically, though you can create date ranges and use them in view lookups, retrieving them from documents or view entries is FUBAR. To wit:

Field value:
	02/27/2014, 02/28/2014, 02/27/2014 - 02/28/2014, 01/01/2014 12:00 AM - 01/02/2014 12:00 PM

doc.getItemValue("SomeField"):
	[02/27/2014, 02/28/2014, 02/27/2014, 02/28/2014, 01/01/2014 12:00:00 AM EST, 01/02/2014 12:00:00 PM EST]
	(Note: this is consistent with LotusScript)

doc.getItemValueDateTimeArray("SomeField"):
	[02/27/2014, 02/28/2014, null, null]

session.evaluate(" SomeField ", doc):
	[02/27/2014, 02/28/2014, 02/27/2014, 02/28/2014, 01/01/2014 12:00:00 AM EST, 01/02/2014 12:00:00 PM EST]

entry.getColumnValues().get(columnIndex):
	[02/27/2014, 02/28/2014, 02/27/2014, 02/28/2014, 12/29/**** 12:22:46 PM EST, 10/03/**** 04:53:38 PM EDT]
	(Note: the final values are inconsistent across multiple executions)

So it appears that getItemValueDateTimeArray chokes and gives up when it encounters date ranges, instead just plopping a null value in the Vector. View entries appear to wander off into some invalid-data wilderness, perhaps not properly parsing the ITEM_VALUE_TABLE attached to the entry. Or maybe it's in the ITEM_TABLE. I don't remember; it's complicated. In any event, view entries are a mostly-lost cause.

Document item values, though, are salvageable. Since getItemValue returns the start/end values and getItemValueDateTimeArray returns nulls in the spots that represent DateRanges (which are always at the end, but that's incidental), the two can be matched up to reconstruct the real value. I wrote a method to do just this - it doesn't do any checking to make sure that the item value actually contains DateTimes (and would throw a ClassCastException if it contains something else), but it should do the job when you know the item contents:

@SuppressWarnings("unchecked")
private List<Base> noSeriouslyGetItemValueDateTimes(final Session session, final Document doc, final String itemName) throws NotesException {
	// Uses java.util.List, java.util.Vector, and the lotus.domino classes
	List<Base> result = new Vector<Base>();

	// getItemValue expands each DateRange into its start and end DateTimes;
	// getItemValueDateTimeArray instead puts a null in each spot that held a range
	List<DateTime> dateTimes = doc.getItemValue(itemName);
	List<DateTime> dateTimesArray = doc.getItemValueDateTimeArray(itemName);
	int foreignIndex = 0;
	for(DateTime dt : dateTimesArray) {
		if(dt != null) {
			// A plain DateTime: take the corresponding getItemValue entry
			result.add(dateTimes.get(foreignIndex++));
			dt.recycle();
		} else {
			// A DateRange: consume the next two entries as its start and end
			DateTime dt1 = dateTimes.get(foreignIndex++);
			DateTime dt2 = dateTimes.get(foreignIndex++);
			DateRange range = session.createDateRange(dt1, dt2);
			result.add(range);
		}
	}

	return result;
}
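
As a usage sketch (assuming you have a Session and Document in hand, with NotesException handling omitted):

List<Base> values = noSeriouslyGetItemValueDateTimes(session, doc, "SomeField");
for(Base value : values) {
	if(value instanceof DateRange) {
		DateRange range = (DateRange)value;
		System.out.println(range.getStartDateTime().getLocalTime() + " - " + range.getEndDateTime().getLocalTime());
	} else {
		System.out.println(((DateTime)value).getLocalTime());
	}
}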

I'll probably add a similar method to the OpenNTF API (or maybe fix getItemValueDateTimeArray), but this should do for now.