Showing posts for tag "mvc"

Modes Of App Development With XPages Jakarta EE

Fri Jul 28 11:46:50 EDT 2023

I've been working on my workshop for this year's CollabSphere, and one of the main decisions I have to make is what I'm going to focus on. The idea of the workshop is to give a bit more brass-tacks information about how to use the project: rather than just a list of features, it'll be about the specific business of building an app using it.

But how does one build an app in it? There's certainly no lack of tools available, but that leads to the opposite problem: what's the right one for your project? What's likely to be the most common path people take?

The Types

As I've been working on it, I've grouped things into four main categories, and I figured it'd be useful to enumerate them here to coordinate my thoughts and provide some general information. There aren't hard lines between these: you can use any mixture of some or all of the parts in an app, and do different mixes in different apps. These are just what I expect to be the main groupings:

  • "XPages Plus", using some new capabilities in existing or new apps with XPages-based UIs
  • REST services, focusing on providing REST endpoints for JavaScript-based apps or other servers
  • MVC and JSP, focusing on clean, lightweight UIs for document-based apps, but less ideal for complex business logic
  • JSF, building the same sorts of apps XPages is adept at, but using newer technology

"XPages Plus"

The first route is how the project got started: you keep building XPages apps but sprinkle in a few new capabilities to improve them.

For example, you could replace your managed beans defined in faces-config.xml with CDI beans, allowing you to get the quick benefit of annotation-based definitions and then the bigger benefits of @Inject, producer methods, and interceptors.
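
As a rough sketch - the bean and its properties here are invented for illustration, not taken from a real app - a bean that would previously have been declared in faces-config.xml could instead look like this:

package bean;

import java.io.Serializable;

import jakarta.enterprise.context.SessionScoped;
import jakarta.inject.Inject;
import jakarta.inject.Named;

@SessionScoped              // replaces the <managed-bean-scope>session</managed-bean-scope> entry
@Named("userPrefs")         // the name EL uses, e.g. #{userPrefs.timeZone}
public class UserPreferencesBean implements Serializable {
    private static final long serialVersionUID = 1L;

    @Inject
    private SettingsService settingsService; // hypothetical service bean, injected rather than looked up manually

    private String timeZone = "US/Eastern";
    public String getTimeZone() { return timeZone; }
    public void setTimeZone(final String timeZone) { this.timeZone = timeZone; }
}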

You could also start using newer EL features, like the long-desired ability to pass parameters to methods.

This path wouldn't necessarily require a lot of reworking of your app or changing the way you think about XPages development, but would still be something of a minor development refresh and can set you up well for future improvements.

Your data access will likely still be through the traditional xp:dominoDocument and xp:dominoView components, but you could also write beans that access data with lotus.domino or ODA, or switch to using the NoSQL driver.
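
For the lotus.domino route, such a bean can resolve the surrounding XPages context much as SSJS does. This is a minimal sketch - the view name is made up and recycling is elided for brevity:

package bean;

import java.util.ArrayList;
import java.util.List;

import javax.faces.context.FacesContext;

import com.ibm.xsp.extlib.util.ExtLibUtil;

import jakarta.enterprise.context.RequestScoped;
import jakarta.inject.Named;
import lotus.domino.Database;
import lotus.domino.NotesException;
import lotus.domino.ViewEntry;
import lotus.domino.ViewEntryCollection;

@RequestScoped
@Named("projectList")
public class ProjectListBean {
    public List<String> getProjectNames() throws NotesException {
        // "database" is the standard XPages implicit variable for the current NSF
        Database database = (Database)ExtLibUtil.resolveVariable(FacesContext.getCurrentInstance(), "database");
        List<String> result = new ArrayList<>();
        ViewEntryCollection entries = database.getView("Projects").getAllEntries();
        for(ViewEntry entry = entries.getFirstEntry(); entry != null; entry = entries.getNextEntry(entry)) {
            result.add(String.valueOf(entry.getColumnValues().get(0)));
        }
        return result;
    }
}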

REST Services

Alternatively, you could decide you want to focus your apps around REST services with either a JavaScript app in, for example, React as the front end, or providing services to remote servers.

With this, you'd largely stop using XPages design elements entirely, instead defining your services in Java classes with JAX-RS annotations. This brings huge advantages over other ways to write REST services on Domino, with the JAX-RS annotations allowing for clear, logical definition of services, their parameters, and their output. Moreover, the ancillary tooling brings niceties like automatic OpenAPI definitions, which would be annoying to maintain with something like the XPages-side REST controls.
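
As a bare-bones sketch - the entity and repository classes here are placeholders, not real project code - a service class can be as small as this:

package rest;

import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.NotFoundException;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/projects")
public class ProjectsResource {

    @Inject
    ProjectRepository projectRepository; // hypothetical data-access bean

    @GET
    @Path("{projectId}")
    @Produces(MediaType.APPLICATION_JSON)
    public Project get(@PathParam("projectId") String projectId) {
        // The returned object is serialized to JSON automatically via JSON-B
        return projectRepository.findById(projectId)
            .orElseThrow(() -> new NotFoundException("No project found for ID " + projectId));
    }
}

An endpoint like this ends up served beneath the NSF's xsp/app path.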

This path is good if you're specifically aiming to build a JavaScript-based app, whether because you just like that approach, because your organization decided to go that route, or because you have a larger team that splits the duties of front-end and back-end developers. It can also blend naturally into the next one.

Your data access here won't be through the XPages components, but you could still use lotus.domino or ODA classes, or switch to the NoSQL driver. That actually goes for the next two, too, so we'll just count that as assumed.

MVC and JSP

I'll admit that part of the reason I want to consider this a top-tier route is because I just personally really like it. I've had a blast writing apps like this blog and the OpenNTF site using this path, with its much-cleaner code and back-to-basics approach to HTML.

Regardless of my personal enjoyment of it, though, this has some nice advantages. The fact that MVC builds on top of JAX-RS means that it melds well with the REST-services approach above. For example, you might primarily write REST services for a JS app, but then do a set of "admin" pages with MVC. Or you might use this as part of the prototype phase: structure your app the same way you will when you expand to a multi-tier team, but start out by doing a quick UI with MVC on top of the same or related endpoints.

With this path, your app will start with Java classes with JAX-RS annotations, and then you'd mix it with JSP files inside WebContent/WEB-INF. One down side to this approach is that Designer doesn't provide much help for writing JSP files. In the tooling, I bind .jsp and .tag files to the HTML editor, so you at least get normal HTML assistance, but that won't help you with specific JSP tags and EL. Fortunately, the set of tools you'll likely use in JSP is comparatively small, so you'll eventually memorize things like <c:forEach items="..." var="...">...</c:forEach> in much the same way that you could eventually write out an <xp:repeat/> in your sleep in XPages.

JSF

This one, technically tricky though it may be, is conceptually straightforward: write the same sort of apps you do with XPages, but do it with modern JSF instead. This makes a lot of sense, since JSF shares XPages's acumen with complicated forms with partial refreshes and changing state data, but has benefited from some development that didn't happen on the XPages side.

It's not a direct replacement: in particular, JSF has no knowledge of Domino data sources, so there's no xp:dominoDocument or xp:dominoView. You'd still need to do your data access via beans, as in the previous two options, likely using either lotus.domino/ODA or the NoSQL driver. Additionally, Designer really doesn't help you here - again, I map .xhtml and .jsf files to the HTML editor, but JSF components have a lot of properties to set, and so you'll be spending a lot of time referencing documentation.
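
Conceptually, though, the shape of the code is familiar. A view-scoped backing bean - the rough equivalent of a viewScope-bound managed bean in XPages - might look like this sketch, with invented names and the data access hand-waved into a hypothetical repository bean:

package bean;

import java.io.Serializable;

import jakarta.faces.view.ViewScoped;
import jakarta.inject.Inject;
import jakarta.inject.Named;

@Named("personPage")
@ViewScoped // state survives postbacks for the same view, much like XPages's view scope
public class PersonPageBean implements Serializable {
    private static final long serialVersionUID = 1L;

    @Inject
    PersonRepository personRepository; // hypothetical bean wrapping lotus.domino, ODA, or NoSQL access

    private Person person = new Person();
    public Person getPerson() { return person; }

    public String save() {
        personRepository.save(person);
        // a standard JSF navigation outcome
        return "personList.xhtml?faces-redirect=true";
    }
}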

Still, it's clear why this is proving to be a popular path. The development model is the same as in XPages, while the JSF stack (especially including PrimeFaces) brings a lot of amenities that aren't in XPages and are also more portable to other environments.

Conclusion

So, for now, I'm thinking of splitting up the workshop to cover each of these paths a bit. That runs the risk of feeling like too much of a grab bag, but I don't want to give the opposite impression, that the project only allows for some specific path. It's a broad platform update, accommodating many development approaches, and I want to keep that clear. Fortunately, each path has a pretty-clean pitch, and the shared components (CDI, bean validation, the REST client, etc.) build on each other well, so the idea that it's a pool of features that you can swim in is, I think, compelling.

Rewriting The OpenNTF Site With Jakarta EE: UI

Mon Jun 27 15:06:20 EDT 2022

  1. Rewriting The OpenNTF Site With Jakarta EE, Part 1
  2. Rewriting The OpenNTF Site With Jakarta EE: UI

In what may be the last in this series for a bit, I'll talk about the current approach I'm taking for the UI for the new OpenNTF web site. This post will also tread ground I've covered before, when talking about the Jakarta MVC framework and JSP, but it never hurts to reinforce the pertinent aspects.

MVC

The entrypoint for the UI is Jakarta MVC, a framework that sits on top of JAX-RS. Unlike JSF or XPages, it leaves most app-structure duties to other components. That's due both to its relative youth (JSF predates, and often gave rise to, several of the things we've discussed so far) and to its intent. It's "action-based": you define an endpoint that takes an incoming HTTP request and produces a response, generally without any server-side UI state. This is as opposed to JSF/XPages, where the core concept is the page you're working with and the page state generally persists across multiple requests.

Your starting point with MVC is a JAX-RS REST service marked with @Controller:

package webapp.controller;

import java.text.MessageFormat;

import bean.EncoderBean;
import jakarta.inject.Inject;
import jakarta.mvc.Controller;
import jakarta.mvc.Models;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.NotFoundException;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import model.home.Page;

@Path("/pages")
public class PagesController {
    
    @Inject
    Models models;
    
    @Inject
    Page.Repository pageRepository;
    
    @Inject
    EncoderBean encoderBean;

    @Path("{pageId}")
    @GET
    @Produces(MediaType.TEXT_HTML)
    @Controller
    public String get(@PathParam("pageId") String pageId) {
        String key = encoderBean.cleanPageId(pageId);
        Page page = pageRepository.findBySubject(key)
            .orElseThrow(() -> new NotFoundException(MessageFormat.format("Unable to find page for ID: {0}", key)));
        models.put("page", page); //$NON-NLS-1$
        return "page.jsp"; //$NON-NLS-1$
    }
}

In the NSF, this will respond to requests like /foo.nsf/xsp/app/pages/Some_Page_Name. Most of what is going on here is the same sort of thing we saw with normal REST services: the @Path, @GET, @Produces, and @PathParam are all normal JAX-RS, while @Inject uses the same CDI scaffolding I talked about in the last post.

MVC adds two things here: @Inject Models models and @Controller.

The Models object is conceptually a Map that houses variables that you can populate to be accessible via EL on the rendered page. You can think of it like viewScope or requestScope in XPages; it's populated at a point comparable to the beforePageLoad phase. Here, I use the Models object to store the Page object I look up with JNoSQL.

The @Controller annotation marks a method or a class as participating in the MVC lifecycle. When placed on a class, it applies to all methods on the class, while placing it on a method specifically allows you to mix MVC and "normal" REST resources in the same class. Doing that would be useful if you want to, for example, provide HTML responses to browsers and JSON responses to API clients at the same resource URL.
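
As a sketch of what that mixing might look like - adapted from the controller above rather than taken verbatim from the app:

package webapp.controller;

import jakarta.inject.Inject;
import jakarta.mvc.Controller;
import jakarta.mvc.Models;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.NotFoundException;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import model.home.Page;

@Path("/pages/{pageId}")
public class PagesResource {
    
    @Inject
    Models models;
    
    @Inject
    Page.Repository pageRepository;

    @GET
    @Produces(MediaType.TEXT_HTML)
    @Controller // only this method participates in the MVC lifecycle
    public String getHtml(@PathParam("pageId") String pageId) {
        models.put("page", pageRepository.findBySubject(pageId).orElseThrow(NotFoundException::new)); //$NON-NLS-1$
        return "page.jsp"; //$NON-NLS-1$
    }

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Page getJson(@PathParam("pageId") String pageId) {
        // No @Controller here: this is a plain JAX-RS method, so the Page is serialized to JSON
        return pageRepository.findBySubject(pageId).orElseThrow(NotFoundException::new);
    }
}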

When a resource method is marked for MVC use, it can return a string that represents either a page to render or a redirection in the form "redirect:some/resource". Here, it's hard-coded to use "page.jsp", but in another situation it could programmatically switch between different pages based on the content of the request or state of the app.

While this looks fairly clean on its own, it's important to bear in mind both the strengths and weaknesses of this approach. I think it will work here, as it does for my blog, because the OpenNTF site isn't heavy on interactive forms. When dealing with forms in MVC, you'll have to have another endpoint to listen for @POST (or other verbs with a shim), process that request from scratch, and return a new page. For example, from the XPages JEE example app:

@Path("create")
@POST
@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
@Controller
public String createPerson(
        @FormParam("firstName") @NotEmpty String firstName,
        @FormParam("lastName") String lastName,
        @FormParam("birthday") String birthday,
        @FormParam("favoriteTime") String favoriteTime,
        @FormParam("added") String added,
        @FormParam("customProperty") String customProperty
) {
    Person person = new Person();
    composePerson(person, firstName, lastName, birthday, favoriteTime, added, customProperty);
    
    personRepository.save(person);
    return "redirect:nosql/list";
}

That's already fiddlier than the XPages version, where you'd bind fields right to bean/document properties, and it gets potentially more complicated from there. In general, the more form-based your app is, the better a fit XPages/JSF is.

JSP

While MVC isn't intrinsically tied to JSP (it ships with several view engine hooks and you can write your own), JSP has the advantage of being built into all Java webapp servers and being very well suited to the purpose. When writing JSPs for MVC, the default location for them is WEB-INF/views, which is beneath WebContent in an NSF project:

[Screenshot of JSPs in an NSF]

The "tags" there are the general equivalent of XPages Custom Controls, and their presence in WEB-INF/tags is convention. An example page (the one used above) will tend to look something like this:

<%@page contentType="text/html" pageEncoding="UTF-8" trimDirectiveWhitespaces="true" session="false" %>
<%@taglib prefix="t" tagdir="/WEB-INF/tags" %>
<%@taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<%@taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %>
<t:layout>
    <turbo-frame id="page-content-${page.linkId}">
        <div>
            ${page.html}
        </div>
        
        <c:if test="${not empty page.childPageIds}">
            <div class="tab-container">
                <c:forEach items="${page.cleanChildPageIds}" var="pageId" varStatus="pageLoop">
                    <input type="radio" id="tab${pageLoop.index}" name="tab-group" ${pageLoop.index == 0 ? 'checked="checked"' : ''} />
                    <label for="tab${pageLoop.index}">${fn:escapeXml(encoder.cleanPageId(pageId))}</label>
                </c:forEach>
                    
                <div class="tabs">
                    <c:forEach items="${page.cleanChildPageIds}" var="pageId">
                        <turbo-frame id="page-content-${pageId}" src="xsp/app/pages/${encoder.urlEncode(pageId)}" class="tab" loading="lazy">
                        </turbo-frame>
                    </c:forEach>
                </div>
            </div>
        </c:if>
    </turbo-frame>
</t:layout>

There are, by shared lineage and concept, a lot of similarities with an XPage here. The first four lines of preamble boilerplate are pretty similar to the kind of stuff you'd see in an <xp:view/> element to set up your namespaces and page options. The tag prefixing is the same idea, where <t:layout/> refers to the "layout" custom tag in the NSF and <c:forEach/> refers to a core control tag that ships with the standard tag library, JSTL. The <turbo-frame/> business isn't JSP - I'll deal with that later.

The bits of EL here - all wrapped in ${...} - are from Expression Language 4.0, which is the current version of XPages's aging EL. On this page, the expressions are able to resolve variables that we explicitly put in the Models object, such as page, as well as CDI beans with the @Named annotation, such as encoderBean. There are also a number of implicit objects like request, but they're not used here.

In general, this is safely thought of as an XPage where you make everything load-time-bound and set viewState="nostate". The same sorts of concepts are all there, but there's no concept of a persistent component that you interact with. Any links, buttons, and scripts will all go to the server as a fresh request, not modifying an existing page. You can work with application and session scopes, but there's no "view" scope.

Hotwired Turbo

Though this app doesn't have much need for a lot of XPages's capabilities, I do like a few components even for a mostly "read-only" app. In particular, the <xe:djContentPane/> and <xe:djTabContainer/> controls have the delightful capability of deferring evaluation of their contents to later requests. This is a powerful way to speed up initial page load and, in the case of the tab container, skip needing to render parts of the page the user never uses.

For this and a couple other uses, I'm a fan of Hotwired Turbo, which is a library that grew out of 37 Signals's Rails-based development. The goal of Turbo and the other Hotwired components is to keep the benefits of server-based HTML rendering while mixing in a lot of the niceties of JS-run apps. There are two things that Turbo is doing so far in this app.

The first capability is dubbed "Turbo Drive", and it's sort of a freebie: you enable it for your app, tell it what is considered the app's base URL, and then it will turn any in-app links into "partial refresh" links: it downloads the page in the background and replaces just the changed part on the page. Though this is technically doing more work than a normal browser navigation, it ends up being faster for the user interface. And, since it also updates the URL to match the destination page and doesn't require manual modification of links, it's a drop-in upgrade that will also degrade gracefully if JavaScript isn't enabled.

The second capability is <turbo-frame/> up there, and it takes a bit more buy-in to the JS framework in your app design. The way I'm using Turbo Frames here is to support the page structure of OpenNTF, which is geared around a "primary" page as well as zero or more referenced pages that show up in tabs. Here, I'm buying in to Turbo Frames by surrounding the whole page in a <turbo-frame/> element with an id using the page's key, and then I reference each "sub-page" in a tab with that same ID. When loading the frame, Turbo makes a call to the src page, finds the element with the matching id value, and drops it in place inside the main document. The loading="lazy" parameter means that it defers loading until the frame is visible in the browser, which is handy when using the HTML/CSS-based tabs I have here.

I've been using this library for a while now, and I've been quite pleased. Though it was created for use with Rails, the design is independent of the server implementation, and the idioms fit perfectly with this sort of Java app too.

Conclusion

I think that wraps it up for now. As things progress, I may have more to add to this series, but my hope is that the app doesn't have to get much more complicated than the sort of stuff seen in this series. There are certainly big parts to tackle (like creating and managing projects), but I plan to do that by composing these elements. I remain delighted with this mode of NSF-based app development, and look forward to writing more clean, semi-declarative code in this vein.

Journeys Debugging Open Liberty and MVC

Tue Nov 30 16:40:24 EST 2021

I mentioned in my last post that I've been tinkering with a modern structure for OpenNTF's web site as a side project. In that, I talked about how I've been going with Jakarta MVC for the front end, but ran into an odd problem with the latest versions in Open Liberty, and that was the impetus to tinkering with ERB.

Well, I decided to go back and take a swing at trying to make JSP work in this case, since it's (still) a good engine for this purpose, and it could be a fun experiment. I was indeed able to do it, and I think the path I took is worth chronicling here.

Context

In previous projects, such as this blog, I've used older versions of the software stack involved - basically, Jakarta 8, which is before the "big bang" switch from the javax.* to jakarta.* package namespace. Since this is a clean new app, I really want to lean toward the newest versions across the board, so I pegged my plans to that.

Though Jakarta EE 9 and 9.1 (the Java 11 official version) have been out for a bit, the switchover comes with the sort of turbulence one would expect. Until just this past week, Open Liberty supported JEE 9 only in beta releases - these have historically been plenty stable for me, but it's always asking for trouble. Even with that non-beta version out, I found myself still on the beta track: I'm addicted to using MicroProfile Config, and MicroProfile's move to JEE 9 support is still itself in the RC stage.

So, okay, betas it is.

The Problem

Once I set everything up on JEE 9 and MP 5, I hit this exception when trying to render a JSP via an MVC Controller object:

java.lang.RuntimeException: SRV.8.2: RequestWrapper objects must extend ServletRequestWrapper or HttpServletRequestWrapper
  at com.ibm.wsspi.webcontainer.util.ServletUtil.unwrapRequest(ServletUtil.java:89)
  at [internal classes]
  at org.eclipse.krazo.engine.ServletViewEngine.forwardRequest(ServletViewEngine.java:135)
  at org.eclipse.krazo.engine.JspViewEngine.processView(JspViewEngine.java:58)
  at org.eclipse.krazo.core.ViewableWriter.writeTo(ViewableWriter.java:159)
  at org.eclipse.krazo.core.ViewableWriter.writeTo(ViewableWriter.java:1)
  at org.jboss.resteasy.core.interception.jaxrs.ServerWriterInterceptorContext.lambda$writeTo$1(ServerWriterInterceptorContext.java:79)
  ... 4 more

The short of it is that Krazo (the MVC implementation) passes JSP rendering along to the app container, rather than doing its own JSP work, which makes perfect sense. This, however, hits trouble within Liberty's ServletUtil class, which attempts to "unwrap" the incoming HttpServletRequest object to find the core Liberty-specific object to use extended methods on.

Normally, this sort of thing would work fine: every app server has its own variant of HttpServletRequest for its own uses, and it's perfectly reasonable to do this kind of unwrapping. However, for some reason, this was going awry.

The specific code from Krazo that calls down into Liberty code does do new HttpServletRequestWrapper(request), but that's also legal: the unwrapRequest method is intended specifically to unwrap spec-standard HttpServletRequestWrapper objects like that. So that's not our culprit, and I had to dig deeper.

Investigation

To start my investigation, I knew I'd need to work with the Krazo layer. Fortunately, though many of the moving parts here are baked into the Liberty server, Krazo is not - I include it as a Maven dependency in my app. So I cloned the Krazo source, added it to my workspace, and set my dependency on the SNAPSHOT version, allowing me to do my work inside Krazo's classes.

So what was going on? At first, I thought that maybe something had snuck in a javax.* class somewhere - old code that wasn't fully migrated to JEE 9. That would certainly cause the trouble: javax.servlet.ServletRequestWrapper and jakarta.servlet.ServletRequestWrapper are, to the JVM's eyes, entirely-unrelated classes with no compatibility whatsoever. And, indeed, looking at the source of ServletUtil could give one that impression right away, since the code uses javax.*.

That's not the trouble, however. Though the class is written to javax.*, I gather that it's run through Eclipse Transformer during packaging of the app server, and the actual class that's involved uses jakarta.*. Okay, that's good to know and makes sense, but it also doesn't get us any closer to the root problem.

For my next step, I wanted to figure out what, specifically, it was looking for. The unwrapRequest method takes a Class<?> parameter to find the needed request type, but the stack trace above hid the path it took to get there. By attaching a debugger to the server, I gleaned that it was being called by the unwrapRequest variant above it that looks specifically for a com.ibm.wsspi.webcontainer.servlet.IExtendedRequest.

Okay, so I have the name of the interface it's looking for - I can work with this. My next step was to try to get a programmatic handle on it. The basic approach, when you don't have the class as part of your project, is to look it up via:

Class<?> requestClass = Class.forName("com.ibm.wsspi.webcontainer.servlet.IExtendedRequest");

That doesn't work here, however: though the app server definitely has that class, it (properly) shields the running application from accessing the class directly, so that the app runtime isn't contaminated by the surrounding server code.

What I needed to do next was find a ClassLoader that does know about it, and the best way to do that is to find a class provided from outside the running app and ask that. Fortunately, the incoming request is exactly that. So:

Class<?> requestClass = Class.forName("com.ibm.wsspi.webcontainer.servlet.IExtendedRequest", false, request.getClass().getClassLoader());

What that does is ask whatever ClassLoader the request object comes from - that is to say, the container's loader - to find the class. And it worked! Now I could test to verify whether the core request matches the type needed:

Class<?> requestClass = Class.forName("com.ibm.wsspi.webcontainer.servlet.IExtendedRequest", false, request.getClass().getClassLoader());
System.out.println("does it match? " + requestClass.isInstance(request));

As expected, that resolved to false. In this case, that's good, since it'd have been a much-worse problem if it hadn't. But what is the request object, anyway? Well:

System.out.println(request.getClass());
// output: jdk.proxy15.$Proxy68

Right, okay, that makes sense: all sorts of stuff uses proxy objects in Java, not the least of which being the CDI environment running the whole show.

Cracking Open Proxies

So I had some good information at this point: the HttpServletRequest that Krazo is handed is a proxy object, but Liberty has a hard requirement that its dispatcher is given an instance of IExtendedRequest, which this is not. That means that something in the stack is taking the original Liberty request object and making a proxy for it - fair enough, but inconvenient for me.

My next thought was that maybe I could track down the type of proxy object it is and, with that knowledge, get the underlying delegate request. That's a common-enough pattern: have an instance property in your proxy class that contains the delegate, and (if I'm lucky) have it accessible via a getter. Java's java.lang.reflect.Proxy class has a static method for determining the object that actually handles called methods:

System.out.println(Proxy.getInvocationHandler(request));
// output: org.jboss.resteasy.core.ContextParameterInjector$GenericDelegatingProxy@fc04422d

This was starting to come together all the more. Liberty recently switched from Apache CXF to RESTEasy for its JAX-RS implementation, and that could explain why this is trouble now when it wasn't before. More importantly for my immediate needs, that also gave me a lead to track down the proxy class.

However, though it was easy enough to find, my heart sank a bit at what I found: rather than having an easy instance property to get the real request, it uses an object from its container class and in turn asks that for the request. Maybe I'd be able to get to that via reflection, but the prospect of figuring out how to work with nested class contexts caused me to try to look around elsewhere instead.

CDI

Another potential answer came to me in a flash: CDI! Access to the CDI environment is standardized, and maybe I could fetch the original request from there. It'd be extremely likely that it'd just hand me back a similar proxy, but it'd be worth a shot. So here we go:

HttpServletRequest cdiReq = CDI.current().select(HttpServletRequest.class).get();
System.out.println(cdiReq);
// output: com.ibm.ws.webcontainer40.srt.SRTServletRequest40@1a22712c

Oh! Good! That's one of Liberty's internal types! Is it the object I need, though? Well... no. Crap. requestClass.isInstance(cdiReq) is false, so this didn't really get me very far. That's a shame, too, since that solution could have involved no implementation-specific code at all.

Internal Liberty Classes

My next thought was that I should try to find another way to get around to finding the true request object. I looked back through the debug stack to find where it was originally calling unwrapRequest to get a bit more context:

WebContainerRequestState reqState = WebContainerRequestState.getInstance(false);

wasReq = (IExtendedRequest) ServletUtil.unwrapRequest(request);

Okay, so what's this with WebContainerRequestState? That sure smells like an object that's meant to be a request-wide object to get to all sorts of state. If I were to write such a class, I'd use it to stash the incoming request as well as any other incidental data that I wouldn't want to ferret away in a way that might leak into the app. I was a little wary that WebAppRequestDispatcher didn't use it to get the IExtendedRequest, but maybe I'd luck out.

And boy, did I! Looking down the source of the file, I found my mark: public IExtendedRequest getCurrentThreadsIExtendedRequest().

The (Provisional) Solution

Now, I had all the tools I needed. Towards the bottom of Krazo's ServletViewEngine class, I conjured up this reflective incantation:

try {
  Class<?> requestStateClass = Class.forName("com.ibm.wsspi.webcontainer.WebContainerRequestState", false, request.getClass().getClassLoader());
  Method getInstance = requestStateClass.getDeclaredMethod("getInstance", boolean.class);
  Object requestState = getInstance.invoke(null, false);
  Method getCurrentThreadsIExtendedRequest = requestStateClass.getDeclaredMethod("getCurrentThreadsIExtendedRequest");
  request = (HttpServletRequest)getCurrentThreadsIExtendedRequest.invoke(requestState);
} catch (Throwable e1) {
  // Not on WAS
}
rd.forward(new HttpServletRequestWrapper(request), new HttpServletResponseWrapper(response));

So what I'm doing here is breaking into the container's ClassLoader in order to get a handle on WebContainerRequestState. From there, I'm able to call the method to get the current instance, and then in turn call the method to get the IExtendedRequest. I overwrite the request variable we're working with, and then pass that along to the dispatcher. If any of that were to fail, I just throw up my hands, assume I'm not on Liberty, and continue on as before.

And... it works! It actually works! The pages now render properly, with all the niceness of modern JSP at my fingertips! It was fun to toy with the idea of ERB, but I like this better for an otherwise-pure-Java app for sure.

Next Steps

So I have a solution that works for me, but it's so ugly and implementation-specific that I can't exactly be comfortable with it. Still, the trouble comes from an implementation-specific source, so that may be required. Maybe I'll have to leave it like this.

More responsibly, though, what I should do is narrow this down into a reproducible case without all the other moving parts in the app to make sure that it's actually a bug/incompatibility, and thus something that I can report. This is all open-source software, after all, and it'd do nobody any good for me to let a potential actual problem linger. I'll just have to properly identify where the true culprit is first. Is it because Krazo uses RequestDispatcher in a somewhat-unusual way? Is it because RESTEasy is too aggressive about wrapping requests with no proper way to get to the delegate? Is it that Liberty should handle this case better internally? Or maybe it's just some side effect of the other things I have going on. Research is warranted.

In the mean time, that was a fun one. I don't know that I'll have a need for this specific solution again, but it was good to find, and it's always good to get some troubleshooting practice like this for sure.

Writing A Custom ViewEngine For Jakarta MVC

Tue Nov 02 14:31:20 EDT 2021

One of the very-long-term side projects I have going on is a rewrite of OpenNTF's site. While the one we have has served us well, we have a lot of ideas about how we want to present projects differently and similar changes to make, so this is as good a time as any to go whole-hog.

The specific hog in question involves an opportunity to use modern Jakarta EE technologies by way of my Domino Open Liberty Runtime project, as I do with my blog here. And that means I can, also like this blog, use the delightful Jakarta MVC Spec.

However, when moving to JEE 9.1, I ran into some trouble with the current Open Liberty beta and its handling of JSP as the view template engine. At some point, I plan to investigate to see if the bug is on my side or in Liberty's (it is beta, in this case), but in the mean time it gave my brain an opportunity to wander: in theory, I could use ERB (a Ruby-based templating engine) for this purpose. I started to look around, and it turns out the basics of such a thing aren't difficult at all, and I figure it's worth documenting this revelation.

MVC ViewEngines

The way the MVC spec works, you have a JAX-RS endpoint that returns a string or is annotated with a view-template name, and that's how the framework determines what page to use to render the request. For example:

@Path("home")
@GET
@Produces(MediaType.TEXT_HTML)
public String get() {
  models.put("recentReleases", projectReleases.getRecentReleases(30)); //$NON-NLS-1$
  models.put("blogEntries", blogEntries.getEntries(5)); //$NON-NLS-1$

  return "home.jsp"; //$NON-NLS-1$
}

Here, the controller method does a little work to load required model data for the page and then hands it off to the view engine, identified here by returning "home.jsp", which is in turn loaded from WEB-INF/views/home.jsp in the app.

Internally, the framework looks through instances of ViewEngine to find one that can handle the named page. The default spec implementation ships with a few of these, and JspViewEngine is the one that handles view names ending with .jsp or .jspx. The contract for ViewEngine is pretty simple:

public interface ViewEngine {
  boolean supports(String view);
  void processView(ViewEngineContext context) throws ViewEngineException;
}

So basically, one method to check whether the engine can handle a given view name and another one to actually handle it if it returned true earlier.

Writing A Custom Engine

With this in mind, I set about writing a basic ErbViewEngine to see how practical it'd be. I added JRuby to my dependencies and then made my basic class:

@ApplicationScoped
@Priority(ViewEngine.PRIORITY_APPLICATION)
public class ErbViewEngine extends ViewEngineBase {

  @Inject
  private ServletContext servletContext;

  @Override
  public boolean supports(String view) {
    return String.valueOf(view).endsWith(".erb"); //$NON-NLS-1$
  }

  @Override
  public void processView(ViewEngineContext context) throws ViewEngineException {
    // ...
  }
}

At the top, you see how a custom ViewEngine is registered: it's done by way of making your class a CDI bean in the application scope, and then it's good form to mark it with a @Priority of the application level stored in the interface. Extending ViewEngineBase gets you a handful of utility methods, so you don't have to, for example, hard-code WEB-INF/views into your lookup code. The bit with ServletContext is there because it becomes useful in the implementation below - it's not part of the contractual requirement.

And that's basically the gist of hooking up your custom engine. The devil is in the implementation details, for sure, but that processView is an empty canvas for your work, and you're not responsible for the other fiddly details that may be involved.

First-Pass ERB Implementation

Though the above covers the main concept of this post, I figure it won't hurt to discuss the provisional implementation I have a bit more. There are a couple ways to use JRuby in a Java app, but the way I'm most familiar with is using JSR 223, which is a generic way to access scripting languages in Java. With it, you can populate contextual objects and settings and then execute a script in the target language. The Krazo MVC implementation actually comes with a generic Jsr223ViewEngine that lets you use any such language by extension.

In my case, the task at hand is to read in the ERB template, load up the Ruby interpreter, and then pass it a small script that uses the in-Ruby ERB class to render the final page. This basic implementation looks like this:

@Override
public void processView(ViewEngineContext context) throws ViewEngineException {
  Charset charset = resolveCharsetAndSetContentType(context);

  String res = resolveView(context);
  String template;
  try {
    // From Apache Commons IO
    template = IOUtils.resourceToString(res, StandardCharsets.UTF_8);
  } catch (IOException e) {
    throw new ViewEngineException("Unable to read view", e);
  }

  ScriptEngineManager scriptEngineManager = new ScriptEngineManager();
  ScriptEngine scriptEngine = scriptEngineManager.getEngineByExtension("rb"); //$NON-NLS-1$
  Object responseObject;
  try {
    Bindings bindings = scriptEngine.createBindings();
    bindings.put("models", context.getModels().asMap());
    bindings.put("__template", template);
    responseObject = scriptEngine.eval("require 'erb'\nERB.new(__template).result(binding)", bindings);
  } catch (ScriptException e) {
    throw new ViewEngineException("Unable to execute script", e);
  }

  try (Writer writer = new OutputStreamWriter(context.getOutputStream(), charset)) {
    writer.write(String.valueOf(responseObject));
  } catch (IOException e) {
    throw new ViewEngineException("Unable to write response", e);
  }
}

The resolveCharsetAndSetContentType and resolveView methods come from ViewEngineBase and do basically what their names imply. The rest of the code here reads in the ERB template file and passes it to the script engine. This is extremely similar to the generic JSR 223 implementation, but diverges in that the actual Ruby code is always the same, since it exists just to evaluate the template.

If I continue down this path, I'll put in some time to make this more streamable and to provide access to CDI beans, but it did the job to prove that it's quite doable.

All in all, I found this exactly as pleasant and straightforward as it should be.

My Current Model Framework, Part 2: An Example

Fri Feb 21 15:27:19 EST 2014

Tags: xpages mvc
  1. My Current Model Framework, Part 1
  2. My Current Model Framework, Part 2: An Example

To go along with the updated release of my Scaffolding project yesterday, I've created an example database created with that template and containing a basic use of my model framework:

model-framework-example.zip

This demonstrates a few things about the framework:

  • Setting up the two classes that go with each object type - the object itself (e.g. model.Department) and the "manager" class that handles fetching objects and collections (e.g. model.DepartmentManager) and adding the collections to faces-config.xml.
  • Tying the manager class to the views in a database using views with a common prefix. In this case, they point to the current database, though I recommend storing the data in another DB.
  • Adding relations between objects. A Person object is tied to a given Department by way of the getDepartment method, while a Department lists its people via the getPeople method.
  • Using the collections on an XPage with xp:viewPanel controls, including showing "columns" that are not in the view - the objects automatically fetch from the document when the column isn't present (which is inefficient but useful sometimes).
  • Attaching a model object to an XPage using an xp:dataContext and using a controller class to save it.

I have a few ideas in mind to improve day-to-day use of the model framework (for example, I may set up proper data sources for individual objects so they can tie into the normal save events), so they may change over time, but this is how I develop most apps today.

My Current Model Framework, Part 1

Sun Nov 17 12:19:18 EST 2013

Tags: xpages mvc
  1. My Current Model Framework, Part 1
  2. My Current Model Framework, Part 2: An Example

As I do from time to time, I've recently been taking another stab at a standard model framework for my XPage apps. My latest one has been proving its worth in a couple apps I've been writing lately, and I'd like to go over the general goals and advantages of the way it works.

The framework is focused on a couple main ideas:

  • Low Overhead. The Java language requires a certain amount of overhead to get anything done, but I want to minimize that. Defining a new model object consists of writing two classes: one for the object itself and one for the collection manager (e.g. for connecting to Domino views), and each is geared toward writing only the code required to describe what's necessary.
  • Embracing XSP. The framework is intended for writing XPages apps, and so I want to make sure it works smoothly with EL and standard controls like inputs, repeats, and view panels.
  • Embracing document databases. Since I almost entirely use a document database for storage, I want to take advantage of that, which primarily means flexibility in data modeling. Accordingly, data objects allow arbitrary field access, with any strictures from your model class layered on top of that.
  • Embracing Domino. Since the document database I use is Domino, I want to make sure its peculiar features are retained as well: reader/author fields, arbitrary object storage via MIME, (ideally) rich text and attachments, full-text search, categorized views, and so forth.
  • Conceptually simple. Though there's some ugly code involved in parts like the Domino view wrapper, the conceptual layout of the framework is kept very simple with few moving parts, making it easy to recognize the function of each aspect at a glance, whether when building it, when looking at another's code, or when returning to your own code months down the line.

For this post, I'll give an example of a model class, while later I'll go into the collection managers and some real-use examples. This is what a basic model class looks like:

package model;

import java.util.Arrays;
import java.util.Collection;
import java.util.List;

import org.openntf.domino.*;

import frostillicus.model.AbstractDominoModel;
import frostillicus.model.DominoColumnInfo;

public class Request extends AbstractDominoModel {
	private static final long serialVersionUID = 589766180414699322L;

	public Request(final Database database) {
		super(database);
		setValue("Form", "Request");
	}

	public Request(final ViewEntry entry, final List<DominoColumnInfo> columnInfo) {
		super(entry, columnInfo);
	}

	public Request(final Document doc) {
		super(doc);
	}

	@Override
	protected Collection<String> nonSummaryFields() {
		return Arrays.asList(new String[] { "Body" });
	}

}

There's more Java overhead than I'd like, but that's the name of the game. Fortunately, each method has a role and can serve as a hook for differing behavior.

The first constructor is the one used to create a new document of the appropriate type in the provided database, so it sets the form field appropriately - it would also set any other appropriate defaults. The second constructor is used when traversing a view - model objects pull values from view entries when available, but also transparently pass along requests to the document when the requested field isn't in the view. The final constructor wraps an existing document.

The "nonSummaryFields" method is one of the hooks available to define the Domino data model, in this case providing a list of fields that should be flagged as non-summary when written, to help work around Domino limits. I also have hooks, not needed here, for authors/readers/names fields and form-style query/post events.

Conspicuously absent are any field definitions. By default, model objects act much like XPage DominoDocuments, passing setValue and getValue calls on to the underlying document more or less directly. However, the framework allows for hooks here by writing getters and setters in the standard format, and their presence changes the behavior of getValue and setValue. When just a getter is present, the field becomes read-only; when a setter is present, the field can be written, but now does data-type validation. The getters and setters allow for arbitrary validation and additional behavior for specific fields (say, changing related fields when one changes) without having to write out giant blocks of "getFoo() { return foo; }" and "setFoo(String foo) { this.foo = foo; }", and this has been a huge win for me. Even though Eclipse helps with the initial creation, the code still has to exist, and it takes a cognitive toll.

I also use these overriding methods to create relations between models through their collection managers. For example:

public Client getClient() {
	String clientId = (String)getValue("ClientID");
	if(clientId == null || clientId.isEmpty()) {
		return null;
	}
	return (Client)JSFUtil.getClientManager().getValue(clientId);
}

This allows for XPage EL bindings like #{task.client.name} - no extra panels with data sources, no inline lookup code.

Next time, I will go into the collection-manager side, which provides fairly high-performance access to views while remaining simple and flexible.

More On "Controller" Classes

Mon Jan 07 19:39:09 EST 2013

Tags: xpages mvc java
  1. "Controller" Classes Have Been Helping Me Greatly
  2. More On "Controller" Classes

Since my last post on the matter, I've been using this "controller" class organization method in a couple other projects (including a refresh of the back-end of this blog), and it's proven to be a pretty great way to go about XPages development.

As I mentioned before, the "controller" term comes from the rough equivalent in Rails. Rails is thoroughly MVC based, so a lot of your programming involves creating the UI of a page in HTML with a small sprinkling of Ruby (the "view"), backed by a class that is conceptually tied to it and stores all of the real business logic associated with that specific page (the "controller"). XPages don't have quite this same assumption built in, but the notion of pairing an XPage with a single backing Java class is a solid one. Alternatively, you can think of the "controller" class as being like a stylesheet or client JavaScript file: the HTML page handles the design, and only contains references to functionality defined elsewhere.

What this means in practice is that I've been pushing to eliminate all traces of non-EL bindings in my XSP markup in favor of writing the code in the associated Java class - this includes not only standard page events like beforeRenderResponse, but also anywhere else that Server JavaScript (or Ruby) would normally appear, like value and method bindings. Here's a simple example, from an XPage named Test and its backing class:

Test.xsp
<p><xp:button id="clickMe" value="Click Me">
	<xp:eventHandler event="onclick" submit="true" refreshMode="partial" refreshId="output"
		action="#{pageController.refreshOutput}"/>
</xp:button></p>
	
<p><xp:text id="output" value="#{pageController.output}"/></p>
controller/Test.java
package controller;

import frostillicus.controller.BasicXPageController;
import java.util.Date;

public class Test extends BasicXPageController {
	private static final long serialVersionUID = 1L;

	private String output = "default";
	public String getOutput() { return this.output; }

	public void refreshOutput() {
		this.output = new Date().toString();
	}
}

In a simple case like this, it doesn't buy you much, but imagine a more complicated case, say code to handle post-save document processing and notifications, or complicated code to generate the value for a container control. The separation between the two pays significant dividends when you stop having to scroll through reams of code in the XSP markup while you're working on the page layout, and having a known location for the Java code associated with a page makes it easier to track down functionality. Plus, Server JavaScript is only mildly more expressive than Java, so even the business logic itself stays just about as clean (moving from Ruby to Java, on the other hand, imposes a grotesque LOC penalty).

It sounds strange, but the most important benefit I've derived from this hasn't been from performance or cleanliness (though those are great), but instead the discipline it provides. Now, there's no question where business logic associated with a page should go: in a class that implements XPageController with the same name as the page and put in the "controller" package. If I want pages to share functionality, that's handled either via a common method stored in another class or, as appropriate, via class inheritance. No inline code, no Server JavaScript script libraries. If something is going to be calculated, it's done in Java.

There is one area that's given me a bit of trouble: custom controls. For example, the linksbar on this blog requires computation to generate, but it's not associated with any specific page. For now, I put that in the top-level controller class, but that feels a bit wrong. The same solution also doesn't apply to the code that handles rendering each individual post in the list, which may be on the Home page, the Month page, or individually on the Post page, in which case it also has edit/save/delete actions associated with it. For now, I created a "helper" class that I instantiate (yes, with JavaScript, which is depressing) for each instance. I think the "right" way to do it with the architecture is to create my controls entirely in Java, with renderers and all. That would allow me to handle the properties passed in cleanly, but the code involved with creating those things is ugly as sin. Still, I'll give that a shot later to see if it's worth the tradeoff.

"Controller" Classes Have Been Helping Me Greatly

Wed Dec 26 16:54:16 EST 2012

Tags: xpages mvc java
  1. "Controller" Classes Have Been Helping Me Greatly
  2. More On "Controller" Classes

I mentioned a while ago that I've been using "controller"-type classes paired with specific XPages to make my code cleaner. They're not really controllers since they don't actually handle any server direction or page loading, but they do still hook into page events in a way somewhat similar to Rails controller classes. The basic idea is that each XPage gets an object to "back" it - I tie page events like beforePageLoad and afterRenderResponse to methods on the class that implements a standard interface:

package frostillicus.controller;

import java.io.Serializable;
import javax.faces.event.PhaseEvent;

public interface XPageController extends Serializable {
	public void beforePageLoad() throws Exception;
	public void afterPageLoad() throws Exception;

	public void afterRestoreView(PhaseEvent event) throws Exception;

	public void beforeRenderResponse(PhaseEvent event) throws Exception;
	public void afterRenderResponse(PhaseEvent event) throws Exception;
}

I have a basic stub class to implement that as well as an abstract class for "document-based" pages:

package frostillicus.controller;

import iksg.JSFUtil;

import javax.faces.context.FacesContext;

import com.ibm.xsp.extlib.util.ExtLibUtil;
import com.ibm.xsp.model.domino.wrapped.DominoDocument;

public class BasicDocumentController extends BasicXPageController implements DocumentController {
	private static final long serialVersionUID = 1L;

	public void queryNewDocument() throws Exception { }
	public void postNewDocument() throws Exception { }
	public void queryOpenDocument() throws Exception { }
	public void postOpenDocument() throws Exception { }
	public void querySaveDocument() throws Exception { }
	public void postSaveDocument() throws Exception { }

	public String save() throws Exception {
		DominoDocument doc = this.getDoc();
		boolean isNewNote = doc.isNewNote();
		if(doc.save()) {
			JSFUtil.addMessage("confirmation", doc.getValue("Form") + " " + (isNewNote ? "created" : "updated") + " successfully.");
			return "xsp-success";
		} else {
			JSFUtil.addMessage("error", "Save failed");
			return "xsp-failure";
		}
	}
	public String cancel() throws Exception {
		return "xsp-cancel";
	}
	public String delete() throws Exception {
		DominoDocument doc = this.getDoc();
		String formName = (String)doc.getValue("Form");
		doc.getDocument(true).remove(true);
		JSFUtil.addMessage("confirmation", formName + " deleted.");
		return "xsp-success";
	}

	public String getDocumentId() {
		try {
			return this.getDoc().getDocument().getUniversalID();
		} catch(Exception e) { return ""; }
	}

	public boolean isEditable() { return this.getDoc().isEditable(); }

	protected DominoDocument getDoc() {
		return (DominoDocument)ExtLibUtil.resolveVariable(FacesContext.getCurrentInstance(), "doc");
	}
}
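
A minimal version of the stub referenced above would just be no-op implementations of the interface, along these lines:

package frostillicus.controller;

import javax.faces.event.PhaseEvent;

public class BasicXPageController implements XPageController {
	private static final long serialVersionUID = 1L;

	public void beforePageLoad() throws Exception { }
	public void afterPageLoad() throws Exception { }

	public void afterRestoreView(PhaseEvent event) throws Exception { }

	public void beforeRenderResponse(PhaseEvent event) throws Exception { }
	public void afterRenderResponse(PhaseEvent event) throws Exception { }
}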

I took it all one step further in the direction of "convention over configuration" as well: I created a ViewHandler that looks for a class in the "controller" package with the same name as the current page's Java class (e.g. "/Some_Page.xsp" → "controller.Some_Page") - if it finds one, it instantiates it; otherwise, it uses the basic stub implementation. Once it has the class created, it plunks it into the viewScope and creates some MethodBindings to tie beforeRenderResponse, afterRenderResponse, and afterRestoreView to the object without having to have that code in the XPage (it proved necessary to still include code in the XPage for the before/after page-load and document-related events).
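
The heart of that convention-based lookup amounts to something like this sketch (simplified; the real ViewHandler would do this during view creation and also handle wiring up the MethodBindings and the viewScope entry):

protected XPageController resolveController(final String viewId) { // e.g. "/Some_Page.xsp"
	String name = viewId.substring(viewId.lastIndexOf('/') + 1);   // "Some_Page.xsp"
	if(name.endsWith(".xsp")) {
		name = name.substring(0, name.length() - ".xsp".length()); // "Some_Page"
	}
	try {
		return (XPageController)Class.forName("controller." + name).newInstance();
	} catch(ClassNotFoundException cnfe) {
		return new BasicXPageController(); // fall back to the stub implementation
	} catch(Exception e) {
		throw new RuntimeException(e);
	}
}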

So on its own, the setup I have above doesn't necessarily buy you much. You save a bit of repetitive code for standard CRUD pages when using the document-controller class, but that's about it. The real value for me so far has been a clarification of what goes where. Previously, if I wanted to run some code on page load, or attached to a button, or to set a viewScope value, or so forth, it could go anywhere: in a page event, in a button action, in a dataContext, in a this.value property for an xp:repeat, or any number of other places. Now, if I want to evaluate something, it's going to happen in one place: the controller class. So if I have a bit of information that needs recalculating (say, the total cost of a shopping cart), I make a getTotalCost() method on the controller and set the value in the XPage to pageController.totalCost. Similarly, if I need to set some special values on a document on load or save, I have a clear, standard way to do it:

package controller;

import com.ibm.xsp.model.domino.wrapped.DominoDocument;
import frostillicus.controller.BasicDocumentController;

public class Projects_Contact extends BasicDocumentController {
	private static final long serialVersionUID = 1L;

	@Override
	public String save() throws Exception {
		DominoDocument doc = this.getDoc();
		doc.setValue("FullName", ("CN=" + doc.getValue("FirstName") + " " + doc.getValue("LastName")).trim() + "/O=IKSGClients");

		return super.save();
	}

	@Override
	public void postNewDocument() throws Exception {
		super.postNewDocument();

		DominoDocument doc = this.getDoc();
		doc.setValue("Type", "Person");
		doc.setValue("MailSystem", "5");
	}
}

It sounds like a small thing - after all, who cares if you have some SSJS in the postNewDocument event on an XPage? And, besides, isn't that where it's supposed to go? Well, sure, when you only have a small amount of code, doing it inline works fine. However, as I've been running into for a while, the flexibility of XPages makes them particularly vulnerable to becoming tangled blobs of repeated and messy code. By creating a clean, strict system from the start, I've made it so that the separation is never muddied: the XPage is about how things appear and the controller class is about how those things get to the XPage in the first place.

We'll see how it holds up as I use it more, but so far this "controller"-class method strikes a good balance, providing code cleanliness without getting too crazy on the backing framework (as opposed to some of the "model" systems I've tried making).

My Current Data-Source Musings

Mon Nov 26 18:41:48 EST 2012

Tags: mvc

My quest to find the ideal way to access data in XPages continues. Each time I make a new project, I come up with a slight variation on my previous theme, improving in one or more of the various conflicting factors involved. The problem, as always, is that the standard xp:dominoView and xp:dominoDocument data sources alone are insufficient to properly separate business logic from presentation. On the other hand, accessing view data in a way that is both efficient and flexible via just the standard Java API is a non-trivial prospect. The advantage to using xp:dominoView in particular is that IBM already did an incredible amount of work making performance passable when dealing with cases of resorting, key-lookups, FT searches, response hierarchies, and the like.

The ideal method would build upon the efficient-access and serialization work already done while retaining that Rails-like collection-fetching and relation code from my forum attempts. Accordingly, I'm trying something a bit different with my latest project: I extended xp:dominoView to be able to take another parameter, modelClass, that is the class name of the objects it should return, rather than just DominoViewEntry. In turn, the objects I'm using wrap the real DominoViewEntry and allow for transparent access to the document (as needed, e.g. when a requested field is not a column) and the usual benefits of writing Java classes: namely, I can write code to handle field-change side effects without having to care about the logic behind that in the XPage itself.

This is sort of a half-formed idea so far, and I'm not 100% comfortable with it, but I think it's promising. Ideally, there wouldn't be any data sources written into the XPage itself at all (since it should be just a View), but that's sort of a necessity given the current state of things. This should be a reasonable next-best thing, and maybe a stepping stone to a new, faster, and cleaner data-access method.

The end goal of all this is to make it so that creating a new data "type" is very straightforward but still more declarative and structured than just making a new form name in the xp:dominoDocument data source. There should be classes to go with each type, but those classes should be very simple and easy to write. The quest continues!

XPages MVC: Experiment II, Part 4

Mon Jun 04 19:57:00 EDT 2012

Tags: xpages mvc
  1. XPages MVC: Experiment I
  2. XPages MVC: Experiment II, Part 1
  3. XPages MVC: Experiment II, Part 2
  4. XPages MVC: Experiment II, Part 3
  5. XPages MVC: Experiment II, Part 4

To finish up my series on the infrastructure of my guild forums app, I'd like to mention a couple of the down sides I see with its current implementation, which I'd generally want to fix or avoid if re-implementing it today.

Roll-Your-Own

One of the strengths of this kind of MVC setup is that it works to separate the front-end code from the data source. It would be (relatively) easy for me to replace the model and collection classes with versions that use a SQL database or non-Notes document storage should I so choose. That's a double-edged sword, though: because I'm not using any of the built-in data sources and controls with Domino knowledge, I had to do everything myself. This means a loss of both some nice UI features - like the rich text editor's ability to upload images inline - and the XPage data sources' persistence and caching features.

The collection code deals with View and ViewEntryCollection classes directly, but they can't be serialized, so I had to write my own methods to detect when the object is no longer valid (say, when doing a partial refresh) and re-fetch the collection. This was good in the sense that I learned more about what is and is not efficient in Domino. For example, getNthEntry(...) on a ViewEntryCollection grabbed via getAllEntriesByKey(...) is fast. Conversely, while retrieving data in a view is usually significantly faster than getting the same data from a document, there's a point where the view index size is large enough that, provided you're fetching only a few documents at a time, it's better to use the document. With a lot of work (and a LOT of collection caching), I ended up with something that's quite fast... but since I wrote all the code myself as part of a side project, it was also pretty bug-prone for the first couple weeks after deployment.

Maybe I'd be able to find ways to piggyback more on the built-in functionality if I re-did it, but, as it stands, it's kind of hairy.

Inconsistent Separation

This isn't a TERRIBLE problem, but I'm kind of annoyed with some of the inconsistent choices I made about where DB-specific code goes. For example, collection managers have very little Domino-specific code... except when they need to know about sorting and searching. Similarly, almost all of the code in the model objects deals with pure Java objects and the clean collections API... except the save() methods, which are big blobs of Domino API work. That makes some sense, but the model classes don't handle creation from the database - that's in the collection classes. Like I said, it's not the end of the world, and it's consistent within its own madness, but there are some weird aspects and internal leaks I wouldn't mind cleaning up.

Lack of Data Sources

This is sort of the flip side to my first problem. All of my collection managers are just objects and the collections are just Lists. This is good in the sense that these things work well in Server JavaScript and with all of the built-in UI elements, but it'd be really nice to write my own custom data sources so I can explicitly declare what data I want on the page - using a xp:dataContext or bit of JavaScript in a value property just feels "dirty", like I'm not embracing it completely. However, this problem isn't as much an architectural one as it is a lack of education - I haven't bothered to learn to write my own data sources yet (even though I expect it's more or less straightforward, for Java), so I could remedy that easily enough.

No Proper Controller

This is just another manifestation of the root cause that has me looking into all this MVC stuff to begin with. Even though my collections and models are better than dealing with raw Domino objects, there's still too much of a tie between the UI and the back-end representation, as well as the requisite dependence on the Domino HTTP stack to handle routing requests. Of course, I'm still working on the correct solution to this.

 

Overall, my structure as written has been serving me well. New data elements are pretty easy to set up - I wouldn't mind not having to write three classes per type, but hey, it's Java - and working with them is a breeze. Though it took a while to get everything working, now that it is, I can make tons of UI changes without worrying much about the actual data representation. I can change the way data is stored or add on-load or on-save computation beyond the capabilities of Formula language without changing anything in the XPages themselves. So in those senses, my forum back-end code is a huge step up from working with xp:dominoDocuments directly, but it still doesn't feel completely "right".

XPages MVC: Experiment II, Part 3

Mon May 28 20:00:00 EDT 2012

Tags: xpages mvc
  1. XPages MVC: Experiment I
  2. XPages MVC: Experiment II, Part 1
  3. XPages MVC: Experiment II, Part 2
  4. XPages MVC: Experiment II, Part 3
  5. XPages MVC: Experiment II, Part 4

Continuing on from my last post, I'd like to go over a couple specifics about how I handle fetching appropriate collections of objects from a view and a couple areas where I saved myself some programming hassle.

As I mentioned before, both the "manager" and "collection" classes inherit from abstract classes that handle a lot of the dirty work. The AbstractCollectionManager class is by far the smaller of the two, containing mostly convenience methods and a couple overloaded methods for generating the collection objects. The central one is pretty straightforward:

protected AbstractDominoList<E> createCollection(String sortBy, Object restrictTo, String searchQuery, int preCache, boolean singleEntry) {
	AbstractDominoList<E> collection = this.createCollection();
	collection.setSortBy(sortBy);
	collection.setRestrictTo(restrictTo);
	collection.setSearchQuery(searchQuery);
	collection.setSingleEntry(singleEntry);
	collection.preCache(preCache);
	return collection;
}

The first three properties actually determine the nature of the collection, while the last two set some optimization bits for when the code knows ahead of time how many elements it expects (such as a by-ID lookup or a "recent news" list that will only ever display 5 entries). The createCollection() call at the start is a method overridden in each implementing class to return a basic collection object, which avoids Java visibility issues.
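
As a hypothetical example of how a concrete manager might use it, a "recent news" fetcher could hint that it only ever needs five entries. The method name, the NewsItem model class, and the column name here are illustrative, not from the real app:

// Illustrative only: sort descending by a date column, no key restriction or FT
// search, pre-cache 5 entries, and expect more than a single entry
public List<NewsItem> getRecentNews() {
	return this.createCollection("$Date-desc", null, null, 5, false);
}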

The AbstractDominoList class is much longer; I won't go into too much detail, but a couple parts are pertinent. As I've mentioned, it implements the List interface (via extending AbstractList), which means it just needs to provide get and size methods. The way these work is by checking to see if it houses a valid cached ViewEntryCollection and, if it does, translating the method call to get the indexed entry as an object or the size. If the collection hasn't been fetched yet or if it's no longer valid (say, if it was recycled), it uses the view-location information from the implementing class and any parameters on the object to construct another one.
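
Boiled down, the List side of that looks something like the following. This is a simplified sketch: checkCache() is a stand-in for the validity-check-and-re-fetch logic, and I'm assuming the collection wrapper passes getNthEntry and getCount through to the real ViewEntryCollection.

// Simplified sketch of the List plumbing; the real methods have more branches
public E get(int index) {
	try {
		this.checkCache(); // re-fetches the View and collection if they were recycled away
		ViewEntry entry = this.cachedEntryCollection.getNthEntry(index + 1); // 1-based
		return this.createObjectFromViewEntry(entry);
	} catch(Exception e) {
		throw new RuntimeException(e);
	}
}

public int size() {
	try {
		this.checkCache();
		return this.cachedEntryCollection.getCount();
	} catch(Exception e) {
		throw new RuntimeException(e);
	}
}

When the cache check decides the collection needs rebuilding, the re-fetch happens in a couple of stages.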

First, it fetches and configures the view properly:

Database database = this.getDatabase();
View view = database.getView(this.getViewName());
view.refresh();

String sortColumn = this.sortBy == null ? null :
	this.sortBy.toLowerCase().endsWith("-desc") ? this.sortBy.substring(0, this.sortBy.length() - 5) : this.sortBy;
boolean sortAscending = this.sortBy == null || !this.sortBy.toLowerCase().endsWith("-desc");

if(this.searchQuery != null) {
	view.FTSearchSorted(this.searchQuery, 0, sortColumn, sortAscending, false, false, false);
} else if(this.sortBy != null && !this.sortBy.equals(this.getDefaultSort())) {
	view.resortView(sortColumn, sortAscending);
} else {
	view.resortView();
}

Then, it uses code similar to this to store the collection in its cache (I say "similar to" because there are additional branches and methods, but this is the idea):

if(this.restrictTo == null) {
	ViewEntryCollection collection = view.getAllEntries();
	this.cachedEntryCollection = new DominoEntryCollectionWrapper(collection, view);
} else {
	ViewEntryCollection collection = view.getAllEntriesByKey(this.restrictTo, true);
	this.cachedEntryCollection = new DominoEntryCollectionWrapper(collection, view);
}

The other part of the get method is translating the ViewEntry into an object, which is done via a method similar to this monster:

protected E createObjectFromViewEntry(ViewEntry entry) throws Exception {
	Class currentClass = this.getClass();
	Class modelClass = Class.forName(JSFUtil.strLeftBack(currentClass.getName(), ".") + "." + JSFUtil.strLeftBack(currentClass.getSimpleName(), "List"));

	// Create an instance of our model object
	E object = (E)modelClass.newInstance();
	object.setUniversalId(entry.getUniversalID());
	object.setDocExists(true);

	// Loop through the declared columns and extract the values from the Entry
	Method[] methods = modelClass.getMethods();
	String[] columns = this.getColumnFields();
	Vector values = entry.getColumnValues();
	for(Method method : methods) {
		for(int i = 0; i < columns.length; i++) {
			if(values != null && i < values.size() && columns[i].length() > 0 && method.getName().equals("set" + columns[i].substring(0, 1).toUpperCase() + columns[i].substring(1))) {
				String fieldType = method.getGenericParameterTypes()[0].toString();
				if(fieldType.equals("int")) {
					if(values.get(i) instanceof Double) {
						method.invoke(object, ((Double)values.get(i)).intValue());
					}
				} else if(fieldType.equals("class java.lang.String")) {
					method.invoke(object, values.get(i).toString());
				} else if(fieldType.equals("java.util.List<java.lang.String>")) {
					method.invoke(object, JSFUtil.toStringList(values.get(i)));
				} else if(fieldType.equals("class java.util.Date")) {
					if(values.get(i) instanceof DateTime) {
						method.invoke(object, ((DateTime)values.get(i)).toJavaDate());
					}
				} else if(fieldType.equals("boolean")) {
					if(values.get(i) instanceof Double) {
						method.invoke(object, ((Double)values.get(i)).intValue() == 1);
					} else {
						method.invoke(object, values.get(i).toString().equals("Yes"));
					}
				} else if(fieldType.equals("double")) {
					if(values.get(i) instanceof Double) {
						method.invoke(object, (Double)values.get(i));
					}
				} else if(fieldType.equals("java.util.List<java.lang.Integer>")) {
					method.invoke(object, JSFUtil.toIntegerList(values.get(i)));
				} else {
					System.out.println("!! unknown type " + fieldType);
				}
			}
		}
	}
	return object;
}

Yes, I am aware that it's O(m*n). Yes, I am aware I could move values != null outside the loop. Yes, I am aware I do the same string manipulation on each column name many times. Shut up.

Anyway, that code finds the model class and loops through its methods, looking for set* methods for each column (the column list being specified in the implementation class). When it finds one that matches, it checks the data type of the property, and invokes the method with an appropriately-cast or -converted column value. The result is that the implementation class only needs to provide the code needed for fetching any data that isn't in the view, such as rich text. At one point, I think I had it so that it read the view column titles and used those as field names, but I stopped doing that for some reason... maybe it was too slow, even for this method.

In any event, next time, I'll go over some of the disadvantages that I've found with this method that don't involve big-O notation.

XPages MVC: Experiment II, Part 2

Thu May 24 20:43:00 EDT 2012

Tags: xpages mvc
  1. XPages MVC: Experiment I
  2. XPages MVC: Experiment II, Part 1
  3. XPages MVC: Experiment II, Part 2
  4. XPages MVC: Experiment II, Part 3
  5. XPages MVC: Experiment II, Part 4

Continuing on from my last post, I'd like to go over the collection and model structures I used in my guild-forums app.

I set up the data in the Notes DB in a very SQL-ish way (in part because I initially considered not using Domino). Each Form has a single equivalent view (for the most part), which is uncategorized and sorted by default by an ID column. There is one column for each column I want to sort by and, in the case of simple documents, one for each applicable field, containing only the field name; additionally, I created $-named "utility" columns for when I wanted to sort by a given bit of computed data or needed a secondary sort column. For the most part, the data is normalized, but I had to give in to reality in a few situations, for things like sorting Topics by their latest Post dates.

For collection classes in Java, each collection class file contains one public class for the "manager" and one list implementation class, both of which extend from Abstract versions, so the actual implementations may have very little code. For example, here is the "Groups" Java file (minus some Serialization details):

public class Groups extends AbstractCollectionManager<Group> {
	protected AbstractDominoList<Group> createCollection() {
		return new GroupList();
	}
}

class GroupList extends AbstractDominoList<Group> {
	public static final String BY_ID = "$GroupID";
	public static final String BY_NAME = "$ListName";
	public static final String BY_CATEGORY = "ListCategory";
	public static final String DEFAULT_SORT = GroupList.BY_ID;

	protected String[] getColumnFields() {
		return new String[] { "id", "name", "category", "description" };
	}
	protected String getViewName() {
		return "Player Groups";
	}
	protected String getDatabasePath() {
		return "wownames.nsf";
	}

	protected Group createObjectFromViewEntry(DominoEntryWrapper entry) throws Exception {
		Group group = super.createObjectFromViewEntry(entry);
		Document groupDoc = entry.getDocument();
		group.setMemberNames((List<String>)groupDoc.getItemValue("Members"));
		return group;
	}
}

The actual heavy lifting is done by the parent class, letting the specific classes be composed almost entirely of descriptions of where to find the view, what columns map to model fields, and any extra code that is involved with that object mapping. Other classes are larger, but follow the same pattern. For example, here's a snippet from the manager for Posts:

public class Posts extends AbstractCollectionManager<Post> {
	// [-snip-]

	public List<Post> getPostsForTag(String tag) throws Exception {
		return this.search("[Tags]=" + tag);
	}
	public List<Post> getPostsForUser(String user) throws Exception {
		return this.getCollection(PostList.BY_CREATED_BY, user);
	}
	public List<Post> getScreenshotPosts() throws Exception {
		return this.search("[ScreenshotURL] is present", PostList.BY_DATETIME_DESC);
	}
}

The parent abstract class provides utility methods like getCollection and search that take appropriate arguments for sorting, searching (I love you, FTSearchSorted), and keys.
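
Conceptually, those convenience methods just funnel into createCollection(...). Something along these lines - this is a sketch, and the real overloads and signatures differ a bit:

// Sketch of the convenience methods on the abstract manager; everything
// ultimately delegates to createCollection(...)
public List<E> getCollection() {
	return this.createCollection(null, null, null, 0, false);
}
public List<E> getCollection(String sortBy, Object restrictTo) {
	return this.createCollection(sortBy, restrictTo, null, 0, false);
}
public List<E> search(String query) {
	return this.createCollection(null, null, query, 0, false);
}
public List<E> search(String query, String sortBy) {
	return this.createCollection(sortBy, null, query, 0, false);
}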

As for the individual model objects, they start out as your typical bean: a bunch of fields with simple data types and straightforward getters and setters. Additionally, they contain "relational" methods to fetch associated other objects, such as the Forum that contains the Topic, or a list of the Posts within it. Finally, the model class is responsible for actually saving back to the Domino database, as well as taking care of any business rules, like making sure that the Topic document is updated with a new Post's date/time.
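
In abbreviated form, a model object's shape is something like the following. The fields and the getPostsForTopicId(...) manager method are illustrative rather than the real code:

import java.io.Serializable;
import java.util.Date;
import java.util.List;

// Abbreviated, illustrative sketch of a model object: bean fields, a "relational"
// accessor that delegates to a collection manager, and save() owning the DB work
public class Topic implements Serializable {
	private static final long serialVersionUID = 1L;

	private String id;
	private String title;
	private Date lastPostDate;

	public String getId() { return this.id; }
	public void setId(String id) { this.id = id; }
	public String getTitle() { return this.title; }
	public void setTitle(String title) { this.title = title; }
	public Date getLastPostDate() { return this.lastPostDate; }
	public void setLastPostDate(Date lastPostDate) { this.lastPostDate = lastPostDate; }

	// "Relational" accessor: #{topic.posts} ends up here and delegates to the manager
	public List<Post> getPosts() throws Exception {
		return new Posts().getPostsForTopicId(this.id);
	}

	// The model owns saving: write the bean fields back to the backing document and
	// enforce business rules (e.g. stamping the latest Post date/time) here
	public void save() throws Exception {
		// Domino API work elided in this sketch
	}
}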

The result is that a new object type is easy(-ish) to set up, with all the code written contributing specifically to the task at hand. And, since the nuts and bolts of XPages are Lists and Maps, working with these objects is smooth, taking advantage of Expression Language's cleanliness and Server JavaScript's bean-notation support in the UI, plus Java's clunky-but-still-useful collections features.

Next time, I'll go over some of the specifics of how I deal with fetching collections and individual records, use reflection to save me some coding hassle, and other tidbits from the giant messes of code that make up my abstract classes. After that, I think I'll make a fourth post to detail some down sides to my approach as implemented.

XPages MVC: Experiment II, Part 1

Wed May 23 15:38:00 EDT 2012

Tags: xpages mvc
  1. XPages MVC: Experiment I
  2. XPages MVC: Experiment II, Part 1
  3. XPages MVC: Experiment II, Part 2
  4. XPages MVC: Experiment II, Part 3
  5. XPages MVC: Experiment II, Part 4

As I've mentioned a couple times, my largest XPages app to date is the site I did for my guild's forums. The forum part itself isn't particularly amazing, considering it's almost harder to NOT write a forum in Domino than it is to write one, but it gave me a chance to try my hand at abstracting data access away from the XPage.

I put "Part 1" in the title because I figure it will be best to break the topic up into at least three parts: the overall idea and the result, the structure of the model and collection classes, and some nitty-gritty bits about how I get the data out of the back-end classes.

My first draft of this site was straightforward, from an XPages perspective: I mapped XPages to the corresponding Forms and used a bit of Server JavaScript here and there to pull in bits of support information. This worked pretty well at first, but quickly ran into code and speed scalability issues. Just looking at a "Topic" page, the following data needs to be looked up from outside the Topic document itself:

  • The name and ID of the parent Forum document
  • The name and ID of the parent Forum's parent Category document
  • Whether or not the topic is marked as a "favorite" for the current user, which is stored in a prefs doc for the user
  • The collection of Post documents for the Topic
  • For each Post document:
    • Summary and rich text data from the document
    • The display name, title, and signature of the posting user, which are stored in their Person document
    • The number of posts from the posting user that are visible to the current user, updated live (heh)
    • Whether or not the current user has rights to edit the Post document
  • Any surrounding page elements, such as the count of unread topics from the front page, the "who's online" list, etc.

As you can imagine, writing each one of those as inline SSJS turned into a hairy, slow mess. And that's just for the Topic page! That's leaving out the other pages, such as the forum index, which has to show a tree hierarchy of Categories and Forums with up-to-date Topic and Post counts.

I tried wrapping these things up into SSJS functions in a script library, which helped a LITTLE, but it still sucked. The clear solution was to wrap these all in proper model objects and, after a number of revisions, I came up with a pretty good solution.

Conceptually, the back-end structure consists of:

  • Collection Managers. These are analogous to Views, in that there's one per document type and they communicate with a similarly-named uncategorized View with many sortable columns. I write methods in these managers to allow for returning an appropriate collection for the circumstance (e.g. getPostsForForumId(...) and getPostsForTag(...)), as well as generic methods for getting arbitrary collections for a given key, sort column, and FT search query.
  • Collections. These are analogous (and often wrap) ViewEntryCollections. They implement the List interface to allow for use in xp:repeats and handle efficiently getting the needed model objects and re-fetching the View and Collection if they're recycled away.
  • Model Objects. These are what you'd think: Post, Topic, etc. These provide getters/setters for the fields of the document, getters for associated collections (e.g. topic.getPosts()), and handle saving the document back to the DB.

The net result is that pages went from having giant blocks of lookup code (or SSJS function calls that pointed to giant block functions) to nice little EL expressions like #{category.forums} and #{forum.latestPost.createdBy}. As importantly, this allowed for a lot of caching to make these relational operations speedy when dealing with live data in a Domino DB.

Next time, I'll go into some specifics about how I structured the collection and model classes.

XPages MVC: Experiment I

Tue May 22 17:28:00 EDT 2012

Tags: xpages mvc
  1. XPages MVC: Experiment I
  2. XPages MVC: Experiment II, Part 1
  3. XPages MVC: Experiment II, Part 2
  4. XPages MVC: Experiment II, Part 3
  5. XPages MVC: Experiment II, Part 4

Now that I've had a bit of time, I've started trying out some ideas for new ways to do XPage development. Specifically, I'm trying out the "XPages as the Controller" setup I pondered last time. The general goal is this:

  • There would be one XPage for each zone of concern (I don't know the right terminology): Posts.xsp, Users.xsp, etc.
  • The XPage itself would have almost nothing on it - it would exist as a trigger for the server to load an appropriate Controller class, which would in turn handle the output, usually by picking a custom control by name.
  • The custom control will handle all the appearance logic... and ONLY the appearance logic. The Controller will feed in any variables and data sources that are needed, and then the custom control will just reference those values entirely via EL.
  • The Controller will marshal the data: it will either create Domino View data sources directly or use some wrapper Model class, in order to allow the XSP environment to handle efficiency and serialization.

So far, I've made some interesting progress. I originally set out to use a ViewHandler class to find the action and feed in the variables, but I ran into trouble getting to the viewScope or an equivalent in time for on-page-load value bindings to access them - either I was too early (before createView) or too late (after createView). What I really wanted was the beforePageLoad event, so I did just that: I created a view-scoped managed bean called "controller" that has some code to check what XPage is being loaded and to find a Controller class of the equivalent name (e.g. "Posts.xsp" → "xsp.controller.Posts"). Then, I do beforePageLoad="#{controller.control}" (I apologize for the names - I may change it to "router"), and it works nicely. The controller manager thing loads the class, asks it to figure out the current action ("list", "tag", etc.), and then tries to call a method by that name via reflection.
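
Boiled way down, the bean's dispatch method looks conceptually like this. The class name and the findAction(...) helper are illustrative stand-ins, not the actual implementation:

import java.io.Serializable;

import javax.faces.context.FacesContext;

// Illustrative sketch of the view-scoped "controller" bean; bound on the page as
// beforePageLoad="#{controller.control}"
public class ControllerManager implements Serializable {
	private static final long serialVersionUID = 1L;

	public void control() throws Exception {
		FacesContext context = FacesContext.getCurrentInstance();

		// e.g. "/Posts.xsp" -> "Posts" -> "xsp.controller.Posts"
		String viewId = context.getViewRoot().getViewId();
		String pageName = viewId.substring(viewId.lastIndexOf('/') + 1).replace(".xsp", "");
		Object pageController = Class.forName("xsp.controller." + pageName).newInstance();

		// Figure out the current action and call the matching method, e.g. Posts.list()
		String action = this.findAction(context);
		pageController.getClass().getMethod(action).invoke(pageController);
	}

	// Placeholder: the real logic reads the URL's path info and falls back to a
	// per-page default action
	protected String findAction(FacesContext context) {
		return "list";
	}
}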

It sounds a bit weird, and the implementation is sort of half-baked, but the result is that you end up with a class with a structure like this (with actual implementation stuff removed):

public class Posts extends AbstractController {
	public void list() { ... }
	public void show() { ... }
	public void tag() { ... }
}

Each of those methods corresponds to an action that can either be implied (like "list" being the default for Posts.xsp, established via other code) or derived from the URL, like "/Posts.xsp/show/1" to show the post with ID "1". The code in the method would fetch the appropriate object or collection and attach it to the XPage (as a dynamically-added data source, for example). Then, the XPage would load up an appropriate custom control for the action (for example, "pages_list" or "pages_show"), which would know what values to look for in the view scope.

I'm going to keep working on this, but I can think of a couple potential muddy areas:

  • I'm not sure how real data-backed custom controls (e.g. a list of posts by month in the sidebar) fit into this. You can add beforePageLoad code in a custom control, but the view root is still the main page you're loading, so it's not as clean.
  • I'm not sure if it's worth trying to shuttle data source actions through the controller (like "save" for a document) or best to reference the methods in the model from the XPage directly. Since the environment is already "impure", I don't know how much bending-over-backwards is warranted.
  • The whole thing may be an exercise in cramming Rails-isms where they don't fit, like when someone familiar with SQL first starts programming for Domino. Just because something works doesn't mean it's correct for the platform.

I'm sure there will be more sticking points, too. Overall, though, this feels mostly good, or at least conceptually better than the other development paradigms I've tried with XPages. As the post title indicates, though, I expect this to be experiment number 1 of many.

More Musing About Controllers

Sun May 13 08:45:00 EDT 2012

Tags: xpages mvc

I've been thinking more about this MVC thing, thanks to re-learning Rails. I'm fairly convinced that moving as much code as possible out of the XPage and into wrapper objects is a big improvement (convinced enough that it feels silly that I wasn't doing it already), but it still feels not quite right.

Two parts of the MVC trinity are straightforward in modern Domino development: Forms and wrapper objects are the Model and XPages are the View. The Controller is where things get muddy. Off the top of my head, routing and controlling are handled by:

  • Domino's built-in URL handling
  • "Display XPage Instead" on Forms and Views
  • Web rules in the Directory
  • navigationRules on the XPage itself
  • The XPage's names themselves, as used in URLs
  • Anything fancy you do with document mode switching and multi-page XPages

It's kind of a mess, and very few of these are really programmable. You can do a lot with web rules, but the application itself doesn't know about them, so you end up with code that's coupled with a specific server configuration - very unsafe. Domino's built-in URLs work well enough for basic operations - show a view, open a document, edit a document - but don't understand more complex concepts, like a shopping cart "Checkout" (yes, I just re-did the depot tutorial from the Rails book).

Honestly, it might be enough to move the action code out of the XPage and into the wrapper classes. That's a big step in readability, maintainability, flexibility, and error prevention, and most sites get along just fine with a lot less. Still, it doesn't feel right.

I don't really have a better option, though. This morning, I've been kicking the idea around that you could treat the XPage itself as the Controller, sacrificing URL and code cleanliness for some power. In this setup, a main XPage would correspond to an aspect of your application (say, "AdminSite") or management tools for a data type (say, "Posts"). You would construct your URLs by chaining path info past the ".xsp" part and then use the main page (or, ideally, a shared library across all pages) to figure out what the real request is. You'd end up with an extra level to your URLs, like "Posts.xsp/search?q=foo". It's horrifyingly ugly, especially right after "whatever.nsf", but it makes a sort of sense: ":application/:controller/:action?:parameters". The alternative now is to have multiple XPages that each deal with very similar things and, if you have a large app, name them in a structured way anyway, like "Posts_List.xsp" and "Posts_Search.xsp?q=foo".
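
Carving up that kind of URL wouldn't be too bad, at least. Something along these lines, purely as an illustration and ignoring the query string:

// Hypothetical parsing of the path portion after the NSF name, e.g. "/Posts.xsp/search"
String pathInfo = "/Posts.xsp/search";
String[] bits = pathInfo.split("/");                 // ["", "Posts.xsp", "search"]
String controller = bits[1].replace(".xsp", "");     // "Posts"
String action = bits.length > 2 ? bits[2] : "list";  // "search", defaulting to "list"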

You'd have to put the individual action code in custom controls with loaded parameters to match the action - this would add yet another type of element to the custom controls list ("subforms", actual custom controls, and now the View component of each action), but it may be a necessary evil. The Extension Library has something like this, though not exact, in its "Core_DynamicPage" demo.

The whole idea seems a bit half-baked at the moment but worth investigation, so I'll have to tinker with it a bit when I have some time.