Rob Harrop

Alumni
Recent blog posts by Rob Harrop

SpringSource dm Server 1.0 RC2 Released

Engineering | September 11, 2008 | ...

I'm happy to announce the availability of RC2 of the SpringSource dm Server, previously known as the SpringSource Application Platform. This release is feature complete and, barring any serious issues, will become 1.0 GA in two weeks' time.

This release fixes a few critical bugs, upgrades to Tomcat 6.0.18 and updates all code, documentation and supporting materials to reflect the new name.

Due to the renaming of the product, PlatformOsgiBundleXmlWebApplicationContext has been renamed to ServerOsgiBundleXmlWebApplicationContext and moved from the com.springsource.platform.web.dm package to the com.springsource.server.web.dm package. Thus, if you are setting the contextClass for Spring MVC's ContextLoaderListener or DispatcherServlet in the web.xml of a Shared Services WAR, be sure to change the fully qualified class name to com.springsource.server.web.dm.ServerOsgiBundleXmlWebApplicationContext.
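For example, a Shared Services WAR that sets contextClass as a context-param for the ContextLoaderListener would now contain an entry along these lines (a minimal sketch of just the relevant fragment; for the DispatcherServlet the same value goes in an init-param instead):

<context-param>
    <param-name>contextClass</param-name>
    <param-value>com.springsource.server.web.dm.ServerOsgiBundleXmlWebApplicationContext</param-value>
</context-param>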

Using EclipseLink on the SpringSource Application Platform

Engineering | July 17, 2008 | ...

This week the EclipseLink team announced the release of EclipseLink 1.0. I've been using EclipseLink on S2AP for a while now; in fact, I used EclipseLink when developing our JPA load-time-weaving support.

We've yet to upgrade our internal usage to 1.0 - our beta9 was tagged just before the announcement - but I wanted to demonstrate how effectively the pairing works in an OSGi environment.

In the 1.2.0 version of the S2AP Petclinic sample, we released the EclipseLink implementation of the Clinic back-end. The back-end is a drop-in replacement for the JDBC back-end that was previously the only option.

To build the EclipseLink version of Petclinic, simply open a terminal window in the Petclinic root directory and run:

cd org.springframework.petclinic.eclipselink
ant collect-provided jar
This will create the Petclinic EclipseLink PAR file in org.springframework.petclinic.eclipselink/target/artifacts/org.springframework.petclinic.eclipselink.par and will put all the required bundles in org.springframework.petclinic.eclipselink/target/par-provided/bundles/.

Running Petclinic EclipseLink

To run the Petclinic EclipseLink application, copy all the provided…

Running Spring Applications on OSGi with the SpringSource Application Platform

Engineering | May 02, 2008 | ...

A lot of people have been asking what exactly the SpringSource Application Platform does for Spring applications to make them run well under OSGi, over and above what you can get out of the box with OSGi and Spring Dynamic Modules. Adrian's post yesterday highlighted some of the general issues; now let's look at a few of the details.

The three most challenging aspects of running Spring applications on OSGi are:

  • Load-time weaving
  • Classpath scanning
  • Thread context classloader management

The remaining, but less interesting, issues include: JSP support, TLD scanning, annotation matching and resource lookups. Overall, there was a decent-sized set of issues that needed to be solved to make applications deploy smoothly.

Load-time weaving

Load-time weaving was one of the most problematic features to support in a robust manner. At the basic level, it requires hooking into the Equinox ClassLoader so that standard ClassFileTransformers can be attached and used during the defineClass calls. On top of this, many uses of LTW require access to a throwaway ClassLoader that can be used to inspect types to decide what needs to happen during the weave, without affecting the real ClassLoader.
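For readers unfamiliar with the hook being described, the standard JDK contract involved is java.lang.instrument.ClassFileTransformer. Here is a minimal sketch (not Platform code; a real weaver, such as a JPA provider's enhancer, would rewrite the bytes rather than just log the class name):

import java.lang.instrument.ClassFileTransformer;
import java.security.ProtectionDomain;

public class LoggingTransformer implements ClassFileTransformer {

    // Called with each class's bytes as the class is defined. Returning null
    // leaves the class untouched; returning a new byte[] replaces the bytes.
    public byte[] transform(ClassLoader loader, String className,
            Class<?> classBeingRedefined, ProtectionDomain protectionDomain,
            byte[] classfileBuffer) {
        System.out.println("Asked to weave: " + className);
        return null;
    }
}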

This base level of support was actually reasonably simple to achieve. The difficulty comes in when weaving is driven by classes in one bundle, but classes in another bundle need to be woven. This is pretty common in enterprise applications where one bundle contains domain entities and another contains types that use a JPA EntityManager. The Platform takes care of this complexity by ensuring that all bundles in an application can be woven with the appropriate ClassFileTransformers.

When you start propagating weaving across to other bundles, you really need to know when to stop. If you simply apply weaving to all bundles, then applications will interfere with each other. The Platform prevents this from happening by explicitly scoping weaving so that it applies only to modules in the application.

Another issue with LTW is that it complicates refresh. When a bundle is refreshed, OSGi will refresh all the bundles that depend on it. This means that, in the example I gave above, refreshing the domain bundle will cause the EntityManager bundle to be refreshed. However, refreshing the EntityManager bundle does not refresh the domain bundle, meaning that weaving can end up out of sync. The Platform handles this by propagating refresh to the other bundles that are affected by weaving.
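To make the refresh behaviour concrete, here is roughly what a refresh looked like through the standard OSGi API of the time (a hedged plain-OSGi sketch using PackageAdmin, not Platform code; it assumes the PackageAdmin service is available, and the bundle names are hypothetical):

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.packageadmin.PackageAdmin;

public final class RefreshSketch {

    // Refreshing the domain bundle also refreshes bundles wired to it (such as
    // the bundle that uses the EntityManager), but refreshing the EntityManager
    // bundle alone would not touch the domain bundle.
    public static void refreshDomain(BundleContext context, Bundle domainBundle) {
        ServiceReference ref = context.getServiceReference(PackageAdmin.class.getName());
        PackageAdmin packageAdmin = (PackageAdmin) context.getService(ref);
        try {
            packageAdmin.refreshPackages(new Bundle[] { domainBundle });
        } finally {
            context.ungetService(ref);
        }
    }
}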

Classpath scanning

With classpath scanning, the main issue is that Equinox doesn’t expose standard jar: and file: resources. The Platform puts an adapter in the middle so that libraries see the resource protocols that they expect. This has a nice side-effect of making a lot of third-party libraries work - it’s not just a fix for classpath scanning.
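As a reminder of what "classpath scanning" means in Spring terms, this is the kind of lookup that ends up resolving jar: and file: resources under the hood (a self-contained sketch using Spring's scanner; the com.myapp.repository package is hypothetical):

import java.util.Set;

import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider;
import org.springframework.core.type.filter.AnnotationTypeFilter;
import org.springframework.stereotype.Repository;

public class ScanSketch {

    public static void main(String[] args) {
        // Spring resolves classpath resources into jar: and file: URLs here -
        // exactly the protocols a stock Equinox ClassLoader does not expose.
        ClassPathScanningCandidateComponentProvider scanner =
                new ClassPathScanningCandidateComponentProvider(false);
        scanner.addIncludeFilter(new AnnotationTypeFilter(Repository.class));

        Set<BeanDefinition> candidates = scanner.findCandidateComponents("com.myapp.repository");
        for (BeanDefinition candidate : candidates) {
            System.out.println("Found component: " + candidate.getBeanClassName());
        }
    }
}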

Thread context classloader management

Many third-party libraries use the thread context ClassLoader to access application types and resources. Each bundle in OSGi has its own ClassLoader, so only one bundle's ClassLoader can be exposed as the thread context ClassLoader at any time. This means that if a third-party library needs to see types that are distributed across multiple bundles, it isn't going to work as expected.

The Platform fixes this by creating a ClassLoader that imports all the exported packages of every module in your application. This ClassLoader is then exposed as the thread context ClassLoader, enabling third-party libraries to see all the exported types in your application.
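For readers less familiar with the mechanics, the usual pattern for exposing a particular ClassLoader around a third-party call looks like this (a plain-Java sketch; in the Platform the application-scoped loader described above is created and managed for you):

import java.util.concurrent.Callable;

public class ContextClassLoaderSketch {

    // Expose the application-scoped ClassLoader for the duration of a
    // third-party call, then restore whatever was there before.
    public static <T> T withApplicationLoader(ClassLoader applicationLoader,
            Callable<T> work) throws Exception {
        Thread current = Thread.currentThread();
        ClassLoader previous = current.getContextClassLoader();
        current.setContextClassLoader(applicationLoader);
        try {
            return work.call();
        } finally {
            current.setContextClassLoader(previous);
        }
    }
}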

This is just a small cross-section of the issues that are addressed by the Platform but hopefully it gives you an idea of what the Platform means for Spring Framework users.

Introducing the SpringSource Application Platform

Engineering | April 30, 2008 | ...

After many months of feverish coding, I am pleased to announce the beta release of the SpringSource Application Platform 1.0.

At the beginning of 2007 we began discussing possible alternatives to the monolithic and heavyweight application servers with which Enterprise Java has become synonymous. Customers were looking for a platform that was lightweight, modular and flexible enough to meet their development and deployment needs.

The Spring and Tomcat pairing demonstrates that developers and operators can successfully use a lightweight platform in production. Despite the success of this combination, its lack of modularity and of explicit support for non-web applications limits its applicability and flexibility.

We set about building the SpringSource Application Platform to address these requirements and remove these limitations.

At the heart of the Platform is the Dynamic Module Kernel (DMK). The DMK is an OSGi-based kernel that takes full advantage of the modularity and versioning of the OSGi platform. The DMK builds on Equinox and extends its capabilities for provisioning and library management, as well as providing core functionality for the Platform.

SpringSource Application Platform Architecture

To maintain a minimal runtime footprint, OSGi bundles are installed on demand by the DMK provisioning subsystem. This allows an application to be installed into a running Platform and its dependencies to be satisfied from an external repository. Not only does this remove the need to manually install all your application dependencies, which would be tedious, but it also keeps memory usage to a minimum.

The DMK itself requires a minimal set of bundles to run, and is configured with a profile to control exactly the set of additional modules that are loaded. For example, the DMK does not require Tomcat to be present, but the default Platform profile includes Tomcat to allow web applications to be deployed. If you want to run a Platform without Tomcat, you can simply edit the profile and it won’t be installed. (If you try this - remember that removing web support means that web modules will no longer deploy, so delete the contents of the pickup directory so the platform won’t attempt to install the Admin and splash screen applications when it starts.) The default Platform configuration with the admin console installed takes only 15MB of memory.

One frustration I have always had with Enterprise Java is that applications are often shoe-horned into contrived silos, and that explicit support for different application types is lacking. Consider an application for an online store. This application has a web front-end, a message-driven order processing module, a batch-driven stock reordering module and a B2B web service module. Today, many applications like this would be packaged as a WAR or an EAR and the modules would look very similar, with little support for the differences in the module types. Interestingly, many people would refer to this as a web application, rather than an application with a web module.

In the SpringSource Application Platform, applications are modular and each module has a personality that describes what kind of module it is: web, batch, web service, etc. The Platform deploys modules of each personality in a personality-specific manner. For example, web modules are configured in Tomcat with a web context. Each module in the application can be updated independently of the other modules whilst retaining its identity as part of the larger application. Whatever kind of application you are building, the programming model remains standard Spring and Spring DM.

In the 1.0 Platform release we support the web and bundle personalities, which enable you to build sophisticated web applications. Future releases will include support for more personalities as detailed later.

Building Applications

The Platform supports applications packaged in three forms:

  1. Java EE WAR
  2. Raw OSGi bundles
  3. Platform Archive (PAR)

Standard WAR files are supported directly in the Platform. At deployment time, the WAR file is transformed into an OSGi bundle and installed into Tomcat. All the standard WAR contracts are honoured, and your existing WAR files should just drop in and deploy without change.

Any OSGi-compliant bundle can be deployed directly into the Platform, and can take full advantage of on-the-fly provisioning for any dependencies referred to by Import-Package and Require-Bundle.

The PAR format is the recommended approach for packaging and deploying applications for the Platform. A PAR is simply a collection of OSGi bundles (modules) grouped together in a standard JAR file, along with a name and a version that uniquely identify the application. The PAR file is deployed as a single unit into the Platform. The Platform will extract all the modules from the PAR and install them. Third-party dependencies will be installed on-the-fly as needed.

The PAR format has three main benefits over deploying the bundles directly into the Platform. Firstly, it’s just easier. An average-sized enterprise application might contain 12+ bundles - deploying these by hand will be far too cumbersome. Secondly, the PAR file forms an explicit scope around all the bundles in the application which prevents applications that use overlapping types or services from clashing with each other. This scope is also used by some advanced features such as load-time weaving to ensure that weaving of one application doesn’t interfere with weaving of another. Lastly, the PAR forms a logical grouping to define what modules are part of an application and what third-party dependencies the application has. This grouping is used by the management tools to provide a detailed view of the application. A typical PAR application looks like this:

PAR File Structure
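The name and version that identify the application live in the PAR's own manifest. A minimal sketch, assuming the Application-* headers supported by the Platform (the values shown are hypothetical):

Application-SymbolicName: com.myapp.store
Application-Name: Online Store
Application-Version: 1.0.0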

Dependencies between modules in an application are typically expressed using Import-Package and Export-Package. Dependencies on third-party libraries can be expressed in the same way, but for many libraries this is error-prone and time-consuming. When using a library such as Hibernate, you will typically have to import more packages than you initially expect. To get around this you could use Require-Bundle, but this has some semantic rough edges, such as split packages, where a logical package is split across two or more class loaders, causing problems at runtime. The Platform introduces two new mechanisms for referring to third-party dependencies: Import-Bundle and Import-Library. Import-Bundle is analogous to Require-Bundle, except that it prevents split packages and the other problems associated with Require-Bundle. Import-Library provides a mechanism to refer to all the packages exported by a group of bundles, for instance all the bundles in the Spring Framework, in a single declaration:

Bundle-SymbolicName: com.myapp.dao.jdbc
Bundle-Version: 1.0.0
Import-Bundle: org.apache.commons.dbcp;version="1.2.2.osgi"
Import-Library: org.springframework.spring;version="2.5.4.A"

Here I have a module bundle that depends on the Commons DBCP bundle and the Spring Framework library. The Spring Framework library contains all the bundles needed to use Spring in your application.

Import-Library and Import-Bundle expand under the covers into Import-Package and are therefore consistent with standard OSGi semantics.
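To make that concrete, the Import-Library declaration in the example above is conceptually rewritten into Import-Package entries covering every package exported by the library's bundles. The fragment below is purely illustrative - only a few real Spring package names are shown, and the actual entries and version constraints are determined by the library definition:

Import-Package: org.springframework.beans,
 org.springframework.context,
 org.springframework.jdbc.core,
 org.springframework.web.servlet,
 ...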

The Platform understands the personality of a module, and from this can infer how to configure the module’s execution environment. When deploying a web module, all the servlet infrastructure needed for a typical Spring MVC application is created automatically, removing the need to recreate this boilerplate code across applications. In the 1.0 final release, more smarts will be added to the web module personality to support additional technologies such as Spring Web Flow.

Whatever packaging format you choose, the programming model is simply Spring Framework and Spring Dynamic Modules, with the other Spring Portfolio products running on top.

Serviceability

Serviceability was a critical consideration for the whole engineering team. We have spent an inordinate amount of time worrying about the format of log messages and the size of stack traces to make sure that diagnosing problems with your applications is as easy as possible. Any time we find a task that is repetitive and time-consuming we look for a way to automate it or remove it altogether.

To aid with problem diagnosis the Platform has a strong split between log and trace messages. Log messages are intended for end-user consumption and allow you to get access to the most important failure information without having to trawl through gigabytes of trace content. All application failures are displayed and coded in the log output - the codes serving as a convenient way to access knowledge base or support content. To understand why this is so useful, consider the following output from Platform startup:

[2008-04-29 12:12:01.124] main                     <SPKB0001I> Platform starting.
[2008-04-29 12:12:04.037] main                     <SPKE0000I> Boot subsystems installed.
[2008-04-29 12:12:06.013] main                     <SPKE0001I> Base subsystems installed.
[2008-04-29 12:12:07.396] platform-dm-1            <SPPM0000I> Installing profile 'web'.
[2008-04-29 12:12:07.674] platform-dm…

The Power of Batch

Engineering | June 23, 2007 | ...

In the last session of SpringOne yesterday, Dave Syer, Scott Wintermute, Lucas Ward and Wayne Lund all presented on Spring Batch. I didn't actually attend (since I had an early cab ride), but I stuck my head in and was yet again astounded by the amount of interest.

Back at JavaOne we had an immense amount of interest in this solution as well, with plenty of visitors calling by the booth to quiz us about batch.

It's all too easy in this world of Ajax and Rich Internet Applications to forget that a large number (a majority, maybe?) of large-scale enterprise applications are batch-oriented. Batch…

Installing WebSphere Application Server 6.1 on Ubuntu

Engineering | January 19, 2007 | ...

Recently I've been doing some work with a client on WAS 6.1. Since we have a number of Spring users on WAS and I need to test the application, I decided it was time to get a copy of WAS running on one of my work laptops. I say 'one of' because I'm currently working on both my Mac (with OSX) and my ThinkPad (with Ubuntu) - more recently I've just been using the ThinkPad because I can have Oracle XE and WAS running without the need for a VM tool like Parallels. I still prefer the Mac, but to be honest there isn't much difference day-to-day - I just miss some of the more useful Mac tools like Spotlight, Quicksilver, TextMate and NewsFire.

Anyway, back to the main topic: installing WAS 6.1 on Ubuntu. I'm using Ubuntu Edgy, and my first attempts at an install failed completely; I just couldn't figure out why. Thankfully, a quick Google search turned up this article. I was completely unaware that /bin/sh was linked to dash instead of bash - what on earth possessed them? I didn't really like the suggested solution of running the installer, letting it fail and then changing all the scripts in the installed directory. Instead, I just relinked /bin/sh with a quick sudo unlink /bin/sh followed by sudo ln -s /bin/bash /bin/sh. After that, the installer ran like a dream and I was up and running with a WAS install in about 15 minutes.
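For anyone hitting the same thing, the two commands are exactly as above (run them at your own risk, since they change the system-wide /bin/sh link):

sudo unlink /bin/sh
sudo ln -s /bin/bash /bin/sh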

Even on my ThinkPad with Oracle XE running at the same time, WAS runs pretty quickly. One of the nicest things about WAS is that the tools provided (admin console, command-line tools) are really robust. The Admin Console is notable for its performance - many other servers have consoles that are painfully slow.

A Bridge Too Far

Engineering | January 16, 2007 | ...

In my last entry I presented a technique for creating strategy classes that take full advantage of any generic metadata that is present in your application. At the end of that entry I showed this code snippet:

EntitlementCalculator calculator = new DividendEntitlementCalculator();
calculator.calculateEntitlement(new MergerCorporateActionEvent());

You'll remember that DividendEntitlementCalculator was defined as:

public class DividendEntitlementCalculator implements EntitlementCalculator<DividendCorporateActionEvent> {

    public void calculateEntitlement(DividendCorporateActionEvent event) {

    }
}

As such, it is not correct to pass an instance of MergerCorporateActionEvent into the calculateEntitlement method of the DividendEntitlementCalculator class. However, as I mentioned in my last entry, that code will compile. Why? Well, EntitlementCalculator.calculateEntitlement() is defined to accept any type that extends CorporateActionEvent, so it should compile. So what happens at runtime in this scenario, and how does Java enforce type safety? Well, as you might imagine, running this code gives you a ClassCastException saying that you cannot cast a MergerCorporateActionEvent to DividendCorporateActionEvent. In this way, Java enforces type safety for your application - there is no way that the MergerCorporateActionEvent can creep into a method where DividendCorporateActionEvent is expected.

The real question here is: 'Where does that ClassCastException come from?' The answer is pretty simple - the Java compiler adds the code to create and throw it as appropriate by introducing a bridge method. Bridge methods are synthetic methods that the compiler will generate and add to your classes to ensure type safety in the face of generic types.
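Before looking at the bytecode, it may help to see the generated bridge method written out as Java source (a sketch for illustration only; in reality the compiler emits it directly as bytecode and marks it synthetic):

// Erased signature required by the EntitlementCalculator interface; the cast
// is what throws the ClassCastException seen above.
public void calculateEntitlement(CorporateActionEvent event) {
    calculateEntitlement((DividendCorporateActionEvent) event);
}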

In the case shown above, EntitlementCalculator.calculateEntitlement can be called with any object that is type compatible with CorporateActionEvent. However, DividendEntitlementCalculator accepts only objects that are type compatible with DividendCorporateActionEvent; but since you can call DividendEntitlementCalculator via the EntitlementCalculator interface, it too must accept CorporateActionEvent. So what does this translate to in the compiled class file? We have the user-supplied method:

public void calculateEntitlement(DividendCorporateActionEvent event) {
    System.out.println(event);
}

Which translates to this bytecode:

public void calculateEntitlement(bigbank.DividendCorporateActionEvent);
  Code:
   Stack=2, Locals=2, Args_size=2
   0:   getstatic       #2; //Field java…

Exploiting Generics Metadata

Engineering | September 29, 2006 | ...

A common misconception I hear when talking with clients is that all information about generic types is erased from your Java class files. This is entirely untrue. All static generic information is maintained; only generic information about individual instances is erased. So if I have a class Foo that implements List<String>, I can determine at runtime that Foo implements the List interface parameterised by String. However, if I instantiate an ArrayList<String> at runtime, I cannot take that instance and determine its concrete type parameter (I can only determine that ArrayList requires type parameters). In this entry I'm going to show you a practical use of the available generics metadata that simplifies the creation of strategy interfaces and implementations that differ by the type of object they process.
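Here is a small, self-contained illustration of the static generic information that survives compilation (the Foo class is hypothetical and extends AbstractList only to keep the listing short):

import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.AbstractList;
import java.util.Arrays;
import java.util.List;

class Foo extends AbstractList<String> implements List<String> {
    public String get(int index) { return null; }
    public int size() { return 0; }
}

public class GenericsMetadataExample {
    public static void main(String[] args) {
        // The declaration 'implements List<String>' is recorded in Foo.class,
        // so the String type argument is recoverable at runtime...
        for (Type type : Foo.class.getGenericInterfaces()) {
            if (type instanceof ParameterizedType) {
                ParameterizedType pt = (ParameterizedType) type;
                System.out.println(pt.getRawType() + " parameterised by "
                        + Arrays.toString(pt.getActualTypeArguments()));
            }
        }

        // ...whereas the type argument of an individual instance is erased:
        List<String> strings = new java.util.ArrayList<String>();
        System.out.println(strings.getClass()); // just class java.util.ArrayList
    }
}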

A pattern that I see occurring in many applications is the use of some kind of strategy interface with concrete implementations, each of which handles a particular input type. For example, consider a simple scenario from the investment banking world. Any publicly traded company can issue Corporate Actions that bring about an actual change to their stock. A key example is a dividend payment, which pays out a certain amount of cash, stock or property per share to all shareholders. Within an investment bank, receiving notification of these events and calculating the resulting entitlements is very important in order to keep trading books up to date with the correct stock and cash values.

As a concrete example, consider BigBank, which holds 1,200,000 shares of IBM stock. IBM decides to issue a dividend paying $0.02 per share. As a result, BigBank needs to receive notification of the dividend action and, at the appropriate point in time, update its trading books to reflect the additional $24,000 of cash available.

The calculation of entitlement will differ greatly depending on which type of Corporate Action is being performed. For example, a merger will most likely result in the loss of stock in one company and the gain of stock in another.

If we think about how this might look in a Java application, we might expect to see something like this (heavily simplified) example:


public class CorporateActionEventProcessor {

    public void onCorporateActionEvent(CorporateActionEvent event) {
        // do we have any stock for this security?

        // if so calculate our entitlements
    }
}

Notifications about events probably come in via a number of mechanisms from external parties and then get sent to this CorporateActionEventProcessor class. The CorporateActionEvent interface might be realised via a number of concrete classes:


public class DividendCorporateActionEvent implements CorporateActionEvent {

    private PayoutType payoutType;
    private BigDecimal ratioPerShare;

    // ...
}

public class MergerCorporateActionEvent implements CorporateActionEvent {

    private String currentIsin; // security we currently hold
    private String newIsin; // security we get
    private BigDecimal…
