Lost Virginity: WeakHashMap first timer

It was almost three years ago at OOPSLA in Nashville that I heard about the WeakHashMap for the first time. The class is quite useful if you need a Map implementation where the keys are compared using their memory references and not using equals. Another important property of the WeakHashMap is that Map entries are removed "automagically" once no object other than the WeakHashMap itself holds a reference to the key object. The garbage collector will in that case remove the Map entry and collect the key.
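
Here is a minimal sketch of that removal behavior (illustration only, not something to assert on in a test, since garbage collection timing is entirely up to the JVM):

import java.util.Map;
import java.util.WeakHashMap;

public class WeakHashMapDemo {

    public static void main(final String[] args) {
        final Map<Object, String> cache = new WeakHashMap<Object, String>();
        Object key = new Object();
        cache.put(key, "value");

        key = null;   // drop the last strong reference to the key
        System.gc();  // request a collection; the JVM may or may not honor it

        // Once the key has been collected, the entry disappears from the map.
        System.out.println(cache.size());
    }
}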

In the past three years I never used the WeakHashMap in any project. That changed yesterday. In the game we are currently developing, we use a mechanism where the game client sends game events to the server. The server then evaluates the game events and alters the users in memory before their state is persisted to the database. Here is an example:

public interface GameEvent {

    /**
     * Subclasses may implement this to run validation logic before the GameEvent is processed.
     *
     * @param user the {@link User} to apply this {@link GameEvent} on
     * @return {@link AuditResult} never <code>null</code>
     */
    AuditResult validate(User user);

    /**
     * Subclasses may implement this to run implementation specific logic, potentially altering the
     * given {@link User}.
     *
     * @param user the {@link User} to apply this {@link GameEvent} on
     * @return {@link AuditResult} never <code>null</code>
     */
    AuditResult process(User user);
}


public class ConsumeFood implements GameEvent {

    private final int amount;

    public ConsumeFood(final int amount) {
        this.amount = amount;
    }

    @Override
    public AuditResult validate(final User user) {
        if (user.getFood() < this.amount) {
            return new AuditResult("User doesn't have this amount of food.");
        }
        return AuditResult.SUCCESS;
    }

    @Override
    public AuditResult process(final User user) {
        user.addEnergy(this.amount);
        user.subtractFood(this.amount);
        return AuditResult.SUCCESS;
    }
}


Once the game client sends us the ConsumeFood game event, we subtract food from the player and add energy instead. We also have a wrapper class around a collection of game events and the execution logic looks like this:

public class GameEvents {

    ... other methods ...

    protected AuditResult process(final User user, final GameEvent change) {
        final AuditResult validateResult = change.validate(user);
        if (validateResult == AuditResult.SUCCESS) {
            return change.process(user);
        } else {
            return validateResult;
        }
    }
}


First we validate that we can apply the game event, then we process the event and alter the player. Since the number of different game events keeps growing, I thought it might be useful to measure the execution time of the validate and process methods of each game event. The way I implemented this a while ago was through delegation: I added a wrapper class which wraps the real game event and times the validate and process methods:

import org.springframework.util.StopWatch;

public final class TimingGameEvent implements GameEvent {

    private final GameEvent gameEvent;

    // timings recorded by the finally blocks below
    private long validationTimeInMs;
    private long processTimeInMs;

    public TimingGameEvent(final GameEvent gameEvent) {
        this.gameEvent = gameEvent;
    }

    /**
     * Delegates the processing to the encapsulated {@link GameEvent}. Uses a {@link StopWatch} to time the
     * execution.
     */
    @Override
    public AuditResult process(final User user) {
        final StopWatch stopWatch = new StopWatch("process-stop-watch");
        stopWatch.start();
        try {
            return this.gameEvent.process(user);
        } finally {
            stopWatch.stop();
            this.processTimeInMs = stopWatch.getLastTaskTimeMillis();
        }
    }

    /**
     * Delegates the validation to the encapsulated {@link GameEvent}. Uses a {@link StopWatch} to time the
     * execution.
     */
    @Override
    public AuditResult validate(final User user) {
        final StopWatch stopWatch = new StopWatch("validate-stop-watch");
        stopWatch.start();
        try {
            return this.gameEvent.validate(user);
        } finally {
            stopWatch.stop();
            this.validationTimeInMs = stopWatch.getLastTaskTimeMillis();
        }
    }
}


This worked well. However, this week we got another requirement from the business: we needed to implement some sort of gameplay recorder. Each game event that the server receives must be recorded, so we can replay these events later. My first idea was to add another wrapper around the already existing TimingGameEvent wrapper class, but this would have made it difficult to serialize the real game event to a file. Yes, we decided to serialize to and deserialize from a String, which is stored in a plain text file where each line represents one game event. I discarded the idea of adding more wrappers around the game event and suggested a refactoring: instead of using delegating wrappers, why not use a listener mechanism? Each listener would be notified before and after the execution of the validate and process methods of each game event. Listeners could register themselves and the mechanism would be easier to extend in the future. On the negative side, measuring the execution times would of course not be as accurate anymore, as there could be other listeners which want to be notified before the game event is validated and processed. This however was not a big issue, since we were not interested in the exact time in milliseconds but rather in long-running methods taking a couple of seconds. I also added a mechanism to make sure the timing listener gets notified just before the game event method is executed and right after it returns. More on that later.

Here is the listener interface I came up with:

public interface GameEventLifecycleListener {

    void onValidationStart(final User user, final GameEvent gameEvent);

    void onValidationFinish(final User user, final GameEvent gameEvent,
            final AuditResult auditResult);

    void onProcessStart(final User user, final GameEvent gameEvent);

    void onProcessFinish(final User user, final GameEvent gameEvent,
            final AuditResult auditResult);
}


Refactoring the TimingGameEvent class from above into a TimingGameEventLifecycleListener wasn't straightforward. Each invocation of the validate or the process method will now result in two listener notifications. So how do you know when to "press" stop on the StopWatch?

This is where the WeakHashMap comes in handy. Remember that each game event goes through the same chain? First onValidationStart is called, then onValidationFinish, onProcessStart and finally onProcessFinish. So the listener can maintain a Map of all events, implemented using a WeakHashMap. The first notification callback adds the game event to this Map. Subsequent notifications can assume that the game event is present in the WeakHashMap. After the game event has passed through the chain and no object references it anymore, it will automatically be removed from the WeakHashMap. Here is a part of the TimingGameEventLifecycleListener which shows the concept.

import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

import org.springframework.core.Ordered;

public class TimingGameEventLifecycleListener extends AbstractGameEventLifecycleListener {

    /**
     * By default the WeakHashMap is not thread-safe, so it needs to be wrapped in a synchronizedMap. This however
     * is quite slow, hence the TimingGameEventLifecycleListener should not be running in production
     * all the time.
     */
    private final Map<GameEvent, TimedExecution> timedExecutions = Collections.synchronizedMap(
        new WeakHashMap<GameEvent, TimedExecution>()
    );

    @Override
    public void onValidationStart(final User user, final GameEvent gameEvent) {
        final TimedExecution timeValidation =
                new TimedExecution(gameEvent.getClass());
        this.timedExecutions.put(gameEvent, timeValidation);
        // other stuff
    }

    @Override
    public void onValidationFinish(final User user, final GameEvent gameEvent,
            final AuditResult auditResult) {
        final TimedExecution timeValidation = this.timedExecutions.get(gameEvent);
        if (timeValidation != null) {
            timeValidation.stopTimer();
        }
    }

    ... other notification methods ...

    @Override
    public int getOrder() {
        return Ordered.HIGHEST_PRECEDENCE;
    }
}
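
The TimedExecution helper is not shown in this post. Here is a minimal, hypothetical sketch of what such a class could look like (only the constructor and stopTimer() are referenced above; everything else is an assumption):

public class TimedExecution {

    private final Class<?> gameEventClass;
    private final long startTimeInMs = System.currentTimeMillis();
    private long durationInMs = -1;

    public TimedExecution(final Class<?> gameEventClass) {
        this.gameEventClass = gameEventClass;
    }

    public void stopTimer() {
        this.durationInMs = System.currentTimeMillis() - this.startTimeInMs;
    }

    public long getDurationInMs() {
        return this.durationInMs;
    }

    public Class<?> getGameEventClass() {
        return this.gameEventClass;
    }
}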


So the WeakHashMap works nicely in the role of a cache between different listener methods. Another thing you may notice in the code above is that the listener derives from AbstractGameEventLifecycleListener instead of implementing GameEventLifecycleListener. I added an abstract base class for two reasons. First, it is better to provide empty default implementations of all notification methods. Concrete listeners like the TimingGameEventLifecycleListener can then override only the methods they are interested in (okay, in this case we are interested in all four notification methods, but other listeners might not be). The second reason is that we want to force the listeners into a specific order. Every listener can decide for itself "how important" it is by implementing the getOrder() method defined in the org.springframework.core.Ordered interface, which the AbstractGameEventLifecycleListener implements. Normally this interface is used by Spring to apply an order to Aspects. You might of course choose to keep your domain clean of Spring framework classes. Here is the AbstractGameEventLifecycleListener:

public abstract class AbstractGameEventLifecycleListener
        implements GameEventLifecycleListener, Ordered, Comparable<GameEventLifecycleListener> {

    @Override
    public void onValidationStart(final User user, final GameEvent gameEvent) { }

    @Override
    public void onValidationFinish(final User user, final GameEvent gameEvent,
            final AuditResult auditResult) { }

    @Override
    public void onProcessStart(final User user, final GameEvent gameEvent) { }

    @Override
    public void onProcessFinish(final User user, final GameEvent gameEvent,
            final AuditResult auditResult) { }

    /**
     * Compares the order of the two {@link GameEventLifecycleListener}s
     * using {@link Ordered}.
     * @param other another {@link GameEventLifecycleListener}
     * @return int
     */
    @Override
    public int compareTo(final GameEventLifecycleListener other) {
        return Integer.valueOf(this.getOrder()).compareTo(other.getOrder());
    }
}


I said earlier that it is desirable to notify the TimingGameEventLifecycleListener last before validation starts and first after it finishes (to get more accurate timings). The GameEvents class, which notifies the listeners, will honor the order using a NavigableSet that can be iterated in forward and backward order. Take a look at the updated version of the GameEvents class to see how it is implemented:

import java.util.Collection;
import java.util.Iterator;
import java.util.NavigableSet;
import java.util.TreeSet;

public class GameEvents {

    private final GameEvent[] events;
    private final NavigableSet<AbstractGameEventLifecycleListener> listeners;

    public GameEvents(final GameEvent[] events) {
        final int length = events == null ? 0 : events.length;
        this.listeners = new TreeSet<AbstractGameEventLifecycleListener>();
        this.events = new GameEvent[length];
        if (length > 0) {
            System.arraycopy(events, 0, this.events, 0, length);
        }
    }

    public void addListeners(final Collection<AbstractGameEventLifecycleListener> listeners) {
        this.listeners.addAll(listeners);
    }

    // other methods

    protected AuditResult process(final User user, final GameEvent gameEvent) {
        final AuditResult validateResult = runValidate(user, gameEvent);
        if (validateResult == AuditResult.SUCCESS) {
            return runProcess(user, gameEvent);
        } else {
            return validateResult;
        }
    }

    /**
     * Runs the {@link GameEvent#validate(User)} method of the given
     * {@code gameEvent}, notifying all {@link AbstractGameEventLifecycleListener}s
     * before and after. The listener having the highest
     * precedence is notified last before and first after the validation method.
     * @param user the {@link User} to validate the game event for
     * @param gameEvent the gameEvent to validate
     * @return the result of the validation
     */
    AuditResult runValidate(final User user, final GameEvent gameEvent) {
        for (Iterator<AbstractGameEventLifecycleListener> iterator = this.listeners.descendingIterator();
                iterator.hasNext(); ) {
            final AbstractGameEventLifecycleListener listener = iterator.next();
            listener.onValidationStart(user, gameEvent);
        }

        final AuditResult validateResult = gameEvent.validate(user);

        for (final AbstractGameEventLifecycleListener listener : this.listeners) {
            listener.onValidationFinish(user, gameEvent, validateResult);
        }

        return validateResult;
    }

    /**
     * Runs the {@link GameEvent#process(User)} method of the given
     * {@code gameEvent}, notifying all {@link AbstractGameEventLifecycleListener}s
     * before and after. The listener having the highest
     * precedence is notified last before and first after the process method.
     * @param user the {@link User} to process the gameEvent for
     * @param gameEvent the gameEvent to process
     * @return the result of processing the gameEvent
     */
    AuditResult runProcess(final User user, final GameEvent gameEvent) {
        for (Iterator<AbstractGameEventLifecycleListener> iterator = this.listeners.descendingIterator();
                iterator.hasNext(); ) {
            final AbstractGameEventLifecycleListener listener = iterator.next();
            listener.onProcessStart(user, gameEvent);
        }

        final AuditResult processResult = gameEvent.process(user);

        for (final AbstractGameEventLifecycleListener listener : this.listeners) {
            listener.onProcessFinish(user, gameEvent, processResult);
        }

        return processResult;
    }
}


One thing I wasn't able to come up with was a good unit test to verify that the WeakHashMap is indeed not holding key references forever. This is extremely difficult to test as it involves testing for garbage collection, and no, I am not suggesting running System.gc() from your test. I found something similar on this blog post. Apparently the NetBeans API offers something called assertGC(..), but it wasn't really fitting for my use case. So if you have a good suggestion how to test the behavior of a WeakHashMap, I am happy to hear it.

* UPDATE * After a few weeks of running this WeakHashMap and seeing some weird errors in the logs every now and then, I realized it's not the right Map implementation to use here. The WeakHashMap is not what you want, because the keys are not compared using object identity. Initially I thought this was the case when reading through the Javadoc of the WeakHashMap. What you really want is a hybrid Map that combines the WeakHashMap with an IdentityHashMap. This hybrid Map compares the keys based on object identity and also uses weak key references. The bad news is, there is no such map in the JDK (Java 6 at least). The good news is, there is a WeakIdentityHashMap in the Hibernate Search project and a ReferenceIdentityMap in the Commons Collections project, either of which can be used.
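
For illustration, here is a minimal sketch of how the timedExecutions Map from the listener above could be declared using the Commons Collections ReferenceIdentityMap instead (assuming Commons Collections 3.x, whose map classes are not generified):

import java.util.Collections;
import java.util.Map;

import org.apache.commons.collections.map.AbstractReferenceMap;
import org.apache.commons.collections.map.ReferenceIdentityMap;

public class TimingGameEventLifecycleListener extends AbstractGameEventLifecycleListener {

    // Keys are compared by identity and held via weak references, values are held strongly.
    // Commons Collections 3.x is not generified, hence the unchecked conversion.
    @SuppressWarnings("unchecked")
    private final Map<GameEvent, TimedExecution> timedExecutions = Collections.synchronizedMap(
            new ReferenceIdentityMap(AbstractReferenceMap.WEAK, AbstractReferenceMap.HARD));

    // ... notification methods as shown above ...
}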

Testing JMX between two Web Applications using Maven

The problem: you have two web applications and each is developed inside a separate Maven module. You need to communicate from one web application to the other, and you don't want to implement a service but use JMX instead. This is a scenario we currently have here. The first web application (application A) contains the game server logic of our new game. The second web application (application B) contains a debug tool which we will not deploy into production. I selected JMX for the communication, mainly because I didn't want to add another technology and we are already using JMX in the first application. Both web applications are Spring-powered.

First, here is a nice Spring feature which completely hides the JMX complexity from the client application behind a proxy.


<bean id="clientConnector" class="org.springframework.jmx.support.MBeanServerConnectionFactoryBean">
<property name="serviceUrl" value="[SERVICE_URL]"/>
</bean>

<bean id="gameplayRecordable" class="org.springframework.jmx.access.MBeanProxyFactoryBean">
<property name="objectName" value="[MBEAN]" />
<property name="proxyInterface" value="any.java.Interface" />
<property name="server" ref="clientConnector" />
</bean>


First you define a client connector which connects to the RMI server port of the other web application. Then you define an MBeanProxyFactoryBean using this client connector. The objectName must be the name of your MBean inside the MBean container. If you are not sure about the name, use jconsole to connect to the process of the first web application and look it up. Another important property of the MBeanProxyFactoryBean is the proxyInterface. This is an interface that the proxy will implement. The proxy will try to map each method call on that interface in application B to a JMX call in application A. I can really recommend sharing the same interface in both applications, as it keeps things simple.
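
To illustrate, here is a hypothetical shared interface matching the gameplayRecordable bean above (the actual interface from our project is not shown, so the name and method are assumptions):

public interface GameplayRecordable {

    // Every method declared here is mapped by the proxy in application B
    // to a JMX operation on the MBean exported by application A.
    String recordGameplay(long userId);
}

In application B, the resulting proxy can then be injected and called like any other Spring bean.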

This was simple so far. Now let's say you want to write an integration test to automatically test the whole shebang. The test should start up a JMX-enabled Jetty from Maven. This Jetty instance should explode the war file of application A (hosting the MBean you want to invoke). Once Jetty is up, the test should execute, connect to application A via the MBeanProxyFactoryBean and validate the results. First let's enable remote JMX access in the configuration of the maven-jetty-plugin:


<profiles>
<profile>
<id>itest</id>
<build>
<plugins>
<plugin>
<groupId>org.mortbay.jetty</groupId>
<artifactId>maven-jetty-plugin</artifactId>
<version>${version.jetty.plugin}</version>
<configuration>
<stopKey>stop_key</stopKey>
<stopPort>9999</stopPort>
<contextPath>/</contextPath>
<webApp>
${settings.localRepository}/com/package/../../your.war
</webApp>
<jettyConfig>
${basedir}/src/test/etc/jetty-jmx.xml</jettyConfig>
</configuration>
<executions>
<execution>
<id>start-jetty</id>
<phase>pre-integration-test</phase>
<goals>
<goal>deploy-war</goal>
</goals>
<configuration>
<daemon>true</daemon>
</configuration>
</execution>
<execution>
<id>stop-jetty</id>
<phase>prepare-package</phase>
<goals>
<goal>stop</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
</profiles>


As you can see, this plugin configuration is done in a Maven profile, as we are defining this for application B which also has its own Jetty configuration. The important piece is the jettyConfig element, which points to a jetty-jmx.xml file. To get this file, download the Jetty container that has the same version as your maven-jetty-plugin. For instance, if you use version 6.1.26 of the maven-jetty-plugin, make sure you download jetty-6.1.26 from the Codehaus download page. If you are using the new jetty-maven-plugin and Jetty 7 or 8, you need to download the Jetty container from Eclipse. The configuration is the same for the maven-jetty-plugin and the jetty-maven-plugin. Just make sure you take the jetty-jmx.xml file from the right Jetty container, as they are different. You don't need to specify any additional system properties like -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.ssl, -Dcom.sun.management.jmxremote.authenticate or -Dcom.sun.management.jmxremote.port.

Once the jetty-jmx.xml is downloaded, put it somewhere inside your Maven module where it does not get packaged into the module artifact. In the example above you can see that we keep the jetty-jmx.xml file in src/test/etc, but any other location will do. Open the file and enable remote JMX access via RMI. In the Jetty 6 based jetty-jmx.xml file, these elements should be uncommented:


<Call id="rmiRegistry" class="java.rmi.registry.LocateRegistry" name="createRegistry">
<Arg type="int">2099</Arg>
</Call>

<Call id="jmxConnectorServer" class="javax.management.remote.JMXConnectorServerFactory"
name="newJMXConnectorServer">
<Arg>
<New class="javax.management.remote.JMXServiceURL">
<Arg>
service:jmx:rmi://localhost:17264/jndi/rmi://localhost:2099/jmxrmi
</Arg>
</New>
</Arg>
<Arg/>
<Arg>
<Ref id="MBeanServer"/>
</Arg>
<Call name="start"/>
</Call>


Note that we changed the port to 17264. You might want to use the default port instead. In the Jetty 7 based jetty-jmx.xml file, these elements should be uncommented:


<Call name="createRegistry" class="java.rmi.registry.LocateRegistry">
<Arg type="java.lang.Integer">1099</Arg>
<Call name="sleep" class="java.lang.Thread">
<Arg type="java.lang.Integer">1000</Arg>
</Call>
</Call>

<New id="ConnectorServer" class="org.eclipse.jetty.jmx.ConnectorServer">
<Arg>
<New class="javax.management.remote.JMXServiceURL">
<Arg type="java.lang.String">rmi</Arg>
<Arg type="java.lang.String" />
<Arg type="java.lang.Integer">0</Arg>
<Arg type="java.lang.String">/jndi/rmi://localhost:1099/jettyjmx</Arg>
</New>
</Arg>
<Arg>org.eclipse.jetty:name=rmiconnectorserver</Arg>
<Call name="start" />
</New>


To test the setup, run mvn -Pitest jetty:run and start jconsole. In jconsole you do not connect to a local process. Select Remote Process and enter the service URL. This URL can be copied from the jetty-jmx.xml file if you are using Jetty 6 (i.e. service:jmx:rmi://localhost:17264/jndi/rmi://localhost:2099/jmxrmi). If you are using Jetty 7 and the jetty-maven-plugin, there will be an info statement on the command line when Maven starts the Jetty container, from which you can copy the service URL. Finally, to execute the integration test, we use the maven-failsafe-plugin like this:


<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-failsafe-plugin</artifactId>
<version>2.9</version>
<configuration>
<includes>
<include>**/com/package/integration/*.java</include>
</includes>
</configuration>
<executions>
<execution>
<id>integration-test</id>
<goals>
<goal>integration-test</goal>
</goals>
</execution>
<execution>
<id>verify</id>
<goals>
<goal>verify</goal>
</goals>
</execution>
</executions>
</plugin>
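
For completeness, here is a minimal sketch of what such an integration test could look like (the class name, package, Spring context file name and the GameplayRecordable interface are assumptions, not taken from our actual code):

package com.package.integration;

import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class GameplayRecorderIT {

    @Test
    public void invokesMBeanInApplicationAViaProxy() {
        // loads the context containing the clientConnector and gameplayRecordable beans
        final ClassPathXmlApplicationContext context =
                new ClassPathXmlApplicationContext("itest-jmx-context.xml");
        try {
            final GameplayRecordable recordable =
                    context.getBean("gameplayRecordable", GameplayRecordable.class);
            // every call on the proxy is forwarded as a JMX invocation to application A
            assertNotNull(recordable.recordGameplay(42L));
        } finally {
            context.close();
        }
    }
}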

Sharing configuration files from a Maven Parent Project

Okay, this post is probably not much news for people who know Maven inside and out. I am planning to use it as a reference for myself, in case I have to solve a similar problem again in the future. The current project I am working on is set up as a Maven multi-module project. There is a parent pom which is set to pom-packaging. There are several child modules, set to either jar- or war-packaging. Within the pom.xml file of the parent project, we use the pluginManagement section to define plugins that should be available to the child modules. The pluginManagement mechanism is an excellent way to stay DRY and not duplicate Maven configuration within the inheriting projects.

In most cases configuring plugins within the pluginManagement section is straightforward. It can however get a bit problematic if the plugin depends on (or reads from) external configuration files. Let's have a look at one example from this parent project of ours.


<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>

<groupId>com.package</groupId>
<artifactId>project-parent</artifactId>
<packaging>pom</packaging>
<version>0.1-SNAPSHOT</version>

<modules>
<module>child-a</module>
<module>child-b</module>
<module>child-c</module>
</modules>

<properties>
<version.mysql.connector>5.1.12</version.mysql.connector>
</properties>

<build>

<pluginManagement>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>sql-maven-plugin</artifactId>
<version>1.4</version>
<dependencies>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>${version.mysql.connector}</version>
</dependency>
</dependencies>
<configuration>
<driver>com.mysql.jdbc.Driver</driver>
<url>jdbc:mysql://localhost/</url>
<username>xyz</username>
<password>xyz</password>
</configuration>

<executions>
<execution>
<id>drop-and-recreate-db</id>
<phase>process-test-resources</phase>
<goals>
<goal>execute</goal>
</goals>
<configuration>
<autocommit>true</autocommit>
<srcFiles>
<srcFile>
${project.build.directory}/sql/schema/user.sql
</srcFile>
<srcFile>
${project.build.directory}/sql/schema/core.sql
</srcFile>
<srcFile>
${project.build.directory}/sql/schema/game.sql
</srcFile>
</srcFiles>
<onError>abort</onError>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</pluginManagement>

</build>
</project>


Here we use the sql-maven-plugin to set up the database before tests are run. The sql-maven-plugin will execute a bunch of *.sql files which are stored in a subfolder of the parent project. When you deploy the parent project to your repository, these files won't be published along with the pom.xml, as the packaging is set to pom-packaging. Therefore, if you run the inherited sql-maven-plugin from a child module, the *.sql files will not be available and the plugin will fail. This will certainly be a problem if your continuous integration server has a build plan for each Maven child module instead of a single build plan for the entire project.

To overcome this problem, there are two things you have to do. First, the parent project needs to publish the *.sql files (or whatever other static files are needed) to your repository, so that the inheriting modules have access to these files. For this to work, we use the maven-assembly-plugin like this:


<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>

... as before ...

<build>
<plugins>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<inherited>false</inherited>
<configuration>
<descriptors>
<descriptor>
${project.basedir}/assembly/zip.xml
</descriptor>
</descriptors>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>

<pluginManagement>
... as before ...
</pluginManagement>

</build>
</project>


Note that the maven-assembly-plugin in this case is not configured within the pluginManagement section of the parent pom, as we don't want to make this functionality available to child modules. In the configuration you can see that the plugin is set up to be executed during the package phase and that the assembly descriptor is the zip.xml file. This zip.xml file looks like this:


<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0 http://maven.apache.org/xsd/assembly-1.1.0.xsd">
<id>sql-files</id>
<formats>
<format>zip</format>
</formats>
<includeBaseDirectory>false</includeBaseDirectory>
<fileSets>
<fileSet>
<directory>${project.basedir}/sql/schema</directory>
<outputDirectory/>
<includes>
<include>**/*</include>
</includes>
</fileSet>
</fileSets>
</assembly>


This configuration will create a zip file of all *.sql files found in ${project.basedir}/sql/schema and publish this zip file along with the pom.xml when mvn deploy is executed. The id of this configuration is "sql-files". This id is used as the artifact classifier and becomes part of the filename of the zip file (in this example project-parent-0.1-SNAPSHOT-sql-files.zip).

Now that we publish the zip file to the repository, we need a way for the child modules to grab and extract it before the sql-maven-plugin is executed. This is where the maven-dependency-plugin comes in handy. Again, the maven-dependency-plugin is configured in the pluginManagement section of the parent pom.xml, as this time we do want to inherit the functionality to child modules. Here is what the configuration of the maven-dependency-plugin looks like:


<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>

... as before ...

<build>
<plugins>
... as before ...
</plugins>

<pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<version>2.3</version>
<executions>
<execution>
<id>unpack-sql-files</id>
<phase>process-test-resources</phase>
<goals>
<goal>unpack</goal>
</goals>
<configuration>
<artifactItems>
<artifactItem>
<groupId>com.package</groupId>
<artifactId>project-parent</artifactId>
<version>
${parent.version}
</version>
<type>zip</type>
<classifier>sql-files</classifier>
<overWrite>true</overWrite>
<outputDirectory>
${project.build.directory}/sql/schema
</outputDirectory>
<includes>**/*.sql</includes>
</artifactItem>
</artifactItems>
<includes>**/*</includes>
<overWriteReleases>true</overWriteReleases>
<overWriteSnapshots>true</overWriteSnapshots>
</configuration>
</execution>
</executions>
</plugin>

... as before ...

</plugins>
</pluginManagement>

</build>
</project>


The plugin (if a child module decides to use it) will be executed during the process-test-resources phase of the build. We fetch the zip file by specifying the groupId, artifactId, version and type. Also, the classifier value must match the id which we used earlier in the zip.xml file configuring the maven-assembly-plugin. The zip file is extracted to ${project.build.directory}/sql/schema and we only extract files having the *.sql extension (well, there shouldn't be any other files, but okay). This concludes what needs to be done to extract the zip file, and child modules are now ready to use the extracted files. Here is a snippet from a pom.xml file of a Maven child module. This is everything needed to run the sql-maven-plugin defined in the parent pom and extract the required configuration files upfront.


<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.package</groupId>
<artifactId>project-parent</artifactId>
<version>0.1-SNAPSHOT</version>
</parent>

<artifactId>child-a</artifactId>
<packaging>war</packaging>

<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
</plugin>

<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>sql-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>


For the sake of completeness, here is the full parent pom.xml file once again.


<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>

<groupId>com.package</groupId>
<artifactId>project-parent</artifactId>
<packaging>pom</packaging>
<version>0.1-SNAPSHOT</version>

<modules>
<module>child-a</module>
<module>child-b</module>
<module>child-c</module>
</modules>

<properties>
<version.mysql.connector>5.1.12</version.mysql.connector>
</properties>

<build>
<plugins>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<inherited>false</inherited>
<configuration>
<descriptors>
<descriptor>
${project.basedir}/assembly/zip.xml
</descriptor>
</descriptors>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>

<pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<version>2.3</version>
<executions>
<execution>
<id>unpack-sql-files</id>
<phase>process-test-resources</phase>
<goals>
<goal>unpack</goal>
</goals>
<configuration>
<artifactItems>
<artifactItem>
<groupId>com.package</groupId>
<artifactId>project-parent</artifactId>
<version>
${parent.version}
</version>
<type>zip</type>
<classifier>sql-files</classifier>
<overWrite>true</overWrite>
<outputDirectory>
${project.build.directory}/sql/schema
</outputDirectory>
<includes>**/*.sql</includes>
</artifactItem>
</artifactItems>
<includes>**/*</includes>
<overWriteReleases>true</overWriteReleases>
<overWriteSnapshots>true</overWriteSnapshots>
</configuration>
</execution>
</executions>
</plugin>

<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>sql-maven-plugin</artifactId>
<version>1.4</version>
<dependencies>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>${version.mysql.connector}</version>
</dependency>
</dependencies>
<configuration>
<driver>com.mysql.jdbc.Driver</driver>
<url>jdbc:mysql://localhost/</url>
<username>xyz</username>
<password>xyz</password>
</configuration>

<executions>
<execution>
<id>drop-and-recreate-db</id>
<phase>process-test-resources</phase>
<goals>
<goal>execute</goal>
</goals>
<configuration>
<autocommit>true</autocommit>
<srcFiles>
<srcFile>
${project.build.directory}/sql/schema/user.sql
</srcFile>
<srcFile>
${project.build.directory}/sql/schema/core.sql
</srcFile>
<srcFile>
${project.build.directory}/sql/schema/game.sql
</srcFile>
</srcFiles>
<onError>abort</onError>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</pluginManagement>

</build>
</project>

Heating up the Code Generator

It has been a couple of years since I used a code generator in one of my projects. It must have been 2004 or 2005, when MDA was a big buzzword. Back then, we used a self-made code generator, which was written by a very smart developer who I used to work with at Aperto AG in Berlin. He even contributed the code generator to the open source community later on. Those were the days.

Since then, there hasn't been much use for a code generator. IDEs are getting better and better at helping you with code generation, auto-completion and so on. And of course, adding a generator to your build process is always a bit of work, so it is often faster to write the boilerplate code yourself, if that doesn't take forever. However, last week it was time to bring code generation back from the grave. The game we are currently developing for EA here in Norway has a mechanism where the game client sends game events to the server. In our domain we also call these events audit changes. A typical audit change can be that the player has found a treasure, consumed food or discovered a new scenery. On the client side, the audit change is implemented in ActionScript 3. On the server side the audit change is implemented in Java. There is a transport layer in between which serializes the AS3 object, sends it over the network and deserializes it back into a Java object. For us server developers, this meant that every new audit change also needed a transport definition describing how to serialize and deserialize the audit change. This definition was always wrapped into a new audit change type class. The type definition class was written manually, which was sort of okay until we had more than 20 audit changes in the game. That's when I started to look into generating the transport layer on the server side.

In Java 5, along with the new annotation language feature, Sun added a command-line utility called the Annotation Processing Tool (apt). This was later merged into the standard javac compiler with the release of Java 6. There is also the apt-jelly project, which provides an interface to apt and can be used to generate code artifacts based on templates written with Freemarker or Jelly. Finally, to glue everything together, there is the maven-apt-plugin which can be used to execute an AnnotationProcessorFactory during your build and thereby integrate apt into your project. The maven-apt-plugin looks sort of dead, however. I think nowadays even the standard maven-compiler-plugin or the maven-annotation-plugin can be used to process your annotations and generate code artifacts. Since I got our generator working using the maven-apt-plugin, I did not bother looking at the other two plugins. If someone has a working example of how they are used with an AnnotationProcessorFactory, I would be really happy to see it. Let's look into some code now.

Here is an example of an audit change as it could exist in the game:



/**
* Audit change telling that the {@link User} has bought a {@link House}.
*/
@DatatypeDefinition(minSize = 7)
public class BoughtHouseForGold implements AuditChange {

private int itemId;

@DatatypeCollection(elementType = Integer.class)
private List<Integer> sceneries;

@DatatypeIgnore
private User friend;

... other stuff not relevant ...
}


The transport class that we want to generate needs to serialize and deserialize every field of the audit change. As you can guess from the example above, we do not want to transport the friend field. Metadata that isn't accessible via reflection within the template (which we will write later) needs to be given to the generator in another way, for instance via annotations. That's why I created a bunch of annotations just to instruct the generator.


/**
* Marks the type annotated by this annotation as something that can be
* serialized and deserialized using a Datatype.
*/
@Retention(RetentionPolicy.SOURCE)
@Target({ElementType.TYPE})
public @interface DatatypeDefinition {
int minSize() default 0;
}


/**
* Any {@link Field} annotated this way will be rendered as a Collection of the
* specified type when the Datatype is generated.
*/
@Retention(RetentionPolicy.SOURCE)
@Target({ElementType.FIELD})
public @interface DatatypeCollection {
Class<?> elementType();
}


/**
* Any {@link Field} annotated this way will be ignored when the Datatype is generated.
*/
@Retention(RetentionPolicy.SOURCE)
@Target({ElementType.FIELD})
public @interface DatatypeIgnore {
}


Now for the hardest part, the template. I recommend that you start by writing the code for the first class (the one that should be generated later) manually before you work on the template. Add the class to the default location in Maven, i.e. src/main/java/com/whatever/package. The generated classes will end up in a different location later (under target/generated-sources/), so it will be easy to compare the expected outcome with the generated outcome while working on the template. Here is a template example in which I use Freemarker directives.


<#-- for each type annotated with DatatypeDefinition -->
<@forAllTypes var="type" annotationVar="datatypeDefinition" annotation="package.DatatypeDefinition">
<#-- tell apt-jelly that the outcome will be a java source artifact -->
<@javaSource name="package.types.${type.simpleName}Type">
package package.types;

<#-- all imports go here -->
import java.io.IOException;

/**
* This class contains the {@link Datatype} for {@link ${type.simpleName}}.
*/
public class ${type.simpleName}Type extends AbstractAuditableType<${type.simpleName}> { <#-- class name based on type that was annotated with DatatypeDefinition -->
public ${type.simpleName}Type() {
super(
<#-- replace camel case with underscores -->
TypeCodes.${type.simpleName?replace("(?<=[a-z0-9])[A-Z]|(?<=[a-zA-Z])[0-9]|(?<=[A-Z])[A-Z](?=[a-z])", "_$0", 'r')?upper_case}_TYPE_CODE,
new Datatype<${type.simpleName}>(${type.simpleName}.class, ${datatypeDefinition.minSize}) {

@Override
public ${type.simpleName} read(final DatatypeInput in) throws DataFormatException {
final ${type.simpleName} value = new ${type.simpleName}();
<@forAllFields var="field">
<#assign useField = true>
<#-- do not do anything if field is a constant -->
<#if field.static = true><#assign useField = false></#if>
<#-- do not do anything if annotated with @DatatypeIgnore -->
<@ifHasAnnotation declaration=field annotation="package.DatatypeIgnore"><#assign useField = false></@ifHasAnnotation>
<#if useField = true>
<#-- build name of the setter method -->
<#assign setter = "set${field?cap_first}">
<#assign useCollection = false>
<@ifHasAnnotation var="datatypeCollectionAnnotation" declaration=field annotation="package.DatatypeCollection"><#assign useCollection = true></@ifHasAnnotation>
<#if useCollection = true>
<#if datatypeCollectionAnnotation.elementType = "java.lang.Integer">
value.${setter}(in.readList(Datatype.uintvar31));
<#else>
System.out.println("Cannot read collections of type: ${datatypeCollectionAnnotation.elementType}. Extend auditable-type.fmt");
</#if>
<#else>
<#-- Handling for fields without extra annotations -->
<#if field.type = "int" || field.type = "java.lang.Integer">
value.${setter}(in.readUintvar31());
<#elseif field.type = "boolean" || field.type = "java.lang.Boolean">
value.${setter}(in.readBoolean());
<#elseif field.type = "java.lang.String">
value.${setter}(in.readString());
</#if>
</#if>
<#assign useCollection = false>
</#if>
<#assign useField = false>
</@forAllFields>
return value;
}

@Override
public void write(final DatatypeOutput out, final ${type.simpleName} value) throws IOException {
<@forAllFields var="field">
<#assign useField = true>
<#-- do not do anything if field is a constant -->
<#if field.static = true><#assign useField = false></#if>
<#-- do not do anything if annotated with @DatatypeIgnore -->
<@ifHasAnnotation declaration=field annotation="package.DatatypeIgnore"><#assign useField = false></@ifHasAnnotation>
<#if useField>
<#-- build name of the getter method -->
<#assign getter = "get${field?cap_first}">
<#if field.type = "boolean" || field.type = "java.lang.Boolean">
<#-- boolean getter starts with is -->
<#assign getter = "is${field?cap_first}">
</#if>
<#assign useCollection = false>
<@ifHasAnnotation var="datatypeCollectionAnnotation" declaration=field annotation="package.DatatypeCollection"><#assign useCollection = true></@ifHasAnnotation>
<#if useCollection = true>
<#if datatypeCollectionAnnotation.elementType = "java.lang.Integer">
out.writeCollection(Datatype.uintvar31, value.${getter}());
<#else>
System.out.println("Cannot write collections of type: ${datatypeCollectionAnnotation.elementType}. Extend auditable-type.fmt");
</#if>
<#else>
<#if field.type = "int" || field.type = "java.lang.Integer">
out.writeUintvar31(value.${getter}());
<#elseif field.type = "boolean" || field.type = "java.lang.Boolean">
out.writeBoolean(value.${getter}());
<#elseif field.type = "java.lang.String">
out.writeString(value.${getter}());
</#if>
</#if>
<#assign useCollection = false>
</#if>
<#assign useField = false>
</@forAllFields>
}
}
);
}
}
</@javaSource>
</@forAllTypes>


My apologies, this template is quite hard to read. It helps to copy it into a text editor. I also added comments in the template to explain what I am doing.

Finally, here is the configuration for the maven-apt-plugin, so that it generates your code artifacts before your project is compiled (note that target/generated-sources will be merged with the real sources at compile time).


<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>apt-maven-plugin</artifactId>
<version>1.0-alpha-4</version>
<configuration>
<factory>net.sf.jelly.apt.freemarker.FreemarkerProcessorFactory</factory>
<options>
<option>template=${basedir}/src/main/resources/apt/auditable-type.fmt
</option>
</options>
<fork>true</fork>
</configuration>
<dependencies>
<dependency>
<groupId>net.sf.apt-jelly</groupId>
<artifactId>apt-jelly-core</artifactId>
<version>2.14</version>
</dependency>
<dependency>
<groupId>net.sf.apt-jelly</groupId>
<artifactId>apt-jelly-freemarker</artifactId>
<version>2.14</version>
</dependency>
</dependencies>
<executions>
<execution>
<goals>
<goal>process</goal>
</goals>
</execution>
</executions>
</plugin>


And voilà, here is our generated BoughtHouseForGoldType class fresh out of the oven:


package package.types;

import java.io.IOException;

/**
* This class contains the {@link Datatype} for {@link BoughtHouseForGold}.
*/
public class BoughtHouseForGoldType extends AbstractAuditableType<BoughtHouseForGold> {
public BoughtHouseForGoldType() {
super(
TypeCodes.BOUGHT_HOUSE_FOR_GOLD_TYPE_CODE,
new Datatype<BoughtHouseForGold>(BoughtHouseForGold.class, 7) {

@Override
public BoughtHouseForGold read(final DatatypeInput in) throws DataFormatException {
final BoughtHouseForGold value = new BoughtHouseForGold();
value.setItemId(in.readUintvar31());
value.setSceneries(in.readList(Datatype.uintvar31));
return value;
}

@Override
public void write(final DatatypeOutput out, final BoughtHouseForGold value) throws IOException {
out.writeUintvar31(value.getItemId());
out.writeCollection(Datatype.uintvar31, value.getSceneries());
}
}
);
}
}
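
The generated class references a TypeCodes constant. That class is not shown in this post, but a hypothetical sketch could look like this (the constant value is made up for illustration):

public final class TypeCodes {

    // one code per audit change type; the actual values live elsewhere in the project
    public static final int BOUGHT_HOUSE_FOR_GOLD_TYPE_CODE = 17;

    private TypeCodes() {
    }
}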