Starting Dependency WAR Artifact using maven-jetty-plugin

I have worked on a lot of Maven projects that used the maven-jetty-plugin. Normally the plugin is used to start a Jetty container with the WAR artifact produced by the current project. This works like a charm. Sometimes, however, you want to host the WAR artifact of another project. This could be the case if you are developing the client for a service that is reached via HTTP. The integration tests in the client project require the service to be running and reachable, so that these tests can verify the client code. To start the Jetty container before the integration tests and shut it down afterwards, you define two executions and bind them to the correct phases in Maven's lifecycle.
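The original snippet is missing from this extract; a minimal sketch of such a configuration, assuming the old org.mortbay.jetty plugin coordinates (version and ports are illustrative):

```xml
<plugin>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>maven-jetty-plugin</artifactId>
  <version>6.1.26</version>
  <configuration>
    <!-- stopPort/stopKey let the stop goal contact the running container -->
    <stopPort>8005</stopPort>
    <stopKey>STOP</stopKey>
  </configuration>
  <executions>
    <execution>
      <id>start-jetty</id>
      <phase>pre-integration-test</phase>
      <goals><goal>run-war</goal></goals>
      <configuration>
        <!-- daemon mode returns control to Maven so the tests can run -->
        <daemon>true</daemon>
      </configuration>
    </execution>
    <execution>
      <id>stop-jetty</id>
      <phase>post-integration-test</phase>
      <goals><goal>stop</goal></goals>
    </execution>
  </executions>
</plugin>
```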



Starting an external WAR artifact is also straightforward with the maven-jetty-plugin and its deploy-war goal.
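Again the snippet itself is missing; roughly, it looks like this (com.example:my-service is a placeholder artifact):

```xml
<plugin>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>maven-jetty-plugin</artifactId>
  <version>6.1.26</version>
  <executions>
    <execution>
      <id>start-jetty</id>
      <phase>pre-integration-test</phase>
      <goals><goal>deploy-war</goal></goals>
      <configuration>
        <daemon>true</daemon>
        <!-- full path to the external WAR; note the hard-coded version -->
        <webApp>${settings.localRepository}/com/example/my-service/1.0.0/my-service-1.0.0.war</webApp>
      </configuration>
    </execution>
    <execution>
      <id>stop-jetty</id>
      <phase>post-integration-test</phase>
      <goals><goal>stop</goal></goals>
    </execution>
  </executions>
</plugin>
```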



As you can see, the version is hard-coded in the path. This is somewhat OK as long as you are not violating the DRY principle; if you use the version somewhere else in your pom.xml, make sure to extract it into a custom Maven property. Another way, which is perhaps a bit nicer, is to use the cargo-maven2-plugin, which out of the box can start the WAR artifact of any Maven dependency declared in the dependencies section. Here is a nice example from stackoverflow.com.
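The linked example is not reproduced in this extract; the shape of such a cargo configuration is roughly the following (com.example:my-service stands in for a WAR listed in the dependencies section):

```xml
<plugin>
  <groupId>org.codehaus.cargo</groupId>
  <artifactId>cargo-maven2-plugin</artifactId>
  <configuration>
    <container>
      <containerId>jetty6x</containerId>
      <type>embedded</type>
    </container>
    <deployables>
      <!-- refers to a WAR from the dependencies section, no path needed -->
      <deployable>
        <groupId>com.example</groupId>
        <artifactId>my-service</artifactId>
        <type>war</type>
      </deployable>
    </deployables>
  </configuration>
  <executions>
    <execution>
      <id>start-container</id>
      <phase>pre-integration-test</phase>
      <goals><goal>start</goal></goals>
    </execution>
    <execution>
      <id>stop-container</id>
      <phase>post-integration-test</phase>
      <goals><goal>stop</goal></goals>
    </execution>
  </executions>
</plugin>
```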

As you can see, either the cargo-maven2-plugin or the maven-jetty-plugin can be used for the simple use cases. It gets a bit trickier if you want to start the external WAR artifact and set some system properties during startup of the container. Both the maven-jetty-plugin and the cargo-maven2-plugin allow you to define individual system properties. However, only the maven-jetty-plugin can read a pre-existing properties file instead of individual system properties. This was required in one of the projects I am working on. Copying all the system properties out of the properties file to add them as individual system properties is a lot of work and would again violate the DRY principle.
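The two variants look like this inside the maven-jetty-plugin's configuration element (property names and the file path are made up for illustration):

```xml
<configuration>
  <!-- individual properties: supported by both plugins -->
  <systemProperties>
    <systemProperty>
      <name>service.endpoint</name>
      <value>http://localhost:8080/service</value>
    </systemProperty>
  </systemProperties>
  <!-- a whole properties file: maven-jetty-plugin only -->
  <systemPropertiesFile>src/test/resources/service.properties</systemPropertiesFile>
</configuration>
```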

Also, when you are developing both the service and the client project and are not using the SNAPSHOT mechanism, it can be tedious to update the version of the service WAR artifact that gets started during the integration tests of the client project. Maven knows two fixed keywords which you can use instead of specifying an exact version or a range: use LATEST to download the latest snapshot or released version of a dependency from a repository, and use RELEASE to download the latest released version. Unfortunately you cannot use LATEST or RELEASE to start the WAR artifact of a dependency if you are using the maven-jetty-plugin. This is because you specify the location of the WAR artifact as a full path inside the webApp element of the plugin's configuration; the plugin does not use the syntax which is used to define Maven dependencies.

There is, however, a little trick. You use the maven-dependency-plugin, which uses the default syntax for dependencies and therefore understands LATEST and RELEASE, to copy the WAR artifact to a fixed location. While copying, you should also rename the WAR file. This makes your life easier, as you never have to adapt the path in the webApp element of the maven-jetty-plugin when the version changes. Here is an example:
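The original example is missing here; a sketch of the idea, again with a placeholder artifact:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-service-war</id>
      <phase>package</phase>
      <goals><goal>copy</goal></goals>
      <configuration>
        <artifactItems>
          <artifactItem>
            <groupId>com.example</groupId>
            <artifactId>my-service</artifactId>
            <!-- resolved through the normal dependency mechanism,
                 so LATEST and RELEASE work here -->
            <version>LATEST</version>
            <type>war</type>
            <!-- rename while copying, so the path below never changes -->
            <destFileName>my-service.war</destFileName>
          </artifactItem>
        </artifactItems>
        <outputDirectory>${project.build.directory}/wars</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The webApp element of the maven-jetty-plugin can then point at the fixed path ${project.build.directory}/wars/my-service.war, no matter which version was resolved.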

Test our new game

As some readers of this blog might know, I work for the EA studio Playfish. Currently we are heading into the closed beta phase of a game which I helped to develop. This is a so-called social game which is played on Facebook. The backend of the game is developed by our team in Java. If you want to be one of the first to play the game and become a beta tester, fill in this application. I am not allowed to say anything about the game at this point, except in response to this comment - yes, the game is different from Adventure World and Cloudforest Expedition, and much more fun to play.

Maybe I should have used a Lock here

Java 5 added some really nice classes in the java.util.concurrent package. For instance, there is the ConcurrentMap interface, which allows you to add an item to a Map only if it is not already contained. The code you would normally write if ConcurrentMap didn't exist needs to do this as an atomic check-then-act sequence if the Map is shared between threads. With the ConcurrentMap interface you get all of this for free using the putIfAbsent method.
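A quick illustration of the putIfAbsent contract (the key and values are made up):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, String> map = new ConcurrentHashMap<String, String>();

        // First caller wins: putIfAbsent returns null because the key was absent.
        String first = map.putIfAbsent("config.url", "http://a.example");
        // Second caller loses: the existing value is returned, nothing is overwritten.
        String second = map.putIfAbsent("config.url", "http://b.example");

        System.out.println(first);                 // null
        System.out.println(second);                // http://a.example
        System.out.println(map.get("config.url")); // http://a.example
    }
}
```

The check ("is the key absent?") and the act ("put the value") happen atomically inside the Map, so no external synchronization is needed.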

I shot myself in the foot today with a small piece of code which one of my unit tests was executing from a large number of threads. The test was failing randomly, maybe 5% of the time. If you see tests failing randomly, it often indicates a date problem or a concurrency problem. Here is the class under test. Can you spot the problem?
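The original class is not shown in this extract. Based on the description that follows, it must have looked roughly like this reconstruction - the field names, the ValueSource interface (standing in for the remote lookup done by DynamicProperty), and everything except ChangeAwareDynamicProperty, putIfAbsent and currentVersion are my invention:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ChangeAwareDynamicProperty {

    /** Stand-in for the remote lookup done by the real DynamicProperty. */
    public interface ValueSource {
        String fetch();
    }

    private final ValueSource remoteSource;
    private final ConcurrentMap<String, Boolean> updatesInProgress =
            new ConcurrentHashMap<String, Boolean>();
    private volatile String currentVersion;

    public ChangeAwareDynamicProperty(ValueSource remoteSource, String initialValue) {
        this.remoteSource = remoteSource;
        this.currentVersion = initialValue;
    }

    public String get() {
        String newValue = remoteSource.fetch();
        if (!newValue.equals(currentVersion)) {
            // Only one thread wins this race and becomes the updater thread...
            if (updatesInProgress.putIfAbsent(newValue, Boolean.TRUE) == null) {
                costlyOperation(newValue);
                currentVersion = newValue;
                // ...but removing the entry here re-opens the race: a thread
                // that was already past the equals() check above can now win
                // putIfAbsent again for the very same value.
                updatesInProgress.remove(newValue);
            }
        }
        return currentVersion;
    }

    protected void costlyOperation(String newValue) {
        // expensive reaction to the value change (not relevant here)
    }
}
```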


In order to understand what the class does, you need to know what a DynamicProperty is. It is a class wrapping a value which comes from a remote source, i.e. over the network. So instead of having fixed system properties, you ask the remote source for the value. The value is then cached for a couple of seconds and refreshed once it has expired. The ChangeAwareDynamicProperty does the same thing, but additionally reacts to value changes: every time the value changes in the remote source, there is a costly operation that the ChangeAwareDynamicProperty needs to perform. This code is not shown, as it is not relevant.

What is the ConcurrentMap for? Obviously the costly operation should only be done once per value change, right? A simple approach is to use locking: let only one thread at a time enter a critical section where the current value is compared to the new value, and if a change is detected, run the costly operation. This would totally work, but the throughput would be horrible. Especially when a thread detects a value change while holding the lock, running the costly operation would block all other threads for a while. So my idea was to use a ConcurrentMap instead of a lock. If threads detect a value change, they try to put the new value into a ConcurrentMap. By definition, only one thread can put the new value into the Map; for this thread, the putIfAbsent call will return null. This updater thread will then run the costly operation, while the other threads still return the previous value. Once the updater thread is finished, it updates the current value and removes the new current value from the ConcurrentMap. This is to prevent the Map from growing eternally. Sounds straightforward, right?

Well, obviously there was a problem, as the unit test was failing every now and then. Every time the test failed, I could see that it was trying to run the costly operation twice for the same changed value, as indicated by the following log statement:


I started to believe that the putIfAbsent method was buggy and returned null even if the value was already present in the Map. I asked a co-worker to check my code, to see if he could spot a problem. After a few minutes we realized that the problem was in my code - as was to be expected. Like I said, I don't want the Map to grow forever, so the updater thread removes the new changed value after the costly operation is finished. The problem is that another thread could be waiting for the CPU in line 14, the line that invokes putIfAbsent. So once the updater thread is done and the waiting thread gets scheduled, it will actually do exactly the same work again. Not good!

Our immediate solution was to not remove the Map entry after the updater thread is finished. What we do instead is remove the old value from the Map just before assigning the new changed value to currentVersion. This way the Map never contains more than one entry. It is still possible that the costly operation runs again for a value that has already been handled; the change only fixes the problem that a single value change can trigger consecutive executions of the costly operation.
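The corrected sequence can be sketched as a small self-contained class (the original code is not shown, so all names besides putIfAbsent and currentVersion are my invention):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class FixedChangeAwareDynamicProperty {

    /** Stand-in for the remote lookup done by the real DynamicProperty. */
    public interface ValueSource {
        String fetch();
    }

    private final ValueSource remoteSource;
    private final ConcurrentMap<String, Boolean> handledValues =
            new ConcurrentHashMap<String, Boolean>();
    private volatile String currentVersion;

    public FixedChangeAwareDynamicProperty(ValueSource remoteSource, String initialValue) {
        this.remoteSource = remoteSource;
        this.currentVersion = initialValue;
        this.handledValues.put(initialValue, Boolean.TRUE);
    }

    public String get() {
        String newValue = remoteSource.fetch();
        if (!newValue.equals(currentVersion)) {
            if (handledValues.putIfAbsent(newValue, Boolean.TRUE) == null) {
                costlyOperation(newValue);
                // Evict the OLD value before publishing the new one: the entry
                // for newValue stays in the map, so threads that lost the
                // putIfAbsent race cannot re-run the costly operation for this
                // change, and the map never holds more than one entry.
                handledValues.remove(currentVersion);
                currentVersion = newValue;
            }
        }
        return currentVersion;
    }

    protected void costlyOperation(String newValue) {
        // expensive reaction to the value change (not relevant here)
    }
}
```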

git clone and remote end hung up unexpectedly

Yesterday morning before going to work, I created a git repository for a new hobby project of mine. I have done this a couple of times before, and my git hosting provider of choice is Assembla. They offer private git repositories and I never had any trouble in the past.

After creating the repository, I tried to clone it. I need to use sudo because I clone into a directory which is not owned by me; I am using the /web directory (or rather the directories under /web) directly as the docroot for Apache.
sudo git clone git@git.assembla.com:my-new-repository.git
Initialized empty Git repository in /web/my-new-repository/.git/
Permission denied (publickey).
fatal: The remote end hung up unexpectedly

As you can see, something went wrong. I verified that my public SSH key was added to my Assembla account, and it was. I decided it was a problem on the Assembla side and tried again this morning, but the problem was still there. I created another git repository over at Bitbucket and tried again - same problem, wtf. Finally I had the idea to try cloning the repository into my user directory, and voila, it worked. So it turns out that combining sudo with SSH public/private key authentication for git does not work. There is a good explanation about it on github:
If you are using sudo with git commands (e.g. using sudo git clone because you are deploying to a root-owned folder), ensure that you also generated the key using sudo. Otherwise, you will have generated a key for your current user, but when you are doing sudo git, you are actually the root user – thus, the keys will not match.
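The mismatch is easy to see without touching the hosting provider at all: under sudo, ssh runs as root and reads keys from root's home directory, not yours. The remedies follow directly (a sketch; exact paths and key types may differ on your system):

```shell
# sudo runs git (and thus ssh) as root, so ssh offers keys from root's
# home directory instead of yours -- these two paths are not the same:
echo "your keys are read from:   $(eval echo ~"$USER")/.ssh"
echo "root's keys are read from: $(eval echo ~root)/.ssh"

# Remedy 1: generate and register a key for root as well:
#   sudo ssh-keygen -t rsa
#   sudo cat /root/.ssh/id_rsa.pub   # add this key to your hosting account
# Remedy 2: take ownership of the docroot and drop sudo entirely:
#   sudo chown -R "$USER" /web
#   git clone git@git.assembla.com:my-new-repository.git /web/my-new-repository
```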