Java Code Splitter
<br /><br />
Guten Tag! Do you like Scala, Java and coding as much as I do? I am a 41-year-old Berliner, stranded in Scandinavia. I work in gaming as a Software Engineer and every day is a day solving problems and fighting DD's.
<br /><br />
<h2>Best language features in Kotlin (for me)</h2>
<i>2019-05-09</i>
<br /><br />
I have been doing Java and Scala for many years. Recently my team switched to Kotlin and everybody is enjoying it so far. In this blog post I want to share a personal selection of language features that I really like.
<br /><br />
<h3>Smart casts on polymorphic collections</h3>
<br />
Kotlin has a super concise and safe way to narrow down a polymorphic list to a specific sub-type using the <b>as?</b> syntax.
<br /><br />
<script src="https://gist.github.com/reik-wargaming/86ea6ce1cff9c18ace76c4ef6904a976.js"></script>
<br /><br />
<h3>Logging return values</h3>
<br />
There are situations where we want to log the return value of a function before actually returning it. Usually, this is done by storing the return value in a variable, logging it and finally returning that variable. Kotlin makes this a bit simpler with the <b>.also</b> function for side effects. Using <b>.also</b>, we don't have to introduce this artificial variable.
<br /><br />
<script src="https://gist.github.com/reik-wargaming/82e974e9b02869565bfe5105532fe5cc.js"></script>
<br /><br />
... more to come.
<br /><br />
<h2>DynamoDBLocal and UnsatisfiedLinkError in Gradle</h2>
<i>2019-04-29</i>
<br /><br />
<div dir="ltr" style="text-align: left;" trbidi="on">
This week I started to work on a new project using DynamoDB. As always, I’d like to write some integration tests to verify that my datastore integration works as intended. One cheap way to test DynamoDB is using containers and <a href="https://localstack.cloud/">LocalStack</a>. However, I decided to go even simpler and give <a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.Maven.html">DynamoDBLocal</a> a spin. This is just a library for your tests to depend on - super easy to integrate into Gradle or Maven projects.
<br />
<br />
Unfortunately, using DynamoDBLocal is not as straightforward as it sounds. Relatively soon you might hit an UnsatisfiedLinkError related to SQLite - similar to:
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-z-bWzpTgxXI/XMa4D8TKjQI/AAAAAAAAskE/gD2zAqtE8Vs18VChKlSpcoCYfXcK40AMACLcBGAs/s1600/sqllite_linker_errors.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;" target="_blank"><img border="0" data-original-height="380" data-original-width="1437" height="85" src="https://2.bp.blogspot.com/-z-bWzpTgxXI/XMa4D8TKjQI/AAAAAAAAskE/gD2zAqtE8Vs18VChKlSpcoCYfXcK40AMACLcBGAs/s320/sqllite_linker_errors.png" width="320" /></a></div>
<br /><br />
To fix this in a Gradle build, we modified our test task to copy the native binaries into place and to expose a system property to the tests.
<br />
<br />
<script src="https://gist.github.com/reik-wargaming/f6f1cc8ded1204f424d6eb900f70eff0.js"></script>
<br />
One last pitfall might be your IntelliJ IDE. Delete all existing test run configurations and make sure to run your tests using the Gradle runner.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-5Y4mBneqGc4/XMa45FQzVxI/AAAAAAAAskM/wibjXzFz5AUe3dmI3W1UpaqJGhSy8Fi6wCLcBGAs/s1600/intellij_gradle_test_runner.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;" target="_blank"><img border="0" data-original-height="652" data-original-width="1026" height="203" src="https://2.bp.blogspot.com/-5Y4mBneqGc4/XMa45FQzVxI/AAAAAAAAskM/wibjXzFz5AUe3dmI3W1UpaqJGhSy8Fi6wCLcBGAs/s320/intellij_gradle_test_runner.png" width="320" /></a></div>
</div>
<h2>ThriftMux Unit Test Roundtrip</h2>
<i>2017-02-14</i>
<br /><br />
I have decided to write this blog post mainly for my own reference. Today I was working on a Thrift service that I wanted to start and test within a unit test. All our services use the <a href="https://twitter.github.io/finagle/">Finagle RPC framework</a> with <a href="https://twitter.github.io/finagle/guide/Protocols.html#mux">ThriftMux</a> as transport. I will show you how to find a free port on your machine to start the server on and then create a client to invoke that server. Given this Thrift IDL file,
<br /><br />
<script src="https://gist.github.com/reikje/3b4adb56a2645d8ea3ab10c5fed67e03.js"></script>
<br /><br />
compiled with <a href="https://twitter.github.io/scrooge/">Scrooge</a> or the <a href="https://twitter.github.io/scrooge/SBTPlugin.html">scrooge-sbt-plugin</a>, this is a <a href="http://www.scalatest.org/">ScalaTest</a> that does exactly that.
<br /><br />
<script src="https://gist.github.com/reikje/898d3050cd960b2e7d47be3e02d187cd.js"></script>
<br /><br />
<h2>Dynamic Per-Request Log Level</h2>
<i>2017-01-26</i>
<br /><br />
Looks like it took me three years to write something new for this blog. Kids and other cool stuff got in the way - you know the score! I should also say that I haven't done anything in Java in the past three years (almost). So to be more accurate, I should rename the blog to <b>jvmsplitter.blogspot.com</b> but whatever - I just drop my <b>Scala blog posts</b> here anyways!
<br /><br />
This week I implemented a great idea that a co-worker came up with. Have you ever been in a situation where you have a running system with a bunch of microservices in production and suddenly something doesn't work as expected? In the world of <a href="https://www.battlefield.com/games/battlefield-1/battlepack">Battlefield</a>, examples could be that players cannot purchase Battlepacks on PS4 or that matchmaking stopped working. So how do you find the root cause of the problem? Right, you want to look at some logs to get additional information. The only problem is that your log level in production is usually quite high, for instance WARN or even ERROR - otherwise the amount of logging would just be too much. Wouldn't it be great to alter the <b>log level dynamically</b> on a <b>per-request basis</b>? This would allow you to test in production using TRACE logging – for just your test user. <a href="https://twitter.github.io/finagle/guide/Contexts.html">Finagle contexts</a> to the rescue!
<br /><br />
Here at <a href="http://www.dice.se">DICE</a> we have built all our Scala backend services on <a href="https://twitter.github.io/finagle/">Twitter's Finagle framework</a> – which is similar to the <a href="https://tokio.rs/">Tokio Framework</a> in Rust if you have used that. In a nutshell, Finagle is an RPC framework on the JVM with built-in support for various transport protocols, load balancing, service discovery, backoff etc. One semi-hidden feature of Finagle is the broadcast context. Think of the broadcast context as a ThreadLocal that is sent along with every request through an RPC graph - from microservice to microservice. Finagle itself uses this internally, for instance to send a unique <a href="https://twitter.github.io/finagle/guide/Contexts.html#current-trace-id">trace id</a> along with each request. In my implementation, I have used the broadcast context to allow for a per-request log level override. Let's get our hands dirty! The first thing you want to implement is a new <a href="https://github.com/twitter/finagle/blob/develop/finagle-core/src/main/scala/com/twitter/finagle/context/Context.scala">Key</a> that Finagle can send over the RPC graph.
<br /><br />
<script src="https://gist.github.com/reikje/21d26ee248a29f44e8e3713c39162169.js"></script>
<br /><br />
Essentially, each Key needs to implement two methods, marshal and unmarshal, so that Finagle knows how to convert the Key's value to and from a byte array. I am not sharing this code here, but if you want to see how to unit test your code, Finagle has an <a href="https://github.com/twitter/finagle/blob/develop/finagle-thriftmux/src/test/scala/com/twitter/finagle/thriftmux/EndToEndTest.scala">example</a>. Now that we have a class for the log level override defined, we need code to write the override into the broadcast context as well as code to read it back out.
<br /><br />
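If you have not seen a broadcast Key before, the marshalling half typically looks something like this - a minimal sketch, where the id string and the single-byte encoding are my own assumptions, not the code from the gist above:
<br /><br />
<pre>
import com.twitter.finagle.context.Contexts
import com.twitter.io.Buf
import com.twitter.util.Try

case class LogLevelOverride(level: Int)

// A broadcast Key must define how its value is (un)marshalled, because the
// value travels across process boundaries with every request
object LogLevelOverride extends Contexts.broadcast.Key[LogLevelOverride]("com.example.loglevel") {
  def marshal(value: LogLevelOverride): Buf =
    Buf.ByteArray.Owned(Array(value.level.toByte))

  def tryUnmarshal(body: Buf): Try[LogLevelOverride] = Try {
    LogLevelOverride(Buf.ByteArray.Owned.extract(body)(0).toInt)
  }
}
</pre>
<br /><br />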
In most system architectures you have one system on the outside of your cluster. Here at DICE we call this system the gateway and it is the only service that is accessible from the public internet. All requests arrive at the gateway and it is the root node in the RPC graph. In other words, the gateway calls other microservices, which might call other microservices and so on. The most logical place to set a log level override is inside a <a href="https://twitter.github.io/finagle/guide/ServicesAndFilters.html">Finagle Filter</a>. I haven't actually written the Filter yet, but it would look similar to this.
<br /><br />
<script src="https://gist.github.com/reikje/4c258374216a05e16959e7e791d61b40.js"></script>
<br /><br />
You have to be very careful with the Filter as this code is executed for every request entering your system! Now that we have code to set a log level override into the broadcast context, let's actually use it somewhere. To make this as seamless as possible for the developers, it is helpful if all your microservices share the same logging setup. For instance, we use slf4j with logback, and the <a href="https://logback.qos.ch/apidocs/ch/qos/logback/classic/LoggerContext.html">LoggerContext</a> is set up programmatically inside a trait that every microservice is using (btw. our services follow the <a href="https://twitter.github.io/twitter-server/">Twitter Server template</a>).
<br /><br />
<script src="https://gist.github.com/reikje/9969cc6d7c270f150204c8500cec0bc0.js"></script>
<br /><br />
As you can probably guess by now, reading from the broadcast context and actually using the override is wrapped inside a logback <a href="https://logback.qos.ch/manual/filters.html#TurboFilter">TurboFilter</a>. Logback consults the filter for every log event, and you can use this to decide whether something should be logged or not. The following filter reads from the broadcast context and then makes a decision based on a potential override.
<br /><br />
<script src="https://gist.github.com/reikje/fbd37cea3f36f6bb74a6c688fa4af2f7.js"></script>
<br /><br />
<b>Conclusion</b>: you can use Finagle's broadcast context to transport a log level override through an RPC graph. You need some service to set the override in the context. It is helpful if this system is on the outside of your architecture and preferably uses HTTP. With HTTP it is easy to write a Finagle Filter and base the override on the HTTP request, e.g. by looking at the HTTP headers. Finagle transports the override magically through your RPC call graph, and any microservice can use the override for logging decisions. To make this as simple as possible, encapsulate this decision logic in some code that is shared between all your microservices.
<br /><br />
<h2>Two Scala Serialization Examples</h2>
<i>2014-04-11</i>
<br /><br />
In the last two days I’ve been looking into ways to serialize and deserialize some Scala objects. I tested a few suggestions that were mentioned in <a href="http://stackoverflow.com/questions/7590557/simple-hassle-free-zero-boilerplate-serialization-in-scala-java-similar-to-pyt">this post on Stack Overflow</a>. As a reference for myself (and because sometimes it is hard to find good examples) I am adding two examples for <a href="https://github.com/scala/pickling">Scala Pickling</a> and <a href="https://github.com/twitter/chill">Twitter Chill</a>. Let’s have a basic SBT project first.
<br /><br />
<script src="https://gist.github.com/reikje/10480671.js"></script>
<br /><br />
Since I work with the Battlefield franchise let’s create some domain classes that we are going to serialize and deserialize.
<br /><br />
<script src="https://gist.github.com/reikje/10480714.js"></script>
<br /><br />
The first candidate will be <a href="https://github.com/scala/pickling">Scala Pickling</a>. The following code pickles a List of 3000 random <i>WeaponAccessory</i> instances.
<br /><br />
<script src="https://gist.github.com/reikje/10480805.js"></script>
<br /><br />
Unfortunately, the code doesn't even compile. Scala Pickling uses macros and advanced compiler features, and trying to compile Pickling.scala fails. Also, people are encouraged to depend on a SNAPSHOT version, which means you are always depending on the latest patches. When I wrote this blog post I hit this issue. <b>Verdict</b>: scala-pickling is very easy to use and works great for very simple stuff. As soon as your object graph gets a bit more complicated, you will hit weird errors. Another problem is the lack of a non-SNAPSHOT version.
<br /><br />
The second test candidate was <a href="https://github.com/twitter/chill">Twitter Chill</a>, which is based on Kryo. chill-scala adds some Scala-specific extensions. Your SBT project should depend on chill directly, which contains the code from chill-scala (which isn’t published separately). Even though they don’t have Scala examples in their GitHub documentation, and I got some cryptic errors at first when doing things wrong, I have to say this is an awesome library that works great! Also, the authors reply fast on <a href="https://twitter.com/posco">Twitter</a>. <b>Verdict</b>: highly recommended!
<br /><br />
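For completeness, pulling chill into an SBT build is a one-liner (the version shown is illustrative for the time of writing):
<br /><br />
<pre>
// build.sbt
libraryDependencies += "com.twitter" %% "chill" % "0.3.6"
</pre>
<br /><br />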
<script src="https://gist.github.com/reikje/10480931.js"></script>Reikhttp://www.blogger.com/profile/07180713208158950018noreply@blogger.com0tag:blogger.com,1999:blog-1493353025088627001.post-56195000985553337102014-04-11T07:47:00.000-07:002014-04-11T07:47:44.434-07:00SBT and faster RPM packagingWe do a lot of Scala coding nowadays and I am trying to introduce SBT as build tool to all our new Scala projects. When we deploy these applications to Amazon EC2 nodes, we use <a href="http://docs.opscode.com/chef_solo.html">Chef Solo</a> and the <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html">Instance User Data</a> feature to install an RPM file. We don’t use custom AMI’s. The RPM file is hosted in S3 and made available as package via <a href="https://github.com/seporaitis/yum-s3-iam">this yum plugin</a>. Each time we build our project via our continuous integration server (Bamboo), a new RPM package is created and uploaded to S3.
<br /><br />
It became more and more of a problem that building this particular application in Bamboo took a long time - the build plan ran for more than 10 minutes. So yesterday I spent some time making it build a bit faster.
<br /><br />
First of all, I have to say it is pretty lame that the <a href="https://jira.atlassian.com/browse/BAM-13592">SBT plugin has been broken</a> in Bamboo since version 4.4.3 and no one from Atlassian has been interested in fixing it since August 2013! I tried to fix the Bamboo plugin myself, but Atlassian has some non-public Maven repositories so I couldn’t even build it. Given that the top four Java/Scala build tools are Ant, Maven, Gradle and SBT, you could also say that Bamboo is currently somewhat 25% broken. Anyway, a workaround is to use the Script Task in a Job and run SBT, which is what we do currently.
<br /><br />
When I looked at our build, there were basically two steps which took a long time. First, we were creating a big one-jar (sometimes also called an uber-jar). This is a single jar file that contains all compiled classes from all dependencies as well as our own classes. To create the uber-jar we used the <a href="https://github.com/sbt/sbt-assembly">sbt-assembly plugin</a>, which can run for a bit if you have a lot of dependencies. But actually you don’t need a single big jar file, as you can add an entire directory to the Java classpath when starting an application. So I switched to a plugin called <a href="https://github.com/xerial/sbt-pack">sbt-pack</a>, which dumps the jar files of all managed dependencies into a folder under target along with your project jar. This folder is then used later when building the RPM. Not using the sbt-assembly plugin to create a single uber-jar already saved us about 2 minutes of build time.
<br /><br />
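Wiring in sbt-pack is minimal (coordinates as published by xerial; the version is illustrative):
<br /><br />
<pre>
// project/plugins.sbt
addSbtPlugin("org.xerial.sbt" % "sbt-pack" % "0.6.1")
</pre>
<br /><br />
Running <b>sbt pack</b> then collects your project jar plus all managed dependency jars into that folder under target.
<br /><br />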
The second change addressed the creation of the actual RPM package. Previously we were using the <a href="https://github.com/sbt/sbt-native-packager">SBT native packager</a> to assemble the RPM file. Unfortunately, it was also not running very fast. <a href="https://github.com/sbt/sbt-native-packager/issues/103">Another big issue</a> in Bamboo was that the sbt-native-packager logs some output to stderr. This failed the build because Bamboo scans the build log for errors. (Our hack around this issue was to write an SBT task that logs 250 lines of “Build Successful” into the Bamboo log - what a mess.) Today the RPM is built <a href="https://github.com/jordansissel/fpm">using fpm</a>. On your Bamboo server you need to install fpm, which is a Ruby Gem (gem install fpm). Then install Python and the fabric library.
<br /><br />
And here is how we use fabric and fpm. In the root of your Scala project create a folder called build. Inside this folder store the following file:
<br /><br />
<script src="https://gist.github.com/reikje/10474315.js"></script>
<br /><br />
You probably want to adapt <i>projectname</i>, <i>packagename</i> and the <i>fpm settings</i> to match your own project. To invoke the script during a build, create a Script task in Bamboo that executes: <b>fab -f build/fabfile.py build</b>. When the script is executed from Bamboo, it looks for a file called version.txt in the build folder. The file version.txt needs to be created upfront via SBT to propagate the project version to the Python script. This is what the custom <b>rpmPrepare</b> task does.
<br /><br />
<script src="https://gist.github.com/reikje/10474573.js"></script>
<br /><br />
The rpmPrepare task reuses a SettingKey called branchName which contains the name of the branch in GitHub. The name of the RPM package will contain the branch name, so that you can build multiple branches of the same project in Bamboo in parallel without having to worry about version clashes. The branchName setting in SBT is retrieved via either a system property or an environment variable called “branchName”, as sketched below. This variable is set from Bamboo. Each build plan in Bamboo is made of individual tasks, and for a task you can set individual environment variables. So just add <i>-DbranchName=${bamboo.repository.branch.name}</i> and Bamboo will feed the GitHub branch name into the task.
<br /><br />
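The retrieval logic for such a setting boils down to a few lines of build code (a sketch; the "master" fallback is my own assumption):
<br /><br />
<pre>
// build.sbt
val branchName = settingKey[String]("Name of the Git branch this build runs on")

// Prefer -DbranchName=..., fall back to the environment variable, then to "master"
branchName := sys.props.get("branchName")
  .orElse(sys.env.get("branchName"))
  .getOrElse("master")
</pre>
<br /><br />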
So after running the Python script you will have the RPM file in the WORK_DIR folder. For running Java command-line applications we use <a href="http://supervisord.org/">Supervisor</a>. Here is an example of how to invoke a Main class, given that the RPM installs your project in /opt/projectname.
<br /><br />
<script src="https://gist.github.com/reikje/10474745.js"></script>Reikhttp://www.blogger.com/profile/07180713208158950018noreply@blogger.com0tag:blogger.com,1999:blog-1493353025088627001.post-7498269642984911192014-02-12T04:32:00.000-08:002014-02-12T04:32:08.088-08:00Publishing from SBT to NexusI am pretty new to <a href="http://www.scala-sbt.org/">SBT</a>. Yesterday, for the first time, we wanted to publish the jar artifact of an in-house utility library into our private <a href="http://www.sonatype.org/nexus/">Nexus repository</a>. This is an internal Nexus repository which we use mostly in Java projects build with Maven. While the task of publishing an artifact from SBT is well documented, it was not working right away. We hit some problems. Some answers to these problems we found on Stackoverflow, but some things we needed to figure out ourselves.
<br /><br />
To prepare your build in SBT, basically <a href="http://www.scala-sbt.org/release/docs/Detailed-Topics/Publishing.html">do these things</a>: add values for the <b>publishTo</b> Setting and the <b>credentials</b> Task. I recommend using a credentials file not under version control, for obvious reasons. The first thing you want to verify is that you are using the correct “realm” value, which can be either a property in the credentials file or the first argument to the constructor of the <a href="http://www.scala-sbt.org/release/sxr/sbt/Credentials.scala.html">Credentials</a> class. Use curl to figure out the correct value as explained here: send a POST to the Nexus repository which you want to publish to, without any authentication arguments. For us this was the call.
<br/><br />
<script src="https://gist.github.com/reikje/8954670.js"></script>
<br/><br />
Look for the <b>WWW-Authenticate</b> header and use the realm value. I think the default is “Sonatype Nexus Repository Manager”.
<br/><br />
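Putting the pieces together, the setup looks roughly like this (host, paths and credentials are placeholders to adapt):
<br/><br />
<pre>
// ~/.ivy2/.credentials
realm=Sonatype Nexus Repository Manager
host=nexus.example.com
user=deployment
password=secret

// build.sbt
credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")

publishTo := Some("releases" at "http://nexus.example.com:8081/nexus/content/repositories/releases")
</pre>
<br/><br />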
This was a step in the right direction but we still got the following error in SBT:
<br/><br />
<script src="https://gist.github.com/reikje/8954691.js"></script>
<br/><br />
Not super useful, but more info is actually available in the Nexus logfiles. Make sure you set the log level to DEBUG via the Nexus admin GUI first, then tail <b>nexus.log</b> while you try to publish from SBT. Here is some output in nexus.log, basically saying that SBT did not send a value for username and password as part of the Basic Authentication.
<br/><br />
<script src="https://gist.github.com/reikje/8954716.js"></script>
And I was using the following build.sbt file:
<br/><br />
<script src="https://gist.github.com/reikje/8954731.js"></script>
<br/><br />
After running a few tests, I figured out that the second argument to the <b>sbt.Credentials</b> class should only be the host and must not include the port – <b>doh</b>! After fixing this, everything worked just fine. Another thing you want to check via the Nexus admin GUI is the Access Settings of your repository. For “Deployment Policy” we have set it to “Allow Redeploy”.
<br /><br />
<h2>Dynamic Type System Trouble</h2>
<i>2014-01-17</i>
<br /><br />
This week I really learned to appreciate my Java compiler. I learned it the hard way – by not using it. In the last game that we released (Battlefield 4), I implemented a feature which suggests 3 game items for our players to progress on, i.e. a weapon to unlock, an assignment that should be finished etc. Our internal name for this feature is “Suggestions”. A player would not only see these 3 items but also see his own progress towards reaching the suggestion goal of each item. The code that calculates the 3 items has become quite complex, since there are a lot of different item types that we can pick from and we need to match each player individually. The code is written in Python, my favorite language at this point, which uses a dynamic type system.
<br /><br />
The “Suggestions” feature was tested thoroughly and worked quite well in production. I implemented some additional functionality on top: players now also had the opportunity to manually pick individual items, so they could see their progress in the game and on our companion website <a href="http://battlelog.battlefield.com/">Battlelog</a>. Unfortunately, after a few weeks <a href="http://www.reddit.com/r/battlefield_4/comments/1v81r1/battlelog_sure_has_the_best_suggestions/">players complained</a> about strange problems. These players would see completely random items being suggested to them – even with the progression totally being off. In some cases, players got items suggested that they had already completed or unlocked. These errors happened completely at random, and we were not able to reproduce them in any of our test systems. But it was happening mostly to players that played the game a lot. So I started to investigate.
<br /><br />
No unit test was broken, and even a long code review did not surface any problems. Fortunately, we have very short release cycles, so I added some additional logging to this functionality, which was released to production earlier this week. This finally got me something! I could see that in some rare cases the function which calculates the suggested items for a player returned not just 3 but more: 4, 5, 6, sometimes 9 items! I am posting a ridiculously simplified version of the code below. Try to spot the problem.
<br /><br />
<script src="https://gist.github.com/reikje/8475624.js"></script>
<br /><br />
I should also tell you that an instance of the SuggestionService is shared. The service is used in an application which uses <a href="http://www.gevent.org/">gevent</a>. There are many <a href="http://greenlet.readthedocs.org/en/latest/">Greenlets</a> (lightweight threads) which call the suggest method simultaneously. Ring ring – multithreading issue! The problem is in line 10, where two parentheses are missing. Instead of creating an instance of the ProgressSuggestions class every time the suggest method is called, the code gets a reference to the ProgressSuggestions class itself and assigns it to a variable called progress. Then, on the first invocation, it dynamically adds a suggestions field to that class - something that would neither be possible nor compile in a statically typed language like Java. All Greenlets modify the same class object, so players’ suggestions can overwrite each other. The simple fix is to create an instance of the ProgressSuggestions class as was intended. I am surprised that this bug could live so long. In a real multithreaded application this would have affected many more players; Greenlets are only semi-parallel and must yield at a bad time to trigger this problem. Here is the correct version.
<br /><br />
<script src="https://gist.github.com/reikje/8476073.js"></script>Reikhttp://www.blogger.com/profile/07180713208158950018noreply@blogger.com0tag:blogger.com,1999:blog-1493353025088627001.post-33366777857657292342013-12-20T06:57:00.000-08:002013-12-20T06:57:01.735-08:00Akka and parameterized mutable Actor stateAfter doing just Python for almost one year, I am back on the JVM with some recent Scala projects. In one of the projects I had the chance to try <a href="http://akka.io/">Akka</a> for the first time – which is an amazing library. In one of my Actors, and I think this is quite a common use case, I needed to run some initialization logic based on the Actors constructor arguments. During construction, the Actor would initialize an object that was expensive to create. This object would then be re-used in the receive method of the Actor.
<br /><br />
I knew that Actor instances were shared, i.e. multiple calls to the receive method would be done on the same Actor object. So, being new to Akka, I was afraid of having shared mutable state within my Actor, and I was researching a better way to do the initialization than just having a mutable field. This is when I found out about the <a href="http://doc.akka.io/docs/akka/snapshot/scala/fsm.html">FSM</a> (Finite State Machine) trait. It is a perfect way to model initialization. I created two States for my Actor (if you want to do initialization in multiple Actors, it’s a good idea to keep the common states, data holders and initialization messages in a separate object).
<br /><br />
<script src="https://gist.github.com/reikje/8055466.js"></script>
<br /><br />
Individual states and data holders are then created in the individual Actors. The parent (supervisor) would then create the Actor and send an Initialize message, which would in turn create the expensive object. The Actor would then move itself to the next state and be ready to receive further messages.
<br /><br />
<script src="https://gist.github.com/reikje/8055543.js"></script>
While this is a very nice way to model initialization, one big problem became apparent – restarts. As soon as my Actor failed with an exception in the Initialized state, the parent Actor would restart it in the New state. This made the Actor pretty much unusable. One potential solution is probably the <a href="http://doc.akka.io/docs/akka/snapshot/scala/actors.html">lifecycle</a> methods. I could have overridden the postRestart method in my Actor, where I have access to the constructor arguments, to send an initialization message to myself. But against my gut feeling, I decided to use a mutable field instead.
<br /><br />
As I learned later, even though multiple threads share the same Actor instance, Akka guarantees that only a single thread at a time will handle a message in the receive method (also called <a href="http://doc.akka.io/docs/akka/snapshot/general/jmm.html">the Actor subsequent processing rule</a>). So now I set the mutable field to None (an Option type), and on the first message that arrives the field is properly initialized to a Some, as sketched below. This works fine but throws up some interesting questions. Since Akka is using Dispatchers (thread pools), subsequent messages in an Actor are most likely handled by different threads. In Java, changes to fields of shared objects done in one thread are not always visible to other threads (unless the field is volatile, the modification is done in a synchronized code section or in a section guarded by a Lock). Apparently this is not a problem for Akka.
<br /><br />
<blockquote>In layman’s terms this means that changes to internal fields of the actor are visible when the next message is processed by that actor and you don’t need to make the fields volatile.</blockquote>
<br /><br />
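A minimal sketch of that lazy-init pattern (ExpensiveClient and Request are stand-ins, not the real classes):
<br /><br />
<pre>
import akka.actor.Actor

case class Request(payload: String)
class ExpensiveClient(endpoint: String) { def handle(r: Request): Unit = () } // stand-in

class Worker(endpoint: String) extends Actor {
  // Starts out empty and is initialized lazily on the first message
  private var client: Option[ExpensiveClient] = None

  def receive = {
    case r: Request =>
      // No volatile or locking needed: Akka guarantees a happens-before
      // relationship between subsequent messages processed by the same actor
      val c = client.getOrElse {
        val created = new ExpensiveClient(endpoint)
        client = Some(created)
        created
      }
      c.handle(r)
  }
}
</pre>
<br /><br />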
Unfortunately, it is not further explained how Akka achieves this. The visibility problem DOES exist in Akka if Actors contain fields that are modified when receiving a message (e.g. a val holding a mutable ArrayBuffer whose elements are added and removed in the receive method). In that case, how does Akka make sure that those changes are seen by other threads when the next message arrives? In my application at least, I had one issue which seemed to be a visibility problem. Unfortunately, until now I wasn’t able to isolate and reproduce this problem in a unit test :( Here is what I have so far (some parts need to be added).
<br /><br />
<script src="https://gist.github.com/reikje/8055689.js"></script>
<br /><br />
I still have to fill the gap and do the HTTP POST. What I have seen is a print indicating that a smaller batch has been pushed out – which can ultimately only be a visibility issue. My guess is that the culprit is either my asynchronous POST using the Dispatch library or the way clear() is implemented in the ArrayBuffer class. I am still investigating; for now, this change got rid of the problem for me.
<br /><br />
<script src="https://gist.github.com/reikje/8055792.js"></script>Reikhttp://www.blogger.com/profile/07180713208158950018noreply@blogger.com1tag:blogger.com,1999:blog-1493353025088627001.post-55948826251443174872013-07-02T08:46:00.000-07:002013-07-02T08:46:17.392-07:00Generating REST docs with Scala and FinatraMore than 2 years ago I wrote a Blog post about Enunciate - a tool which helps you to generate a nice documentation for your REST API if you use Java and JAX-RS. I like documentation that exists very close to the code and is created and updated while you implement the main functionality. This kind of documentation has also been recommended in the <a href="http://pragmatictips.com/68">Pragmatic Programmer</a> book.
<br /><br />
I have not been using JAX-RS and Servlets in a while. We are currently implementing most of our REST APIs on top of <a href="http://twitter.github.io/finagle/">Finagle</a>, an RPC system created in the Twitter software forge that runs on <a href="http://netty.io/">Netty</a>. While it is possible to use Finagle directly together with Scala path matching for the routes, I could not find a clever way to keep self-updating documentation close to the code. Fortunately, there is another Twitter project called <a href="https://github.com/capotej/finatra">Finatra</a>, which puts a <a href="http://www.sinatrarb.com/">Sinatra</a>/<a href="http://flask.pocoo.org/">Flask</a>-like web framework on top of Finagle. Finatra will not only make it easier to define resources and routes but also help you with the documentation.
<br /><br />
Here is how you typically define a route in Finatra:
<br /><br />
<script src="https://gist.github.com/reikje/5910280.js"></script>
<br /><br />
For the documentation itself I am using <a href="https://developers.helloreverb.com/swagger/">Swagger</a>, which can generate HTML from annotations. Swagger already comes with a bunch of useful annotations. Unfortunately, some annotations, like a @Path equivalent, were missing, so I was forced to use some <a href="http://jcp.org/en/jsr/detail?id=311">JSR-311</a> (JAX-RS) annotations instead, even though we are not using JAX-RS for the API. Here is the evolution of the Finatra controller from above with the Swagger and JSR-311 annotations added. As you can see, it was necessary to move the routes from the constructor into separate methods that can be annotated. This makes the Scala code a bit uglier and harder to read, especially if you have a lot of annotations in place. But hey, you will love the outcome.
<br /><br />
<script src="https://gist.github.com/reikje/5910314.js"></script>
<br /><br />
The final step is to generate the documentation during our Maven build. We are using the maven-swagger-plugin for that. I even copied and customized the strapdown.html.mustache from the plugin into our project, so that we could tweak the generated documentation and use another Twitter Bootstrap theme instead.
<br /><br />
<script src="https://gist.github.com/reikje/5910357.js"></script>
<br /><br />
The outcome will be a generated docs.html file in the target folder of your build. The docs.html will contain autoreplaced.com as the path - which was specified in the maven-swagger-plugin. I normally replace “autoreplaced.com” with JavaScript (something that can easily be done if you use your own Mustache template). Also, it is nice to have Finatra render the docs.html file.
<br /><br />
<script src="https://gist.github.com/reikje/5910439.js"></script>Reikhttp://www.blogger.com/profile/07180713208158950018noreply@blogger.com0tag:blogger.com,1999:blog-1493353025088627001.post-72257131459290419562013-01-25T14:31:00.000-08:002013-01-30T02:15:43.770-08:00Embedded Cassandra with Python and JavaTesting is important. When developing applications based on <a href="http://cassandra.apache.org/">Cassandra</a> and Java, you have a lot of options that help you testing your code during development. Unfortunately when using Python it is not as great. For instance there is no straight-forward solution to start an embedded Cassandra server from Python, that your unitests (or rather integration tests) can communicate with. The good news is, starting a Java process from Python code is dead easy. Using this hybrid-approach, we can easily write Cassandra integrated unittests under Python.
<br /><br />
First of all, I took the existing <a href="https://github.com/jsevellec/cassandra-unit">cassandra-unit</a> library and tweaked it. Normally, when starting embedded Cassandra via cassandra-unit, you specify a configuration file (cassandra.yaml) and optionally a temporary directory. Cassandra-unit then loads the file as a classpath resource and copies the contents to a new file in the specified temp directory (default target/embeddedCassandra). Hence the file has to be on the classpath and the path has to be relative. I thought it would be much nicer if you could instead pass an absolute path and have the configuration file used directly from where it is located. That way, we could later modify the Cassandra config file from Python the way we wanted and also put it in its final location. So the first thing you want to do is clone the tweaked <a href="https://github.dice.ad.ea.com/RSchatz/embedded-cassandra-starter">embedded-cassandra-starter</a> code from git and create a jar artifact by running (yes, you need to use Maven)
<br /><br />
<script src="https://gist.github.com/4638478.js"></script>
<br /><br />
The outcome of this will be a jar file that you can put into your Python project, e.g. under resources/cassandra. The next thing you need is a vanilla Cassandra configuration file (<a href="http://code.google.com/p/cassandra-examples/source/browse/trunk/cassandra.yaml">cassandra.yaml</a>) in your project. We also have that one checked in along with the jar file under resources/cassandra, and we called it <a href="https://github.com/reikje/embedded-cassandra-starter/blob/master/cassandra.yaml.template">cassandra.yaml.template</a>.
<br /><br />
We use the .template extension because the file contains two placeholders (<b>{{CASSANDRA_DIR}}</b> and <b>{{CASSANDRA_PORT}}</b>) which will be replaced later. A great co-worker of mine then wrote a Python class called EmbeddedCassandra. This class will, in its __init__ method, find an available port and create a random directory in the system's temporary directory (let’s call it the work directory for now). EmbeddedCassandra also has a start and a stop method. The start method copies the configuration template file into the work directory and replaces the two placeholders mentioned above. Finally, it starts a new Java process using the subprocess module. It basically invokes the jar file that we built earlier in the same way as you would from the command line (an example can be found here). The stop method in EmbeddedCassandra brings down the process and does some cleanup.
<br /><br />
<script src="https://gist.github.com/4638398.js"></script>
<br /><br />
All of this is now wrapped into an EmbeddedCassandraTestCase class, which acts as a base class for unit tests that want to test against Cassandra. This class invokes start and stop in its setUp and tearDown methods.
<br /><br />
<script src="https://gist.github.com/4638412.js"></script>
<br /><br />
So now you are able to write some nice Python unit tests (or rather integration tests) against Cassandra, for instance using the great <a href="https://github.com/pycassa/pycassa">Pycassa</a> library. Here is a simple example.
<br /><br />
<script src="https://gist.github.com/4638413.js"></script>Reikhttp://www.blogger.com/profile/07180713208158950018noreply@blogger.com0tag:blogger.com,1999:blog-1493353025088627001.post-84464623631433146562013-01-08T14:00:00.006-08:002013-01-08T14:03:44.169-08:00Investigating Cassandra Heap<div dir="ltr" style="text-align: left;" trbidi="on">
We are working on a new application which will use Apache Cassandra. Yesterday a co-worker sent me the following warning, which we kept seeing in the logs every now and then on several nodes. I was asked if this was something to worry about.
<br />
<br />
<blockquote>
WARN [ScheduledTasks:1] 2013-01-07 12:14:10,865 GCInspector.java (line 145) Heap is 0.8336618755935529 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically.</blockquote>
<br />
<br />
The warning is a bit misleading, as you will see in a bit - but hey, using 83% of your JVM heap memory should always ring at least some alarm bells. Since I haven’t used Cassandra that much, I needed to investigate how it uses its heap memory. We are using Datastax Community Edition 1.1.x, so the first place to look for more information was Opscenter. But it didn’t give me much information about the heap. Next, I went into one cluster node via SSH to see if I could get some stats out via JMX, as I was suspecting a big cache to be the problem. For the first time I used <a href="http://wiki.cyclopsgroup.org/jmxterm">jmxterm</a> instead of commandline-jmxclient. To get some numbers for Cassandra's key and row cache via JMX, you can do this:
<br />
<br />
<script src="https://gist.github.com/4488285.js"></script>
<br />
Obviously we were running defaults for the two caches. The <a href="http://www.datastax.com/dev/blog/maximizing-cache-benefit-with-cassandra">key cache</a> was very small and the row cache was not even enabled. By default, Cassandra 1.1 assigns 5% of the JVM heap memory to the key cache, though never more than 100 MB. As a next step I wanted to find out how the heap memory was actually used, so I ran <b>jmap -heap `pgrep java`</b> as explained <a href="http://prefetch.net/blog/index.php/2007/10/27/summarizing-java-heap-utilization-with-jmap/">here</a>. Make sure you have only one Java process running, otherwise feed the pid manually to jmap. Note: doing a heap dump to file wasn't such a great idea. It stopped after about 20 minutes; at that point the dump file was 2.7 GB big and the node had already left the cluster.<br />
<br />
Apparently 2.8 GB of our 4 GB heap were used in the old generation (also called the concurrent mark and sweep generation if a CMS GC algorithm is in use). The old generation contains objects that have survived a couple of collections in the various stages of the young generation. After reading <a href="http://blog.mikiobraun.de/2010/08/cassandra-gc-tuning.html">this blog post</a> about Cassandra GC tuning and this description from Oracle, I was thinking that the old generation might be filled because the JVM never did a major collection. Apparently, if <b>-XX:CMSInitiatingOccupancyFraction</b> is not changed via the <b>JAVA_OPTS</b>, a major collection would only be issued at <a href="http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html#cms.starting_a_cycle">approximately 92% of usage</a>. So if Cassandra was flushing the largest memtable every time at 75% heap usage (the 0.75 default for flush_largest_memtables_at in cassandra.yaml), it would free heap memory, thereby preventing a concurrent major collection.
<br />
<br />
Then, however, I realized that we were still running with the default value for <a href="http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-0-improved-memory-and-disk-space-management">memtable_total_space_in_mb</a>, which is the only setting for memtables since Cassandra 1.0. The default is to use a maximum of 1/3 of the JVM heap. So something else was eating up the heap memory, not memtables, and Cassandra dropping the largest memtable at 75% seemed kind of desperate in our scenario. With caching and memtables not being the culprits, what else was left? It turned out that the bloom filter, for the amount of data and the number of nodes we have, was <a href="http://nmmm.nu/bloomfilter.htm">getting very big</a>. Our test cluster has 6 nodes and the total data size is around 400 GB. Cassandra uses a bloom filter in front of its SSTables to check if a row exists before it does disk IO. This is an extra layer that, if tuned properly, can make Cassandra's access to column families more efficient, because disk IO is slow. A bloom filter is a probabilistic data structure. It can give you false positives, meaning it may tell you a record exists in an SSTable when it does not. It will, however, never tell you a record does not exist when it actually does (a false negative).
<br />
<br />
The false positive ratio can be tuned using the <b>bloom_filter_fp_chance</b> parameter in cassandra.yaml. We were running the default of 0.1 for this parameter, which I think accounts for a 10% chance of a false positive. The value can be anything between 0 and 1. Well, nothing is free: lowering the false positive chance increases the size of the data structure.
<br />
<br />
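To get a feel for how quickly the filter grows, the textbook bloom filter sizing formula helps (a rough model, not Cassandra's exact accounting):
<br />
<br />
<pre>
// Optimal bits per key for a target false positive probability p:
// m/n = -ln(p) / (ln 2)^2
def bitsPerKey(p: Double): Double =
  -math.log(p) / (math.log(2) * math.log(2))

bitsPerKey(0.1)  // ~4.8 bits per key
bitsPerKey(0.01) // ~9.6 bits per key: a 10x lower fp chance costs about 2x the memory
</pre>
<br />
<br />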
The bloom filter is defined per column family. So one way to bring down the size of a bloom filter in Cassandra is to evaluate your column families: column families which are not getting a lot of read requests should be fine without an effective bloom filter. Another possibility is to add more nodes to the cluster, so that each node maintains less data, thereby also bringing down the size of the bloom filter. Finally, here is some good news: since Cassandra 1.2 the bloom filter can run off-heap (we are still waiting for the Datastax release of 1.2). For this to work you need to <a href="http://www.datastax.com/docs/1.2/install/install_jre">enable Java Native Access</a> (JNA), which isn’t done by default when installing Cassandra (even when installing from the Debian packages, from what I heard). Running the bloom filter off-heap will solve your immediate heap problems. As far as I know, it is not recommended to run Cassandra with more than 8 GB of heap memory. However, you still need to <a href="http://www.datastax.com/docs/1.2/operations/tuning_bloomfilters">tune your bloom filter</a> with regard to data size, number of nodes and false positive ratio - otherwise you might run out of system memory. Finally, tuning the CMS garbage collection is also useful. I think we will set it up to be <a href="http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html#icms">incremental</a>.</div>
<h2>Don't let these ThreadLocals escape</h2>
<i>2012-04-20</i>
<br /><br />
<div dir="ltr" style="text-align: left;" trbidi="on">
A while ago I wrote about ThreadLocals and how <a href="http://javasplitter.blogspot.com/2011/07/beauties-and-pitfalls-of-threadlocals.html">useful and tricky</a> they can be. In that blog post, I also wrote that it was a good practice to clean up your ThreadLocals so that, if the Thread was reused, it would not have access to the previous values. The recommended way to do this for a web application was to use a ServletFilter.
<br /><br />
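The core of such a cleanup filter is a simple try/finally around the filter chain. Here is a minimal sketch in Scala (class and service names are illustrative, not the Jetspeed API referenced below):
<br /><br />
<pre>
import javax.servlet._

object CleanupService { def cleanup(): Unit = () } // stand-in for the real cleanup registry

class ThreadLocalCleanupFilter extends Filter {
  def init(config: FilterConfig): Unit = ()

  def doFilter(req: ServletRequest, res: ServletResponse, chain: FilterChain): Unit =
    try chain.doFilter(req, res)
    finally CleanupService.cleanup() // clear ThreadLocals before the thread returns to the pool

  def destroy(): Unit = ()
}
</pre>
<br /><br />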
After running this setup for a while, I also have to add that you really have to know all the entry points into your application in order to achieve 100% cleanup coverage. Normally, the ServletFilter adds what is called a "Cleaner" into the <a href="http://svn.apache.org/viewvc/portals/jetspeed-2/portal/trunk/jetspeed-commons/src/main/java/org/apache/jetspeed/util/ServletRequestCleanupService.java?view=markup&pathrev=1101917">ServletRequestCleanupService</a>; that is required before cleanup callbacks can run for any of your ThreadLocals. In our log file I saw that we were not always adding a "Cleaner". This was an indication that we were running code which had not passed through the ServletFilter, so I reviewed our application.
<br /><br />
It turned out that the first problem was a missing url-pattern element inside the filter-mapping block in the web.xml file. Unless you are mapping to /*, make sure you are catching all the possible URLs. The good news is that since Servlet 2.5 you are allowed to have multiple url-pattern elements inside each filter-mapping element. So this was an easy fix.
<br /><br />
<script src="https://gist.github.com/2428573.js?file=web.xml"></script>
<br /><br />
Some people wrote that you could also separate multiple patterns with a comma. I haven't tried that myself.
<br /><br />
Another problem area is application-internal thread pools. For instance, we use <a href="http://static.springsource.org/spring/docs/3.1.x/spring-framework-reference/html/beans.html#context-functionality-events">custom Events in the Spring framework</a> which are passed between Spring beans. By default this is done synchronously. You can change to asynchronous <a href="http://www.lordofthejars.com/2011/10/una-terra-promessa-un-mondo-diverso.html">Event delivery</a> by using an ApplicationEventMulticaster together with a proper TaskExecutor (e.g. <a href="http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/scheduling/concurrent/ThreadPoolTaskExecutor.html">org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor</a> instead of the default <a href="http://static.springsource.org/spring/docs/3.0.x/api/org/springframework/core/task/SyncTaskExecutor.html">org.springframework.core.task.SyncTaskExecutor</a>). However, if you do this, you are creating yourself a thread pool. Listening to and handling of the Events will be done in a separate Thread and not pass through the ServletFilter. So I was looking into ways to make sure that each Event Listener adds a "Cleaner" and executes the cleaning logic afterwards. This was a good candidate for an Aspect.
<br /><br />
<script src="https://gist.github.com/2428573.js?file=NonServletRequestCleanupAspect.java"></script>
<br /><br />
I have used the <a href="http://static.springsource.org/spring/docs/3.1.x/spring-framework-reference/html/aop.html#aop-ataspectj">@AspectJ syntax</a> for this Aspect and made it a Spring bean. This means I can compile the Aspect using a regular Java compiler. Instead of using load-time or compile-time weaving, we are using the Spring proxy-based AOP approach. In the code above, I am creating two Pointcuts, each one mapping to one of the Listeners where we actually have to do cleanup. This is probably not very future-proof: someone else might write another event listener in the future, which would then not have a Pointcut mapped to it. On the other hand, using the proxy-based AOP approach is probably slower than real weaving, and some of the listeners we have (5 currently) are really receiving a lot of events. So I sacrificed a future-proof implementation for maximum performance.
<br /><br />
The two Pointcuts selecting listener execution <a href="http://static.springsource.org/spring/docs/3.1.x/spring-framework-reference/html/aop.html#aop-pointcuts-combining">are then combined</a> in another Pointcut, which is finally used to create an Around advice with cleanup logic similar to the ServletFilter's. Splitting Pointcuts into logical units with meaningful names is also a good AOP practice which I can recommend. I also took the liberty of renaming a few classes: ServletRequestCleanupService became just CleanupService and ServletRequestCleanupCallback became CleanupCallback, which was more fitting now that not everything was passing through the ServletRequestCleanupFilter anymore.
<br /><br />
Time to wrap this up. If you need to clean up ThreadLocals from your Threads, investigate carefully and make sure you have covered all entry points to your application. At the very least, add some logging so you can find "holes" easily.
<br /></div>
<h2>Mimicking a circular buffer</h2>
<i>2012-04-18</i>
<br /><br />
<div dir="ltr" style="text-align: left;" trbidi="on">
Today I needed a Java collection with some non-standard properties. I wanted to continuously iterate over the collection, pretty much like you would over a circular buffer. This alone would be simple enough to implement with any Java List, I guess, but I also wanted to be able to remove elements from the Collection while going through it. I could have written my own linked list and unlinked the elements that I wanted to remove while cycling through the list. However, I wasn't really interested in adding a custom linked list to our project just for this specific purpose. Unfortunately, the remove method of the LinkedList in Java takes an index, which implies that you cannot iterate through the list using a for-each loop. If you use a for loop, the control variables have to be adapted after removing elements - so the code becomes more complex.
<br /><br />
Google Guava to the rescue. They have this nice utility class <a href="http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/Iterators.html">Iterators</a>. The cycle method in Iterators returns an indefinitely looping Iterator for any Iterable that is given as an argument. This gave me the behavior of the ring buffer and, because it was an Iterator, I was able to remove elements from the underlying collection. The loop stops once the Collection is exhausted. Pretty neat.
<br /><br />
<script src="https://gist.github.com/2413000.js?file=Main.java"></script>
<br /></div>
<h2>Testing just got better</h2>
<i>2012-02-02</i>
<br /><br />
This week I managed to work a bit on our test suite and make it run faster. When you work on a project whose codebase grows and grows, it is natural that more and more tests get added. After some months you will be sitting with a test suite that runs for a minute or even longer. This is the execution time for all our unit tests in the different Maven modules.
<br /><br />
<script src="https://gist.github.com/1723841.js?file=testtimes-before.txt"></script>
<br />
Last year I read an article in the German <a href="http://www.javamagazin.de/">Java Magazin</a> about a library called <a href="http://www.patterntesting.com/">org.patterntesting</a>. The library comes with a <a href="http://junit.org/apidocs/junit/textui/TestRunner.html">TestRunner</a> that can be used to run all test methods within a test class in parallel. Just change your test to look like this:
<br /><br />
<script src="https://gist.github.com/1723841.js?file=Base64EncoderTest.java"></script>
<br />
This will of course not work for all your tests immediately, as not all tests can run in parallel. Often this is due to bad test or software design: tests requiring write access to the same physical File, tests altering shared fields within a test class, tests changing static field values - just to name a few. As you refactor your tests so that they can run concurrently, you will automatically improve the design and testability of your application. We had a couple of these "smelling" unit tests that needed to be refactored. This is what the execution time looked like after running the tests with patterntesting.
<br /><br />
<script src="https://gist.github.com/1723841.js?file=testtimes-after.txt"></script>
<br />
Saving 40 seconds does not seem like a lot. But 40 seconds times 15 builds per day times 3 developers times 21 working days in a month brings you to 10.5 hours.
Unfortunately, it isn't always that easy. Sometimes your test is already using a TestRunner, so you cannot just switch to the ParallelRunner. This is the case for all our Spring tests, which were using the <a href="http://static.springsource.org/spring/docs/2.5.x/api/org/springframework/test/context/junit4/SpringJUnit4ClassRunner.html">SpringJUnit4ClassRunner</a> from Spring. I contacted one of the authors of the patterntesting library and got some help. In the latest version, patterntesting 1.2, there is a new TestRunner class, ParallelProxyRunner, which can be used together with the DelegateTo annotation to delegate to the original TestRunner while running the test in parallel. This works for the SpringJUnit4ClassRunner, but you have to be aware that the SpringJUnit4ClassRunner is not thread-safe (a problem that will be <a href="https://jira.springsource.org/browse/SPR-5863">fixed in Spring 3.2</a>). Though as a user of the patterntesting library you will never be affected by this - the ParallelProxyRunner hides this problem for you.
<br /><br />
This isn't everything patterntesting has to offer. My favorite thing is the @Broken annotation, which replaces the @Ignore annotation in JUnit.
<br /><br />
<script src="https://gist.github.com/1723841.js?file=Broken.java"></script>
<br />
One big anti-pattern in test-driven development is developers adding @Ignore annotations and then never looking at the test case again. When I introduced the patterntesting library to other EA developers, I got a lot of responses like: "why do you have tests flagged as ignored or broken in the first place?" - it's bad practice. Yes, you are all right. But often reality is different. Game producers can get very pushy. Developers are forced to commit hot-fixes which can potentially break existing tests. Then the developer might not be able to fix the test for various reasons:
<br /><br />
<ul>
<li>He or she is new to the team and doesn't have the big picture.</li>
<li>He or she is a junior and doesn't know how stuff works.</li>
<li>The test is overly complicated, so that only the author understands it.</li>
<li>It takes too long to fix and something else has higher priority.</li>
</ul>
<br/>
Just to name a few. Patterntesting also adds other useful stuff to the testing toolbox. Here are some examples:
<br /><br />
<script src="https://gist.github.com/1723841.js?file=RunTestOn.java"></script>
<br />
<br />
<script src="https://gist.github.com/1723841.js?file=Other.java"></script>
<br />
More examples can be <a href="http://sourceforge.net/apps/mediawiki/patterntesting/index.php?title=Testing_with_PatternTesting">found here</a>.Anonymousnoreply@blogger.com1tag:blogger.com,1999:blog-1493353025088627001.post-82669833105937409122012-01-23T05:09:00.000-08:002012-01-23T05:09:47.669-08:00Starting Dependency WAR Artifact using maven-jetty-pluginI have worked in a lot of Maven projects that used the <a href="http://docs.codehaus.org/display/JETTY/Maven+Jetty+Plugin">maven-jetty-plugin</a>. Normally the plugin is used to start a Jetty container with the WAR artifact produced by the current project. This works like a charm. Sometimes, however, you want to host the WAR artifact of another project. This could be the case if you are developing the client for a service that can be reached via HTTP. The integration tests in the client project would require the service to be running and reachable, so that these tests can test and verify the client code. To start the Jetty container before and shut it down after the integration tests, you define two executions and bind them to the correct phases in Maven's lifecycle.
<br /><br />
<script src="https://gist.github.com/1662983.js?file=pom-integration-test.xml"></script>
<br /><br />
Also, starting any external WAR artifact is straightforward with the maven-jetty-plugin and the deploy-war goal.
<br /><br />
<script src="https://gist.github.com/1662983.js?file=pom-external-war.xml"></script>
<br /><br />
As you can see, the version is hard-coded in the path. This is somewhat OK as long as you are not violating the DRY principle. If you use the version somewhere else in your pom.xml, make sure to use a custom Maven property. Another, perhaps slightly nicer way is to use the <a href="http://cargo.codehaus.org/Maven2+plugin">cargo-maven2-plugin</a>, which out of the box can start the WAR artifact of any Maven dependency from the dependencies section. Here is a nice example from <a href="http://stackoverflow.com/questions/2677815/how-to-make-jetty-maven-plugin-deploy-a-war-that-is-retrieved-from-a-repository">stackoverflow.com</a>.
<br /><br />
As you can see, either the cargo-maven2-plugin or the maven-jetty-plugin can be used for the simple use cases. It gets a bit trickier if you want to start the external WAR artifact and set some System properties during startup of the container. Both the <a href="http://docs.codehaus.org/display/JETTY/Maven+Jetty+Plugin#MavenJettyPlugin-sysprops">maven-jetty-plugin</a> and the <a href="http://cargo.codehaus.org/Passing+system+properties">cargo-maven2-plugin</a> allow you to define individual system properties. However, only the maven-jetty-plugin can read a pre-existing properties file instead of individual System properties. This was required in one of the projects I am working on. Copying all the System properties out of the properties file to add them as individual System properties is a lot of work and would again violate the DRY principle.
<br /><br />
Also, sometimes when you are developing both the service and the client project, and you are not using the SNAPSHOT mechanism, it can be tedious to update the version of the server WAR artifact that gets started during the integration tests of the client project. Maven knows two <a href="http://stackoverflow.com/questions/30571/how-do-i-tell-maven-to-use-the-latest-version-of-a-dependency">fixed keywords</a> which you can use instead of specifying an exact version or a range. Use LATEST to download the latest snapshot or released version of a dependency from a repository. Use RELEASE to download the latest released version of a dependency from a repository. Unfortunately you cannot use LATEST or RELEASE to start a WAR artifact of a dependency if you are using the maven-jetty-plugin. This is because you specify the location of the WAR artifact as a full path inside the configuration - webApp element of the maven-jetty-plugin. The plugin does not use the syntax which is used to define Maven dependencies.
<br /><br />
There is, however, a little trick. You use the maven-dependency-plugin, which uses the default syntax for dependencies and understands LATEST and RELEASE, to copy the WAR artifact to a fixed location. While copying, you should also rename the war file. This makes your life easier, as you never have to adapt the path in the webApp element of the maven-jetty-plugin when the version changes. Here is an example:
<br /><br />
<script src="https://gist.github.com/1662983.js?file=pom-xml"></script>Anonymousnoreply@blogger.com2tag:blogger.com,1999:blog-1493353025088627001.post-504186485153498722012-01-20T02:50:00.000-08:002012-01-20T02:51:22.742-08:00Test our new gameAs some reader of this blog might know, I work for the EA studio of Playfish. Currently we are heading into the closed beta phase for a game which I helped to develop. This is a so called social game which is being played on Facebook. The backend of the game is developed by our team in Java. If you want to be one of the first ones to play the game and become a beta tester, fill in <a href="https://www.surveymonkey.com/s/JMMFLHF">this application</a>. I am not allowed to tell anything about the game at this point, just in response to <a href="http://blog.games.com/2012/01/17/playfish-new-game-beta-test/">this comment</a> - yes the game is different from Adventure World and Cloudforest Expedition and much more fun to play.Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-1493353025088627001.post-40169577692982209892012-01-19T10:30:00.000-08:002012-01-19T10:30:02.508-08:00Maybe I should have used a Lock hereJava 5 added some really nice classes in the <b>java.util.concurrent</b> package. For instance there is the <a href="http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/ConcurrentMap.html">ConcurrentMap</a> interface which allows you to add items to a Map if they are not already contained. The code you would normally write if the ConcurrentMap didn't exist, needs to do this as a atomic check-then-act sequence, if the Map is shared between Threads. With the ConcurrentMap interface you get all of this for free using the <a href="http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/ConcurrentMap.html#putIfAbsent(K, V)">putIfAbscent</a> method.
<br /><br />
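To make the difference concrete, here is a minimal sketch (my own example, not code from the project) of the hand-rolled check-then-act sequence versus the atomic putIfAbsent call:
<br /><br />
<pre class="brush: scala">import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentExample {
    public static void main(final String[] args) {
        // without ConcurrentMap, check-then-act must be one atomic block,
        // so the shared Map has to be locked around both operations
        final Map<String, String> shared = Collections.synchronizedMap(new HashMap<String, String>());
        synchronized (shared) {
            if (!shared.containsKey("key")) {
                shared.put("key", "value");
            }
        }

        // with a ConcurrentMap the same sequence is a single atomic call
        final ConcurrentMap<String, String> concurrent = new ConcurrentHashMap<String, String>();
        concurrent.putIfAbsent("key", "value");
    }
}
</pre>
<br /><br />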
I shot myself in the foot today with a small piece of code, which one of my unit tests was executing from a large number of Threads. The test was failing randomly, maybe 5% of the time. If you see tests failing randomly, it often indicates a <a href="http://docs.oracle.com/javase/1.5.0/docs/api/java/sql/Date.html">Date</a> problem or a concurrency problem. Here is the class under test. Can you spot the problem?
<br />
<br />
<script src="https://gist.github.com/1641604.js?file=ChangeAwareDynamicProperty.java"></script>
<br />
In order to understand what the class does, you need to know what a DynamicProperty is. This is a class wrapping a value which comes from a remote source, e.g. over the network. So instead of reading fixed System properties, you ask the remote source for the value. The value is then cached for a couple of seconds and refreshed once it expires. The ChangeAwareDynamicProperty does the same thing but additionally reacts to value changes. Every time the value changes in the remote source, there is a costly operation that the ChangeAwareDynamicProperty needs to perform. This code is not shown as it is not relevant.
<br /><br />
What is the ConcurrentMap for? Obviously the costly operation should only be done once per value change, right? A simple approach is to use locking: let only one Thread at a time enter a critical section where the current value is compared to the new value, and if a change is detected, run the costly operation. This would totally work, but the throughput would be horrible. Especially when a Thread detects a value change while holding the Lock, running the costly operation would block all other Threads for a while. So my idea was to use a ConcurrentMap instead of a Lock. If Threads detect a value change, they try to put the new value into a ConcurrentMap. By definition, only one Thread can put the new value into the Map - for this Thread the putIfAbsent call will return null. This Updater Thread will then run the costly operation, while other Threads still return the previous value. Once the Updater Thread is finished, it updates the current value and removes the new current value from the ConcurrentMap. This is to prevent the Map from growing eternally. Sounds straightforward, right?
<br /><br />
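Stripped of the surrounding class, the idea looks roughly like this (a sketch with made-up names, not the actual code from the gist above):
<br /><br />
<pre class="brush: scala">import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class UpdateGuard {
    private final ConcurrentMap<String, Boolean> updatesInProgress =
            new ConcurrentHashMap<String, Boolean>();
    private volatile String currentValue = "";

    void onPoll(final String newValue) {
        if (!newValue.equals(this.currentValue)
                && this.updatesInProgress.putIfAbsent(newValue, Boolean.TRUE) == null) {
            // only the single Thread whose putIfAbsent returned null gets here
            runCostlyOperation(newValue);
            this.currentValue = newValue;
            this.updatesInProgress.remove(newValue); // the cleanup that turned out to be the bug
        }
    }

    private void runCostlyOperation(final String newValue) {
        // placeholder for the costly operation triggered by a value change
    }
}
</pre>
<br /><br />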
Well, obviously there was a problem, as the unit test was failing every now and then. Every time the test failed, I could see that it was trying to run the costly operation twice for the same changed value, as indicated by the following log statement:
<br />
<br />
<script src="https://gist.github.com/1641604.js?file=build.log"></script>
<br />
I started to believe that the putIfAbsent method was buggy and returned null even if the value was already present in the Map. I asked a co-worker to check my code, to see if he could spot a problem. After a few minutes we realized that the problem was in my code - as was to be expected. Like I said, I don't want the Map to grow forever, so the Updater Thread removes the new changed value after the costly operation is finished. The problem is that another Thread could be waiting for the CPU in line 14 - the line that invokes putIfAbsent. Once the Updater Thread is done and the waiting Thread gets active, it will do exactly the same work again. Not good!
<br /><br />
Our immediate solution was not to remove the Map entry after the Updater Thread is finished. What we do instead is remove the old value from the Map just before assigning the new changed value to currentVersion, so the Map never contains more than one entry. It is still possible that the costly operation runs again for a value that has already been handled; the change only fixes the problem that a single value change could trigger consecutive executions of the costly operation.
<br /><br />
<script src="https://gist.github.com/1641604.js?file=FixedChangeAwareDynamicProperty.java"></script>Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-1493353025088627001.post-53882613681023048752012-01-02T23:58:00.000-08:002012-01-03T00:00:51.362-08:00git clone and remote end hung up unexpectedlyYesterday morning before going to work, I created a git repository for a new hobby project of mine. I have done this a couple of time before and the git hosting provider of choice is <a href="http://offers.assembla.com/free-git-hosting/">Assembla</a>. They are offering private git repositories and I never had any trouble in the past.
<br /><br />
After creating the repository, I tried to clone it. I need to use sudo because I clone into a directory which is not owned by me. I am using the /web directory (or rather the directories under /web) directly as docroot for Apache.
<br /><pre class="brush: scala">
sudo git clone git@git.assembla.com:my-new-repository.git
Initialized empty Git repository in /web/my-new-repository/.git/
Permission denied (publickey).
fatal: The remote end hung up unexpectedly
</pre><br />
As you can see, something went wrong. I verified that my public ssh key was added to my Assembla account, and it was. I assumed it was a problem on the Assembla side and decided to try again this morning, but the problem was still there. I created another git repository over at <a href="https://bitbucket.org/">Bitbucket</a> and tried again - same problem, wtf. Finally I had the idea to try and clone the repository into my user directory, and voila, it worked. It turns out that combining sudo with ssh public/private key authentication in git does not work out of the box. There is a good explanation about it on <a href="http://help.github.com/ssh-issues/">github</a>:
<blockquote>
If you are using sudo with git commands (e.g. using sudo git clone because you are deploying to a root-owned folder), ensure that you also generated the key using sudo. Otherwise, you will have generated a key for your current user, but when you are doing sudo git, you are actually the root user – thus, the keys will not match.
</blockquote>Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-1493353025088627001.post-71400359021493343912011-12-20T06:53:00.000-08:002011-12-23T01:54:49.185-08:00Using Reference Maps for Caches and ListenersA while ago I wrote a <a href="http://javasplitter.blogspot.com/2011/08/lost-virginity-weakhashmap-first-timer.html">blog post</a> about the WeakHashMap. It then turned out that the WeakHashMap was not the optimal choice for that particular use case and I proposed a different solution. To make this a bit more tangible, I decided to post a full code example. Let me describe the use case again.<br />
<br />
Let's say you have a class wrapping some sort of event. Let's give the event class a name, write a Java interface and call it Auditable. Each Auditable subclass must implement two methods: validate and process. There is an invoker class called AuditableInvoker which receives a collection of Auditables and invokes validate and process on each one of them. So far so good.<br />
<br />
<script src="https://gist.github.com/1513718.js?file=auditable.java"></script>
<br />
<script src="https://gist.github.com/1513718.js?file=AuditableInvoker.java"></script>
<br />
As an example, let's implement two pretty dumb Auditable subclasses. SleepingAuditable just puts the current Thread to sleep for a few milliseconds. IteratingAuditable runs a small loop in its validate and process methods.<br />
<br />
<script src="https://gist.github.com/1513718.js?file=IteratingAuditable.java"></script>
<br />
<script src="https://gist.github.com/1513718.js?file=SleepingAuditable.java"></script>
<br />
In addition to that, there is a requirement that you need to know the execution time of the validate and process method of each Auditable subclass. Fortunately you can add listeners to AuditableInvoker. So all you have to do is write a listener that measures the execution times. The listener needs to start a stop watch before validate or process is invoked, and stop this very stop watch after process and validate have finished. Once they have finished, the execution time can be computed and kept in a helper class that we call the StatsCollector. To keep things simple, our UnboundedStatsCollector only increments a counter, completely ignoring the execution times.<br />
<br />
<script src="https://gist.github.com/1513718.js?file=AuditableLifecycleListener.java"></script>
<br />
<script src="https://gist.github.com/1513718.js?file=StatsCollector.java"></script>
<br />
<script src="https://gist.github.com/1513718.js?file=UnboundedStatsCollector.java"></script>
<br />
The tricky part here is that you need to use the same stop watch before and after the invocations of an Auditable - a good use case for a map using weakly referenced keys and object identity for comparison. Once an Auditable instance has finished its lifecycle and is no longer referenced anywhere else in the code, the garbage collector can collect the Auditable as well as the associated stop watch. This prevents the Map from growing indefinitely. So here is an implementation using a ReferenceIdentityMap from the commons-collections project.<br />
<br />
<script src="https://gist.github.com/1513718.js?file=ExecutionTimingAuditableLifecycleListener.java"></script>
<br />
To verify that we really see the expected behavior, I have written a unit test that stresses the ExecutionTimingAuditableLifecycleListener using multiple Threads. In this unit test I am re-using a class called MultithreadedStressTester, which I stole from Nat Pryce's book "<a href="http://www.growing-object-oriented-software.com/">Growing Object Oriented Software guided by Tests</a>".<br />
<br />
<script src="https://gist.github.com/1513718.js?file=MultithreadedStressTester.java"></script>
<br />
The ExecutionTimingAuditableLifecycleListenerTest uses the MultithreadedStressTester to send a bunch of Threads over to the ExecutionTimingAuditableLifecycleListener, verifying that each invocation is properly timed using the ReferenceIdentityMap under the hood.<br />
<script src="https://gist.github.com/1513718.js?file=ExecutionTimingAuditableLifecycleListenerTest.java"></script>
<br />
Finally, if you want to use <a href="http://code.google.com/p/guava-libraries/">google-guava</a> instead of commons-collections, you can also use a LoadingCache with weak keys instead of the ReferenceIdentityMap. Here is a version of the ExecutionTimingAuditableLifecycleListener using google-guava.<br />
<script src="https://gist.github.com/1513758.js?file=ExecutionTimingAuditableLifecycleListenerGuava.java"></script>Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-1493353025088627001.post-66466303374071630272011-10-10T08:05:00.000-07:002012-01-21T07:10:01.357-08:00The Static Final Inline Trap<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-SASoPl34uRY/TpMJj0wLnlI/AAAAAAAAAAw/Z9y6XVaq8sk/s1600/screenshot_no_flash.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="175" src="http://2.bp.blogspot.com/-SASoPl34uRY/TpMJj0wLnlI/AAAAAAAAAAw/Z9y6XVaq8sk/s320/screenshot_no_flash.jpg" width="320" /></a></div>
Last week I was chasing a very interesting problem which we could only resolve with the help of a colleague. One of our testers found a bug in the current game that we are developing: if the user doesn't have the Flash player installed, the No-Flash image is not shown properly. All code related to this is pulled in from shared libraries which our project depends on. In the screen shot you see something that looks like magic. There is a method which populates and returns a Map. Amongst others, one key called noflash_image_url defines the location of the No-Flash image. In the debugger pane to the lower right, you can see the static final constant NOFLASH_IMAGE_URL evaluating to the correct value. This value is the correct location of the No-Flash image.
<br />
<br />
The strange thing, however, is that the Map contains another (wrong) value, even though the Map value is set using the very same static final constant NOFLASH_IMAGE_URL. You can see that in the lower middle pane of the screen shot. So for some strange reason, the constant evaluates to both the correct and the wrong value. I guess we could call it a semi-constant in this particular case.
<br />
<br />
Anytime you see something weird like this, your first thought should be: class loading problem. In my experience, the weirdest problems are often rooted in class loading. However, this problem is of a different kind. For a better understanding, you have to know that the class which initializes the Map is pulled into our project via a Maven dependency on library A. Library A in turn depends on another (transitive) Maven library B, which contains the NOFLASH_IMAGE_URL constant. Our project also defines a direct Maven dependency on library B. This is needed because library B changes quite often and we always want the latest version of B in our project. The latest version of B does contain the correct value for the No-Flash image in the NOFLASH_IMAGE_URL constant. So one might think that the Maven dependency resolution mechanism pulled in an old version of library B, but this wasn't the case.
<br />
<br />
A colleague then hinted me in the right direction. The Java compiler does something called inlining for access to static final variables: since the value of a constant variable cannot change after it has been assigned, the compiler copies the value directly into every class that uses it. To verify this, we ran the Java class file disassembler (javap) over the class file in library A.
<br />
<br />
<pre class="brush: scala"> 78: pop
79: aload_2
80: ldc #108; //String noflash_image_url
82: ldc #109; //String http://static.playfish.com/shared/noflash.jpg
84: invokeinterface #100, 3; //InterfaceMethod java/util/Map.put:(Ljava/lang/Object;Ljava/lang/Object;)Ljava/lang/Object;
</pre>
<br />
<br />
As you can see, the compiler really does inline the value in places where we, running a debugger over the source code, expect an evaluation of the static final variable at runtime. This is definitely something you have to be aware of. We can fix this by re-compiling library A against a newer version of library B, which will inline the correct No-Flash image URL. Another option to prevent inlining would be to declare the constant like this:
<br />
<br />
<pre class="brush: scala"> public static final String NOFLASH_IMAGE_URL = new String ("...");
</pre>
<br />
<br />
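To see why this prevents inlining: the compiler only inlines compile-time constant expressions, and new String("...") is not one, so clients read the field at runtime instead of copying the literal at compile time. A minimal sketch of the two variants (the class and field names are made up):
<br /><br />
<pre class="brush: scala">public class Urls {
    // compile-time constant: the literal is copied into every client class file,
    // so clients keep the old value until they are re-compiled
    public static final String INLINED = "http://static.playfish.com/shared/noflash.jpg";

    // not a compile-time constant: clients read the field at runtime (getstatic)
    // and always see the value from the library version actually on the classpath
    public static final String NOT_INLINED = new String("http://static.playfish.com/shared/noflash.jpg");
}
</pre>
<br /><br />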
But this might be really hard to understand for other developers, and they might revert the change back to a String literal if it is not properly documented. <strong>Update:</strong> I just realized that this problem is featured as Puzzle 93: Class Warfare in the <a href="http://www.javapuzzlers.com/contents.html">Java Puzzlers</a> book by Joshua Bloch and Neal Gafter. The compiler will inline all constant variables, i.e. primitives and Strings which are initialized with a constant expression. Surprisingly, null is not inlined, and neither are Java 5 enums.Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-1493353025088627001.post-54475264538801693772011-08-19T08:00:00.000-07:002011-11-22T05:18:28.696-08:00Lost Virginity: WeakHashMap first timerIt was almost three years ago at <a href="http://www.oopsla.org/oopsla2008/">OOPSLA in Nashville</a> that I heard about the <a href="http://download.oracle.com/javase/6/docs/api/java/util/WeakHashMap.html">WeakHashMap</a> for the first time. The class is quite useful if you need a Map implementation where the keys are compared using their memory references and not using equals (or so I thought - see the update at the end of this post). Another important property of the WeakHashMap is that the Map entries are removed "automagically" once no other object (than the WeakHashMap itself) holds a reference to the key object. The garbage collector will in that case remove the Map entry and collect the key.
<br />
<br />
In the past three years I never used the WeakHashMap in any project. That changed yesterday. In the game that we are currently developing, we use a mechanism where the game client sends game events to the server. The server then evaluates the game events and alters the users in memory, before their state is made persistent in the database. Here is an example:
<br />
<br />
<pre class="brush: scala">public interface GameEvent {
/**
* Subclasses may implement this to run validation logic, before the GameEvent is processed.
*
* @param user the {@link User} to apply this {@link GameEvent} on
* @return {@link AuditResult} never <code>null</code>
*/
AuditResult validate(User user);
/**
* Subclasses may implement this to run implementation specific logic, potentially altering the
* given {@link User}.
*
* @param user the {@link User} to apply this {@link GameEvent} on
* @return {@link AuditResult} never <code>null</code>
*/
AuditResult process(User user);
}
public class ConsumeFood implements GameEvent {
private final int amount;
public ConsumeFood(final int amount) {
this.amount = amount;
}
@Override
public AuditResult validate(final User user) {
if (user.getFood() < this.amount) {
return AuditResult("User doesn't have this amount of food.");
}
return AuditResult.SUCCESS;
}
@Override
public AuditResult process(final User user) {
user.addEnergy(this.amount);
user.subtractFood(this.amount);
return AuditResult.SUCCESS;
}
}
</pre>
<br />
<br />
Once the game client sends us the ConsumeFood game event, we subtract food from the player and add energy instead. We also have a wrapper class around a collection of game events and the execution logic looks like this:
<br />
<br />
<pre class="brush: scala">public class GameEvents {
// ... other methods ...
protected AuditResult process(final User user, final GameEvent change) {
final AuditResult validateResult = change.validate(user);
if (validateResult == AuditResult.SUCCESS) {
return change.process(user);
} else {
return validateResult;
}
}
}
</pre>
<br />
<br />
First we validate that we can apply the game event, then we process the event and alter the player. Since the number of different game events keeps growing and growing, I thought it might be useful to measure the execution time of the validate and process method of each game event. The way I implemented this a while ago was through delegation: I added a wrapper class which wraps the real game event and times the validate and process methods:
<br />
<br />
<pre class="brush: scala">import org.springframework.util.StopWatch;
public final class TimingGameEvent implements GameEvent {
private final GameEvent gameEvent;
private long processTimeInMs;
private long validationTimeInMs;
public TimingGameEvent(final GameEvent gameEvent) {
this.gameEvent = gameEvent;
}
/**
* Delegates the processing to the encapsulated {@link GameEvent}. Uses a {@link StopWatch} to time the
* execution time.
*/
@Override
public AuditResult process(final User user) {
final StopWatch stopWatch = new StopWatch("process-stop-watch");
stopWatch.start();
try {
return this.gameEvent.process(user);
} finally {
stopWatch.stop();
this.processTimeInMs = stopWatch.getLastTaskTimeMillis();
}
}
/**
* Delegates the validation to the encapsulated {@link GameEvent}. Uses a {@link StopWatch} to time the
* execution time.
*/
@Override
public AuditResult validate(final User user) {
final StopWatch stopWatch = new StopWatch("validate-stop-watch");
stopWatch.start();
try {
return this.gameEvent.validate(user);
} finally {
stopWatch.stop();
this.validationTimeInMs = stopWatch.getLastTaskTimeMillis();
}
}
}
</pre>
<br />
<br />
This worked well. However, this week we got another requirement from the business side: we needed to implement some sort of gameplay recorder. Each game event that the server receives must be recorded, so we can replay these events later. My first idea was to add another wrapper around the already existing TimingGameEvent wrapper class, but this would have made it difficult to serialize the real game event to a File. Yes, we decided to serialize to and deserialize from a String, stored in a plain text file where each line represents one game event. I discarded the idea of adding more wrappers around the game event and suggested a refactoring: instead of using delegating wrappers, why not use a listener mechanism? Each listener would be notified before and after the execution of the validate and process method of each game event. Listeners could register themselves, and it would be easier to extend in the future. On the negative side, measuring the execution times would of course not be as accurate anymore, as there could be other listeners which want to be notified before the game event is validated and processed. This however was not a big issue, since we were not interested in the exact time in milliseconds but rather in long running methods of a couple of seconds. I also added a mechanism to make sure the execution timing listener gets notified just before the game event method is executed and right after it returns. More on that later.
<br />
<br />
Here is the listener interface I came up with:
<br />
<br />
<pre class="brush: scala">public interface GameEventLifecycleListener {
void onValidationStart(final User user, final GameEvent gameEvent);
void onValidationFinish(final User user, final GameEvent gameEvent,
final AuditResult auditResult);
void onProcessStart(final User user, final GameEvent gameEvent);
void onProcessFinish(final User user, final GameEvent gameEvent,
final AuditResult auditResult);
}
</pre>
<br />
<br />
Refactoring the TimingGameEvent class from above into a TimingGameEventLifecycleListener wasn't straightforward. Each invocation of the validate or process method now results in two listener notifications. So how do you know when to "press" stop on the StopWatch?
<br />
<br />
This is where the WeakHashMap comes in handy. Remember that each game event goes through the same chain? First onValidationStart is called, then onValidationFinish, onProcessStart and finally onProcessFinish. So the Listener can maintain a Map of all events, implemented using a WeakHashMap. The first notification callback adds the game event to this Map; subsequent notifications can assume that the game event is present in the WeakHashMap. After the game event has passed through the chain and no object references it anymore, it will automatically be removed from the WeakHashMap. Here is the part of the TimingGameEventLifecycleListener which shows the concept.
<br />
<br />
<pre class="brush: scala">import org.springframework.core.Ordered;
public class TimingGameEventLifecycleListener extends AbstractGameEventLifecycleListener {
/**
* By default the WeakHashMap is not thread-safe, so it needs to be wrapped in a synchronizedMap. This however
 * is quite slow, hence the TimingGameEventLifecycleListener should not be running in production
* all the time.
*/
private final Map<GameEvent, TimedExecution> timedExecutions = Collections.synchronizedMap(
new WeakHashMap<GameEvent, TimedExecution>()
);
@Override
public void onValidationStart(final User user, final GameEvent gameEvent) {
final TimedExecution timeValidation =
new TimedExecution(gameEvent.getClass());
this.timedExecutions.put(gameEvent, timeValidation);
// other stuff
}
@Override
public void onValidationFinish(final User user, final GameEvent gameEvent,
final AuditResult auditResult) {
final TimedExecution timeValidation = this.timedExecutions.get(gameEvent);
if (timeValidation != null) {
timeValidation.stopTimer();
}
}
... other notification methods ...
@Override
public int getOrder() {
return Ordered.HIGHEST_PRECEDENCE;
}
}
</pre>
<br />
<br />
So the WeakHashMap can be nice in the role of a cache between different Listener methods. Another thing you may notice in the code above is that the Listener derives from AbstractGameEventLifecycleListener instead of implementing GameEventLifecycleListener. I added an abstract base class for two reasons. First, it is better to provide empty default implementations of all notification methods. Concrete Listeners like the TimingGameEventLifecycleListener can then override only the methods they are interested in (okay, in this case we are interested in all four notification methods, but other Listeners might not be). The second reason is that we want to force the Listeners into a specific order. Every Listener can decide for itself "how important" it is by implementing the <span style="font-style: italic;">getOrder()</span> method defined in the <a href="http://static.springsource.org/spring/docs/3.0.5.RELEASE/api/org/springframework/core/Ordered.html">org.springframework.core.Ordered</a> interface which the AbstractGameEventLifecycleListener implements. Normally this interface is used by Spring to apply an order to <a href="http://www.eclipse.org/aspectj/">Aspects</a>, though you might choose to keep your domain clean of Spring framework classes. The following is the GameEvents class which notifies the listeners; the AbstractGameEventLifecycleListener base class follows after it:
<br />
<br />
<pre class="brush: scala">public class GameEvents {
private final GameEvent[] events;
private final NavigableSet<AbstractGameEventLifecycleListener> listeners;
public void addListeners(final
Collection<AbstractGameEventLifecycleListener> listeners) {
this.listeners.addAll(listeners);
}
public GameEvents(final GameEvent[] events) {
final int length = events == null ? 0 : events.length;
this.listeners = new TreeSet<AbstractGameEventLifecycleListener>();
this.events = new GameEvent[length];
if (length > 0) {
System.arraycopy(events, 0, this.events, 0, length);
}
}
// other methods
protected AuditResult process(final User user, final GameEvent gameEvent) {
final AuditResult validateResult = gameEvent.validate(user);
if (validateResult == AuditResult.SUCCESS) {
return gameEvent.process(user);
} else {
return validateResult;
}
}
/**
* Runs the {@link GameEvent#validate(User)} function of the given
* {@code gameEvent}, notifying all {@link AbstractGameEventLifecycleListener}s
* before and after. The listener having the highest
 * precedence is notified last before and first after the validation method.
* @param user the {@link User} to validate the game event for
* @param gameEvent the gameEvent to validate
* @return the result of the validation
*/
AuditResult runValidate(final User user, final GameEvent gameEvent) {
for (Iterator<AbstractGameEventLifecycleListener>
iterator = this.listeners.descendingIterator();
iterator.hasNext(); ) {
final AbstractGameEventLifecycleListener listener = iterator.next();
listener.onValidationStart(user, gameEvent);
}
final AuditResult validateResult = gameEvent.validate(user);
for (final AbstractGameEventLifecycleListener listener : this.listeners) {
listener.onValidationFinish(user, gameEvent, validateResult);
}
return validateResult;
}
/**
* Runs the {@link GameEvent#process(User)} function of the given
* {@code gameEvent}, notifying all {@link AbstractGameEventLifecycleListener}s
* before and after. The listener having the highest
* precedence is notified last before and first after the validation method.
* @param user the {@link User} to process the gameEvent for
* @param gameEvent the audit gameEvent to process
* @return the result of processing the gameEvent
*/
AuditResult runProcess(final User user, final GameEvent gameEvent) {
for (Iterator<AbstractGameEventLifecycleListener> iterator =
this.listeners.descendingIterator();
iterator.hasNext(); ) {
final AbstractGameEventLifecycleListener listener = iterator.next();
listener.onProcessStart(user, gameEvent);
}
final AuditResult validateResult = gameEvent.process(user);
for (final AbstractGameEventLifecycleListener listener : this.listeners) {
listener.onProcessFinish(user, gameEvent, validateResult);
}
return validateResult;
}
}
</pre>
<br />
<br />
I said earlier that it is desirable to notify the TimingGameEventLifecycleListener last before validation starts and first after it finishes (to get more accurate timings). As you can see above, the GameEvents class, which notifies the listeners, honors the order using a <a href="http://download.oracle.com/javase/6/docs/api/java/util/NavigableSet.html">NavigableSet</a> that can be iterated in forward and backward order. Finally, here is the AbstractGameEventLifecycleListener:
<br />
<br />
<pre class="brush: scala">public abstract class AbstractGameEventLifecycleListener
implements GameEventLifecycleListener, Ordered, Comparable<GameEventLifecycleListener> {
@Override
public void onValidationStart(final User user, final GameEvent gameEvent) { }
@Override
public void onValidationFinish(final User user, final GameEvent gameEvent,
final AuditResult auditResult) { }
@Override
public void onProcessStart(final User user, final GameEvent gameEvent) { }
@Override
public void onProcessFinish(final User user, final GameEvent gameEvent,
final AuditResult auditResult) { }
/**
* Compares the order of the two {@link GameEventLifecycleListener}s
* using {@link Ordered}.
* @param other another {@link GameEventLifecycleListener}
* @return int
*/
@Override
public int compareTo(final GameEventLifecycleListener other) {
return Integer.valueOf(this.getOrder()).compareTo(other.getOrder());
}
}
</pre>
<br />
<br />
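As an aside, with this base class in place, the gameplay recorder that motivated the refactoring becomes just another listener. Here is a sketch - the real recorder is not shown in this post, so the serialization below is only a placeholder:
<br /><br />
<pre class="brush: scala">import java.io.PrintWriter;

public class RecordingGameEventLifecycleListener extends AbstractGameEventLifecycleListener {
    private final PrintWriter recordWriter;

    public RecordingGameEventLifecycleListener(final PrintWriter recordWriter) {
        this.recordWriter = recordWriter;
    }

    @Override
    public void onProcessFinish(final User user, final GameEvent gameEvent,
            final AuditResult auditResult) {
        // one line per game event, so the file can be replayed line by line later
        this.recordWriter.println(serialize(gameEvent));
    }

    private String serialize(final GameEvent gameEvent) {
        return gameEvent.toString(); // placeholder for the real String serialization
    }

    @Override
    public int getOrder() {
        // lower precedence than the timing listener
        return 0;
    }
}
</pre>
<br /><br />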
One thing I wasn't able to come up with was a good unit test to verify that the WeakHashMap is indeed not holding key references forever. This is extremely difficult to test, as it involves testing for garbage collection - and no, I am not suggesting running System.gc() from your test. I found something similar in <a href="http://blogs.oracle.com/tor/entry/leak_unit_tests">this blog post</a>. Apparently the Netbeans API offers something called assertGC(..), but it wasn't really fitting my use case. So if you have a good suggestion on how to test the behavior of a WeakHashMap, I am happy to hear it.<br />
<br />
<span class="Apple-style-span" style="color: red;"><b>* UPDATE * UPDATE *</b></span> After a few weeks running this the WeakHashMap and seeing some weird errors in the logs every now and then, I realized it's not the right Map implementation to use. The WeakHashMap is not what you want to use here, because the keys are not really compared using object identity. Initially I thought this was the case, when reading through the Javadoc of the WeakHashMap. What you really want is a hybrid Map, that combines the WeakHashMap with a IdentityHashMap. This hybrid Map will compare the keys based on objects identity and also use weak key references. The bad news is, there is no such map in the JDK (Java 6 at least). The good news is, there is a <a href="http://docs.jboss.org/hibernate/search/3.4/api/org/hibernate/search/util/WeakIdentityHashMap.html">WeakIdentityHashMap</a> in the Hibernate Search project and a <a href="http://commons.apache.org/collections/api-release/org/apache/commons/collections/map/ReferenceIdentityMap.html">ReferenceIdentityMap</a> in the Commons Collections Project which can be used.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1493353025088627001.post-34650298766613261912011-08-19T02:15:00.000-07:002011-08-19T02:26:30.140-07:00Testing JMX between two Web Applications using MavenThe problem: you have two web applications and each is developed inside a separate Maven module. You need to communicate from one web application to the other and you don't want to implement a service but use JMX instead. This is a scenario we are having here at the moment. The first web application (application A) contains the game server logic of our new game. The second web application (application B) contains a debug tool which we will not deploy into production. I have selected JMX for the communication, mainly because I didn't wanted to add another technology and we are already using JMX in the first application. Both web application are Spring powered.
<br />
<br />First, here is a nice <a href="http://static.springsource.org/spring/docs/3.0.x/reference/jmx.html">Spring feature</a> which completely hides the JMX complexity for the client application behind a proxy.
<br />
<br /><pre class="brush: scala">
<br /> <bean id="clientConnector" class="org.springframework.jmx.support.MBeanServerConnectionFactoryBean">
<br /> <property name="serviceUrl" value="[SERVICE_URL]"/>
<br /> </bean>
<br />
<br /> <bean id="gameplayRecordable" class="org.springframework.jmx.access.MBeanProxyFactoryBean">
<br /> <property name="objectName" value="[MBEAN]" />
<br /> <property name="proxyInterface" value="any.java.Interface" />
<br /> <property name="server" ref="clientConnector" />
<br /> </bean>
<br /></pre>
<br />
<br />First you define a client connector which connects you to the RMI server port of the other web application. Then you define a <a href="http://static.springsource.org/spring/docs/3.0.5.RELEASE/api/org/springframework/jmx/access/MBeanProxyFactoryBean.html">MBeanProxyFactoryBean</a> using this client connector. The <span style="font-weight:bold;">objectName</span> must be the name of your MBean inside the MBean container. If you are not sure about the name, use jconsole to connect to the process of the first web application and look it up. Another important property of the MBeanProxyFactoryBean is the <span style="font-weight:bold;">proxyInterface</span>. This is an interface that the proxy will implement. The proxy maps each method call on that interface in application B to a JMX call in application A. I can really recommend sharing the same interface in both applications, as it makes things really simple.
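<br />
<br />For completeness: on the application A side, one common way to expose such an MBean with Spring is an annotated class picked up by an MBeanExporter. This is only a sketch - the interface, class and object name are made up, and your MBean may be registered differently:
<br />
<br /><pre class="brush: scala">import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedResource;

@ManagedResource(objectName = "com.package:name=GameplayRecorder")
public class GameplayRecorder implements GameplayRecordable {

    @ManagedOperation
    @Override
    public void startRecording() {
        // start recording game events here
    }
}
</pre>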
<br />
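<br />In application B, any bean can then use the injected proxy like a plain Java object (again a sketch, using the hypothetical interface from above):
<br />
<br /><pre class="brush: scala">public class DebugToolService {
    private final GameplayRecordable gameplayRecordable;

    public DebugToolService(final GameplayRecordable gameplayRecordable) {
        this.gameplayRecordable = gameplayRecordable; // the MBeanProxyFactoryBean proxy
    }

    public void record() {
        // this call travels over JMX/RMI to application A
        this.gameplayRecordable.startRecording();
    }
}
</pre>
<br />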
<br />This was simple so far. Now let's say you want to write an integration test to automatically test the whole shebang. The test should start up a JMX-enabled Jetty from Maven. This Jetty instance should explode the war file of application A (hosting the MBean you want to invoke). Once Jetty is up, the test executes, connects to application A via the MBeanProxyFactoryBean and validates the results. First, let's enable remote JMX access in the configuration of the <a href="http://docs.codehaus.org/display/JETTY/Maven+Jetty+Plugin">maven-jetty-plugin</a>:
<br />
<br /><pre class="brush: scala">
<profiles>
  <profile>
    <id>itest</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.mortbay.jetty</groupId>
          <artifactId>maven-jetty-plugin</artifactId>
          <version>${version.jetty.plugin}</version>
          <configuration>
            <stopKey>stop_key</stopKey>
            <stopPort>9999</stopPort>
            <contextPath>/</contextPath>
            <webApp>
              ${settings.localRepository}/com/package/../../your.war
            </webApp>
            <jettyConfig>${basedir}/src/test/etc/jetty-jmx.xml</jettyConfig>
          </configuration>
          <executions>
            <execution>
              <id>start-jetty</id>
              <phase>pre-integration-test</phase>
              <goals>
                <goal>deploy-war</goal>
              </goals>
              <configuration>
                <daemon>true</daemon>
              </configuration>
            </execution>
            <execution>
              <id>stop-jetty</id>
              <phase>post-integration-test</phase>
              <goals>
                <goal>stop</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
</pre>
<br />
<br />As you can see, this plugin configuration is done in a Maven profile, as we are defining this for application B, which also has its own Jetty configuration. The important piece is the <a href="http://wiki.eclipse.org/Jetty/Tutorial/JMX">jettyConfig element</a> which points to a jetty-jmx.xml file. To get this file, <a href="http://dist.codehaus.org/jetty/">download the Jetty container</a> that has the same version as your maven-jetty-plugin. For instance, if you use version 6.1.26 of the maven-jetty-plugin, make sure you download jetty-6.1.26 from the codehaus download page. If you are using the new <a href="http://wiki.eclipse.org/Jetty/Feature/Jetty_Maven_Plugin">jetty-maven-plugin</a> and Jetty 7 or 8, you need to download the <a href="http://download.eclipse.org/jetty/">Jetty container from Eclipse</a>. The configuration is the same for the maven-jetty-plugin and the jetty-maven-plugin. Just make sure you take the jetty-jmx.xml file from the right Jetty container, as they are different. You <a href="http://stackoverflow.com/questions/5297462/enable-remote-jmx-on-jetty">don't need</a> to specify any additional system properties like -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.ssl, -Dcom.sun.management.jmxremote.authenticate or -Dcom.sun.management.jmxremote.port.
<br />
<br />Once the jetty-jmx.xml is downloaded, put it somewhere inside your Maven module where it does not get packaged into the module artifact. In the example above you can see that we keep the jetty-jmx.xml file in src/test/etc, but any other location will do. Open the file and enable remote JMX access via RMI. In the Jetty 6 based jetty-jmx.xml file these elements should be commented in:
<br />
<br /><pre class="brush: scala">
<Call id="rmiRegistry" class="java.rmi.registry.LocateRegistry" name="createRegistry">
  <Arg type="int">2099</Arg>
</Call>

<Call id="jmxConnectorServer" class="javax.management.remote.JMXConnectorServerFactory" name="newJMXConnectorServer">
  <Arg>
    <New class="javax.management.remote.JMXServiceURL">
      <Arg>service:jmx:rmi://localhost:17264/jndi/rmi://localhost:2099/jmxrmi</Arg>
    </New>
  </Arg>
  <Arg/>
  <Arg>
    <Ref id="MBeanServer"/>
  </Arg>
  <Call name="start"/>
</Call>
</pre>
<br />
<br />Note that we changed the port to 17264; you might want to use the default port instead. In the Jetty 7 based jetty-jmx.xml file these elements should be commented in:
<br />
<br /><pre class="brush: scala">
<Call name="createRegistry" class="java.rmi.registry.LocateRegistry">
  <Arg type="java.lang.Integer">1099</Arg>
  <Call name="sleep" class="java.lang.Thread">
    <Arg type="java.lang.Integer">1000</Arg>
  </Call>
</Call>

<New id="ConnectorServer" class="org.eclipse.jetty.jmx.ConnectorServer">
  <Arg>
    <New class="javax.management.remote.JMXServiceURL">
      <Arg type="java.lang.String">rmi</Arg>
      <Arg type="java.lang.String" />
      <Arg type="java.lang.Integer">0</Arg>
      <Arg type="java.lang.String">/jndi/rmi://localhost:1099/jettyjmx</Arg>
    </New>
  </Arg>
  <Arg>org.eclipse.jetty:name=rmiconnectorserver</Arg>
  <Call name="start" />
</New>
</pre>
<br />
<br />To test the setup, run mvn -Pitest jetty:run and start jconsole. In jconsole you do not connect to a local process; select Remote Process and enter the service URL. This URL can be copied from the jetty-jmx.xml file if you are using Jetty 6 (i.e. service:jmx:rmi://localhost:17264/jndi/rmi://localhost:2099/jmxrmi). If you are using Jetty 7 and the jetty-maven-plugin, there will be an info statement on the command line when Maven starts the Jetty container, from where you can copy the service URL. Finally, to execute the integration test, we use the maven-failsafe-plugin like this:
<br />
<br /><pre class="brush: scala">
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.9</version>
  <configuration>
    <includes>
      <include>**/com/package/integration/*.java</include>
    </includes>
  </configuration>
  <executions>
    <execution>
      <id>integration-test</id>
      <goals>
        <goal>integration-test</goal>
      </goals>
    </execution>
    <execution>
      <id>verify</id>
      <goals>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
</pre>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-1493353025088627001.post-57841465174130010672011-08-09T07:07:00.000-07:002011-08-09T07:41:25.844-07:00Sharing configuration files from a Maven Parent ProjectOkay, this post is probably not much news for people who know Maven inside and out. I am planning to use it as a reference for myself, in case I have to solve a similar problem again in the future. The current project I am working on is set up as a Maven <a href="http://www.sonatype.com/books/mvnex-book/reference/multimodule.html">multi module project</a>. There is a parent pom which is set to pom-packaging. There are several child modules, set to either jar- or war-packaging. Within the pom.xml file of the parent project, we use the <a href="http://maven.apache.org/pom.html#Plugin_Management">pluginManagement</a> section to define plugins that should be available to the child modules. The pluginManagement mechanism is an excellent way to stay <a href="http://en.wikipedia.org/wiki/Don't_repeat_yourself">DRY</a> and not duplicate Maven configuration within the inheriting projects.
<br />
<br />In most cases configuring plugins within the pluginManagement section is straightforward. It can however get a bit problematic if the plugin depends on (or reads from) external configuration files. Let's have a look at one example from this parent project of ours.
<br />
<br /><pre class="brush: scala">
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
         http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.package</groupId>
  <artifactId>project-parent</artifactId>
  <packaging>pom</packaging>
  <version>0.1-SNAPSHOT</version>

  <modules>
    <module>child-a</module>
    <module>child-b</module>
    <module>child-c</module>
  </modules>

  <properties>
    <version.mysql.connector>5.1.12</version.mysql.connector>
  </properties>

  <build>
    <pluginManagement>
      <plugins>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>sql-maven-plugin</artifactId>
          <version>1.4</version>
          <dependencies>
            <dependency>
              <groupId>mysql</groupId>
              <artifactId>mysql-connector-java</artifactId>
              <version>${version.mysql.connector}</version>
            </dependency>
          </dependencies>
          <configuration>
            <driver>com.mysql.jdbc.Driver</driver>
            <url>jdbc:mysql://localhost/</url>
            <username>xyz</username>
            <password>xyz</password>
          </configuration>
          <executions>
            <execution>
              <id>drop-and-recreate-db</id>
              <phase>process-test-resources</phase>
              <goals>
                <goal>execute</goal>
              </goals>
              <configuration>
                <autocommit>true</autocommit>
                <srcFiles>
                  <srcFile>${project.build.directory}/sql/schema/user.sql</srcFile>
                  <srcFile>${project.build.directory}/sql/schema/core.sql</srcFile>
                  <srcFile>${project.build.directory}/sql/schema/game.sql</srcFile>
                </srcFiles>
                <onError>abort</onError>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </pluginManagement>
  </build>
</project>
</pre>
<br />
<br />Here we use the <a href="http://mojo.codehaus.org/sql-maven-plugin/">sql-maven-plugin</a> to set up the database before the tests are run. The sql-maven-plugin will execute a bunch of <span style="font-weight:bold;">*.sql</span> files which are stored in a subfolder of the parent project. When you deploy the parent project to your repository, these files won't be published along with the pom.xml, as the packaging is set to pom-packaging. Therefore, if you run the inherited sql-maven-plugin, the *.sql files will not be available and the plugin will fail. This will certainly be a problem if your <a href="http://www.atlassian.com/software/bamboo/">continuous integration server</a> has a build plan for each Maven child module and not a single build plan for the entire project.
<br />
<br />To overcome this problem, there are 2 things you have to do. First, the parent project needs to publish the *.sql files (or other static files which are needed) to your repository, so that the inheriting modules have access to these files. For this to work, we use the <a href="http://maven.apache.org/plugins/maven-assembly-plugin/">maven-assembly-plugin</a> like this:
<br />
<br /><pre class="brush: scala">
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
         http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>

  ... as before ...

  <build>
    <plugins>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <inherited>false</inherited>
        <configuration>
          <descriptors>
            <descriptor>${project.basedir}/assembly/zip.xml</descriptor>
          </descriptors>
        </configuration>
        <executions>
          <execution>
            <id>make-assembly</id>
            <phase>package</phase>
            <goals>
              <goal>single</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>

    <pluginManagement>
      ... as before ...
    </pluginManagement>
  </build>
</project>
</pre>
<br />
<br />Note that the maven-assembly-plugin in this case is not configured within the pluginManagement section of the parent pom, as we don't want to make this functionality available to child modules. In the configuration you can see that the plugin is set up to be executed during the package phase and that the plugin configuration is defined in the <span style="font-weight:bold;">zip.xml</span> file. This zip.xml file looks like this:
<br />
<br /><pre class="brush: scala">
<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0 http://maven.apache.org/xsd/assembly-1.1.0.xsd">
  <id>sql-files</id>
  <formats>
    <format>zip</format>
  </formats>
  <includeBaseDirectory>false</includeBaseDirectory>
  <fileSets>
    <fileSet>
      <directory>${project.basedir}/sql/schema</directory>
      <outputDirectory/>
      <includes>
        <include>**/*</include>
      </includes>
    </fileSet>
  </fileSets>
</assembly>
</pre>
<br />
<br />This configuration will create a zip-file of all *.sql files found in <span style="font-weight:bold;">${project.basedir}/sql/schema</span> and publish this zip-file along with the pom.xml when mvn deploy is executed. The id of this assembly is "<span style="font-style:italic;">sql-files</span>". This id is used as the artifact classifier and is appended as a suffix to the filename of the zip-file.
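<br />
<br />Just to make the naming concrete: with the coordinates used further down in this post, the deployed file will be called project-parent-0.1-SNAPSHOT-sql-files.zip. And although we don't need this for the setup below, such an attached artifact could - as far as I know - even be consumed as a regular dependency, as long as the classifier matches the assembly id:
<br />
<br /><pre class="brush: scala">
<br /><!-- hypothetical consumer of the zip-file; not part of our actual setup -->
<br /><dependency>
<br />  <groupId>com.package</groupId>
<br />  <artifactId>project-parent</artifactId>
<br />  <version>0.1-SNAPSHOT</version>
<br />  <classifier>sql-files</classifier>
<br />  <type>zip</type>
<br /></dependency>
<br /></pre>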
<br />
<br />Now that we publish the zip-file to the repository, we need a way for the child modules to grab and extract the zip-file before the sql-maven-plugin is executed. This is where the <a href="http://maven.apache.org/plugins/maven-dependency-plugin/">maven-dependency-plugin</a> comes in handy. Unlike the maven-assembly-plugin, the maven-dependency-plugin is configured in the pluginManagement section of the parent pom.xml, because this time we do want child modules to inherit the functionality. Here is what the configuration of the maven-dependency-plugin looks like:
<br />
<br /><pre class="brush: scala">
<br /><project xmlns="http://maven.apache.org/POM/4.0.0"
<br /> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
<br /> xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
<br /> http://maven.apache.org/maven-v4_0_0.xsd">
<br /> <modelVersion>4.0.0</modelVersion>
<br />
<br /> ... as before ...
<br />
<br /> <build>
<br /> <plugins>
<br /> ... as before ...
<br /> </plugins>
<br />
<br /> <pluginManagement>
<br /> <plugins>
<br /> <plugin>
<br /> <groupId>org.apache.maven.plugins</groupId>
<br /> <artifactId>maven-dependency-plugin</artifactId>
<br /> <version>2.3</version>
<br /> <executions>
<br /> <execution>
<br /> <id>unpack-sql-files</id>
<br /> <phase>process-test-resources</phase>
<br /> <goals>
<br /> <goal>unpack</goal>
<br /> </goals>
<br /> <configuration>
<br /> <artifactItems>
<br /> <artifactItem>
<br /> <groupId>com.package</groupId>
<br /> <artifactId>project-parent</artifactId>
<br /> <version>
<br /> ${parent.version}
<br /> </version>
<br /> <type>zip</type>
<br /> <classifier>sql-files</classifier>
<br /> <overWrite>true</overWrite>
<br /> <outputDirectory>
<br /> ${project.build.directory}/sql/schema
<br /> </outputDirectory>
<br /> <includes>**/*.sql</includes>
<br /> </artifactItem>
<br /> </artifactItems>
<br /> <includes>**/*</includes>
<br /> <overWriteReleases>true</overWriteReleases>
<br /> <overWriteSnapshots>true</overWriteSnapshots>
<br /> </configuration>
<br /> </execution>
<br /> </executions>
<br /> </plugin>
<br />
<br /> ... as before ...
<br />
<br /> </plugins>
<br /> </pluginManagement>
<br />
<br /> </build>
<br /></project>
<br /></pre>
<br />
<br />The plugin (if a child module decides to use it) will be executed during the process-test-resources phase of the build. We locate the zip-file by specifying the groupId, artifactId, version and type. Also, the classifier value must match the id we used earlier in the zip.xml file when configuring the maven-assembly-plugin. The zip-file is extracted to ${project.build.directory}/sql/schema, and we only extract files with the *.sql extension (there shouldn't be any other files in the zip, but it doesn't hurt to be explicit). That is all that is needed to extract the zip-file; child modules are now ready to use the extracted files. Here is a snippet from a pom.xml file of a Maven child module. This is everything needed to run the sql-maven-plugin defined in the parent pom and to extract the required configuration files upfront.
<br />
<br /><pre class="brush: scala">
<br /><project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
<br /> xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<br />
<br /> <modelVersion>4.0.0</modelVersion>
<br /> <parent>
<br /> <groupId>com.package</groupId>
<br /> <artifactId>project-parent</artifactId>
<br /> <version>0.1-SNAPSHOT</version>
<br /> </parent>
<br />
<br /> <artifactId>child-a</artifactId>
<br /> <packaging>war</packaging>
<br />
<br /> <build>
<br /> <plugins>
<br /> <plugin>
<br /> <groupId>org.apache.maven.plugins</groupId>
<br /> <artifactId>maven-dependency-plugin</artifactId>
<br /> </plugin>
<br />
<br /> <plugin>
<br /> <groupId>org.codehaus.mojo</groupId>
<br /> <artifactId>sql-maven-plugin</artifactId>
<br /> </plugin>
<br /> </plugins>
<br /> </build>
<br /></project>
<br /></pre>
<br />
<br />For the sake of completeness, here is the full parent pom.xml file once again.
<br />
<br /><pre class="brush: scala">
<br /><project xmlns="http://maven.apache.org/POM/4.0.0"
<br /> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
<br /> xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
<br /> http://maven.apache.org/maven-v4_0_0.xsd">
<br /> <modelVersion>4.0.0</modelVersion>
<br />
<br /> <groupId>com.package</groupId>
<br /> <artifactId>project-parent</artifactId>
<br /> <packaging>pom</packaging>
<br /> <version>0.1-SNAPSHOT</version>
<br />
<br /> <modules>
<br /> <module>child-a</module>
<br /> <module>child-b</module>
<br /> <module>child-c</module>
<br /> </modules>
<br />
<br /> <properties>
<br /> <version.mysql.connector>5.1.12</version.mysql.connector>
<br /> </properties>
<br />
<br /> <build>
<br /> <plugins>
<br /> <plugin>
<br /> <artifactId>maven-assembly-plugin</artifactId>
<br /> <inherited>false</inherited>
<br /> <configuration>
<br /> <descriptors>
<br /> <descriptor>
<br /> ${project.basedir}/assembly/zip.xml
<br /> </descriptor>
<br /> </descriptors>
<br /> </configuration>
<br /> <executions>
<br /> <execution>
<br /> <id>make-assembly</id>
<br /> <phase>package</phase>
<br /> <goals>
<br /> <goal>single</goal>
<br /> </goals>
<br /> </execution>
<br /> </executions>
<br /> </plugin>
<br /> </plugins>
<br />
<br /> <pluginManagement>
<br /> <plugins>
<br /> <plugin>
<br /> <groupId>org.apache.maven.plugins</groupId>
<br /> <artifactId>maven-dependency-plugin</artifactId>
<br /> <version>2.3</version>
<br /> <executions>
<br /> <execution>
<br /> <id>unpack-sql-files</id>
<br /> <phase>process-test-resources</phase>
<br /> <goals>
<br /> <goal>unpack</goal>
<br /> </goals>
<br /> <configuration>
<br /> <artifactItems>
<br /> <artifactItem>
<br /> <groupId>com.package</groupId>
<br /> <artifactId>project-parent</artifactId>
<br /> <version>
<br /> ${parent.version}
<br /> </version>
<br /> <type>zip</type>
<br /> <classifier>sql-files</classifier>
<br /> <overWrite>true</overWrite>
<br /> <outputDirectory>
<br /> ${project.build.directory}/sql/schema
<br /> </outputDirectory>
<br /> <includes>**/*.sql</includes>
<br /> </artifactItem>
<br /> </artifactItems>
<br /> <includes>**/*</includes>
<br /> <overWriteReleases>true</overWriteReleases>
<br /> <overWriteSnapshots>true</overWriteSnapshots>
<br /> </configuration>
<br /> </execution>
<br /> </executions>
<br /> </plugin>
<br />
<br /> <plugin>
<br /> <groupId>org.codehaus.mojo</groupId>
<br /> <artifactId>sql-maven-plugin</artifactId>
<br /> <version>1.4</version>
<br /> <dependencies>
<br /> <dependency>
<br /> <groupId>mysql</groupId>
<br /> <artifactId>mysql-connector-java</artifactId>
<br /> <version>${version.mysql.connector}</version>
<br /> </dependency>
<br /> </dependencies>
<br /> <configuration>
<br /> <driver>com.mysql.jdbc.Driver</driver>
<br /> <url>jdbc:mysql://localhost/</url>
<br /> <username>xyz</username>
<br /> <password>xyz</password>
<br /> </configuration>
<br />
<br /> <executions>
<br /> <execution>
<br /> <id>drop-and-recreate-db</id>
<br /> <phase>process-test-resources</phase>
<br /> <goals>
<br /> <goal>execute</goal>
<br /> </goals>
<br /> <configuration>
<br /> <autocommit>true</autocommit>
<br /> <srcFiles>
<br /> <srcFile>
<br /> ${project.build.directory}/sql/schema/user.sql
<br /> </srcFile>
<br /> <srcFile>
<br /> ${project.build.directory}/sql/schema/core.sql
<br /> </srcFile>
<br /> <srcFile>
<br /> ${project.build.directory}/sql/schema/game.sql
<br /> </srcFile>
<br /> </srcFiles>
<br /> <onError>abort</onError>
<br /> </configuration>
<br /> </execution>
<br /> </executions>
<br /> </plugin>
<br /> </plugins>
<br /> </pluginManagement>
<br />
<br /> </build>
<br /></project>
<br /></pre>Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-1493353025088627001.post-48483353504648009662011-08-05T07:10:00.000-07:002011-08-05T07:35:35.070-07:00Heating up the Code GeneratorIt has been a couple of years since I used a code generator in one of my projects. It must have been 2004 or 2005, when <a href="http://en.wikipedia.org/wiki/Model-driven_architecture">MDA</a> was a big buzzword. Back then, we used a self-made code generator, which was written by a very smart developer I used to work with at <a href="http://www.aperto.de/en.html">Aperto AG</a> in Berlin. He even <a href="http://sourceforge.net/projects/apertogenerator/">contributed</a> the code generator to the open source community later on. Those were the days.
<br />
<br />Since then, there wasn't much use for a code generator anymore. IDEs are getting better and better at helping you with code generation, auto-completion and so on. And of course, adding a generator to your build process is always a bit of work, so it is often faster to write the boilerplate code yourself, as long as that doesn't take forever. However, last week it was time to bring code generation back from the grave. The game we are currently developing for <a href="http://www.playfish.com/?page=company">EA here in Norway</a> has a mechanism where the game client sends game events to the server. In our domain, we also call these events audit changes. A typical audit change can be that the player has found a treasure, consumed food or discovered a new scenery. On the client side, an audit change is implemented in ActionScript 3; on the server side it is implemented in Java. There is a transport layer in between which serializes the AS3 object, sends it over the network and deserializes it back into a Java object. For us server developers, this meant that every new audit change also needed a transport definition describing how to serialize and deserialize it. This definition was always wrapped into a new audit change type class. The type definition class was written manually, which was sort of okay until we had more than 20 audit changes in the game. That's when I started to look into generating the transport layer on the server side.
<br />
<br />In Java 5, along with the new Annotation language feature, Sun added a command-line utility called the <a href="http://download.oracle.com/javase/1.5.0/docs/guide/apt/GettingStarted.html">Annotation Processing Tool</a> (apt). This was later merged into the standard javac compiler with the release of Java 6. There is also the <a href="http://apt-jelly.sourceforge.net/">apt-jelly project</a>, which provides an interface to apt and can be used to generate code artifacts based on templates written with <a href="http://freemarker.sourceforge.net/">Freemarker</a> or <a href="http://commons.apache.org/jelly/index.html">Jelly</a>. Finally, to glue everything together, there is the <a href="http://mojo.codehaus.org/apt-maven-plugin/">maven-apt-plugin</a>, which can be used to execute an <a href="http://download.oracle.com/javase/1.5.0/docs/guide/apt/mirror/com/sun/mirror/apt/AnnotationProcessorFactory.html">AnnotationProcessorFactory</a> during your build and therefore integrate apt into your project. The maven-apt-plugin looks sort of dead, however. I think nowadays even the standard <a href="http://jira.codehaus.org/browse/MCOMPILER-75">maven-compiler-plugin</a> or the <a href="http://code.google.com/p/maven-annotation-plugin/">maven-annotation-plugin</a> can be used to process your annotations and generate code artifacts. Since I got our generator working using the maven-apt-plugin, I did not bother looking at the other two plugins. If someone has a working example of how they are used with an AnnotationProcessorFactory, I would be really happy to see it.
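<br />
<br />To at least sketch the direction: below is a rough, untested draft (my assumption of how it could look, not something we actually run) of a standard JSR-269 processor for the @DatatypeDefinition annotation that is introduced a bit further down. The class name DatatypeProcessor and the placeholder package are made up for illustration; javac picks such a processor up automatically once it is registered in META-INF/services/javax.annotation.processing.Processor.
<br />
<br /><pre class="brush: scala">
<br />import java.io.IOException;
<br />import java.io.Writer;
<br />import java.util.Set;
<br />
<br />import javax.annotation.processing.AbstractProcessor;
<br />import javax.annotation.processing.RoundEnvironment;
<br />import javax.annotation.processing.SupportedAnnotationTypes;
<br />import javax.annotation.processing.SupportedSourceVersion;
<br />import javax.lang.model.SourceVersion;
<br />import javax.lang.model.element.Element;
<br />import javax.lang.model.element.TypeElement;
<br />import javax.tools.Diagnostic;
<br />import javax.tools.JavaFileObject;
<br />
<br />// Untested sketch: the annotation and package names mirror the placeholders
<br />// used in this post and would need to be replaced with the real ones.
<br />@SupportedAnnotationTypes("package.DatatypeDefinition")
<br />@SupportedSourceVersion(SourceVersion.RELEASE_6)
<br />public class DatatypeProcessor extends AbstractProcessor {
<br />
<br />    @Override
<br />    public boolean process(final Set<? extends TypeElement> annotations,
<br />                           final RoundEnvironment roundEnv) {
<br />        for (final TypeElement annotation : annotations) {
<br />            for (final Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
<br />                final TypeElement type = (TypeElement) element;
<br />                try {
<br />                    // one generated source file per annotated audit change; a real
<br />                    // implementation would render the template here instead of
<br />                    // writing this placeholder line
<br />                    final JavaFileObject file = processingEnv.getFiler()
<br />                            .createSourceFile(type.getQualifiedName() + "Type");
<br />                    final Writer writer = file.openWriter();
<br />                    try {
<br />                        writer.write("// TODO: generated Datatype for " + type.getQualifiedName());
<br />                    } finally {
<br />                        writer.close();
<br />                    }
<br />                } catch (final IOException e) {
<br />                    processingEnv.getMessager()
<br />                            .printMessage(Diagnostic.Kind.ERROR, e.getMessage(), type);
<br />                }
<br />            }
<br />        }
<br />        return true; // we claim the annotation, nobody else needs to process it
<br />    }
<br />}
<br /></pre>
<br />
<br />But back to the apt-jelly based solution that we actually have in place.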
Let's look into some code now.
<br />
<br />Here is an example of an audit change as it could exist in the game:
<br />
<br /><pre class="brush: scala">
<br />/**
<br /> * Audit change telling that the {@link User} has bought a {@link House}.
<br /> */
<br />@DatatypeDefinition(minSize = 7)
<br />public class BoughtHouseForGold implements AuditChange {
<br />
<br />    private int itemId;
<br />
<br />    @DatatypeCollection(elementType = Integer.class)
<br />    private List<Integer> sceneries;
<br />
<br />    @DatatypeIgnore
<br />    private User friend;
<br />
<br />    ... other stuff not relevant ...
<br />}
<br /></pre>
<br />
<br />The transport class that we want to generate needs to serialize and deserialize every field of the audit change. As you can guess from the example above, we do not want to transport the friend field. The metadata that isn't accessible via reflection from within the template (which we will write later) needs to be given to the generator in another way - for instance via annotations. That's why I created a bunch of annotations just to instruct the generator.
<br />
<br /><pre class="brush: scala">
<br />/**
<br /> * Marks the annotated type as something that can be
<br /> * serialized and deserialized using a Datatype.
<br /> */
<br />@Retention(RetentionPolicy.SOURCE)
<br />@Target({ElementType.TYPE})
<br />public @interface DatatypeDefinition {
<br />    int minSize() default 0;
<br />}
<br />
<br />/**
<br /> * Any {@link Field} annotated this way will be rendered as a Collection of the
<br /> * specified type when the Datatype is generated.
<br /> */
<br />@Retention(RetentionPolicy.SOURCE)
<br />@Target({ElementType.FIELD})
<br />public @interface DatatypeCollection {
<br />    Class<?> elementType();
<br />}
<br />
<br />/**
<br /> * Any {@link Field} annotated this way will be ignored when the Datatype is generated.
<br /> */
<br />@Retention(RetentionPolicy.SOURCE)
<br />@Target({ElementType.FIELD})
<br />public @interface DatatypeIgnore {
<br />}
<br /></pre>
<br />
<br />Now for the hardest part, the template. I recommend that you first write the code for one class (a class that will later be generated) manually before you work on the template. Add that class to the default location in Maven, i.e. <span style="font-weight:bold;">src/main/java/com/whatever/package</span>. The generated classes will end up in a different location later (under <span style="font-weight:bold;">target/generated-sources/</span>), so it will be easy to compare the expected outcome with the generated outcome while working on the template.
Here is a template example in which I use Freemarker directives.
<br />
<br /><pre class="brush: scala">
<br /><#-- for each type annotated with DatatypeDefinition -->
<br /><@forAllTypes var="type" annotationVar="datatypeDefinition" annotation="package.DatatypeDefinition">
<br /><#-- tell apt-jelly that the outcome will be a java source artifact -->
<br /><@javaSource name="package.types.${type.simpleName}Type">
<br />package package.types;
<br />
<br /><#-- all imports go here -->
<br />import java.io.IOException;
<br />
<br />/**
<br /> * This class contains the {@link Datatype} for {@link ${type.simpleName}}.
<br /> */
<br /><#-- class name is based on the type that was annotated with DatatypeDefinition -->
<br />public class ${type.simpleName}Type extends AbstractAuditableType<${type.simpleName}> {
<br />    public ${type.simpleName}Type() {
<br />        super(
<br />            <#-- replace camel case with underscores, e.g. BoughtHouseForGold becomes BOUGHT_HOUSE_FOR_GOLD -->
<br />            TypeCodes.${type.simpleName?replace("(?<=[a-z0-9])[A-Z]|(?<=[a-zA-Z])[0-9]|(?<=[A-Z])[A-Z](?=[a-z])", "_$0", 'r')?upper_case}_TYPE_CODE,
<br />            new Datatype<${type.simpleName}>(${type.simpleName}.class, ${datatypeDefinition.minSize}) {
<br />
<br />                @Override
<br />                public ${type.simpleName} read(final DatatypeInput in) throws DataFormatException {
<br />                    final ${type.simpleName} value = new ${type.simpleName}();
<br />                    <@forAllFields var="field">
<br />                    <#assign useField = true>
<br />                    <#-- do not do anything if the field is a constant -->
<br />                    <#if field.static = true><#assign useField = false></#if>
<br />                    <#-- do not do anything if annotated with @DatatypeIgnore -->
<br />                    <@ifHasAnnotation declaration=field annotation="package.DatatypeIgnore"><#assign useField = false></@ifHasAnnotation>
<br />                    <#if useField = true>
<br />                    <#-- build the name of the setter method -->
<br />                    <#assign setter = "set${field?cap_first}">
<br />                    <#assign useCollection = false>
<br />                    <@ifHasAnnotation var="datatypeCollectionAnnotation" declaration=field annotation="package.DatatypeCollection"><#assign useCollection = true></@ifHasAnnotation>
<br />                    <#if useCollection = true>
<br />                    <#if datatypeCollectionAnnotation.elementType = "java.lang.Integer">
<br />                    value.${setter}(in.readList(Datatype.uintvar31));
<br />                    <#else>
<br />                    System.out.println("Cannot read collections of type: ${datatypeCollectionAnnotation.elementType}. Extend auditable-type.fmt");
<br />                    </#if>
<br />                    <#else>
<br />                    <#-- handling for fields without extra annotations -->
<br />                    <#if field.type = "int" || field.type = "java.lang.Integer">
<br />                    value.${setter}(in.readUintvar31());
<br />                    <#elseif field.type = "boolean" || field.type = "java.lang.Boolean">
<br />                    value.${setter}(in.readBoolean());
<br />                    <#elseif field.type = "java.lang.String">
<br />                    value.${setter}(in.readString());
<br />                    </#if>
<br />                    </#if>
<br />                    <#assign useCollection = false>
<br />                    </#if>
<br />                    <#assign useField = false>
<br />                    </@forAllFields>
<br />                    return value;
<br />                }
<br />
<br />                @Override
<br />                public void write(final DatatypeOutput out, final ${type.simpleName} value) throws IOException {
<br />                    <@forAllFields var="field">
<br />                    <#assign useField = true>
<br />                    <#-- do not do anything if the field is a constant -->
<br />                    <#if field.static = true><#assign useField = false></#if>
<br />                    <#-- do not do anything if annotated with @DatatypeIgnore -->
<br />                    <@ifHasAnnotation declaration=field annotation="package.DatatypeIgnore"><#assign useField = false></@ifHasAnnotation>
<br />                    <#if useField>
<br />                    <#-- build the name of the getter method -->
<br />                    <#assign getter = "get${field?cap_first}">
<br />                    <#if field.type = "boolean" || field.type = "java.lang.Boolean">
<br />                    <#-- boolean getters start with is -->
<br />                    <#assign getter = "is${field?cap_first}">
<br />                    </#if>
<br />                    <#assign useCollection = false>
<br />                    <@ifHasAnnotation var="datatypeCollectionAnnotation" declaration=field annotation="package.DatatypeCollection"><#assign useCollection = true></@ifHasAnnotation>
<br />                    <#if useCollection = true>
<br />                    <#if datatypeCollectionAnnotation.elementType = "java.lang.Integer">
<br />                    out.writeCollection(Datatype.uintvar31, value.${getter}());
<br />                    <#else>
<br />                    System.out.println("Cannot write collections of type: ${datatypeCollectionAnnotation.elementType}. Extend auditable-type.fmt");
<br />                    </#if>
<br />                    <#else>
<br />                    <#if field.type = "int" || field.type = "java.lang.Integer">
<br />                    out.writeUintvar31(value.${getter}());
<br />                    <#elseif field.type = "boolean" || field.type = "java.lang.Boolean">
<br />                    out.writeBoolean(value.${getter}());
<br />                    <#elseif field.type = "java.lang.String">
<br />                    out.writeString(value.${getter}());
<br />                    </#if>
<br />                    </#if>
<br />                    <#assign useCollection = false>
<br />                    </#if>
<br />                    <#assign useField = false>
<br />                    </@forAllFields>
<br />                }
<br />            }
<br />        );
<br />    }
<br />}
<br /></@javaSource>
<br /></@forAllTypes>
<br /></pre>
<br />
<br />My apologies, this is incredibly hard to read here on the blog. It helps to click the "view source" button in the upper right corner of the code above and copy everything into a text editor. I also added comments to the template to explain what I am doing.
<br />
<br />Finally, here is the configuration for the maven-apt-plugin, so that it will generate your code artifacts before compiling your project (note that target/generated-sources will be merged with the real sources at compile time).
<br />
<br /><pre class="brush: scala">
<br /><plugin>
<br />  <groupId>org.codehaus.mojo</groupId>
<br />  <artifactId>apt-maven-plugin</artifactId>
<br />  <version>1.0-alpha-4</version>
<br />  <configuration>
<br />    <factory>net.sf.jelly.apt.freemarker.FreemarkerProcessorFactory</factory>
<br />    <options>
<br />      <option>template=${basedir}/src/main/resources/apt/auditable-type.fmt</option>
<br />    </options>
<br />    <fork>true</fork>
<br />  </configuration>
<br />  <dependencies>
<br />    <dependency>
<br />      <groupId>net.sf.apt-jelly</groupId>
<br />      <artifactId>apt-jelly-core</artifactId>
<br />      <version>2.14</version>
<br />    </dependency>
<br />    <dependency>
<br />      <groupId>net.sf.apt-jelly</groupId>
<br />      <artifactId>apt-jelly-freemarker</artifactId>
<br />      <version>2.14</version>
<br />    </dependency>
<br />  </dependencies>
<br />  <executions>
<br />    <execution>
<br />      <goals>
<br />        <goal>process</goal>
<br />      </goals>
<br />    </execution>
<br />  </executions>
<br /></plugin>
<br /></pre>
<br />
<br />And voilà, here is our generated <span style="font-weight:bold;">BoughtHouseForGoldType</span> class fresh out of the oven:
<br />
<br /><pre class="brush: scala">
<br />package package.types;
<br />
<br />import java.io.IOException;
<br />
<br />/**
<br /> * This class contains the {@link Datatype} for {@link BoughtHouseForGold}.
<br /> */
<br />public class BoughtHouseForGoldType extends AbstractAuditableType<BoughtHouseForGold> {
<br />    public BoughtHouseForGoldType() {
<br />        super(
<br />            TypeCodes.BOUGHT_HOUSE_FOR_GOLD_TYPE_CODE,
<br />            new Datatype<BoughtHouseForGold>(BoughtHouseForGold.class, 7) {
<br />
<br />                @Override
<br />                public BoughtHouseForGold read(final DatatypeInput in) throws DataFormatException {
<br />                    final BoughtHouseForGold value = new BoughtHouseForGold();
<br />                    value.setItemId(in.readUintvar31());
<br />                    value.setSceneries(in.readList(Datatype.uintvar31));
<br />                    return value;
<br />                }
<br />
<br />                @Override
<br />                public void write(final DatatypeOutput out, final BoughtHouseForGold value) throws IOException {
<br />                    out.writeUintvar31(value.getItemId());
<br />                    out.writeCollection(Datatype.uintvar31, value.getSceneries());
<br />                }
<br />            }
<br />        );
<br />    }
<br />}
<br /></pre>Unknownnoreply@blogger.com0