Spending time before Java One in San Francisco


I finally made it to San Francisco. My company sent me and ten colleagues to the JavaOne conference this year. This is supposed to be the biggest Java-related conference in the world. We will see about that. It starts next Tuesday. In the meantime, I have some extra days I can spend in San Francisco. The plan is to travel to Fresno today and meet a friend whom I have not seen for a couple of years.

Even before JavaOne starts, there are a lot of conferences about agile software development, Java-based technologies, or vendor-specific stuff. Some of them are even free. There is CommunityOne West, which starts next Monday. It is a side conference, also hosted by Sun. CommunityOne West runs from the 1st to the 3rd of June, also in the Moscone Center. Since it starts one day before the real JavaOne conference, it will be a great opportunity for me to get my head filled with Java stuff even earlier. This is the CommunityOne program for Monday. There are a lot of sessions about OpenSolaris and Cloud Computing. Since I will unfortunately only be coming back from Fresno on Monday, I will miss the morning and lunch sessions of CommunityOne West. However, I found some "pearls" in the afternoon program, like "Dynamic Data in a Web 2.0 World", "Three Techniques for Database Scalability with Hibernate" or "What Do You Need to Know About Creating and Running a Scalable Web Site but Were Afraid to Ask?" Looking forward to going there.

Anyway, I want to close this post with some practical tips for you guys entering the US. One of my colleagues was hit really hard this time by US border control. I am not sure that is the correct name, but they are the guys who check the forms you filled in on the plane and ask all those questions. Obviously they had found something in his profile, or he just looked similar to someone they were looking for. He was asked not to proceed to the exit but to another office called "Secondary". In there, they asked a lot of detailed questions, this time with a quite obvious background, something like "Do you have family or friends in Saudi Arabia, Iran or somewhere else in the Middle East?" I guess he was tempted to answer that our former System Owner migrated from Iran to Sweden some 20 years ago :)

Then the staff really went into detail with questions about JavaOne: what type of conference it is, how many years he had worked at our company, what his position is, what exactly he does, etc. Finally, he cleared Secondary. Now everyone, even the exit people, ends up in an area between border security and customs. This is where you pick up your luggage. I headed directly to the bathroom to wash my hands. It is really smart, at a time when the US has the most swine flu cases, to have everyone press the four fingers and thumb of both hands on a fingerprint scanner!

While we were waiting for the luggage, some guys with beagle dogs went around checking bags. These dogs are really great; they found a lot of food in people's bags. It was fun to watch - snap, the dog caught ya. After a couple of minutes, we got our bags and moved on. Guess what, my colleague was picked out again and had to go someplace else. They asked him if he wanted to change anything in his customs declaration. Obviously that was not the case, so they started searching his belongings. Of course they did not find anything.

Other stuff: take it easy when leaving the plane. Border control usually starts off with only a few counters open, so it looks like you will have to wait forever. They then open more and more counters, so the passengers who were among the first waited longest. It is better to come late, I'd say. Make sure you have both sides of all forms filled in. Be prepared to answer detailed questions about the purpose of your trip and details about the place you are staying. And please do not travel to the US if you do not speak English. There was a Spanish lady in my queue, filling in a German I-94 form, who could not speak any English. It was a disaster. She did not even know what to fill in in which fields, not to mention the questions they asked her. I know it sounds ignorant, but that's the way it is: you have to be at least OK in English.

How to Google App Enginefy your existing Java application and fail

Last month Google added Java support to the Google App Engine. This means that from now on you can host Java-based applications on Google's infrastructure. Even better is the fact that you can do it for free. Well, it is free for starters: there are quotas like 5 million page views per day, 6.5 hours of CPU time per day, 1 GB of traffic per day, etc. For the everyday "let's do something in Java" app, this is more than enough, so I decided to run a simple Wicket web application on Google's App Engine.

I am still a part-time student at Fernuni Hagen in Germany, and this semester I am taking a course called "Web 2.0 and social software". The course is organized so that groups of 3 students pick a topic from the Web 2.0 bubble and prepare a presentation about it. I recently read the book "Collective Intelligence in Action", so I thought I might as well talk about tags, tagging and tag clouds. It is always better not just to talk in your presentations but to show some real action. Therefore I created a little web application, based on Satnam Alag's domain model from the book and powered by Apache Wicket and Spring to make it fully functional.

The non-App-Enginefied version uses an in-memory HSQL database to persist Tags, Users and Items. This means that if you restart the application, everything is gone. Everything is visualized using a tag cloud embedded in an Apache Wicket WebPage. The page also contains entry forms to add new Users, Items and Tags. So here is the domain model, which consists of five classes. The Entity class is an abstract base class for Item, Tag, TaggedItem and User.




public abstract class Entity implements Serializable
{
    private static final long serialVersionUID = 1L;

    public long id;

    public String name;

    public long getId()
    {
        return id;
    }

    public void setId(long id)
    {
        this.id = id;
    }

    public String getName()
    {
        return name;
    }

    public void setName(String name)
    {
        this.name = name;
    }

    @Override
    public boolean equals(Object o)
    {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;

        Entity entity = (Entity) o;

        return id == entity.id;
    }

    @Override
    public int hashCode()
    {
        // Standard hash for a long id.
        return (int) (id ^ (id >>> 32));
    }
}

public class User extends Entity
{
}

public class Tag extends Entity
{
}

public class Item extends Entity
{
}

public class TaggedItem extends Entity
{
    private long m_userId;

    private long m_itemId;

    private long m_tagId;

    private int m_weight;

    public long getUserId()
    {
        return m_userId;
    }

    public void setUserId(final long userId)
    {
        m_userId = userId;
    }

    public long getItemId()
    {
        return m_itemId;
    }

    public void setItemId(final long itemId)
    {
        m_itemId = itemId;
    }

    public long getTagId()
    {
        return m_tagId;
    }

    public void setTagId(final long tagId)
    {
        m_tagId = tagId;
    }

    public int getWeight()
    {
        return m_weight;
    }

    public void setWeight(final int weight)
    {
        m_weight = weight;
    }
}




The CRUD operations for this domain model are implemented using the DAO pattern (surprise, surprise). At the top of the hierarchy sits the GenericDao interface, which contains methods for create, read, update and delete. Then I created interfaces for each individual DAO that is connected to one entity in the domain model, i.e. UserDao or ItemDao. These individual DAO interfaces extend GenericDao, specifying the entity to persist and the primary key type. The concrete implementation is done in classes like UserJdbcDao or ItemJdbcDao. These implement the appropriate interface and extend a helper class called AbstractJdbcDao, which contains a reference to Spring's SimpleJdbcTemplate. Here is an example of the class structure for the User class.




public interface GenericDao<T, PK extends Serializable>
{
    /**
     * Persists the newInstance object into the database.
     */
    PK create(T newInstance);

    /**
     * Retrieves an object that was previously persisted to the database,
     * using the indicated id as primary key.
     */
    T read(PK id);

    /**
     * Retrieves all objects that were previously persisted to the database.
     */
    List<T> readAll();

    /**
     * Saves changes made to a persistent object.
     */
    void update(T transientObject);

    /**
     * Removes an object from persistent storage in the database.
     */
    void delete(T persistentObject);
}

public interface UserDao extends GenericDao<User, Long>
{
    /**
     * Returns the {@link User} whose name matches the given String.
     * @param name a user's name
     * @return User or <code>null</code>
     */
    User readByName(final String name);
}

public abstract class AbstractJdbcDao<T extends Entity>
{
    private SimpleJdbcTemplate m_simpleJdbcTemplate;
    private String m_identityQuery;

    protected SimpleJdbcTemplate getSimpleJdbcTemplate()
    {
        return m_simpleJdbcTemplate;
    }

    public void setDataSource(DataSource dataSource)
    {
        m_simpleJdbcTemplate = new SimpleJdbcTemplate(dataSource);
    }

    public String getIdentityQuery()
    {
        return m_identityQuery;
    }

    public void setIdentityQuery(final String identityQuery)
    {
        m_identityQuery = identityQuery;
    }

    abstract ParameterizedRowMapper<T> getMapper();
}

public class UserJdbcDao extends AbstractJdbcDao<User> implements UserDao
{
    private static final ParameterizedRowMapper<User> MAPPER = new ParameterizedRowMapper<User>()
    {
        public User mapRow(ResultSet rs, int rowNum) throws SQLException
        {
            final User user = new User();
            user.setId(rs.getLong("user_id"));
            user.setName(rs.getString("user_name"));
            return user;
        }
    };

    public Long create(User newInstance)
    {
        final SimpleJdbcTemplate template = getSimpleJdbcTemplate();
        template.update("INSERT INTO user(user_name) VALUES(?)", newInstance.getName());
        return template.queryForLong(getIdentityQuery());
    }

    public User read(Long id)
    {
        final SimpleJdbcTemplate template = getSimpleJdbcTemplate();
        final List<User> users = template.query("SELECT * FROM user WHERE user_id = ?", getMapper(), id);
        return users.isEmpty() ? null : users.get(0);
    }

    public User readByName(final String name)
    {
        final SimpleJdbcTemplate template = getSimpleJdbcTemplate();
        final List<User> users = template.query("SELECT * FROM user WHERE user_name = ?", getMapper(), name);
        return users.isEmpty() ? null : users.get(0);
    }

    public List<User> readAll()
    {
        final SimpleJdbcTemplate template = getSimpleJdbcTemplate();
        return template.query("SELECT * FROM user", getMapper());
    }

    public void update(User transientObject)
    {
        throw new UnsupportedOperationException("Not implemented yet.");
    }

    public void delete(User persistentObject)
    {
        final SimpleJdbcTemplate template = getSimpleJdbcTemplate();
        template.update("DELETE FROM user WHERE user_id = ?", persistentObject.getId());
    }

    ParameterizedRowMapper<User> getMapper()
    {
        return MAPPER;
    }
}




Everything is wired together using Spring beans. This is where I define that I want to use an in-memory database. All standard Spring stuff so far, no magic involved.




<?xml version="1.0" encoding="UTF-8"?>
<beans
    xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
        http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd
        http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd">

    <!-- HSQL DS -->
    <bean id="dataSource" class="org.springbyexample.jdbc.datasource.InitializingBasicDataSource" destroy-method="close">
        <property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
        <property name="url" value="jdbc:hsqldb:mem:."/>
        <property name="username" value="sa"/>
        <property name="password" value=""/>
        <property name="sqlScriptProcessor">
            <bean class="org.springbyexample.jdbc.core.SqlScriptProcessor">
                <property name="sqlScripts">
                    <list>
                        <value>classpath:/db-create.sql</value>
                    </list>
                </property>
            </bean>
        </property>
    </bean>

    <!-- DAO base -->
    <bean id="daoWithDataSource" abstract="true">
        <property name="dataSource" ref="dataSource" />
        <property name="identityQuery" value="CALL IDENTITY();" /> <!-- HSQL specific -->
    </bean>

    <!-- JDBC DAO's -->
    <bean id="userDao" class="com.fernuni.db.jdbc.UserJdbcDao" parent="daoWithDataSource" />
    <bean id="itemDao" class="com.fernuni.db.jdbc.ItemJdbcDao" parent="daoWithDataSource" />
    <bean id="tagDao" class="com.fernuni.db.jdbc.TagJdbcDao" parent="daoWithDataSource" />
    <bean id="taggedItemDao" class="com.fernuni.db.jdbc.TaggedItemJdbcDao" parent="daoWithDataSource" />

</beans>




Also note that I used a special DataSource which always executes a database initialization script. To use this InitializingBasicDataSource, you have to reference the Spring by Example library. In Maven-powered projects you would add this to your POM:




<repositories>
    <repository>
        <id>springbyexample.org</id>
        <name>Spring by Example</name>
        <url>http://www.springbyexample.org/maven/repo</url>
    </repository>
</repositories>

<dependency>
    <groupId>org.springbyexample</groupId>
    <artifactId>spring-by-example-jdbc</artifactId>
    <version>1.0.3</version>
</dependency>




Finally, I created a single page using Apache Wicket which visualizes the tag cloud and also contains form fields to create and delete Users, Items and Tags. The page class is called HomePage and contains references to all DAO beans. Later on, I will replace those DAO beans with the ones I have to use in a Google App Engine environment. Here is my HomePage. I have removed a lot of code for better readability, as the class is complex. Get the source code to have a look at the full implementation.




public class HomePage extends WebPage
{
    private static final long serialVersionUID = 1L;

    private String m_newUserName = "";
    private String m_newItemName = "";

    private String m_newTagName = "";
    private User m_taggingUser = new User();
    private Item m_taggedItem = new Item();

    @SpringBean
    private UserDao m_userDao;

    @SpringBean
    private ItemDao m_itemDao;

    @SpringBean
    private TagDao m_tagDao;

    @SpringBean
    private TaggedItemDao m_taggedItemDao;

    public HomePage(final PageParameters parameters)
    {
        addUserFields();
        addItemFields();
        addTagFields();

        displayTagCloud();

        ...
    }

    private void displayTagCloud()
    {
        ...
    }

    private void addTagFields()
    {
        final Form tagForm = new Form("newTags");

        final RequiredTextField newTagName = new RequiredTextField("newTagName", new PropertyModel(this, "newTagName"));
        tagForm.add(newTagName);

        final IModel userChoices = new LoadableDetachableModel()
        {
            protected Object load()
            {
                return m_userDao.readAll();
            }
        };

        final IChoiceRenderer userChoiceRenderer = new IChoiceRenderer()
        {
            public Object getDisplayValue(Object object)
            {
                final User user = (User) object;
                return user.getName();
            }

            public String getIdValue(Object object, int index)
            {
                final User user = (User) object;
                return user.getId() + "";
            }
        };

        final ListChoice userListChoices = new ListChoice("taggingUser", new PropertyModel(this, "taggingUser"), userChoices, userChoiceRenderer);
        userListChoices.setRequired(true);
        tagForm.add(userListChoices);

        final IModel itemChoices = new LoadableDetachableModel()
        {
            protected Object load()
            {
                return m_itemDao.readAll();
            }
        };

        final IChoiceRenderer itemChoiceRenderer = new IChoiceRenderer()
        {
            public Object getDisplayValue(Object object)
            {
                final Item item = (Item) object;
                return item.getName();
            }

            public String getIdValue(Object object, int index)
            {
                final Item item = (Item) object;
                return item.getId() + "";
            }
        };

        final ListChoice itemListChoices = new ListChoice("taggedItem", new PropertyModel(this, "taggedItem"), itemChoices, itemChoiceRenderer);
        itemListChoices.setRequired(true);
        tagForm.add(itemListChoices);

        final Button saveNewTagButton = new Button("saveNewTagButton")
        {
            @Override
            public void onSubmit()
            {
                super.onSubmit();

                Tag existingTag = m_tagDao.readByText(m_newTagName);
                if (existingTag == null)
                {
                    final Tag newTag = new Tag();
                    newTag.setName(m_newTagName);
                    m_tagDao.create(newTag);

                    existingTag = m_tagDao.readByText(m_newTagName);
                    assert existingTag != null;
                }

                final Long taggingUserId = m_taggingUser.getId();
                final Long taggedItemId = m_taggedItem.getId();

                final TaggedItem taggedItem = new TaggedItem();
                taggedItem.setItemId(taggedItemId);
                taggedItem.setUserId(taggingUserId);
                taggedItem.setTagId(existingTag.getId());

                m_taggedItemDao.create(taggedItem);
            }
        };
        tagForm.add(saveNewTagButton);

        add(tagForm);

        // Existing Tags
        final Form existingTagsForm = new Form("existingTagsForm");

        final IModel taggedItemsModel = new LoadableDetachableModel()
        {
            protected Object load()
            {
                return m_taggedItemDao.readAll();
            }
        };

        final ListView existingTaggedItems = new ListView("existingTaggedItems", taggedItemsModel)
        {
            protected void populateItem(ListItem item)
            {
                final TaggedItem taggedItem = (TaggedItem) item.getModelObject();

                final User taggingUser = m_userDao.read(taggedItem.getUserId());
                final Item itemContainingTag = m_itemDao.read(taggedItem.getItemId());
                final Tag tag = m_tagDao.read(taggedItem.getTagId());

                final Label taggedText = new Label("taggedText", new PropertyModel(tag, "name"));
                item.add(taggedText);
                final Label taggedBy = new Label("taggedBy", new PropertyModel(taggingUser, "name"));
                item.add(taggedBy);
                final Label taggedAt = new Label("taggedAt", new PropertyModel(itemContainingTag, "name"));
                item.add(taggedAt);

                final Button deleteTaggedItem = new Button("deleteTaggedItem")
                {
                    @Override
                    public void onSubmit()
                    {
                        super.onSubmit();

                        m_taggedItemDao.delete(taggedItem);
                    }
                };
                item.add(deleteTaggedItem);
            }
        };
        existingTagsForm.add(existingTaggedItems);

        add(existingTagsForm);
    }

    ...
}
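The displayTagCloud() body is elided above; conceptually it only maps each tag's aggregated weight to a font size. Here is a minimal, framework-free sketch of that mapping (the class name, method name and scaling constants are my own invention for illustration, not project code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TagCloudScaler
{
    /**
     * Linearly maps each tag's weight into a font size between minSize
     * and maxSize. Hypothetical helper, not part of the real project.
     */
    public static Map<String, Integer> fontSizes(Map<String, Integer> tagWeights,
                                                 int minSize, int maxSize)
    {
        int min = Integer.MAX_VALUE;
        int max = Integer.MIN_VALUE;
        for (int weight : tagWeights.values())
        {
            min = Math.min(min, weight);
            max = Math.max(max, weight);
        }

        final Map<String, Integer> sizes = new LinkedHashMap<String, Integer>();
        for (Map.Entry<String, Integer> entry : tagWeights.entrySet())
        {
            // Avoid division by zero when all tags share the same weight.
            final double ratio = (max == min)
                ? 0.5
                : (entry.getValue() - min) / (double) (max - min);
            sizes.put(entry.getKey(), (int) Math.round(minSize + ratio * (maxSize - minSize)));
        }
        return sizes;
    }
}
```

In a real cloud you might prefer logarithmic scaling so that a few heavily used tags do not flatten everything else into the minimum size.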




So far so good. Let's now Google App Enginefy this application. First of all, since this application uses Apache Wicket, you have to change some settings regarding threads, because Google App Engine does not support threads today. Step 1: run the Wicket application in deployment mode to disable a background thread that checks for modifications. Step 2: override the method newSessionStore() in your WebApplication class like this:




@Override
protected ISessionStore newSessionStore()
{
    return new HttpSessionStore(this);
}




This eliminates another thread that is used by the default DiskPageStore class. Step 3: enable sessions in Google App Engine, as they are turned off by default and Wicket uses the HTTP session heavily. I got all this from Alastair Maw's blog, thanks a lot!
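For step 3, sessions are switched on in appengine-web.xml. A minimal example of what I mean (your application id and version will differ):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
    <application>your-app-id</application>
    <version>1</version>
    <!-- Wicket relies heavily on the HttpSession, so this must be enabled. -->
    <sessions-enabled>true</sessions-enabled>
</appengine-web-app>
```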

Now you have to add all the libraries that come with the Google App Engine SDK to your project. In my case the project is based on Maven, so I added the following to my pom.xml.




<repositories>
    <repository>
        <id>appengine</id>
        <name>Google App Engine Libraries</name>
        <url>http://www.mvnsearch.org/maven2</url>
    </repository>
    <repository>
        <id>datanucleus</id>
        <name>Datanucleus Libraries</name>
        <url>http://www.datanucleus.org/downloads/maven2</url>
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId>com.google.appengine</groupId>
        <artifactId>jdo2-api</artifactId>
        <version>2.3-SNAPSHOT</version>
    </dependency>

    <dependency>
        <groupId>com.google.appengine</groupId>
        <artifactId>datanucleus-appengine</artifactId>
        <version>1.0.1.final</version>
    </dependency>

    <dependency>
        <groupId>org.datanucleus</groupId>
        <artifactId>datanucleus-core</artifactId>
        <version>${datanucleus.version}</version>
        <scope>runtime</scope>
    </dependency>

    <dependency>
        <groupId>org.datanucleus</groupId>
        <artifactId>datanucleus-jpa</artifactId>
        <version>${datanucleus.version}</version>
    </dependency>

    <dependency>
        <groupId>com.google.appengine</groupId>
        <artifactId>appengine-api-1.0-sdk</artifactId>
        <version>1.2.1</version>
    </dependency>
</dependencies>

<properties>
    <datanucleus.version>1.1.0</datanucleus.version>
</properties>




Unfortunately some dependencies cannot be resolved, even with the extra repositories I added. You have to add these jar files manually to your local Maven repository. I will not write about it here, but check Dan Walmsley's blog or Shalin's blog; they explain what you need to do.
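For reference, the manual install boils down to one mvn install:install-file call per jar. The coordinates and path below are placeholders for illustration; see the blogs above for the exact artifacts:

```shell
mvn install:install-file \
    -DgroupId=com.google.appengine \
    -DartifactId=appengine-api-1.0-sdk \
    -Dversion=1.2.1 \
    -Dpackaging=jar \
    -Dfile=/path/to/appengine-java-sdk/lib/user/appengine-api-1.0-sdk-1.2.1.jar
```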

Alright, now that we have everything set up to Google App Enginefy our Java application, let's start. The App Engine SDK ships with a servlet container in which you can start and test your application in an App Engine-like environment. Download and unzip the SDK, then go into the bin folder. To enable remote debugging in the App Engine environment, open the startup script (dev_appserver.sh on Linux) and make it look like this:




#!/bin/bash

# Launches the development AppServer

[ -z "${DEBUG}" ] || set -x # trace if $DEBUG env. var. is non-zero

SDK_BIN=`dirname $0 | sed -e "s#^\\([^/]\\)#${PWD}/\\1#"` # sed makes absolute
SDK_LIB=$SDK_BIN/../lib
SDK_CONFIG=$SDK_BIN/../config/sdk

java -ea -cp "$SDK_LIB/appengine-tools-api.jar" \
    com.google.appengine.tools.KickStart \
    --jvm_flag=-Xdebug \
    --jvm_flag=-Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=n \
    com.google.appengine.tools.development.DevAppServerMain $*




The next time you start your local App Engine, you will be able to attach a remote debugger on port 8000. Nice!

So at this point, the application would run on Google App Engine without problems. However, the data is not persistent and will be gone every time you restart. So let's use a real database. Each App Enginefied application can access something Google calls the datastore. The datastore can be accessed using either JDO or JPA; neither plain JDBC nor O/R mappers like Hibernate or TopLink will work at this stage. App Engine uses something called DataNucleus for datastore access, which I think is another abstraction over different ways of persistence. So DataNucleus will process JDO- or JPA-instrumented classes and do some persistence magic.

I decided to use JDO to access the App Engine datastore from my application. As a first step, let's add a PersistenceManagerFactory and some new DAO beans to the current Spring configuration.




<bean id="persistenceManagerFactory" class="org.springframework.orm.jdo.LocalPersistenceManagerFactoryBean">
    <property name="jdoProperties">
        <props>
            <prop key="javax.jdo.PersistenceManagerFactoryClass">
                org.datanucleus.store.appengine.jdo.DatastoreJDOPersistenceManagerFactory
            </prop>
            <prop key="javax.jdo.option.ConnectionURL">appengine</prop>
            <prop key="javax.jdo.option.NontransactionalRead">true</prop>
            <prop key="javax.jdo.option.NontransactionalWrite">true</prop>
            <prop key="javax.jdo.option.RetainValues">true</prop>
            <prop key="datanucleus.appengine.autoCreateDatastoreTxns">true</prop>
            <prop key="datanucleus.DetachOnClose">true</prop>
        </props>
    </property>
</bean>

<bean id="daoWithPersistenceManagerFactory" abstract="true">
    <property name="persistenceManagerFactory" ref="persistenceManagerFactory" />
</bean>

<bean id="userDao" class="com.fernuni.db.jdo.UserJdoDao" parent="daoWithPersistenceManagerFactory" />
<bean id="itemDao" class="com.fernuni.db.jdo.ItemJdoDao" parent="daoWithPersistenceManagerFactory" />
<bean id="tagDao" class="com.fernuni.db.jdo.TagJdoDao" parent="daoWithPersistenceManagerFactory" />
<bean id="taggedItemDao" class="com.fernuni.db.jdo.TaggedItemJdoDao" parent="daoWithPersistenceManagerFactory" />




Next I wrote a set of new DAO classes based on Spring's JdoTemplate. Compared to JDBC, this time it is the AbstractJdoDao that contains most of the code. The concrete DAO classes like UserJdoDao are rather small and reuse almost everything in AbstractJdoDao.




public abstract class AbstractJdoDao<T extends Entity>
{
    private JdoTemplate m_jdoTemplate;

    public JdoTemplate getJdoTemplate()
    {
        return m_jdoTemplate;
    }

    public void setPersistenceManagerFactory(final PersistenceManagerFactory persistenceManagerFactory)
    {
        m_jdoTemplate = new JdoTemplate(persistenceManagerFactory);
    }

    public T readEntityByName(final String name, final Class<T> clazz)
    {
        final JdoTemplate jdoTemplate = getJdoTemplate();
        final Collection found = jdoTemplate.find(
            clazz, "m_name == value", "String value", new Object[] {name}
        );
        return (found != null && !found.isEmpty()) ? (T) found.toArray()[0] : null;
    }

    public Long createEntity(final T item)
    {
        final JdoTemplate jdoTemplate = getJdoTemplate();
        final T persistentItem = (T) jdoTemplate.makePersistent(item);
        return persistentItem.getId();
    }

    public T readEntity(final Long id, final Class<T> clazz)
    {
        final JdoTemplate jdoTemplate = getJdoTemplate();
        return (T) jdoTemplate.getObjectById(clazz, id);
    }

    public List<T> readAllEntities(final Class<T> clazz)
    {
        final JdoTemplate jdoTemplate = getJdoTemplate();
        return new ArrayList<T>(jdoTemplate.find(clazz));
    }

    public void updateEntity(final T transientObject)
    {
        final JdoTemplate jdoTemplate = getJdoTemplate();
        jdoTemplate.refresh(transientObject);
    }

    public void deleteEntity(final T persistentObject)
    {
        final JdoTemplate jdoTemplate = getJdoTemplate();
        jdoTemplate.deletePersistent(persistentObject);
    }
}

public class UserJdoDao extends AbstractJdoDao<User> implements UserDao
{
    public User readByName(final String name)
    {
        return readEntityByName(name, User.class);
    }

    public Long create(final User user)
    {
        return createEntity(user);
    }

    public User read(final Long id)
    {
        return readEntity(id, User.class);
    }

    public List<User> readAll()
    {
        return readAllEntities(User.class);
    }

    public void update(final User transientObject)
    {
        updateEntity(transientObject);
    }

    public void delete(final User persistentObject)
    {
        deleteEntity(persistentObject);
    }
}




Using JDO requires that the bytecode of your domain model classes is instrumented with the persistence information. So first of all, let's write this information down. To be least intrusive, I decided to provide separate .jdo files instead of using annotations. Here is the package.jdo file, which I put in the same package as my domain model classes.




<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE jdo PUBLIC
    "-//Sun Microsystems, Inc.//DTD Java Data Objects Metadata 2.0//EN"
    "http://java.sun.com/dtd/jdo_2_0.dtd">
<jdo>
    <package name="com.fernuni.domain">
        <class name="Entity" detachable="true" identity-type="application">
            <inheritance strategy="complete-table"/>
            <field name="id" primary-key="true" value-strategy="identity">
                <column name="ENTITY_ID"/>
            </field>
            <field name="name">
                <column name="ENTITY_NAME"/>
            </field>
        </class>
        <class name="User" detachable="true" identity-type="application"/>
        <class name="Item" detachable="true" identity-type="application"/>
        <class name="Tag" detachable="true" identity-type="application"/>
        <class name="TaggedItem" detachable="true" identity-type="application">
            <field name="m_userId">
                <column name="TAGGED_USER_ID"/>
            </field>
            <field name="m_itemId">
                <column name="TAGGED_ITEM_ID"/>
            </field>
            <field name="m_tagId">
                <column name="TAGGED_TAG_ID"/>
            </field>
        </class>
    </package>
</jdo>




Unfortunately DataNucleus does not support primary keys of type Integer, and all the ids in my JDBC-based application were Integers. So I had to change from Integer to Long to get it working. Quite intrusive, but something I can live with. As a last step, Maven needs to instrument the domain classes using the JDO instructions from my package.jdo file, so I added the maven-datanucleus-plugin to my Maven build.




<build>
    <plugins>
        <plugin>
            <groupId>org.datanucleus</groupId>
            <artifactId>maven-datanucleus-plugin</artifactId>
            <version>${datanucleus.version}</version>
            <configuration>
                <mappingIncludes>**/*.class</mappingIncludes>
                <verbose>true</verbose>
                <enhancerName>ASM</enhancerName>
                <api>JDO</api>
            </configuration>
            <executions>
                <execution>
                    <phase>compile</phase>
                    <goals>
                        <goal>enhance</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>




Unfortunately, it turned out that Google App Engine has huge difficulties when you have a hierarchy of persistence capable classes. In my case, User, Item, Tag and TaggedItem all derive from Entity, and the Entity class contains the persistent fields id and name. For some reason, in the current version of the datanucleus-appengine plugin, the fields of a persistence capable superclass are not accessible from the subclass. In other words, you cannot use inheritance in your domain model if you want to use JDO on Google App Engine. I have started two threads in the App Engine community, and Max Ross from Google promised that inheritance will receive a higher priority in upcoming versions of the datanucleus-appengine plugin.
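Until that happens, the only workaround I can see is to flatten the hierarchy: give every persistence capable class its own id and name fields instead of inheriting them from Entity. A sketch of what Tag would then look like (my own variation for illustration, not what I actually shipped):

```java
import java.io.Serializable;

// Flattened variant: no persistence capable superclass, so the
// datanucleus-appengine plugin sees all fields on a single class.
public class Tag implements Serializable
{
    private static final long serialVersionUID = 1L;

    private long id;
    private String name;

    public long getId()
    {
        return id;
    }

    public void setId(long id)
    {
        this.id = id;
    }

    public String getName()
    {
        return name;
    }

    public void setName(String name)
    {
        this.name = name;
    }
}
```

The obvious cost is duplication: every entity class repeats the same two fields and accessors, which is exactly what the Entity base class was there to avoid.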

Let's sum this up. Google App Engine is a really great thing Google has come up with. It is also an example that Tim O'Reilly is right about the perpetual beta, which is what I think we are seeing with App Engine right now. It has been opened to the public earlier than you would usually expect from other Google products, probably to leverage the "power of the community" in making it better and more stable. On the other hand, I do not think Google App Engine has reached a state where it can be used for real-life enterprise applications.

Looking forward to JavaOne

Only one month left until this year's JavaOne in San Francisco. About 8 to 10 Java geeks from my company will be heading to the US for the conference. We are leaving Stockholm on May 29 to fly to San Francisco via Frankfurt. I am really happy I was selected to go to JavaOne this year. It is a great opportunity to get some fresh ideas from the Java community. I am also meeting a good friend whom I haven't visited since 2002.

Today I received an email from the JavaOne team saying that the schedule builder was ready and that it was recommended to put together a list of sessions I would like to attend. Unlike other software development conferences like OOPSLA, where everything is rather small, JavaOne is supposed to be quite crowded, and rooms fill up fast. So I spent some minutes today building my preliminary JavaOne session schedule.

Compared to OOPSLA 2008, JavaOne is very Sun-focused. A lot of talks are about Sun libraries, applications and tools. Since we are not using the GlassFish application server or JavaServer Faces, I decided to skip those talks. This year's big hype seems to be Cloud Computing; there are tons of sessions with the word "cloud" in the title. Another big topic is REST and web services in general. Not so many talks are about dynamic languages this year, I think. Unfortunately I skipped all the great Scala sessions at last year's OOPSLA, as I did not know what Scala was back then. Now there are 3 talks about Scala and the Scala web framework Lift which I will be attending. Unfortunately there are no sessions about Apache Wicket, my current favorite web framework.

Some of the sessions I am really looking forward to:

(TS-6802) Hadoop, a Highly Scalable, Distributed File/Data Processing System Implemented in Java™ Technology


This is about an open-source implementation of Google's MapReduce algorithm, which comes in handy if you have to deal with gigantic sets of data. This would be the case if you harvest user behavior on the web to use for Collective Intelligence, an area which fascinates me very much right now.
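To make the MapReduce idea concrete without pulling in Hadoop itself, here is the classic word count collapsed into one in-memory pass of plain Java. The map step would emit (word, 1) pairs and the reduce step would sum them per word; Hadoop's contribution is running both phases distributed over many machines (this helper is hypothetical, only to illustrate the model):

```java
import java.util.HashMap;
import java.util.Map;

public class WordCount
{
    /**
     * Map: emit (word, 1) for each whitespace-separated token.
     * Reduce: sum the counts per word. Both phases are collapsed
     * into a single in-memory pass here for illustration.
     */
    public static Map<String, Integer> count(String text)
    {
        final Map<String, Integer> counts = new HashMap<String, Integer>();
        for (String word : text.toLowerCase().split("\\s+"))
        {
            if (word.length() == 0)
            {
                continue;
            }
            final Integer previous = counts.get(word);
            counts.put(word, previous == null ? 1 : previous + 1);
        }
        return counts;
    }
}
```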

(BOF-3820) Lift: The Best Way to Create Rich Internet Applications with Scala


As Groovy more or less passed me by, I decided to work my way into Scala. There was a series about Scala in the German Java Magazin that sparked my interest. Lift is the Scala web framework that supposedly "steals" the best features from other Java-based web frameworks. It borrows the strict separation of layout and business logic from Apache Wicket, which is one of my favorite libraries for web development today.

(BOF-5105) Hudson Community Meet-Up

Writing Hudson plugins is fun. I wrote a plugin for Testability Explorer last Christmas, and I am hoping to meet Kohsuke and some nice people I have had mail contact with via the Hudson community, like Ulli Hafner.

Finally, some sessions focusing on scalability and performance, which I am hoping to put into practice in our company, as we deal with a gaming application that needs to be highly performant.

(TS-4407) Best Practices for Large-Scale Web Sites: Lessons from eBay

(TS-4696) JDBC? We Don't Need No Stinkin’ JDBC: How LinkedIn Scaled with memcached, SOA, and a Bit of SQL

(TS-4588) Where’s My I/O: Some Insights into I/O Profiling and Debugging

My agenda also has some general-purpose topics like Ajax Push and Comet as well as Complex Event Processing (CEP), which might be useful in the future. So we will see about these.

Collective Intelligence in Action


I just finished one of the best books I have read in a long time. It is titled "Collective Intelligence in Action", published by Manning Publications and written by Satnam Alag. We read the book in a book circle, which my company runs twice a year.

Collective Intelligence is all about making web applications better by using intelligence gathered from user interactions and behavior. There are a lot of very successful Web 2.0 applications out there which harvest user intelligence and then use this data to improve the user experience. I liked this book because it was very different from other Java-related books I have read. It is not focused on one particular technology or framework, and it is not too code-focused, even though it gets quite mathematical sometimes. While I read it, I came up with all these cool new ideas for how I could make my own websites better. I will start to implement a little bit of collective intelligence in the next few months. I can really recommend this book if you want to be inspired about some new advanced features for your web applications.

The book starts off by giving a brief overview of Web 2.0 applications and collective intelligence. Then the author explains how users and items can be mapped to each other using either content-based mapping or collaboration-based mapping. It gets a bit mathematical here, with some dot product computations and cosine-based similarity. Chapter 3 is all about tags, tagging and how to leverage tags in a web application. Chapters 4, 5 and 6 introduce some nifty tools that can be used to write an API-based blog searcher and a web crawler. I have done this stuff myself when building my web applications, so it was great to see alternative approaches. In the second and third parts of the book, the author introduces different algorithms to make predictions or to cluster users and items. You will learn about classification, regression, clustering, etc. It gets very theoretical and is sometimes a bit hard to follow, but it is very, very interesting. Since all the code examples in the book are in Java, Alag uses WEKA and JDM (Java Data Mining) to implement the algorithms.
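The dot product and cosine similarity the book leans on are easy to state in code. Here is a minimal sketch over two term weight vectors (my own helper for illustration, not code from the book):

```java
public class CosineSimilarity
{
    /**
     * Cosine similarity of two equal-length weight vectors:
     * dot(a, b) / (|a| * |b|). Returns 0 for a zero vector.
     */
    public static double cosine(double[] a, double[] b)
    {
        double dot = 0.0;
        double normA = 0.0;
        double normB = 0.0;
        for (int i = 0; i < a.length; i++)
        {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        if (normA == 0.0 || normB == 0.0)
        {
            return 0.0;
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }
}
```

A value of 1 means the two vectors point in the same direction; 0 means they share no weighted terms, which is how the book decides whether two users or items are similar.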

The reader will also learn a bit of Lucene, the popular text indexing and search framework, which is used for content-based learning. The Lucene parts are pretty basic, though, and should be familiar to those of you who have worked with Lucene before. The book ends with a practical example of how to build a recommendation engine similar to Amazon's.

Who is the book for? I can recommend it to experienced Java developers who would like to try out some new things in their own web applications. There are tons of great ideas in this book. Having a website with a couple of hundred visitors per day is a plus if you want to practically unleash some collective intelligence.