Friday, June 24, 2011

3 Out Of 4 Developers Could Benefit From Metawidget

I thought it was about time I posted the 'results to date' of the Metawidget survey I've been running.

I started the survey in response to the query 'just how prevalent is the problem of duplication in User Interface development?' By 'duplication' I mean code that you have to write, but that could be inferred from existing sources within your application. For example, the maximum length of a UI text box could be inferred from a database schema; the correct format for an e-mail address could be inferred from a validation subsystem; the available navigation buttons could be inferred from a BPM engine. Such duplicated values must be declared identically, and must be kept identical throughout your project's lifetime. If they diverge, for example if the UI allows text to be input that is longer than the database can store, it's likely to cause an error.
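
To make the divergence concrete, here is a minimal sketch (all names hypothetical) of the same "maximum length" fact declared twice, once by the UI and once mirroring the schema, and what happens when the two copies drift apart:

```java
// Hypothetical illustration: the same constraint declared in two places.
// The persistence layer might declare it via JPA, e.g. @Column(length = 40),
// while the UI layer hard-codes its own (now stale) copy:
public class DuplicationExample {

    static final int UI_NAME_MAX_LENGTH = 50;    // the UI's copy of the constraint
    static final int DB_NAME_COLUMN_LENGTH = 40; // what the schema actually allows

    static boolean passesUiCheck(String name) {
        return name.length() <= UI_NAME_MAX_LENGTH;
    }

    static boolean fitsInDatabase(String name) {
        return name.length() <= DB_NAME_COLUMN_LENGTH;
    }

    public static void main(String[] args) {
        String input = "x".repeat(45);
        System.out.println(passesUiCheck(input));   // true: the UI happily accepts it
        System.out.println(fitsInDatabase(input));  // false: the INSERT will fail
    }
}
```

The 45-character input sails through the UI check but breaks at the database, which is exactly the class of error that inferring both values from a single source would prevent.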

So how many developers have to do this error-prone sort of work, duplicating information throughout their applications? Apparently 3 out of 4. Every time they add a new field to their back-end, they also need to duplicate that field's metadata by hand across their UI, validation and other layers.

This lends weight to the idea that a project like Metawidget has the potential to help a great many developers. I've previously blogged a more in-depth discussion on the problem of duplication in User Interfaces. The survey is still open, so please submit your votes!

Monday, June 20, 2011

iPad vs Android: Do Consumers Really Want Flash?

Say what you will about Apple not supporting Flash, consumer choice, and all that. But I know one thing: it completely rocks that I can run ABC Reading Eggs as a full-screen Flash game on my Asus Eee Pad Transformer:

Now the kids can play it comfortably on the couch instead of stuck at a desk in the study! Android FTW!

Friday, June 17, 2011

Price Gouging and the Asus Eee Pad Transformer

A few months ago Australian retailers (perhaps buoyed by the spectacular success of the Mining Industry doing a similar thing) tried to lobby the government to add Goods and Services Tax (10%) to overseas (i.e. online) purchases. They cited the usual "if we're hurting, we can't create as many jobs, and so everybody suffers".

Personally I enjoy supporting local retailers. I like brick and mortar stores. I like being sold to by an enthusiastic and informed sales person. And I like the comfort of knowing I can return a product and have my warranty serviced locally.

I like all of this more than 10%. But not more than 36%.

Recently I wanted to buy an Asus Eee Pad Transformer so I could test Metawidget on a physical tablet. The Transformer is a great product at a great price: $399 USD for the base model, $149 USD for the keyboard add-on. So with the way the Aussie dollar is at the moment I figure that should be about $550 AUD, right? Wrong.

The best I could find locally was $750 AUD. That's a 36% difference! So instead I ordered it from overseas for $570 AUD including international delivery. A good $180 cheaper than I could buy it locally. It's a little hard to find in stock at the moment, but some searching around took care of that.

So yes, it comes with a U.S. plug which I'd rather not have. And yes, I'll have to ship it Stateside if anything goes wrong with it. And yes, I wish I could buy it locally at a price I felt wasn't price gouging. But I simply can't.

Thursday, June 16, 2011

HornetQ: I Want My... I Want My... I Want My MTV (um, JBM)

(with apologies to Dire Straits)

Following on from my previous two posts, I've been wrapping up my migration from JBoss 5.1.0.GA (which used JBoss Messaging (JBM)) to JBoss AS 6.0.0.Final (which uses HornetQ). As I said before, the end result is well worth it: HornetQ is very fast. But I've hit a few gotchas.

8. Uneven Nodes

With JBM (and most messaging implementations) the 'natural' thing to do (i.e. if you're not thinking too hard) is store your JMS queue in a relational database. Then all your nodes consume off that single queue. Some disadvantages of this approach are: your database is a Single Point of Failure; relational databases aren't great for implementing high performance queues.

HornetQ doesn't support relational databases at all. Instead all queues are stored directly on the local file system. And you mustn't share that file system. So now the 'natural' thing (i.e. again, if you're not thinking too hard) is for each node to have its own 'slice' of the overall queue.

This change in the 'natural' order of things has a side effect. With JBM, your nodes would naturally consume off the overall queue as quickly as possible. With HornetQ, they only ever naturally consume off their slice of the queue. If, like us, you have grown your cluster over time and some of your nodes are on better spec hardware than others (more RAM, more CPU cores etc) this is a problem. Because your overall queue processing time will be limited to the time it takes the slowest node to finish its slice.

Really what you need is for HornetQ to do one of two things: at distribution time, give more messages to the fast nodes and fewer to the slow nodes; or, at consumption time, redistribute messages from the slow nodes whenever the queues on the fast nodes are empty. HornetQ doesn't support either of these features yet (RFE here).

In the meantime, HornetQ can support the 'old JBM model' of a single queue. This is covered (in a slightly outdated JBoss 5 way) under 32.5. Configuring the JBoss Application Server to connect to Remote HornetQ Server. Basically you need to:

  • On all nodes, modify /server/all/deploy/jms-ra.rar/META-INF/ra.xml to use NettyConnectorFactory (instead of InVMConnectorFactory) and supply a host and port (this is your MDB consumer):

       <config-property>
          <description>The transport type</description>
          <config-property-name>ConnectorClassName</config-property-name>
          <config-property-type>java.lang.String</config-property-type>
          <config-property-value>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</config-property-value>
       </config-property>
       <config-property>
          <description>The transport configuration. These values must be in the form of key=val;key=val;</description>
          <config-property-name>ConnectionParameters</config-property-name>
          <config-property-type>java.lang.String</config-property-type>
          <config-property-value>host=${node.with.queue:localhost};port=${hornetq.remoting.netty.port:5445}</config-property-value>
       </config-property>

  • On all nodes, modify /server/all/deploy/hornetq/jms-ds.xml to use NettyConnectorFactory (instead of InVMConnectorFactory implicitly) and supply a host and port (this is your producer):

       <config-property name="SessionDefaultType" type="java.lang.String">javax.jms.Topic</config-property>
       <config-property name="ConnectorClassName" type="java.lang.String">org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</config-property>
       <config-property name="ConnectionParameters" type="java.lang.String">host=${node.with.queue:localhost};port=${hornetq.remoting.netty.port:5445}</config-property>

  • On all nodes, pass the same -Dnode.with.queue=192.xx.xx.xx value
Now the MDBs on all nodes will both produce to, and consume from, node.with.queue. You're back to a Single Point of Failure but you're also back to supporting 'uneven' nodes (i.e. some faster than others). In addition may I suggest, in hornetq-configuration.xml:
  • Set <clustered>false</clustered>

  • Remove all <connectors />

  • Leave just the Netty <acceptor />

  • Remove all <broadcast-groups />

  • Remove all <discovery-groups />

  • Remove all <cluster-connections />

  • Remove your cluster username/password

Because now you're no longer using any clustering (at the HornetQ level), so you can remove that overhead.

Hope that helps somebody!

Tuesday, June 14, 2011

Stung by HornetQ: The Revenge

Following on from my previous post I've been spending some more time with HornetQ and have discovered a few more gotchas:

4. Stuck By Default

There's some advice here that says...

"Probably the most common messaging anti-pattern we see is users who create a new connection/session/producer for every message they send or every message they consume. This is a poor use of resources... Always re-use them"

...couple that with other advice that says...

"Please note the default value for address-full-policy [when the send buffer is full] is to PAGE [out to disk]"

And you might think the Right Thing To Do is set up a single connection/session/producer and send all your messages to the queue. But if you're doing this in a transaction (most Web applications are) you'd be wrong. Why? Because there's some conflicting advice that says...

"By default, HornetQ does not page messages - this must be explicitly configured to activate it"

And a setting in the JBoss/HornetQ integration (hornetq-configuration.xml) that says...

   <address-setting match="#">
      <max-size-bytes>10485760</max-size-bytes>
      <address-full-policy>BLOCK</address-full-policy>
   </address-setting>

So by default JBoss will get stuck if you try sending more than 10MB of messages in a single transacted producer (ie. before your MDBs can start consuming them). 10MB is not a lot. For me it was about 1,000 messages of about 5,000 characters each (a Unicode XML string).

Here are my suggestions:
  1. HornetQ should treat its JBoss JMS integration as more of a first class citizen. It should be a primary use case, rather than relegated to a chapter at the back of the user guide. Why? Because most people who just dip into the User Guide are going to be doing so from a JBoss JMS mindset. So if you say something like "Please note the default value is to PAGE" then you need to also say, immediately afterwards, "(except on JBoss, where the default value is to BLOCK)"

  2. BLOCKing is a poor default value. Either make it fail (so the user gets an error) or make it PAGE (so the user gets an error when their disk is full). At least then the developer knows where to look. But blocking just results in the queue being 'stuck' - with no clue to the developer who has barely heard of their underlying JMS implementation, let alone blocking versus paging and <address-full-policy>
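
If you do want HornetQ's documented PAGE behaviour under JBoss, the fix is to change the address settings in hornetq-configuration.xml. A sketch (the sizes shown are illustrative, not recommendations):

```xml
<address-settings>
   <address-setting match="#">
      <!-- Page out to disk once an address holds more than 10MB of messages -->
      <max-size-bytes>10485760</max-size-bytes>
      <page-size-bytes>2097152</page-size-bytes>
      <address-full-policy>PAGE</address-full-policy>
   </address-setting>
</address-settings>
```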

5. Stuck By Bugs

HornetQ is pretty new and there are a few bugs that can cause your JMS messages to get stuck. There's the fact that MDBs will rollback/retry indefinitely, that messages with different JMS priorities may get forgotten, that messages can be forwarded to dead cluster nodes.

When you have several different bugs interacting to produce an overall symptom (ie. a stuck queue) it can be very hard to separate them to understand their underlying causes. This causes a lot of pain!

6. Stuck By Birth

This one isn't really HornetQ's fault, but its behaviour seems different to JBoss Messaging's. If your MDB uses @EJB injection you really need to set up <depends ... > blocks in your jboss.xml...

   <!-- Stop MDB consuming too early -->

...because HornetQ starts up and starts consuming very early. This is particularly bad because these errors go quietly into the boot.log, not the regular system.log, so you don't realise that your MDBs crashed on startup.
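
As an example of such a <depends> block (all EJB and archive names here are hypothetical), the jboss.xml entry might look something like:

```xml
<jboss>
   <enterprise-beans>
      <message-driven>
         <ejb-name>MyMessageDrivenBean</ejb-name>
         <destination-jndi-name>queue/MyQueue</destination-jndi-name>
         <!-- Stop MDB consuming too early: wait for the injected session bean to deploy -->
         <depends>jboss.j2ee:ear=myapp.ear,jar=myapp.jar,name=MyStatelessBean,service=EJB3</depends>
      </message-driven>
   </enterprise-beans>
</jboss>
```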

Friday, June 3, 2011

Oracle and the Evil Alternative: Taking Java Private

In a recent episode of the Java Spotlight Podcast, Henrik Ståhl (Oracle's Senior Product Director of Java Platform Development) is interviewed about Oracle's strategy for Java, past and present.

At 9 minutes 20 seconds he says "[Oracle] looked at the evil alternative, taking Java private". He then goes on to say "we determined it wasn't a good idea". But I was interested that Henrik didn't say "we determined it wasn't possible".

I contacted Henrik and asked for clarification: how would 'taking Java private' work?

Relax, This Isn't Going To Happen

First, let's be clear. Henrik has been explicit about Oracle's strategy for Java (here, here, here and here): they have considered this evil alternative and rejected it. That's not what this blog is about. Let's not start a FUD storm.

Rather I'm interested in how any company could, even hypothetically, take a GPL'd project private.

So, Hypothetically, How Would It Work?

Henrik explained the basic idea was that, given Oracle owns or co-owns all OpenJDK Intellectual Property (IP), they
have the right to release it under a non-GPL license. The owner of the IP can retroactively apply any license he/she wants, provided all prior modifications to the IP were made by that same owner. This appears to sidestep the GPL copyleft clause.

Of course you can't stop the community using the existing GPL'd version. Nor can you stop them forking it (though under a different name). But being able to un-GPL the code is a significant leg up from, say, having to build a clean room implementation before you can privatize it. It means you can have a 100% compatible, closed source product from day one, built using modified GPL'd code. And I assume if you rev the versions quick enough, most corporates will follow you rather than wait for the community version to catch up. Effectively you can close source a GPL project.

So my question is: is this legitimate? Can a corporation take a piece of Open Source software, acquire all IP rights to it, then close source its future versions? Is this a risk of relying on Open Source software? Should steps be taken to mitigate this risk? For example, should an Open Source project ensure its IP is distributed between multiple neutral entities (and avoid having Contributor Agreements that require handing over IP rights)?

Wednesday, June 1, 2011

Stung by HornetQ

I've recently been upgrading our JBoss 5.1.0.GA and JBoss Messaging based application to JBoss 6.0.0.Final and HornetQ. The end result has been positive: HornetQ is very fast. But there were a few gotchas I thought I'd share:

1. No More Database, Lots More Backups

HornetQ eschews the JBoss Messaging approach of storing messages in a shared database. Instead it uses the file system directly (and, on Linux, accelerated AIO). But note this doesn't mean a shared file system. It means each cluster node has its own file system, with peer-to-peer communication to distribute messages between them.

The advantage of this approach is you no longer have a Single Point of Failure (ie. the database). But it's quite a shift in mindset if you're used to thinking of your database as a heavy, fault-tolerant kind-of-thing and your nodes as lightweight, throwaway kind-of-things. Because now, each node stores a portion of your messages. So if you lose a node you lose some messages.

The recommended solution to this is to have backup nodes. Whereas before you might have one big backup instance for your database, and no backups for your nodes, now you need a backup for each node. See this forum thread. Confusingly, when talking about backup nodes you may choose to use a shared file. But this is only for sharing between the 'backup' and 'live' instance of a single node.
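
As a sketch of what the backup half of such a pair looks like in HornetQ 2.x (paths hypothetical), the backup instance's hornetq-configuration.xml might contain:

```xml
<!-- hornetq-configuration.xml on the backup instance (sketch) -->
<backup>true</backup>
<!-- The journal is shared only between this backup and its live node, not cluster-wide -->
<shared-store>true</shared-store>
<journal-directory>/mnt/node1-shared/journal</journal-directory>
<bindings-directory>/mnt/node1-shared/bindings</bindings-directory>
```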

2. hornetq-jms.xml Is A Red Herring

Out of the box, JBoss 6.0.0.Final includes a server/all/deploy/hornetq/hornetq-jms.xml that looks like this:

<configuration xmlns="urn:hornetq">

   <connection-factory name="NettyConnectionFactory">
      <connectors>
         <connector-ref connector-name="netty"/>
      </connectors>
      <entries>
         <entry name="/ConnectionFactory"/>
         <entry name="/XAConnectionFactory"/>
      </entries>
   </connection-factory>

   <connection-factory name="NettyThroughputConnectionFactory">
      <connectors>
         <connector-ref connector-name="netty-throughput"/>
      </connectors>
      <entries>
         <entry name="/ThroughputConnectionFactory"/>
         <entry name="/XAThroughputConnectionFactory"/>
      </entries>
   </connection-factory>

   <connection-factory name="InVMConnectionFactory">
      <connectors>
         <connector-ref connector-name="in-vm"/>
      </connectors>
      <entries>
         <entry name="java:/ConnectionFactory"/>
         <entry name="java:/XAConnectionFactory"/>
      </entries>
   </connection-factory>

   <queue name="DLQ">
      <entry name="/queue/DLQ"/>
   </queue>

   <queue name="ExpiryQueue">
      <entry name="/queue/ExpiryQueue"/>
   </queue>

</configuration>

To my mind this is very confusing, because if you're using JMS producers and MDBs, all those <connection-factory> configurations aren't used! I would recommend deleting them, for 3 reasons:

  1. You see the /ConnectionFactory JNDI reference in there and think you should start using ic.lookup("/ConnectionFactory") in your code. If you do, you'll get non-transacted queue sessions, duplicated messages, lost messages, and all sorts of other weirdness. Stick with ic.lookup("java:JmsXA")

  2. If using java:JmsXA, those <connection-factory> configurations don't apply! You need to look instead in server/all/deploy/hornetq/jms-ds.xml. This is using InVMConnectorFactory implicitly, but you can explicitly configure it

  3. If using MDBs, again those <connection-factory> configurations don't apply! MDBs are configured in server/all/deploy/jms-ra.rar/META-INF/ra.xml. They use InVMConnectorFactory by default.
This last point is a real doozy. Because if you want to simulate a heavyweight store of messages with lightweight consumers who read from it (like JBoss Messaging) you're going to want to use <consumer-window-size>0</consumer-window-size>. You'll find lots of examples on the Web of putting this in hornetq-jms.xml. But consumer-window-size won't work there, because MDBs don't use hornetq-jms.xml. Instead you need this in ra.xml:

   <config-property>
      <description>The consumer window size</description>
      <config-property-name>ConsumerWindowSize</config-property-name>
      <config-property-type>java.lang.Integer</config-property-type>
      <config-property-value>0</config-property-value>
   </config-property>

3. hornetq-configuration.xml Is Noisy

I'm a 'break it to learn it' kind of guy. So to my mind the connectors and acceptors section of hornetq-configuration.xml is a bit noisy. Here's what you can reduce it to:

   <connectors>
      <!-- Node to node communication -->
      <connector name="netty">
         <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
         <param key="host" value="${jboss.bind.address:localhost}"/>
         <param key="port" value="${hornetq.remoting.netty.port:5445}"/>
      </connector>
   </connectors>

   <acceptors>
      <!-- Node to node communication -->
      <acceptor name="netty">
         <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
         <param key="host" value="${jboss.bind.address:localhost}"/>
         <param key="port" value="${hornetq.remoting.netty.port:5445}"/>
      </acceptor>

      <!-- jms-ds.xml produces to, and ra.xml consumes from, InVMConnectorFactory by default -->
      <acceptor name="in-vm">
         <factory-class>org.hornetq.core.remoting.impl.invm.InVMAcceptorFactory</factory-class>
         <param key="server-id" value="0"/>
      </acceptor>
   </acceptors>

Your Mileage May Vary

Of course, these tips aren't for everyone. But if you're like me they may save you a few hours banging your head against the desk!