Thursday, June 16, 2011

HornetQ: I Want My... I Want My... I Want My MTV (um, JBM)

(with apologies to Dire Straits)

Following on from my previous two posts, I've been wrapping up my migration from JBoss 5.1.0.GA (which used JBoss Messaging (JBM)) to JBoss AS 6.0.0.Final (which uses HornetQ). As I said before, the end result is well worth it: HornetQ is very fast. But I've hit a few gotchas.

8. Uneven Nodes

With JBM (and most messaging implementations) the 'natural' thing to do (i.e. if you're not thinking too hard) is to store your JMS queue in a relational database. Then all your nodes consume off that single queue. This approach has some disadvantages: your database becomes a Single Point of Failure, and relational databases aren't well suited to implementing high-performance queues.

HornetQ doesn't support relational databases at all. Instead all queues are stored directly on the local file system. And you mustn't share that file system. So now the 'natural' thing (i.e. again, if you're not thinking too hard) is for each node to have its own 'slice' of the overall queue.
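For example, the journal location is set in hornetq-configuration.xml. On a stock JBoss AS 6 install the defaults look something like this (paths assumed from a default install; check your own):

```xml
<!-- hornetq-configuration.xml: each node keeps its journal on its own local disk.
     These are the assumed JBoss AS 6 defaults. Do NOT point two nodes at the
     same directory (e.g. via NFS). -->
<bindings-directory>${jboss.server.data.dir}/hornetq/bindings</bindings-directory>
<journal-directory>${jboss.server.data.dir}/hornetq/journal</journal-directory>
<large-messages-directory>${jboss.server.data.dir}/hornetq/largemessages</large-messages-directory>
<paging-directory>${jboss.server.data.dir}/hornetq/paging</paging-directory>
```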

This change in the 'natural' order of things has a side effect. With JBM, your nodes would naturally consume off the overall queue as quickly as possible. With HornetQ, they only ever naturally consume off their slice of the queue. If, like us, you have grown your cluster over time and some of your nodes are on better spec hardware than others (more RAM, more CPU cores etc) this is a problem. Because your overall queue processing time will be limited to the time it takes the slowest node to finish its slice.

Really what you need is for HornetQ to do one of two things: at distribution time, give more messages to the fast nodes and fewer to the slow nodes; or, at consumption time, redistribute messages from the slow nodes whenever the queues on the fast nodes are empty. HornetQ doesn't support either of these features yet (RFE here).

In the meantime, HornetQ can support the 'old JBM model' of a single queue. This is covered (in a slightly outdated JBoss 5 way) under "32.5. Configuring the JBoss Application Server to connect to Remote HornetQ Server". Basically you need to:

  • On all nodes, modify /server/all/deploy/jms-ra.rar/META-INF/ra.xml to use NettyConnectorFactory (instead of InVMConnectorFactory) and supply a host and port (this is your MDB consumer):

             <config-property>
                <description>The transport type</description>
                <config-property-name>ConnectorClassName</config-property-name>
                <config-property-type>java.lang.String</config-property-type>
                <config-property-value>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</config-property-value>
             </config-property>
             <config-property>
                <description>The transport configuration. These values must be in the form of key=val;key=val;</description>
                <config-property-name>ConnectionParameters</config-property-name>
                <config-property-type>java.lang.String</config-property-type>
                <config-property-value>host=${node.with.queue:localhost};port=${hornetq.remoting.netty.port:5445}</config-property-value>
             </config-property>

  • On all nodes, modify /server/all/deploy/hornetq/jms-ds.xml to use NettyConnectorFactory (instead of the implicit InVMConnectorFactory) and supply a host and port (this is your producer):

       <config-property name="SessionDefaultType" type="java.lang.String">javax.jms.Topic</config-property>
       <config-property name="ConnectorClassName" type="java.lang.String">org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</config-property>
       <config-property name="ConnectionParameters" type="java.lang.String">host=${node.with.queue:localhost};port=${hornetq.remoting.netty.port:5445}</config-property>

  • On all nodes, pass the same -Dnode.with.queue=192.xx.xx.xx value
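For context, those config-property lines live inside the JmsXA tx-connection-factory in jms-ds.xml. On a stock JBoss AS 6 install the surrounding element looks roughly like this (a sketch only; some element values are assumed from a default install):

```xml
<tx-connection-factory>
   <jndi-name>JmsXA</jndi-name>
   <xa-transaction/>
   <rar-name>jms-ra.rar</rar-name>
   <connection-definition>org.hornetq.ra.HornetQRAConnectionFactory</connection-definition>
   <!-- Point the producer side at the single queue node over Netty -->
   <config-property name="SessionDefaultType" type="java.lang.String">javax.jms.Topic</config-property>
   <config-property name="ConnectorClassName" type="java.lang.String">org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</config-property>
   <config-property name="ConnectionParameters" type="java.lang.String">host=${node.with.queue:localhost};port=${hornetq.remoting.netty.port:5445}</config-property>
   <max-pool-size>20</max-pool-size>
</tx-connection-factory>
```

With -Dnode.with.queue passed on the command line, every node's JmsXA factory then resolves to the single queue node; without it, the ${node.with.queue:localhost} placeholder falls back to localhost.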

Now the MDBs on all nodes will both produce to, and consume from, node.with.queue. You're back to a Single Point of Failure, but you're also back to supporting 'uneven' nodes (i.e. some faster than others). In addition, may I suggest the following in hornetq-configuration.xml:
  • Set <clustered>false</clustered>

  • Remove all <connectors />

  • Leave just the Netty <acceptor />

  • Remove all <broadcast-groups />

  • Remove all <discovery-groups />

  • Remove all <cluster-connections />

  • Remove your cluster username/password
Since you're no longer using any clustering (at the HornetQ level), you can remove that overhead.
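Putting those suggestions together, a stripped-down non-clustered hornetq-configuration.xml might look something like this. A sketch only, assuming the stock acceptor factory class and default port from a standard install:

```xml
<configuration xmlns="urn:hornetq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">

   <!-- No clustering at the HornetQ level -->
   <clustered>false</clustered>

   <!-- <connectors>, <broadcast-groups>, <discovery-groups>,
        <cluster-connections> and the cluster username/password
        have all been removed -->

   <!-- Keep just the Netty acceptor so remote nodes can reach this server -->
   <acceptors>
      <acceptor name="netty">
         <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
         <param key="host" value="${jboss.bind.address:localhost}"/>
         <param key="port" value="${hornetq.remoting.netty.port:5445}"/>
      </acceptor>
   </acceptors>
</configuration>
```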

Hope that helps somebody!


Clebert Suconic said...

Actually you are wrong here... This doesn't have anything to do with where messages are stored, or with the database.

On JBM you can't share the database with any other node. The nodes will still communicate their messages between nodes on redistribution. Nothing new here.

There isn't really a difference between JBM and HornetQ in that sense.

Richard said...


I apologise if I've provided incorrect information. Obviously I can only talk from my own experience.

But on JBM what the documentation advised to do was:

1. Remove /server/all/deploy/messaging/hsqldb-persistence-service.xml
2. Copy /docs/examples/jms/mysql-persistence-service.xml

This resulted in all nodes sharing the same MySQL database, and that's how we clustered?

We also moved /deploy/messaging into /deploy-hasingleton/messaging. I'm not sure whether we did the wrong thing, but it always worked for us!

Clebert Suconic said...

Each node will have its own nodeID.

We don't, in any moment, transfer messages from one node to another node through the database.

The database is just a storage medium in this case. Same way as it is with HornetQ.

Clebert Suconic said...

BTW: The issue you had was because HornetQ expects even distribution across your nodes. I.e. the load distribution is balanced.

Clebert Suconic said...

BTW: The only time we transfer messages through the database on JBoss Messaging is on failover, since we do an update to merge messages. What we call merge-data.

Never during redistribution.