Monday, April 21, 2008

Useful Bounds of Automatic UI Generation

A recent post on the Metawidget forums asked why Metawidget doesn't 'do more'. Given we can automatically create forms from domain objects, and given there are solutions that can automatically persist domain objects (eg. JPA), why not 'fill in the gap' and automatically create the entire CRUD app?

And indeed, there are solutions that do this, the most notable being Naked Objects. The drawback with these solutions is that they necessarily create very generic UIs, which bear little resemblance to how they would have appeared and functioned had they been designed by hand, with due consideration to their problem domain. For example, with Naked Objects you get this...

...which is simply not how many clients want their apps to look.

Good UI design is both art and science. When you try to 'fill in the gap' you realize there's a lot of 'art' in that gap, and trying to automate that art results in less effective UIs.

So Metawidget is not trying to be that solution. I believe that solution is less useful for many real world apps. Instead, Metawidget tries to identify the boundary between where UI generation is useful and practical and where it becomes too generic and impractical, and to stay within those bounds. I call these the 'Useful Bounds of Generation'.

Staying within the Useful Bounds of Generation lets you apply Metawidget to a large category of real world applications that fully automated CRUD solutions simply aren't interested in. You can even retrofit an existing app and remove lots of your boilerplate code.


Dan said...

Just picking up on the Naked Objects thing ... you are right that an NO GUI will always be generic, though the screenshot you showed is from NO 1.0, whereas 2.0 improved somewhat and is now rolled out to >800 users in the Irish Government.

But, in practice, we also agree that there will be many cases where a naked domain object needs to be skinned. One solution for this is a sister project to Naked Objects. For myself, I'm working on an Eclipse RCP viewer that also allows skinning using the Eclipse RCP GUI.

But all of these allow you to focus on getting the domain model right first, and then adding more efficient UIs later if and when they are needed. We feel this is a better way than working from the UI towards the domain model.

Interested in your thoughts.

Richard said...


Thank you for contacting me. I would like this to be the start of a close dialogue, as you clearly have a lot of valuable experience in the field.

We both very much agree on 'working from the domain to the UI', rather than the other way around. And that UI generation can be automated to a large extent.

I apologise if I seemed dismissive of the success of the Naked Objects approach. I'm sure many, including the Irish Government, have benefitted from it.

It's just that I feel, by trying to automate so much, Naked Objects limits the number of scenarios it can be applied to. For example, one of my hopes for Metawidget is that developers can use it to 'purge' their existing applications of unnecessary (and error-prone) duplication - see this blog entry.

You could think of it as: I think all of us could benefit from a little bit of what Naked Objects has to offer, whereas only a few of us can benefit from everything Naked Objects tries to automate.

Dan said...

Hi Richard,

Been thinking some more about Metawidget. I guess the difference in philosophy is that while with NO we say you can get rid of the process/application layer (the explicit 'C' in MVC) and just rely on object behaviours, you're keeping that as an explicitly coded layer.

Actually, I should qualify the above. Using Alan Cooper's terminology, we distinguish between sovereign applications (IDEs, word processors, UNIX shells etc) and transient applications (check-in kiosks, book websites). NO is targeted at the former, with a knowledgeable, empowered user, not the latter.

The reason that Rob Matthews (of NOGL) wrote Scimpi is to extend the scope of NO into transient applications that need a more scripted interface. Scimpi works by searching for the most appropriate page with which to render a domain object, falling back to a generic page if none exists. This allows a domain object representing a process to be replaced with a custom scripted UI. Scimpi provides taglibs that give a default rendering of objects in these custom pages. I suspect these taglibs are similar to MetaWidget for the web (though obviously MetaWidget targets lots of other UI technologies too).

There is another NO-style framework called JMatter that has annotations to represent wizards and wizard pages. This is an alternative approach to extending the scope into transient apps.

Returning to MetaWidget though; you make the point that MetaWidget isn't intended to replace other frameworks, but rather to work with them. Presumably behind the scenes you are reading some meta-model. Question for you: is this hardcoded to Hibernate, or is it pluggable? If the latter, then you could have an implementation that reads the Naked Objects metamodel, exposed either as a set of Java classes or via REST (yet another sister project to NO). Then developers could choose to write their applications following our programming conventions, say, and build an explicit GUI with MetaWidget doing most of the boilerplate stuff.

Let's keep talking, good to throw ideas around.


Richard said...


Thank you for continuing your thoughts. It is exciting for me to be talking with someone from the team whose PhD theses and papers I am reading a lot of at the moment!

> Question for you: is this hardcoded to Hibernate, or is it pluggable?

Metawidget is extremely pluggable. In fact, this is a primary characteristic of Metawidget.

Metawidget can inspect multiple, heterogeneous sources, pulling together all the information it finds in different places to drive UI generation. So it can look at your persistence engine (Hibernate, JPA, etc.), your validation engine, your property style (JavaBeans, GroovyBeans, etc.) and bring it all together.

The idea, as you say, is to work with all the frameworks people are already using. The complete list of what we support is on the home page, and you mix and match to suit your preferred architecture (or add your own).
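To make the idea concrete, here is a minimal sketch of that inspector-composition pattern. The names (`Inspector`, `CompositeInspector`, and so on) are illustrative only, not Metawidget's actual API; the point is simply that each inspector contributes attributes from one source, and a composite merges them.

```java
import java.util.*;

// One pluggable source of UI metadata (hypothetical interface, not Metawidget's real API)
interface Inspector {
    Map<String, String> inspect(Class<?> type, String property);
}

// Reports the declared Java type of a property, via reflection
class TypeInspector implements Inspector {
    public Map<String, String> inspect(Class<?> type, String property) {
        Map<String, String> attrs = new HashMap<>();
        try {
            attrs.put("type", type.getDeclaredField(property).getType().getName());
        } catch (NoSuchFieldException e) {
            // Unknown property: contribute nothing
        }
        return attrs;
    }
}

// Stands in for a validation engine's 'required field' rules
class RequiredInspector implements Inspector {
    private final Set<String> required;
    RequiredInspector(String... names) { required = new HashSet<>(Arrays.asList(names)); }
    public Map<String, String> inspect(Class<?> type, String property) {
        Map<String, String> attrs = new HashMap<>();
        if (required.contains(property)) attrs.put("required", "true");
        return attrs;
    }
}

// Merges the attributes found by every inspector into one map
class CompositeInspector implements Inspector {
    private final Inspector[] inspectors;
    CompositeInspector(Inspector... inspectors) { this.inspectors = inspectors; }
    public Map<String, String> inspect(Class<?> type, String property) {
        Map<String, String> merged = new HashMap<>();
        for (Inspector i : inspectors) merged.putAll(i.inspect(type, property));
        return merged;
    }
}

public class InspectorDemo {
    static class Person { String name; int age; }

    public static void main(String[] args) {
        Inspector inspector = new CompositeInspector(
                new TypeInspector(), new RequiredInspector("name"));
        // The 'name' property is seen as both a String and a required field
        System.out.println(inspector.inspect(Person.class, "name"));
    }
}
```

Mixing in another source (say, an annotation reader or an XML reader) is then just one more `Inspector` passed to the composite.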

> then you could have an implementation that reads the Naked Objects metamodel

I think that's a great idea. It could perhaps help you in your goal to support different UI types? For example, Metawidget supports mobile phones.

Question for you: is the Naked Objects metamodel an explicit thing? For example, internally Metawidget drives all its inspections to an XML schema that looks like this. Or is it implicit? You inspect the classes and create some kind of in-memory model?

Another question for you: does Naked Objects work at runtime, using reflection, or is the UI pre-generated?

Thanks again for your time.

Dan said...

Thanks for the compliment, but I'm just another consultant-type like you, I imagine. Anyways...

So another thing that MW and NO share is the idea that the metamodel can be built up from more than one source. I think with MW you are in some ways ahead of us, though in the latest version we've completely rewritten the metamodel builder ... which we call the "reflector" for historical reasons. Which kinda answers your later question: all the work is done at runtime; there is no code generation.

Anyway, our new reflector is also pluggable, but at a much more fine-grained level than your inspectors. Our pluggable elements are called "facets", and are used either to build up the class structure (inheritance, properties, collections, actions) or to apply hiding/disabling/validation on top of the class members. There's nothing to prevent a facet implementation from reading from XML, or a DB, or whatever.

To compare your design and ours: where your MetawidgetAnnotationInspector is responsible for identifying 10 or so annotations, with us that would be 10 different facets.

The nice thing about the facet design is that new rules and programming conventions can be easily added. So we could easily support @UiMasked, for example.
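As a rough illustration of the facet idea described above (all names here are hypothetical, not Naked Objects' real API): each facet attaches one piece of metadata or behaviour to a class member, and a member accumulates facets contributed by pluggable readers.

```java
import java.util.*;

// A single piece of metadata/behaviour attached to a class member (hypothetical API)
interface Facet {
    String facetType(); // e.g. "hidden", "maxLength"
}

// A facet that hides the member from the generated UI
class HiddenFacet implements Facet {
    public String facetType() { return "hidden"; }
}

// A facet that validates a member's length
class MaxLengthFacet implements Facet {
    final int maxLength;
    MaxLengthFacet(int maxLength) { this.maxLength = maxLength; }
    public String facetType() { return "maxLength"; }
    boolean isValid(String candidate) { return candidate.length() <= maxLength; }
}

// A member of the metamodel accumulates facets contributed by pluggable
// facet readers (annotation readers, XML readers, DB readers, ...)
class MemberSpec {
    private final Map<String, Facet> facets = new HashMap<>();
    void addFacet(Facet f) { facets.put(f.facetType(), f); }
    boolean hasFacet(String type) { return facets.containsKey(type); }
    Facet getFacet(String type) { return facets.get(type); }
}

public class FacetDemo {
    public static void main(String[] args) {
        MemberSpec password = new MemberSpec();
        password.addFacet(new HiddenFacet());     // e.g. from an @Hidden-style rule
        password.addFacet(new MaxLengthFacet(8)); // e.g. from a validation rule

        System.out.println(password.hasFacet("hidden")); // true
        MaxLengthFacet max = (MaxLengthFacet) password.getFacet("maxLength");
        System.out.println(max.isValid("secret"));       // true
    }
}
```

Supporting a new convention then means contributing one more small facet, rather than changing a monolithic inspector.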

To answer your other question: the NO metamodel is currently in-memory only, there isn't a serialized representation of it in XML or whatever. However, as I previously mentioned, I am also building a RESTful interface to both the objects and the metamodel too, so in this sense the metamodel *is* exposed.

So... next steps. What I am really casting around for is a nice UI technology that can consume my RESTful interface. MW might well be a good fit. However, there are two points here: (a) how to adapt my metamodel to yours (which sounds like it is doable), and (b) how to bind your widgets to the RESTful GET/PUT/DELETE/POST verbs (which we haven't talked about at all).



Richard said...

> where your MetawidgetAnnotationInspector is responsible for identifying 10 or so annotations, with us that would be 10 different facets

Actually it sounds like a closer comparison would be to 10 of my Inspectors. But yes, it sounds like we are converging here.

> how to bind your widgets to the RESTful GET/PUT/DELETE/POST

Well, I imagine there are some great REST frameworks out there already that take care of widget binding. So you'd use one of those, and a Metawidget on top to generate the UI and hook it up. Possibly you could even use HtmlMetawidgetTag directly, if the REST framework didn't worry about native widgets.

The Metawidget download includes a non-trivial example app (Address Book) written for various platforms - including Struts and Spring versions. They use Metawidget for the UI generation, and wire it up using Struts/Spring's native binding abilities.

If you get a chance to take a look at it, I'd really appreciate your feedback.