Saturday, December 1, 2012

Metawidget Nominated For PhD Thesis Of The Year

I'm thrilled to announce that my PhD thesis, Derivation of a General Purpose Architecture for Automatic User Interface Generation, has been nominated for the John Makepeace Bennett Award (formerly known as the Best PhD Thesis Award) for the most Distinguished Australasian Doctoral Dissertation. My thesis details the research behind Metawidget.

The award is presented annually to the best PhD thesis in Australasia. Each University may select only one candidate to be put forward. This year, the UTS Research Degrees Committee decided to nominate me! The winner will be announced in January 2013. Previous winners can be found here.

My thanks to my supervisor John Leaney for all his help completing the submission paperwork.

Monday, October 22, 2012

Life without Classes: Generating Dynamic UIs from Dynamic Sources

I was recently asked if Metawidget could...

"read configuration entirely from [an] XML file (without need to create [a] class)?"

...the answer is yes! Being able to generate UIs from a variety of sources, including dynamic ones such as rule engines or database entries, is a core capability of Metawidget. I've put together a simple example that uses an XmlInspector to configure the UI, and a custom MapProcessor to store the data. No Classes or JavaBeans are used at all.

You can download the example here.
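For readers who want a feel for the mechanics, here is a minimal, self-contained sketch of the idea - reading field metadata entirely from XML, with no JavaBean class anywhere. Note this is a plain DOM illustration of the approach, not the actual Metawidget XmlInspector API; the class and element names are hypothetical:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

/**
 * Toy illustration of XML-driven UI metadata (not the real Metawidget API):
 * field definitions are read entirely from XML, with no JavaBean class.
 */
public class XmlUiSketch {

    /** Parses each <field name="..." type="..."/> entry into an ordered map. */
    public static Map<String, String> inspect(String xml) {
        try {
            Document document = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

            Map<String, String> fields = new LinkedHashMap<>();
            NodeList nodes = document.getElementsByTagName("field");

            for (int i = 0; i < nodes.getLength(); i++) {
                Element field = (Element) nodes.item(i);
                fields.put(field.getAttribute("name"), field.getAttribute("type"));
            }
            return fields;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<entity name=\"person\">"
                + "<field name=\"firstname\" type=\"String\"/>"
                + "<field name=\"age\" type=\"int\"/>"
                + "</entity>";
        System.out.println(inspect(xml)); // e.g. {firstname=String, age=int}
    }
}
```

The real XmlInspector is of course considerably more capable; the point is simply that the entire UI description can live outside compiled code.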

Telefonica, one of the largest telecommunications companies in the world, used this same approach for the Spanish National Health System. You can read more about this large-scale deployment in our whitepaper.

Friday, October 19, 2012

Dynamic Form Generator: Metawidget v2.5

Version 2.5 of Metawidget, the dynamic form generator, is now available!

This release was focused on:
  • Improved JSF support: better AJAX; XHTML compliance
  • Improved support for RichFaces 4: CollapsiblePanel; UIAutocomplete
  • Move Metawidget to Maven Central
  • Bug fixes, documentation and unit tests
As always, the best place to start is the Reference Documentation.

Your continued feedback is invaluable to us. Please download it and let us know what you think.

Friday, October 12, 2012

Introduction to Software Mining

Kennard, R. & Leaney, J. 2012, 11th International Conference on Software Methodologies, Tools and Techniques


Software mining is related to both data mining and reverse engineering. It is focused on mining software artefacts such as code bases, program states and structural entities for useful information related to the characteristics of a system. This article provides an introduction to the field. It first reviews a representative selection of the ways software mining has been applied. It then divides software mining into three subcategories and explains each one in detail. Finally this article summarises some of the advantages and limitations of software mining, both now and in the future. These advantages and limitations have been informed by the authors' own research applying software mining to the field of User Interface generation.

1. Introduction

Software mining is related to both data mining and reverse engineering. It is focused on mining software artefacts such as code bases, program states and structural entities for useful information related to the characteristics of a system (Xie, Pei & Hassan 2007). The subject of software mining has a broad definition, with many different useful applications. This article provides an introduction to the field. It first reviews representative examples from the existing literature. It then divides the discipline and studies each subcategory in detail. Finally it examines some of the advantages and limitations of software mining, both now and in the future.

This article was informed both by the existing literature and by experiences from our own research. This research is concerned with the application of software mining to the field of User Interface (UI) generation (Kennard & Leaney 2010).

2. Related Work

To conceptualise the field of software mining it is instructive to review a selection of the ways it has been applied. This section presents a representative, though by no means exhaustive, selection of the ways software mining has been applied to different stages of the software development life cycle. Specifically, it considers software mining's application to development, debugging and maintenance.

First, with respect to development. Grcar, Grobelnik & Mladenić (2007) describe mining class names, field names and types, along with inheritance and interface information, in order to construct API documentation and domain ontologies. Their approach further mines source code comments by relying on one of the principles of 'literate programming' - keeping documentation close to the code it refers to (Knuth 1984). Ma, Amor & Tempero (2008) do something similar, mining the names of classes and fields to discern their semantics. Sahavechaphan & Claypool (2006) analyse and demarcate sections of code as being relevant snippets for programmers to use when 'developing by example', which they describe as a "largely unwritten, yet standard, practice".

Next, with respect to debugging. Kagdi, Collard & Maletic (2007) describe mining usages of groups of methods that are generally invoked together. A simple example might be a call to file.open followed sometime later by a call to file.close. By identifying such typical call usage patterns it is possible to expose atypical ones, which may be useful indicators of programming defects. As Khatoon, Mahmood & Li (2011) explain "developers often reuse code fragments by copying and pasting (clone code) with or without minor adaptation to reduce programming efforts and shorten developing time... However, clone code may cause potentially maintainability problem for example, when a cloned code fragment needs to be changed, for example change requirement or additional features, all fragments similar to it should be checked for the change". Tan et al. (2007) use natural language processing to compare the consistency of source code comments with the code itself, and therefore identify either misleading comments or bugs in the code. Xie & Notkin (2005) use software mining to inspect classes and generate likely unit tests, which practitioners can then review for inclusion in their own test suites.

Finally, with respect to maintenance. Kim et al. (2007) report on analysing software version control repositories to recognise clusters of files which are generally updated together. They are therefore able to predict which files should be updated, or at least thoroughly reviewed, following future changes. Kim et al. (2011) apply machine learning techniques to crash signatures to prioritise defects. Nagappan, Ball & Zeller (2006) compare different code complexity metrics along with history from defect tracking systems to predict areas of code that are likely to be defect-prone going forward. Breu, Zimmermann & Lindig (2006) mine classes and methods for 'cross cutting concerns' - areas of duplicated functionality in the codebase that emerge unplanned over time and should potentially be refactored into a common subsystem.

From this diverse range of research projects, we can distil a general understanding of what is considered 'software mining'. By way of clarification it is worth noting there is a strong emphasis in the literature on mining software repositories. Indeed one of software mining's premier annual conferences is titled Mining Software Repositories (MSR 2004-present). This is arguably a misleading title because in industry parlance the phrase 'software repository' typically refers to software systems such as CVS (CVS 2011) or SVN (SVN 2011). The primary feature of such systems is their versioning capability - the 'V' in their names - which tracks changes over time. While some of the examples of software mining mentioned do indeed mine such version histories (Kim et al. 2007), most use software repositories simply as a convenient place to locate source code, defect reports and other documents. There are, of course, other places to locate such artefacts in which case the repository has little bearing on the software mining itself.

Note also that although statistical methods are often employed, this is not always the case. For example, Grcar, Grobelnik & Mladenić (2007) employ the discovery dimension of software mining without any statistical dimension. Equally, in our own research, we have explored the application of software mining to the field of UI generation (Kennard & Steele 2008). Analysing software mining data statistically is undesirable for UI generation because "UI tools which use automatic techniques that are sometimes unpredictable have been poorly received by programmers" and "predictability seems to be very important to [UI] programmers and should not be sacrificed to make the tools 'smarter'" (Myers, Hudson & Pausch 2000).

Having discerned a general understanding of the field of software mining, we turn now to a more in-depth discussion. We shall divide the field into subcategories and examine each in detail.

3. Categories of Software Mining

Cerulo (2006) divides software mining broadly into three categories: static analysis, dynamic analysis and historic analysis. Each of these is discussed in the following sections.

3.1. Static Analysis

Static analysis "happens when software is analysed on its descriptive dimension. It is performed on software artefacts without actually executing them" (Cerulo 2006). The static artefacts most commonly considered are source code files, which are discussed in the following section. It is also important to examine files that contain externalised behaviour, which are considered in section 3.1.2.

3.1.1. Source Code

The term 'source code' is generally used to refer to the human-readable programming language code that is later compiled or interpreted into machine-readable code. The majority of an application's functionality is crystallised into its source code, making it a rich source for mining interesting software artefacts.

The techniques used to extract software mining information from source code are well established, being the same techniques traditionally used by compilers or IDEs - such as Abstract Syntax Trees (ASTs) and program dependency graphs (PDGs). Neamtiu, Foster & Hicks (2005) describe using ASTs in conjunction with software version histories to mine the evolution of an application's code over time. ASTs enable practitioners to model the semantics in a language - such as its global variables, types and functions. This allows more meaningful reporting on an application's evolution than, say, comparing counts of lines of code or files. Liu et al. (2006) discuss using PDGs to compare source code for plagiarism between codebases of different applications. PDGs model the program structure of an application, allowing plagiarism comparisons at a deeper level, one that cannot be deceived by function renaming or statement reordering.

As rich a source of information as source code is, however, not all of an application's behaviour can be determined from it alone. Some behaviour is externalised, as discussed in the next section.

3.1.2. Externalised Behaviour

As mentioned in the previous section, the majority of an application's functionality is crystallised into its source code. Increasingly, however, significant amounts of behaviour are being externalised (Rouvellou et al. 1999). There are several motivating factors for this.

Firstly, the externalised behaviour can be expressed in a form more natural to its content. For example system configuration settings may be expressed in a hierarchical XML (XML 2008) file with XML Schema validation. This file can be reviewed and modified by system administrators rather than requiring software practitioners. Similarly, business rules may be expressed in a BPM language that is closer to natural language and easier for non-technical, business users to read and verify (Rouvellou et al. 1999).

Secondly, the externalised behaviour can be updated at a different frequency to the application code. This is desirable because behaviour such as business rules are often more volatile than the rest of an application's code. Modifying the application itself requires specialised skills and carries with it the risk of introducing defects, whereas modifying configuration files or business rules is a more defined process with less margin for error and can therefore be performed by system administrators or business users (Rouvellou et al. 1999).
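As a concrete (if much simplified) illustration of this second point, consider a volatile business rule - a discount threshold - kept in a configuration file rather than in code, so it can be changed without recompiling the application. The class, property and method names below are hypothetical:

```java
import java.io.StringReader;
import java.util.Properties;

/**
 * Minimal sketch of externalised behaviour: a volatile business rule
 * (a discount threshold) lives in configuration, not in code, so it
 * can be updated at a different frequency to the application itself.
 */
public class ExternalisedRule {

    /** Loads the threshold from configuration text (normally a file on disk). */
    public static double loadThreshold(String configText) {
        try {
            Properties config = new Properties();
            config.load(new StringReader(configText));
            return Double.parseDouble(config.getProperty("discount.threshold", "100.0"));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /** The application code defers to the externalised rule. */
    public static boolean qualifiesForDiscount(double orderTotal, double threshold) {
        return orderTotal >= threshold;
    }

    public static void main(String[] args) {
        double threshold = loadThreshold("discount.threshold = 50.0");
        System.out.println(qualifiesForDiscount(75.0, threshold)); // true
    }
}
```

A system administrator can edit the configuration file, and the application's behaviour changes, yet no source code analysis would reveal the rule - which is why software mining must also consider such externalised artefacts.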

Configuration files, BPML files and other such externalised representations are not generally considered 'source code', but are still valuable repositories of information for software mining. Another valuable set of information can be discerned by dynamic analysis, discussed in the next section.

3.2. Dynamic Analysis

Unlike static analysis, which centres on mining source code and externalised behaviour files, dynamic analysis "happens when software is analysed on its executive dimension" (Cerulo 2006). The rationale for dynamic analysis is similar to that of mining files containing externalised behaviour: whilst source code captures many properties of an application, a significant amount of its behaviour can only be determined from other places.

For dynamic analysis, a notable example is user input. What the user chooses to input can only be determined at execution time, but may have a large impact on the application's behaviour. In the most complicated case the user may be allowed to input source code itself, such as a spreadsheet formula or a scripting language macro, which can add new functionality and screens to an application.

The most common approach to dynamic analysis is reflection, which is discussed in the following section. Some programming environments further support dynamic analysis of embedded metadata, which is considered in section 3.2.2.

3.2.1. Reflection

Reflection allows an executing program to dynamically analyse itself, by inspecting not just the values but the structure of its own data. Unlike other programming paradigms such as procedural and object-oriented programming, which specify pre-determined sequences of operations, reflection allows the sequences of operations to be determined at execution time based on the data being operated on (Maes 1987).

Reflection can be used to mine the characteristics of an executing program more accurately than static source code analysis. For example a method foo may be declared in source code to accept an object of type bar. At runtime, the actual type passed to foo may be subBar, a subtype of bar with additional properties. Reflection can correctly detect the subtype, whereas static source code analysis could never make this prediction. As Cerulo (2006) puts it "static analysis is affected by the undecidability problem". Furthermore, reflection can not only detect the subtype, but can also discern its properties - even if it had no prior knowledge of the subtype existing. This makes reflection a powerful tool to handle cases where user input can dynamically add scripts and screens to an application, as discussed in the previous section.
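The scenario above can be sketched in a few lines of Java. Bar, SubBar and foo are the hypothetical names from the text; the reflection calls are the standard java.lang.reflect API:

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the subtype scenario from the text: foo is declared to accept
 * a Bar, but at runtime receives a SubBar. Reflection discovers the actual
 * type and its additional properties, which static analysis of foo's
 * declaration could not.
 */
public class ReflectionSketch {

    public static class Bar { public String name; }
    public static class SubBar extends Bar { public int extra; }

    /** Mines the runtime type of the argument, not its declared type. */
    public static List<String> foo(Bar bar) {
        List<String> mined = new ArrayList<>();
        Class<?> actualType = bar.getClass(); // SubBar, not Bar
        mined.add(actualType.getSimpleName());
        for (Field field : actualType.getFields()) { // includes inherited public fields
            mined.add(field.getName() + ":" + field.getType().getSimpleName());
        }
        return mined;
    }

    public static void main(String[] args) {
        // Discovers SubBar and both its properties, with no prior knowledge of SubBar
        System.out.println(foo(new SubBar()));
    }
}
```

Note that foo needs no compile-time knowledge of SubBar at all; the same code would mine any future subtype equally well.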

Despite its power, reflection is limited in that it can only inspect characteristics that are pre-defined by the platform. For example a platform that supports concepts such as classes and methods may allow reflection of class names and method names, but may not allow, say, reflection of the cardinality of the relationship between two classes (for example, one-to-one or many-to-one). To determine such application-specific characteristics, the platform can support an extensible mechanism such as embedding arbitrary metadata.

3.2.2. Embedded Metadata

Beyond reflection, some software environments provide explicit support for dynamic analysis of embedded metadata: for example, the 'attributes' feature in the .NET Common Language Runtime (CLR) (Miller & Ragsdale 2003) and the 'annotations' feature in the Java Virtual Machine (JVM) (Gosling 2005). Most languages that run atop these environments - such as C# (Hejlsberg 2006) and VB.NET (VB.NET 2005) on the CLR, and Java (Gosling 2005) and Groovy (Groovy 2011) on the JVM - expose this capability, allowing practitioners to embed metadata into their source code. This metadata can be dynamically extracted from instantiated objects at runtime.

Embedded metadata allows practitioners to tag domain objects with arbitrary information, beyond the normal capabilities of the platform. This can later be inspected by application frameworks to allow specialised processing. For example the Java platform does not natively support specifying a maximum length for a String (Gosling 2005). This is a problem for Object Relational Mapping (ORM) frameworks which require such information when generating database schemas. Using embedded metadata, this information can be added to the source code alongside the Java field it refers to. The metadata is largely ignored by the JVM, but is significant to the ORM framework that is watching for it (DeMichiel & Keith 2006).
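A minimal sketch of this pattern, using a hypothetical @MaxLength annotation rather than a real ORM's: the JVM carries the metadata but attaches no meaning to it, while the mining code extracts it at runtime:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

/**
 * Sketch of embedded metadata using a hypothetical @MaxLength annotation:
 * the JVM itself ignores the value, but a framework can mine it at
 * runtime, e.g. to size a database column.
 */
public class MetadataSketch {

    @Retention(RetentionPolicy.RUNTIME) // must survive compilation to be mineable
    @Target(ElementType.FIELD)
    public @interface MaxLength {
        int value();
    }

    public static class Hotel {
        @MaxLength(50)
        public String name;
    }

    /** Mines the annotation from a field, returning -1 if absent. */
    public static int mineMaxLength(Class<?> type, String fieldName) {
        try {
            Field field = type.getField(fieldName);
            MaxLength maxLength = field.getAnnotation(MaxLength.class);
            return (maxLength == null) ? -1 : maxLength.value();
        } catch (NoSuchFieldException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // e.g. an ORM could use this to generate: name VARCHAR(50)
        System.out.println(mineMaxLength(Hotel.class, "name"));
    }
}
```

The annotation sits alongside the field it describes, in the spirit of literate programming, yet remains machine-readable for any framework watching for it.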

Whilst such metadata is valuable, its disadvantage is that it must be specified explicitly by the practitioner, adding complexity to the code. There are other forms of metadata that accrue more transparently, providing a valuable resource to tap into. These are discussed in the next section.

3.3. Historic Analysis

Historic analysis "happens when software is analysed on its evolutive dimension... on software process trails left by developers during their activities and stored in software [version] repositories" (Cerulo 2006). A great deal of insight into the characteristics of a piece of code can be gained by studying how it has changed over time: code that sees frequent changes may be considered less mature; code that sees changes by many different practitioners may be more prone to error; code that is often involved in defect reports may be a candidate for rework.

Historic analysis mines the 'human' element of a project more than static or dynamic analysis can. Version control is a primary resource but there are others, such as defect tracking databases and project mailing lists. Rigby & Hassan (2007) describe using psychometric text analysis across mailing lists to understand the practitioner personalities behind an application and their impact on the relative success of the project.
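A toy sketch of the simplest form of historic analysis - counting how often each file changes, given a flattened change log such as might be mined from a version control repository. The file names and threshold are illustrative:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/**
 * Toy sketch of historic analysis: given a flattened change log
 * (one file name per change, as might be mined from a version
 * control repository), count changes per file and flag 'hot' files.
 */
public class HistoricSketch {

    /** Tallies how many times each file appears in the change log. */
    public static Map<String, Integer> changeCounts(List<String> changedFiles) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String file : changedFiles) {
            counts.merge(file, 1, Integer::sum);
        }
        return counts;
    }

    /** Files changed at least 'threshold' times may warrant closer review. */
    public static List<String> hotFiles(List<String> changedFiles, int threshold) {
        return changeCounts(changedFiles).entrySet().stream()
                .filter(entry -> entry.getValue() >= threshold)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> log = Arrays.asList("Hotel.java", "Util.java", "Hotel.java", "Hotel.java");
        System.out.println(hotFiles(log, 3)); // [Hotel.java]
    }
}
```

Real historic analyses, such as those of Kim et al. (2007), correlate such counts with co-change clusters and defect reports, but the underlying raw material is the same process trail.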

To summarise the three forms of software mining, it can be seen they offer different perspectives on retrieving information related to the characteristics of a system (table 1). The promise of software mining is to be able to combine these perspectives. The potent combination of mining source code, embedded metadata, version histories and other artefacts brings significant advantages. These are discussed in the next section.

Table 1: Categories of Software Mining

                          Static    Dynamic    Historic
Source Code                 X
Externalised Behaviour      X
Embedded Metadata                      X
Process Trails                                    X

4. Advantages and Limitations

To conclude our overview of software mining, we consider some of its advantages and limitations. This section has been particularly informed by our own research applying software mining to the field of UI generation. Our result is a freely downloadable Open Source project, Metawidget. Briefly, the software mining component within Metawidget proceeds as in figure 1.

Figure 1: Application of software mining to UI generation

It can be seen that the Metawidget parent invokes a 'composite' inspector (named after the composite design pattern of Gamma et al. 1995), which in turn invokes multiple child inspectors. Each inspector performs its own variant of software mining, whether static analysis of XML files or runtime reflection of annotations, which the composite then collates into a detailed whole. This detailed whole is used to drive the UI generation.
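The collation step can be sketched as follows. This is an illustrative simplification, not the actual Metawidget API: each inspector returns a map of field names to attribute maps, and the composite merges them field by field:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Illustrative sketch (not the real Metawidget API) of a composite
 * inspector: each child inspector mines one subsystem and returns
 * attributes per field; the composite collates them into one view.
 */
public class CompositeSketch {

    /** Each inspector returns: field name -> attribute name -> value. */
    public interface Inspector {
        Map<String, Map<String, String>> inspect();
    }

    /** Merges the results of all child inspectors, field by field. */
    public static Map<String, Map<String, String>> collate(List<Inspector> inspectors) {
        Map<String, Map<String, String>> collated = new LinkedHashMap<>();
        for (Inspector inspector : inspectors) {
            for (Map.Entry<String, Map<String, String>> field : inspector.inspect().entrySet()) {
                collated.computeIfAbsent(field.getKey(), key -> new LinkedHashMap<>())
                        .putAll(field.getValue());
            }
        }
        return collated;
    }

    public static void main(String[] args) {
        // One child mines the property subsystem, another the database schema
        Inspector property = () -> Map.of("name", Map.of("type", "String"));
        Inspector database = () -> Map.of("name", Map.of("max-length", "50", "required", "true"));
        System.out.println(collate(List.of(property, database)).get("name"));
    }
}
```

The attribute names here are hypothetical, but the shape of the result - one normalised, detailed view per field, assembled from heterogeneous sources - is the essence of the collation dimension discussed in section 4.1.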

Consider a domain object named Hotel. Characteristics of this domain object (such as its fields; its constraints; what actions can be performed on it) are defined across various subsystems within an application. For example the database schema may define the maximum length of a field, whereas the property type subsystem may define whether a field is read-only. The inspectors in figure 1 each mine a different subsystem, as summarized in table 2.

Table 2: Sources of software mining metadata

Domain object field   Property                 Validation       Database               Rules Engine
Name                  label, type                               max length, required
Type                  label, type, lookup                       enum values
Stars                 label, type              min/max value
Rating                label, type, read-only
Notes                 label, type                               large field (LOB)
Save                  label, action
Delete                label, action

Once collated, this information can be used to drive detailed UI generation, as depicted in figure 2. It can be seen that each element of the UI maps back to some metadata derived from the software mining. For example the Name field has a star because the database schema identified it as a required field. Similarly the Stars field is a slider control because the validation subsystem defined its upper and lower bounds.

Figure 2: UI constructed from software mining metadata

The software mining component of Metawidget has proven very successful. It forms the core of an enterprise-class framework that has seen significant industry adoption. For example it has been deployed to thousands of clinics across the Spanish National Health System, and it has been incorporated into products by Red Hat, an industry leading middleware vendor.

Building Metawidget has given us valuable insights into the advantages and limitations of software mining. We discuss these in the next two sections.

4.1. Advantages of Software Mining

Our research into applying software mining to UI generation has shown it to have two primary advantages.

First is software mining's ability to eliminate respecification. Software practitioners frequently have to respecify information about a system in order to drive different aspects of its architecture. For example, our domain object Hotel has a property Name which is a required field. The practitioner will typically have to respecify that the field is 'required' to: the database schema; the validation subsystem; the object-to-XML mapping subsystem (e.g. for Web services); and the UI.

Such respecification is error-prone and a common source of application defects (Kennard, Edmonds & Leaney 2009). Software mining can eliminate these problems by examining the existing system architecture. However, examination of any single subsystem invariably leaves some gaps to fill, because no single source of examination is comprehensive. As Schofield et al. (2006) observe, "our understanding, as a community, has shown that multiple types of analyses may be relevant to understanding some aspect of [a] system... different types of evidence might be complementary, in which case their cross-referenced analysis and interpretation should increase the quality of the inferred knowledge, assuming that the computational resources for their extraction are available". Similarly, German, Cubranic & Storey (2005) stress "extracting information from most information sources is relatively straightforward. But many questions can only be answered by correlating information from multiple sources".

The second advantage, therefore, is software mining's ability to collate information from multiple, heterogeneous sources. This is a unique proposition of software mining. Indeed without its collation and analysis dimensions, software mining would be little more than a modern, umbrella term for long established techniques such as parsing and reflection. Collation is made possible because each of the heterogeneous techniques can be applied with the understanding that it is part of a larger software mining process. It can be written to normalise and homogenise the results of its analysis with results from complementary analyses. The promise of software mining is that, by completing such a diverse and thorough analysis, it can obtain sufficient information to avoid the practitioner having to respecify information, or resorting to generalised heuristics.

Despite these advantages, some significant limitations remain. These are discussed in the next section.

4.2. Limitations of Software Mining

A first limitation of software mining is that it can only mine artefacts that are actually part of the software. In a previous journal article concerning the application of software mining to UI generation (Kennard & Leaney 2010), we wrote "automation is difficult because UIs bring together many qualities of a system that are not formally specified in any one place, if at all". One of our reviewers objected: "The authors are stating that the design of the UI brings together functionalities of a system that are not formally specified. I would question this assertion as an increasing number of MNCs are hiring Human-Factors designers (HFD), with solid understanding of software development best practices, to work with potential users of the system to identify the UI needs. These HFD are part of the development team and work closely with the systems analysts and programmers to incorporate the design of the UI into the software process". Here, the reviewer is pointing out that where we said 'formally specified' what we really meant was 'specified in machine-readable form'. There may well be detailed written documentation on, say, the correct font size for a text box - but unless this is codified in a form that a machine can interpret it is inaccessible to software mining. Tan et al. (2007) have made some progress incorporating natural language processing into software mining, but this research is still in its infancy.

A second limitation is that, even if the software mining can interpret the data, it will be unable to collate it unless it can be mapped to some normalised key. Software mining can be quite flexible in this regard, with different approaches to mapping for different sources, but the mapping must be deterministic. If a database schema cannot be mapped to a corresponding domain object type, or a domain object type cannot be normalised with some fragment of an XML configuration file, then no meaningful collation between them can occur. In practice many of these mappings are already defined in an application and can themselves be mined. For example the persistence subsystem must have a well-defined mapping between object types and database tables, and the validation subsystem must unambiguously understand which types it applies to. The issue is less clear, however, with subsystems such as business rule engines (Rouvellou et al. 1999) whose input may be a collection of domain objects and whose output may be some rule execution. The inner workings and mappings of such subsystems may be opaque to the rest of the application and non-deterministic to mine.

A third limitation is that it may be difficult to determine when software mining has mined 'enough'. In our research on UI generation we explored using guided software mining to avoid mining metadata that will not actually be needed during UI generation. But it may be impossible for the software to know whether additional metadata was available that could have proved beneficial. Such metadata may be missed, resulting in a less functional UI, yet this may only be detectable using traditional techniques such as unit testing and Quality Assurance.

Despite these limitations, software mining has significant potential to resolve many long-standing problems. We turn to these in the final section.

5. Conclusion

This article has provided an introduction to the field of software mining. It has reviewed a representative selection of the ways software mining has been applied; it has divided software mining into three subcategories and explained each one in detail; it has summarised some of the advantages and limitations of software mining, both now and in the future. This article has been particularly informed by the authors' own research applying software mining to the field of UI generation. Here, software mining has proven to be successful.

We believe software mining has significant potential to resolve many long-standing problems. Not only can it begin to answer previously impenetrable questions, such as which areas of a codebase are more prone to error (Kim et al. 2007; Nagappan, Ball & Zeller 2006), software mining can also dramatically improve existing approaches. For the field of UI generation, our research has shown it can provide a way to extract sufficient metadata to produce non-generic UIs, whilst at the same time not requiring practitioners to restate that metadata in repetitive and error-prone ways.

This approach has proven very popular with practitioners. We could term it 'mining over respecification'. It would have strong parallels to the industry approach of 'convention over configuration' which has also proven very popular with practitioners (DeMichiel & Keith 2006). A philosophy of 'mining over respecification' would be directly applicable to frameworks which currently require error-prone respecification. For example, object-to-XML mapping frameworks such as Java API for XML Binding (JAXB 2011) could determine whether a field was a 'required' field by mining the database schema, rather than needing a framework-specific @XmlAttribute annotation. Of course we should stress this would be mining over respecification, not instead of respecification: the practitioner could still respecify the annotation in cases where the XML mapping differed from the database schema. However this would not be the majority case.

Such insights suggest software mining is a fertile area with a promising future. It is a new field and there is much to be explored, applied and learnt. We hope this article has provided a cogent introduction for research to come.


References

Breu, S., Zimmermann, T. & Lindig, C. 2006, 'Mining eclipse for cross-cutting concerns', Association for Computing Machinery, pp. 94-97.
Cerulo, L. 2006, 'On the Use of Process Trails to Understand Software Development', PhD thesis, Dottorato di Ricerca in Ingegneria dell'Informazione, Dipartimento di Ingegneria.
CVS 2011,
DeMichiel, L. & Keith, M. 2006. 'Java Persistence API', retrieved from
Gamma, E., Helm, R., Johnson, R. & Vlissides, J. 1995, 'Design patterns: elements of reusable object-oriented software', Addison-Wesley.
German, D.M., Cubranic, D. & Storey, M.A.D. 2005, 'A framework for describing and understanding mining tools in software development', Proceedings of the 2005 international workshop on Mining Software Repositories, pp. 1-5.
Gosling, J. 2005, 'The Java Language Specification', Addison-Wesley.
Grcar, M., Grobelnik, M. & Mladenić, D. 2007, 'Using text mining and link analysis for software'.
Groovy 2011,
Hejlsberg 2006,
JAXB 2011,
Kagdi, H., Collard, M.L. & Maletic, J.I. 2007, 'Comparing Approaches to Mining Source Code for Call-Usage Patterns', Proceedings of the 4th International Workshop on Mining Software Repositories.
Kennard, R., Edmonds, E. & Leaney, J. 2009, 'Separation Anxiety: stresses of developing a modern day Separable User Interface', 2nd International Conference on Human System Interaction.
Kennard, R. & Leaney, J. 2010, 'Towards a General Purpose Architecture for UI Generation', Journal of Systems and Software.
Kennard, R. & Steele, R. 2008, 'Application of Software Mining to Automatic User Interface Generation', 7th International Conference on Software Methodologies, Tools and Techniques
Khatoon, S., Mahmood, A. & Li, G. 2011, 'An evaluation of source code mining techniques', vol. 3, IEEE, pp. 1929-1933.
Kim, D., Wang, X., Kim, S., Zeller, A., Cheung, S. & Park, S. 2011, 'Which crashes should I fix first?: Predicting top crashes at an early stage to prioritize debugging efforts', IEEE Transactions on Software Engineering, vol. 37, no. 3, pp. 430-447.
Kim, S., Zimmermann, T., Whitehead Jr, E.J. & Zeller, A. 2007, 'Predicting faults from cached history', IEEE Computer Society, pp. 489-498.
Knuth, D.E. 1984, 'Literate programming', The Computer Journal, vol. 27, no. 2, pp. 97-111.
Liu, C., Chen, C., Han, J. & Yu, P.S. 2006, 'GPLAG: detection of software plagiarism by program dependence graph analysis', Association of Computing Machinery, pp. 872-881.
Ma, H., Amor, R. & Tempero, E. 2008, 'Indexing the Java API Using Source Code', 19th Australian Conference on Software Engineering, pp. 451-460.
Maes, P. 1987, 'Concepts and experiments in computational reflection', Conference on Object Oriented Programming Systems Languages and Applications, pp. 147-155.
Miller, J.S. & Ragsdale, S. 2003, 'The Common Language Infrastructure Annotated Standard', Addison-Wesley Professional.
Myers, B., Hudson, S.E. & Pausch, R. 2000, 'Past, present, and future of user interface software tools', ACM Transactions on Computer-Human Interaction (TOCHI), vol. 7, no. 1, pp. 3-28.
Nagappan, N., Ball, T. & Zeller, A. 2006, 'Mining metrics to predict component failures', Association of Computing Machinery, pp. 452-461.
Neamtiu, I., Foster, J.S. & Hicks, M. 2005, 'Understanding source code evolution using abstract syntax tree matching', Association of Computing Machinery, pp. 1-5.
Rigby, P.C. & Hassan, A.E. 2007, 'What can OSS mailing lists tell us? A preliminary psychometric text analysis of the Apache developer mailing list', IEEE Computer Society.
Rouvellou, I., Degenaro, L., Rasmus, K., Ehnebuske, D. & Mc Kee, B. 1999, 'Externalizing Business Rules from Enterprise Applications: An Experience Report', Practitioner Reports in the OOPSLA, vol. 99.
Sahavechaphan, N. & Claypool, K. 2006, 'XSnippet: mining for sample code', Association of Computing Machinery, pp. 413-430.
SVN 2011,
Tan, L., Yuan, D., Krishna, G. & Zhou, Y. 2007, 'iComment: Bugs or Bad Comments?', Operating Systems Review, vol. 41, no. 6, p. 145.
Xie, T. & Notkin, D. 2005, 'Automatically identifying special and common unit tests for object-oriented programs', pp. 277-287.
Xie, T., Pei, J. & Hassan, A.E. 2007, 'Mining Software Engineering Data', International Conference on Software Engineering, pp. 172-173.
XML 2008,

Friday, October 5, 2012

PhD - Graduation

I was proud to graduate from UTS today!

The Dean of the Faculty of Engineering and Information Technology, Professor Hung Nguyen, announced my PhD thesis Derivation of a General Purpose Architecture for Automatic User Interface Generation with the following words:

"Building User Interfaces for modern software applications consumes a significant proportion of a development team's time. Existing approaches to reducing this burden are either labour intensive and error-prone, or impractical for real world scenarios".

"This thesis set out to address these limitations in practice, using Action Research guided by interviews, case studies and close collaboration with practitioners. It also applied the emerging field of software mining".

"To verify its research, this thesis released an Open Source implementation and tracked its reception. It achieved mainstream industry acceptance on a number of levels, including being used across the Spanish National Health System and being adopted by Red Hat, a world leader in enterprise middleware".

Chancellor Professor Vicki Sara then presented me with my testamur. Thanks to my family for making it such a special occasion!

Saturday, September 29, 2012

How To Get (Almost) Everything You Ever Wanted In One (Not Very) Easy Step

I'm excited to have an article published in the October 2012 edition of Software Development Times.

What's it about? At some stage in their careers, many software developers realise they have a laundry list of career aspirations they'd like to achieve 'one day'. The problem is that each of these aspirations requires a substantial investment of time, and there are many reasonable objections to pursuing any one of them.

The article presents a way to lower that barrier to entry by combining all the items on the laundry list around a single project. It does this by framing an Open Source project in the context of a PhD thesis. It then provides detail, showing how to combine research methodologies such as Action Research and Grounded Theory with industry approaches such as Iterative Development to deliver a unified result. Finally, it provides concrete examples in the form of conference papers, journal articles and a finished thesis.

Here's a peek:

You can download it for free here.

Wednesday, September 12, 2012

Metawidget and JBoss Forge: A White Paper

JBoss Forge is set to make its JavaOne debut next month:

To coincide, I thought I'd put together a White Paper describing how Forge uses Metawidget behind the scenes.

Hopefully it makes interesting reading!

Sunday, August 12, 2012

Declarative Form Generator: Metawidget v2.4

Version 2.4 of Metawidget, the declarative form generator is now available!

This release was focused on:
As always, the best place to start is the Reference Documentation:

Your continued feedback is invaluable to us. Please download it and let us know what you think.

Friday, August 10, 2012

Play 500 Online: Visual Refresh

Our venerable Play 500 Online site is 10 years old this year. We took some time to do a visual refresh so that it looked, well, less like a 10-year-old Web site:

Huge thanks to Lee Judaya for his excellent graphic design skills!

Tuesday, July 24, 2012

Metawidget On Mojarra

I'm thrilled to announce the next release of Metawidget will be fully compatible with Oracle Mojarra, the JavaServer Faces Reference Implementation.

The Metawidget and Mojarra teams have been working hard to resolve long-standing issues (1313, 1383, 1402, 1414, 1717, 1826, 1972, 2231 and 2283) and ensure dynamic component manipulation is reliable and robust.

Here's a screenshot of Metawidget 2.4 running with Mojarra 2.1.7 and RichFaces 4:

Of course, these fixes will also benefit other component developers looking to use dynamic component manipulation in Mojarra. I blogged about the 'best practice' way to implement this here.

Huge thanks to everybody involved over the years, especially Ed Burns, Ryan Lubke, Roger Kitain and Manfred Riem!

Wednesday, July 4, 2012

Forge and Metawidget: Now With Added Prettiness

I've just refreshed the JBoss Forge Scaffolding Look & Feel (which uses Metawidget) based on work done by Cheyenne Weaver. We're now using Bootstrap to get lots of visual goodies with minimal additional code.

Thanks Cheyenne!

You can download a WAR for JBoss AS 7 here, or full source code here. You can also compare to the previous Look & Feel here.

Tuesday, July 3, 2012

Metawidget: Collections Support (Nested Metawidgets)

A recent post on the forum started:

"I've written a WidgetBuilder to handle collections... [by using] Metawidget to produce a [nested Metawidget] component for each object in the collection..."

We've covered something like this before, but this time things were complicated by the need to support BeansBindingProcessor with both save and rebind.

Here's what we came up with (needs Metawidget 2.3):

package com.myapp;

import static org.metawidget.inspector.InspectionResultConstants.*;

import java.awt.*;
import java.awt.event.*;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Map;

import javax.swing.*;

import org.metawidget.inspector.annotation.UiComesAfter;
import org.metawidget.swing.SwingMetawidget;
import org.metawidget.swing.widgetbuilder.SwingWidgetBuilder;
import org.metawidget.swing.widgetprocessor.binding.beansbinding.BeansBindingProcessor;
import org.metawidget.util.*;
import org.metawidget.widgetbuilder.composite.*;
import org.metawidget.widgetbuilder.iface.WidgetBuilder;

public class Application {

   public static void main( final String[] args ) {

      // Model

      final Person homerSimpson = new Person( "Homer Simpson", "Bart", "Lisa",
            "Maggie" );
      final Person nedFlanders = new Person( "Ned Flanders", "Rod", "Todd" );

      // UI

      final JFrame frame = new JFrame();
      frame.setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE );

      // Metawidget

      final SwingMetawidget metawidget = new SwingMetawidget();
      metawidget.setBorder( BorderFactory.createEmptyBorder( 5, 5, 5, 5 ) );
      CompositeWidgetBuilderConfig<JComponent, SwingMetawidget> config = new CompositeWidgetBuilderConfig<JComponent, SwingMetawidget>();
      config.setWidgetBuilders( new CollectionWidgetBuilder(),
            new SwingWidgetBuilder() );
      metawidget.setWidgetBuilder( new CompositeWidgetBuilder<JComponent, SwingMetawidget>(
            config ) );
      metawidget.addWidgetProcessor( new CollectionBeansBindingProcessor() );
      metawidget.setToInspect( homerSimpson );
      frame.add( metawidget, BorderLayout.CENTER );

      // Buttons

      JButton rebindButton = new JButton( "Rebind" );
      rebindButton.addActionListener( new ActionListener() {

         public void actionPerformed( ActionEvent e ) {

            try {
               BeansBindingProcessor beansBindingProcessor = metawidget
                     .getWidgetProcessor( BeansBindingProcessor.class );
               beansBindingProcessor.save( metawidget );
               JOptionPane.showMessageDialog( frame,
                     "Saved children as "
                           + ( (Person) metawidget.getToInspect() )
                                 .getChildren() );

               beansBindingProcessor.rebind( nedFlanders, metawidget );
            } catch ( Exception ex ) {
               JOptionPane.showMessageDialog( frame, ex.getMessage(),
                     "Error", JOptionPane.ERROR_MESSAGE );
            }
         }
      } );

      JPanel buttonPanel = new JPanel();
      buttonPanel.add( rebindButton );
      frame.add( buttonPanel, BorderLayout.SOUTH );

      frame.setSize( 400, 200 );
      frame.setVisible( true );
   }

   // Model

   public static class Person {

      private String         mName;
      private List<Person>   mChildren   = CollectionUtils.newArrayList();

      public Person( String name, String... children ) {

         mName = name;
         for ( String child : children ) {
            mChildren.add( new Person( child ) );
         }
      }

      public String getName() {

         return mName;
      }

      public void setName( String name ) {

         mName = name;
      }

      @UiComesAfter( "name" )
      public List<Person> getChildren() {

         return mChildren;
      }

      @Override
      public String toString() {

         return mName;
      }
   }

   // Widget Builder

   public static class CollectionWidgetBuilder
      implements WidgetBuilder<JComponent, SwingMetawidget> {

      public JComponent buildWidget( String elementName,
            Map<String, String> attributes, SwingMetawidget metawidget ) {

         // Not for us?

         Class<?> clazz = WidgetBuilderUtils.getActualClassOrType(
               attributes, null );
         if ( clazz == null ) {
            return null;
         }
         if ( !Collection.class.isAssignableFrom( clazz ) ) {
            return null;
         }

         // Create nested Metawidgets

         List<?> list = (List<?>) ClassUtils.getProperty(
               metawidget.getToInspect(), attributes.get( NAME ) );

         if ( list == null || list.isEmpty() ) {
            return null;
         }

         JPanel panel = new JPanel();
         panel.setLayout( new BoxLayout( panel, BoxLayout.Y_AXIS ) );

         for ( Object child : list ) {
            SwingMetawidget nestedMetawidget = new SwingMetawidget();
            Map<String, String> emptyAttributes = Collections.emptyMap();
            metawidget.initNestedMetawidget( nestedMetawidget,
                  emptyAttributes );
            nestedMetawidget.setToInspect( child );
            nestedMetawidget.setPath( child.getClass().getName() );
            panel.add( nestedMetawidget );
         }

         return panel;
      }
   }

   // Widget Processor

   public static class CollectionBeansBindingProcessor
      extends BeansBindingProcessor {

      @Override
      public JComponent processWidget( JComponent component,
            String elementName, Map<String, String> attributes,
            SwingMetawidget metawidget ) {

         // Support nested widgets from the CollectionWidgetBuilder

         if ( component instanceof JPanel ) {
            getNestedPanels( metawidget ).put( attributes.get( NAME ),
                  (JPanel) component );
            return component;
         }

         return super.processWidget( component, elementName, attributes,
               metawidget );
      }

      @Override
      public void save( SwingMetawidget metawidget ) {

         super.save( metawidget );

         // Nested save

         for ( JPanel nestedPanel : getNestedPanels( metawidget ).values() ) {
            for ( Component nestedComponent : nestedPanel.getComponents() ) {
               if ( nestedComponent instanceof SwingMetawidget ) {
                  SwingMetawidget nestedMetawidget = (SwingMetawidget) nestedComponent;
                  save( nestedMetawidget );
               }
            }
         }
      }

      @Override
      public void rebind( Object toRebind, SwingMetawidget metawidget ) {

         super.rebind( toRebind, metawidget );

         // Cannot expect to rebind nestedPanels, so rebuild them instead

         for ( Map.Entry<String, JPanel> entry : getNestedPanels( metawidget )
               .entrySet() ) {

            // Clear the panel

            JPanel panel = entry.getValue();
            panel.removeAll();

            // Use a dummy Metawidget to create just the child Metawidgets
            // of the Collection

            SwingMetawidget dummyMetawidget = new SwingMetawidget();
            Map<String, String> attributes = CollectionUtils.newHashMap();
            attributes.put( NAME, entry.getKey() );
            metawidget.initNestedMetawidget( dummyMetawidget, attributes );

            // Copy them into our real Metawidget

            for ( Component child : ( (JPanel) dummyMetawidget
                  .getComponent( 1 ) ).getComponents() ) {
               panel.add( child );
            }
         }
      }

      @SuppressWarnings( "unchecked" )
      private Map<String, JPanel> getNestedPanels( SwingMetawidget metawidget ) {

         Map<String, JPanel> nestedPanels = (Map<String, JPanel>) metawidget
               .getClientProperty( CollectionBeansBindingProcessor.class );

         if ( nestedPanels == null ) {
            nestedPanels = CollectionUtils.newHashMap();
            metawidget.putClientProperty(
                  CollectionBeansBindingProcessor.class, nestedPanels );
         }

         return nestedPanels;
      }
   }
}

Thursday, June 21, 2012

Simple Trick To Speed Up Any Java Web App

Recently I was inspired to speed up a client's Java Web application. It had grown steadily over the years from having hundreds of records to millions, and was becoming sluggish. Of course optimization is a complex subject, and should be undertaken in light of proper performance measurement. There's no silver bullet.

Having said that, here's an easy-to-implement, non-intrusive, and in many cases very effective way. As a bonus, it should work with any Java Web application framework (this client used JSF).

The Trick

The approach is based on the premise that while Web applications are inherently multi-user, often you don't care what other users are doing. This may be because:

  • the application security prevents users from seeing each other's records; or
  • updates made by other users don't have to be reflected immediately.

If your Web application falls into this category, you may be able to take advantage of HTTP's 304 status code. A 304 falls somewhere between the Expires header (whereby the client doesn't access the server at all, and only uses its local cache) and a full re-rendering of the page. A 304 lets the client check with the server, but if the server says 'nothing has changed' then the client defers to its local cache.

With a careful invalidation algorithm, you can return 304s quite often even for dynamic pages. Even a crude invalidation algorithm seems to work quite well.

The Code

The Filter below says:

"Return a 304 for every dynamic page you've already rendered, unless the user does a POST, in which case invalidate all the dynamic pages under that folder"

Here's how it's implemented:

package com.myapp;

import java.io.IOException;
import java.text.*;
import java.util.*;

import javax.servlet.*;
import javax.servlet.http.*;

/**
 * Returns '304 Not Modified' response as much as possible.
 * <p>
 * 304s are an intermediary between full client side caching (which is problematic because you
 * cannot trigger it to expire) and full server re-rendering (which is expensive).
 */

public class NotModifiedFilter
   implements Filter {

   // Private statics

   private static final SimpleDateFormat HTTP_DATE_FORMAT = new SimpleDateFormat( "EEE, dd MMM yyyy HH:mm:ss z", Locale.ENGLISH );

   private static final String LAST_MODIFIED_SESSION_ATTRIBUTE_NAME = NotModifiedFilter.class.getName() + ".LAST_MODIFIED_CACHE";

   static {

      // HTTP dates are always expressed in GMT

      HTTP_DATE_FORMAT.setTimeZone( TimeZone.getTimeZone( "GMT" ) );
   }

   // Public methods

   public void init( FilterConfig config ) { }

   public void doFilter( ServletRequest request, ServletResponse response, FilterChain chain )
      throws IOException, ServletException {

      // For every request...
      // Note: we only track and expire to the 'folder' level (eg. '/members' of
      // '/members/member.jsf'). If any page within a folder gets expired, all pages get expired.

      HttpServletRequest httpRequest = (HttpServletRequest) request;
      String folderUrl = substringBeforeLast( httpRequest.getRequestURI(), "/" );

      // ...track in the session whether we've already served it...

      @SuppressWarnings( "unchecked" )
      Map<String, Long> lastModifiedCache = (Map<String, Long>) httpRequest.getSession().getAttribute( LAST_MODIFIED_SESSION_ATTRIBUTE_NAME );

      if ( lastModifiedCache == null ) {
         lastModifiedCache = new HashMap<String, Long>();
         httpRequest.getSession().setAttribute( LAST_MODIFIED_SESSION_ATTRIBUTE_NAME, lastModifiedCache );
      }

      if ( "post".equalsIgnoreCase( httpRequest.getMethod() ) ) {

         // ...upon POSTback, reset the cache for this folder

         lastModifiedCache.remove( folderUrl );

         // There is no point checking "If-Modified-Since", because browsers will not send it for
         // a POST

      } else {

         // ...if we have served it, return an SC_NOT_MODIFIED

         HttpServletResponse httpResponse = (HttpServletResponse) response;
         Long lastModified = lastModifiedCache.get( folderUrl );

         // Note: we must check the timestamp, in case the browser is asking about an old
         // page it has cached from a previous session

         if ( !isModifiedSince( httpRequest, lastModified ) ) {
            httpResponse.setStatus( HttpServletResponse.SC_NOT_MODIFIED );
            return;
         }

         // Set the 'Last-Modified' header so that next time the client will pass us the
         // 'If-Modified-Since' header
         // Note: this will not work on Chrome with a local (invalid) SSL certificate, but it
         // works fine in production

         Date now = new Date();
         setLastModified( httpResponse, now );

         // Don't overwrite: lastModifiedCache should store the earliest generation
         // timestamp for the folder

         if ( !lastModifiedCache.containsKey( folderUrl ) ) {
            lastModifiedCache.put( folderUrl, now.getTime() );
         }
      }

      chain.doFilter( request, response );
   }

   public void destroy() { }

   // Private methods

   /**
    * Returns the portion of the overall string that comes before the last occurrence of the given
    * string. If the given string is not found in the overall string, returns the entire string.
    */

   private String substringBeforeLast( String text, String before ) {

      int indexOf = text.lastIndexOf( before );

      if ( indexOf == -1 ) {
         return text;
      }

      return text.substring( 0, indexOf );
   }

   /**
    * Returns 'true' if the lastModified date is *after* the 'If-Modified-Since' header in the HTTP
    * request.
    * <p>
    * Also returns true if lastModified is null, because we should err on the side of caution and
    * avoid cached versions if we're unsure.
    */

   private boolean isModifiedSince( HttpServletRequest httpRequest, Long lastModified ) {

      if ( lastModified == null ) {
         return true;
      }

      String ifModifiedSince = httpRequest.getHeader( "If-Modified-Since" );

      if ( ifModifiedSince == null ) {
         return true;
      }

      synchronized ( HTTP_DATE_FORMAT ) {
         try {

            // Deduct 1000ms because HTTP_DATE_FORMAT does not contain milliseconds (so they will
            // always parse as zero)

            if ( HTTP_DATE_FORMAT.parse( ifModifiedSince ).getTime() >= lastModified - 1000 ) {
               return false;
            }

         } catch ( ParseException e ) {
            throw new RuntimeException( e );
         }
      }

      return true;
   }

   private void setLastModified( HttpServletResponse httpResponse, Date lastModified ) {

      synchronized ( HTTP_DATE_FORMAT ) {
         httpResponse.setHeader( "Last-Modified", HTTP_DATE_FORMAT.format( lastModified ) );
      }
   }
}
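Incidentally, the 1000ms deduction inside isModifiedSince exists because the HTTP date format only has one-second resolution: a timestamp formatted into a Last-Modified header and echoed back via If-Modified-Since loses its milliseconds. You can see this in isolation with a short sketch (the HttpDateDemo class name and sample timestamp below are illustrative only):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class HttpDateDemo {

   public static void main( String[] args ) throws ParseException {

      // Same pattern as the filter: no milliseconds field
      SimpleDateFormat httpDateFormat = new SimpleDateFormat( "EEE, dd MMM yyyy HH:mm:ss z", Locale.ENGLISH );
      httpDateFormat.setTimeZone( TimeZone.getTimeZone( "GMT" ) );

      // A server-side timestamp with millisecond precision...
      long generated = 1340000000123L;

      // ...only survives the round trip through the header to second precision
      long roundTripped = httpDateFormat.parse( httpDateFormat.format( new Date( generated ) ) ).getTime();

      System.out.println( generated - roundTripped ); // prints "123"
   }
}
```

Without the deduction, a page generated at .123 seconds would always appear 'modified' compared to its own header, defeating the cache.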
You can install it under web.xml like this:

   <!-- NotModifiedFilter (comes first, because performance is the whole point!) -->

   <filter>
      <filter-name>NotModifiedFilter</filter-name>
      <filter-class>com.myapp.NotModifiedFilter</filter-class>
   </filter>

   <!-- Only map to dynamic pages. Static resources (CSS/JS/images etc) should use Expires header -->

   <filter-mapping>
      <filter-name>NotModifiedFilter</filter-name>
      <url-pattern>*.jsf</url-pattern>
   </filter-mapping>

   <!-- If your Web framework uses a ViewState (e.g. JSF), you need to make it client side -->

   <context-param>
      <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
      <param-value>client</param-value>
   </context-param>
There are many improvements you can make (such as excluding certain folders, ignoring POSTs from certain buttons, invalidating based on database triggers etc). But I've left it simple and non-intrusive for this example.

The Use Case

Let's see how it works. Say our Web application looks like the one below (generated by Seam Forge). There's a navigation bar on the left for different types of entity. Each entity has a 'Search' screen where you can enter criteria and list matching entities. Then you can click through to a 'View' screen to view the entity in detail, and a further 'Edit' screen to edit the entity:

A typical user interaction might be:

  1. User clicks 'Address' on navigation bar
  2. User enters '123 Smart Street' as the search criteria and clicks Search
  3. User sees two results and clicks the first one
  4. User views the detail of the result and realises it's the wrong one
  5. User clicks back to 'Address' screen and clicks on second result
  6. User views the detail of the second result
  7. User clicks on 'Customer' on navigation bar
  8. User clicks on first customer in the list and views the detail
  9. User clicks back on 'Customer'
  10. User clicks back on 'Address'

The lines in italics are ones that can be rendered using a 304. You can see they account for around 30% of the page displays, so roughly 1 in 3 user interactions can be rendered using the local cache. This not only makes the Web application feel more responsive for the user, it also reduces the burden on the server.

Saturday, June 16, 2012

JavaOne 2012: Kennard Consulting's Contributions

Kennard Consulting's work will be featured in three sessions at JavaOne 2012:

My thanks to all the presenters. The complete Conference Catalog can be found here.

Wednesday, June 6, 2012

SwingMetawidget: Limiting the Length of a JTextField

I was recently asked on the Metawidget forum whether SwingMetawidget could limit the maximum number of characters in a JTextField.

This isn't a capability JTextField offers by default, so SwingMetawidget doesn't offer it either. However it's easy to add using a custom WidgetProcessor and a Swing Document. Here's an example:

package com.myapp;

import static org.metawidget.inspector.InspectionResultConstants.*;

import java.util.Map;

import javax.swing.JComponent;
import javax.swing.JFrame;
import javax.swing.JTextField;
import javax.swing.text.AttributeSet;
import javax.swing.text.BadLocationException;
import javax.swing.text.PlainDocument;

import org.metawidget.inspector.annotation.UiAttribute;
import org.metawidget.swing.SwingMetawidget;
import org.metawidget.widgetprocessor.iface.WidgetProcessor;

public class Main {

   public static void main( String[] args ) {

      Person person = new Person();

      SwingMetawidget metawidget = new SwingMetawidget();
      metawidget.addWidgetProcessor( new MaxLengthProcessor() );
      metawidget.setToInspect( person );

      JFrame frame = new JFrame( "Metawidget Tutorial" );
      frame.setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE );
      frame.getContentPane().add( metawidget );
      frame.setSize( 400, 250 );
      frame.setVisible( true );
   }

   static class Person {

      private String   mName;

      @UiAttribute( name = MAXIMUM_LENGTH, value = "10" )
      public String getName() {

         return mName;
      }

      public void setName( String name ) {

         mName = name;
      }
   }

   static class MaxLengthProcessor
      implements WidgetProcessor<JComponent, SwingMetawidget> {

      public JComponent processWidget( JComponent widget, String elementName,
                                 Map<String, String> attributes, SwingMetawidget metawidget ) {

         String maximumLength = attributes.get( MAXIMUM_LENGTH );

         // Only applies to JTextFields with a MAXIMUM_LENGTH attribute

         if ( maximumLength != null && widget instanceof JTextField ) {

            final int maximumChars = Integer.parseInt( maximumLength );

            ( (JTextField) widget ).setDocument( new PlainDocument() {

               @Override
               public void insertString( int offset, String string, AttributeSet attributeSet )
                     throws BadLocationException {

                  // Silently ignore any insert that would exceed the limit

                  if ( string != null && ( getLength() + string.length() <= maximumChars ) ) {
                     super.insertString( offset, string, attributeSet );
                  }
               }
            } );
         }

         return widget;
      }
   }
}
Hope that helps somebody! Note you don't have to use @UiAttribute to set MAXIMUM_LENGTH. It could come from JPA's @Column( length ), or Bean Validator's @Size, or XML, or any number of other sources.
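As an aside, the length-limiting Document at the heart of this example can be exercised headlessly, without any Metawidget or GUI plumbing, since PlainDocument works fine off-screen. A minimal sketch (the MaxLengthDemo class name is illustrative only):

```java
import javax.swing.text.AttributeSet;
import javax.swing.text.BadLocationException;
import javax.swing.text.PlainDocument;

public class MaxLengthDemo {

   // A Document that silently ignores any insert that would exceed the limit
   static PlainDocument newLimitedDocument( final int maximumChars ) {

      return new PlainDocument() {

         @Override
         public void insertString( int offset, String string, AttributeSet attributeSet )
               throws BadLocationException {

            if ( string != null && ( getLength() + string.length() <= maximumChars ) ) {
               super.insertString( offset, string, attributeSet );
            }
         }
      };
   }

   public static void main( String[] args ) throws BadLocationException {

      PlainDocument document = newLimitedDocument( 10 );
      document.insertString( 0, "12345", null );      // accepted (5 chars)
      document.insertString( 5, "67890", null );      // accepted (10 chars total)
      document.insertString( 10, "overflow", null );  // rejected: would exceed 10

      System.out.println( document.getText( 0, document.getLength() ) ); // prints "1234567890"
   }
}
```

This makes the behaviour easy to unit test before wiring it into a WidgetProcessor.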

Tuesday, May 15, 2012

Metawidget Adds Vaadin Support

The latest release of Metawidget adds support for the popular Vaadin Web framework! Cool features include:
  • Full support for the complete Metawidget pipeline of pluggable Inspectors, InspectionResultProcessors, WidgetBuilders, WidgetProcessors and Layouts
  • Support for Vaadin components including Buttons, CheckBoxes, PopupDateFields, Selects, Sliders, TextAreas and more
  • Automated property binding including a simple Converter system
  • Support for Vaadin layouts including FormLayout, HorizontalLayout, VerticalLayout and TabSheets
Full documentation is included in the Reference Guide. Example usage would be:

VaadinMetawidget metawidget = new VaadinMetawidget();
metawidget.setToInspect( new Person() );
pagebody.addComponent( metawidget );

There's a full Vaadin Address Book example included in the distribution:

This release was made possible by the work of Loghman Barari. Thanks Loghman!

Monday, May 14, 2012

UI Generator: Metawidget v2.2

Version 2.2 of Metawidget, the UI Generator is now available!

This release was focused on:
Special thanks to Loghman Barari for his help with this release!

As always, the best place to start is the Reference Documentation:

Your continued feedback is invaluable to us. Please download it and let us know what you think.

Thursday, April 19, 2012

PhD - Bound

Today I picked up the final, bound copy of my PhD thesis (having done all the corrections requested by my examiners) and delivered it to my University. It'll now be shelved in the University Library. There's also a digital copy that can be downloaded online.
PhD Thesis
Next stop: conferment!

Friday, March 30, 2012

PhD - Completed!

This post bookends the post that started this entire blog four and a half years ago. Today I received the result for my PhD in Software Engineering! My thesis, Derivation of a General Purpose Architecture for Automatic User Interface Generation, was highly praised by the examiners:

"In summary, a very satisfying, inspiring, carefully laid out, meticulously planned and executed thesis. I unhesitatingly recommend that the candidate be admitted to the degree of PhD"

I would like to acknowledge and thank a number of people for their support, advice and encouragement.

My supervisor, John Leaney, was instrumental in opening me up to the world of academic rigour. I would like to thank him for challenging me to think about epistemologies and research methodologies, interview techniques, reflection, VVT and GQM, and a host of other disciplines. His support and enthusiasm for my work never faltered. Most of all I would like to thank him for being able to readily understand my goal, and bridge the divide between the commercial world I knew and the research community I aspired to join.

I would like to thank those people who posted forum messages, blogged, tweeted, wrote magazine articles, and published papers regarding my work. I would like to thank my interviewees and adoption study participants for their time and feedback. Given the Action Research methodology of my thesis, their observations were critical to my progress. I would like to thank Dan Haywood of the Naked Objects team, who was always friendly and supportive, and very professional considering the sometimes conflicting nature of our projects! I would also like to thank Dan Allen, for his help shepherding my research within Red Hat.

Finally I would like to thank and apologise to my family who lived with my long absences throughout – both physical absences during the weeks, and mental absences during the weekends when I couldn't drag my head out of the work.