
Using Automation to Improve UML Models, Designs & Test Cases

Session Chair: Torsten Layda, SIX Swiss Exchange
Refactoring UML models. Using OpenArchitectureWare to measure UML model quality and perform pattern matching on UML models with OCL queries.
Twan Van Enckevort, Xebia BV

In object-oriented software development, the Unified Modeling Language (UML) has become the de facto modeling standard. UML plays an important role in software factories, where a high-quality abstract UML model is the primary input used to generate a working system. While there are many tools that support assisted refactoring of source code, there are few that support assisted refactoring of UML models.

To determine the quality of UML models used in code-generation projects, we selected a set of quality metrics. While a large number of metrics exist for determining code quality, only a limited number apply to UML models; most model quality metrics have been derived from code quality metrics. We also implemented syntactic and semantic model check rules that detect undesirable model properties. The syntactic model checkers are derived directly from the UML specification; the semantic model checkers are derived from a range of anti-pattern descriptions.
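
To make the idea concrete, here is a minimal sketch of such a semantic check, written as plain Java over a hypothetical model element (the prototype itself formulates these rules in OCL over real UML models, and the names and thresholds below are our assumptions, not the authors' values):

    import java.util.List;

    // Hypothetical, simplified stand-in for a UML class element.
    record ModelClass(String name, List<String> attributes, List<String> operations) {}

    // Sketch of a semantic check rule: flag "god classes" that concentrate
    // too many operations or attributes in one element (assumed thresholds).
    class GodClassCheck {
        static final int MAX_OPERATIONS = 20;
        static final int MAX_ATTRIBUTES = 15;

        static boolean violates(ModelClass c) {
            return c.operations().size() > MAX_OPERATIONS
                || c.attributes().size() > MAX_ATTRIBUTES;
        }
    }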

To test these model improvement capabilities, we have delivered a prototype that detects undesirable model features. The prototype contains the selected model quality metrics and the syntactic and semantic model check rules. Both metrics and rules are formulated in the Object Constraint Language (OCL), which operates directly on UML models. The system is built from Open Source tools, which makes the prototype easy to extend. The effect of a suggested repair action on the model is measurable through the selected quality metrics and through subjective comparison. The prototype was able to improve model quality for four industry models, both by metrics and by subjective comparison.
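
How the effect of a repair action can be made measurable is sketched generically below, assuming a metric where lower values are better (RepairEvaluator and improvement are illustrative names, not the prototype's API):

    import java.util.function.ToDoubleFunction;
    import java.util.function.UnaryOperator;

    // Sketch of measuring a repair action's effect on a model (hypothetical
    // names; the prototype evaluates OCL-formulated metrics instead).
    class RepairEvaluator {
        // Returns by how much the metric dropped after the repair; for
        // metrics where lower is better, a positive result means the
        // repair improved the model.
        static <M> double improvement(M model,
                                      ToDoubleFunction<M> metric,
                                      UnaryOperator<M> repair) {
            double before = metric.applyAsDouble(model);
            double after = metric.applyAsDouble(repair.apply(model));
            return before - after;
        }
    }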

An Extensible Framework for Tracing Model Evolution in SOA Solution Design
Renuka Sindhgatta, IBM India Research Laboratory
Bikram Sengupta, IBM India Research Laboratory

Existing tools for model-driven development support automated change management across predefined models with precisely known dependencies. These tools cannot easily be applied to scenarios with a diverse set of models and relationships, where human judgment and impact analysis are critical to introducing and managing changes. Such scenarios arise in model-based development of service-oriented architectures (SOA), where a plethora of high-level models representing different aspects of the business (requirements, processes, data) must be translated into service models, and changes across these models must be carefully analyzed and propagated. To support this process of model evolution, we present an extensible framework that automatically identifies possible changes in any MOF-compliant model. Changes across different model types can easily be related through a user interface and via rules programmed at specified plug-in points. At runtime, when an instance of a model changes, the framework performs fine-grained analysis to identify impacted models and the elements therein. It also allows analysts to selectively apply or reject changes based on the specific context, and it summarizes the incremental impact on downstream elements as choices are made. We share our experience in using the framework during the design of a SOA-based system that underwent several changes in its business models, necessitating changes in the associated service design.
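
Since EMF is a common implementation of MOF-style models, a rough sketch of how change detection and pluggable impact rules might look on top of EMF's notification API is given below (ImpactRule and ChangeTracker are our illustrative names, not the authors' framework):

    import java.util.ArrayList;
    import java.util.List;
    import org.eclipse.emf.common.notify.Notification;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.util.EContentAdapter;

    // Hypothetical plug-in point: a rule that relates a changed element
    // to elements it may impact in other models.
    interface ImpactRule {
        List<EObject> findImpacted(EObject changed);
    }

    // Minimal change tracker for EMF-based models: an EContentAdapter is
    // notified of every change in a model's containment tree, and the
    // registered rules compute the set of potentially impacted elements.
    class ChangeTracker extends EContentAdapter {
        private final List<ImpactRule> rules = new ArrayList<>();

        void addRule(ImpactRule rule) {
            rules.add(rule);
        }

        @Override
        public void notifyChanged(Notification n) {
            super.notifyChanged(n); // keeps the adapter attached to new children
            if (n.isTouch() || !(n.getNotifier() instanceof EObject)) {
                return; // ignore notifications that do not change the model
            }
            EObject changed = (EObject) n.getNotifier();
            for (ImpactRule rule : rules) {
                for (EObject impacted : rule.findImpacted(changed)) {
                    // In the authors' framework, an analyst would now review
                    // the impacted element and apply or reject the change.
                    System.out.println(changed + " may impact " + impacted);
                }
            }
        }
    }

Attaching the tracker to a model root with model.eAdapters().add(new ChangeTracker()) is enough to observe changes anywhere in that model's containment tree.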

Reverse Generation and Refactoring of Fit Acceptance Tests for Legacy Code
Martin Kropp, University of Applied Sciences Northwestern Switzerland
Wolfgang Schwaiger, University of Applied Sciences Northwestern Switzerland

The Fit framework is a well-established tool for creating early, automated acceptance tests. Eclipse plug-ins such as FITPro support writing test data and creating test stubs quite well for new requirements and new code. In our project we faced the problem that a large legacy system had to undergo a major refactoring. Beforehand, acceptance tests had to be added to the system to ensure equivalent program behavior before and after the changes. Writing acceptance tests manually for existing code is laborious, cumbersome, and costly. However, reverse generation of tests from legacy code is not foreseen in the current Fit framework, and no other tools are available to do so. We therefore developed a tool that generates the complete Fit test code and test specification from existing code. The tool also refactors test data automatically when production code is refactored, and vice versa: when the Fit test specification changes, it updates the production code accordingly. This reduces the maintenance effort of Fit tests in general, and we hope it will help spread the use of Fit for acceptance and integration testing even further.
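
As a rough illustration of the reverse-generation idea, the sketch below uses Java reflection to emit a skeleton for a fit.ColumnFixture subclass from an existing class (our own simplification, with illustrative names; the actual tool also generates the HTML test specification and keeps tests and production code in sync, which this omits):

    import java.lang.reflect.Method;
    import java.lang.reflect.Modifier;

    // Sketch of reverse-generating a Fit fixture skeleton from legacy
    // code via reflection; names here are illustrative, not the tool's.
    public class FitFixtureGenerator {

        // Emits source for a fit.ColumnFixture subclass in which every
        // public, parameterless, value-returning method of the legacy
        // class becomes a check column delegating to a real instance.
        public static String generateFixture(Class<?> legacy) {
            StringBuilder src = new StringBuilder();
            src.append("public class ").append(legacy.getSimpleName())
               .append("Fixture extends fit.ColumnFixture {\n")
               .append("    private final ").append(legacy.getName())
               .append(" target = new ").append(legacy.getName()).append("();\n");
            for (Method m : legacy.getDeclaredMethods()) {
                if (!Modifier.isPublic(m.getModifiers())
                        || m.getParameterCount() != 0
                        || m.getReturnType() == void.class) {
                    continue; // the sketch only handles simple query methods
                }
                src.append("    public ").append(m.getReturnType().getSimpleName())
                   .append(" ").append(m.getName()).append("() {\n")
                   .append("        return target.").append(m.getName()).append("();\n")
                   .append("    }\n");
            }
            return src.append("}\n").toString();
        }
    }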
