OCTOBER 25 TO 29, 2009
Programs written in managed languages are compiled to a platform-independent intermediate representation, such as Java bytecode. The relatively high level of Java bytecode has engendered a widespread practice of changing the bytecode directly, without modifying the maintained version of the source code. This practice, called bytecode engineering or enhancement, has become indispensable for transparently introducing concerns such as persistence, distribution, and security. For example, transparent persistence architectures avoid entangling business and persistence logic in the source code by changing the bytecode directly to synchronize objects with stable storage. With functionality added directly at the bytecode level, the source code reflects only part of the program's semantics. Specifically, the programmer can neither ascertain the program's runtime behavior by browsing its source code, nor map the runtime behavior back to the original source code.
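As a hedged illustration of what transparent persistence enhancement typically does (the class, field, and method names below are hypothetical, and the "enhanced" class merely sketches, in ordinary Java, the kind of dirty-tracking logic an enhancer might weave into the setter's bytecode; it does not reproduce any particular enhancer's output):

    // What the programmer writes and maintains: pure business logic.
    public class Account {
        private long id;
        private double balance;

        public void deposit(double amount) {
            this.balance += amount;
        }
    }

    // A hypothetical decompiled view after bytecode enhancement: the enhancer
    // has injected dirty-tracking so the object can later be synchronized with
    // stable storage. None of this appears in the source the programmer edits.
    class AccountEnhancedView {
        private long id;
        private double balance;
        private transient boolean dirty;      // injected state

        public void deposit(double amount) {
            this.balance += amount;
            this.dirty = true;                // injected write barrier
        }

        boolean needsFlush() {                // injected hook used by the persistence runtime
            return dirty;
        }
    }

A source-level editor or debugger that sees only the first class has no way to explain behavior introduced by the second, which is exactly the gap the paper targets.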
This paper presents an approach that improves the utility of source-level programming tools by providing enhancement specifications written in a domain-specific language. By interpreting these specifications, a source-level programming tool can become aware of the bytecode enhancements and improve its precision and usability. We demonstrate the applicability of our approach by making a source code editor and a symbolic debugger enhancement-aware.
A static object diagram represents all objects and inter-object relations that may be created at runtime, and is recovered by static analysis of the program. An object diagram makes explicit the object structures that are only implicit in a class diagram. An object diagram may be missing, hence the need to extract one from code. Alternatively, an existing diagram may be inconsistent with the code, in which case there is value in analyzing its conformance to the implementation. One can generalize the global object diagram of a system into a runtime architecture, which abstracts objects into components, represents how those components interact, and can decompose a component into a nested sub-architecture.
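As a small hedged example of the class-diagram/object-diagram distinction (the class names are made up): a class diagram of the code below shows a single Editor-to-Listener association, whereas the object diagram recovered by static analysis distinguishes the individual instances and the list object that holds them.

    import java.util.ArrayList;
    import java.util.List;

    class Listener { }

    class Editor {
        private final List<Listener> listeners = new ArrayList<>();

        Editor() {
            listeners.add(new Listener()); // e.g., a logging listener object
            listeners.add(new Listener()); // e.g., an autosave listener object
        }
    }

    public class Demo {
        public static void main(String[] args) {
            // A static analysis would recover: one Editor object, one ArrayList
            // object, two Listener objects, and the relations among them.
            new Editor();
        }
    }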
Existing static analyses extract static object diagrams that are non-hierarchical and, as a result, do not scale or provide meaningful architectural abstraction. Indeed, architectural hierarchy is not readily observable in arbitrary code. Previous approaches either used language extensions that break compatibility with existing code to specify architectural hierarchy and instances, or used dynamic analyses to extract dynamic object diagrams that show only the objects and relations of a few program runs.
Typecheckable ownership domain annotations use existing language support for annotations to specify, in code, object encapsulation, logical containment, and architectural tiers. These annotations enable a points-to static analysis to extract a sound global object graph that provides architectural abstraction by ownership hierarchy and by types, where architecturally significant objects appear near the top of the hierarchy and data structures are demoted further down.
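A hedged sketch of the idea follows; the annotation names, parameters, and syntax below are illustrative approximations, not necessarily those of the actual annotation system. A class declares a private "owned" domain for its internal representation, and a field annotation places the objects it references in that domain, which lets the points-to analysis put the internal list and its contents strictly below the owning object in the extracted hierarchy.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative annotation declarations standing in for the real ones.
    @interface Domains { String[] value(); }
    @interface Domain  { String value(); }

    @Domains({ "owned" })
    class Courses {
        // The list object and its entries live in the private "owned" domain:
        // they are encapsulated and do not appear as top-level architectural objects.
        @Domain("owned") private final List<String> registered = new ArrayList<>();

        void register(String courseId) {
            registered.add(courseId);
        }

        int count() {
            return registered.size();
        }
    }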
Another analysis can abstract an object graph into a built runtime architecture. A third analysis can then compare the built architecture to a target architecture, analyze and measure their structural conformance, establish traceability between the two, and identify interesting differences.
Model-driven development (MDD) is widely used to develop modern business applications. MDD involves creating models at different levels of abstraction. Starting with models of domain concepts, these abstractions are successively refined, using automated transformers, into design-level models and, eventually, code-level artifacts. Although many tools exist that support transformer creation and verification, tools that help users understand and use transformers are rare. In this paper, we present an approach for assisting users in understanding model transformations and debugging their input models. We use automated program-analysis techniques to analyze the transformer code and compute constraints under which a transformation may fail or be incomplete. These code-level constraints are mapped to the input-model elements to generate model-level rules. The rules can be used to validate whether an input model violates transformer constraints, and to support general user queries about a transformation. We have implemented the analysis in a tool called Xylem. We present the results of empirical studies, which indicate that (1) our approach can be effective in inferring useful rules, and (2) the rules let users efficiently diagnose a failing transformation without examining the transformer source code.
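As a hedged, made-up example of the kind of constraint such an analysis could surface (the transformer and model classes below are hypothetical and do not reflect Xylem's actual inputs or rule language): the code-level conditions under which the transformer fails map back to model-level rules a user can check against the input model without reading the transformer source.

    // Hypothetical input-model element and transformer fragment.
    class Entity {
        String name;       // may be null in an ill-formed input model
        boolean hasKey;    // whether a key attribute was modeled
    }

    class EntityToTableTransformer {
        // Code-level constraints visible in the transformer:
        //   (1) e.name != null, otherwise toLowerCase() throws
        //   (2) e.hasKey == true, otherwise the transformation is rejected
        // Mapped to model-level rules: "every Entity must be named" and
        // "every Entity must declare a key attribute".
        String transform(Entity e) {
            String tableName = e.name.toLowerCase();  // fails if name is missing
            if (!e.hasKey) {
                throw new IllegalArgumentException(
                    "Entity " + tableName + " has no key attribute");
            }
            return "CREATE TABLE " + tableName + " (...)";
        }
    }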