
Papers 4

Optimization of Dynamic Languages Using Hierarchical Layering of Virtual Machines
Alexander Yermolovich, University of California Irvine
Christian Wimmer, University of California Irvine
Michael Franz, University of California Irvine

Creating an interpreter is a simple and fast way to implement a dynamic programming language. This ease, however, comes with major drawbacks: interpreters are significantly slower than compiled machine code because they have a high dispatch overhead and cannot perform optimizations. To overcome these limitations, interpreters are commonly combined with just-in-time compilers to improve the overall performance. However, this means that a new just-in-time compiler has to be implemented for each dynamic language.
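As a concrete illustration (ours, not the authors'), here is a minimal Python sketch of the switch-style dispatch loop the abstract alludes to. Every opcode pays for the fetch/decode branching on each iteration, which is the dispatch overhead in question.

    # Minimal sketch (not from the paper): a classic dispatch loop.
    def interpret(bytecode):
        stack = []
        pc = 0
        while pc < len(bytecode):
            op, arg = bytecode[pc]          # fetch and decode: paid per instruction
            if op == "PUSH":
                stack.append(arg)
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "JUMP_IF_POS":       # jump to arg if top of stack is positive
                if stack.pop() > 0:
                    pc = arg
                    continue
            pc += 1
        return stack

    # Computes 1 + 2; every opcode walks the if/elif chain anew.
    print(interpret([("PUSH", 1), ("PUSH", 2), ("ADD", None)]))   # [3]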

We explore an approach of taking an interpreter of a dynamic language and running it on top of an optimizing tracing virtual machine, i.e., we run a guest VM on top of a host VM. The host VM uses trace recording to observe the guest VM executing the application program. Each recorded trace represents a sequence of guest VM bytecodes corresponding to a given execution path through the application program. The host VM optimizes and compiles these traces to machine code, eliminating the need for a custom just-in-time compiler for the guest VM. The guest VM only needs to provide basic information about its interpreter loop to the host VM.
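The following Python sketch shows the shape of this layering; all names here (TracingHost, HOT_THRESHOLD, on_loop_header) are illustrative assumptions, not the paper's API. The guest interpreter reports loop headers to the host, the host starts recording guest bytecodes once a header becomes hot, and a real host VM would then optimize and compile the recorded trace to machine code instead of merely storing it.

    HOT_THRESHOLD = 3

    class TracingHost:
        def __init__(self):
            self.counts = {}                  # loop-header pc -> times reached
            self.traces = {}                  # loop-header pc -> recorded path

        def on_loop_header(self, pc):
            self.counts[pc] = self.counts.get(pc, 0) + 1
            if self.counts[pc] == HOT_THRESHOLD:
                self.traces[pc] = []          # header is hot: start recording
                return self.traces[pc]
            return None

    def run_guest(bytecode, host):
        # Guest interpreter for a countdown loop; pc 0 is the loop header.
        n, pc, trace = 5, 0, None
        while pc < len(bytecode):
            if pc == 0:                       # back-edge target
                t = host.on_loop_header(pc)
                if t is not None:
                    trace = t                 # recording is now active
            op = bytecode[pc]
            if trace is not None:
                trace.append(op)              # host observes each guest bytecode
            if op == "DEC":
                n -= 1
            elif op == "JUMP_IF_POS" and n > 0:
                pc = 0
                continue
            pc += 1

    host = TracingHost()
    run_guest(["DEC", "JUMP_IF_POS"], host)
    print(host.traces[0])   # one hot-loop path: ['DEC', 'JUMP_IF_POS', ...]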

The Ruby Intermediate Language
Michael Furr, University of Maryland, College Park
Jong-hoon (David) An, University of Maryland, College Park
Jeff Foster, University of Maryland
Michael Hicks, University of Maryland

Ruby is a popular, dynamic scripting language that aims to “feel natural to programmers” and give users the “freedom to choose” among many different ways of doing the same thing. While this arguably makes programming in Ruby easier, it makes it hard to build analysis and transformation tools that operate on Ruby source code. In this paper, we present the Ruby Intermediate Language (RIL), a Ruby front-end and intermediate representation that addresses these challenges. RIL includes an extensible GLR parser for Ruby and an automatic translation into an easy-to-analyze intermediate form. This translation eliminates redundant language constructs, unravels the often subtle ordering among side-effecting operations, and makes implicit interpreter operations explicit. We also describe several additional useful features of RIL, such as a dynamic instrumentation library for profiling source code and a dataflow analysis engine.
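To make the flavor of this translation concrete, here is a small Python sketch (our own construction; the node shapes and names are assumptions, not RIL's actual representation) that flattens a nested method-call expression into simple statements with temporaries, pinning down a receiver-first, left-to-right evaluation order:

    import itertools

    fresh = (f"$t{i}" for i in itertools.count())     # temporary-name supply

    def linearize(expr, stmts):
        # expr is ('call', recv, name, args), ('var', name), or ('lit', v).
        # Returns a trivial operand and emits one statement per operation.
        if expr[0] in ("var", "lit"):
            return expr
        _, recv, name, args = expr
        r = linearize(recv, stmts)                    # receiver first
        vs = [linearize(a, stmts) for a in args]      # then args, left to right
        t = next(fresh)
        stmts.append((t, "call", r, name, vs))        # $t = r.name(vs)
        return ("var", t)

    # x.f(g.h(), 1)  becomes  $t0 = g.h(); $t1 = x.f($t0, 1)
    stmts = []
    linearize(("call", ("var", "x"), "f",
               [("call", ("var", "g"), "h", []), ("lit", 1)]), stmts)
    for s in stmts:
        print(s)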

We demonstrate the usefulness of RIL by presenting a static and dynamic analysis to eliminate null pointer errors in Ruby programs. We hope that RIL’s features will enable others to more easily build analysis tools for Ruby, and that our design will inspire the creation of similar frameworks for other dynamic languages.
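As a rough illustration of the idea (entirely our own sketch, far simpler than RIL's dataflow engine), the following Python walks a straight-line program in order, tracks which variables may be nil, and flags method calls whose receiver might be nil:

    def check_nil(stmts):
        maybe_nil = set()                     # variables that may hold nil
        warnings = []
        for i, (op, var) in enumerate(stmts):
            if op == "assign_nil":            # x = nil
                maybe_nil.add(var)
            elif op == "assign_new":          # x = SomeClass.new
                maybe_nil.discard(var)
            elif op == "call" and var in maybe_nil:   # x.m()
                warnings.append((i, f"{var} may be nil at this call"))
        return warnings

    prog = [("assign_nil", "x"), ("call", "x"),       # flagged
            ("assign_new", "x"), ("call", "x")]       # safe
    print(check_nil(prog))   # [(1, 'x may be nil at this call')]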

Hosting an Object Heap on Manycore Hardware: An Exploration
David Ungar, IBM Research
Sam Adams, IBM Research

In order to construct a test-bed for investigating new programming paradigms for future "manycore" systems (i.e., those with at least a thousand cores), we are building a Smalltalk virtual machine that attempts to efficiently use a collection of 56 on-chip caches of 64 KB each to host a multi-megabyte object heap. In addition to the cost of inter-core communication, two hardware characteristics influenced our design: the absence of hardware-provided cache coherence, and the inability to move a single object from one core's cache to another's without changing its address. Our design relies on an object table and the exploitation of a user-managed caching regime for read-mostly objects. At almost every stage of our process, we obtained measurements in order to guide the evolution of our system. When we became overconfident and skipped a measurement step, we had to go back and analyze performance regressions.
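A sketch (in Python, with names that are ours, not the paper's) of why an object table helps on such hardware: references hold stable object IDs, and the table maps each ID to the object's current location, so migrating an object into another core's cache updates a single table entry instead of every reference to the object.

    class ObjectTable:
        def __init__(self):
            self.table = {}                   # oid -> (core, payload)
            self.next_oid = 0

        def allocate(self, core, payload):
            oid = self.next_oid
            self.next_oid += 1
            self.table[oid] = (core, payload)
            return oid                        # references store only the oid

        def migrate(self, oid, new_core):
            _, payload = self.table[oid]
            self.table[oid] = (new_core, payload)   # one write, no reference fixups

        def read(self, oid):
            return self.table[oid][1]

    heap = ObjectTable()
    a = heap.allocate(core=3, payload={"x": 1})
    heap.migrate(a, new_core=7)               # every holder of `a` stays valid
    print(heap.read(a))                       # {'x': 1}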

The architecture and performance characteristics of a manycore platform confound old intuitions by deviating from both traditional multicore systems and from distributed systems. The implementor confronts a wide variety of design choices, such as when to share address space, when to share memory as opposed to sending a message, and how to eke out the most performance from a memory system that is far more tightly integrated than a distributed system yet far less centralized than a several-core system. Our system is far from complete, let alone optimal, but our experiences have helped us develop new intuitions needed to rise to the manycore software challenge.
