

The DaCapo Benchmarks: Java Benchmarking Development and Analysis

Research Paper

Wednesday, Oct 25, from 10:30 to 12:00

Benchmarks often drive computer science research and industry product development. As Java applications became more prevalent, SPEC (the current purveyor of the most popular benchmarks) introduced Java benchmarks. However, many in industry and academia continued to use the same evaluation criteria as they had for C and Fortran, languages that are not subject to the complex run-time tradeoffs that exist in Java due to dynamic compilation and garbage collection. SPEC did not change its evaluation criteria, compounding the problem. Since the community's progress on virtual machine, compiler, operating system, and architecture technologies is measured on these benchmarks, poor benchmark selection and evaluation limits innovation and impact across the entire field. In this paper, we suggest evaluation methodologies and introduce the DaCapo benchmarks, a new set of open source, client-side Java benchmarks. We first demonstrate that the complex interaction of (1) application, (2) memory management policy, (3) heap size, and (4) architecture requires more extensive evaluation than is needed for C and Fortran, for which (2) and (3) are typically not variables. The DaCapo benchmarks are of course neither complete nor definitive, but they improve over the SPEC Java benchmarks in a variety of ways, including more complex code, richer object behaviors, and more demanding memory system requirements. This paper thus takes a step towards improved methodologies for choosing and evaluating benchmarks in order to foster innovation in programming language design and implementation for Java and other managed languages.
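The run-time tradeoffs the abstract attributes to dynamic compilation can be illustrated with a minimal measurement sketch: the first iteration of a Java workload typically includes JIT compilation work that later, steady-state iterations do not, which is one reason the evaluation criteria used for statically compiled C and Fortran do not transfer directly. The class and method names below (`WarmupHarness`, `workload`) are hypothetical stand-ins, not part of the DaCapo suite or its harness.

```java
// Hedged sketch: why managed-language benchmarking distinguishes the first
// (cold, JIT-compiling) iteration from later steady-state iterations.
public class WarmupHarness {
    // Hypothetical workload standing in for one benchmark iteration.
    static long workload(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i * 31L;
        }
        return sum;
    }

    public static void main(String[] args) {
        final int iterations = 5;
        long[] times = new long[iterations];
        for (int k = 0; k < iterations; k++) {
            long start = System.nanoTime();
            workload(5_000_000);
            times[k] = System.nanoTime() - start;
        }
        // Report cold vs. warmed-up timings; a methodology that averaged
        // them together would conflate compilation cost with application cost.
        System.out.println("first iteration (ns): " + times[0]);
        System.out.println("last  iteration (ns): " + times[iterations - 1]);
    }
}
```

Heap size and collection policy, the other variables the paper highlights, would be explored by rerunning such a harness under different JVM heap settings rather than by changing the code.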

Stephen M Blackburn, Intel
Robin Garner, Australian National University
Chris Hoffmann, University of Massachusetts at Amherst
Asjad M Khan, University of Massachusetts at Amherst
Kathryn S. McKinley, University of Texas at Austin
Rotem Bentzur, University of New Mexico
Amer Diwan, University of Colorado
Daniel Feinberg, University of New Mexico
Daniel Frampton, Australian National University
Samuel Z. Guyer, Tufts University
Martin Hirzel, IBM TJ Watson Research Center
Antony Hosking, Purdue University
Maria Jump, University of Texas at Austin
Han Lee, Intel
J. Eliot B. Moss, University of Massachusetts
Aashish Phansalkar, University of Texas at Austin
Darko Stefanovic, University of New Mexico
Thomas VanDrunen, Purdue University
Daniel von Dincklage, University of Colorado
Benjamin Wiedermann, University of Texas at Austin

© 2005 OOPSLA