Abstract
Comparing the performance of programming languages is difficult because they differ in many aspects, including preferred programming abstractions, available frameworks, and their runtime systems. Nonetheless, the question of relative performance comes up repeatedly in the research community, in industry, and among enthusiasts.
This paper presents 14 benchmarks and a novel methodology to assess compiler effectiveness across language implementations. Using a set of common language abstractions, the benchmarks are implemented in Java, JavaScript, Ruby, Crystal, Newspeak, and Smalltalk. We show, using language-agnostic metrics, that the benchmarks exhibit a wide range of characteristics. Using four different languages on top of the same compiler, we show that the benchmarks perform similarly and therefore allow a comparison of compiler effectiveness across languages. Based on anecdotal evidence, we argue that these benchmarks help language implementers identify performance bugs and optimization potential by comparing against other language implementations.
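The methodology described above relies on benchmarks restricted to a common core of language abstractions (objects, arrays, loops) and on verifying each result so that compilers cannot eliminate the measured work. The following is a minimal sketch of such a harness in JavaScript, one of the suite's languages; the names `Benchmark`, `benchmark`, `verifyResult`, and the `Sum` workload are illustrative assumptions, not the paper's actual API.

```javascript
// Hedged sketch of a cross-language benchmark harness. Assumes Node.js
// for process.hrtime.bigint(); all names below are illustrative.
class Benchmark {
  // Run the workload once; return a result for verification.
  benchmark() { throw new Error("subclass responsibility"); }
  // Check the result so the compiler cannot dead-code-eliminate the work.
  verifyResult(result) { throw new Error("subclass responsibility"); }
  // Execute `iterations` timed runs, verifying every result.
  run(iterations) {
    const times = [];
    for (let i = 0; i < iterations; i++) {
      const start = process.hrtime.bigint();
      const result = this.benchmark();
      const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
      if (!this.verifyResult(result)) throw new Error("invalid result");
      times.push(elapsedMs);
    }
    return times; // per-iteration times in milliseconds
  }
}

// Example workload using only core abstractions: arrays and loops.
class Sum extends Benchmark {
  benchmark() {
    const data = new Array(10000);
    for (let i = 0; i < data.length; i++) data[i] = i;
    let sum = 0;
    for (let i = 0; i < data.length; i++) sum += data[i];
    return sum;
  }
  verifyResult(result) { return result === 49995000; }
}
```

Because each language's version would use only these shared abstractions, differences in per-iteration times can be attributed to the compiler and runtime rather than to divergent idioms.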
Cross-language compiler benchmarking: are we fast yet?
DLS 2016: Proceedings of the 12th Symposium on Dynamic Languages