research-article

Automated construction of JavaScript benchmarks

Published: 22 October 2011

Abstract
JavaScript is a highly dynamic language for web-based applications. Innovative implementation techniques for improving its speed and responsiveness have been developed in recent years. Industry benchmarks such as WebKit SunSpider are often cited as a measure of the efficacy of these techniques. However, recent studies have shown that these benchmarks fail to accurately represent the dynamic nature of modern JavaScript applications, and so may be poor predictors of real-world performance. Worse, they may guide the development of optimizations which are unhelpful for real applications. Our goal is to develop a tool and techniques to automate the creation of realistic and representative benchmarks from existing web applications. We propose a record-and-replay approach to capture JavaScript sessions which has sufficient fidelity to accurately recreate key characteristics of the original application, and at the same time is sufficiently flexible that a recording produced on one platform can be replayed on a different one. We describe JSBench, a flexible tool for workload capture and benchmark generation, and demonstrate its use in creating eight benchmarks based on popular sites. Using a variety of runtime metrics collected with instrumented versions of Firefox, Internet Explorer, and Safari, we show that workloads created by JSBench match the behavior of the original web applications.
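The abstract does not spell out the replay mechanism, but the core record-and-replay idea can be illustrated with a minimal, hypothetical sketch: proxy a nondeterministic API (here `Math.random`; a real tool would also cover `Date.now`, timers, and DOM events) so that a recording run logs its results and a replay run returns them in the same order, making the captured session deterministic across platforms. The `makeRecorder` helper is an illustrative name, not part of JSBench.

```javascript
// Sketch of record-and-replay for one source of nondeterminism.
// Not JSBench's implementation -- just the underlying idea.
function makeRecorder() {
  const log = [];
  const origRandom = Math.random;
  return {
    record() {
      // Recording mode: call through to the real source and log each result.
      Math.random = () => {
        const v = origRandom();
        log.push(v);
        return v;
      };
    },
    replay() {
      // Replay mode: return the logged values in order, deterministically.
      let i = 0;
      Math.random = () => log[i++];
    },
    restore() {
      Math.random = origRandom;
    },
    log,
  };
}

// Usage: record a short "session", then replay it identically.
const r = makeRecorder();
r.record();
const recorded = [Math.random(), Math.random()];
r.replay();
const replayed = [Math.random(), Math.random()];
r.restore();
console.log(recorded[0] === replayed[0] && recorded[1] === replayed[1]); // → true
```

Because the replay run consumes only logged values, it no longer depends on the platform that produced the recording, which is what lets a session captured in one browser be replayed in another.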



Published in

ACM SIGPLAN Notices, Volume 46, Issue 10 (OOPSLA '11), October 2011, 1063 pages
ISSN: 0362-1340, EISSN: 1558-1160
DOI: 10.1145/2076021

OOPSLA '11: Proceedings of the 2011 ACM international conference on Object oriented programming systems languages and applications, October 2011, 1104 pages
ISBN: 9781450309400
DOI: 10.1145/2048066

Copyright © 2011 ACM

Publisher: Association for Computing Machinery, New York, NY, United States

