
Tracking down performance variation against source code evolution

Published: 21 October 2015

Abstract

Little is known about how software performance evolves across software revisions. This situation is severe because (i) most performance variations seem to be introduced accidentally and (ii) addressing a performance regression is challenging, especially once functional code is stacked on top of it. This paper reports an empirical study of the performance evolution of 19 applications, totaling over 19 MLOC. Running our 49 benchmarks took 52 days. By relating performance variation to source code revisions, we found that: (i) 1 out of every 3 application revisions introduces a performance variation, (ii) performance variations can be classified into 9 patterns, and (iii) the most prominent cause of performance regression involves loops and collections. We carefully describe the patterns we identified and detail how we addressed the numerous challenges we faced to complete our experiment.
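The loops-and-collections cause named in finding (iii) can be illustrated with a small sketch. This example is not taken from the paper (the study targets Pharo applications); it is a generic Python illustration of the pattern, with hypothetical function names `count_hits_list` and `count_hits_set`: a membership test against the wrong collection type inside a loop turns a linear pass into a quadratic one.

```python
def count_hits_list(items, queries):
    seen = list(items)   # membership test on a list: O(n) per lookup
    return sum(1 for q in queries if q in seen)

def count_hits_set(items, queries):
    seen = set(items)    # membership test on a set: O(1) on average
    return sum(1 for q in queries if q in seen)

items = range(10_000)
queries = list(range(20_000))
# Both versions return the same count; only the running time differs,
# and the gap grows with the size of the collection -- which is why such
# a regression can slip into a revision without changing any test result.
assert count_hits_list(items, queries) == count_hits_set(items, queries)
```

Because the two versions are functionally identical, only a benchmark run across revisions, as in the study, would surface the difference.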



• Published in

  ACM SIGPLAN Notices, Volume 51, Issue 2
  DLS '15
  February 2016
  176 pages
  ISSN: 0362-1340
  EISSN: 1558-1160
  DOI: 10.1145/2936313
  • Editor: Andy Gill
  • DLS 2015: Proceedings of the 11th Symposium on Dynamic Languages
    October 2015
    176 pages
    ISBN: 9781450336901
    DOI: 10.1145/2816707

            Copyright © 2015 ACM

            Publisher

            Association for Computing Machinery

            New York, NY, United States


            Qualifiers

            • research-article
