Performance problems you can fix: a dynamic analysis of memoization opportunities

Published: 23 October 2015

Abstract

Performance bugs are a prevalent problem and recent research proposes various techniques to identify such bugs. This paper addresses a kind of performance problem that often is easy to address but difficult to identify: redundant computations that may be avoided by reusing already computed results for particular inputs, a technique called memoization. To help developers find and use memoization opportunities, we present MemoizeIt, a dynamic analysis that identifies methods that repeatedly perform the same computation. The key idea is to compare inputs and outputs of method calls in a scalable yet precise way. To avoid the overhead of comparing objects at all method invocations in detail, MemoizeIt first compares objects without following any references and iteratively increases the depth of exploration while shrinking the set of considered methods. After each iteration, the approach ignores methods that cannot benefit from memoization, allowing it to analyze calls to the remaining methods in more detail. For every memoization opportunity that MemoizeIt detects, it provides hints on how to implement memoization, making it easy for the developer to fix the performance issue. Applying MemoizeIt to eleven real-world Java programs reveals nine profitable memoization opportunities, most of which are missed by traditional CPU time profilers, conservative compiler optimizations, and other existing approaches for finding performance bugs. Adding memoization as proposed by MemoizeIt leads to statistically significant speedups by factors between 1.04x and 12.93x.
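To make the technique concrete, here is a minimal, hypothetical sketch of the kind of fix MemoizeIt suggests: a side-effect-free method that repeatedly recomputes the same result for the same input, rewritten to cache results in a map keyed on that input. The class and method names are illustrative, not taken from the paper.

```java
import java.util.HashMap;
import java.util.Map;

public class MemoDemo {
    private final Map<Integer, Long> cache = new HashMap<>();
    private int rawCalls = 0; // counts how often the expensive body actually runs

    // A stand-in for an expensive, side-effect-free computation:
    // the memoization cache ensures the loop body runs once per distinct input.
    long compute(int n) {
        return cache.computeIfAbsent(n, k -> {
            rawCalls++;
            long sum = 0;
            for (int i = 0; i <= k; i++) sum += (long) i * i;
            return sum;
        });
    }

    public static void main(String[] args) {
        MemoDemo d = new MemoDemo();
        long a = d.compute(1000);
        long b = d.compute(1000); // same input: served from the cache
        System.out.println(a == b);          // true
        System.out.println(d.rawCalls == 1); // true: the body ran only once
    }
}
```

The cache trades memory for time, which is why the paper's analysis only reports opportunities where the same inputs recur often enough for the trade to be profitable.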




• Published in: ACM SIGPLAN Notices, Volume 50, Issue 10 (OOPSLA '15), October 2015, 953 pages. ISSN: 0362-1340. EISSN: 1558-1160. DOI: 10.1145/2858965. Editor: Andy Gill.
• Also in: OOPSLA 2015: Proceedings of the 2015 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications, October 2015, 953 pages. ISBN: 9781450336895. DOI: 10.1145/2814270.

              Copyright © 2015 ACM

Publisher: Association for Computing Machinery, New York, NY, United States


Qualifiers: research-article
