research-article
Open Access

Correctness of speculative optimizations with dynamic deoptimization

Published: 27 December 2017

Abstract

High-performance dynamic language implementations make heavy use of speculative optimizations to achieve speeds close to statically compiled languages. These optimizations are typically performed by a just-in-time compiler that generates code under a set of assumptions about the state of the program and its environment. In certain cases, a program may execute code compiled under assumptions that are no longer valid. The implementation must then deoptimize the program on-the-fly; this entails finding semantically equivalent code that does not rely on invalid assumptions, translating program state to that expected by the target code, and transferring control. This paper looks at the interaction between optimization and deoptimization, and shows that reasoning about speculation is surprisingly easy when assumptions are made explicit in the program representation. This insight is demonstrated on a compiler intermediate representation, named sourir, modeled after the high-level representation for a dynamic language. Traditional compiler optimizations such as constant folding, unreachable code elimination, and function inlining are shown to be correct in the presence of assumptions. Furthermore, the paper establishes the correctness of compiler transformations specific to deoptimization: namely unrestricted deoptimization, predicate hoisting, and assume composition.
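The core idea, that speculation becomes easy to reason about when assumptions are explicit guards in the code itself, can be illustrated with a minimal sketch. This is not the paper's sourir IR; the names (`fast_add`, `baseline_add`, `Deopt`) are illustrative only, and Python exceptions stand in for the runtime's deoptimization machinery:

```python
class Deopt(Exception):
    """Signals that a speculative assumption no longer holds."""

def baseline_add(x, y):
    # Unoptimized version: correct for every input the language allows
    # (ints, strings, lists, ...). This is the deoptimization target.
    return x + y

def fast_add(x, y):
    # Speculatively optimized under the assumption "both operands are ints".
    # Because the assumption is an explicit guard, reasoning about when
    # deoptimization may fire is local to this one check.
    if not (isinstance(x, int) and isinstance(y, int)):
        raise Deopt  # assumption invalidated: abandon the optimized code
    return x + y     # fast path: plain integer addition

def add(x, y):
    # Dispatcher: try the optimized code, and on deoptimization transfer
    # control (and the live program state x, y) to the baseline version.
    try:
        return fast_add(x, y)
    except Deopt:
        return baseline_add(x, y)
```

Calling `add(1, 2)` stays on the fast path, while `add("a", "b")` trips the guard and falls back to the baseline, producing the same result the unoptimized program would. Real systems must additionally reconstruct interpreter state at the deoptimization point, which the paper's "unrestricted deoptimization" transformation addresses.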


Supplemental Material

speculativeoptimization.webm

