
A region-based compilation technique for dynamic compilers

Published: 01 January 2006

Abstract

Method inlining and data flow analysis are two major optimization components for effective program transformations, but they often suffer from the existence of rarely or never executed code contained in the target method. One major problem lies in the assumption that the compilation unit is partitioned at method boundaries. This article describes the design and implementation of a region-based compilation technique in our dynamic optimization framework, in which the compiled regions are selected as code portions without rarely executed code. The key parts of this technique are the region selection, partial inlining, and region exit handling. For region selection, we employ both static heuristics and dynamic profiles to identify and eliminate rare sections of code. The region selection process and method inlining decisions are interwoven, so that method inlining exposes other targets for region selection, while the region selection in the inline target conserves the inlining budget, allowing more method inlining to be performed. The inlining process can be performed for parts of a method, not just for the entire body of the method. When the program attempts to exit from a region boundary, we trigger recompilation and then use on-stack replacement to continue the execution from the corresponding entry point in the recompiled code. We have implemented these techniques in our Java JIT compiler, and conducted a comprehensive evaluation. The experimental results show that our region-based compilation approach achieves approximately 4% performance improvement on average, while reducing the compilation overhead by 10% to 30%, in comparison to the traditional method-based compilation techniques.
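To make the abstract's central idea concrete, here is a minimal Java sketch (hypothetical names and logic, not the paper's implementation): a method whose error-handling path is rarely executed. Under region-based compilation, the hot parse-and-return path would form the compiled region, while the catch block would be excluded; taking the rare path is a "region exit," which in the real system triggers recompilation and on-stack replacement. The counter below merely stands in for that mechanism.

```java
// Hypothetical sketch: the rare error path is the kind of code that
// region selection would exclude from the compiled region.
public class RegionSketch {
    static int regionExits = 0;

    static int parsePositive(String s) {
        try {
            int v = Integer.parseInt(s);   // hot path: inside the compiled region
            if (v >= 0) return v;
        } catch (NumberFormatException e) {
            // rare path: excluded from the region; reaching it is a "region exit"
        }
        regionExits++;                     // stand-in for recompilation + on-stack replacement
        return -1;
    }

    public static void main(String[] args) {
        int sum = 0;
        String[] inputs = {"10", "20", "oops", "30"};
        for (String s : inputs) sum += Math.max(parsePositive(s), 0);
        System.out.println("sum=" + sum + " regionExits=" + regionExits);
    }
}
```

Note that the abstract reports region exits to be rare in practice, which is why paying a recompilation cost on exit is a good trade against never compiling the cold code at all.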



Review

      Elliot Jaffe

As humans, we use introspection to identify sloppy thinking, or areas where we might improve our cognitive performance. In this paper, the authors provide a very good review of how software can perform the same type of introspection, improving its performance by rewriting its own code. Just-in-time (JIT) compilers track how frequently code segments are executed, and compile the most frequently used ones into more efficient code. This paper provides both a broad view of the methods used to identify those code segments, and insight into the challenges and tradeoffs that arise when attempting to rewrite code in place.

JIT compilation requires central processing unit (CPU) cycles that might otherwise be used to run the program. It also increases the memory footprint of the program as it adds newly compiled functions. Early implementations of JIT compilers worked at the level of a function or subroutine: a frequently used function was identified, and then the whole body of that function was replaced by new, more efficient code. Thanks to structured programming concepts, functions are self-contained pieces of code with clean entry and exit points, so the JIT compiler has the freedom to modify the implementation of a function without worrying about external constraints. The problem is that the execution paths within functions are not uniform; that is, not all portions of a function are executed with equal frequency. Functions typically contain conditional paths, the most common of which is error handling. Error handlers are designed to be executed very infrequently, yet the JIT compiler wastes time and space when it compiles them along with the rest of the function. In this paper, the authors explore the many ways that a JIT compiler can identify the frequently used regions of a function, and then compile just those regions.

The challenges are many, including possible dead code and side effects that might be needed by other regions within the function. I found this paper to be a very good overview of the field, and a primer for anyone interested in language design and implementation. I would recommend it as required reading for any course in compiler design.

Online Computing Reviews Service
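The counter-based hot-spot detection the review alludes to can be sketched in a few lines of Java. This is a hedged illustration, not the paper's mechanism: the method names and the threshold value are invented, and a real JIT would count invocations (and loop back-edges) in the VM itself rather than in a map.

```java
// Hypothetical sketch of counter-based hot-spot detection: each method has
// an invocation counter, and crossing a threshold marks it as a candidate
// for (re)compilation. Cold code, such as an error handler, never gets there.
import java.util.HashMap;
import java.util.Map;

public class HotnessSketch {
    static final int THRESHOLD = 3;       // assumed value, for illustration only
    static final Map<String, Integer> counters = new HashMap<>();

    // Returns true exactly once, at the moment the method becomes "hot".
    static boolean recordInvocation(String method) {
        int n = counters.merge(method, 1, Integer::sum);
        return n == THRESHOLD;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            if (recordInvocation("loopBody")) {
                System.out.println("compiling loopBody");
            }
        }
        recordInvocation("errorHandler"); // cold: never reaches the threshold
        System.out.println("loopBody count=" + counters.get("loopBody"));
    }
}
```

The paper's contribution goes a step further than this sketch: rather than deciding hotness per method, it uses such profiles (plus static heuristics) to carve hot regions out of methods and compile only those.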
