Exploiting Global Optimizations for OpenMP Programs in the OpenUH Compiler (PPoPP '09)

ABSTRACT
The advent of new parallel architectures has increased the need for parallel optimizing compilers to assist developers in creating efficient code. OpenUH is a state-of-the-art optimizing compiler, but it performs only a limited set of optimizations for OpenMP programs because of its conservative assumptions about shared memory programming. These limitations may prevent some OpenMP applications from being optimized to the same extent as their sequential counterparts. This paper describes our design and implementation of a parallel data flow framework in OpenUH, consisting of a Parallel Control Flow Graph (PCFG) and a Parallel SSA (PSSA) representation, to model data flow for OpenMP programs. This framework enables the OpenUH compiler to perform all classical scalar optimizations for OpenMP programs, in addition to conducting OpenMP-specific optimizations.