
Automatic compilation of MATLAB programs for synergistic execution on heterogeneous processors

Published: 04 June 2011

Abstract

MATLAB is an array language, initially popular for rapid prototyping, that is now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism, but they also contain control-flow-dominated scalar regions that significantly affect execution time. Today's computer systems offer tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). An approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can therefore significantly improve program performance.

In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs for synergistic execution on heterogeneous processors. Our solution is fully automated and requires no programmer input to identify data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies the data-parallel regions of a program and composes them into kernels; the problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics then map each identified kernel to either the CPU or the GPU so that kernel execution on the two devices proceeds synergistically while the amount of data transfer needed is minimized. To ensure the data movement required by dependencies across basic blocks, we propose a data-flow analysis and an edge-splitting strategy. Our compiler thus automatically handles kernel composition, mapping of kernels to the CPU and GPU, scheduling, and insertion of the required data transfers. We implemented the proposed compiler, and experimental evaluation on a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X over native MATLAB execution for data-parallel benchmarks.
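To make the kernel-mapping idea concrete, the following is a minimal sketch of the *kind* of greedy device-placement heuristic the abstract describes: each kernel, visited in dependence order, is placed on the device where its estimated execution cost plus the cost of transferring its predecessors' results is smallest. The function name, cost model, and toy dependence graph below are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical greedy CPU/GPU placement heuristic (illustrative only).
# A kernel is assigned to the device minimizing its execution cost plus
# the cost of moving data produced by predecessors placed on the other device.

def map_kernels(kernels, deps, cpu_time, gpu_time, transfer_cost):
    """kernels: kernel ids in topological (dependence) order.
    deps: kernel id -> list of predecessor kernel ids.
    cpu_time / gpu_time: kernel id -> estimated cost on that device.
    transfer_cost: kernel id -> cost of moving its result across devices."""
    placement = {}
    for k in kernels:
        # Pay a transfer penalty for every predecessor placed on the other device.
        cpu_cost = cpu_time[k] + sum(
            transfer_cost[p] for p in deps[k] if placement[p] == "GPU")
        gpu_cost = gpu_time[k] + sum(
            transfer_cost[p] for p in deps[k] if placement[p] == "CPU")
        placement[k] = "CPU" if cpu_cost <= gpu_cost else "GPU"
    return placement

# Toy example: k0 is scalar/control-flow heavy, k1 and k2 are data parallel.
kernels = ["k0", "k1", "k2"]
deps = {"k0": [], "k1": ["k0"], "k2": ["k1"]}
cpu_time = {"k0": 1.0, "k1": 50.0, "k2": 40.0}
gpu_time = {"k0": 10.0, "k1": 2.0, "k2": 2.0}
transfer_cost = {"k0": 0.5, "k1": 5.0, "k2": 5.0}
print(map_kernels(kernels, deps, cpu_time, gpu_time, transfer_cost))
# → {'k0': 'CPU', 'k1': 'GPU', 'k2': 'GPU'}
```

Note that a purely greedy pass like this ignores downstream transfers, which is why the paper frames the full problem as constrained graph clustering plus mapping heuristics rather than a single local rule.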



Published in

ACM SIGPLAN Notices, Volume 46, Issue 6 (PLDI '11)
June 2011, 652 pages
ISSN: 0362-1340
EISSN: 1558-1160
DOI: 10.1145/1993316

PLDI '11: Proceedings of the 32nd ACM SIGPLAN Conference on Programming Language Design and Implementation
June 2011, 668 pages
ISBN: 9781450306638
DOI: 10.1145/1993498
• General Chair: Mary Hall
• Program Chair: David Padua

        Copyright © 2011 ACM

        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Qualifiers

        • research-article
