Ownership passing: efficient distributed memory programming on multi-core systems

Abstract
The number of cores in multi- and many-core high-performance processors is steadily increasing. MPI, the de-facto standard for programming high-performance computing systems, offers a distributed-memory programming model. MPI's semantics force a copy from one process's send buffer to another process's receive buffer. This makes it difficult to match, on modern hardware, the performance of shared-memory programs, which are arguably harder to maintain and debug. We propose generalizing MPI's communication model to include ownership passing, which makes it possible to fully leverage the shared-memory hardware of multi- and many-core CPUs by streaming communicated data concurrently with the receiver's computations on it. The benefits and simplicity of message passing are retained by extending MPI with calls that send (pass) ownership of memory regions, instead of their contents, between processes. Ownership passing is achieved with a hybrid MPI implementation that runs MPI processes as threads and is mostly transparent to the user. We propose an API and a static analysis technique to transform legacy MPI codes automatically and transparently to the programmer, demonstrating that this scheme is easy to use in practice. Using the ownership passing technique, we observe communication speedups of up to 51% over a standard message-passing implementation on state-of-the-art multicore systems. Our analysis and interface lay the groundwork for future MPI-aware optimizing compilers and multicore-specific optimizations, which will be key to success on current and next-generation computing platforms.