ABSTRACT
Message Passing Interface (MPI) is a widely used standard for managing coarse-grained concurrency on distributed computers. Debugging parallel MPI applications, however, has always been particularly challenging due to their high degree of concurrent execution and non-deterministic behavior. Deterministic replay is a potentially powerful technique for addressing these challenges, and existing MPI replay tools adopt either a data-replay or an order-replay approach. Unfortunately, each approach has significant drawbacks. Data-replay generates substantial logs by recording the contents of every communication message. Order-replay generates small logs, but requires all processes to be replayed together. We believe these drawbacks are the primary reasons that deterministic replay has not been widely adopted as the critical enabler of cyclic debugging of MPI applications.
This paper describes subgroup reproducible replay (SRR), a hybrid deterministic replay method that provides the benefits of both data-replay and order-replay while balancing their trade-offs. SRR divides all processes into disjoint groups. It records the contents of messages crossing group boundaries, as in data-replay, but records only message orderings for communication within a group, as in order-replay. In this way, SRR exploits the communication locality of traffic patterns in MPI applications. During replay, developers can replay each group individually. SRR reduces recording overhead by not recording the contents of intra-group communication, and reduces replay overhead by limiting the size of each replay group. Exposing these trade-offs gives the user the control necessary to make deterministic replay practical for MPI applications.
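The record-side decision described above can be sketched as follows. This is an illustrative sketch only, not MPIWiz's actual API: the function and log-entry names are hypothetical, and a static block partition of ranks into groups is assumed for simplicity.

```python
# Hypothetical sketch of SRR's record-side logic (names are
# illustrative, not MPIWiz's actual API). Ranks are statically
# partitioned into disjoint replay groups; a receive is logged with
# its full payload only when the sender lies outside the receiver's
# group.

GROUP_SIZE = 4  # assumed block partition: ranks 0-3, 4-7, ...

def group_of(rank):
    return rank // GROUP_SIZE

def record_recv(my_rank, src_rank, tag, payload, log):
    if group_of(src_rank) == group_of(my_rank):
        # order-replay style: remember only the message ordering
        log.append(("order", src_rank, tag))
    else:
        # data-replay style: cross-group traffic is fully recorded,
        # so the group can later be replayed in isolation
        log.append(("data", src_rank, tag, bytes(payload)))

log = []
record_recv(my_rank=1, src_rank=2, tag=7, payload=b"intra", log=log)
record_recv(my_rank=1, src_rank=5, tag=9, payload=b"inter", log=log)
# log now holds one ordering-only entry and one full-payload entry
```

With one group per process this degenerates to pure data-replay, and with a single group covering all processes it degenerates to pure order-replay; the group size is the knob that trades log volume against replay scope.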
We have implemented a prototype, MPIWiz, to demonstrate and evaluate SRR. MPIWiz employs a replay framework that allows transparent binary instrumentation of both library and system calls. As a result, MPIWiz replays MPI applications without source code modification or relinking, and handles non-determinism in both MPI calls and OS system calls. Our preliminary results show that MPIWiz reduces recording overhead by more than a factor of four relative to data-replay, without requiring the entire application to be replayed as in order-replay. Recording increases execution time by 27%, while the application can be replayed in just 53% of its base execution time.
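The replay side of the scheme can be sketched in the same hypothetical terms: a non-deterministic receive (e.g. one posted with a wildcard source) is resolved from the log, so replay is deterministic. Cross-group messages are served entirely from recorded payloads, which is why the rest of the application need not run; intra-group receives are re-executed but pinned to the recorded source and tag. The names and log format below are assumptions carried over from the record-side sketch, not MPIWiz's real interface.

```python
# Hypothetical replay-side sketch (illustrative names, not MPIWiz's
# actual API). `log` uses the assumed record-side format:
#   ("data", src, tag, payload)  -- cross-group, payload recorded
#   ("order", src, tag)          -- intra-group, ordering only

def replay_recv(log, cursor, do_real_recv):
    entry = log[cursor]
    if entry[0] == "data":
        _, src, tag, payload = entry
        # cross-group: fed from the log, no sender process needed
        return (src, tag, payload), cursor + 1
    else:
        _, src, tag = entry
        # intra-group: re-executed, but forced to the recorded match
        return do_real_recv(src, tag), cursor + 1

log = [("data", 5, 9, b"inter"), ("order", 2, 7)]
msg, cur = replay_recv(log, 0, do_real_recv=None)
# the first entry is served entirely from the recorded payload
```

In a real interposition-based tool such logic would sit in a wrapper around the receive call installed by binary instrumentation, so the application itself is unchanged.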
MPIWiz: Subgroup Reproducible Replay of MPI Applications. PPoPP '09.