research-article

vSensor: leveraging fixed-workload snippets of programs for performance variance detection

Published: 10 February 2018

Abstract

Performance variance is an increasingly challenging problem on current large-scale HPC systems. Even with a fixed number of computing nodes, execution times can vary significantly across runs. Many parallel programs executing on supercomputers suffer from such variance. Performance variance not only causes unpredictable violations of performance requirements, but also makes program behavior harder to understand. Despite prior efforts, efficient online detection of performance variance remains an open problem.

In this paper, we propose vSensor, a novel approach for lightweight, online performance variance detection. The key insight is that, instead of relying solely on an external detector, the source code of a program itself can reveal its runtime performance characteristics. Specifically, many parallel programs contain code snippets that are executed repeatedly with an invariant quantity of work. Based on this observation, we use compiler techniques to automatically identify these fixed-workload snippets and use them as performance variance sensors (v-sensors) that enable effective detection. We evaluate vSensor with a variety of parallel programs on the Tianhe-2 system. Results show that vSensor can effectively detect performance variance on HPC systems. The performance overhead is smaller than 4% with up to 16,384 processes. In particular, with vSensor, we found a bad node with slow memory that degraded a program's performance by 21%. As a showcase, we also detected a severe network performance problem that caused a 3.37X slowdown for an HPC kernel program on the Tianhe-2 system.
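The core detection idea in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: a snippet known (by compiler analysis) to perform a fixed amount of work is timed on every execution, and a timing that deviates far from the historical baseline signals performance variance. The class name, threshold, and helper function are illustrative assumptions.

```python
import time
import statistics

class VSensor:
    """Illustrative sketch of a performance variance sensor (v-sensor).

    A fixed-workload snippet should take roughly the same time on every
    execution, so a large deviation from the historical baseline suggests
    external interference (slow node, noisy network, OS jitter, etc.).
    """

    def __init__(self, threshold=1.5, warmup=5):
        self.history = []          # timings of past executions of the snippet
        self.threshold = threshold # slowdown factor that triggers a report
        self.warmup = warmup       # samples needed before judging variance

    def record(self, elapsed):
        """Record one timing; return True if it indicates variance."""
        if len(self.history) >= self.warmup:
            baseline = statistics.median(self.history)
            if elapsed > self.threshold * baseline:
                return True        # flagged timings are not added to history
        self.history.append(elapsed)
        return False

def timed_snippet(sensor, fn, *args):
    """Run a fixed-workload snippet and feed its elapsed time to the sensor."""
    start = time.perf_counter()
    result = fn(*args)
    flagged = sensor.record(time.perf_counter() - start)
    return result, flagged
```

In the paper's actual system the snippets are identified automatically by the compiler and instrumented across MPI processes; this sketch only shows the per-snippet variance check against a running baseline.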


Published in

ACM SIGPLAN Notices, Volume 53, Issue 1 (PPoPP '18), January 2018, 426 pages
ISSN: 0362-1340, EISSN: 1558-1160, DOI: 10.1145/3200691

PPoPP '18: Proceedings of the 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, February 2018, 442 pages
ISBN: 9781450349826, DOI: 10.1145/3178487

Copyright © 2018 ACM

Publisher: Association for Computing Machinery, New York, NY, United States
