
Parallelizing top-down interprocedural analyses

Published: 11 June 2012

Abstract

Modularity is a central theme in any scalable program analysis. The core idea in a modular analysis is to build summaries at procedure boundaries, and to use the summary of a procedure to analyze the effect of calling it in its calling context. There are two ways to perform a modular program analysis: (1) top-down and (2) bottom-up. A bottom-up analysis proceeds upward from the leaves of the call graph, analyzing each procedure in the most general calling context and building its summary. In contrast, a top-down analysis starts from the root of the call graph and proceeds downward, analyzing each procedure in its calling context. Top-down analyses have several applications in verification and software model checking. However, traditionally, bottom-up analyses have been easier to scale and parallelize than top-down analyses.
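The difference between the two traversal directions can be made concrete with a small sketch (the procedure names and call graph below are hypothetical, purely for illustration):

```python
# Hypothetical call graph: caller -> callees, rooted at "main".
CALL_GRAPH = {"main": ["parse", "check"], "parse": ["lex"], "check": [], "lex": []}

def bottom_up_order(graph, root="main"):
    """Post-order over the call graph: leaves first, so each procedure
    is summarized before any of its callers is analyzed."""
    order, seen = [], set()
    def visit(p):
        if p in seen:
            return
        seen.add(p)
        for callee in graph.get(p, []):
            visit(callee)
        order.append(p)
    visit(root)
    return order

def top_down_order(graph, root="main"):
    """Pre-order from the root: each procedure is reached in the calling
    context established by its already-visited callers."""
    order, seen = [], set()
    def visit(p):
        if p in seen:
            return
        seen.add(p)
        order.append(p)
        for callee in graph.get(p, []):
            visit(callee)
    visit(root)
    return order

print(bottom_up_order(CALL_GRAPH))  # ['lex', 'parse', 'check', 'main']
print(top_down_order(CALL_GRAPH))   # ['main', 'parse', 'lex', 'check']
```

A bottom-up analysis can summarize `lex`, `check`, and `parse` independently before ever looking at `main`, which is why it parallelizes naturally; a top-down analysis only discovers `parse` and `check` once it is inside `main`'s context.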

In this paper, we propose a generic framework, BOLT, which uses MapReduce-style parallelism to scale top-down analyses. In particular, we consider top-down analyses that are demand driven, such as the ones used for software model checking. In such analyses, each intraprocedural analysis happens in the context of a reachability query. A query Q over a procedure P results in a query tree that consists of sub-queries over the procedures called by P. The key insight in BOLT is that the query tree can be explored in parallel using MapReduce-style parallelism -- the map stage runs a set of enabled queries in parallel, and the reduce stage manages inter-dependencies between queries. By iterating the map and reduce stages alternately, we can exploit the parallelism inherent in top-down analyses. Another unique feature of BOLT is that it is parameterized by the algorithm used for intraprocedural analysis. Several kinds of analyses, including may analyses, must analyses, and may-must analyses, can be parallelized using BOLT.
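The alternating map/reduce iteration described above can be sketched roughly as follows. This is an illustrative sketch, not BOLT's actual implementation: the query names, the `run_query` worker, and the scheduling details are all assumptions made for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical query tree: a query over a procedure spawns sub-queries
# over the procedures it calls; a query with no pending sub-queries can
# be answered outright.
CALLEES = {"Q_main": ["Q_parse", "Q_check"], "Q_parse": ["Q_lex"],
           "Q_check": [], "Q_lex": []}

def run_query(q, answered):
    """Map-stage worker: answer q if all its sub-queries are answered,
    otherwise report the sub-queries it is blocked on."""
    pending = [c for c in CALLEES[q] if c not in answered]
    return (q, "answered") if not pending else (q, pending)

def bolt_style_loop(root):
    answered, enabled = set(), {root}
    blocked = {}  # query -> sub-queries it is waiting on
    with ThreadPoolExecutor(max_workers=8) as pool:
        while enabled:
            # Map: run every currently enabled query in parallel.
            results = list(pool.map(lambda q: run_query(q, answered), enabled))
            # Reduce: record answers, spawn sub-queries, track dependencies.
            enabled = set()
            for q, outcome in results:
                if outcome == "answered":
                    answered.add(q)
                else:
                    blocked[q] = outcome
                    enabled.update(outcome)  # newly spawned sub-queries
            # Re-enable any blocked query whose sub-queries are all answered.
            for q, deps in list(blocked.items()):
                if all(d in answered for d in deps):
                    del blocked[q]
                    enabled.add(q)
    return answered

print(sorted(bolt_style_loop("Q_main")))
```

Each iteration runs one map stage (independent queries in parallel) and one reduce stage (sequentially resolving which queries the answers unblock), mirroring the alternation the paper describes.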

We have implemented the BOLT framework and instantiated the intraprocedural parameter with a may-must-analysis. We have run BOLT on a test suite consisting of 45 Microsoft Windows device drivers and 150 safety properties. Our results demonstrate an average speedup of 3.71x and a maximum speedup of 7.4x (with 8 cores) over a sequential analysis. Moreover, in several checks where a sequential analysis fails, BOLT is able to successfully complete its analysis.



Published in

ACM SIGPLAN Notices, Volume 47, Issue 6 (PLDI '12), June 2012, 534 pages
ISSN: 0362-1340, EISSN: 1558-1160
DOI: 10.1145/2345156

PLDI '12: Proceedings of the 33rd ACM SIGPLAN Conference on Programming Language Design and Implementation, June 2012, 572 pages
ISBN: 9781450312059
DOI: 10.1145/2254064

Copyright © 2012 ACM
Publisher: Association for Computing Machinery, New York, NY, United States

