
Steering symbolic execution to less traveled paths

Published: 29 October 2013

Abstract

Symbolic execution is a promising testing and analysis methodology. It systematically explores a program's execution space and can generate test cases with high coverage. One significant practical challenge for symbolic execution is how to effectively explore the enormous number of program paths in real-world programs. Various heuristics have been proposed to guide symbolic execution, but they are generally inefficient and ad hoc. In this paper, we introduce a novel, unified strategy to guide symbolic execution toward less explored parts of a program. Our key idea is to exploit a specific type of path spectrum, namely the length-n subpath program spectrum, to systematically approximate full path information for guiding path exploration. In particular, we use frequency distributions of explored length-n subpaths to prioritize "less traveled" parts of the program, improving test coverage and error detection. We have implemented our general strategy in KLEE, a state-of-the-art symbolic execution engine. Evaluation results on the GNU Coreutils programs show that (1) varying the length n captures program-specific information and exhibits different degrees of effectiveness, and (2) our general approach outperforms traditional strategies in both coverage and error detection.
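The strategy the abstract describes can be sketched in a few lines. The following is a minimal, hypothetical model (the class, method names, and branch-label representation are all illustrative, not KLEE's actual searcher API): each executed path contributes its length-n subpaths to a frequency table, and the searcher then selects the pending state whose trailing length-n subpath has been explored least often.

```python
from collections import Counter

class SubpathGuidedSearcher:
    """Illustrative sketch of length-n subpath-guided search.

    A path is modeled as a list of branch labels. Frequencies of
    explored length-n subpaths steer selection toward states whose
    recent branching history is rare ("less traveled").
    """

    def __init__(self, n=2):
        self.n = n
        self.freq = Counter()   # length-n subpath -> times explored
        self.states = []        # pending (path, state_data) pairs

    def record(self, path):
        """Count every length-n subpath of an executed path."""
        for i in range(len(path) - self.n + 1):
            self.freq[tuple(path[i:i + self.n])] += 1

    def add_state(self, path, data=None):
        self.states.append((list(path), data))

    def select(self):
        """Pop the pending state with the rarest trailing subpath."""
        def rarity(item):
            path, _ = item
            if len(path) < self.n:
                return 0        # too short to classify: treat as unexplored
            return self.freq[tuple(path[-self.n:])]
        best = min(self.states, key=rarity)
        self.states.remove(best)
        return best
```

With n = 1 this degenerates to branch-frequency prioritization, while larger n approximates fuller path information at higher bookkeeping cost, mirroring the trade-off the abstract attributes to varying the length n.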



Published in

ACM SIGPLAN Notices, Volume 48, Issue 10 (OOPSLA '13)
October 2013, 867 pages
ISSN: 0362-1340, EISSN: 1558-1160
DOI: 10.1145/2544173

OOPSLA '13: Proceedings of the 2013 ACM SIGPLAN International Conference on Object-Oriented Programming Systems, Languages & Applications
October 2013, 904 pages
ISBN: 9781450323741
DOI: 10.1145/2509136

Copyright © 2013 ACM

Publisher: Association for Computing Machinery, New York, NY, United States
