Research Article
DOI: 10.1145/1375581.1375614

Explaining failures of program analyses

Published: 7 June 2008

ABSTRACT

With programs getting larger and often more complex with each new release, programmers need all the help they can get in understanding and transforming programs. Fortunately, modern development environments, such as Eclipse, incorporate tools for understanding, navigating, and transforming programs. These tools typically use program analyses to extract relevant properties of programs.

These tools are often invaluable to developers; for example, many programmers use refactoring tools regularly. However, poor results from the underlying analyses can compromise a tool's usefulness. For example, if the underlying analysis is overly conservative, a bug-finding tool may overwhelm the user with false positives. In such cases it would be invaluable for the tool to explain to the user why it believes each bug exists. Armed with this knowledge, the user can decide which bugs are worth pursuing and which are false positives.

The contributions of this paper are as follows: (i) we describe requirements on the structure of an analysis so that we can produce reasons when the analysis fails (the user of the analysis determines whether or not an analysis's results constitute failure), and we describe a simple language that enforces these requirements; (ii) we describe how to produce necessary and sufficient reasons for analysis failure; (iii) we evaluate our system on a number of analyses and programs and find that most reasons are small (and thus usable) and that our system is fast enough for interactive use.
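To make the idea of "reasons for analysis failure" concrete, here is a minimal sketch (not the paper's actual system or language): a toy may-be-null forward analysis over a straight-line program in which every derived fact records the statement indices that justify it. When the analysis warns about a possible null dereference, that justification set serves as a small reason the user can inspect to decide whether the warning is a false positive. All names and the statement encoding are invented for illustration.

```python
# Hypothetical illustration: a toy "may-be-null" analysis that tracks,
# for each fact it derives, the set of statements justifying that fact.
# Programs are lists of (op, dst, src) tuples.

def analyze(stmts):
    # may_null maps a variable to the set of statement indices (the
    # "reason") that make the analysis consider it possibly null.
    may_null = {}
    reports = []
    for i, (op, dst, src) in enumerate(stmts):
        if op == "assign_null":            # dst = null
            may_null[dst] = {i}
        elif op == "assign_var":           # dst = src
            if src in may_null:
                # dst inherits src's reason, plus this copy statement.
                may_null[dst] = may_null[src] | {i}
            else:
                may_null.pop(dst, None)    # dst is now definitely non-null
        elif op == "deref":                # use *dst
            if dst in may_null:
                reports.append((i, dst, sorted(may_null[dst])))
    return reports

prog = [
    ("assign_null", "p", None),  # 0: p = null
    ("assign_var",  "q", "p"),   # 1: q = p
    ("assign_var",  "p", "r"),   # 2: p = r   (r assumed non-null)
    ("deref",       "q", None),  # 3: *q      -- warning raised here
]
for line, var, reason in analyze(prog):
    print(f"line {line}: {var} may be null because of statements {reason}")
# → line 3: q may be null because of statements [0, 1]
```

The reason {0, 1} excludes statement 2, which reassigns `p` but does not affect `q`; a user seeing only the justifying statements can quickly judge whether the warning is genuine. The paper's system generalizes this intuition to produce necessary and sufficient reasons for arbitrary analyses written in its restricted language.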


Published in:
PLDI '08: Proceedings of the 29th ACM SIGPLAN Conference on Programming Language Design and Implementation, June 2008, 396 pages. ISBN 9781595938602. DOI: 10.1145/1375581. General Chair: Rajiv Gupta; Program Chair: Saman Amarasinghe.

Also published in ACM SIGPLAN Notices, Volume 43, Issue 6 (PLDI '08), June 2008, 382 pages. ISSN 0362-1340; EISSN 1558-1160. DOI: 10.1145/1379022.

Copyright © 2008 ACM. Publisher: Association for Computing Machinery, New York, NY, United States.

Overall acceptance rate: 406 of 2,067 submissions, 20%.
