DOI: 10.1145/2001420.2001445
research-article

Are automated debugging techniques actually helping programmers?

Online: 17 July 2011

ABSTRACT

Debugging is notoriously difficult and extremely time-consuming. Researchers have therefore invested a considerable amount of effort in developing automated techniques and tools for supporting various debugging tasks. Although potentially useful, most of these techniques have yet to demonstrate their practical effectiveness. One common limitation of existing approaches, for instance, is their reliance on a set of strong assumptions about how developers behave when debugging (e.g., that examining a faulty statement in isolation is enough for a developer to understand and fix the corresponding bug). In more general terms, most existing techniques just focus on selecting subsets of potentially faulty statements and ranking them according to some criterion. By doing so, they ignore the fact that understanding the root cause of a failure typically involves complex activities, such as navigating program dependencies and rerunning the program with different inputs. The overall goal of this research is to investigate how developers use and benefit from automated debugging tools through a set of human studies. As a first step in this direction, we perform a preliminary study on a set of developers by providing them with an automated debugging tool and two tasks to be performed with and without the tool. Our results provide initial evidence that several assumptions made by automated debugging techniques do not hold in practice. Through an analysis of the results, we also provide insights on potential directions for future work in the area of automated debugging.
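The "ranking statements according to some criterion" that the abstract refers to can be illustrated with a minimal sketch of spectrum-based fault localization in the spirit of Tarantula (Jones et al.): each statement is scored by how often it is covered by failing versus passing test runs. The function name and the toy data below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of a spectrum-based fault-localization ranking
# (Tarantula-style). Statements covered mostly by failing tests
# score closer to 1.0; statements covered mostly by passing
# tests score closer to 0.0.

def tarantula_suspiciousness(coverage, outcomes):
    """coverage: {test_name: set of covered statement ids}
    outcomes: {test_name: True if the test passed, False if it failed}
    Returns (statement, score) pairs, most suspicious first."""
    total_passed = sum(1 for ok in outcomes.values() if ok)
    total_failed = sum(1 for ok in outcomes.values() if not ok)
    statements = set().union(*coverage.values())
    scores = {}
    for s in statements:
        passed = sum(1 for t, cov in coverage.items() if s in cov and outcomes[t])
        failed = sum(1 for t, cov in coverage.items() if s in cov and not outcomes[t])
        pass_ratio = passed / total_passed if total_passed else 0.0
        fail_ratio = failed / total_failed if total_failed else 0.0
        denom = pass_ratio + fail_ratio
        scores[s] = fail_ratio / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: statement 3 is covered only by the failing test t2,
# so it ends up at the top of the ranking.
coverage = {"t1": {1, 2}, "t2": {1, 2, 3}, "t3": {1}}
outcomes = {"t1": True, "t2": False, "t3": True}
ranking = tarantula_suspiciousness(coverage, outcomes)
```

The paper's point is precisely that handing the developer such a ranked list, and assuming they can diagnose the fault by inspecting each statement in isolation, may not match how debugging actually proceeds.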


• Published in

  ISSTA '11: Proceedings of the 2011 International Symposium on Software Testing and Analysis
  July 2011, 394 pages
  ISBN: 9781450305624
  DOI: 10.1145/2001420
  General Chair: Matthew Dwyer; Program Chair: Frank Tip

  Copyright © 2011 ACM

  Publisher: Association for Computing Machinery, New York, NY, United States

  Published: 17 July 2011


      Acceptance Rates

      ISSTA '11 Paper Acceptance Rate 35 of 121 submissions, 29%
      Overall Acceptance Rate 241 of 915 submissions, 26%
