
Systematic black-box analysis of collaborative web applications

Published: 14 June 2017

Abstract

Web applications, such as collaborative editors that allow multiple clients to concurrently interact on a shared resource, are difficult to implement correctly. Existing techniques for analyzing concurrent software do not scale to such complex systems or do not consider multiple interacting clients. This paper presents Simian, the first fully automated technique for systematically analyzing multi-client web applications.

Naively exploring all possible interactions between a set of clients of such applications is practically infeasible. Simian achieves scalability on real-world applications through a two-phase black-box approach that never inspects the application code. The first phase systematically explores the application with a single client to infer potential conflicts between client events triggered in a specific context. The second phase synthesizes multi-client interactions targeted at triggering misbehavior that may result from these potential conflicts, and reports an inconsistency if the clients do not converge to a consistent state.
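The two phases described above can be sketched as follows. This is a minimal illustration, not Simian's actual implementation: the client-execution callbacks (`run_single_client`, `run_clients`) and the toy event model in the usage example are hypothetical stand-ins for the black-box interface to a real collaborative application.

```python
from itertools import permutations

def phase1_infer_conflicts(events, run_single_client):
    """Phase 1: explore ordered event pairs with a single client and
    record pairs whose execution order changes the resulting state --
    these are the potential conflicts."""
    conflicts = []
    for a, b in permutations(events, 2):
        if run_single_client([a, b]) != run_single_client([b, a]):
            conflicts.append((a, b))
    return conflicts

def phase2_check_convergence(conflicts, run_clients):
    """Phase 2: replay each potential conflict with two concurrent
    clients and report an inconsistency if their replicas fail to
    converge to the same final state."""
    inconsistencies = []
    for a, b in conflicts:
        replica1, replica2 = run_clients(a, b)  # client 1 fires a, client 2 fires b
        if replica1 != replica2:
            inconsistencies.append((a, b))
    return inconsistencies
```

For example, with events that append characters to a shared string, phase 1 flags any order-sensitive pair, and phase 2 flags it as an inconsistency if a (deliberately broken) merge leaves the two replicas as `a + b` and `b + a` respectively.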

We evaluate the analysis on three widely used systems, Google Docs, Firepad, and ownCloud Documents, where it reports a variety of inconsistencies, such as incorrect formatting and misplaced text fragments. Moreover, we find that the two-phase approach runs 10x faster than exhaustive exploration, making systematic analysis practically applicable.




Published in

ACM SIGPLAN Notices, Volume 52, Issue 6 (PLDI '17), June 2017, 708 pages. ISSN: 0362-1340, EISSN: 1558-1160. DOI: 10.1145/3140587.

PLDI 2017: Proceedings of the 38th ACM SIGPLAN Conference on Programming Language Design and Implementation, June 2017, 708 pages. ISBN: 9781450349888. DOI: 10.1145/3062341.

        Copyright © 2017 ACM

Publisher: Association for Computing Machinery, New York, NY, United States

