
DiffStream: differential output testing for stream processing programs

Published: 13 November 2020

Abstract

High performance architectures for processing distributed data streams, such as Flink, Spark Streaming, and Storm, are increasingly deployed in emerging data-driven computing systems. Exploiting the parallelism afforded by such platforms, while preserving the semantics of the desired computation, is prone to errors, and motivates the development of tools for specification, testing, and verification. We focus on the problem of differential output testing for distributed stream processing systems, that is, checking whether two implementations produce equivalent output streams in response to a given input stream. The notion of equivalence allows reordering of logically independent data items, and the main technical contribution of the paper is an optimal online algorithm for checking this equivalence. Our testing framework is implemented as a library called DiffStream in Flink. We present four case studies to illustrate how our framework can be used to (1) correctly identify bugs in a set of benchmark MapReduce programs, (2) facilitate the development of difficult-to-parallelize high performance applications, and (3) monitor an application for a long period of time with minimal performance overhead.
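The notion of equivalence above, where outputs may be reordered only when the data items are logically independent, can be sketched with a naive offline check. This is not DiffStream's optimal online algorithm, only an illustrative quadratic version, and the key-based dependency relation below is an assumption made for the example:

```python
# A minimal sketch of output-stream equivalence up to reordering of
# independent items (Mazurkiewicz trace equivalence), in the spirit of
# differential output testing. NOT the paper's optimal online algorithm:
# this is an offline, quadratic check with an assumed dependency relation.

def dependent(a, b):
    # Assumed relation: items sharing a key must keep their relative
    # order; items with different keys are independent.
    return a[0] == b[0]

def equivalent(s1, s2):
    """Check whether s2 can be obtained from s1 by repeatedly
    swapping adjacent independent items."""
    if len(s1) != len(s2):
        return False
    remaining = list(s1)
    for item in s2:
        if item not in remaining:
            return False
        # First occurrence suffices: any later occurrence is preceded
        # by the same blocking items.
        i = remaining.index(item)
        # The item may move to the front only if nothing before it
        # depends on it.
        if any(dependent(prev, item) for prev in remaining[:i]):
            return False
        del remaining[i]
    return True

# Outputs of two hypothetical implementations: different keys may
# interleave freely, but per-key order must be preserved.
seq = [("a", 1), ("b", 1), ("a", 2)]
par = [("b", 1), ("a", 1), ("a", 2)]
bad = [("a", 2), ("a", 1), ("b", 1)]
print(equivalent(seq, par))  # True: only independent items reordered
print(equivalent(seq, bad))  # False: the two "a" items swapped order
```

The paper's contribution is an online algorithm for this problem that processes both streams incrementally and flags a violation as soon as one occurs, rather than comparing complete outputs after the fact.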

Supplemental Material

OOPSLA Video Presentation

