research-article

Enabledness-based Testing of Object Protocols

Published: 3 January 2021

Abstract

A significant proportion of classes in modern software introduce or use object protocols: prescriptions on the temporal orderings of method calls on objects. This article studies search-based test generation techniques that exploit a particular abstraction of object protocols, enabledness preserving abstractions (EPAs), to find failures. We define coverage criteria over an extension of EPAs that accounts for abnormal method termination, and we propose a search-based test case generation technique aimed at achieving high coverage under these criteria. Results suggest that the proposed technique, when driven by a fitness function combining structural and extended EPA coverage, detects not only protocol failures but also general failures more effectively than random testing and search-based test generation for standard structural coverage.
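To make the core idea concrete, the following is a minimal illustrative sketch (not the paper's tool, and all names here are hypothetical): an EPA abstracts an object's state into the set of currently *enabled* methods, so a test sequence covers an EPA transition whenever a call moves the object from one enabledness set to another. The example models a tiny ListIterator-like protocol in which `remove` is enabled only immediately after `next`.

```python
class ListIter:
    """Minimal iterator over a list with a 'remove only after next' protocol."""
    def __init__(self, items):
        self.items = list(items)
        self.pos = 0
        self.can_remove = False  # remove() is enabled only right after next()

    def has_next(self):
        return self.pos < len(self.items)

    def next(self):
        assert self.has_next(), "protocol violation: next() on exhausted iterator"
        self.pos += 1
        self.can_remove = True
        return self.items[self.pos - 1]

    def remove(self):
        assert self.can_remove, "protocol violation: remove() before next()"
        self.pos -= 1
        del self.items[self.pos]
        self.can_remove = False


def epa_state(it):
    """The EPA abstraction: the set of protocol methods enabled right now."""
    return frozenset(
        m for m, ok in [("next", it.has_next()), ("remove", it.can_remove)] if ok
    )


def covered_transitions(items, calls):
    """Replay a call sequence and collect the EPA transitions it covers."""
    it = ListIter(items)
    covered, src = set(), epa_state(it)
    for call in calls:
        getattr(it, call)()          # perform the protocol call
        dst = epa_state(it)          # abstract the resulting state
        covered.add((src, call, dst))
        src = dst
    return covered
```

For instance, replaying `["next", "remove", "next"]` on a two-element list covers three distinct EPA transitions; a fitness function in the spirit of the article would reward candidate tests for covering transitions (and abnormal terminations) not yet seen.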



Published in

ACM Transactions on Software Engineering and Methodology, Volume 30, Issue 2
Continuous Special Section: AI and SE
April 2021, 463 pages
ISSN: 1049-331X
EISSN: 1557-7392
DOI: 10.1145/3446657
Editor: Mauro Pezzè
          Copyright © 2021 ACM


          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 3 January 2021
          • Revised: 1 July 2020
          • Accepted: 1 July 2020
          • Received: 1 June 2019


          Qualifiers

          • research-article
          • Research
          • Refereed
