
Automatic non-functional testing of code generators families

Published: 20 October 2016

Abstract

The intensive use of generative programming techniques provides an elegant engineering solution for dealing with the heterogeneity of platforms and technology stacks. The use of domain-specific languages, for example, leads to the creation of numerous code generators that automatically translate high-level system specifications into multi-target executable code. Producing correct and efficient code generators is complex and error-prone. Although software designers generally provide high-level test suites to verify the functional outcome of generated code, it remains challenging and tedious to verify the behavior of the produced code in terms of non-functional properties. This paper describes a practical approach, based on a runtime monitoring infrastructure, to automatically detect potentially inefficient code generators. This infrastructure, which uses system containers as execution platforms, allows code-generator developers to evaluate the performance of the generated code. We evaluate our approach by analyzing the performance of Haxe, a popular high-level programming language that ships with a set of cross-platform code generators. Experimental results show that our approach is able to detect performance inconsistencies that reveal real issues in Haxe code generators.
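The approach amounts to a differential check across the targets of one code-generator family: the same benchmark is compiled to every Haxe target, each binary is executed inside an identical system container, and the measured resource usage is compared across targets to spot outliers. A minimal sketch of that comparison step is below; the function name, the deviation threshold, and the sample timings are hypothetical illustrations, not the paper's actual parameters:

```python
import statistics

def flag_inconsistencies(timings, threshold=3.0):
    """Flag targets whose runtime deviates from the cross-target median.

    timings: dict mapping target name -> measured execution time (seconds)
    for the same benchmark. A target is flagged when its time exceeds
    `threshold` times the median across all targets, which hints at an
    inefficiency introduced by that target's code generator.
    """
    median = statistics.median(timings.values())
    return sorted(t for t, v in timings.items() if v > threshold * median)

# Hypothetical measurements for one benchmark across Haxe targets.
timings = {"cpp": 0.8, "java": 1.1, "js": 1.3, "python": 9.5, "php": 2.4}
print(flag_inconsistencies(timings))  # ['python']
```

In the full infrastructure, the timings would come from monitoring each containerized execution (e.g. from the container engine's resource statistics) rather than from a hard-coded dictionary, and the same comparison could be applied to memory or CPU usage.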


• Published in

  ACM SIGPLAN Notices, Volume 52, Issue 3 (GPCE '16), March 2017, 212 pages
  ISSN: 0362-1340, EISSN: 1558-1160
  DOI: 10.1145/3093335

• GPCE 2016: Proceedings of the 2016 ACM SIGPLAN International Conference on Generative Programming: Concepts and Experiences
  October 2016, 212 pages
  ISBN: 9781450344463
  DOI: 10.1145/2993236

Copyright © 2016 ACM

Publisher

Association for Computing Machinery, New York, NY, United States
