Abstract
The intensive use of generative programming techniques provides an elegant engineering solution to deal with the heterogeneity of platforms and technological stacks. The use of domain-specific languages, for example, leads to the creation of numerous code generators that automatically translate high-level system specifications into multi-target executable code. Producing correct and efficient code generators is complex and error-prone. Although software designers generally provide high-level test suites to verify the functional outcome of generated code, it remains challenging and tedious to verify the behavior of the produced code in terms of non-functional properties. This paper describes a practical approach based on a runtime monitoring infrastructure to automatically detect potentially inefficient code generators. This infrastructure, which uses system containers as execution platforms, allows code-generator developers to evaluate the performance of the generated code. We evaluate our approach by analyzing the performance of Haxe, a popular high-level programming language that ships with a set of cross-platform code generators. Experimental results show that our approach is able to detect performance inconsistencies that reveal real issues in the Haxe code generators.
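The comparison step the abstract alludes to — flagging a target backend whose generated code runs disproportionately slowly relative to its siblings — can be sketched as a simple outlier check over per-backend timings. The following is an illustrative sketch only, not the paper's actual implementation: the function name, the backend timing values, and the median-based threshold heuristic are all assumptions; collecting the timings by running each generated variant inside identical containers is left out.

```python
import statistics

def flag_slow_backends(timings, factor=2.0):
    """Flag backends whose median run time exceeds `factor` times the
    median of all backends' medians (a simple cross-target outlier test).

    timings: dict mapping backend name -> list of run times in seconds.
    Returns the set of backend names flagged as performance outliers.
    """
    medians = {b: statistics.median(ts) for b, ts in timings.items()}
    reference = statistics.median(medians.values())
    return {b for b, m in medians.items() if m > factor * reference}

# Hypothetical times for one benchmark generated for several Haxe
# targets and executed under identical container configurations.
runs = {
    "cpp":    [0.9, 1.0, 1.1],
    "java":   [1.2, 1.3, 1.2],
    "js":     [1.1, 1.0, 1.2],
    "python": [9.8, 10.1, 9.9],  # hypothetical slow outlier
}
print(flag_slow_backends(runs))  # → {'python'}
```

Using the median of medians as the reference keeps the check robust: a single pathological backend inflates the mean but barely moves the median, so genuine outliers still stand out.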
Automatic non-functional testing of code generators families. In GPCE 2016: Proceedings of the 2016 ACM SIGPLAN International Conference on Generative Programming: Concepts and Experiences.