How preprocessor annotations (do not) affect maintainability: a case study on change-proneness

Published: 23 October 2017

Abstract

Preprocessor annotations (e.g., #ifdef in C) enable the development of similar, but distinct software variants from a common code base. One particularly popular preprocessor is the C preprocessor, cpp. But the cpp is also widely criticized for impeding software maintenance by making code hard to understand and change. Yet, evidence to support this criticism is scarce. In this paper, we investigate the relation between cpp usage and maintenance effort, which we approximate with the frequency and extent of source code changes. To this end, we mined the version control repositories of eight open-source systems written in C. For each system, we measured if and how individual functions use cpp annotations and how they were changed. We found that functions containing cpp annotations are generally changed more frequently and more profoundly than other functions. However, when accounting for function size, the differences disappear or are greatly diminished. In summary, with respect to the frequency and extent of changes, our findings do not support the criticism of the cpp regarding maintainability.


Published in: ACM SIGPLAN Notices, Volume 52, Issue 12 (GPCE '17), December 2017, 258 pages. ISSN: 0362-1340. EISSN: 1558-1160. DOI: 10.1145/3170492.

Also in: GPCE 2017: Proceedings of the 16th ACM SIGPLAN International Conference on Generative Programming: Concepts and Experiences, October 2017, 258 pages. ISBN: 9781450355247. DOI: 10.1145/3136040.

Copyright © 2017 ACM. Publisher: Association for Computing Machinery, New York, NY, United States.
