DOI: 10.1145/3287324.3287463
research-article
Public Access

Automated Critique of Early Programming Antipatterns

Published: 22 February 2019

ABSTRACT

The introductory programming lab, with small cycles of teaching, coding, testing, and critique from instructors, is an extraordinarily productive learning experience for novice programmers. We wish to extend the availability of such critique through automation, capturing the essence of interaction between student and instructor as closely as possible. Integrated Development Environments and Automated Grading Systems provide constant feedback through static analysis and unit testing. But we also wish to tailor automated feedback to acknowledge commonly recurring issues with novice programmers, in keeping with the practice of a human instructor. We argue that the kinds of mistakes that novice programmers make, and the way they are reported to the novices, deserve special care. In this paper we provide examples of early programming antipatterns that have arisen from our teaching experience, and describe different ways of identifying and dealing with them automatically through our tool WebTA. Novice students may produce code that is close to a correct solution but contains syntactic errors; WebTA attempts to salvage the promising portions of the student's submission and suggest repairs that are more meaningful than typical compiler error messages. Alternatively, a student misunderstanding may result in well-formed code that passes unit tests yet contains clear design flaws; through additional analysis, WebTA can identify and flag them. Finally, certain types of antipattern can be anticipated and flagged by the instructor, based on the context of the course and the programming exercise; WebTA allows for customizable critique triggers and messages.
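The abstract describes WebTA's mechanisms only at a high level. As a rough illustration of what one instructor-customizable critique trigger might look like, the following hypothetical Java sketch flags a classic early antipattern: comparing a boolean expression against a literal, as in "if (done == true)". The class name, regular expression, and feedback message are invented for illustration and assume a Java-based introductory course; none of this is taken from WebTA itself.

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical critique trigger: flags comparisons of a boolean against a
// literal (e.g. "done == true"). Illustrative only, not WebTA's implementation.
public class BooleanLiteralCritique {

    // Matches "== true", "== false", "!= true", "!= false" with flexible spacing.
    private static final Pattern BOOL_LITERAL_COMPARISON =
            Pattern.compile("[=!]=\\s*(true|false)\\b");

    // Instructor-authored message attached to this trigger.
    private static final String MESSAGE =
            "Comparing a boolean to 'true' or 'false' is redundant; "
            + "use the expression directly (e.g. 'if (done)' or 'if (!done)').";

    // Returns one critique line per offending source line.
    public static List<String> critique(List<String> sourceLines) {
        List<String> findings = new ArrayList<>();
        for (int i = 0; i < sourceLines.size(); i++) {
            if (BOOL_LITERAL_COMPARISON.matcher(sourceLines.get(i)).find()) {
                findings.add("line " + (i + 1) + ": " + MESSAGE);
            }
        }
        return findings;
    }

    public static void main(String[] args) {
        List<String> studentCode = List.of(
                "boolean done = isFinished();",
                "if (done == true) {",
                "    return true;",
                "}");
        critique(studentCode).forEach(System.out::println);
    }
}

In a full system, triggers of this kind would run alongside compilation and unit tests, so that well-formed code that passes the tests can still draw a design critique, as the abstract describes.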

Published in

SIGCSE '19: Proceedings of the 50th ACM Technical Symposium on Computer Science Education
February 2019, 1364 pages
ISBN: 9781450358903
DOI: 10.1145/3287324
Copyright © 2019 ACM
Publisher: Association for Computing Machinery, New York, NY, United States

Acceptance Rates

SIGCSE '19 paper acceptance rate: 169 of 526 submissions, 32%
Overall acceptance rate: 1,595 of 4,542 submissions, 35%
