ABSTRACT
The introductory programming lab, with its small cycles of teaching, coding, testing, and critique from instructors, is an extraordinarily productive learning experience for novice programmers. We wish to extend the availability of such critique through automation, capturing the essence of the interaction between student and instructor as closely as possible. Integrated Development Environments and Automated Grading Systems provide constant feedback through static analysis and unit testing, but we also wish to tailor automated feedback to acknowledge commonly recurring issues among novice programmers, in keeping with the practice of a human instructor. We argue that the kinds of mistakes novice programmers make, and the way those mistakes are reported to them, deserve special care. In this paper we present examples of early programming antipatterns drawn from our teaching experience, and describe different ways of identifying and dealing with them automatically through our tool WebTA. Novice students may produce code that is close to a correct solution but contains syntactic errors; WebTA attempts to salvage the promising portions of the student's submission and suggest repairs that are more meaningful than typical compiler error messages. Alternatively, a student misunderstanding may result in well-formed code that passes unit tests yet contains clear design flaws; through additional analysis, WebTA can identify and flag these. Finally, certain types of antipatterns can be anticipated by the instructor based on the context of the course and the programming exercise; WebTA allows instructors to define customizable critique triggers and messages for them, as sketched below.
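To make the last point concrete, the following minimal sketch shows one way an instructor-defined critique trigger might be expressed: a pattern describing an antipattern, paired with an instructor-authored message that is emitted whenever the pattern occurs in a submission. The `CritiqueTrigger` class and its API are our own illustrative assumptions, not WebTA's actual interface.

```java
import java.util.List;
import java.util.regex.Pattern;

/**
 * Hypothetical sketch of an instructor-configurable critique trigger,
 * in the spirit of WebTA's customizable critiques. Names and API are
 * illustrative assumptions only.
 */
public class CritiqueTrigger {
    private final Pattern pattern;  // textual antipattern to look for
    private final String message;   // instructor-authored feedback

    public CritiqueTrigger(String regex, String message) {
        this.pattern = Pattern.compile(regex);
        this.message = message;
    }

    /** Returns the critique message if the antipattern occurs, else null. */
    public String check(String studentSource) {
        return pattern.matcher(studentSource).find() ? message : null;
    }

    public static void main(String[] args) {
        // Two classic novice antipatterns with tailored messages.
        List<CritiqueTrigger> triggers = List.of(
            new CritiqueTrigger("==\\s*true\\b",
                "Comparing a boolean to 'true' is redundant; use the boolean directly."),
            new CritiqueTrigger("catch\\s*\\([^)]*\\)\\s*\\{\\s*\\}",
                "An empty catch block silently hides errors; handle or report the exception.")
        );

        String submission = "if (done == true) { return; }";
        for (CritiqueTrigger t : triggers) {
            String critique = t.check(submission);
            if (critique != null) {
                System.out.println("Critique: " + critique);
            }
        }
    }
}
```

In such a scheme, triggers would be configured per exercise, so that a message about, say, empty catch blocks fires only in assignments where the instructor anticipates that mistake, keeping the feedback aligned with the context of the course.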