Improve feedback for ICT education and studies
April 2, 2019

Reducing the workload on teachers and improving feedback for students

As many students and teachers experience, learning to code is challenging. It is a skill that is learned by doing: by making mistakes and learning from them. Without sufficient feedback, however, it is hard for students to know how and where to improve, and their motivation may suffer as well.

Assessing students individually and giving them sufficient feedback, on the other hand, is hard and time-consuming for teachers. Especially with the growing demand for ICT and Computer Science degrees, teachers have less and less time to spend on each student.

Another issue often seen in higher education is that programming courses are assessed like many other subjects: the only summative assessment is a final exam at the end of the course. This is not particularly suited to programming, where students need constant feedback and steady practice. Graded weekly or biweekly assignments, for example, could help address this. It is necessary, though, to give not only summative feedback in the form of a grade, but also formative feedback, so that students know what they did wrong and how to improve. Even when such qualitative feedback can be given, it is important that students receive it in a timely manner, not weeks after they submitted their assignments.

All of this means that the workload on teachers is high, and it is often impossible to give feedback that is summative, formative, and timely at the same time.

Automated assessment can help make feedback timely. When grading programming assignments, manually testing whether the functionality works correctly is hugely time-consuming. Automating this assessment only requires teachers to set up testing scripts once for all students, and these scripts can potentially be reused for multiple years.


So what are the options for automated testing? The simplest form is input/output testing: you specify a certain input to a function or program and specify what output you expect, possibly using a regex to make the check a bit more flexible. For simple assignments this can be sufficient. Coupling these checks with rubric items also gives students a bit more formative feedback than just “this is what we expected, this is what your program returned”.
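
To make this concrete, below is a minimal sketch of such an input/output check in Python. The file name submission.py, the sample input, and the expected regex are illustrative assumptions, not part of any specific grading setup.

    # Minimal input/output test: run the student's program with fixed stdin
    # and match its stdout against an expected regular expression.
    import re
    import subprocess

    def io_test(input_text, expected_pattern):
        result = subprocess.run(
            ["python3", "submission.py"],   # hypothetical student submission
            input=input_text,
            capture_output=True,
            text=True,
            timeout=10,
        )
        return re.fullmatch(expected_pattern, result.stdout.strip()) is not None

    # The (hypothetical) assignment asks students to print the sum of two
    # numbers; the regex tolerates an optional "The sum is " prefix.
    print(io_test("2 3\n", r"(The sum is )?5"))

A pass or fail result like this can then be linked to a rubric item, so the student sees which requirement the check covers.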

A more advanced form of automated testing can be achieved with unit tests. Unit testing frameworks are ubiquitous and perfectly suited to testing code in depth, and they allow teachers to give more meaningful feedback to students.
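
As a sketch of what this could look like with Python's built-in unittest framework, assume students hand in a module named submission that defines a function median(values); both names are assumptions for the example.

    # Unit tests for a hypothetical median() function in the student's
    # submitted module; the assertion messages double as formative feedback.
    import unittest
    import submission

    class TestMedian(unittest.TestCase):
        def test_odd_length(self):
            self.assertEqual(
                submission.median([3, 1, 2]), 2,
                "the median of an odd-length list should be the middle value",
            )

        def test_even_length(self):
            self.assertEqual(
                submission.median([1, 2, 3, 4]), 2.5,
                "the median of an even-length list should average the two middle values",
            )

    if __name__ == "__main__":
        unittest.main()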

Besides functionality, style and structure can be assessed automatically using linters and code quality checkers, tools that are also widely used on production code.
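
As an illustration, a style check could be as simple as running an off-the-shelf linter such as flake8 on the submitted file and turning its warnings into a score; the file name and the scoring rule below are assumptions made for the sake of the example.

    # Run flake8 on a submission and convert the number of style warnings
    # into a simple, illustrative style score.
    import subprocess

    def style_warnings(path):
        result = subprocess.run(["flake8", path], capture_output=True, text=True)
        return [line for line in result.stdout.splitlines() if line]

    warnings = style_warnings("submission.py")
    for warning in warnings:
        print(warning)  # e.g. "submission.py:4:1: E302 expected 2 blank lines, got 1"
    print("Style score:", max(0, 10 - len(warnings)))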

Automated testing, however, is not the be-all and end-all. A manual check by the teacher is still recommended, both to verify that the automated assessment is correct and to provide extra formative feedback to students. A complementary assessment between human and machine ensures that students get timely, high-quality feedback, while the workload for teachers doesn't go through the roof.

In the coming months, we will release CodeGrade AutoTest, which allows teachers to set up simple and advanced automated tests at the press of a button, with the ability to provide high-quality feedback to students.

