The need for Python autograding
Coding is slowly but surely becoming part of every professional field, making a fundamental knowledge of programming extremely useful. And with computer science classes being introduced in K-12/primary education, we’re sure to see an even bigger increase in computer science students, especially in Python courses.
That said, teachers can also expect a higher workload. Downloading, unzipping, running, adding feedback to and re-uploading submissions for every student and every assignment is a heavy burden! Unfortunately, this takes time away from teaching and giving students personal attention. This is where autograding can really help!
Not only does automatic grading help teachers, but it also directly benefits students. Firstly, with an instant feedback mechanism, students can submit their work and see immediately where to improve. They can correct their code and resubmit, and this feedback loop continues. This quick turnaround helps students learn at a consistent pace: they no longer have to wait days for feedback, only to have forgotten how they approached the problem in the first place. Moreover, with CodeGrade, students can also code directly in our integrated editor (from within the LMS) or hand work in through their GitHub or GitLab repository and get their feedback instantly!
Secondly, after handing in, students can see whether their solution works and fix it if it doesn’t, meaning you will never have to fix their compiler errors again. Finally, you have more time to focus on the interesting and important facets of teaching. For example, cutting down hours of grading time allows educators to give more attention to student questions and to wider learning objectives.
Ideally, an autograder balances two elements: ease of use - students and teachers should be able to get to grips with the autograder quickly and easily - and flexibility - as learners move from basic programming assignments to more advanced ones, an autograder should accept a wider range of correct outputs, among other nuances.
Setting up a basic autograded Python, Jupyter Notebook or IPython Notebook assignment is pretty simple. We’ve outlined some of the options below.
Input/Output autograding
Many programming courses, particularly introductory courses, teach students how to create programs and interact with them through the command line interface (CLI). For instance, a simple program might prompt a user to provide a number and, in return, print a statement telling the user whether their number is odd or even. Input/Output (I/O) tests are a great way of checking that these simple programs work as expected. Traditionally, however, teachers would have to download each of their students’ submissions and run them independently in their local environment, entering inputs into the program one at a time and leaving themselves vulnerable to human error.
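As a concrete illustration, a student submission for that odd-or-even exercise might look something like the sketch below (the file name and prompt wording are just placeholders):

```python
# odd_or_even.py - an example of the kind of CLI program students might submit
number = int(input("Please enter a number: "))

if number % 2 == 0:
    print(f"{number} is even")
else:
    print(f"{number} is odd")
```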
Fortunately, CodeGrade’s I/O test step allows you to run a program as you would on your own CLI - provide input (either via standard input, stdin, or as a command-line argument) to the program, and check that the expected output matches the actual output. With CodeGrade, this doesn’t have to be an exact match: teachers can make the comparison more flexible by adjusting case sensitivity, substring matching or whitespace sensitivity, or by using a regex.
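To make the mechanics concrete, here is a rough sketch of what an I/O check does behind the scenes. This is not CodeGrade’s actual implementation - the function name, options and file names are purely illustrative:

```python
# io_check.py - an illustrative sketch of an input/output test (not CodeGrade's code)
import re
import subprocess

def run_io_test(script, stdin_text, expected, ignore_case=True, use_regex=False):
    """Run a student script, feed it stdin, and compare its output to `expected`."""
    result = subprocess.run(
        ["python3", script],
        input=stdin_text,       # simulated keyboard input
        capture_output=True,    # collect stdout/stderr instead of printing them
        text=True,
        timeout=10,             # guard against programs that hang
    )
    actual = result.stdout.strip()

    if use_regex:
        flags = re.IGNORECASE if ignore_case else 0
        return re.search(expected, actual, flags) is not None

    if ignore_case:
        return expected.strip().lower() == actual.lower()
    return expected.strip() == actual

# Example: check that the odd/even program handles the input "4" correctly.
print(run_io_test("odd_or_even.py", "4\n", r"4 is even", use_regex=True))
```

Because the input() prompt ends up on the same stdout as the answer, substring or regex matching (as in the example call) is often more robust than demanding an exact match - exactly the kind of flexibility the I/O test step gives you out of the box.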