High-quality feedback delivered promptly is an important part of undergraduate learning, but providing it is difficult in larger classes. Instructors respond to larger class sizes either by increasing their workload or by reducing the frequency or quality of their assessment. Increasing workload quickly reaches a human limit, and instructors are reluctant to lower the quality of assessment. At the same time, computers and automation handle a growing share of instruction and feedback in higher education. What is the appropriate role of computation and automation in providing better feedback to students while keeping instructor workload manageable?

A few strategies are available to increase the amount of feedback to students or to respond to growing class sizes. Instructors can hire graders or teaching assistants and divide the work among them until each individual workload is manageable. Instructors can create assignments that use automated grading to reduce the manual grading workload. Instructors can also move from centralized grading to distributed approaches that allow students or other non-instructors to give feedback. Automated grading explicitly requires computers, but the other methods can also use automation to amplify human effort.

The dominant grading strategy in many classes is for most feedback to come from the main instructor or from teaching assistants. To borrow from the language of electricity production, this is a centralized strategy. Many larger classes use a phalanx of teaching assistants to manage feedback to students. Unfortunately, budget constraints or a lack of graduate students at many universities make this strategy infeasible. Because of the structure of the curriculum I teach in, students who have previously taken the class are rarely available as assistants. Much of the feedback I give requires personal attention, but since I teach technical classes, some of my assessment can be done by a computer.

Autograding techniques are growing in sophistication and can catch many common errors, but they cannot substitute for many types of feedback. Scantron grading has existed for a long time, and online exams are gaining popularity. The availability of computers and learning management systems (LMS) expands both the availability and the types of automated assessment possible. In addition to multiple-choice questions, textual and numerical free-response questions can now be graded automatically. Instructors see these techniques as a necessity in larger classes but bemoan the limited feedback quality of multiple-choice questions. Since this approach has limits, I'm interested in which barriers to providing higher-quality feedback can be reduced with automation. As these tools grow more sophisticated, more types of feedback can be provided automatically. These techniques can reduce the need for rote grading and free up time and mental energy for deeper feedback.
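To make the numeric free-response idea concrete, here is a minimal sketch of how such a check might work. It is not the API of any real LMS; the function name, tolerance, and question are illustrative assumptions. The key design choice is accepting answers within a tolerance of the expected value, so reasonable rounding isn't penalized, while unparseable answers are flagged rather than silently marked wrong.

```python
# Hypothetical sketch of a numeric free-response autograder.
# Accepts any submission within a relative tolerance of the expected
# value; non-numeric submissions are flagged for manual review.

def grade_numeric(submission: str, expected: float, rel_tol: float = 0.01) -> bool:
    """Return True if the submitted text parses as a number within
    rel_tol (fractional tolerance) of the expected answer."""
    try:
        value = float(submission.strip())
    except ValueError:
        return False  # unparseable: route to a human grader instead
    return abs(value - expected) <= rel_tol * abs(expected)

# Illustrative question: expected answer 9.81 (m/s^2), 1% tolerance.
print(grade_numeric("9.8", 9.81))   # within tolerance
print(grade_numeric("nine", 9.81))  # unparseable
```

A production version would handle units and significant figures, which is exactly where the limits of this style of feedback start to show.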

Distributing feedback responsibilities to students is a strategy that lowers the instructor's workload and gives students a more challenging mental task. In my classes, in formal and informal ways, I distribute feedback responsibilities to students with good results. If students can provide feedback of sufficient depth and quality to stimulate learning, then the strategy can scale to much larger classes. For this to work, students must be given clear guidelines for providing feedback, along with an explanation that doing so is a valuable learning exercise for them. One benefit of this approach is that evaluating the quality and relevance of another's work demands higher-order thinking. While the feedback will differ from that provided by an expert, if it prompts the learner to reflect on their mental models, it can be equally valuable.

I tend to think of my grading and assessment strategies along two dimensions: automation and distribution. Thinking along these two dimensions helps me choose and combine techniques to create good exercises for students. There is an interesting middle ground that I am exploring, with automation playing an amplifying role for manual instructor and peer grading. Automation can perform simple tasks in collection, organization, and feedback, freeing time and energy for more substantial criticism of the work. It is important to recognize that all of these techniques have limits. The point of using them is not to remove human grading but to amplify and focus precious human effort on the best tasks for student learning.
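The "amplifying" middle ground might look like the following sketch: a mechanical pre-check triages submissions so human attention goes to substantive feedback rather than spotting missing pieces. The check, section names, and data shapes here are my own illustrative assumptions, not part of any real course tooling.

```python
# Hypothetical triage step: automation handles the rote check
# (are all required sections present?), and humans spend their
# time on the submissions that are ready for real criticism.

def has_required_sections(text: str, required: list[str]) -> bool:
    """Mechanical check: does the submission mention every required section?"""
    lowered = text.lower()
    return all(section.lower() in lowered for section in required)

def triage(submissions: dict[str, str], required: list[str]) -> dict[str, list[str]]:
    """Bucket submissions: ready for substantive human review,
    or bounced back for missing required pieces."""
    buckets: dict[str, list[str]] = {"ready_for_review": [], "missing_sections": []}
    for student, text in submissions.items():
        key = ("ready_for_review"
               if has_required_sections(text, required)
               else "missing_sections")
        buckets[key].append(student)
    return buckets

# Illustrative run with two hypothetical submissions.
result = triage(
    {"student_a": "Abstract... Methods... Results...",
     "student_b": "Methods only so far."},
    required=["Abstract", "Methods", "Results"],
)
print(result)
```

The automation here is deliberately shallow; its value is in focusing human effort, not replacing it.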

I'll be experimenting with these techniques in the classroom this fall and spring, so check in with me and I'll let you know how it's going.