
This post describes a teaching technique I am revising, one that illustrates how a hybrid of distributed and automated feedback can focus instructor time on high-value feedback. When I began teaching at Sonoma State University in 2013, I used the techniques I had experienced in the classroom: lecture, individual homework, and individual testing. However, I was inspired by the work of Eric Mazur and Carl Wieman on peer instruction in physics, and my initial experiments with it were very promising. I decided to adapt an exam technique in which students worked in teams on quizzes and exams and were allowed multiple attempts on each question. The examples I saw used either commercially available paper IF-AT forms or the Learning Catalytics website to provide automated grading of the exam questions. I wanted the benefits of the technique without the complexity, expense, or constraint of adapting my materials to a commercial product. The simplest approach was for me to use the same paper quiz but then walk around the room and mark questions during the exam.

This quiz technique was well received by students and allowed me to uncover subtle student difficulties. I was also impressed by how long students remained engaged in these exercises. The basic benefits of the technique come from getting feedback easily and quickly during the exercise, by using multiple choice questions and by working in teams that must arrive at a consensus. Multiple choice, though very limited, is easy and fast to mark, and with well-constructed distractors it has real diagnostic power. Working in teams provides two main benefits: students must clearly articulate their strategies and reasoning to each other, and they must arrive at a consensus. These discussions also allowed me to observe their mental models in real time. Once a team arrived at a consensus and marked an answer, I could tell them whether it was correct. Confronted with immediate feedback, students tried very hard to determine their error, whether in conceptual understanding or in the calculation. They were also able to ask me and each other questions and to articulate their difficulties. This simple grading structure allowed me to focus my efforts on groups that were struggling. Students got centralized feedback from the instructor as well as distributed feedback from their peers.

This technique works very well for classes of about 25 students, where there are fewer than ten teams. In larger classes, students had to wait for me to reach them and mark answers, a poor use of their class time. The feedback they are waiting for is very simple: a, b, c, or d? This is a perfect opportunity for automation, but automation comes with a large initial and ongoing investment of time. The question I asked myself was: what is the simplest solution that lets students get feedback more quickly while retaining the essence of the activity in larger classes? For me to make this investment and accept this increase in complexity, I have to believe the benefit outweighs the cost. My strategy is to use our Moodle learning management system to administer an online quiz in parallel with the paper quiz, removing the need for me to do this simple, repetitive grading task. Using a smartphone or laptop, a team member can submit the multiple choice answer and immediately find out whether the attempt is correct. With this quiz implementation, I can focus all my time on working with the teams that are having difficulties. This fall, I'll be converting these quizzes from paper (LaTeX format) to the format for our Moodle LMS.
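To make that conversion concrete, here is a minimal sketch of how quiz questions could be exported to Moodle's GIFT import format, which the Moodle question bank can read directly. The question content, file name, and helper function below are hypothetical illustrations rather than my actual quiz materials, and in practice the question data would be extracted from the existing LaTeX source rather than written out by hand.

```python
# Minimal sketch: emit multiple choice questions in Moodle's GIFT import format.
# The question data here is a hypothetical stand-in for content parsed from the
# existing LaTeX quizzes.

questions = [
    {
        "name": "unit-conversion-1",
        "text": "How many meters are in 3.2 km?",
        "correct": "3200 m",
        "distractors": ["32 m", "320 m", "32000 m"],
    },
]

def to_gift(q):
    """Render one multiple choice question in GIFT syntax."""
    lines = [f"::{q['name']}:: {q['text']} {{"]
    lines.append(f"    ={q['correct']}")      # '=' marks the correct answer
    for wrong in q["distractors"]:
        lines.append(f"    ~{wrong}")         # '~' marks a distractor
    lines.append("}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Produce a file that can be loaded through Moodle's question import page.
    with open("quiz.gift", "w") as f:
        f.write("\n\n".join(to_gift(q) for q in questions))
```

On the Moodle side, a question behaviour such as "Interactive with multiple tries" can then reproduce the multiple-attempt structure of the paper quiz, so a team finds out immediately whether its consensus answer is correct.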

It is worth reflecting on the limitations of this strategy. First of all, multiple choice questions are only suitable for certain types of assessment. I use them mostly for drilling on the mechanics of using formulas, handling large numbers, and converting units. Giving students feedback on crafting solution strategies has to be handled another way. Again, this will require a significant upfront investment in reformatting the quizzes and getting the technology set up. However, my experience is that such improvements keep paying off for several semesters.

I hope this illustrates a case where technology was added to the classroom to achieve a specific pedagogical goal. In this case, the goal was to maximize the time both students and the instructor spend engaged with the material. Technology is often reflexively suggested as a way of improving educational outcomes, and technologists (myself included) can make the mistake of focusing on the tool at the expense of the desired outcome. It bears repeating that I didn't initially use automated grading technology because I didn't think it would be worth the investment; only later did I come to see the benefit as worth that expense. What was clarifying for me in designing this improvement was to separate the pedagogical benefits and goals of the activity from the technology that can enable those outcomes.