Arnold Glass and Mengxue Kang, psychology researchers at Rutgers-New Brunswick’s School of Arts and Sciences, are conducting an ongoing study using technology to monitor college students’ academic performance and to assess the effects of new instructional technologies on that performance. Noticing a problematic trend in the data—students’ homework grades far outpacing their exam grades—they have dug into a subset of their findings to try to determine what may be driving that change. The results raise questions about teaching and learning at a time when remote education opportunities are expanding.
The data come from Professor Glass’s own courses: two sections of a lecture course on human learning and memory taught between 2008 and 2011, and two sections of a lecture course on human cognition taught between 2012 and 2018. Glass and Kang analyzed student homework and exam performance (232 total question sets) for 2,433 students who took the classes over the entire period. Fifty-nine percent of students were female, and 41 percent were male. The vast majority were between the ages of nineteen and twenty-four.
Homework consisted of online quizzes of four to eight questions, posted after each lecture, that were to be completed outside the classroom before the next lecture. After each quiz window closed, the students’ answers, the correct answer, and some detail on why that answer was correct remained available throughout the semester, and were intended to serve as study guides for the exams. Three in-class exams were given each semester. In the final two years of the study, the researchers also asked students about their process for completing the online quizzes: whether they typically answered from memory (these students were dubbed Homework Generators) or looked up their answers before submitting (dubbed Homework Copiers).
The initial data sorted students into two groups: those whose exam grades were higher than their homework grades, and those for whom the reverse was true. The researchers considered the former the preferred outcome, since the lectures and online quizzes were intended to build upon one another over the course of the semester and “produce learning” that would ultimately increase the probability that students answered exam questions correctly. This was the predominant pattern shown by the data beginning in 2008. However, the percentage of students who exhibited the opposite pattern—scoring better on the online homework than on the in-person exams—increased from 14 percent in 2008 to a whopping 55 percent in 2017.