Grading Machines: Can AI Exam-Grading Replace Law Professors?

Kevin L. Cope, Jens Frankenreiter, Scott Hirst, Eric A. Posner, Daniel Schwarcz and Dane Thorley:

In the past few years, large language models (LLMs) have achieved significant technical advances, and legal-advocacy organizations are increasingly adopting them as complements to, or substitutes for, lawyers and other human experts. Several studies have examined LLMs' performance in taking law school exams, with mixed results. Yet no published study has systematically analyzed LLMs' competence at one of law professors' chief responsibilities: grading law school exams. This paper presents the results of an analysis of how LLMs perform in evaluating student responses to legal analysis questions of the kind typically administered on law school exams. The underlying data come from exams in four subjects administered at top-30 U.S. law schools. Unlike some projects in computer or data science, our goal is not to design a new LLM that minimizes error or maximizes agreement with human graders. Rather, we seek to determine whether existing models, which most professors and students can apply straightforwardly, are already suitable for the task of law exam evaluation. We find that, when provided with a detailed rubric, the LLM-assigned grades correlate with those of the human grader at Pearson correlation coefficients of up to 0.93. Our findings suggest that, even if LLMs do not fully replace humans in the near future, law professors could soon put them to valuable use, such as reviewing and validating professor grading, providing substantive feedback on ungraded midterms, and giving students feedback on self-administered practice exams.
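For readers unfamiliar with the statistic, the Pearson correlation coefficient measures the strength of the linear relationship between two sets of scores, with 1.0 indicating perfect agreement up to a linear rescaling. The sketch below, which uses made-up placeholder scores rather than any data from the study, illustrates how such a coefficient might be computed for one human grader and one LLM over the same set of exam answers.

```python
# Illustrative sketch only: compute the Pearson correlation between a
# human grader's scores and LLM-assigned scores for the same answers.
# The score lists below are hypothetical, not data from the paper.

from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical example: scores out of 100 on ten exam answers.
human_scores = [72, 85, 90, 64, 78, 88, 70, 95, 60, 82]
llm_scores   = [70, 83, 92, 66, 75, 90, 68, 93, 63, 80]

print(f"Pearson r = {pearson_r(human_scores, llm_scores):.2f}")
```

A coefficient near 0.93, as reported for the rubric-guided condition, would indicate that the LLM's rank-ordering and spread of scores track the human grader's closely, though not identically.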

