Pricing Study: Machine Scoring of Student Essays

Barry Topol, John Olson, and Ed Roeber (PDF):

Education experts agree that the next generation of assessments, such as those being developed by the Partnership for the Assessment of Readiness for College and Careers (PARCC) and the Smarter Balanced Assessment Consortium (SBAC) in response to the new Common Core State Standards (CCSS), needs to do a better job of measuring deeper learning to determine whether students are acquiring the skills critical to success in the 21st century.
Existing assessments tend to emphasize “bubble in” multiple-choice questions because they are easier, faster, and cheaper to score. However, multiple-choice questions do not measure critical thinking skills as well as performance-type questions, in which students are asked to read a passage or passages and present an argument that synthesizes the information they have read. Answers to these performance-type questions tend to be scored by humans, a time-intensive and expensive process.
While some discussion has begun about increasing the overall amount of money spent on state assessment systems, for at least the near future states appear able to spend only roughly what they spend today on new summative assessments. Therefore, the question is: can the next generation of assessments be designed to better measure student critical thinking skills while costing roughly the same amount as states spend today (about $25 per student)?