Value Added Models & Student Information Systems

147K PDF via a Dan Dempsey email:

The following abstract and conclusion are taken from:
Volume 4, Issue 4 – Fall 2009 – Special Issue: Key Issues in Value-Added Modeling
Would Accountability Based on Teacher Value Added Be Smart Policy? An Examination of the Statistical Properties and Policy Alternatives
Douglas N. Harris of the University of Wisconsin-Madison
Education Finance and Policy Fall 2009, Vol. 4, No. 4: 319-350.
Available here:
http://www.mitpressjournals.org/doi/pdfplus/10.1162/edfp.2009.4.4.319
Abstract
Annual student testing may make it possible to measure the contributions to student achievement made by individual teachers. But would these “teacher value added” measures help to improve student achievement? I consider the statistical validity, purposes, and costs of teacher value-added policies. Many of the key assumptions of teacher value added are rejected by empirical evidence. However, the assumption violations may not be severe, and value-added measures still seem to contain useful information. I also compare teacher value-added accountability with three main policy alternatives: teacher credentials, school value-added accountability, and formative uses of test data. I argue that using teacher value-added measures is likely to increase student achievement more efficiently than a teacher credentials-only strategy but may not be the most cost-effective policy overall. Resolving this issue will require a new research and policy agenda that goes beyond analysis of assumptions and statistical properties and focuses on the effects of actual policy alternatives.
6. CONCLUSION
A great deal of attention has been paid recently to the statistical assumptions of VAMs, and many of the most important papers are contained in the present volume. The assumptions about the role of past achievement in affecting current achievement (Assumption No. 2) and the lack of variation in teacher effects across student types (Assumption No. 4) seem least problematic. However, unobserved differences are likely to be important, and it is unclear whether the student fixed effects models, or any other models, really account for them (Assumption No. 3). The test scale is also a problem and will likely remain so because the assumptions underlying the scales are untestable. There is relatively little evidence on how administration and teamwork affect teachers (Assumption No. 1).

Related: Value Added Assessment, Standards Based Report Cards and Los Angeles’s Value Added Teacher Data.
Many notes and links on the Madison School District's student information system, Infinite Campus, are here.