School Information System

March 13, 2006

What's an AYP Rating? And Why It Matters

Eduwonk:

Most everyone in the political and policy world was fixated on the "what does it mean" questions about Sunday's NYT Mag story on Mark Warner. But there was also some chattering about the Outlook spread on No Child Left Behind in the Wash. Post. It was well done, including reactions from DC-area principals, an NCLB primer by Jay Mathews, and a map of DC-area schools (pdf) not making "adequate yearly progress," or AYP.


But despite the primer, readers might have been left wondering how these adequate yearly progress targets actually work. That's understandable: the process is confusing, and the ratings aren't the result of a single calculation. Instead, they come out of a multi-step process with opportunities to increase or decrease the level of difficulty at each step. It goes something like this:


First, the state chooses a test to use. This can be a pre-existing test used elsewhere, a custom-designed one based on the state's standards, or a combination of the two. Obviously, the degree of difficulty is a big issue here.


Second, the state decides what the cut score on the test will be for a student to be "proficient," as well as "basic," "advanced," and any other performance levels the state wants to have. In other words, how many questions does a student need to answer correctly? For No Child Left Behind the most important category is proficient, because that is what the law's "adequate yearly progress" ratings are based on. There are several methods for determining cut scores. What's most important to remember about them is that they all rely on professional judgment. There is no revealed source of truth about what a fifth-grader or a high school student needs to know and be able to do. At the risk of oversimplifying, the three most common methods are: convening a panel of experts to set cut scores, comparing and contrasting how various groups of test takers do on the test, and scaling the questions from easy to hard and placing the performance levels along that scale. Again, plenty of chances to increase or reduce the level of difficulty in this process.


But while newspapers commonly report the percentage of students passing a test, they rarely report what the cut scores are or when and how they are set. The composition of the professionals involved also matters a lot: is it just K-12 teachers, or outside experts too, such as representatives of higher education? Lack of attention to this process is unfortunate, because there is plenty of opportunity for mischief, and a state with a difficult test and a high cut score, say 40 out of 50, is going to have different results than a state with an easier test or a lower cut score. Yet cut scores requiring half to two-thirds of the questions correct in order to be "proficient" are not at all uncommon. All of this is public information or can be obtained through a FOIA request. And it's all extremely relevant to how those AYP ratings turn out.
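
To make the arithmetic concrete (the numbers below are purely illustrative, not drawn from any actual state's test):

    40 correct out of 50 = 80 percent of questions needed to be "proficient"
    30 correct out of 50 = 60 percent of questions needed to be "proficient"

The same students answering the same questions will produce a much larger "percent proficient" under the second threshold than the first, even though nothing about what they know has changed. That is why the cut score, and not just the headline pass rate, deserves attention.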

Dick Askey commented on test scores vis-à-vis local, state and national results here.

Posted by James Zellmer at March 13, 2006 4:22 PM