If only the good news about Wisconsin education were true

Roger Frank Bass:

Finally, there was some really good news about education. According to the Wisconsin Department of Public Instruction, the percentage of proficient readers in the third grade had increased from 64.8% in 1998 to 87.4% in 2005. And this improvement was broad-based – every minority group advanced substantially.

If only it were true

Deception No. 1: Test questions, their scoring and definitions of “proficiency” changed constantly. The number of test items, the kinds of items (multiple choice vs. short answer) and their content varied every year the test was given. The score needed for proficiency dropped more than 40%, from 50 in 1998 to 29 in 2005.

That sleight of hand entailed complex statistics to estimate how hard the revised test might be for the next crop of third-graders. That estimate, rather than criteria for effective reading, became the cutoff for proficiency. Obviously, even if mathematics could prove that two tests are equally hard, changing the questions every year meant that subsequent tests weren’t assessing the same thing. The tests were apples and oranges, and the mathematics a red herring.

Deception No. 2: Reading skills were less important than student guessing and the test’s margin of error. Fifty-three of the test’s 58 items were multiple choice with four possible answers, so pure guessing would yield, on average, about 13 correct answers (53 × ¼ ≈ 13). In addition, the test’s margin of error was six points.

Now remember, only 29 correct were needed for “proficiency” in 2005. So with 13 for guessing and six for test error, we have 19 of those 29 (65%). And that’s only the beginning. The statistical estimates of proficiency contributed additional error margins that were never added to the students’ scores.
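The arithmetic behind that 65% figure is easy to check. A short sketch, using only the numbers reported in the column (53 multiple-choice items, a six-point margin of error, a cutoff of 29):

```python
# Check the column's arithmetic on guessing and the margin of error.
# All figures below are taken from the column itself.
mc_items = 53                           # multiple-choice items, four choices each
expected_guesses = mc_items * (1 / 4)   # expected correct by pure guessing: 13.25
margin_of_error = 6                     # points
cutoff_2005 = 29                        # score needed for "proficiency" in 2005

# Points a student could pick up without reading at all
free_points = round(expected_guesses) + margin_of_error  # 13 + 6 = 19
share_of_cutoff = free_points / cutoff_2005

print(round(expected_guesses))   # 13
print(free_points)               # 19
print(f"{share_of_cutoff:.1%}")  # 65.5%
```

In other words, roughly two-thirds of the 2005 proficiency cutoff could be reached by chance and measurement error alone, which is the column's point.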

Besides that, schools teaching to the test add even more points. Then there’s this: To move up from being a “minimal” reader to a “basic” reader required only 14 points – one more than random guessing and far less than guessing plus the error margins just described. The assessment data say more about the test than student reading.

Deception No. 3: Data on ethnic groups. The third-grade data reported by the DPI indicated that, from 1998 to 2005, every minority improved between 20% and 44%. But yearly average increases ranged from only 2.5% to 5.5%, less than the test’s margin of error.

That means each year’s improved scores could have resulted from testing error and how well students were prepared to take the test – not improved reading. And even if we concede the reality of those 2.5% to 5.5% improvements, they are still minuscule compared with what’s obtained with well-researched reading programs. There was no reason to celebrate those data in the first place.

So what do the data tell us?

One, nobody knows how well those third-graders read and, according to the state, third-grade test data don’t predict student reading levels even a year later. That makes sense: Without an accurate measure of current reading skills, how could we predict future performance?

Two, parents need an independent, deception-free appraisal of student learning.

Three, decades of research on education’s fads amply demonstrate that those boom-to-bust cycles last about five to seven years. The third-grade test was used for seven years – it was junked right on schedule following its retirement party, where the state school superintendent declared, “When people come together, we can see results,” according to a July 13, 2005, article in the Journal Sentinel.

Why should we assume that the agency responsible for that third-grade test turned in a better performance the next time it evaluated our children and schools?

Roger Frank Bass of Port Washington is a professor of education at Carthage College. His e-mail address is rfb53074@aol.com

Via http://www.jsonline.com/story/index.aspx?id=681874.
