In the aftermath of Hurricane Katrina, one of the largest US insurers refused to honor hurricane insurance claims from its Gulf Coast customers, asserting that their property had been damaged not by the hurricane but by flooding. Only a high-stakes, high-profile class-action lawsuit ultimately pried the insurance payments loose. Currently, this large US insurance company, with its own trust issues, is running a series of television commercials poking fun at an institution that it assumes the public trusts even less: the Internet. "They wouldn't put it on the Internet if it wasn't true," says the naïve foil who purchased allegedly inferior insurance after believing the promises in an Internet advertisement, presumably eliciting off-screen laughter in millions of living rooms.
Now suppose that you are responsible for learning the "state of the art" in the research literature on an important, politically sensitive, and hotly contested public policy topic. You can save money by hiring master's-level public policy students or recent graduates, though none with any particular knowledge of or experience in the topic at hand, a highly specialized topic with its own doctoral-level training, occupational specializations, and vocabulary. You give your public policy master's graduates a computer with an Internet browser and ask them to complete their reports within a few months. What would you expect them to produce?
You can see for yourself at the website of the Organisation for Economic Co-operation and Development2 (OECD). In 2009 the OECD launched the Review on Evaluation and Assessment Frameworks for Improving School Outcomes. Apparently the "Review" has not claimed an acronym. In my own interest, then, I give it one: REAFISO.3
In its own words, REAFISO was created to:
“…provide analysis and policy advice to countries on the following overarching policy question: How can assessment and evaluation policies work together more effectively to improve student outcomes in primary and secondary schools?”
To answer this question, the OECD intended to:
“…look at the various components of assessment and evaluation frameworks that countries use with the objective of improving the student outcomes produced by schools…. and
“…extend and add value to the existing body of international work on evaluation and assessment policies.”
REAFISO's work interested me for two reasons. First, I once worked at the OECD, on fixed-length consulting contracts accumulating to sixteen months. I admired and respected its education research work and thoroughly enjoyed my time outside work hours. (The OECD is based in Paris.) I particularly appreciate the OECD's education (statistical) indicators initiatives.
Second, I have worked on my own time to address the overarching question they pose, ultimately publishing a meta-analysis and research summary of the effect of testing on student achievement. As I lacked the OECD's considerable resources, it took me some time, a decade as it turned out, to reach a satisfactory stage of completion. I hedge on the word "completion" because I do not believe it possible for one individual to collect all the studies in this enormous research literature.
Deficiencies of the OECD's REAFISO research reviews include:

- overwhelming dependence on US sources;
- overwhelming dependence on inexpensive, easily found documents;
- overwhelming dependence on the work of economists and education professors;
- wholesale neglect of the relevant literature in psychology, the social science that invented cognitive assessment, and of the work of practicing assessment and measurement professionals;
- wholesale neglect of the majority of pertinent research.
Moreover, it seems that REAFISO has fully aligned itself with a single faction within the heterogeneous universe of education research: the radical constructivists. Has the OECD joined the US education establishment? One would not think that it had the same (self-)interests. Yet, canon by canon by canon, REAFISO's work seems to subscribe to US education establishment dogma. For example, in her report "Assessment and Innovation in Education", Janet Looney writes