Commentary on the SAT

There’s a lot to say. First, we must distinguish between two types of tests, or really two types of testing. When people say “standardized tests,” they think of the SAT, but they also think of state-mandated exams (usually bought, at great taxpayer expense, from Pearson and other for-profit companies) that are designed to serve as assessments of public K-12 schools, of aggregates and averages of students. The SAT, ACT, GRE, GMAT, LSAT, MCAT, and similar tests are oriented towards individual ability or aptitude; they exist to demonstrate prerequisite skills to admissions officers. (And, in one of the most essential functions of college admissions, to employers, who are restricted in the kinds of testing they can perform thanks to Griggs v. Duke Power Co.) Sure, researchers will sometimes use SAT data to reflect on, for example, the fact that there’s no underlying educational justification for higher graduation rates[1], but the SAT is really about the individual. State K-12 testing is about cities and districts, and exists to provide (typically dubious) justification for changes to education policy[2]. The SAT and its kin help admissions officers sort students into spots in undergraduate and graduate programs. This post is about those predictive entrance tests.

Liberals repeat several myths about the SAT/ACT with such utter confidence that they’ve become a kind of holy writ. But myths they are.

  1. SATs/ACTs don’t predict college success. In fact, they do. This one is clung to so desperately by liberals that you’d think there was some compelling empirical basis for believing it. There isn’t. There never has been. They’re making it up. They want it to be true, and so they believe it to be true.

    The predictive validity of the SAT is typically understated because the comparison we’re making has an inherent range restriction problem. If you ask “how well do the SATs predict college performance?,” you are necessarily restricting your independent variable to those who took the SAT and then went to college. But many people take the SAT and do not go to college, and those who do enroll cluster toward the top of the score distribution. By leaving out the rest, you shrink the variance of the predictor, which mechanically attenuates the observed correlation and undersells the predictive power of the SAT. When we correct statistically for this range restriction, which is not difficult (see the sketch below), the predictive validity of the SAT and similar tests becomes remarkably strong. Range restriction is a known issue, it’s not remotely hard to understand, and your average New York Times digital subscription holder has every ability to learn about it and use that knowledge to adjust their understanding of the tests. The fact that they don’t points to the reality that liberals long ago decided that any information that does not confirm their priors can be safely discarded.
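
    A toy simulation makes the mechanism concrete. The numbers here are illustrative assumptions, not real SAT data: I posit a true score-GPA correlation of 0.6 and an arbitrary enrollment cutoff, then apply the standard Thorndike Case II correction, which rescales the restricted correlation by the ratio of the unrestricted to the restricted standard deviation of the predictor.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative assumptions: a true underlying correlation of 0.6 between
    # test score and college GPA, both standardized. Not real SAT data.
    n, true_r = 100_000, 0.6
    score = rng.standard_normal(n)
    gpa = true_r * score + np.sqrt(1 - true_r**2) * rng.standard_normal(n)

    # Range restriction: we only observe college GPA for test-takers above
    # an (arbitrary, hypothetical) admissions cutoff who then enrolled.
    enrolled = score > 0.5
    r_restricted = np.corrcoef(score[enrolled], gpa[enrolled])[0, 1]

    # Thorndike Case II correction: rescale by u, the ratio of the
    # unrestricted to the restricted standard deviation of the predictor.
    u = score.std() / score[enrolled].std()
    r_corrected = u * r_restricted / np.sqrt(1 + r_restricted**2 * (u**2 - 1))

    print(f"true r:       {true_r:.2f}")        # 0.60 by construction
    print(f"restricted r: {r_restricted:.2f}")  # ~0.36, badly attenuated
    print(f"corrected r:  {r_corrected:.2f}")   # ~0.60, recovered
    ```

    The point is simply that the restricted sample understates the relationship. Whether the real-world correction is this clean depends on assumptions (direct selection on the test score, linearity) that the psychometric literature addresses in detail.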