Infographic of the Day: Does Adding Teachers Improve Education?

Cliff Kuang:

Politicians seem to have temporarily set aside the debate about improving our schools, but you can bet that when the issue comes up again, one solution will be raised over and over: improving student/teacher ratios–that is, hiring more teachers. But is that really a silver bullet for raising achievement? What sort of results can we expect?
The graph above offers a few clues–but unraveling them takes a bit of explanation. The crucial point: adding teachers might improve a state's performance relative to its own past results, but it's a weak lever for producing aggregate improvements.
So, let’s dig into the graph. Each of the lines–colored in blue or green–represents data from a single state. To the left is that state’s student/teacher ratio; to the right is that state’s average SAT score.
The graph looks sort of confusing at first, but it actually does a pretty good job of showing that student/teacher ratios and SAT scores aren’t closely related. If they were highly correlated, you’d expect to see lines with slopes all at a 45-degree angle (whether sloping up or down). But as you can see, they’re actually a tangle. The states with the highest SAT achievement have relatively low student/teacher ratios–but those ratios alone don’t account for their performance, since plenty of other states have similar ratios but don’t score nearly as well.

One thought on “Infographic of the Day: Does Adding Teachers Improve Education?”

  1. From NCES:
    “Student/teacher ratios do not provide a direct measure of class size. The ratio is determined by dividing the total number of full-time-equivalent teachers into the total student enrollment. These teachers include classroom teachers; prekindergarten teachers in some elementary schools; art, music, and physical education teachers; and teachers who do not teach regular classes every period of the day. Teachers are reported in full-time-equivalent (FTE) units. ”
    The author here is assuming that class size is the same as the student/teacher ratio. He then assumes SAT scores are a good measure; they are not, for several reasons, one being that not everyone takes the SAT, so the relationship is between different populations of students. These are average SATs and average student/teacher ratios: a pretty lousy way to determine whether there is a relationship.
    Finally, the author doesn’t have an understanding of correlation: he assumes the slope of the line between the independent and dependent variables indicates the strength of the correlation. It does not in general, and it certainly has nothing to do with the angle of the graphed line.
    The angle of the line depends entirely on how the graph is constructed: stretch the graph along the x-axis and the angle decreases; compress it and the angle increases. Discussing the angle of the line is quite useless, and it has little to do with the slope of a “regression” line.
    The slope itself depends on the x and y units of measure, and the author’s graph uses two different units, or seems to. The variables need to be converted to z-scores and then graphed with equal units along the x- and y-axes before any statement about slope or angle means anything.
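    The distinction the commenter is drawing can be checked directly. The sketch below uses made-up numbers (the ratios and SAT values are illustrative, not the data behind the infographic) and Python’s standard `statistics` module: rescaling the x-variable changes the regression slope but leaves the Pearson correlation untouched, and after z-scoring both variables the slope equals the correlation.

    ```python
    import statistics as st

    # Hypothetical per-state data: student/teacher ratio (x), average SAT (y).
    # These numbers are invented for illustration only.
    ratios = [13.5, 14.2, 15.8, 16.1, 17.0, 18.3, 19.5, 21.0]
    sats = [1120, 1180, 1050, 1090, 1010, 1060, 990, 980]

    r = st.correlation(ratios, sats)            # unit-free strength of association
    slope, intercept = st.linear_regression(ratios, sats)

    # Rescaling x (same data, different units) changes the slope 100-fold...
    scaled = [x / 100 for x in ratios]
    slope_scaled, _ = st.linear_regression(scaled, sats)
    assert abs(slope_scaled - slope * 100) < 1e-6 * abs(slope * 100)

    # ...but the correlation is unchanged: it does not depend on units.
    assert abs(st.correlation(scaled, sats) - r) < 1e-9

    # After converting both variables to z-scores, the regression slope
    # equals the Pearson correlation r, so "45 degrees" would mean |r| = 1.
    def zscores(xs):
        mu, sd = st.mean(xs), st.pstdev(xs)
        return [(x - mu) / sd for x in xs]

    z_slope, _ = st.linear_regression(zscores(ratios), zscores(sats))
    assert abs(z_slope - r) < 1e-9
    ```

    In other words, only on a z-scored plot with equal axis units does the angle of a fitted line say anything about correlation strength, which is the commenter's point.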
    The author’s conclusion that the average state aggregate of student/teacher ratio is not a good predictor of the average state aggregate of SAT scores deserves a “duh”.

Comments are closed.