A Comparison of Madison Elementary, Middle & High Schools to other Wisconsin Districts

Madtown Chris:

  • This blog contains my personal analysis of Dane County schools, based primarily on the state testing data. The posts are intended to be read sequentially, more or less. So far it’s not really a stream-of-consciousness like a normal blog; it’s more of a straightforward analysis.
    Why bother with this?
    Our friends recently mentioned that they were concerned about the Madison schools, among other things, and were thinking about moving to a small town like Lodi. Other people we know are putting their kids in private school. I thought that Madison schools might not be in the dismal situation that seems to be the conventional wisdom today, but I couldn’t say for sure.
    I stumbled upon the State of Wisconsin web site that provides test scores for various grades for every school in Wisconsin and decided to conduct an analysis to see what I could discover about the various schools in the area. Should we move? Should they move? Should you be concerned about the schools? I attempt to answer these questions given numerous assumptions.
    Summary of Results
    Madison has the best schools in Dane County and among the best in all of Wisconsin.
    But, and it’s a big BUT, you have to make sure you’re in the right one. Choose poorly and you get a relatively bad school.

  • High Schools
  • Middle Schools
  • Elementary Schools

14 thoughts on “A Comparison of Madison Elementary, Middle & High Schools to other Wisconsin Districts”

  1. These data are the results for white/Caucasian students. Here is some of what the blogger assumed:
    “I only collected and compared scores of white/Caucasian students. I did this for two reasons.
    First, I was specifically concerned with my kids’ opportunities not yours. If you’re non-white, I’m sorry, you can try to figure this out yourself for your own demographic and you probably should. If they had more specific demographics that relate to educational performance, I would have used that as well (e.g. parents educational level, etc.) but they only provide race and gender. My reasoning is that if a school is good for my kids’ particular demographic then it’s good for their education.
    My guess is that in this data set, race is actually a proxy for economic class which, I think, is probably a better predictor for academic achievement. So my guess is that these results apply to all races having a similar economic status to the average white Madisonian.
    Second, I had a feeling that the well-known and unfortunate “achievement gap” between whites and minorities causes the statistics to be not comparable between schools with varying minority populations.
    Clearly the achievement gap is a problem in schools with significant minority populations and needs to be addressed. I’m just not analyzing or addressing it here.”
    Obviously, this analysis leaves a lot out. But it does highlight one thing–if 72% of the white students at West have average scores in the advanced category, one might fairly assume the current high school design is working well for them. So those parents shopping schools by looking at the comparison this blogger made (sketched after this comment) might want to take notice of the high school design overhaul Rainwater is proposing (and stacking the deck for in his committee appointments).
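    Mechanically, the comparison the blogger describes is a simple filter-and-rank over the state score tables. Here is a minimal sketch of that kind of analysis in Python; the post does not say how the data were actually laid out, so the file name and column names are hypothetical stand-ins.

```python
import csv

# Sketch of the comparison described above: restrict the WKCE results to
# one demographic group, then rank schools by the percentage of students
# scoring "advanced". The file and columns ("wkce_scores.csv", "school",
# "group", "pct_advanced") are hypothetical stand-ins, not the actual
# DPI layout.

with open("wkce_scores.csv", newline="") as f:
    rows = [r for r in csv.DictReader(f) if r["group"] == "White"]

rows.sort(key=lambda r: float(r["pct_advanced"]), reverse=True)

for r in rows[:10]:  # top ten schools for this group
    print(f"{r['school']}: {r['pct_advanced']}% advanced")
```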

  2. I would hope that parents, who are also citizens of this community, would not be satisfied with schools that “work well” for their children but not for the children of their neighbors and fellow citizens. The kind of thinking that says the children who live 3 miles away are not a concern of mine is a continual disappointment to me and unworthy of the many benefits we enjoy living in a relatively affluent community.
    Beyond that, I find the idea of determining “relatively bad schools” from test data absurd. Tests do mostly reflect the demographics of the student population, so some schools with lower scores may in fact be doing a great job for their needy student population. Maybe the top-scoring schools should be a lot higher-scoring given the advantages their students enjoy. All this conclusion really means is that the blogger is recommending that parents avoid the urban poor.
    Kay Cahill

  3. I agree with all of Kay’s comments. Test data is interesting, but doesn’t tell you much at all about the quality of education in a school. Given what this particular blogger was looking at…white kids who score in the “advanced” category…I’d be shocked if West wasn’t at the top of the list. And West will probably always be at the top of the list, high school redesign or not. West happens to have a large demographic of students that will always do well on standardized tests…children of well educated parents, many of whom work at UW and place a very high value on education.
    Since demographics have far more influence on test scores than just about anything else, you have to be really careful about drawing any conclusions at all by looking at this type of ranking. Maybe the best you can say is that the schools that have high scores are doing fine, but the inverse isn’t true…you can’t really draw any conclusions about schools with low scores. There are many schools in Madison that provide a very high quality education, but will never be at the top of this sort of ranking because they serve large numbers of disadvantaged kids. Parents who really care about quality of education should look beyond the test scores when evaluating schools.

  4. Chris, thanks for putting together this analysis. Work like this can give all of us more insight into the MMSD and education policy in general.
    I also echo Joan Knoebel’s thought that the entire high school system probably isn’t broken. Certainly parts desperately need to be fixed, but they should be fixed without destroying what works.
    I’d advise the redesign task force to look nation-wide for high schools that succeed in the areas where the MMSD fails, rather than follow the usual course of trying to reinvent the wheel on the unfounded belief that Madison students are unlike all other students in the world.

  5. In no societal group are 72% of the people advanced. Any testing system that would come to that conclusion is devoid of rigor. Tests designed by educators, for educators, are deluding society into thinking everything is just fine. It’s not.
    What’s worse is that the parents and kids actually believe they are advanced. Advanced compared to what? A society where 20% of HS kids don’t know where the Pacific Ocean is?
    Sure, they’ll be able to get by in life. But intellectually advanced by any reasonable global or natural standard? Nope.
    rcs

  6. As a followup to my previous post, I invite all the advanced HS students out there to quickly answer these 5th grade algebra problems from Singapore Math.
    Geese Problem. (SAT Level 5) “A flock of geese on a pond were being observed continuously.
    At 1:00 P.M., 1/5 of the geese flew away.
    At 2:00 P.M., 1/8 of the geese that remained flew away.
    At 3:00 P.M., 3 times as many geese as had flown away at 1:00 P.M. flew away,
    leaving 28 geese on the pond.
    At no other time did any geese arrive or fly away or die. How many geese were in the original flock?”
    and:
    Tart Problem. (From a 5th grade Singapore math textbook) “Mrs. Chen made some tarts. She sold 3/5 of them in the morning and 1/4 of the remainder in the afternoon. If she sold 200 more tarts in the morning than in the afternoon, how many tarts did she make?”
    Parents: Sit your advanced HS kids down and see if they can answer both in 5 minutes (worked solutions follow this comment).
    rcs
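    For parents who want an answer key, here are worked solutions, assuming each problem is read literally:

```latex
% Geese: let g be the original flock.
\begin{align*}
\text{after 1:00:}\quad & g - \tfrac{1}{5}g = \tfrac{4}{5}g\\
\text{after 2:00:}\quad & \tfrac{7}{8}\cdot\tfrac{4}{5}g = \tfrac{7}{10}g\\
\text{after 3:00:}\quad & \tfrac{7}{10}g - 3\cdot\tfrac{1}{5}g = \tfrac{1}{10}g = 28
  \;\Rightarrow\; g = 280
\end{align*}
% Tarts: let t be the number Mrs. Chen made.
\begin{align*}
\text{morning:}\quad & \tfrac{3}{5}t, \qquad
\text{afternoon:}\quad \tfrac{1}{4}\bigl(t - \tfrac{3}{5}t\bigr) = \tfrac{1}{10}t\\
\text{difference:}\quad & \tfrac{3}{5}t - \tfrac{1}{10}t = \tfrac{1}{2}t = 200
  \;\Rightarrow\; t = 400
\end{align*}
```

    Both answers check against the original statements: a flock of 280 leaves 28 geese on the pond, and 400 tarts give a morning-minus-afternoon difference of 200.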

  7. A few thoughts on the term advanced:
    First, it wasn’t clear from the link in which subject students were scoring at the advanced level.
    Second, regardless of the subject, the state categories are based on benchmarks for a particular grade level. “Advanced”, in this context, simply means that students have skills that exceed criteria for proficiency at a high school (not college) level. Is it really so unbelievable that, in a school which has a higher than typical demographic of college educated parents, higher percentages of students score advanced than would be typical statewide?
    It would be interesting to see the correlation between the percentage of students scoring at the “advanced” level and the percentage of students from the high school who are accepted to colleges and universities. Allowing for some variance due to personal choice and economic circumstance, it seems to me that if those percentages are in the same ballpark, then the term is appropriate in the context in which it is being applied. This doesn’t mean that 72% of students at the school will also be advanced as college students. The criteria for proficiency change as the goals of the institution change. However, it seems unlikely to me that as many of the students who merely test at “proficient” in high school will go on to colleges and universities–although they may well go on to equally lucrative futures in fields not requiring a university degree.
    I would think that our general goal would be for as many students as possible to be proficient in the areas tested by the time they graduate high school. Additionally, I think it should be our goal to provide instruction that is strong enough to move large percentages of our students–those who are interested in a college education–beyond proficiency and into the advanced performance levels needed for college success.
    Whether “advanced” should be synonymous with “excellence” is (in my mind) a separate question.

  8. Yes, some schools have a significantly higher percentage (of white kids) scoring in the advanced range. Others have articulated, in different language, that there is not likely to be much cause and effect between a given school and a student’s score.
    Demographics (financial support, family support, neighborhood support) account for most of the variation in students’ scores.
    But the percentage scoring advanced is the figure most affected by demographics, and it has little to do with a school’s instructional quality, given how the WKCE test is supposed to work.
    Unlike the NAEP test, intelligence tests, ACT/SAT tests, the WKCE is supposed to measure how well kids are meeting the core requirements of the given grade in school. So, if the state core standards say by 4th grade, kids need to know how to add, subtract, and multiply single and 3-digit numbers, the test is supposed to measure this, even if 90% of the kids will get 100% of such questions correct.
    Basically, educational standards are set up to prioritize the knowledge to be transmitted and learned in a particular grade. From this unfortunately laundry-list-like set of standards, each district, school, and teacher is supposed to allocate the time and energy to implement (teach) this knowledge.
    The priorities can be ranked as 1, 2, 3, 4. The teacher must be sure that all kids can perform at the core (1). As a student shows he/she has reached priority (1) knowledge, teach them toward priority (2), then (3) and finally (4).
    So, in order to evaluate how well a particular school is doing, one needs to look at all levels: minimal, basic, proficient, advanced. If a school shows a lot of kids at the minimal level and a lot at the advanced level, but not very many at basic or proficient, then I would suspect teacher/school quality — they’re not teaching basic knowledge. If you see most kids at the proficient level, then likely you have a good school. (A rough sketch of this heuristic follows this comment.)
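    To make that heuristic concrete, here is a minimal sketch in Python; the category names follow the WKCE levels, but the 20%/50%/70% thresholds and the example distributions are illustrative assumptions, not DPI criteria.

```python
# Sketch: flag a school whose score distribution is split between the
# extremes (many minimal plus many advanced, few in between), which the
# comment above suggests may indicate core material is not being taught.
# Thresholds are illustrative assumptions, not DPI criteria.

def flag_distribution(minimal: float, basic: float,
                      proficient: float, advanced: float) -> str:
    """Each argument is the percentage of students at that WKCE level."""
    middle = basic + proficient
    if minimal >= 20 and advanced >= 20 and middle < 50:
        return "suspect: bimodal split, basics may not be taught"
    if proficient + advanced >= 70:
        return "likely fine: most students at or above proficient"
    return "inconclusive"

# Hypothetical distributions (percentages summing to 100):
print(flag_distribution(minimal=25, basic=10, proficient=15, advanced=50))
print(flag_distribution(minimal=5, basic=15, proficient=55, advanced=25))
```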

  9. Teacher L:
    Good observations, and I’d agree with much of what you said.
    However…although it’s a fairly abstruse subject, there are, somewhere on DPI’s website, definitions of what it means to be “proficient” or “advanced” or even “basic” in any given subject, and they are important definitions, because they are the definitions that DPI uses to determine whether students meet at least the proficiency standard, which is the one on which districts/schools are judged for NCLB compliance. (Remember that under NCLB, the feds required students to meet certain standards, but left it up to the states to determine those standards. In Wisconsin, that means being “proficient” on the WKCE tests.)
    This may be a gross overgeneralization (and others with more working knowledge can chime in), but “advanced” has essentially come to mean achieving beyond grade level (e.g., 3rd grader Johnny reads at a 4th grade level). “Proficient” means achieving at grade level.
    I’d argue that one concern of these definitions is that the standards are pretty loose, and the bar is set fairly low. A fair comparison might be Wisconsin’s achievement levels on its own tests vs. some national standardized test, like the NAEP assessment. The most recent NAEP assessment (taken by students throughout the country) suggests some states have much more rigorous “standards” for deciding whether or not their students are achieving proficiency. I would argue Wisconsin’s standards aren’t all that rigorous. Recent NAEP data suggest Wisconsin has a ways to go, relative to other states, when it comes to demanding high standards of achievement from its students and students achieving those standards; to cite one example, Wisconsin is well behind Minnesota — a state with very similar socio-economic demographics — on many NAEP measurements.
    The primary goal, it seems to me, isn’t necessarily to get more students “achieving” at the “advanced” level so they can go off to a four-year college. Rather, it seems schools ought to be seeking ways to ramp up achievement for all students, regardless of where they end up after high school — a job, the military, an apprentice program, a two-year vocational college, or a four-year college. Every one of those post-high school sectors is becoming more demanding of the skills they want students to have.

  10. My experience with state testing is all at the elementary school level. Based on the included items, I haven’t felt that the standards are particularly low, but some areas seem pretty random to me. Admittedly, because we don’t actually get scored tests back, I find it difficult to evaluate how high the standards are. Unlike standardized achievement tests (WIAT, Woodcock) that determine a “floor” and a “ceiling” based on items that gradually increase in difficulty, the WKCE doesn’t have an obvious progression of difficulty. Possibly the standards are too low, but I find that difficult to determine from my vantage point. My chief complaint has to do with the structure of the assessment.
    In the discussion of what advanced does or doesn’t mean, one area that is often overlooked is the role that parents have played in lowering standards. There are certainly many families who truly do want rigorous instruction and standards for their children. However, there are also a lot of families who want the grades–regardless of effort or quality of product. These families have put a lot of pressure on teachers and schools NOT to hold students to high standards. A lot of students have adopted the belief that they are entitled to grades. A few years ago, working with a group of eighth graders in an extracurricular education program, I decided to give a pop quiz. I left the room for a while to “get something”. When I came back, I collected the quizzes and then passed out surveys asking if they cheated and if they thought anyone else had cheated. All but two students cheated. This class came from an upper middle class demographic. It was an ungraded class with no relevance to anyone’s future. The students vigorously defended their decision to cheat based on my decision to give a pop quiz. It was one of the most depressing hours I have ever spent in a classroom. I’d like to say that it was an anomaly, but there have been similar stories in the news and in the lives of my colleagues, some of whom have been forced to let students off the hook for cheating or for choosing not to do assignments on time. I would argue that true excellence is not possible if our most capable and advantaged students are never allowed to fail. Rigor requires commitment not only by the instructors, but also by students and their families.

  11. Teacher L:
    Doesn’t the ACT/SAT serve as something of a check on that practice? Who is UW-Madison more likely to enroll — Johnny the casual cheater with grade-pushing parents, who ends up with a 4.0 and an ACT of 23 (a score reflective of his slacker approach to school, since the ACT measures what you learn in high school), or Janey the striving achiever, filling up her schedule with demanding classes, supported by parents who accept B’s for challenging work, a GPA of 3.6, and an ACT of 32 (a score also reflective of her approach to schooling)? I know which one I’d take.
    After all, the ACT is (in part) specifically designed to weed out not just all of the differences students are likely to encounter in a given school (rigor of curriculum, grade inflation), but also weed out students who may excel at getting good grades but haven’t been very demanding of their learning in high school.

  12. But the ACT only accounts for about 25% of the variability in college success (a correlation of roughly 0.5). I know of kids who received 32+ on the ACT and flunked out, while kids receiving 20 on the ACT did splendidly.
    ACT tests are not rigorous in what they measure, and they certainly don’t measure all that is important.
    The ACT does not weed out kids who excel in test-taking skills, but colleges do weed out kids with poor test-taking skills but otherwise good academic skills if they place too much emphasis on these single numbers.
    For all the talk of requirements of academic excellence, the adult decision makers in these areas certainly don’t model that academic excellence themselves. Give them anything that sounds objective and can have a real number attached, and they simply swoon.
    I can guarantee that these folks have no knowledge of psychometrics, and even more importantly, do not know and understand the details of the strengths and weaknesses of the particular tests they rely on. (Except for the literature from the testing companies, who swear by their own tests — all the way to the bank).
    There is much literature on testing, and it is quite enlightening to review the real issues of testing, to understand how test items are constructed, how the test items are evaluated, how the labels of minimum, basic, proficient, advanced are applied, what items are chosen. There is little “objectivity” in reality here.
    For example, for most intelligence-type tests (achievement tests, ACT, SAT, NAEP), the purpose is to distribute the scores (and items) into a bell curve. So items that most test takers get right do not make it into the test, nor do items that most test takers get wrong. No, the proportion answering correctly must be in the neighborhood of 40%–60%. (A rough sketch of this item screening follows this comment.)
    If a test item is too easy, often the solution is to rephrase the question to make its language more obtuse (more difficult vocabulary, more complex sentence structure). Even if it is a science, math, or social studies question, increasing the language difficulty increases the item’s correlation with English proficiency — it no longer really measures the area supposedly being tested.
    Also, questions where the “smart” kids get the item wrong, and the “dumb” kids get it right — well, that doesn’t make it into the test either.
    Example? Ability to spell is not correlated with “intelligence” and is not typically tested. Why? Easy: a lot of “smart” kids cannot spell well, while many “dumb” kids can. Oops! The correlation goes the wrong way!
    At best, ACT/SAT scores are a stand-in. They simply measure rapid cognition of multiple-choice items that can be mostly scored automatically. Cognition is a very basic measure of what has been learned, and it is a stand-in because if one does well in the cognition, as a learner, one then has more time to understand, evaluate, and synthesize the material being learned. But, as I related above, the prediction accuracy of cognition measures for college success is only 25%. That means that, for the most part, kids who have accomplished cognition skills too often do not proceed to push other mental activities — they understood enough to get by in H.S.
    As far as the ACT measuring what you learn in high school — absolutely not, unless the schools are teaching to the test, and then the kids are not learning very much at all. Here that means the kids are not learning the core material — the material that is not in fact being measured on the ACT exam, precisely because it is core material that almost everyone should know.
    An ACT of 23 does not necessarily indicate a slacker — how elitist! A 22 is roughly the average ACT score.
    I’m not opposed to appropriate testing — it’s necessary but one needs to be very cognizant (and skeptical) of the results, and understand the limited place such test results should occupy.
    Some of you might be too young or have never attended your HS reunions. For those of us who have, the belief that the “smart” kids are destined for greatness and the “dumb” kids are society’s also-rans turns out to be in for a rude awakening. With some exceptions, the “dumb” kids, you will find, do quite well, and some of the “smart” kids, as adults, seem to have gone nowhere.
    Life is funny that way!
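    For anyone curious how that item screening works mechanically, here is a minimal sketch of classical item analysis in Python. The 40%–60% difficulty band echoes the comment above; the simulated responses and the 0.2 discrimination cutoff are illustrative assumptions.

```python
import numpy as np

# Sketch of the item screening described above: keep items whose
# difficulty (proportion answering correctly) falls in a middle band,
# and drop items whose discrimination (correlation between the item and
# the rest of the test) "goes the wrong way". The responses matrix is
# simulated; a real analysis would use actual answer data.

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(200, 10))  # 200 takers x 10 items, 1 = correct
total = responses.sum(axis=1)

for item in range(responses.shape[1]):
    scores = responses[:, item]
    difficulty = scores.mean()  # proportion who answered correctly
    # Point-biserial discrimination: correlate the item with the total
    # score excluding the item itself.
    rest = total - scores
    discrimination = np.corrcoef(scores, rest)[0, 1]
    keep = 0.40 <= difficulty <= 0.60 and discrimination > 0.2
    print(f"item {item}: difficulty={difficulty:.2f} "
          f"discrimination={discrimination:.2f} keep={keep}")
```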

  13. Larry:
    Well, you sure wrote a lot; I’m just not sure what you said.
    My main point was this — selective colleges and universities, like UW-Madison, use the ACT as one measure to assess which students they accept, and which they don’t. You may think that’s a bad idea, but if you do, that’s an awfully big windmill you’re tilting at.
    Should it be the sole determinant? No. High school achievement (GPA), high school record (courses taken, rigor of those courses), breadth of activities outside schooling, leadership positions in and out of school, demonstration of writing competency through essays — all should be taken into consideration when making the hard choices of which students to accept.
    But do you think a student who graduates from Rural School A, valedictorian of a senior class of 30 students, who had all of four English (and four science, four social studies, and four math) courses to choose from during his or her entire high school career, and maybe one foreign language, and graduates with a 4.0, is prepared at the same level for college work as, say, a top 10 percent graduate from West High, who’s taken a wealth of AP classes, two foreign languages, English and American literature classes, all manner of electives, and who happens to have a GPA of 3.8 (or whatever the cutoff is for the bottom end of West High’s top 10 percent graduating class)?
    How does a college distinguish between those two students, both arguably high-achieving? Isn’t the ACT one valid tool to sort out those students truly ready for the academic challenge of a place like UW-Madison (or Carleton, or Dartmouth, or similar) vs. those who are ready for a regional UW campus? How would you sort them? What tools would you use?

  14. Phil M,
    I can’t claim enough familiarity with the ACT or SAT to know whether or not they serve as a check on grade inflation. When I commented, I was thinking more about the ongoing critiques of education as not being rigorous enough these days. On other threads, posters have voiced concerns that students are not well prepared for their futures and that courses don’t challenge students enough. The usual assumption seems to be that the schools and tests are lowering standards either to support heterogeneous instruction or to give a possibly false appearance of widespread achievement. What I rarely see addressed is the pressure on schools by families who expect A’s. In my previous post, I shared an experience I had with students who felt entitled to success without effort (and who felt justified cheating if there was a possibility of not doing well). Another experience I would share is that of a friend who worked in two high schools, one public and one private. Some families of students who did not earn A’s (late assignments, low-quality classwork, average test performance) first pressured her to change the grades, then (when she stuck to her standards) went to the administration. The administrator decided that she should provide those students with extra credit work to bring their points up to an A range or she should change her grading scale.
    Our standard of living has changed so much over the last few generations, and we have so many more conveniences in our lives than we did. I think that as a society/culture/community we have come to expect that nothing should be too time consuming or difficult, and we are raising our children (I am speaking very broadly here…) with these expectations as well. That mindset is not well suited to rigorous academic standards, and when the two clash, teachers who have tried to hold the line have often lost.
    I don’t want to oversimplify here. I am by no means dismissing other contributing factors to our students being less “advanced” than we want them to be–I’m just adding another angle to the discussion.
