MMSD WKCE Report

The entry “The Madison School District on WKCE Data” is not accepting comments, so this entry will serve as a quick note.
The last pages of the MMSD document are a copy of the agenda for a workshop entitled “WKCE DATA ANALYSIS WORKSHOP” for principals and IRT Professional Development, held on May 1 at Olson Elementary School. In this half-day workshop, a couple of hours are spent introducing the software package from Turnleaf, which, according to their site, allows detailed analysis of student data.
This is promising, I would hope. Maybe we will finally see some real analysis of student data and begin to answer the “whys” of the WKCE results. See WKCE Scores Document Decline in the Percentage of Madison’s Advanced Students.

4 thoughts on “MMSD WKCE Report”

  1. This is good news. Brian Sniff showed me the Turnleaf application last summer when we were finishing up the Math Task Force report, so the information has been available, but I don’t think there has been a systematic approach to using it across the district. I think Dan Nerad is having a very positive impact on changing the district’s approach to data analysis.

  2. Given the data that the district already gets on each child’s WKCE test performance (see Infinite Campus for your child’s data), I don’t understand why the district hasn’t already been tracking changes in student performance for years.

  3. I agree with Jeff that there has never been a good reason for the lack of analysis of the student performance data that the District already collects.
    My sense, however, is that the staff is either incapable of performing the needed analyses, or the staff has never been allowed to do the jobs they are competent to do. It might be a mixture of both.
    MMSD kept much of their data on a legacy system that, they said, made it very difficult to perform analysis. Here is where I think a lack of basic skill comes into play. For the price of a PC (with gigabytes of memory and terabytes of disk storage) costing no more than $2,500, plus a little work to dump the legacy data into CSV format, import the files into a relational database (even Oracle has a free/cheap version of its flagship product, or there is open-source MySQL), and do a little “normalization” of the data for efficiency, the result is a powerful and robust system capable of any kind of simple or complex analysis (a rough sketch of this loading step appears at the end of this comment).
    Once the data is in a relational database, it is ripe for basic analysis using SQL by itself, especially if Oracle is used (it has many statistical extensions to standard SQL); a sample query is sketched at the end of this comment.
    If you need more (and you will), install R, a free download and a powerful open-source program for statistical analysis and graphing (UW Madison teaches undergraduate statistics courses using it), or pony up a couple of thousand dollars for SAS, and you have a complete system for any data analysis (both Exploratory Data Analysis (EDA) and confirmatory data analysis).
    All of the above for easily less than $5,000. Even way back in the early 1990s I was running PC-SAS on my Zeos PC, a machine far more limited than today’s PCs, which are supercomputers by almost any reasonable standard.
    That is, there was never a technical reason why MMSD could not have performed great and useful analyses of their data.
    What was/is missing at MMSD? What was/is missing at DPI? I think I know one possible answer.
    The Math Task Force results give me a sense of something that is missing. I remember reading the minutes of some meetings, and the conclusions drawn. As I understand it, the task force was expecting to analyze MMSD math data, and had gone into the process with the understanding that MMSD had all this data that would be useful to its charge. Alas, they were told there was no data: the data was not of sufficient quality to run “controlled analyses”, so they received little and did little or nothing with regard to local data.
    Here is where I think the failure lies. Neither the MMSD staff nor the Math Task Force members know how to perform basic statistical analyses. When I was working at WERC, back in the 1970s, before John Tukey’s 1977 book “Exploratory Data Analysis”, we were routinely performing exploratory data analysis (it didn’t have a name yet) to understand our data: clean the data, plot the data, look for multiple populations within groups, look for nonlinearity, look for outliers, recode the data, all of this before we did the “real” statistical analyses (the “confirmatory data analyses”: linear regression, chi-squares, path analysis, factor analysis, etc.). A rough sketch of these exploratory steps appears at the end of this comment.
    It does seem that people (“experts”) are into being sophisticated, letting complex math and modeling override common sense (financial derivatives, hedge funds, anybody?), and blindly stuffing data into some computer program that spits out results, all without the benefit of thought.
    Recently I read some research on brain function changes as people age (very relevant to me, now!). It was a sad example of a well-known researcher with little real data knowledge. She collected some data, ran a linear regression, and concluded that the treatment worked. Then she ran an analysis that showed the relationship had a non-linear component; it gave her a number that told her this, and also told her that the non-linearity was not too bad, so she was satisfied with her initial analysis.
    She did publish the scatter plot of the data (very useful, though seemingly not to her). The scatterplot showed a clear non-linearity, and would indicate to anyone with basic knowledge of data exploration and discovery that one needed to apply the log function to straighten it out (that is, linearize it), so that the regression would be more accurate and, just perhaps, give her an opportunity to discuss and discover why the relationship was non-linear (a small numeric illustration of this appears at the end of this comment). Obviously, not much thought went into her analysis. She’s getting paid for this?
    That is, real data analysis requires significant thought and time, of which Jeff’s entry is a good example, given the lack of useful data.
    Stuffing data into a spreadsheet and plotting some graphs to be published, all of which we’ve seen before, is not data analysis. MMSD’s “analysis” of the 2009 WKCE falls into that category.
    All of this raises the question of how well the Turnleaf software will work at MMSD. If the Turnleaf software can think for our staff, principals, and teachers, maybe it will be useful. If training in Turnleaf includes teaching staff, principals, and teachers how to think about data, it could be useful.
    However, if the training merely shows how to plug data in mechanically and churn out the obvious report for the Board and the public, we will continue to have no clue about what is going on in our schools, why students are graduating without the needed competence, and which curricula are beneficial and which are not.
    Where should staff go to begin to understand data? To start, I remember being impressed by several of the statistical workbooks that are part of the middle school CMP coursework! There was not enough detail, nor were there enough examples (this is CMP, of course), but it seemed at the time to be a good introduction.
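    To make the CSV-to-database step above concrete, here is a minimal sketch using Python and its built-in sqlite3 module rather than Oracle or MySQL. The file name, table layout, and column names are made-up placeholders, not MMSD’s actual schema.

    ```python
    # Hypothetical sketch: load a flat CSV dump of legacy data into a lightly
    # normalized relational database (SQLite standing in for Oracle/MySQL).
    import csv
    import sqlite3

    conn = sqlite3.connect("wkce.db")
    cur = conn.cursor()

    # One table for students, one row per test score in another.
    cur.executescript("""
        CREATE TABLE IF NOT EXISTS students (
            student_id INTEGER PRIMARY KEY,
            school     TEXT,
            grade      INTEGER
        );
        CREATE TABLE IF NOT EXISTS scores (
            student_id  INTEGER REFERENCES students(student_id),
            year        INTEGER,
            subject     TEXT,
            scale_score REAL
        );
    """)

    # The legacy dump is assumed to be one wide row per student per year,
    # with columns: student_id, school, grade, year, reading, math.
    with open("legacy_dump.csv", newline="") as f:
        for row in csv.DictReader(f):
            cur.execute("INSERT OR IGNORE INTO students VALUES (?, ?, ?)",
                        (row["student_id"], row["school"], row["grade"]))
            for subject in ("reading", "math"):
                cur.execute("INSERT INTO scores VALUES (?, ?, ?, ?)",
                            (row["student_id"], row["year"], subject, row[subject]))

    conn.commit()
    conn.close()
    ```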
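    And a taste of the “basic analysis using SQL by itself” mentioned above, run against those same hypothetical tables: mean score and score range by school, subject, and year. Standard SQL aggregates only; Oracle’s statistical extensions would add medians, correlations, regressions, and the like.

    ```python
    # Hypothetical sketch: a simple aggregate query against the tables above.
    import sqlite3

    conn = sqlite3.connect("wkce.db")
    query = """
        SELECT s.school,
               sc.subject,
               sc.year,
               COUNT(*)            AS n_students,
               AVG(sc.scale_score) AS mean_score,
               MIN(sc.scale_score) AS low,
               MAX(sc.scale_score) AS high
        FROM scores sc
        JOIN students s USING (student_id)
        GROUP BY s.school, sc.subject, sc.year
        ORDER BY s.school, sc.subject, sc.year
    """
    for row in conn.execute(query):
        print(row)
    conn.close()
    ```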
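    The exploratory steps listed above (plot the data, look for multiple populations, look for outliers) are also easy to illustrate. This is a rough sketch with made-up scores, in Python with numpy and matplotlib rather than the R or SAS I mention, just to show the kind of looking that should happen before any model is fit.

    ```python
    # Hypothetical sketch: exploratory checks before any "confirmatory" analysis.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    # Made-up data: two sub-populations hiding inside one "group" of scores.
    scores = np.concatenate([rng.normal(450, 25, 300), rng.normal(540, 20, 120)])

    # Plot the distribution first: a second bump hints at multiple populations.
    plt.hist(scores, bins=40)
    plt.xlabel("scale score")
    plt.title("Look at the data before modeling it")
    plt.show()

    # Flag outliers with a simple rule (outside 1.5 * IQR of the quartiles),
    # then inspect them by hand before deciding whether to recode or drop.
    q1, q3 = np.percentile(scores, [25, 75])
    iqr = q3 - q1
    outliers = scores[(scores < q1 - 1.5 * iqr) | (scores > q3 + 1.5 * iqr)]
    print(len(outliers), "potential outliers to inspect")
    ```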
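    Finally, a small numeric illustration of the log-transform point above, with made-up data rather than the study’s: when a scatter plot curves, fitting the regression on log(x) both improves the fit and is itself something worth thinking about.

    ```python
    # Hypothetical sketch: compare a straight-line fit on raw x vs. log(x)
    # when the true relationship is logarithmic.
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(1, 100, 200)
    y = 10 * np.log(x) + rng.normal(0, 1.5, x.size)  # curved relationship plus noise

    def r_squared(x, y):
        slope, intercept = np.polyfit(x, y, 1)       # ordinary least squares line
        residuals = y - (slope * x + intercept)
        return 1 - residuals.var() / y.var()

    print("linear fit on raw x:  R^2 =", round(r_squared(x, y), 3))
    print("linear fit on log(x): R^2 =", round(r_squared(np.log(x), y), 3))
    # The second fit is clearly better, and the fact that log(x) straightens the
    # curve is a finding to discuss, not just a nuisance to wave away.
    ```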

  4. Larry:
    Your last two paragraphs are the most salient ones. Collecting data for the sake of having a bunch of numbers is pretty useless. The real reason to collect data is to put it into the hands of the people who need it most (teachers, and to some extent building principals) and then to use it in meaningful ways to change and improve both teaching and, in the long run, curriculum.
    As one who served a district (Monona Grove) that has been deeply entrenched in data collection and utilization for a while (the entire decade, really), I can say this isn’t simple. Training teachers on how to use data is both really important and time-consuming. At Monona Grove, when we first began this in 2000, we found many teachers were initially unfamiliar with how to use assessment data, uncomfortable with its implications, and unable to transform the data into anything meaningful in the classroom, i.e., better instructional practices. But we hammered away at it, worked very hard to train teachers, spent a lot of time revamping our in-service programs, created some new approaches (district data retreats, for one example), and tried to develop a culture (particularly at the building level) around using data to benefit kids.
    I’d like to think we’ve made some progress, and I think I can point to some ways in which we have. (Our district assessment coordinator likes to say that teachers once afraid of seeing assessment data are now demanding to see it sooner and sooner, and in more depth than before). But it’s still a work in progress. The key point here is that it takes a while to gear up for this, to get teachers and building staff pointed in the same direction, and — most importantly — to use assessment data in the correct and proper ways. This is not easy stuff; it takes a lot of time, commitment and willingness to do things differently. I’d agree with my good friend Jill J. that leadership advocating for this is important (actually, it’s pretty essential). A superintendent and school board that take assessment data seriously, and advocate for using it to improve student achievement, can make a real difference.

Comments are closed.