School Improvement Rubrics

School improvements, desired by all of us, need financial resources, but primarily the need is for quality strategic, measurable plans and goals. Ed Blume has posted a few such rubrics from Victoria Bernhardt, but he has not received much input on these worthy rubrics.
That was certainly my experience as well, as I posted a full Excel spreadsheet of Bernhardt’s rubrics on this site back on August 21, 2005.
I will continue with three more school improvement rubrics from Bernhardt: Continuous Improvement, Information and Analysis, and Student Achievement.
Bernhardt’s rubrics allow one to subjectively rate each of these measures on Approach, Implementation, and Outcome on a scale of 1 to 5.
The above three rubrics add to this important list. These rubrics are designed to measure individual schools, so those with more intimate knowledge of a particular school will be better able to apply them, though using these rubrics to evaluate the District would also be useful. I think finding such raters at the school level will be nigh impossible because of what I perceive to be the very closed nature of the school administration at MMSD as a whole.
To start the exercise, look at Continuous Improvement.
Level ONE:

Approach: Neither goals nor strategies exist for the evaluation and continuous improvement of the school organization or for elements of the school organization.
Implementation: With no overall plan for evaluation and continuous improvement, strategies are changed by individual teachers and administrators only when something sparks the need to improve. Reactive decisions and activities are a daily mode of operation.
Outcome: Individuals struggle with system failure. Finger pointing and blaming others for failure occurs. The effectiveness of strategies is not known. Mistakes are repeated.

Level TWO:

Approach: The approach to continuous improvement and evaluation is problem solving. If there are no problems, or if solutions can be made quickly, there is no need for improvement or analyses. Changes in part of the system are not coordinated with all other parts.
Implementation: Isolated changes are made in some areas of the school organization in response to problem incidents. Changes are not preceded by comprehensive analyses, such as an understanding of the root causes of problems. The effectiveness of the elements of the school organization, or changes made to the elements, is not known.
Outcome: Problems are solved only temporarily, and few positive changes result. Additionally, unintended and undesirable consequences often appear in other parts of the system. Many aspects of the school are incongruent, keeping the school from reaching its vision.

Level THREE:

Approach: Some elements of the school organization are evaluated for effectiveness. Some elements are improved on the basis of the evaluation findings.
Implementation: Elements of the school organization are improved on the basis of comprehensive analyses of root causes of problems, client perceptions, and operational effectiveness of processes.
Outcome: Evidence of effective improvement strategies is observable. Positive changes are made and maintained due to comprehensive analyses and evaluation.

Level FOUR:

Approach: All elements of the school’s operations are evaluated for improvement and to ensure congruence of the elements with respect to the continuum of learning students experience.
Implementation: Continuous improvement analyses of student achievement and instructional strategies are rigorously reinforced within each classroom and across learning levels to develop a comprehensive learning continuum for students and to prevent student failure.
Outcome: Teachers become astute at assessing and predicting the impact of their instructional strategies on individual student achievement. Sustainable improvements in student achievement are evident at all grade levels, due to continuous improvement.

Level FIVE:

Approach: All aspects of the school organization are rigorously evaluated and improved on a continuous basis. Students, and the maintenance of a comprehensive learning continuum for students, become the focus of all aspects of the school improvement process.
Implementation: Comprehensive continuous improvement becomes the way of doing business at the school. Teachers continuously improve the appropriateness and effectiveness of instructional strategies based on student feedback and performance. All aspects of the school organization are improved to support teachers’ efforts.
Outcome: The school becomes a congruent and effective learning organization. Only instruction and assessment strategies that produce quality student achievement are used. A true continuum of learning results for all students.

*****
Now, on this one measure called Continuous Improvement, how would you rate the Board of Education, MMSD, and one or more individual schools that you are familiar with?

4 thoughts on “School Improvement Rubrics”

  1. Maybe the silence on the rubrics says more about the rubrics than posters would care to admit? My personal feeling is that much of the MMSD is at levels 3, 4 and 5. Some of it is at 1 & 2. But rubrics are artificial impositions of someone’s priorities into a form that isn’t all that useful to me. So frankly, I don’t care to get all anal about “where the MMSD fits” on some artificially imposed rubrical scale. Rubrics work great the other way: a codified expectation of performance that is accepted by those being held accountable to the rubrics’ standards.

  2. David,
    Do you think Madison has “a codified expectation of performance that is accepted by those being held accountable to the rubrics’ standards?”

  3. A terrifically useful project would be to engage parents, staff, students, board, and the entire community in developing rubrics. With those guides, we’d have a better chance of agreeing on where the MMSD stands on commonly defined standards and where the MMSD needs to go in the future.
    It would take months to agree on the rubrics and then decide where the MMSD falls, so an organization, such as United Way, would probably need to fund and organize the work.

  4. I find these rubrics useful, though I would modify them in some narrow ways. But are they so far off that one would want or need to go to the expense of defining our own replacement set of rubrics?
    I always find it useful, where quality is apparent, to use the well-reasoned measures of others, rather than succumb to not-invented-here.
    I suspect I would find many different expressions of these measures acceptable, and the quest for perfection before addressing issues and engaging in constructive discussions leaves no opportunity for progress.
    I believe the rubrics are ignored, not because they are useless, but because most people are not prone to solving problems top-down. It’s easier and certainly quicker to focus on some immediate task, than to take a broader view, then iterate into more detail. It’s the difference between strategies and tactics.
    These days, most people are advocates and position-takers, more desirous to win arguments than to understand.
    I find I need to often reset my focus and step back — these rubrics and other dispassionate approaches are a good way to accomplish this.
