Examining District Data on the Effects of PBIS

As noted in an earlier post, the school district presented data at Monday night’s meeting on the effects of implementing a strategy of Positive Behavioral Interventions and Supports (PBIS). As the report notes, “Documenting behavior referrals is inconsistent across middle schools both in terms of what is recorded and where it is recorded.” This makes it unwise to compare across middle schools, since one school may refer students who are late to class while another only makes referrals as a consequence of fighting. It is, however, valid to make comparisons over time within the same school in order to see what effect the implementation of PBIS has had on student behavior. Unfortunately, as readers of the report will observe, not even that data is presented consistently across the 11 middle schools where PBIS has been implemented: some schools have data only for the current academic year, others only from February 2008 through February 2009, and others provide more.
While the behavioral scientist in me wants to comment on the parts of the report that are incomprehensible (the self-assessment survey schoolwide system analysis from each school) or redundant (providing charts that show time saved in minutes, in hours, and in days), I will restrict my comments to the data documenting the effects of implementing PBIS. There have been some impressive successes with PBIS (e.g., Sherman), but there have also been failures (e.g., Toki). One interpretation would be that some schools have been successful in implementing these strategies and we need to see what they are doing that has led to their success; another would be that PBIS has by and large failed and resulted in an increase in behavioral referrals across our middle schools. At this point, I’ll take the middle ground and say that this new approach to dealing with student behavior hasn’t made any difference. You can look at the table below and draw your own conclusions. Keep in mind, though, that, as noted above, there is no consistency across schools in what sorts of behavioral problems get documented. There is also considerable variability in the absolute number of referrals across the 11 middle schools and across months, such that a 30% change in the number of behavioral referrals may reflect 45 referrals at Blackhawk, 10 referrals at Wright, and 170 referrals at Toki.
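The caution about percentages can be made concrete with a quick calculation. In this sketch the baseline referral counts are hypothetical (the report provides only charts), chosen so that the same 30% change reproduces the absolute figures cited above:

```python
# A minimal sketch: the baseline counts below are hypothetical, chosen so
# that a 30% change matches the absolute figures mentioned in the post.
hypothetical_baselines = {"Blackhawk": 150, "Wright": 33, "Toki": 567}

for school, baseline in hypothetical_baselines.items():
    # The same 30% change corresponds to very different referral counts.
    absolute_change = round(0.30 * baseline)
    print(f"30% of ~{baseline} referrals at {school} is about {absolute_change} referrals")
```

The point is simply that percentage changes are not comparable across schools with very different baseline counts, so the percentages in the table below should be read alongside the absolute numbers where they are given.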

MMSD Behavioral Referral Data
(presented 4/13/09)

Comparison Data Provided     Schools          Results: Change from 07/08 to 08/09*

One month only (February)    (not named)      30% decline (decrease of 40 referrals)
                             O’Keefe          10% decline (decrease of 10 referrals)
                             Spring Harbor    35% increase (increase of 8 referrals)
                             Wright           20% increase (increase of 7 referrals)

Six months (Sept.–Feb.)      (not named)      Declines in Sept. (20%), Nov. (20%), and Dec. (10%); increases in Oct. (40%) and Jan. (20%); no change in February
                             (not named)      Increases every month, ranging from 5% (Dec.) to 75% (Feb.); median increase in referrals: 20%
                             (not named)      Increases every month, ranging from 7% (Nov.) to 200% (Sept.); median increase in referrals: 68%

Multiple years               Sherman          Decreases every month, ranging from 30% (Feb.) to 70% (Oct., a drop of more than 250 referrals); median decrease in behavioral referrals: 42%

* Note that these percentages are approximate, based on visual inspection of the charts provided by MMSD.

5 thoughts on “Examining District Data on the Effects of PBIS”

  1. Jeff, interesting post and way of looking at the numbers. A quick question: Sherman appears to have had the most dramatic impact from PBIS, but then again, Sherman has made a committed effort to implement PBIS for several years. Any help on thinking through how much time the plan needs to be in use before an impact can be expected? I’m not saying this well, but I’m trying to get my head around what a reasonable trial period would be. Help?

  2. That’s a really good point. I don’t know what the differences are across the middle schools in length of implementation of PBIS, and if it has only been a year at most schools but many years at Sherman, it may well be that you need a year or two for everyone to adjust to a new system. I also wonder if we might get Sherman staff to visit other schools to evaluate what those schools are doing and try to figure out what has helped Sherman’s efforts succeed. Once there is more consistency across schools in what type of data gets recorded and when, the district should be able to examine the impact of variables like class size, school size, poverty, etc. on behavioral referrals, which would give the district a clearer picture of the effectiveness of PBIS.

  3. There may be justifications for the conclusions reached in the report, but the report itself doesn’t address this. So, there is a decrease by some amount (percentage or absolute numbers). The question after seeing a difference (or no difference) is “why?”. The report seems to simply conclude the reason is PBIS if it’s decreasing. Is the answer PBIS again if the numbers increase?
    If there is no objective standard across schools, we can’t compare. Even within schools, what can we say? Different criteria among teachers? Different or the same kids?
    Over a period of a month? Maybe just the Hawthorne effect. Maybe “experimenter” bias (teacher bias). For the most part there is just one year involved, if that. What are the trends in problem behaviors in pre-PBIS years? Do they look the same (a decrease over time)?
    Is it the same cohort of kids at each measurement within a school? Is the decrease due to the problem kids leaving? Is the increase due to new kids arriving?
    Maybe there is more justification and experimental control described elsewhere. But, I don’t see it here.
    That is, I don’t see any justification for concluding cause and effect.

  4. For more info on PBIS, go to http://www.PBIS.org. It’s not new; it’s been around for YEARS and has positive outcomes overall and a strong research base (I was involved in the early phases in the state of Oregon). Training and COACHING are key components, of course, as is designating a Leadership Team. It’s really a systems-change process rather than the implementation of a new behavioral program, and it aligns well with response to intervention (RtI). DPI is now promoting PBIS statewide with workshops and training of coaches. Sometimes the increase in early years of implementation is the result of MORE consistent reporting of behaviors and it “looks” like it’s not working, but rather teachers are starting to refer for the SAME behavioral issues.

  5. I will review the PBIS site but even assuming the PBIS research is valid, local data needs to be collected and analyzed too.
    There are some reasons for doing this:
    1) Theories need to be validated in practice.
    2) Assuming the theory is good, one must ensure that the local practices are consistent with those required by the theory — otherwise, all bets are off.
    “Sometimes the increase in early years of implementation is the result of MORE consistent reporting of behaviors and it “looks” like it’s not working, but rather teachers are starting to refer for the SAME behavioral issues.”
    The above statement makes me very uncomfortable. Why? Because theories need to be falsifiable, and “it’s working in practice” needs to be falsifiable. Beth’s statement quoted above is a perfect example of how bad theories and bad practices gain hold; no matter what the data looks like, proponents will always be able to say it’s working.
    If there is an increase in the “early years,” then one should be able to collect the data to prove that is the case locally. And I do love the phrase “early years,” for it gives the theory carte blanche to reject the criticism that “it’s not working” with the reply “Oh, it’s early in the implementation.” One can probably put off criticism for at least 5 years if not 10: rollout will take time, new teachers coming in need to be trained, mass retirement of baby boomer teachers, increase in poverty, school funding problems.
    I’m not hopeful.

Comments are closed.