When Evidence-Based Practices do not produce desired effects: Should we throw the baby out with the bathwater?

Kim Gibbons
Associate Director
Center for Applied Research and Educational Improvement (CAREI)

This month, Education Week re-ran an article originally published on November 6th, 2015, titled “RtI Practice Falls Short of Promise.” I responded to that article in the December 2015 MASA and MASE newsletters. Since Education Week chose to re-run the article, I think it is worth revisiting my earlier response. In addition, several rebuttals have been published; links to these are included at the end of this article.

Nowadays, many administrators embrace the notion of using research to inform policies and practices in schools. But what happens when policies, practices, and frameworks that are empirically supported through rigorous research do not produce the desired results when implemented in school settings? Should we abandon those practices and start over? Unfortunately, this scenario happens more often than not.

In fact, on November 6th, 2015, the Institute of Education Sciences (IES) released the results of a research study evaluating Response to Intervention (RtI) practices for elementary school reading. Response to Intervention is an empirically validated framework shown to produce positive outcomes for students when implemented with fidelity. Many districts around the country are in the process of implementing this framework, and RtI is supported in ESEA and IDEA legislation.

The IES study compared a reference sample of elementary schools in 13 states to an impact sample of 146 elementary schools with three or more years of experience implementing the RtI framework in reading. The study did not focus on the overall effectiveness of RtI; rather, it compared students who scored just above the district-identified proficiency target to students who scored just below it. One finding that is generating a great deal of interest is that, for students who scored just below the school-determined eligibility cut point in Grade 1, assignment to receive reading interventions did not improve reading outcomes and, in fact, produced negative impacts.

After the findings were released, my e-mail account was flooded with reactions and questions regarding the study. The most common response was panic, along with the question of whether the study meant that districts should stop implementing an RtI framework. The short answer is no.

Before I expand on my response, it is helpful to briefly review the key research behind effective use of RtI, known as implementation science. Implementation science is the study of methods that influence the integration of evidence-based interventions into practice settings. It helps answer questions such as: Why do established programs lose effectiveness over days, weeks, or months? Why do tested programs sometimes exhibit unintended effects when transferred to a new setting? The real message of implementation science is that effective intervention practices or models, coupled with ineffective or inefficient implementation, will result in ineffective and unsustainable programs and outcomes! Implementation science focuses on stages of implementation over time and on implementation “drivers” that provide the infrastructure needed for high-fidelity, effective, and sustainable programs.

Circling back to the recent RtI study: were the results surprising? Not really. As a field, we recognize the difficulty of scaling up evidence-based practices. The results confirmed that this was really a study about “scaling,” not about the effects of the framework on student outcomes, and that it is difficult to implement educational initiatives on a large scale. While an in-depth analysis of the study is outside the scope of this article (the report was over 300 pages long), there are some important takeaways.

First, effective universal instruction (Tier 1) is critical and needs to be the priority. All too often, I have observed districts with large numbers of students below proficiency standards devote most of their time, energy, and resources to developing Tier 2 and 3 interventions for all students below target. Unfortunately, most districts do not have the resources to provide supplemental and intensive interventions to every student below target. More energy and resources need to be directed at improving universal instruction to prevent large numbers of students from needing supplemental and intensive support.

Second, districts need to identify effective interventions that match students’ needs. While many of the buildings in the impact sample reported using Tier 2 interventions, we do not know whether those interventions were research-based or matched to student need. Many schools in the study focused their interventions on fluency, vocabulary, and comprehension, but even if the study had demonstrated that the right students were selected to receive intervention (which was not the case), the quality of what students received at Tier 2 appears to have been inconsistently implemented and not matched to students’ needs. Would we expect students to benefit from an intervention that did not target their skill deficit?
Finally, collecting data on fidelity of implementation is extremely important, so that decisions about the effectiveness of interventions are based on interventions that were actually implemented correctly and with adequate time and frequency. While participants in the RtI study were asked about fidelity, it was not directly assessed. So, at the end of the day, it is hard to know what actually occurred during the Tier 2 interventions.

In summary, why do established programs lose effectiveness over days, weeks, or months? Why do tested programs sometimes exhibit unintended effects when transferred to a new setting? My message is that it is all about implementation. Districts must use implementation science to bring evidence-based practices to scale, AND they must collect objective data on the fidelity of implementation. Let’s not throw the baby out with the bathwater when we find unexpected outcomes. Rather, let’s continue focusing on the research and providing assistance to districts around “implementation drivers” and fidelity of implementation. CAREI is here to help!

Links to other responses:

Shinn, M. R., & Brown, R. (2016). Much ado about little: The dangers of disseminating the RTI outcome study without careful analysis.

Wisconsin School Psychologist Association
