The “20-80” Rule: It Doesn’t Take Much Effort to Obtain Needed Results

by Kim Gibbons, Associate Director, Center for Applied Research and Educational Improvement (CAREI)

School improvement and data-based decision-making are terms that school leaders hear numerous times per day. Program evaluation is a way of bringing these two terms together by systematically focusing on data collection and analysis to improve programs and ultimately schools and student outcomes. With the budget season approaching, imagine that you had the following evaluation information available to you as you were constructing your budget for the following year.

  • Students of teachers receiving support from instructional coaches gained 4 months more learning than students of teachers who did not receive coaching.
  • Students of teachers who received professional development in math fared no better than students of teachers who did not receive the professional development.
  • Elementary buildings implementing an MTSS framework saw a 50% reduction in SLD prevalence over 3 years.
  • Students who used on-line “flex books” performed similarly on standardized tests of achievement to students who used traditional textbooks.

How might this information impact your budgeting and, more importantly, student achievement? You might decide to invest more resources in instructional coaching, re-examine your professional development in the area of math, invest resources in scaling MTSS up into secondary settings, and continue investing in on-line flex books. All of these scenarios are examples of evaluating specific policies, programs, approaches, and frameworks. The Center for Comprehensive School Reform and Improvement (2006) defines program evaluation as examining initiatives the school or district has undertaken to answer the question, “Is what we are doing working?” Along with determining the effectiveness of a program or practice, program evaluation provides information on what aspects of the program or practice can be improved. Many school leaders agree that program evaluation is important, but they often believe they lack the time or skills needed to carry one out. A 2016 statewide needs assessment conducted by CAREI confirmed this assumption: 51% of administrators rated their capacity to evaluate policies and programs as poor. Administrators reported that high-quality program evaluation was infrequent due to lack of time (78%), inadequate staffing and expertise (63%), and cost (53%).

Some have suggested that educators do have the skills and ability to evaluate programs, since expertise in evaluation is not required to carry out a useful program evaluation (McNamara, 1998). Now, if you noticed the “20-80” rule in the title, here is where that rule applies. The rule states that 20% of effort generates 80% of the needed results. In plain English, it is better to do some evaluation than to do no evaluation at all! Many evaluation techniques can be used by school districts in day-to-day practice to make use of existing data in a way that is practical for teachers and school leaders. The main challenge is to conduct evaluations that provide useful data while balancing the amount of time and effort staff must devote to evaluation activities.

Organizing the program evaluation process involves answering three important questions:

  1. What are we looking for?
  2. How will we look for it?
  3. How will we use the data?

To determine what to look for, leaders will need to decide whether they are interested in formative or summative evaluation information (or both). Formative program evaluation involves collecting information during the implementation of a program to ensure that the program is being implemented with fidelity. Formative evaluation helps improve implementation and identify areas where changes are needed to ensure that intended outcomes are met. Summative evaluation examines a program after implementation to determine whether the desired goal has been reached. For example, suppose your district recently adopted a new reading curriculum. A formative evaluation would be useful in making sure that teachers are implementing the new curriculum as intended and receiving enough support during implementation. If not, then resources can be directed toward providing more support during implementation. A summative evaluation might occur at the end of the year to determine the percentage of students who made adequate reading growth during the school year.

Once you know what you are looking for, the next step is to determine how you will look for it. A prerequisite for “how” is to develop a simple and clear plan of action. This action plan should include due dates and timelines as well as identification of needed resources. In addition, the components of the evaluation will need to be identified, along with designations of who will be responsible for overseeing each component. The final step is to determine how to use the data. Using the data involves analyzing it and making meaning out of the results. Remember that multiple data sources will often inform each question you are trying to answer. Triangulating your data will strengthen the judgments you make about the evaluation. Once these steps have been completed, it is time to summarize and communicate the results to your key stakeholders and decision makers. Keep it simple. Provide an executive summary with the purpose of the evaluation, findings by question, conclusions, and recommendations.

CAREI has been helping school districts and other non-profits conduct program evaluations for over 25 years! We have over 200 technical reports on our website (www.cehd.umn.edu/carei/). This year, we are working with the Anoka-Hennepin School District to complete an audit of special education programs and practices in the district. We are looking at the extent to which programs are aligned with research and best practices, appropriately staffed, and whether teachers have the necessary skills to be successful with their students. We are here to help you determine the effectiveness of programs and to make recommendations on next steps needed to reach your goals. We are also available to help “audit” existing frameworks and practices to provide you with information for making programs and frameworks better (e.g., continual improvement). Feel free to give me a call at 612-625-9751.
