
My Take: Implementation Fidelity in Evidence-Based Programs

By Pam Drake, PhD | August 21, 2013

When we want to evaluate how well an evidence-based program (EBP) works, one of the important variables we need to measure accurately is implementation fidelity. This variable helps confirm that the program is being presented as intended, and that different educators are doing essentially the same things in teaching the program.


With good implementation fidelity, there’s a better chance others can replicate the program’s outcomes. Schools and communities that show good implementation fidelity for a program can affirm they’re taking the correct steps to reach health goals.


Implementation fidelity also helps us interpret outcomes—for example, why an intervention did or didn’t work. We can assess how practical the program activities are, or refine programs by determining which components lead to the outcomes we want.


Challenges

The simplest and most commonly used data sources for measuring implementation fidelity are educator self-report logs, in which educators indicate which lesson activities they implemented and any changes they made. But self-report measures are prone to error and often overestimate implementation fidelity. Other measures, such as interviews and in-person observations, are more burdensome and costly.
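
To make the idea concrete, here is a minimal sketch in Python of what a self-report log might capture, assuming a hypothetical format rather than ETR's actual instrument, with a simple fidelity score computed as the share of planned activities the educator reports delivering.

```python
# A minimal sketch, assuming a hypothetical log format (not ETR's actual
# instrument): the educator marks which planned activities were implemented,
# and a simple fidelity score is the share of planned activities delivered.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LessonLog:
    lesson: str
    planned_activities: List[str]
    implemented: Dict[str, bool] = field(default_factory=dict)
    adaptations: List[str] = field(default_factory=list)

    def fidelity_score(self) -> float:
        """Proportion of planned activities the educator reports delivering."""
        delivered = sum(self.implemented.get(a, False) for a in self.planned_activities)
        return delivered / len(self.planned_activities)

log = LessonLog(
    lesson="Lesson 3",
    planned_activities=["role play", "small-group discussion", "worksheet"],
    implemented={"role play": True, "small-group discussion": True, "worksheet": False},
    adaptations=["shortened the role play for time"],
)
print(f"Self-reported fidelity: {log.fidelity_score():.0%}")  # 67%
```

Of course, a tidy score like this inherits whatever error is in the self-report itself, which is exactly the limitation at issue here.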

ETR’s Research

Through a National Institutes of Health grant, ETR recently evaluated an online training program designed to improve implementation fidelity for the EBP Reducing the Risk. We used multiple methods to collect data, including teacher self-reports, in-person observers, and coders who listened to session audiotapes. Some things we found:

  • Teachers and observers agreed on what happened during implementation only about half the time.
  • Teachers and audio coders agreed on what happened less than two-thirds of the time. Discrepancies between in-person observers and audio coders were similar.
  • Scores from observers and coders were consistently lower than teacher self-reports. Teachers tended to think they had high implementation fidelity. Observers and coders did not.
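
How might agreement like this be quantified? One common approach, not specified in the study description above, is to compare ratings item by item and report percent agreement, optionally corrected for chance with Cohen's kappa. The sketch below uses made-up ratings for illustration, not study data.

```python
# A minimal sketch of rater agreement for binary fidelity ratings
# (1 = activity implemented, 0 = not implemented). Ratings are hypothetical.

from collections import Counter

def percent_agreement(a, b):
    """Share of items on which two raters gave the same rating."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for chance (Cohen's kappa) for categorical ratings."""
    n = len(a)
    p_observed = percent_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    p_chance = sum((counts_a[k] / n) * (counts_b[k] / n) for k in set(a) | set(b))
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical ratings for eight lesson activities.
teacher  = [1, 1, 1, 1, 1, 1, 1, 0]
observer = [1, 0, 1, 0, 1, 0, 0, 0]

print(f"Percent agreement: {percent_agreement(teacher, observer):.0%}")  # 50%
print(f"Cohen's kappa:     {cohens_kappa(teacher, observer):.2f}")       # ~0.16
```

These made-up numbers only echo the pattern described above: the teacher checks off nearly everything, the observer does not, and agreement is modest.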

Why the differences? Many are probably due to differences in how teachers, observers, and coders interpret what happened. Teachers may also feel social pressure to report doing what they believe the researchers want them to do.

Recall is also a factor. Observers and coders take notes and rate behaviors as they happen, but teachers must recall what happened after the fact. The time lag between session and self-report was large in some cases.

Observers and coders also seem to pick up on different aspects of a session. Audio coders lack the visual cues that in-person observers see. On the other hand, coders are able to replay segments and attend more closely to sessions than observers, who must capture all of their information in real time.

Solutions

If you use self-report, know its limitations and try to minimize them through training. Offer incentives to complete reports thoroughly and on time. Monitor the process. Consider adding a small number of in-person observations or asking for audio recordings. Video may be an option if it is logistically feasible and there are no consent issues.

Pam Drake, PhD, is a Senior Research Associate at ETR. For more information about ETR’s evaluation services or our current research projects, check our ETR website or contact Pam at evaluation@etr.org.
