By ETR | November 4, 2014
In ETR's latest video, Senior Research Associate Jill Glassman, PhD, makes a complex and powerful statistical process understandable. Mediation analysis allows evaluators to determine which specific factors in an evidence-based intervention had the greatest impact on participants. Dr. Glassman also explains how important this work can be in helping us determine what approaches to STI/pregnancy prevention work best.
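As a rough illustration of the kind of analysis described above (not Dr. Glassman's actual method or data), a simple mediation model can be sketched with ordinary least squares: a program exposure X shifts a mediator M (say, an attitude), which in turn shifts an outcome Y (a behavior). The product of the two paths estimates the indirect, mediated effect. The variable names and path values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical simulated data: program exposure (X) shifts an attitude (M),
# which in turn shifts a behavior (Y). True paths: a = 0.5, b = 0.7.
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(size=n)             # mediator model:  M = a*X + e
Y = 0.7 * M + 0.2 * X + rng.normal(size=n)   # outcome model:   Y = b*M + c'*X + e

def ols(y, *cols):
    """Least-squares slopes (with intercept) of y on the given columns."""
    A = np.column_stack([np.ones_like(y), *cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]  # drop the intercept

a = ols(M, X)[0]           # path X -> M
b, c_prime = ols(Y, M, X)  # path M -> Y, and the direct X -> Y path
indirect = a * b           # mediated (indirect) effect of X on Y

print(f"a={a:.2f}, b={b:.2f}, indirect={indirect:.2f}, direct={c_prime:.2f}")
```

Comparing the indirect effect (a·b) to the direct effect (c′) is one way evaluators judge which components of an intervention carry the most impact.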
By Karin Coyle, PhD
ETR's research team is testing some exciting new programs that ask middle and high school students to consider the ways romantic relationships influence their sexual choices and risks. We call this “contextualizing” sexual and reproductive health education—that is, using the context of relationships to build health-promoting information, attitudes and behaviors.
By Pam Drake, PhD | August 21, 2013
When we want to evaluate how well an evidence-based program (EBP) works, one of the important variables we need to measure accurately is implementation fidelity. This variable helps confirm that the program is being presented as intended, and that different educators are doing essentially the same things in teaching the program.
With good implementation fidelity, there’s a better chance others can replicate the program’s outcomes. Schools and communities that show good implementation fidelity for a program can affirm they’re taking the correct steps to reach health goals.
Implementation fidelity also helps us interpret outcomes—for example, why an intervention did or didn’t work. We can assess how practical the program activities are, or refine programs by determining which components lead to the outcomes we want.