Using Implementation Fidelity to Aid in Interpreting Program Impacts: A Brief Review

Poor program implementation constitutes one explanation for null results in trials of educational interventions. For this reason, researchers often collect data about implementation fidelity when conducting such trials. In this article, we document whether and how researchers report and measure program fidelity in recent cluster-randomized trials. We then create two measures—one describing the level of fidelity reported by authors and another describing whether the study reports null results—and examine the correspondence between the two. We also explore whether fidelity is influenced by study size, by the type of fidelity measured and reported, and by features of the intervention. We find that, as expected, fidelity level relates to student outcomes; we also find that the presence of new curriculum materials positively predicts fidelity level.

Keywords
descriptive analysis, evaluation, experimental design, policy, program evaluation
Document Object Identifier (DOI)
10.26300/dt2s-9v59
EdWorkingPaper suggested citation:
Hill, Heather C., and Anna Erickson. (). Using Implementation Fidelity to Aid in Interpreting Program Impacts: A Brief Review. (EdWorkingPaper: -414). Retrieved from Annenberg Institute at Brown University: https://doi.org/10.26300/dt2s-9v59

Machine-readable bibliographic record: RIS, BibTeX

Published EdWorkingPaper:
Hill, H. C., & Erickson, A. (2019). Using Implementation Fidelity to Aid in Interpreting Program Impacts: A Brief Review. Educational Researcher, 48(9), 590-598. https://doi.org/10.3102/0013189X19891436