Academic Competence Evaluation Scales (ACES)
Category: Student Learning
Differences in effect sizes between researcher-developed (RD) and independently developed (ID) outcome measures are widely documented but poorly understood in education research. We conduct a meta-analysis using item-level outcome data to test potential mechanisms that could explain the difference in effects between RD and ID outcome types. Our analysis of 45 effect sizes from 30 studies shows that larger standard deviations of item-specific treatment effects and lower correlations between item-specific effects and item easiness both predict larger effect sizes, and that accounting for these item properties reduces the observed difference between RD and ID measures from .24 SDs to .15 SDs. The findings advance our understanding of how item properties predict educational intervention outcomes and underscore the affordances of analyzing item-level data for building theory in education research.
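As a rough illustration of the kind of moderator analysis the abstract describes, the sketch below fits a weighted meta-regression of effect sizes on an RD-versus-ID indicator, first alone and then with the two item-level moderators (the standard deviation of item-specific treatment effects and their correlation with item easiness). This is only a minimal sketch under assumed inputs: the data are simulated, the column names are invented, and the model ignores the clustering of effect sizes within studies that the actual analysis would need to handle.

```python
# Hypothetical illustration of the moderator analysis summarized in the abstract.
# All data and variable names are invented; this is not the authors' model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 45  # one row per effect size
df = pd.DataFrame({
    "effect_size": rng.normal(0.3, 0.2, n),        # study effect size (SD units)
    "variance": rng.uniform(0.01, 0.05, n),        # sampling variance of the effect
    "rd_measure": rng.integers(0, 2, n),           # 1 = researcher-developed, 0 = independently developed
    "sd_item_effects": rng.uniform(0.05, 0.4, n),  # SD of item-specific treatment effects
    "corr_easiness": rng.uniform(-0.5, 0.5, n),    # correlation of item effects with item easiness
})

weights = 1.0 / df["variance"]  # inverse-variance weights

# Model 1: RD indicator only (unadjusted RD-ID gap)
X1 = sm.add_constant(df[["rd_measure"]])
m1 = sm.WLS(df["effect_size"], X1, weights=weights).fit()

# Model 2: add the item-level moderators; the RD coefficient now gives the
# RD-ID gap after adjusting for the two item properties
X2 = sm.add_constant(df[["rd_measure", "sd_item_effects", "corr_easiness"]])
m2 = sm.WLS(df["effect_size"], X2, weights=weights).fit()

print("Unadjusted RD-ID gap:", round(m1.params["rd_measure"], 3))
print("Adjusted RD-ID gap:  ", round(m2.params["rd_measure"], 3))
```

Comparing the "rd_measure" coefficient across the two models mirrors the comparison the abstract reports (.24 SDs unadjusted versus .15 SDs after accounting for the item-level moderators), though the numbers produced by this simulated example are arbitrary.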