
Item-Level Heterogeneity in Value Added Models: Implications for Reliability, Cross-Study Comparability, and Effect Sizes

Value added models (VAMs) attempt to estimate the causal effects of teachers and schools on student test scores. We apply Generalizability Theory to show how estimated VA effects depend upon the selection of test items. Standard VAMs estimate causal effects on the items that are included on the test. Generalizability demands consideration of how estimates would differ had the test included alternative items. We introduce a model that estimates the magnitude of item-by-teacher/school variance accurately, revealing that standard VAMs overstate reliability and overestimate differences between units. Using a case study and 41 measures from 25 studies with item-level outcome data, we show how standard VAMs overstate reliability by an average of .12 on the 0-1 reliability scale (median = .09, SD = .13) and provide standard deviations of teacher/school effects that are on average 22% too large (median = 7%, SD = 41%). We discuss how imprecision due to heterogeneous VA effects across items attenuates effect sizes, obfuscates comparisons across studies, and causes instability over time. Our results suggest that accurate estimation and interpretation of VAMs requires item-level data, including qualitative data about how items represent the content domain.
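The mechanism behind the abstract's reliability claim can be sketched numerically. In a Generalizability Theory decomposition, only teacher variance that generalizes across item samples counts as signal; the item-by-teacher interaction is error with respect to the item universe. A standard VAM, by treating the test's items as fixed, absorbs that interaction into the apparent teacher signal. The variance components and facet sizes below are illustrative assumptions chosen for the sketch, not estimates from the paper:

```python
# Illustrative sketch of how ignoring item-by-teacher variance overstates
# reliability. All variance components below are assumed values, not results
# from the paper.
var_teacher = 0.04       # teacher variance that generalizes over items
var_item_teacher = 0.10  # item-by-teacher interaction variance (per item)
var_residual = 0.80      # student-level residual variance
n_items, n_students = 10, 25

# Averaging over items and students shrinks the interaction and residual terms.
interaction_term = var_item_teacher / n_items
noise = var_residual / n_students

# Standard VAM: items treated as fixed, so the interaction rides along as signal.
signal_standard = var_teacher + interaction_term
rel_standard = signal_standard / (signal_standard + noise)

# Generalizability coefficient: the interaction is error over the item universe.
rel_g = var_teacher / (var_teacher + interaction_term + noise)

# Apparent inflation of the teacher-effect standard deviation.
sd_inflation = (signal_standard / var_teacher) ** 0.5

print(f"standard reliability: {rel_standard:.3f}")
print(f"generalizability coefficient: {rel_g:.3f}")
print(f"SD inflation factor: {sd_inflation:.3f}")
```

With these assumed components, the standard reliability exceeds the generalizability coefficient and the teacher-effect SD is inflated by roughly 12%; the paper's empirical averages (.12 reliability gap, 22% SD inflation) come from its 41 measures, not from this toy calculation.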

Keywords
value-added model, generalizability theory, reliability, education policy, accountability
Digital Object Identifier (DOI)
10.26300/ez4q-fs31
EdWorkingPaper suggested citation:
Gilbert, Joshua B., Zachary Himmelsbach, Luke W. Miratrix, Andrew D. Ho, and Benjamin W. Domingue. (). Item-Level Heterogeneity in Value Added Models: Implications for Reliability, Cross-Study Comparability, and Effect Sizes. (EdWorkingPaper: -1173). Retrieved from Annenberg Institute at Brown University: https://doi.org/10.26300/ez4q-fs31
