
Estimating Treatment Effects with the Explanatory Item Response Model

This simulation study examines the characteristics of the Explanatory Item Response Model (EIRM) for estimating treatment effects, compared with classical test theory (CTT) sum and mean scores and item response theory (IRT)-based theta scores. Results show that the EIRM and IRT theta scores provide bias and false positive rates generally equivalent to CTT scores, and superior calibration of standard errors under model misspecification. Analysis of statistical power reveals that the EIRM and IRT theta scores offer a marginal benefit to power and are more robust to missing data than the other methods when parametric assumptions are met, and a substantial benefit to power under heteroskedasticity, but their performance is mixed under other conditions. The methods are illustrated with an empirical data application examining the causal effect of an elementary school literacy intervention on reading comprehension test scores; the application demonstrates that the EIRM provides a more precise estimate of the average treatment effect than the CTT or IRT theta score approaches. Tradeoffs of model selection and interpretation are discussed.
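To make the comparison concrete, the sketch below simulates dichotomous item responses under a Rasch-type measurement model with a treatment effect on the latent trait, then recovers the effect from CTT sum scores via a difference in group means. This is a minimal illustration of the simulation setup described above, not the paper's actual design: the sample size, item count, true effect size, and difficulty distribution are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

n_persons, n_items = 500, 20
true_effect = 0.3                                   # assumed treatment effect in SD units
treat = rng.integers(0, 2, n_persons)               # random 0/1 treatment assignment
theta = rng.normal(0, 1, n_persons) + true_effect * treat  # latent trait
b = rng.normal(0, 1, n_items)                       # assumed item difficulties (Rasch)

# Rasch model: P(y_ij = 1) = logistic(theta_i - b_j)
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
y = rng.binomial(1, p)                              # dichotomous item responses

# CTT approach: score each person by their sum of correct responses,
# then estimate the treatment effect as a difference in group means
sum_scores = y.sum(axis=1)
ctt_effect = sum_scores[treat == 1].mean() - sum_scores[treat == 0].mean()
print(f"CTT sum-score effect estimate: {ctt_effect:.2f} points")
```

An EIRM analysis would instead model the item responses directly, e.g. as a mixed-effects logistic regression with item fixed effects, a person random intercept, and a treatment covariate, so the treatment effect is estimated on the latent-trait scale rather than the sum-score scale.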

Keywords
Explanatory Item Response Model, causal inference, statistical power, simulation, educational measurement
Document Object Identifier (DOI)
10.26300/snvz-ew19
EdWorkingPaper suggested citation:
Gilbert, Joshua B. (). Estimating Treatment Effects with the Explanatory Item Response Model. (EdWorkingPaper: -677). Retrieved from Annenberg Institute at Brown University: https://doi.org/10.26300/snvz-ew19

Machine-readable bibliographic record: RIS, BibTeX