The Annenberg Institute at Brown University offers this national working paper series to provide open access to high-quality papers from multiple disciplines on a wide variety of topics related to education. EdWorkingPapers focuses particularly on research with strong implications for education policy. EdWorkingPapers circulates papers prior to publication for comment and discussion; these papers have not gone through a peer review process. Contributors can update papers to provide readers with the most up-to-date findings.
Researchers often include covariates when they analyze the results of randomized controlled trials (RCTs), valuing the increased precision of the estimates over the risk of inducing small-sample bias. In this paper, we develop a sufficient condition that ensures that the inclusion of covariates does not cause small-sample bias in the effect estimates. Using this result as a building block, we develop a novel approach that uses machine learning techniques to reduce the variance of the average treatment effect estimates while guaranteeing that the effect estimates remain unbiased. The framework also highlights how researchers can use data from outside the study sample to improve the precision of the treatment effect estimate, using the auxiliary data to better model the relationship between the covariates and the outcomes. We conclude with a simulation that highlights the value of the proposed approach.
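The abstract's core idea can be illustrated with a small sketch. The code below is an illustrative assumption, not the paper's exact procedure: it assumes Bernoulli(p) treatment assignment and uses a leave-one-out linear imputation m_{-i}(x_i) as the "machine learning" adjustment. Because m_{-i} never uses unit i's own data, it is independent of unit i's treatment indicator, so the adjustment term has mean zero and the estimator stays unbiased while the covariate soaks up outcome variance. All function names and the simulation design (outcome strongly driven by a covariate x) are made up for this example.

```python
import random

def simulate_trial(n, tau, p, rng):
    """One Bernoulli(p)-randomized trial whose outcome depends strongly on x."""
    x = [rng.gauss(0, 1) for _ in range(n)]
    t = [1 if rng.random() < p else 0 for _ in range(n)]
    y = [2.0 * x[i] + tau * t[i] + rng.gauss(0, 0.5) for i in range(n)]
    return x, t, y

def diff_in_means(t, y):
    """Unadjusted benchmark: treated mean minus control mean."""
    y1 = [yi for yi, ti in zip(y, t) if ti == 1]
    y0 = [yi for yi, ti in zip(y, t) if ti == 0]
    return sum(y1) / len(y1) - sum(y0) / len(y0)

def loo_adjusted(x, t, y, p):
    """Covariate-adjusted ATE estimate using leave-one-out linear imputation.

    tau_hat = (1/n) * sum_i w_i * (y_i - m_{-i}(x_i)),
    with w_i = t_i/p - (1 - t_i)/(1 - p). Since m_{-i} is fit without
    unit i, E[w_i] = 0 makes the adjustment unbiased under Bernoulli(p)."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    total = 0.0
    for i in range(n):
        m = n - 1
        # Leave-one-out sums: drop unit i's contribution.
        sx_i, sy_i = sx - x[i], sy - y[i]
        sxx_i, sxy_i = sxx - x[i] * x[i], sxy - x[i] * y[i]
        slope = (sxy_i - sx_i * sy_i / m) / (sxx_i - sx_i * sx_i / m)
        intercept = (sy_i - slope * sx_i) / m
        m_hat = intercept + slope * x[i]          # prediction for unit i
        w = t[i] / p - (1 - t[i]) / (1 - p)
        total += w * (y[i] - m_hat)
    return total / n

def mean(v):
    return sum(v) / len(v)

def var(v):
    mu = mean(v)
    return sum((vi - mu) ** 2 for vi in v) / (len(v) - 1)

# Compare the two estimators across repeated trials.
rng = random.Random(0)
n, tau, p, reps = 100, 2.0, 0.5, 200
dm, adj = [], []
for _ in range(reps):
    x, t, y = simulate_trial(n, tau, p, rng)
    dm.append(diff_in_means(t, y))
    adj.append(loo_adjusted(x, t, y, p))

print(f"diff-in-means: mean={mean(dm):.3f}  var={var(dm):.4f}")
print(f"loo-adjusted:  mean={mean(adj):.3f}  var={var(adj):.4f}")
```

Across replications both estimators center on the true effect, but the adjusted one has much smaller variance whenever the covariate explains a large share of the outcome, which is the precision gain the abstract describes.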