Marianne Bitler
Estimates of teacher “value-added” suggest teachers vary substantially in their ability to promote student learning. Prompted by this finding, many states and school districts have adopted value-added measures as indicators of teacher job performance. In this paper, we conduct a new test of the validity of value-added models. Using administrative student data from New York City, we apply commonly estimated value-added models to an outcome teachers cannot plausibly affect: student height. We find the standard deviation of teacher effects on height is nearly as large as that for math and reading achievement, raising obvious questions about validity. Subsequent analysis finds these “effects” are largely spurious variation (noise), rather than bias resulting from sorting on unobserved factors related to achievement. Given the difficulty of differentiating signal from noise in real-world teacher effect estimates, this paper serves as a cautionary tale for their use in practice.
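The abstract's core point — that sampling noise alone can generate apparent teacher "effects" on an outcome teachers cannot influence — can be illustrated with a minimal simulation. This is a hypothetical sketch, not the paper's actual estimation procedure: it uses simple classroom-mean deviations rather than a full value-added model with covariates, and all parameters (number of teachers, class size) are made up for illustration.

```python
import numpy as np

# Hypothetical setup: students randomly assigned to teachers, and the outcome
# (think: height) has NO true teacher effect -- it is pure student-level noise.
rng = np.random.default_rng(0)

n_teachers, class_size = 200, 25
# Standardized outcome, one row per teacher's classroom.
y = rng.normal(0.0, 1.0, size=(n_teachers, class_size))

# Naive teacher "effect" estimate: classroom mean minus the grand mean.
effects = y.mean(axis=1) - y.mean()

# The spread of these estimates is driven entirely by sampling noise,
# roughly 1/sqrt(class_size) = 0.20 here -- yet an unadjusted analysis
# could read it as genuine variation in teacher quality.
naive_sd = effects.std(ddof=1)
print(f"naive SD of teacher 'effects': {naive_sd:.2f}")
```

Even though the true teacher effect is zero by construction, the naive standard deviation comes out near 0.2, echoing the paper's caution about separating signal from noise in real-world estimates.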