Methodology, measurement and data
Effect sizes in the Cohen's d family are often used in education to compare estimates across studies, measures, and sample sizes. For example, effect sizes are used to compare gains in achievement that students make over time, either in pre- and post-treatment studies or in the absence of intervention, such as when estimating achievement gaps. However, despite extensive research dating back to the paired t-test literature showing that growth effect sizes should account for within-person correlations of scores over time, achievement gains are nonetheless often standardized relative to the standard deviation from a single timepoint, or from two timepoints pooled. This tendency likely persists in part because few large datasets exist from which a distribution of student- or school-level gains can be derived. In this study, we present a novel model for estimating student growth in conjunction with a national dataset to show that effect size estimates for student and school growth are often quite different when standardized relative to a distribution of gains rather than static achievement. In particular, we provide nationally representative empirical benchmarks for student achievement and gains, including for male-female gaps in those gains, and examine the sensitivity of those effect sizes to how they are standardized. Our results suggest that effect sizes scaled relative to a distribution of gains are less likely to understate the effects of interventions over time, and that the resulting effect sizes often more closely match the estimand of interest for most practice, policy, and evaluation questions.
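The standardization issue the abstract describes can be illustrated with a small simulation. The sketch below uses entirely hypothetical numbers (a within-person correlation of 0.8 and a mean gain of 0.30 SD, not values from the paper) to show how the same average gain yields a larger effect size when divided by the SD of gains, which for equal pre/post variances equals sd·√(2(1−ρ)), than when divided by the cross-sectional SD at one timepoint.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
rho = 0.8         # assumed within-person correlation of scores over time
sd = 1.0          # cross-sectional SD at each timepoint
mean_gain = 0.30  # hypothetical average achievement gain

# Simulate correlated pre- and post-test scores for the same students
cov = [[sd**2, rho * sd**2], [rho * sd**2, sd**2]]
pre, post = rng.multivariate_normal([0.0, mean_gain], cov, size=n).T
gain = post - pre

# Effect size standardized by the static (single-timepoint) SD
d_static = gain.mean() / pre.std(ddof=1)

# Effect size standardized by the SD of gains; with equal variances,
# SD(gain) = sd * sqrt(2 * (1 - rho)), which is smaller when rho is high
d_gain = gain.mean() / gain.std(ddof=1)

print(f"static-SD d = {d_static:.2f}, gain-SD d = {d_gain:.2f}")
```

With a high within-person correlation, the SD of gains is well below the static SD, so the gain-standardized effect size is notably larger, consistent with the abstract's claim that static standardization can understate effects over time.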
Recruiting and retaining teachers can be challenging for many schools, especially low-performing urban schools in which teachers turn over at higher rates. In this study, we examine three types of school-level attributes that may influence teachers' decisions to enter or transfer between schools: malleable school processes, structural features of employment, and school characteristics. Using an adaptive conjoint analysis survey design with a sample of teachers from low-performing, urban, turnaround schools in Tennessee, we find that five of the seven most highly valued features of schools are malleable processes: consistent administrative support, consistent enforcement of discipline, school safety, small class sizes, and availability of high-quality professional development. We also find that teachers rated as effective are more likely than teachers rated as ineffective to prefer performance-based pay. We validate our results using administrative data from Tennessee on teachers' actual mobility patterns.
The rise of accountability standards has pressed higher education organizations to oversee the production and publication of data on student outcomes more closely than in the past. However, the most common measure of student outcomes, average bachelor's degree completion rates, potentially provides little information about the direct impacts of colleges and universities on student success. Extending scholarship in the new institutionalist tradition, I hypothesize that higher education organizations today exist as "superficially coupled systems," in which colleges closely oversee their technical outputs but those outputs provide limited insight into the colleges' direct role in producing them. I test this hypothesis using administrative data from the largest public urban university system in the United States together with fixed effects regression and entropy balancing techniques, allowing me to isolate organizational effects. My results provide evidence of superficial coupling, suggesting that inequality in college effectiveness exists both between and within colleges by students' racial background and family income. They also indicate that institutionalized norms surrounding accountability have backfired, enabling higher education organizations, and other bureaucratic organizations like them, to maintain legitimacy without identifying and addressing inequality.
We use a natural experiment to evaluate the performance of sample selection correction methods. In 2007, Michigan began requiring that all students take a college entrance exam, increasing the exam-taking rate from 64% to 99%. We apply different selection correction methods, using different sets of predictors, to the pre-policy exam score data. We then compare the corrected data to the complete post-policy exam score data as a benchmark. We find that performance is sensitive to the choice of predictors, but not to the choice of selection correction method. Using stronger predictors such as lagged test scores yields more accurate results, but simple parametric methods and less restrictive semiparametric methods yield similar results for any given set of predictors. We conclude that the gains in this setting from less restrictive econometric methods are small relative to the gains from richer data. This suggests that empirical researchers using selection correction methods should focus more on the predictive power of covariates than on robustness across modeling choices.
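The abstract's central point, that predictor strength matters more than the correction method, can be sketched with simulated data. The example below is not the authors' procedure: it invents a population where higher-scoring students are more likely to take the exam (calibrated to the pre-policy 64% taking rate), then applies one simple parametric correction (regress observed scores on a predictor among takers, predict for everyone) with a strong predictor versus a weak one. The specific coefficients and noise levels are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical population: a strong predictor (lagged score) and a weak
# predictor (a noisy proxy) of true exam scores
lagged = rng.normal(0.0, 1.0, n)
score = 0.8 * lagged + rng.normal(0.0, 0.6, n)  # true scores, SD ~ 1
weak = score + rng.normal(0.0, 3.0, n)          # weakly correlated proxy

# Pre-policy selection: higher-scoring students are more likely to take
# the exam; threshold set so that 64% of students are takers
latent = score + rng.normal(0.0, 0.5, n)
taken = latent > np.quantile(latent, 0.36)

def imputed_mean(x):
    """Simple parametric correction: regress observed scores on a
    predictor among takers, then predict for the full population."""
    slope, intercept = np.polyfit(x[taken], score[taken], 1)
    return (intercept + slope * x).mean()

true_mean = score.mean()
bias_naive = score[taken].mean() - true_mean
bias_weak = imputed_mean(weak) - true_mean
bias_strong = imputed_mean(lagged) - true_mean
print(f"naive: {bias_naive:.2f}  weak: {bias_weak:.2f}  strong: {bias_strong:.2f}")
```

In this setup the weak predictor barely improves on the uncorrected takers-only mean, while the strong predictor removes most of the selection bias, mirroring the abstract's conclusion that richer data beats fancier correction machinery.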