Methodology, measurement and data
We estimate the longer-run effects of attending an effective high school (one that improves a combination of test scores, survey measures of socio-emotional development, and behaviors in 9th grade) for students who are more versus less educationally advantaged (i.e., likely to attain more years of education based on 8th-grade characteristics). All students benefit from attending effective schools, but the least advantaged students experience larger gains in high-school graduation and college-going and larger reductions in school-based arrests. This heterogeneity is not solely due to less-advantaged groups being marginal for particular outcomes. Commonly used test-score value-added measures understate the long-run importance of effective schools, particularly for less-advantaged populations. The patterns suggest this partly reflects less-advantaged students being relatively more responsive to non-test-score dimensions of school quality.
Graduate education is among the fastest-growing segments of the U.S. higher education system. This paper provides up-to-date causal evidence on the labor market returns to Master’s degrees and examines heterogeneity in those returns by field of study, student demographics, and initial labor market conditions. We use rich administrative data from Ohio and an individual fixed effects model that compares students’ earnings trajectories before and after earning a Master’s degree. Findings show that obtaining a Master’s degree increased quarterly earnings by about 12% on average, but the returns vary widely across graduate fields. We also find gender and racial disparities in the returns, with higher average returns for women than for men and for White than for Black graduates. In addition, by comparing returns among students who graduated before versus during the Great Recession, we show that economic downturns appear to reduce but not eliminate the positive returns to Master’s degrees.
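To make the identification strategy concrete, below is a minimal sketch of an individual fixed-effects estimator of a post-degree earnings premium on synthetic panel data. All variable names, magnitudes, and the simulated data are illustrative assumptions, not the paper's Ohio data or exact specification.

```python
# Illustrative sketch only: synthetic data, not the paper's Ohio records.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_people, n_quarters = 200, 20
person = np.repeat(np.arange(n_people), n_quarters)
quarter = np.tile(np.arange(n_quarters), n_people)
grad_q = rng.integers(6, 14, n_people)           # quarter each person finishes the MA
post = (quarter >= grad_q[person]).astype(int)   # 1 in quarters after degree receipt

ability = rng.normal(0, 0.3, n_people)           # time-invariant person effect
log_earn = (9.0 + ability[person] + 0.01 * quarter
            + 0.12 * post + rng.normal(0, 0.1, person.size))  # true premium: 12 log points

df = pd.DataFrame({"person": person, "quarter": quarter,
                   "post": post, "log_earn": log_earn})

# Person dummies absorb fixed ability; quarter dummies absorb common earnings
# trends, so `post` is identified from within-person before/after comparisons.
fit = smf.ols("log_earn ~ post + C(quarter) + C(person)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["person"]})
print(round(fit.params["post"], 3))              # should be close to 0.12
```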
Analyses that reveal how treatment effects vary allow researchers, practitioners, and policymakers to better understand the efficacy of educational interventions. In practice, however, standard statistical methods for addressing heterogeneous treatment effects (HTE) fail to address the HTE that may exist within outcome measures. In this study, we present a novel application of the Explanatory Item Response Model (EIRM) for assessing what we term “item-level” HTE (IL-HTE), in which a unique treatment effect is estimated for each item in an assessment. Results from a data simulation reveal that when IL-HTE are present but ignored in the model, standard errors can be underestimated and false-positive rates can increase. We then apply the EIRM to assess the impact of a literacy intervention focused on promoting transfer in reading comprehension, using a digital formative assessment delivered online to approximately 8,000 third-grade students. We demonstrate that allowing for IL-HTE can reveal treatment effects at the item level that are masked by a null average treatment effect; the EIRM can thus provide fine-grained information for researchers and policymakers on the potentially heterogeneous causal effects of educational interventions.
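As a reference point, one common way to write an EIRM with item-level treatment effects is sketched below; the notation is an illustrative assumption and may differ from the paper's exact parameterization.

```latex
% Sketch of an explanatory item response model with item-level HTE
% (illustrative notation; the paper's exact parameterization may differ).
\[
\operatorname{logit}\,\Pr(y_{pi}=1)
  = \theta_p + b_i + (\beta + \delta_i)\,T_p,
\qquad
\theta_p \sim \mathcal{N}(0,\sigma_\theta^2),\quad
\delta_i \sim \mathcal{N}(0,\sigma_\delta^2),
\]
where $y_{pi}$ indicates a correct response by student $p$ on item $i$,
$T_p$ is the treatment indicator, $\beta$ is the average treatment effect,
and the $\delta_i$ are item-specific deviations from it. Constraining
$\sigma_\delta^2 = 0$ recovers the standard model that ignores IL-HTE and
can understate uncertainty in $\hat{\beta}$ when the $\delta_i$ vary.
```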
School principals are viewed as a critical lever for improving student outcomes, but important methodological questions remain about how to measure principals' effects. We propose a framework for measuring principals' contributions to student outcomes and apply it empirically using data from Tennessee, New York City, and Oregon. We find that using contemporaneous student outcomes to assess principal performance is flawed: value-added models misattribute to principals changes in student performance caused by factors that principals minimally control. Further, little to none of the variation in average student test scores or attendance is explained by persistent effectiveness differences between principals.
After near-universal school closures in the United States at the start of the pandemic, lawmakers and educational leaders made plans for when and how to reopen schools for the 2020-21 school year. Educational researchers quickly assessed how a range of public health, political, and demographic factors were associated with school reopening decisions and with parent preferences for in-person and remote learning. I review this body of literature to highlight what we can learn from its findings, limitations, and influence on public discourse. Studies consistently highlighted the influence of partisanship, teachers’ unions, and demographics, with mixed findings on COVID-19 rates. The literature offers useful insights but requires more evidence, and it highlights both the benefits and the limitations of rapid research with large-scale quantitative data.
Given recent evidence challenging the replicability of results in the social and behavioral sciences, critical questions have been raised about appropriate measures for determining replication success when comparing effect estimates across studies. At issue is the fact that conclusions about replication success often depend on the measure used to evaluate correspondence in results. Despite the importance of choosing an appropriate measure, there is still no widespread agreement about which measures should be used. This paper addresses these questions by formally describing the most commonly used measures for assessing replication success and by comparing their performance in different contexts according to their replication probabilities, that is, the probability of obtaining replication success given study-specific settings. The measures may be characterized broadly as conclusion-based approaches, which assess the congruence of two independent studies’ conclusions about the presence of an effect, and distance-based approaches, which test for a significant difference or equivalence of two effect estimates. We also introduce a new measure for assessing replication success, the correspondence test, which combines a difference test and an equivalence test in the same framework. To help researchers plan prospective replication efforts, we provide closed-form formulas for power calculations that can be used to determine the minimum detectable effect size (and thus the sample sizes) for each study so that a predetermined minimum replication probability can be achieved. Finally, we use a replication dataset from the Open Science Collaboration (2015) to demonstrate the extent to which conclusions about replication success depend on the correspondence measure selected.
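For intuition, the sketch below implements a generic difference-plus-equivalence (TOST-style) check on two effect estimates, in the spirit of the correspondence test described above. The function, its margin parameter, and the numbers are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative sketch, not the paper's exact correspondence test.
from scipy.stats import norm

def correspondence_check(b1, se1, b2, se2, margin):
    """Compare effect estimates b1 and b2 (with standard errors se1, se2).

    Returns p-values for (a) a two-sided difference test of b1 - b2 = 0 and
    (b) a TOST-style equivalence test that |b1 - b2| < margin.
    """
    diff = b1 - b2
    se = (se1**2 + se2**2) ** 0.5
    p_diff = 2 * norm.sf(abs(diff) / se)      # difference test
    p_lower = norm.sf((diff + margin) / se)   # H0: diff <= -margin
    p_upper = norm.cdf((diff - margin) / se)  # H0: diff >= +margin
    p_equiv = max(p_lower, p_upper)           # TOST: both one-sided tests must reject
    return p_diff, p_equiv

# Example: an original estimate of 0.30 (SE 0.10), a replication of 0.25
# (SE 0.12), and a 0.20 equivalence margin.
p_diff, p_equiv = correspondence_check(0.30, 0.10, 0.25, 0.12, margin=0.20)
print(f"difference p = {p_diff:.3f}, equivalence p = {p_equiv:.3f}")
```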
How scholars name different racial groups has powerful salience for understanding what researchers study. We explored how education researchers used racial terminology in recently published, high-profile, peer-reviewed studies. Our sample included all original empirical studies published in the non-review AERA journals from 2009 to 2019. We found that two-thirds of articles used at least one racial category term, with an increase from about half to almost three-quarters of published studies between 2009 and 2019. Other trends include the increasing popularity of the term Black, the emergence of gender-expansive terms such as Latinx, the popularity of the term Hispanic in quantitative studies, and the paucity of studies using terms that connote missing race data or that describe Indigenous and multiracial peoples.
Teachers are the most important school-specific factor in student learning. Yet little evidence links teacher professional learning programs, and the various strategies or components that comprise them, to student achievement. In this paper, we examine a teacher fellowship model for professional learning designed and implemented by Leading Educators, a national nonprofit organization that aims to bridge research and practice to improve instructional quality and accelerate learning across school systems. During the 2015-16 and 2016-17 school years, Leading Educators conducted its fellowship program for teachers and school leaders to provide educators with ongoing, collaborative, job-embedded professional development and to improve student achievement. Relying on quasi-experimental methods, we find that a school’s participation in the fellowship model increased student proficiency rates in math and English language arts on state achievement exams. Further, student achievement benefited from more sustained teacher participation in the fellowship model, and the impact on student achievement varied with the share of a school’s teachers who participated and with the extent to which teachers independently selected into the fellowship or were appointed to participate by school leaders. Taken together, findings from this paper should inform professional learning organizations, schools, and policymakers on the design, implementation, and impact of teacher professional learning.
We design a commitment contract for college students, "Study More Tomorrow," and conduct a randomized controlled trial testing a model of its demand. The contract commits students to attend peer tutoring if their midterm grade falls below a pre-specified threshold and, in contrast to other commitment devices for studying tested in the literature, carries a financial penalty for noncompliance. We find demand for the contract, with take-up of 10% among students randomly assigned a contract offer. Contract demand is not higher among students randomly assigned to a lower contract price, plausibly because a lower price also means a lower commitment benefit. Students with the highest perceived utility for peer tutoring have greater demand for commitment, consistent with our model. Contrary to the model's predictions, we fail to find evidence of increased demand among present-biased students or among those with a higher self-reported tendency to procrastinate. Our results show that college students are willing to pay for study commitment devices, but the sources of this demand do not align fully with behavioral theories.
A significant share of education and development research uses data collected by workers called "enumerators." It is well documented that "enumerator effects" (inconsistent practices among the individual people who administer measurement tools) can be a key source of error in survey data collection. Less well understood is whether the same problem affects academic assessments or performance tasks. We leverage a remote, phone-based mathematics assessment of primary school students and a survey of their parents in Kenya, in which enumerators were randomized to students, to study the presence of enumerator effects. We find that both the academic assessment and the parent survey were prone to enumerator effects, and we use simulation to show that these effects were large enough to lead to spurious results at a troubling rate in the context of impact evaluation. We therefore recommend that assessment administrators randomize enumerators at the student level and focus on training enumerators to minimize bias.
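The simulation logic the abstract describes can be illustrated with a short Monte Carlo sketch: when enumerator assignment is confounded with treatment arm, enumerator-specific shifts masquerade as program impacts, while student-level randomization of enumerators keeps the false-positive rate near nominal. The setup below is an illustrative assumption, not the authors' actual data or code.

```python
# Illustrative Monte Carlo sketch, not the paper's actual simulation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def one_trial(confounded, n_students=400, n_enum=8, sd_enum=0.3):
    # Half the students are "treated"; the true treatment effect is zero.
    treat = rng.permutation(np.repeat([0, 1], n_students // 2))
    enum_effect = rng.normal(0, sd_enum, n_enum)  # enumerator-specific score shifts
    if confounded:
        # Each arm is assessed by a different half of the enumerator pool.
        enum_id = np.where(treat == 1,
                           rng.integers(0, n_enum // 2, n_students),
                           rng.integers(n_enum // 2, n_enum, n_students))
    else:
        # Enumerators randomized across all students (the recommended design).
        enum_id = rng.integers(0, n_enum, n_students)
    score = enum_effect[enum_id] + rng.normal(0, 1, n_students)
    _, p = stats.ttest_ind(score[treat == 1], score[treat == 0])
    return p < 0.05

for confounded in (True, False):
    false_pos = np.mean([one_trial(confounded) for _ in range(2000)])
    print(f"confounded design={confounded}: false-positive rate = {false_pos:.3f}")
```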