Methodology, measurement and data
The worldwide school closures in early 2020 led to learning losses that will not easily be made up, even if schools quickly return to their prior performance levels. These losses will have lasting economic impacts both on the affected students and on each nation unless they are effectively remediated.
While the precise learning losses are not yet known, existing research suggests that the students in grades 1-12 affected by the closures might expect some 3 percent lower income over their entire lifetimes. For nations, the lower long-term growth related to such losses might yield an average of 1.5 percent lower annual GDP for the remainder of the century. These economic losses would grow if schools are unable to re-start quickly.
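The scale of such national losses follows from simple discounting arithmetic. The sketch below is a back-of-the-envelope illustration, not the paper's model: the 1.5 percent annual GDP loss comes from the text, but the discount rate and horizon are assumed purely for illustration.

```python
# Illustrative only: present value of a persistent annual GDP loss,
# expressed as a multiple of one year's GDP. The 1.5% loss share is
# from the abstract; the 3% discount rate and 80-year horizon are
# hypothetical assumptions, not figures from the paper.

def pv_of_gdp_losses(annual_loss_share, years, discount_rate):
    """Discounted sum of a constant annual loss share of (normalised) GDP."""
    return sum(annual_loss_share / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# Under these assumed parameters, losing 1.5% of GDP each year for the
# rest of the century is worth roughly 0.45 of one full year's GDP today.
pv = pv_of_gdp_losses(0.015, 80, 0.03)
```

Even modest-looking annual percentages therefore compound into a present value on the order of half a year's national output.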
The economic losses will be more deeply felt by disadvantaged students. All indications are that students whose families are less able to support out-of-school learning will face larger learning losses than their more advantaged peers, which in turn will translate into deeper losses of lifetime earnings.
The present value of the economic losses to nations reaches huge proportions. Just returning schools to where they were in 2019 will not avoid such losses. Only making them better can. While a variety of approaches might be attempted, existing research indicates that close attention to the modified re-opening of schools offers strategies that could ameliorate the losses. Specifically, with the expected increase in video-based instruction, matching the skills of the teaching force to the new range of tasks and activities could quickly move schools to heightened performance. Additionally, because the prior disruptions are likely to increase the variations in learning levels within individual classrooms, pivoting to more individualised instruction could leave all students better off as schools resume.
As schools move to re-establish their programmes even as the pandemic continues, it is natural to focus considerable attention on the mechanics and logistics of safe re-opening. But the long-term economic impacts also require serious attention, because the losses already suffered demand more than the best of currently considered re-opening approaches.
State testing programs regularly release previously administered test items to the public. We provide an open-source recipe for state, district, and school assessment coordinators to combine these items flexibly to produce scores linked to established state score scales. These scores would enable estimation of student score distributions and achievement levels. We discuss how educators can use resulting scores to estimate achievement distributions at the classroom and school level. We emphasize that any use of such tests should be tertiary, with no stakes for students, educators, or schools, particularly in the context of a crisis like the COVID-19 pandemic. These tests and their results should also be lower in priority than assessments of physical, mental, and social–emotional health, and lower in priority than classroom and district assessments that may already be in place. We encourage state testing programs to release all the ingredients for this recipe to support low-stakes, aggregate-level assessments. This is particularly urgent during a crisis where scores may be declining and gaps increasing at unknown rates.
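The core idea of linking a locally assembled form to an established scale can be sketched with the simplest classical approach, mean-sigma linear linking. This is a simplified stand-in, not the paper's recipe (state linkings typically rely on IRT item parameters), and all scale values below are hypothetical.

```python
# Simplified mean-sigma linking sketch: map raw scores from a locally
# assembled form onto an established scale by matching the reference
# mean and standard deviation. Hypothetical numbers; real state
# linkings generally use IRT item parameters rather than this.

def mean_sigma_link(raw_mean, raw_sd, scale_mean, scale_sd):
    """Return (A, B) such that scale_score = A * raw_score + B."""
    A = scale_sd / raw_sd
    B = scale_mean - A * raw_mean
    return A, B

def to_scale(raw_score, A, B):
    return A * raw_score + B

# Example: a form with raw mean 20 (SD 5), linked to an established
# scale with mean 500 (SD 100); a raw score of 25 maps to 600.
A, B = mean_sigma_link(20, 5, 500, 100)
```

Aggregating such linked scores over a classroom or school, rather than reporting them per student, is consistent with the low-stakes, aggregate-level use the authors advocate.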
Evidence on educational returns and the factors that determine the demand for schooling in developing countries is extremely scarce. We use two surveys from Tanzania to estimate both the actual and perceived schooling returns and subsequently examine what factors drive individual misperceptions regarding actual returns. Using ordinary least squares and instrumental variable methods, we find that each additional year of schooling in Tanzania increases earnings, on average, by 9 to 11 percent. We find that on average, individuals underestimate returns to schooling by 74 to 79 percent, and three factors are associated with these misperceptions: income, asset poverty, and educational attainment. Shedding light on what factors relate to individual beliefs about educational returns can inform policy on how to structure effective interventions to correct individuals' misperceptions.
Numerous studies have considered the important role of cognition in estimating the returns to schooling. How cognitive abilities affect schooling may have important policy implications, especially in developing countries during periods of increasing educational attainment. Using two longitudinal labor surveys that collect direct proxy measures of cognitive skills, we study the importance of specific cognitive domains for the returns to schooling in two samples. Instrumenting for schooling levels, we find that each additional year of schooling leads to an increase in earnings of approximately 18-20 percent. The estimated effect sizes—based on the two-stage least squares estimates—are above the corresponding ordinary least squares estimates. Furthermore, we estimate and demonstrate the importance of specific cognitive domains in the classical Mincer equation. We find that executive functioning skills (i.e., memory and orientation) are important drivers of earnings in the rural sample, whereas higher-order cognitive skills (i.e., numeracy) are more important for determining earnings in the urban sample. Although numeracy is tested in both samples, it is only a statistically significant predictor of earnings in the urban sample. (JEL I21, F63, F66, N37)
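For reference, the classical Mincer equation mentioned in both abstracts above has a standard form; extending it with cognitive skill measures, as described here, might look like the following (the exact specification and controls in the paper are not given in the abstract, so the skill term is an assumption about the general approach):

```latex
\ln w_i \;=\; \alpha \;+\; \beta S_i \;+\; \gamma_1 X_i \;+\; \gamma_2 X_i^2 \;+\; \delta' C_i \;+\; \varepsilon_i
```

where $w_i$ is individual earnings, $S_i$ years of schooling, $X_i$ potential labor-market experience, and $C_i$ a vector of cognitive skill measures; $\beta$ is the return to schooling (the 9-11 percent OLS/IV estimate in Tanzania, and the 18-20 percent two-stage least squares estimate here). When $S_i$ is endogenous, $\beta$ is estimated by instrumenting for schooling, as both papers do.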
A common rationale for offering online courses in K-12 schools is that they allow students to take courses not offered at their schools; however, there has been little research on how online courses are used to expand curricular options when operating at scale. We assess the extent to which students and schools use online courses for this purpose by analyzing statewide, student-course level data from high school students in Florida, which has the largest virtual sector in the nation. We introduce a “novel course” framework to address this question. We define a virtual course as “novel” if it is only available to a student virtually, not face-to-face through their own home high school. We find that 7% of high school students in 2013-14 enrolled in novel online courses. Novel courses were more commonly used by higher-achieving students, in rural schools, and in schools with relatively few Advanced Placement/International Baccalaureate offerings.
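The "novel course" definition is a simple set operation on course catalogs, which can be made concrete as follows. The course codes and school identifiers below are invented for illustration; the paper's actual coding of Florida course data may differ.

```python
# Sketch of the "novel course" classification: a virtual course is
# novel for a student if it is not offered face-to-face at their home
# high school. All course codes and school IDs here are hypothetical.

def novel_courses(virtual_enrollments, home_school, f2f_offerings):
    """Return the subset of a student's virtual courses that are not
    available face-to-face at the student's home school.

    virtual_enrollments: set of course codes taken virtually
    f2f_offerings: dict mapping school ID -> set of face-to-face courses
    """
    offered_at_home = f2f_offerings.get(home_school, set())
    return {c for c in virtual_enrollments if c not in offered_at_home}
```

Applied to every student-course record in a statewide file, this classification yields the share of students (7% here) whose virtual enrollment genuinely expands their curricular options rather than duplicating a home-school offering.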
Enrollment in higher education has risen dramatically in Latin America, especially in Chile. Yet graduation and persistence rates remain low. One way to improve graduation and persistence is to use data and analytics to identify students at risk of dropout, target interventions, and evaluate interventions’ effectiveness at improving student success. We illustrate the potential of this approach using data from eight Chilean universities. Results show that data available at matriculation are only weakly predictive of persistence, while prediction improves dramatically once data on university grades become available. Some predictors of persistence are under policy control. Financial aid predicts higher persistence, and being denied a first-choice major predicts lower persistence. Student success programs are ineffective at some universities; they are more effective at others, but when effective they often fail to target the highest risk students. Universities should use data regularly and systematically to identify high-risk students, target them with interventions, and evaluate those interventions’ effectiveness.
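The targeting step described here, scoring each student's dropout risk and flagging the highest-risk students for intervention, can be sketched with a toy logistic model. Everything below is hypothetical: the coefficients are invented for illustration (with signs matching the abstract's findings that financial aid predicts higher persistence and first-choice denial predicts lower persistence), and a real system would fit them to institutional data.

```python
# Illustrative risk-scoring sketch, NOT the authors' model. The
# logistic coefficients are hypothetical; only their signs follow the
# abstract (aid -> higher persistence, denied first choice -> lower).
import math

def persistence_risk(gpa, credits_attempted, has_financial_aid, denied_first_choice):
    """Return a hypothetical dropout-risk probability in [0, 1]."""
    z = (-2.0
         + 1.2 * gpa                        # university grades dominate
         + 0.05 * credits_attempted
         + 0.4 * (1 if has_financial_aid else 0)
         - 0.5 * (1 if denied_first_choice else 0))
    p_persist = 1 / (1 + math.exp(-z))
    return 1 - p_persist                    # dropout risk

def flag_high_risk(students, threshold=0.5):
    """Return IDs of students whose modeled dropout risk exceeds threshold."""
    return [s["id"] for s in students
            if persistence_risk(s["gpa"], s["credits"],
                                s["aid"], s["denied"]) >= threshold]
```

The abstract's key finding maps directly onto this sketch: matriculation-only features predict weakly, so a useful version of `persistence_risk` only becomes possible once first-term university grades are available.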
Clustered observational studies (COSs) are a critical analytic tool for educational effectiveness research. We present a design framework for the development and critique of COSs. The framework is built on the counterfactual model for causal inference and promotes the concept of designing COSs that emulate the target randomized trial that would have been conducted were it feasible. We emphasize the key role of understanding the assignment mechanism in study design. We review methods for statistical adjustment and highlight a recently developed form of matching designed specifically for COSs. We review how regression models can be profitably combined with matching and note best practices for estimating statistical uncertainty. Finally, we review how sensitivity analyses can determine whether conclusions are sensitive to bias from potential unobserved confounders. We demonstrate concepts with an evaluation of a summer school reading intervention in Wake County, North Carolina.
Many interventions in education occur in settings where treatments are applied to groups. For example, a reading intervention may be implemented for all students in some schools and withheld from students in other schools. When such treatments are non-randomly allocated, outcomes across the treated and control groups may differ due to the treatment or due to baseline differences between groups. When this is the case, researchers can use statistical adjustment to make treated and control groups similar in terms of observed characteristics. Recent work in statistics has developed matching methods designed for contexts where treatments are clustered. This form of matching, known as multilevel matching, may be well suited to many education applications where treatments are assigned to schools. In this article, we provide an extensive evaluation of multilevel matching and compare it to multilevel regression modeling. We evaluate multilevel matching methods in two ways. First, we use these matching methods to recover treatment effect estimates from three clustered randomized trials using a within-study comparison design. Second, we conduct a simulation study. We find evidence that generally favors an analytic approach to statistical adjustment that combines multilevel matching with regression adjustment. We conclude with an empirical application.
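The intuition behind cluster-level matching can be conveyed with a deliberately simplified sketch: greedily pair each treated school with its nearest unmatched control school on school-level covariates. Full multilevel matching as evaluated in the paper is considerably richer (it also balances student-level composition and uses optimal rather than greedy matching), and the covariate values below are invented.

```python
# Greatly simplified sketch of the idea behind matching clusters:
# greedy nearest-neighbor pairing of treated and control schools on
# school-level covariates (Euclidean distance, without replacement).
# The actual multilevel matching methods evaluated in the paper are
# more sophisticated; all data here are hypothetical.
import math

def match_schools(treated, controls):
    """treated, controls: dicts mapping school ID -> covariate tuple
    (e.g., (share free/reduced lunch, baseline mean score)).
    Returns a list of (treated_id, matched_control_id) pairs."""
    available = dict(controls)
    pairs = []
    for t_id, t_cov in treated.items():
        c_id = min(available, key=lambda c: math.dist(t_cov, available[c]))
        pairs.append((t_id, c_id))
        del available[c_id]            # match without replacement
    return pairs
```

Outcomes would then be compared within matched pairs, typically with a regression adjustment on top, which is the combined approach the paper's within-study comparisons and simulations generally favor.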
Researchers commonly interpret effect sizes by applying benchmarks proposed by Cohen more than half a century ago. However, effects that are small by Cohen’s standards are large relative to the impacts of most field-based interventions. These benchmarks also fail to consider important differences in study features, program costs, and scalability. In this paper, I present five broad guidelines for interpreting effect sizes that are applicable across the social sciences. I then propose a more structured schema with new empirical benchmarks for interpreting a specific class of studies: causal research on education interventions with standardized achievement outcomes. Together, these tools provide a practical approach for incorporating study features, cost, and scalability into the process of interpreting the policy importance of effect sizes.
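The quantity to which both Cohen's benchmarks and the proposed empirical benchmarks apply is the standardized mean difference. A minimal computation, using the conventional pooled-SD form of Cohen's d:

```python
# Conventional Cohen's d: difference in group means divided by the
# pooled standard deviation. This is the effect-size metric the
# benchmarks discussed above are applied to.
import math

def cohens_d(treatment, control):
    n1, n2 = len(treatment), len(control)
    m1 = sum(treatment) / n1
    m2 = sum(control) / n2
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

The paper's point is that the number this function returns cannot be interpreted in isolation: a d of 0.10 on a standardized achievement outcome may be substantively large for a cheap, scalable field intervention even though it is "small" by Cohen's original standards.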
Using rich longitudinal data from one of the largest teacher education programs in Texas, we examine the measurement of pre-service teacher (PST) quality and its relationship with entry into the K–12 public school teacher workforce. Drawing on rubric-based observations of PSTs during clinical teaching, we find that little of the variation in observation scores is attributable to actual differences between PSTs. Instead, differences in scores largely reflect differences in the rating standards of field supervisors. We also find that men and PSTs of color receive systematically lower scores. Finally, higher-scoring PSTs are slightly more likely to enter the teacher workforce and substantially more likely to be hired at the same school as their clinical teaching placement.
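The measurement concern here, that observation scores largely reflect which field supervisor did the rating, can be illustrated with a crude severity adjustment: center each rater's scores on that rater's own mean before comparing PSTs. The paper's actual analysis is model-based and handles non-random assignment of raters to PSTs; this sketch, with invented data, only conveys the intuition.

```python
# Crude illustration of adjusting for rater severity: express each
# score as a deviation from the rater's own mean, then average by PST.
# The paper's analysis is model-based; this sketch and its data are
# purely illustrative.

def rater_centered(scores):
    """scores: list of (rater_id, pst_id, score) tuples.
    Returns {pst_id: mean rater-centered score}."""
    by_rater = {}
    for rater, _, s in scores:
        by_rater.setdefault(rater, []).append(s)
    rater_mean = {r: sum(v) / len(v) for r, v in by_rater.items()}

    by_pst = {}
    for rater, pst, s in scores:
        by_pst.setdefault(pst, []).append(s - rater_mean[rater])
    return {p: sum(v) / len(v) for p, v in by_pst.items()}
```

If most of the raw-score variance disappears after this kind of centering, scores are telling us more about supervisors' rating standards than about differences between PSTs, which is essentially the pattern the paper reports.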