Methodology, measurement and data
Prediction algorithms are used across public policy domains to aid in the identification of at-risk individuals and guide service provision or resource allocation. While growing research has investigated concerns of algorithmic bias, much less research has compared algorithmically driven targeting to the counterfactual: human prediction. We compare algorithmic and human predictions in the context of a national college advising program, focusing in particular on predicting high-achieving, lower-income students’ college enrollment quality. College advisors slightly outperform a prediction algorithm; however, greater advisor accuracy is concentrated among students with whom advisors had more interactions. The algorithm achieved similar accuracy among students lower in the distribution of interactions, despite advisors having substantially more information. We find no evidence that the advisors or the algorithm exhibit bias against vulnerable populations. Our results suggest that, especially at scale, algorithms have the potential to provide efficient, accurate, and unbiased predictions to target scarce social services and resources.
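The core comparison in this abstract, advisor versus algorithm accuracy, stratified by how much contact the advisor had with each student, can be sketched in a few lines. This is a minimal illustration with made-up records and an arbitrary interaction cutoff, not the paper's actual data or evaluation procedure:

```python
# Hypothetical sketch: compare advisor vs. algorithm prediction accuracy,
# split by how many interactions the advisor had with each student.

def accuracy(preds, outcomes):
    """Share of predictions matching the realized outcome."""
    return sum(p == y for p, y in zip(preds, outcomes)) / len(preds)

# Made-up records: (advisor_pred, algo_pred, outcome, n_interactions)
records = [
    (1, 1, 1, 12), (0, 1, 0, 10), (1, 1, 1, 9),   # high-interaction students
    (0, 1, 1, 1),  (1, 0, 0, 2),  (0, 0, 0, 1),   # low-interaction students
]

CUTOFF = 5  # arbitrary split for illustration
high = [r for r in records if r[3] >= CUTOFF]
low = [r for r in records if r[3] < CUTOFF]

for label, group in [("high-interaction", high), ("low-interaction", low)]:
    adv = accuracy([r[0] for r in group], [r[2] for r in group])
    algo = accuracy([r[1] for r in group], [r[2] for r in group])
    print(label, "advisor:", round(adv, 2), "algorithm:", round(algo, 2))
```

In this toy data the advisor is more accurate among high-interaction students while the algorithm does better among low-interaction students, mirroring the pattern the abstract describes.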
This paper provides some of the first natural-experiment evidence on the consequences of a transition from a college-major (early specialization) to a college-then-major (late specialization) choice mechanism. Specifically, we study a recent reform in China that allows college applicants to apply to a meta-major consisting of different majors and to declare a specialization later in college, instead of applying to a specific major. Using 18 years of administrative data on the universe of college applicants in a Chinese province, we examine the impacts of the staggered adoption of the reform across institutions on changes in student composition. Despite aggregate null effects, we find substantial heterogeneous effects across institutions and majors. The paper provides important policy implications for the design of college admissions mechanisms.
One of the most important mechanism design policies in college admissions is whether to let students choose a college and a major sequentially (college-then-major choice) or jointly (college-major choice). In the context of the Chinese meta-major reforms that transition from college-major choice to college-then-major choice, we provide the first experimental evidence on the information frictions and heterogeneous preferences that shape students’ response to the meta-major option. In a randomized experiment with a nationwide sample of 11,424 high school graduates, we find that providing information on the benefits of a meta-major significantly increased students’ willingness to choose the meta-major; however, information about specific majors and assignment mechanisms did not affect students’ major choice preferences. We also find that information provision mostly affected the preferences of students who were from disadvantaged backgrounds, lacked accurate information, did not have clear major preferences, or were risk loving.
Classroom discourse is a core medium of instruction: analyzing it can provide a window into teaching and learning, as well as drive the development of new tools for improving instruction. We introduce the largest dataset of mathematics classroom transcripts available to researchers and demonstrate how these data can help improve instruction. The dataset consists of 1,660 45- to 60-minute 4th and 5th grade elementary mathematics observations collected by the National Center for Teacher Effectiveness (NCTE) between 2010 and 2013. The anonymized transcripts represent data from 317 teachers across 4 school districts that serve largely historically marginalized students. The transcripts come with rich metadata, including turn-level annotations for dialogic discourse moves, classroom observation scores, demographic information, survey responses, and student test scores. We demonstrate that a natural language processing model trained on our turn-level annotations can learn to identify dialogic discourse moves, and that these moves are correlated with better classroom observation scores and learning outcomes. This dataset opens up several possibilities for researchers, educators, and policymakers to learn about and improve K-12 instruction.
This simulation study examines the characteristics of the Explanatory Item Response Model (EIRM) for estimating treatment effects, compared to classical test theory (CTT) sum and mean scores and item response theory (IRT)-based theta scores. Results show that the EIRM and IRT theta scores provide generally equivalent bias and false positive rates compared to CTT scores, and superior calibration of standard errors under model misspecification. Analysis of the statistical power of each method reveals that the EIRM and IRT theta scores provide a marginal benefit to power and are more robust to missing data than other methods when parametric assumptions are met, and provide a substantial benefit to power under heteroskedasticity, but their performance is mixed under other conditions. The methods are illustrated with an empirical application examining the causal effect of an elementary school literacy intervention on reading comprehension test scores, which demonstrates that the EIRM provides a more precise estimate of the average treatment effect than the CTT or IRT theta score approaches. Tradeoffs of model selection and interpretation are discussed.
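The CTT baseline the simulation compares against is simple enough to sketch directly: score each student by summing item responses, then estimate the treatment effect as a difference in group means. The item responses below are made up; the EIRM itself models item-level responses jointly and is not shown here:

```python
# Hypothetical sketch of the CTT sum-score approach used as a baseline:
# sum each student's 0/1 item responses, then take a difference in means
# between treatment and control as the average treatment effect estimate.
import statistics

# item responses per student (1 = correct), made-up data
treated = [[1, 1, 0, 1], [1, 0, 0, 1], [1, 1, 1, 1]]
control = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 1, 1, 0]]

def sum_scores(group):
    """CTT sum score: total number of items answered correctly."""
    return [sum(items) for items in group]

t_scores, c_scores = sum_scores(treated), sum_scores(control)
ate = statistics.mean(t_scores) - statistics.mean(c_scores)
print("CTT sum-score ATE estimate:", round(ate, 3))
```

The EIRM instead estimates the treatment effect as a person-side parameter in an item response model, which is where the precision and robustness differences discussed in the abstract arise.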
Districts nationwide have revised their educator evaluation systems, increasing the frequency with which administrators observe and evaluate teacher instruction. Yet, limited insight exists on the role of evaluator feedback for instructional improvement. Relying on unique observation-level data, we examine the alignment between evaluator and teacher assessments of teacher instruction and the potential consequences for teacher productivity and mobility. We show that teachers and evaluators typically rate teacher performance similarly during classroom observations, but with significant variability in teacher-evaluator ratings. While teacher performance improves across multiple classroom observations, evaluator ratings likely overstate productivity improvements among the lowest-performing teachers. Evaluators, but not teachers, systematically rate teacher performance lower in classrooms serving higher concentrations of economically disadvantaged students. And while teacher performance improves when evaluators provide more critical feedback about teacher instruction, teachers receiving critical feedback may seek alternative teaching assignments in schools with less critical evaluation settings. We discuss the implications of these findings for the design, implementation and impact of educator evaluation systems.
Increasing numbers of students require internet access to pursue their undergraduate degrees, yet broadband access remains inequitable across student populations. Furthermore, surveys that currently show differences in access by student demographics or location typically do so at high levels of aggregation, thereby obscuring important variation between subpopulations within larger groups. Through the dual lenses of quantitative intersectionality and critical race spatial analysis, we use Bayesian multilevel regression and census microdata to model variation in broadband access among undergraduate populations at deeper interactions of identity. We find substantive heterogeneity in student broadband access by gender, race, and place, including between typically aggregated subpopulations. Our findings speak to inequities in students’ geographies of opportunity and suggest a range of policy prescriptions at both the institutional and federal level.
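A key reason to use Bayesian multilevel regression for small intersectional subgroups is partial pooling: subgroup estimates are shrunk toward the overall mean, with more shrinkage for smaller groups. The sketch below illustrates only that shrinkage idea with invented numbers and a crude precision-weighting formula; it is not the paper's model:

```python
# Toy illustration of partial pooling in multilevel models: a subgroup's
# observed broadband-access rate is pulled toward the grand mean, with
# more pull when the subgroup sample is small. `prior_n` is an arbitrary
# pseudo-sample size standing in for the estimated group-level variance.
def partial_pool(group_mean, n, grand_mean, prior_n=10):
    w = n / (n + prior_n)  # weight on the subgroup's own data
    return w * group_mean + (1 - w) * grand_mean

grand = 0.80  # hypothetical overall broadband-access rate
groups = {"small subgroup": (0.60, 5), "large subgroup": (0.95, 200)}
for name, (mean, n) in groups.items():
    print(name, "raw:", mean, "pooled:", round(partial_pool(mean, n, grand), 3))
```

The small subgroup's estimate moves substantially toward 0.80, while the large subgroup's barely moves; a full Bayesian model learns the amount of shrinkage from the data rather than fixing it.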
Community schools are an increasingly popular strategy used to improve the performance of students whose learning may be disrupted by non-academic challenges related to poverty. Community schools partner with community-based organizations (CBOs) to provide integrated supports such as health and social services, family education, and extended learning opportunities. With over 300 community schools, the New York City Community Schools Initiative (NYC-CS) is the largest of these programs in the country. Using a novel method that combines multiple rating regression discontinuity design (MRRDD) with machine learning (ML) techniques, we estimate the causal effect of NYC-CS on elementary and middle school student attendance and academic achievement. We find an immediate reduction in chronic absenteeism of 5.6 percentage points, which persists over the following three years. We also find large improvements in math and ELA test scores (increases of 0.26 and 0.16 standard deviations, respectively, by the third year after implementation), although these effects took longer to manifest than the effects on attendance. Our findings suggest that improved attendance is a leading indicator of success of this model and may be followed by longer-run improvements in academic achievement, which has important implications for how community school programs should be evaluated.
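The regression discontinuity logic underlying MRRDD can be sketched in its simplest single-rating form: fit a line to the outcome on each side of an eligibility cutoff and take the jump between the two fits at the cutoff as the effect estimate. The data points below are invented, and the paper's actual method handles multiple rating variables and incorporates ML, neither of which is shown:

```python
# Simplified, single-rating regression discontinuity sketch with made-up
# data: schools below the cutoff receive the program; the treatment effect
# is the gap between the two fitted lines at the cutoff.
def linfit(xs, ys):
    """Ordinary least squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

cutoff = 0.0
# (rating, outcome) pairs on each side of the cutoff (hypothetical)
below = [(-3, 0.50), (-2, 0.55), (-1, 0.60)]  # program schools
above = [(1, 0.40), (2, 0.45), (3, 0.50)]     # comparison schools

a0, b0 = linfit(*zip(*below))
a1, b1 = linfit(*zip(*above))
effect = (a0 + b0 * cutoff) - (a1 + b1 * cutoff)  # jump at the cutoff
print("RD effect estimate at cutoff:", round(effect, 3))
```

In practice local linear fits within a bandwidth around the cutoff are used rather than fits to all observations, and MRRDD combines discontinuities across several rating variables.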
How much does family demand matter for child learning in settings of extreme poverty? In rural Gambia, families with high aspirations for their children’s future education and career, measured before children start school, go on to invest substantially more than other families in the early years of their children’s education. Despite this, essentially no children are literate or numerate three years later. When villages receive a highly impactful, teacher-focused supply-side intervention, however, children of these families are 25 percent more likely to achieve literacy and numeracy than other children in the same village. Furthermore, improved supply enables these children to acquire other higher-level skills necessary for later learning and child development. We also document patterns of substitutability and complementarity between demand and supply in generating learning at varying levels of skill difficulty. Our analysis shows that greater demand can map onto developmentally meaningful learning differences in such settings, but only with adequate complementary inputs on the supply side.
Data science applications are increasingly entwined in students’ educational experiences. One prominent application of data science in education is to predict students’ risk of failing a course or dropping out of college. There is growing interest among higher education researchers and administrators in whether learning management system (LMS) data, which capture very detailed information on students’ engagement in and performance on course activities, can improve model performance. We systematically evaluate whether incorporating LMS data into course performance prediction models improves model performance, conducting the analysis within an entire state community college system. Among students with prior academic history in college, administrative data-only models substantially outperform LMS data-only models and are quite accurate at predicting whether students will struggle in a course. Among first-time students, LMS data-only models outperform administrative data-only models, and we achieve the highest performance for first-time students with models that include data from both sources. We also show that models achieve similar performance with a small, judiciously selected set of predictors, and that models trained on system-wide data achieve performance similar to models trained on individual courses.
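Comparisons like the one this abstract describes, an administrative-data model versus an LMS-data model, are typically summarized with a ranking metric such as AUC: the probability that a student who struggled is ranked as higher-risk than one who did not. The scores and labels below are invented, and the abstract does not state which metric the authors used, so this is only a generic illustration:

```python
# Hypothetical sketch: compare two risk-score models with AUC, computed
# directly from its pairwise-ranking definition (ties count as half).
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 0]                 # 1 = struggled in the course
admin_model = [0.9, 0.7, 0.6, 0.3, 0.2]  # made-up risk scores
lms_model = [0.8, 0.4, 0.5, 0.6, 0.1]    # made-up risk scores

print("admin AUC:", round(auc(admin_model, labels), 2),
      "LMS AUC:", round(auc(lms_model, labels), 2))
```

This pairwise definition is exact but quadratic in the number of students; production pipelines compute the same quantity from a sorted ranking.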