Methodology, measurement and data
What happens when employers screen their employees but only observe a subset of output? We specify a model with heterogeneous employees and show that their response to the screening affects output in both the probationary period and the post-probationary period. The post-probationary impact is due to their heterogeneous responses affecting which individuals are retained and hence the screening efficiency. We show that the impact of the endogenous response on both the unobserved outcome and screening efficiency depends on whether increased effort on one task increases or decreases the marginal cost of effort on the other task. If the response decreases unobserved output in the probationary period, then it increases screening efficiency, and vice versa. We then assess these predictions empirically by studying a change to teacher tenure policy in New York City, which increased the role that a single measure -- test score value-added -- played in tenure decisions. We show that in response to the policy, teachers increased test score value-added and decreased output that did not enter the tenure decision. The increase in test score value-added was largest for the teachers with more ability to improve students' untargeted outcomes, increasing their likelihood of getting tenure. We estimate that the endogenous response to the policy announcement reduced the screening efficiency gap -- defined as the reduction of screening efficiency stemming from the partial observability of output -- by 28%, effectively shifting some of the cost of partial observability from the post-tenure period to the pre-tenure period.
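The cross-partial condition at the heart of this argument can be sketched as follows (the notation C, e_o, e_u is ours for illustration, not necessarily the paper's):

```latex
% Let C(e_o, e_u) denote an employee's cost of exerting effort e_o on the
% observed task (e.g., test-score value-added) and e_u on the unobserved task.
% The sign of the cross-partial governs the direction of the response:
\[
  \frac{\partial^{2} C}{\partial e_o \, \partial e_u} > 0
  \;\Longrightarrow\;
  \text{raising } e_o \text{ raises the marginal cost of } e_u
  \text{, so unobserved output falls during probation;}
\]
\[
  \frac{\partial^{2} C}{\partial e_o \, \partial e_u} < 0
  \;\Longrightarrow\;
  \text{raising } e_o \text{ lowers the marginal cost of } e_u
  \text{, so unobserved output rises.}
\]
% In the first case the probationary response hurts unobserved output but,
% per the paper's result, raises screening efficiency; in the second case
% the pattern reverses.
```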
Prediction algorithms are used across public policy domains to aid in the identification of at-risk individuals and guide service provision or resource allocation. While a growing body of research has investigated concerns of algorithmic bias, much less research has compared algorithmically driven targeting to the counterfactual: human prediction. We compare algorithmic and human predictions in the context of a national college advising program, focusing in particular on predicting high-achieving, lower-income students’ college enrollment quality. College advisors slightly outperform a prediction algorithm; however, greater advisor accuracy is concentrated among students with whom advisors had more interactions. The algorithm achieved similar accuracy among students lower in the distribution of interactions, despite advisors having substantially more information. We find no evidence that the advisors or algorithm exhibit bias against vulnerable populations. Our results suggest that, especially at scale, algorithms have the potential to provide efficient, accurate, and unbiased predictions to target scarce social services and resources.
This paper provides some of the first natural experimental evidence on the consequences of a transition from a college-major (early specialization) to a college-then-major (late specialization) choice mechanism. Specifically, we study a recent reform in China that allows college applicants to apply to a meta-major consisting of different majors and to declare a specialization late in college, instead of applying to a specific major. Using administrative data over 18 years on the universe of college applicants in a Chinese province, we examine the impacts of the staggered adoption of the reform across institutions on changes in student composition. We find substantial heterogeneous effects across institutions and majors despite null effects in the aggregate. The paper carries important policy implications for the design of college admissions mechanisms.
One of the most important mechanism design policies in college admissions is whether to let students choose a college major sequentially (college-then-major choice) or jointly (college-major choice). In the context of the Chinese meta-major reforms that transition from college-major choice to college-then-major choice, we provide the first experimental evidence on the information frictions and heterogeneous preferences that shape students' response to the meta-major option. In a randomized experiment with a nationwide sample of 11,424 high school graduates, we find that providing information on the benefits of a meta-major significantly increased students’ willingness to choose the meta-major; however, information about specific majors and assignment mechanisms did not affect students' major choice preferences. We also find that information provision mostly affected the preferences of students who were from disadvantaged backgrounds, lacked accurate information, did not have clear major preferences, or were risk-loving.
We develop a unifying conceptual framework for understanding and predicting teacher shortages at the state, region, district, and school levels. We then generate and test hypotheses about geographic, grade level, and subject variation in teacher shortages using data on teaching vacancies in Tennessee during the fall of 2019. We find that teacher staffing challenges are highly localized, causing shortages and surpluses to coexist. Aggregate descriptions of staffing challenges mask considerable variation between schools and subjects within districts. Schools with fewer local early-career teachers, smaller district salary increases, worse working conditions, and higher historical attrition rates have higher vacancy rates. Our findings illustrate why viewpoints about, and solutions to, shortages depend critically on whether one takes an aggregate or local perspective.
Classroom discourse is a core medium of instruction --- analyzing it can provide a window into teaching and learning as well as drive the development of new tools for improving instruction. We introduce the largest dataset of mathematics classroom transcripts available to researchers and demonstrate how this data can help improve instruction. The dataset consists of 1,660 45- to 60-minute-long 4th and 5th grade elementary mathematics observations collected by the National Center for Teacher Effectiveness (NCTE) between 2010 and 2013. The anonymized transcripts represent data from 317 teachers across 4 school districts that serve largely historically marginalized students. The transcripts come with rich metadata, including turn-level annotations for dialogic discourse moves, classroom observation scores, demographic information, survey responses, and student test scores. We demonstrate that our natural language processing model, trained on our turn-level annotations, can learn to identify dialogic discourse moves, and that these moves are correlated with better classroom observation scores and learning outcomes. This dataset opens up several possibilities for researchers, educators, and policymakers to learn about and improve K-12 instruction.
The data and its terms of use can be accessed here: https://github.com/ddemszky/classroom-transcript-analysis
This simulation study examines the characteristics of the Explanatory Item Response Model (EIRM) when estimating treatment effects, compared to classical test theory (CTT) sum and mean scores and item response theory (IRT)-based theta scores. Results show that the EIRM and IRT theta scores provide generally equivalent bias and false positive rates compared to CTT scores, and superior calibration of standard errors under model misspecification. Analysis of the statistical power of each method reveals that the EIRM and IRT theta scores provide a marginal benefit to power and are more robust to missing data than the other methods when parametric assumptions are met, and provide a substantial benefit to power under heteroskedasticity, but their performance is mixed under other conditions. The methods are illustrated with an empirical application examining the causal effect of an elementary school literacy intervention on reading comprehension test scores, which demonstrates that the EIRM provides a more precise estimate of the average treatment effect than the CTT or IRT theta score approaches. Tradeoffs of model selection and interpretation are discussed.
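As a rough illustration of the kind of comparison the study runs, here is a minimal simulation sketch (ours, not the paper's code). It contrasts a treatment-effect estimate from standardized CTT sum scores with one from person-level Rasch theta estimates; for simplicity the item difficulties are treated as known, and the full EIRM (a mixed-effects item-level model) is not fit here.

```python
# Hypothetical sketch: compare treatment-effect estimates from CTT sum
# scores vs. per-person IRT (Rasch) theta estimates on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n, n_items, tau = 2000, 20, 0.3          # persons, items, true effect (SD units)
treat = rng.integers(0, 2, n)            # random treatment assignment
theta = rng.normal(0, 1, n) + tau * treat
b = rng.normal(0, 1, n_items)            # item difficulties (assumed known)

# Rasch response probabilities and simulated binary item responses
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
y = (rng.random((n, n_items)) < p).astype(float)

# CTT approach: standardized sum score, difference in group means
sum_score = y.sum(axis=1)
z = (sum_score - sum_score.mean()) / sum_score.std()
ate_ctt = z[treat == 1].mean() - z[treat == 0].mean()

# IRT approach: per-person Rasch theta via Newton-Raphson MLE
def theta_mle(resp, b, iters=25):
    t = 0.0
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(t - b)))
        grad = np.sum(resp - p)          # score function of the log-likelihood
        info = np.sum(p * (1 - p))       # Fisher information
        t += grad / info
    return t

# drop all-0 / all-1 response patterns, whose MLE is infinite
mask = (sum_score > 0) & (sum_score < n_items)
th_hat = np.array([theta_mle(y[i], b) for i in np.where(mask)[0]])
tr = treat[mask]
ate_irt = th_hat[tr == 1].mean() - th_hat[tr == 0].mean()
```

Both estimators recover an effect near the true value of 0.3, with the CTT version slightly attenuated by measurement error in the sum score. An actual EIRM would instead regress the item-level responses on a treatment indicator in a mixed-effects logistic model, propagating item-level uncertainty into the treatment-effect standard error.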
Districts nationwide have revised their educator evaluation systems, increasing the frequency with which administrators observe and evaluate teacher instruction. Yet, limited insight exists on the role of evaluator feedback in instructional improvement. Relying on unique observation-level data, we examine the alignment between evaluator and teacher assessments of teacher instruction and the potential consequences for teacher productivity and mobility. We show that teachers and evaluators typically rate teacher performance similarly during classroom observations, but with significant variability in teacher-evaluator ratings. While teacher performance improves across multiple classroom observations, evaluator ratings likely overstate productivity improvements among the lowest-performing teachers. Evaluators, but not teachers, systematically rate teacher performance lower in classrooms serving higher concentrations of economically disadvantaged students. And while teacher performance improves when evaluators provide more critical feedback about teacher instruction, teachers receiving critical feedback may seek alternative teaching assignments in schools with less critical evaluation settings. We discuss the implications of these findings for the design, implementation, and impact of educator evaluation systems.
Books shape how children learn about society and norms, in part through representation of different characters. We introduce new artificial intelligence methods for systematically converting images into data and apply them, along with text analysis methods, to measure the representation of race, gender, and age in award-winning children’s books from the past century. We find that more characters with darker skin color appear over time, but the most influential books persistently depict a greater proportion of light-skinned characters than other books, even after conditioning on race; we also find that children are depicted with lighter skin than adults. Relative to their growing share of the U.S. population, Black and Latinx people are underrepresented in these same books, while White males are overrepresented. Over time, females are increasingly present but appear less often in text than in images, suggesting greater symbolic inclusion in pictures than substantive inclusion in stories. We then report empirical evidence for predictions about the supply of and demand for representation that would generate these patterns. On the demand side, we show that people consume books that center their own identities. On the supply side, we document higher prices for books that center non-dominant social identities and fewer copies of these books in libraries that serve predominantly White communities. Lastly, we show that the types of children’s books purchased in a neighborhood are related to local political beliefs.
Increasing numbers of students require internet access to pursue their undergraduate degrees, yet broadband access remains inequitable across student populations. Furthermore, surveys that currently show differences in access by student demographics or location typically do so at high levels of aggregation, thereby obscuring important variation between subpopulations within larger groups. Through the dual lenses of quantitative intersectionality and critical race spatial analysis, we use Bayesian multilevel regression and census microdata to model variation in broadband access among undergraduate populations at deeper interactions of identity. We find substantive heterogeneity in student broadband access by gender, race, and place, including between typically aggregated subpopulations. Our findings speak to inequities in students’ geographies of opportunity and suggest a range of policy prescriptions at both the institutional and federal levels.