Zachary Mabel

Monnica Chan, Zachary Mabel, Preeya Pandya Mbekeani.

Performance-based funding models for higher education, which tie state support for institutions to performance on student outcomes, have proliferated in recent decades. Some states have designed these policies to also address educational attainment gaps by including bonus payments for outcomes achieved by traditionally low-performing groups. Using a synthetic control research design, we examine the impact of these funding regimes on race-based completion gaps in Tennessee and Ohio. We find no evidence that performance-based funding narrowed race-based completion gaps. In fact, contrary to the policies' intended purpose, we find that performance-based funding widened existing gaps in certificate completion in Tennessee. Across both states, the estimated impacts on associate degree outcomes are also directionally consistent with performance-based funding exacerbating racial inequities in associate degree attainment.
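
At its core, a synthetic control design compares the treated state to a weighted combination of untreated "donor" states, with weights chosen to reproduce the treated state's pre-policy trajectory. The sketch below illustrates only that weight-fitting step on simulated data; all names and values are hypothetical, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical pre-policy completion-gap trajectories.
# y_treated: (T_pre,) outcomes for the treated state (e.g., Tennessee).
# Y_donors:  (T_pre, J) outcomes for J untreated "donor" states.
rng = np.random.default_rng(0)
T_pre, J = 8, 15
Y_donors = rng.normal(10.0, 2.0, size=(T_pre, J))
y_treated = Y_donors @ rng.dirichlet(np.ones(J)) + rng.normal(0.0, 0.1, T_pre)

def pre_period_loss(w):
    """Squared distance between the treated state and its synthetic control."""
    return float(np.sum((y_treated - Y_donors @ w) ** 2))

# Donor weights are constrained to be non-negative and to sum to one.
constraints = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
bounds = [(0.0, 1.0)] * J
result = minimize(pre_period_loss, np.full(J, 1.0 / J),
                  bounds=bounds, constraints=constraints, method="SLSQP")

# With real data, the estimated effect is the post-policy gap between the
# treated state's outcomes and the weighted (result.x) donor outcomes.
print("Donor weights:", np.round(result.x, 3))
```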

Eric Bettinger, Benjamin L. Castleman, Alice Choe, Zachary Mabel.

Nearly half of students who enter college do not graduate. Most efforts to increase college completion have focused on supporting students before or soon after they enter college, yet many students drop out after making significant progress toward their degrees. In this paper, we report results from a multi-year, large-scale experimental intervention conducted across five states and 20 broad-access public colleges and universities to support students who are late in their college careers but still at risk of not graduating. The intervention provided these “near-completer” students with personalized text messages that encouraged them to connect with campus-based academic and financial resources, reminded them of important upcoming deadlines, and invited them to engage (via text) with campus-based advisors. We find little evidence that the message campaign affected academic performance or attainment, either in the full sample or within individual higher education systems or student subgroups. The findings suggest that low-cost nudge interventions may be insufficient for addressing barriers to completion among students who have made considerable academic progress.

Kelli A. Bird, Benjamin L. Castleman, Zachary Mabel, Yifeng Song.

Colleges have increasingly turned to predictive analytics to target at-risk students for additional support. Most predictive analytics applications in higher education are proprietary, with private companies offering little transparency about their underlying models. We address this lack of transparency by systematically examining two dimensions of model design: (1) how different approaches to sample and variable construction affect model accuracy; and (2) how the choice of predictive modeling approach, ranging from methods many institutional researchers would be familiar with to more complex machine learning methods, affects model performance and the stability of predicted scores. The relative ranking of students’ predicted probabilities of completing college varies substantially across modeling approaches. While we observe substantial performance gains from models trained on a sample structured to represent students’ typical enrollment spells and equipped with a robust set of predictors, we observe similar performance between the simplest and most complex models.
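
As a rough illustration of this kind of comparison (using simulated data and scikit-learn, not the paper's models or data), the sketch below contrasts a familiar logistic regression with a gradient-boosted model on both overall accuracy and the stability of student-level rankings:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical student-level data: predictors X, completion indicator y.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

simple = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
flexible = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

p_simple = simple.predict_proba(X_te)[:, 1]
p_flexible = flexible.predict_proba(X_te)[:, 1]

# Similar overall accuracy can coexist with unstable student rankings,
# which matters when scores are used to target individuals for support.
rank_corr, _ = spearmanr(p_simple, p_flexible)
print(f"AUC (logistic): {roc_auc_score(y_te, p_simple):.3f}")
print(f"AUC (boosting): {roc_auc_score(y_te, p_flexible):.3f}")
print(f"Rank correlation of predicted scores: {rank_corr:.3f}")
```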

Oded Gurantz, Matea Pender, Zachary Mabel, Cassandra Larson, Eric Bettinger.

We examine whether virtual advising (college counseling that uses technology to communicate remotely) increases postsecondary enrollment in selective colleges. We test this approach using a sample of approximately 16,000 high-achieving, low- and middle-income students identified by the College Board and randomly assigned to receive virtual advising from the College Advising Corps. The offer of virtual advising had no impact on overall college enrollment but increased enrollment in colleges with high graduation rates by 2.7 percentage points (5 percent), with an instrumental-variables impact on treated students of 6.1 percentage points. We also find that non-white students who were randomly assigned to a non-white adviser exhibited stronger treatment effects.
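
The relationship between the offer effect and the effect on treated students follows standard Wald logic: the intent-to-treat estimate divided by the take-up rate among those offered advising. As a back-of-the-envelope check (the take-up rate is not reported here; 0.44 is simply the arithmetic implied by the two estimates):

```latex
\[
\widehat{\mathrm{TOT}}
  = \frac{\widehat{\mathrm{ITT}}}{\Pr(\text{advised} \mid \text{offered})}
\;\Longrightarrow\;
\Pr(\text{advised} \mid \text{offered}) \approx \frac{2.7}{6.1} \approx 0.44
\]
```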

Zachary Mabel, CJ Libassi, Michael Hurwitz.

Policymakers are increasingly including early-career earnings data in consumer-facing college search tools to help students and families make more informed postsecondary education decisions. We offer new evidence on the degree to which existing college-specific earnings data equip consumers with useful information by documenting the level of selection bias in the earnings metrics reported in the U.S. Department of Education’s College Scorecard. Given growing interest in reporting earnings by college and major, we focus on the degree to which earnings differences across four-year colleges and universities can be explained by differences in major composition across institutions. We estimate that more than three-quarters of the variation in median earnings across institutions is explained by observable factors, and that accounting for differences in major composition explains over 30 percent of the residual variation in earnings after controlling for institutional selectivity, student composition, and local differences in cost of living. We also identify large variation in the distribution of earnings within colleges; as a result, comparisons of early-career earnings can be extremely sensitive to whether the median, the 25th percentile, or the 75th percentile is presented. Taken together, our findings indicate that consumers can easily draw misleading conclusions about institutional quality when using publicly available earnings data to compare institutions.
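
The decomposition described above can be sketched in a few lines: fit earnings on baseline controls, then add major-composition shares and measure how much of the remaining variation they absorb. Everything below (sample sizes, coefficients, variable names) is simulated purely for illustration and does not reproduce the paper's estimates:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical institution-level data: median early-career earnings,
# baseline controls (selectivity, student composition, cost of living),
# and each institution's share of graduates by major field.
rng = np.random.default_rng(2)
n, k_controls, k_majors = 800, 3, 10
controls = rng.normal(size=(n, k_controls))
major_shares = rng.dirichlet(np.ones(k_majors), size=n)
earnings = (
    40_000
    + controls @ np.array([3_000.0, 1_500.0, 800.0])
    + major_shares @ rng.normal(0.0, 5_000.0, k_majors)
    + rng.normal(0.0, 2_000.0, n)
)

# Step 1: share of earnings variation explained by the controls alone.
base = LinearRegression().fit(controls, earnings)
r2_base = base.score(controls, earnings)

# Step 2: add major shares (dropping one to avoid collinearity with the
# intercept, since shares sum to one) and refit.
full_X = np.hstack([controls, major_shares[:, 1:]])
full = LinearRegression().fit(full_X, earnings)
r2_full = full.score(full_X, earnings)

# Share of the residual variation that major composition explains.
share_of_residual = (r2_full - r2_base) / (1.0 - r2_base)
print(f"R^2, controls only:          {r2_base:.3f}")
print(f"R^2, plus major composition: {r2_full:.3f}")
print(f"Share of residual explained: {share_of_residual:.3f}")
```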
