Kylie L. Anglin

Kylie L. Anglin.

Traditional public schools in the United States must comply with a variety of regulations on educational inputs, such as teacher certification requirements, maximum class sizes, and restrictions on staff contracts. Absent regulations, policymakers fear that troubled districts would make inappropriate decisions that harm students. However, it is also possible that strict regulations hinder schools from optimizing student learning. This paper tests these two hypotheses within the context of a widespread deregulation effort in Texas that allows traditional public school districts to claim District of Innovation status and opt out of regulations not related to health, safety, and civil rights. Using a novel dataset that merges administrative data with implementation information scraped from district websites, I estimate the impact of District of Innovation status with a difference-in-differences strategy in which later implementers serve as the comparison group for early implementers. I find that, despite the breadth of regulations exempted, regulatory autonomy does not significantly affect either math or reading achievement, nor does it affect hiring or class sizes. Together, the results offer strong evidence against the hypothesis that regulation hinders school improvement and suggest that state input regulations play only a limited role in determining school decision-making or student achievement.
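
The identification strategy described here is a staggered difference-in-differences design. As a rough illustration only, the sketch below estimates a generic two-way fixed-effects version of such a model on a hypothetical district-by-year panel; the input file, the column names (district, year, doi_active, math_score), and the clustering choice are assumptions, not the paper's actual specification, which uses later implementers as the comparison group for early implementers.

```python
# Minimal two-way fixed-effects difference-in-differences sketch.
# The input file and all column names are hypothetical, for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("district_year_panel.csv")  # one row per district-year

# doi_active = 1 in years after a district's District of Innovation plan takes effect
model = smf.ols("math_score ~ doi_active + C(district) + C(year)", data=df)

# Cluster standard errors at the district level
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["district"]})
print(result.params["doi_active"], result.bse["doi_active"])
```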

Vivian C. Wong, Kylie L. Anglin, Peter M. Steiner.

Recent interest in promoting and supporting replication efforts assumes that there is well-established methodological guidance for designing and implementing these studies. However, no such consensus exists in the methodology literature. This article addresses this gap by describing design-based approaches for planning systematic replication studies. Our general approach is derived from the Causal Replication Framework (CRF), which formalizes the assumptions under which replication success can be expected. These assumptions may be understood broadly as replication design requirements and individual study design requirements. Replication failure occurs when one or more CRF assumptions are violated. In design-based approaches to replication, CRF assumptions are systematically tested to evaluate the replicability of effects, as well as to identify sources of effect variation when replication failure is observed. In direct replication designs, replication failure is evidence of bias or incorrect reporting in individual study estimates, while in conceptual replication designs, replication failure arises from effect variation due to differences in treatments, outcomes, settings, and participant characteristics. The paper demonstrates how multiple research designs may be combined in systematic replication studies, as well as how diagnostic measures may be used to assess the extent to which CRF assumptions are met in field settings.
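
As a loose illustration of what a replication-failure check can look like in practice, the sketch below tests whether two independent effect estimates differ by more than sampling error alone would suggest. This is a generic z-test on hypothetical numbers, not the CRF's own diagnostic procedures.

```python
# Generic two-sided z-test of the difference between two independent
# effect estimates; the numbers used below are hypothetical.
from math import sqrt
from scipy.stats import norm

def replication_difference_test(est_a, se_a, est_b, se_b):
    """Return the difference, z-statistic, and p-value for two independent estimates."""
    diff = est_a - est_b
    se_diff = sqrt(se_a**2 + se_b**2)
    z = diff / se_diff
    p_value = 2 * norm.sf(abs(z))
    return diff, z, p_value

# Hypothetical estimates from an original study and a replication
diff, z, p = replication_difference_test(0.25, 0.08, 0.05, 0.09)
print(f"difference = {diff:.2f}, z = {z:.2f}, p = {p:.3f}")
```

Under the framework described above, a significant difference by itself does not say why replication failed; the design determines whether it points to bias in an individual study estimate or to genuine effect variation across treatments, outcomes, settings, or participants.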

Kylie L. Anglin, Vivian C. Wong.

Researchers are rarely satisfied to learn only whether an intervention works; they also want to understand why and under what circumstances interventions produce their intended effects. These questions have led to increasing calls for implementation research to be included in high-quality studies with strong causal claims. Of critical importance is determining whether an intervention can be delivered with adherence to a standardized protocol and the extent to which that protocol can be replicated across sessions, sites, and studies. When an intervention protocol is highly standardized and delivered through verbal interactions with participants, a set of natural language processing (NLP) techniques termed semantic similarity can be used to provide quantitative summary measures of how closely intervention sessions adhere to the standardized protocol, as well as how consistently the protocol is replicated across sessions. Given the intense methodological, budgetary, and logistical challenges of conducting implementation research, semantic similarity approaches have the benefit of being low-cost, scalable, and context-agnostic. In this paper, we demonstrate how semantic similarity approaches may be used in an experimental evaluation of a coaching protocol on teacher pedagogical skills in a simulated classroom environment. We discuss the strengths and limitations of the approach and the contexts in which the method is most appropriate.
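
One simple way to operationalize such an adherence score is to compare the text of each session transcript to the protocol text. The sketch below uses TF-IDF cosine similarity for this comparison; the file paths and the choice of representation are assumptions for illustration, and the paper's own pipeline may use different similarity measures.

```python
# Hypothetical sketch: score each session transcript's similarity to a
# standardized protocol using TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

protocol_text = open("coaching_protocol.txt").read()                   # hypothetical path
session_texts = [open(f"session_{i}.txt").read() for i in range(1, 4)] # hypothetical paths

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([protocol_text] + session_texts)

# Row 0 is the protocol; the remaining rows are session transcripts.
adherence = cosine_similarity(matrix[0], matrix[1:]).ravel()
for i, score in enumerate(adherence, start=1):
    print(f"session {i}: similarity to protocol = {score:.2f}")
```

Higher scores indicate sessions whose language tracks the protocol more closely, and the spread of scores across sessions gives a rough summary of how consistently the protocol was replicated.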
