Zid Mancenido
Practice-based teacher education has increasingly been adopted as an alternative to more traditional, conceptually focused pedagogies, yet the field lacks causal evidence regarding the relative efficacy of these approaches. To address this gap, we randomly assigned 185 college students to one of three experimental conditions reflective of common conceptually focused and practice-based teacher preparation pedagogies. We find large, significant positive effects of practice-based pedagogies on participants’ skills in eliciting and responding to student thinking, as demonstrated through a written assessment and a short teaching episode. Our findings contribute to a developing evidence base that can assist policymakers and teacher educators in designing effective teacher preparation at scale.
Responsive teaching is a highly effective strategy that promotes student learning. In math classrooms, teachers might funnel students towards a normative answer or focus them on reflecting on their own thinking, deepening their understanding of math concepts. When teachers focus, they treat students’ contributions as resources for collective sensemaking, thereby significantly improving students’ achievement and confidence in mathematics. We propose the task of computationally detecting funneling and focusing questions in classroom discourse. To do so, we create and release an annotated dataset of 2,348 teacher utterances labeled as funneling, focusing, or neither, and we introduce supervised and unsupervised approaches to differentiating these questions. Our best model, a supervised RoBERTa model fine-tuned on our dataset, has a strong linear correlation of 0.76 with human expert labels and correlates with positive educational outcomes, including math instruction quality and student achievement, showing the model’s potential for use in automated teacher feedback tools. Our unsupervised measures show significant but weaker correlations with human labels and outcomes, and they highlight interesting linguistic patterns of funneling and focusing questions. The high performance of the supervised measure indicates its promise for supporting teachers in their instruction.
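For readers curious what the supervised approach might look like in code, below is a minimal sketch of fine-tuning RoBERTa for three-way utterance classification with the Hugging Face transformers library. The file names, column names, and hyperparameters are illustrative assumptions, not the paper's actual training setup.

```python
# Sketch: fine-tune RoBERTa to classify teacher utterances as
# funneling, focusing, or neither. Data schema and hyperparameters
# below are assumptions for illustration.
import pandas as pd
import torch
from torch.utils.data import Dataset
from transformers import (RobertaTokenizerFast, RobertaForSequenceClassification,
                          Trainer, TrainingArguments)

LABELS = {"funneling": 0, "focusing": 1, "neither": 2}

class UtteranceDataset(Dataset):
    """Teacher utterances with funneling/focusing/neither labels."""
    def __init__(self, path, tokenizer):
        df = pd.read_csv(path)  # assumed columns: "utterance", "label"
        self.enc = tokenizer(list(df["utterance"]), truncation=True,
                             padding=True, max_length=128)
        self.labels = [LABELS[lab] for lab in df["label"]]

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS))

train_ds = UtteranceDataset("train.csv", tokenizer)  # hypothetical file
eval_ds = UtteranceDataset("dev.csv", tokenizer)     # hypothetical file

args = TrainingArguments(output_dir="ffn-roberta", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_ds,
        eval_dataset=eval_ds).train()
```

Evaluating such a model against expert labels, as the abstract reports, would be a separate step: correlate the model's predicted labels with the human annotations on a held-out set.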
Many teacher education researchers have expressed concerns about the lack of rigorous impact evaluations of teacher preparation practices. I summarize these concerns as they relate to issues of internal validity, external validity, and measurement. I then assess the prevalence of these issues by reviewing 166 impact evaluations of teacher preparation practices published in peer-reviewed journals between 2002 and 2019. Although I find that very few studies address issues of internal validity, external validity, and measurement, I highlight some innovative approaches and present a checklist of considerations to assist future researchers in designing more rigorous impact evaluations.
In conversation, uptake happens when a speaker builds on the contribution of their interlocutor, for example by acknowledging, repeating, or reformulating what they have said. In education, teachers' uptake of student contributions has been linked to higher student achievement. Yet measuring and improving teachers' uptake at scale is challenging, as existing methods require expensive annotation by experts. We propose a framework for computationally measuring uptake by (1) releasing a dataset of student-teacher exchanges extracted from US math classroom transcripts and annotated for uptake by experts; (2) formalizing uptake as pointwise Jensen-Shannon Divergence (pJSD), estimated via next-utterance classification; (3) conducting a linguistically motivated comparison of different unsupervised measures; and (4) correlating these measures with educational outcomes. We find that although repetition captures a significant part of uptake, pJSD outperforms repetition-based baselines because it identifies a wider range of uptake phenomena, such as question answering and reformulation. We apply our uptake measure to three different educational datasets with outcome indicators. Unlike the baseline measures, pJSD correlates significantly with instruction quality in all three, providing evidence for its generalizability and its potential to serve as an automated professional development tool for teachers.
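To make the pJSD idea concrete, here is a small, self-contained Python sketch of one natural pointwise analogue of Jensen-Shannon divergence, together with how a next-utterance classifier's output can stand in for the intractable probability ratio. This is an illustration of the general idea under stated assumptions, not the authors' exact formalization or implementation.

```python
# Sketch: a pointwise Jensen-Shannon-style score between p(t|s) (probability
# of the teacher reply t given the student utterance s) and p(t) (the reply's
# marginal probability). The paper's exact estimator may differ.
import math

def pointwise_jsd(p_t_given_s: float, p_t: float) -> float:
    """Pointwise analogue of JSD at a single teacher reply t.

    Mirrors how pointwise mutual information, log p(t|s)/p(t), is the
    per-point analogue of mutual information: each KL term of JSD is
    replaced by its log-ratio against the mixture m = (p(t|s) + p(t)) / 2.
    """
    m = 0.5 * (p_t_given_s + p_t)
    return 0.5 * math.log2(p_t_given_s / m) + 0.5 * math.log2(p_t / m)

# The probabilities themselves are intractable, so the ratio p(t|s)/p(t)
# can be estimated with a next-utterance classifier: a model trained to
# distinguish true (student, teacher) pairs from randomly re-paired ones.
# If the classifier outputs q = P(pair is genuine | s, t) under balanced
# classes, then p(t|s)/p(t) is approximately q / (1 - q), which is enough
# to evaluate the score above after dividing both arguments by p(t).
def pjsd_from_classifier(q: float) -> float:
    ratio = q / (1.0 - q)          # ~ p(t|s) / p(t)
    m_ratio = 0.5 * (ratio + 1.0)  # m(t) / p(t)
    return 0.5 * math.log2(ratio / m_ratio) + 0.5 * math.log2(1.0 / m_ratio)
```

Under this kind of formalization, a reply scores high whenever it is much more probable given the student's utterance than it is marginally, whether or not it reuses the student's words, which is consistent with the abstract's claim that pJSD captures uptake phenomena like reformulation and question answering that repetition-based measures miss.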
Despite calls for more evidence regarding the effectiveness of teacher education practices, causal research in the field remains rare. One reason is that we lack designs and measurement approaches that adequately meet the challenges of causal inference in the context of teacher education programs. This article provides a framework for filling this gap. We first outline the difficulties of doing causal research in teacher education. We then describe a set of replicable practices for developing measures of key teaching outcomes and propose causal research designs suited to the needs of the field. Finally, we identify community-wide initiatives that are necessary to advance effectiveness research in teacher education at scale.