Rose Wang
Even with well-designed curriculum materials, teachers often face challenges implementing them in classrooms with diverse needs. This paper investigates whether Large Language Models (LLMs) can support middle-school math teachers by helping create high-quality curriculum scaffolds, which we define as the adaptations and supplements teachers employ to ensure all students can access and engage with the curriculum. Through Cognitive Task Analysis with expert teachers, we identify a three-stage process for curriculum scaffolding: observation, strategy formulation, and implementation. We incorporate these insights into three LLM approaches to create warmup tasks that activate background knowledge. The best-performing approach, which provides the model with the original curriculum materials and an expert-informed prompt, generates warmups that are rated significantly higher than warmups created by expert teachers in terms of alignment to learning objectives, accessibility to students working below grade level, and teacher preference. This research demonstrates the potential of LLMs to support teachers in creating effective scaffolds and provides a methodology for developing AI-driven educational tools.
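The abstract does not reproduce the actual prompts or pipeline, so the sketch below is only a hypothetical illustration of the general shape of the best-performing configuration: pairing the original curriculum materials with an expert-informed prompt that encodes the three stages from the Cognitive Task Analysis. All names here (e.g., `build_warmup_prompt`, the instruction wording) are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of an "expert-informed prompt + original curriculum
# materials" setup. The three stages (observation, strategy formulation,
# implementation) are written out as explicit instructions to the model.

EXPERT_INFORMED_INSTRUCTIONS = """\
You are supporting a middle-school math teacher.
1. Observation: identify the prerequisite knowledge and likely gaps for this lesson.
2. Strategy formulation: choose a scaffold that activates that background knowledge.
3. Implementation: write a short warmup task that is accessible to students
   working below grade level and aligned to the lesson's learning objectives.
"""

def build_warmup_prompt(curriculum_materials: str, learning_objective: str) -> str:
    """Assemble one prompt from the expert-informed instructions and the
    original curriculum materials (illustrative wording, not the paper's)."""
    return (
        f"{EXPERT_INFORMED_INSTRUCTIONS}\n"
        f"Learning objective: {learning_objective}\n"
        f"Original curriculum materials:\n{curriculum_materials}\n"
        "Produce the warmup task only."
    )

if __name__ == "__main__":
    prompt = build_warmup_prompt(
        curriculum_materials="Lesson 4: solving two-step equations ...",
        learning_objective="Solve two-step linear equations in one variable.",
    )
    # This string would then be sent to any chat-completion style LLM API.
    print(prompt)
```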
Providing ample opportunities for students to express their thinking is pivotal to their learning of mathematical concepts. We introduce the Talk Meter, which provides in-the-moment automated feedback on student-teacher talk ratios. We conduct a randomized controlled trial on a virtual math tutoring platform (n=742 tutors) to evaluate the effectiveness of the Talk Meter at increasing student talk. In one treatment arm, we show the Talk Meter only to the tutor; in the other, we show it to both the student and the tutor. We find that the Talk Meter increases student talk ratios in both treatment conditions by 13-14%; this increase is driven by the tutor talking less in the tutor-facing condition, whereas in the student-facing condition it is driven by the student expressing significantly more mathematical thinking. Through interviews with tutors, we find that the student-facing Talk Meter was more motivating to students, especially those with introverted personalities, and was effective at encouraging joint effort toward balanced talk time. These results demonstrate the promise of in-the-moment joint talk-time feedback to both teachers and students as a low-cost, engaging, and scalable way to increase students' mathematical reasoning.
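The abstract does not define how the Talk Meter computes its ratio; a natural definition, assumed here purely for illustration, is the student's share of total speaking time, updated as each new utterance arrives. The data structure and function names below are invented, not drawn from the study.

```python
# Illustrative sketch of a running student-talk ratio over timed utterances.
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str    # "student" or "tutor"
    seconds: float  # duration of the utterance

def student_talk_ratio(utterances: list[Utterance]) -> float:
    """Fraction of total talk time spoken by the student (0.0-1.0)."""
    student = sum(u.seconds for u in utterances if u.speaker == "student")
    total = sum(u.seconds for u in utterances)
    return student / total if total else 0.0

# Example: update the meter after each utterance in a short session.
session = [
    Utterance("tutor", 40.0),
    Utterance("student", 12.0),
    Utterance("tutor", 25.0),
    Utterance("student", 18.0),
]
for i in range(1, len(session) + 1):
    print(f"after {i} utterances: student share = {student_talk_ratio(session[:i]):.0%}")
```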