Effective writing feedback is a powerful tool for enhancing student learning, encouraging revision, and increasing motivation and agency. Yet teachers face many challenges that prevent them from consistently providing effective writing feedback. Recent advances in generative artificial intelligence (AI) have led educators and researchers to experiment with AI tools powered by large language models (LLMs) to provide writing feedback, but research in this area has yielded mixed results. In this study, we used qualitative methods to compare LLM-generated writing feedback with feedback from expert teachers (n = 12). Using a framework of dialogic writing feedback as our analytic lens, we highlight differences between LLM and teacher feedback along three dimensions: cognitive, social, and structural. We observed that LLMs primarily enacted corrective feedback at the sentence level and positioned students as novices requiring remediation. By contrast, teachers enacted more dialogic feedback, offering feedback at multiple levels and employing tactics that positioned students as agentic writers. Our findings support previous research describing the limitations of LLM-based writing feedback. More importantly, our study contributes to the growing research base by identifying specific feedback practices, unique to highly skilled teachers, that LLMs did not exhibit. These findings have implications for improving the quality of LLM feedback and for shifting teachers’ practice to foreground the types of writing feedback that best promote the independent thinking and writing skills students will need in the age of generative AI.