Decoding Actionability: A Computational Analysis of Teacher Observation Feedback

Mayank Sharma, Jason Zhang


Abstract
This study presents a computational analysis to classify actionability in teacher feedback. We fine-tuned a RoBERTa model on 662 manually annotated feedback examples from West African classrooms, achieving strong classification performance (accuracy = 0.94, precision = 0.90, recall = 0.96, F1 = 0.93). This enabled classification of over 12,000 feedback instances. A comparison of linguistic features indicated that actionable feedback was associated with lower word count but higher readability, greater lexical diversity, and more modifier usage. These findings suggest that concise, accessible language with precise descriptive terms may be more actionable for teachers. Our results support focusing on clarity in teacher observation protocols while demonstrating the potential of computational approaches in analyzing educational feedback at scale.
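
For readers unfamiliar with this kind of setup, the sketch below shows how a RoBERTa actionability classifier of the sort described in the abstract might be fine-tuned with the Hugging Face transformers and datasets libraries. The example texts, column names, label scheme, and hyperparameters are illustrative assumptions, not details taken from the paper.

# Minimal sketch: fine-tune RoBERTa as a binary "actionable vs. not actionable"
# classifier of observation feedback. All data and settings are placeholders.
import numpy as np
from datasets import Dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (
    RobertaForSequenceClassification,
    RobertaTokenizerFast,
    Trainer,
    TrainingArguments,
)

# Hypothetical annotated examples (the paper reports 662 manually labeled instances).
examples = [
    {"text": "Pair struggling readers with a peer during the group activity.", "label": 1},
    {"text": "Ask two or three pupils to restate the instructions before starting.", "label": 1},
    {"text": "The lesson went well overall.", "label": 0},
    {"text": "Good classroom atmosphere today.", "label": 0},
]
dataset = Dataset.from_list(examples).train_test_split(test_size=0.25, seed=42)

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

def tokenize(batch):
    # Truncate/pad each feedback comment to a fixed length.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    # Report the same metrics the abstract cites: accuracy, precision, recall, F1.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary", zero_division=0
    )
    return {"accuracy": accuracy_score(labels, preds),
            "precision": precision, "recall": recall, "f1": f1}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="actionability-roberta",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())

In a workflow like the one the abstract describes, the fine-tuned classifier would then be applied to the larger unlabeled pool (over 12,000 feedback instances) before linguistic features such as word count, readability, lexical diversity, and modifier usage are compared across the predicted classes.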
Anthology ID:
2025.bea-1.67
Volume:
Proceedings of the 20th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2025)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Ekaterina Kochmar, Bashar Alhafni, Marie Bexte, Jill Burstein, Andrea Horbach, Ronja Laarmann-Quante, Anaïs Tack, Victoria Yaneva, Zheng Yuan
Venues:
BEA | WS
Publisher:
Association for Computational Linguistics
Pages:
898–907
URL:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.bea-1.67/
Cite (ACL):
Mayank Sharma and Jason Zhang. 2025. Decoding Actionability: A Computational Analysis of Teacher Observation Feedback. In Proceedings of the 20th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2025), pages 898–907, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Decoding Actionability: A Computational Analysis of Teacher Observation Feedback (Sharma & Zhang, BEA 2025)
PDF:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.bea-1.67.pdf