LLMs Protégés: Tutoring LLMs with Knowledge Gaps Improves Student Learning Outcome

Andrei Kucharavy, Cyril Vallez, Dimitri Percia David


Abstract
Since the release of ChatGPT, Large Language Models (LLMs) have been proposed as potential tutors to improve student education outcomes. Such an LLM-as-tutor metaphor is problematic, notably due to counterfactual generation, the perception of learned skills as mastered by an automated system and hence non-valuable, and learner over-reliance on LLMs. We propose instead the LLM-as-mentee tutoring schema, leveraging the Learning-by-Teaching protégé effect in peer tutoring - LLM Protégés. In this configuration, counterfactual generation is desirable, allowing students to operationalize the learning material and better understand the limitations of LLM-based systems, both a skill in itself and an additional learning motivation. Our preliminary results suggest that LLM Protégés are effective. Students in an introductory algorithms class who successfully diagnosed an LLM teachable agent prompted to err on the course material gained an average of 0.72 points on a 1-6 grading scale. Remarkably, if fully adopted, this approach would reduce the failure rate on the second midterm from 28% to 8%, mitigating 72% of midterm failures. We publish code for on-premises deployment of LLM Protégés at https://github.com/Reliable-Information-Lab-HEVS/LLM_Proteges.
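The teachable-agent setup described in the abstract - an LLM prompted to hold a deliberate knowledge gap that the student must diagnose - can be sketched as follows. This is a minimal illustrative sketch only; the prompt wording, the function name, and the example misconception are assumptions, not the authors' actual prompts (see the linked repository for their implementation).

```python
# Hypothetical sketch of an LLM-as-mentee ("protégé") configuration:
# the system prompt casts the model as a student who holds one
# deliberate misconception that the human tutor must detect and correct.
# All prompt text and the example knowledge gap below are illustrative.

def build_protege_system_prompt(topic: str, knowledge_gap: str) -> str:
    """Compose a system prompt for a teachable agent with an injected error."""
    return (
        f"You are a student learning {topic}. "
        "Answer your tutor's questions earnestly, but you hold one "
        f"misconception you believe to be true: {knowledge_gap} "
        "Do not reveal that this error is intentional; abandon the "
        "misconception only if the tutor convincingly corrects you."
    )

# Example for an introductory algorithms class, matching the abstract's setting:
prompt = build_protege_system_prompt(
    topic="introductory algorithms",
    knowledge_gap="binary search works correctly on unsorted arrays.",
)
print(prompt)
```

The resulting string would be passed as the system message of any chat-style LLM deployment; the choice of misconception is what turns counterfactual generation from a tutoring liability into the learning target itself.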
Anthology ID:
2025.bea-1.19
Volume:
Proceedings of the 20th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2025)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Ekaterina Kochmar, Bashar Alhafni, Marie Bexte, Jill Burstein, Andrea Horbach, Ronja Laarmann-Quante, Anaïs Tack, Victoria Yaneva, Zheng Yuan
Venues:
BEA | WS
Publisher:
Association for Computational Linguistics
Pages:
248–257
URL:
https://preview.aclanthology.org/landing_page/2025.bea-1.19/
Cite (ACL):
Andrei Kucharavy, Cyril Vallez, and Dimitri Percia David. 2025. LLMs Protégés: Tutoring LLMs with Knowledge Gaps Improves Student Learning Outcome. In Proceedings of the 20th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2025), pages 248–257, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
LLMs Protégés: Tutoring LLMs with Knowledge Gaps Improves Student Learning Outcome (Kucharavy et al., BEA 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.bea-1.19.pdf