Are Large Language Models for Education Reliable Across Languages?
Vansh Gupta, Sankalan Pal Chowdhury, Vilém Zouhar, Donya Rooein, Mrinmaya Sachan
Abstract
Large language models (LLMs) are increasingly being adopted in educational settings, and these applications are expanding beyond English even though current LLMs remain primarily English-centric. In this work, we ask whether their use in non-English educational settings is warranted. We evaluate the performance of popular LLMs on four educational tasks: identifying student misconceptions, providing targeted feedback, interactive tutoring, and grading translations, in eight languages (Mandarin, Hindi, Arabic, German, Farsi, Telugu, Ukrainian, Czech) in addition to English. We find that performance on these tasks roughly tracks how well each language is represented in the training data, with lower-resource languages showing poorer task performance. However, at least some models largely maintain their level of performance across all languages. We therefore recommend that practitioners verify that an LLM works well in the target language for their educational task before deployment.
- Anthology ID:
- 2025.bea-1.44
- Volume:
- Proceedings of the 20th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2025)
- Month:
- July
- Year:
- 2025
- Address:
- Vienna, Austria
- Editors:
- Ekaterina Kochmar, Bashar Alhafni, Marie Bexte, Jill Burstein, Andrea Horbach, Ronja Laarmann-Quante, Anaïs Tack, Victoria Yaneva, Zheng Yuan
- Venues:
- BEA | WS
- Publisher:
- Association for Computational Linguistics
- Pages:
- 612–631
- URL:
- https://preview.aclanthology.org/landing_page/2025.bea-1.44/
- Cite (ACL):
- Vansh Gupta, Sankalan Pal Chowdhury, Vilém Zouhar, Donya Rooein, and Mrinmaya Sachan. 2025. Are Large Language Models for Education Reliable Across Languages?. In Proceedings of the 20th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2025), pages 612–631, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal):
- Are Large Language Models for Education Reliable Across Languages? (Gupta et al., BEA 2025)
- PDF:
- https://preview.aclanthology.org/landing_page/2025.bea-1.44.pdf