Toward Beginner-Friendly LLMs for Language Learning: Controlling Difficulty in Conversation

Meiqing Jin, Liam Dugan, Chris Callison-Burch


Abstract
Practicing conversations with large language models (LLMs) presents a promising alternative to traditional in-person language learning. However, most LLMs generate text at a near-native level of complexity, making them ill-suited for beginner learners (CEFR: A1–A2). In this paper, we investigate whether controllable generation techniques can adapt LLM outputs to better support absolute beginners. We evaluate these methods through both automatic metrics and a user study with university-level learners of Japanese. Our findings show that while prompting alone fails, controllable generation techniques can successfully improve output comprehensibility for beginner speakers (from 39.4% to 83.3%). We further introduce a new token-level evaluation metric, Token Miss Rate (TMR), that quantifies the proportion of incomprehensible tokens per utterance and correlates strongly with human judgments. To support future research in AI-assisted language learning, we release our code, models, annotation tools, and dataset.
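The Token Miss Rate described in the abstract can be sketched as a simple ratio. This is a hypothetical illustration based only on the abstract's description ("proportion of incomprehensible tokens per utterance"); the paper's actual formulation, tokenizer, and vocabulary source may differ.

```python
def token_miss_rate(tokens: list[str], known_vocab: set[str]) -> float:
    """Illustrative sketch of Token Miss Rate (TMR): the fraction of
    tokens in an utterance that fall outside the learner's known
    vocabulary. `known_vocab` here stands in for whatever comprehension
    model or annotation the paper actually uses (an assumption)."""
    if not tokens:
        return 0.0
    missed = sum(1 for t in tokens if t not in known_vocab)
    return missed / len(tokens)


# Example: a learner who knows 3 of the 4 tokens in an utterance
tmr = token_miss_rate(["私", "は", "学生", "です"], {"私", "は", "です"})
```

A lower TMR would indicate an utterance more comprehensible to the beginner; the paper reports that this token-level metric correlates strongly with human judgments.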
Anthology ID:
2026.findings-eacl.47
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
913–936
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.47/
Cite (ACL):
Meiqing Jin, Liam Dugan, and Chris Callison-Burch. 2026. Toward Beginner-Friendly LLMs for Language Learning: Controlling Difficulty in Conversation. In Findings of the Association for Computational Linguistics: EACL 2026, pages 913–936, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Toward Beginner-Friendly LLMs for Language Learning: Controlling Difficulty in Conversation (Jin et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.47.pdf
Checklist:
2026.findings-eacl.47.checklist.pdf