Testing Low-Resource Language Support in LLMs Using Language Proficiency Exams: the Case of Luxembourgish

Cedric Lothritz, Jordi Cabot, Laura Bernardy

Abstract
Large Language Models (LLMs) have become an increasingly important tool in research and society at large. While LLMs are regularly used all over the world by experts and laypeople alike, they are predominantly developed with English-speaking users in mind, performing well in English and other widespread languages, while less-resourced languages such as Luxembourgish are seen as a lower priority. This lack of attention is also reflected in the sparsity of available evaluation tools and datasets. In this study, we investigate the viability of language proficiency exams as such evaluation tools for the Luxembourgish language. We find that large models such as Claude and DeepSeek-R1 typically achieve high scores, while smaller models show weak performance. We also find that performance on such language exams can be used to predict performance on other NLP tasks in Luxembourgish.
Anthology ID:
2026.findings-eacl.128
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2453–2476
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.128/
Cite (ACL):
Cedric Lothritz, Jordi Cabot, and Laura Bernardy. 2026. Testing Low-Resource Language Support in LLMs Using Language Proficiency Exams: the Case of Luxembourgish. In Findings of the Association for Computational Linguistics: EACL 2026, pages 2453–2476, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Testing Low-Resource Language Support in LLMs Using Language Proficiency Exams: the Case of Luxembourgish (Lothritz et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.128.pdf
Checklist:
2026.findings-eacl.128.checklist.pdf