AL-QASIDA: Analyzing LLM Quality and Accuracy Systematically in Dialectal Arabic

Nathaniel Romney Robinson, Shahd Abdelmoneim, Kelly Marchisio, Sebastian Ruder


Abstract
Dialectal Arabic (DA) varieties are under-served by language technologies, particularly large language models (LLMs). This trend threatens to exacerbate existing social inequalities and limit LLM applications, yet the research community lacks operationalized performance measurements in DA. We present a framework that comprehensively assesses LLMs’ DA modeling capabilities across four dimensions: fidelity, understanding, quality, and diglossia. We evaluate nine LLMs in eight DA varieties and provide practical recommendations. Our evaluation suggests that LLMs do not produce DA as well as they understand it, not because their DA fluency is poor, but because they are reluctant to generate DA. Further analysis suggests that current post-training can contribute to bias against DA, that few-shot examples can overcome this deficiency, and that otherwise no measurable feature of the input text correlates well with LLM DA performance.
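
One practical takeaway from the abstract is that few-shot examples can nudge a model to actually respond in DA rather than falling back to Modern Standard Arabic. The sketch below is purely illustrative and is not the AL-QASIDA evaluation code: it assembles a few-shot prompt demonstrating replies in Egyptian Arabic, where build_dialect_prompt and the example pairs are hypothetical stand-ins for whatever prompt format and LLM API a reader prefers.

    # Illustrative sketch (not the AL-QASIDA codebase): build a few-shot prompt
    # that nudges an LLM to reply in a target Arabic dialect (here, Egyptian).
    # Pass the resulting string to whatever chat/completion API you use.

    FEW_SHOT_PAIRS = [
        # (user request, reply written in Egyptian Arabic) -- hypothetical demonstrations
        ("ممكن تنصحني بأكلة مصرية؟", "جرب الكشري، ده أشهر أكلة شعبية عندنا."),
        ("قولي حاجة عن القاهرة.", "القاهرة مدينة كبيرة ومزدحمة بس فيها حاجات حلوة كتير."),
    ]

    def build_dialect_prompt(user_query: str, dialect: str = "Egyptian Arabic") -> str:
        """Assemble a few-shot prompt demonstrating replies in the target dialect."""
        lines = [f"Respond only in {dialect}."]
        for question, answer in FEW_SHOT_PAIRS:
            lines.append(f"User: {question}")
            lines.append(f"Assistant: {answer}")
        lines.append(f"User: {user_query}")
        lines.append("Assistant:")
        return "\n".join(lines)

    if __name__ == "__main__":
        print(build_dialect_prompt("احكيلي عن الطقس في الإسكندرية."))
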
Anthology ID: 2025.findings-acl.1137
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venues: Findings | WS
Publisher: Association for Computational Linguistics
Pages: 22048–22065
URL: https://preview.aclanthology.org/acl25-workshop-ingestion/2025.findings-acl.1137/
Cite (ACL): Nathaniel Romney Robinson, Shahd Abdelmoneim, Kelly Marchisio, and Sebastian Ruder. 2025. AL-QASIDA: Analyzing LLM Quality and Accuracy Systematically in Dialectal Arabic. In Findings of the Association for Computational Linguistics: ACL 2025, pages 22048–22065, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): AL-QASIDA: Analyzing LLM Quality and Accuracy Systematically in Dialectal Arabic (Robinson et al., Findings 2025)
PDF: https://preview.aclanthology.org/acl25-workshop-ingestion/2025.findings-acl.1137.pdf