Malik H. Altakrori
2026
DialectalArabicMMLU: Benchmarking Dialectal Capabilities in Arabic and Multilingual Language Models
Malik H. Altakrori | Nizar Habash | Teresa Lynn | Younes Samih | Abed Alhakim Freihat | Kirill Chirkunov | Muhammed AbuOdeh | Radu Florian | Preslav Nakov | Alham Fikri Aji
Proceedings of the Fifteenth Language Resources and Evaluation Conference
We present DialectalArabicMMLU, a new benchmark for evaluating the performance of large language models (LLMs) across Arabic dialects. While recently developed Arabic and multilingual benchmarks have advanced LLM evaluation for Modern Standard Arabic (MSA), dialectal varieties remain underrepresented despite their prevalence in everyday communication. DialectalArabicMMLU extends the MMLU-Redux framework through manual translation and adaptation of 3K multiple-choice question–answer pairs into five major dialects (Syrian, Egyptian, Emirati, Saudi, and Moroccan), yielding a total of 15K QA pairs across 32 academic and professional domains (22K QA pairs when also including English and MSA). The benchmark enables systematic assessment of LLM reasoning and comprehension beyond MSA, supporting both task-based and linguistic analysis. We evaluate 19 open-weight Arabic and multilingual LLMs (1B–13B parameters) and report substantial performance variation across dialects, revealing persistent gaps in dialectal generalization. DialectalArabicMMLU provides the first unified, human-curated resource for measuring dialectal understanding in Arabic, thus promoting more inclusive evaluation and future model development.
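As an illustrative companion to this abstract, the sketch below shows one common way to score a multiple-choice item with an open-weight causal LM: compare the total log-likelihood the model assigns to each candidate answer and pick the highest-scoring one. This is a minimal sketch, not the paper's evaluation protocol; the model identifier, prompt template, and item fields are hypothetical placeholders rather than the benchmark's actual schema.

    # Minimal sketch (assumptions noted above): rank the candidate answers of
    # one multiple-choice item by the total log-probability a causal LM
    # assigns to each answer continuation.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "some-open-weight-arabic-llm"  # hypothetical placeholder
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    model.eval()

    def answer_logprob(question: str, choice: str) -> float:
        """Sum of log-probs of the answer tokens, conditioned on the prompt."""
        prompt = f"Question: {question}\nAnswer: "  # hypothetical template
        prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
        full_ids = tokenizer(prompt + choice, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(full_ids).logits
        # Position i predicts token i+1, so shift logits and targets by one.
        log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
        targets = full_ids[:, 1:]
        token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
        # Score only the answer tokens (assumes the prompt tokenization is a
        # prefix of the full tokenization, which can fail for some scripts).
        return token_lp[0, prompt_len - 1:].sum().item()

    def predict(question: str, choices: list[str]) -> int:
        scores = [answer_logprob(question, c) for c in choices]
        return max(range(len(choices)), key=scores.__getitem__)

Averaging such predictions over a dialect's items would yield the kind of per-dialect accuracy in which the abstract reports substantial variation.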
2025
From Multiple-Choice to Extractive QA: A Case Study for English and Arabic
Teresa Lynn | Malik H. Altakrori | Samar M. Magdy | Rocktim Jyoti Das | Chenyang Lyu | Mohamed Nasr | Younes Samih | Kirill Chirkunov | Alham Fikri Aji | Preslav Nakov | Shantanu Godbole | Salim Roukos | Radu Florian | Nizar Habash
Proceedings of the 31st International Conference on Computational Linguistics
The rapid evolution of Natural Language Processing (NLP) has favoured major languages such as English, leaving a significant gap for many others due to limited resources. This is especially evident in the context of data annotation, a task whose importance cannot be overstated, but which is time-consuming and costly. Thus, any dataset for resource-poor languages is precious, in particular when it is task-specific. Here, we explore the feasibility of repurposing an existing multilingual dataset for a new NLP task: we adapt a subset of the BELEBELE dataset (Bandarkar et al., 2023), which was designed for multiple-choice question answering (MCQA), to enable the more practical task of extractive QA (EQA) in the style of machine reading comprehension. We present annotation guidelines and a parallel EQA dataset for English and Modern Standard Arabic (MSA). We also present QA evaluation results for several monolingual and cross-lingual language pairs, including English, MSA, and five Arabic dialects. We aim to help others adapt our approach for the remaining 120 BELEBELE language variants, many of which are deemed under-resourced. We also provide a thorough analysis and share insights to deepen understanding of the challenges and opportunities in NLP task reformulation.
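To make the MCQA-to-EQA reformulation concrete, the sketch below shows the naive automatic first pass one might run before manual annotation: locate the correct MCQA option as a verbatim span in the passage and emit a SQuAD-style extractive record, flagging items where no such span exists. The field names follow BELEBELE's public schema; the paper itself relies on human annotation guidelines, not this heuristic.

    # Illustrative heuristic only, not the paper's annotation procedure:
    # convert one BELEBELE MCQA item into an extractive-QA record when the
    # correct option appears verbatim in the passage.
    from typing import Optional

    def to_extractive(item: dict) -> Optional[dict]:
        passage = item["flores_passage"]
        correct = item[f"mc_answer{item['correct_answer_num']}"]
        start = passage.find(correct)
        if start == -1:
            # Answer is paraphrased rather than quoted: such items need
            # manual rewriting under the annotation guidelines.
            return None
        return {
            "context": passage,
            "question": item["question"],
            # SQuAD-style answer format: text plus character offset.
            "answers": {"text": [correct], "answer_start": [start]},
        }

Items returning None are exactly the cases where task reformulation requires human judgement, which is why the paper pairs the dataset with explicit annotation guidelines.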