Are Dialects Better Prompters? A Case Study on Arabic Subjective Text Classification

Leila Moudjari, Farah Benamara


Abstract
This paper investigates the effect of dialectal prompting, variations in prompting script, and model fine-tuning on subjective text classification in Arabic dialects. To this end, we evaluate the performance of 12 widely used open LLMs across four tasks and eight benchmark datasets. Our results show that specialized fine-tuned models prompted with dialectal prompts in Arabic and Arabizi scripts achieve the best results, establishing a new state of the art in the field.
Anthology ID:
2025.findings-acl.892
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
17356–17371
URL:
https://preview.aclanthology.org/landing_page/2025.findings-acl.892/
Cite (ACL):
Leila Moudjari and Farah Benamara. 2025. Are Dialects Better Prompters? A Case Study on Arabic Subjective Text Classification. In Findings of the Association for Computational Linguistics: ACL 2025, pages 17356–17371, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Are Dialects Better Prompters? A Case Study on Arabic Subjective Text Classification (Moudjari & Benamara, Findings 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.findings-acl.892.pdf