Ahmed Ashraf


2025

AraTrust: An Evaluation of Trustworthiness for LLMs in Arabic
Emad A. Alghamdi | Reem Masoud | Deema Alnuhait | Afnan Y. Alomairi | Ahmed Ashraf | Mohamed Zaytoon
Proceedings of the 31st International Conference on Computational Linguistics

The swift progress and widespread adoption of artificial intelligence (AI) systems highlight a pressing need to understand both the capabilities and the potential risks of AI. Given the linguistic complexity, cultural richness, and underrepresentation of Arabic in AI research, it is essential to examine the performance and safety of Large Language Models (LLMs) on Arabic-related tasks. Despite some progress in their development, the lack of comprehensive trustworthiness evaluation benchmarks remains a major obstacle to accurately assessing and improving the safety of LLMs when prompted in Arabic. In this paper, we introduce AraTrust, the first comprehensive trustworthiness benchmark for LLMs in Arabic. AraTrust comprises 522 human-written multiple-choice questions covering truthfulness, ethics, privacy, illegal activities, mental health, physical health, unfairness, and offensive language. We evaluated a set of LLMs against our benchmark to assess their trustworthiness. GPT-4 was the most trustworthy LLM, while open-source models, particularly AceGPT 7B and Jais 13B, struggled to reach a score of 60% on our benchmark. The benchmark dataset is publicly available at https://huggingface.co/datasets/asas-ai/AraTrust
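
To make the evaluation setup concrete, the following is a minimal sketch, not the authors' harness, of loading AraTrust from the Hugging Face Hub and scoring a model's multiple-choice accuracy. Only the dataset path comes from the abstract; the split and the column names ("question", "choices", "answer") are assumptions for illustration.

# Minimal sketch, not the authors' evaluation code. The split name and the
# column names ("question", "choices", "answer") are assumed, not confirmed
# by the dataset card.
from datasets import load_dataset

aratrust = load_dataset("asas-ai/AraTrust", split="train")  # 522 human-written MCQs

def mcq_accuracy(predict):
    # predict: a callable mapping (question, choices) -> the chosen answer key
    correct = sum(
        predict(row["question"], row["choices"]) == row["answer"]
        for row in aratrust
    )
    return correct / len(aratrust)

Under this scoring, the 60% threshold mentioned above corresponds to mcq_accuracy returning at least 0.6 for a given model.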

AraReasoner: Evaluating Reasoning-Based LLMs for Arabic NLP
Ahmed Abul Hasanaath | Aisha Alansari | Ahmed Ashraf | Salmane Chafik | Hamzah Luqman | Saad Ezzini
Findings of the Association for Computational Linguistics: EMNLP 2025

Large language models (LLMs) have shown remarkable progress in reasoning abilities and general natural language processing (NLP) tasks, yet their performance on Arabic data, characterized by rich morphology, diverse dialects, and a complex script, remains underexplored. This paper presents a comprehensive benchmarking study of multiple reasoning-focused LLMs, with a special emphasis on the newly introduced DeepSeek models, across a suite of fifteen Arabic NLP tasks. We experiment with various strategies, including zero-shot, few-shot, and fine-tuning, which allows us to systematically evaluate performance on datasets covering a range of applications and to examine the models' capacity for linguistic reasoning at different levels of complexity. Our experiments reveal several key findings. First, carefully selecting just three in-context examples delivers an average uplift of over 13 F1 points on classification tasks, boosting sentiment analysis from 35.3% to 87.5% and paraphrase detection from 56.1% to 87.0%. Second, reasoning-focused DeepSeek architectures outperform a strong GPT o4-mini baseline by an average of 12 F1 points on complex inference tasks in the zero-shot setting. Third, LoRA-based fine-tuning yields up to an additional 8 points in F1 and BLEU compared to equivalent increases in model scale. The code is available at https://anonymous.4open.science/r/AraReasoner41299
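
As a concrete illustration of the few-shot strategy behind the reported 13-point F1 uplift, the sketch below assembles a 3-shot prompt for Arabic sentiment classification. The demonstrations, labels, and prompt wording are hypothetical examples, not taken from the paper.

# Illustrative 3-shot prompt construction; the demonstration pool is hypothetical.
FEW_SHOT_POOL = [
    ("الخدمة ممتازة والتوصيل سريع", "positive"),  # "Excellent service, fast delivery"
    ("المنتج سيئ ولا أنصح به", "negative"),  # "Bad product, I don't recommend it"
    ("التجربة كانت عادية", "neutral"),  # "The experience was ordinary"
]

def build_prompt(query: str, k: int = 3) -> str:
    # Prepend k labelled demonstrations before the unlabelled query.
    lines = ["Classify the sentiment of the Arabic text as positive, negative, or neutral.", ""]
    for text, label in FEW_SHOT_POOL[:k]:
        lines += [f"Text: {text}", f"Sentiment: {label}", ""]
    lines += [f"Text: {query}", "Sentiment:"]
    return "\n".join(lines)

print(build_prompt("أعجبني التطبيق كثيرا"))  # "I liked the app a lot"

The assembled prompt is sent to the LLM under evaluation, and the model's completion is compared against the gold label when computing F1.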

Pearl: A Multimodal Culturally-Aware Arabic Instruction Dataset
Fakhraddin Alwajih | Samar M. Magdy | Abdellah El Mekki | Omer Nacar | Youssef Nafea | Safaa Taher Abdelfadil | Abdulfattah Mohammed Yahya | Hamzah Luqman | Nada Almarwani | Samah Aloufi | Baraah Qawasmeh | Houdaifa Atou | Serry Sibaee | Hamzah A. Alsayadi | Walid Al-Dhabyani | Maged S. Al-shaibani | Aya El aatar | Nour Qandos | Rahaf Alhamouri | Samar Ahmad | Mohammed Anwar AL-Ghrawi | Aminetou Yacoub | Ruwa AbuHweidi | Vatimetou Mohamed Lemin | Reem Abdel-Salam | Ahlam Bashiti | Adel Ammar | Aisha Alansari | Ahmed Ashraf | Nora Alturayeif | Alcides Alcoba Inciarte | AbdelRahim A. Elmadany | Mohamedou Cheikh Tourad | Ismail Berrada | Mustafa Jarrar | Shady Shehata | Muhammad Abdul-Mageed
Findings of the Association for Computational Linguistics: EMNLP 2025

Mainstream large vision-language models (LVLMs) inherently encode cultural biases, highlighting the need for diverse multimodal datasets. To address this gap, we introduce PEARL, a large-scale Arabic multimodal dataset and benchmark explicitly designed for cultural understanding. Constructed through advanced agentic workflows and extensive human-in-the-loop annotations by 37 annotators from across the Arab world, PEARL comprises over 309K multimodal examples spanning ten culturally significant domains covering all Arab countries. We further provide two robust evaluation benchmarks (PEARL and PEARL-LITE) along with a specialized subset (PEARL-X) explicitly developed to assess nuanced cultural variations. Comprehensive evaluations on state-of-the-art open and proprietary LVLMs demonstrate that reasoning-centric instruction alignment substantially improves models’ cultural grounding compared to conventional scaling methods. PEARL establishes a foundational resource for advancing culturally-informed multimodal modeling research. All datasets and benchmarks are publicly available.

2024

CIDAR: Culturally Relevant Instruction Dataset For Arabic
Zaid Alyafeai | Khalid Almubarak | Ahmed Ashraf | Deema Alnuhait | Saied Alshahrani | Gubran A. Q. Abdulrahman | Gamil Ahmed | Qais Gawah | Zead Saleh | Mustafa Ghaleb | Yousef Ali | Maged S. Al-shaibani
Findings of the Association for Computational Linguistics: ACL 2024

Instruction tuning has emerged as a prominent methodology for teaching Large Language Models (LLMs) to follow instructions. However, current instruction datasets predominantly cater to English or are derived from English-dominated LLMs, leading to inherent biases toward Western culture. This bias negatively impacts non-English languages such as Arabic and the unique culture of the Arab region. This paper addresses this limitation by introducing CIDAR, the first open Arabic instruction-tuning dataset culturally aligned by native Arabic speakers. CIDAR contains 10,000 instruction–output pairs that represent the Arab region. We discuss the cultural relevance of CIDAR by analyzing models fine-tuned on it and comparing them with models fine-tuned on other datasets. Our experiments indicate that models fine-tuned on CIDAR achieve better cultural alignment than those fine-tuned on 30x more data.