Adel Ammar
2025
ANLPers at AraGenEval Shared Task: Descriptive Author Tokens for Transparent Arabic Authorship Style Transfer
Omer Nacar | Mahmoud Reda | Serry Sibaee | Yasser Alhabashi | Adel Ammar | Wadii Boulila
Proceedings of The Third Arabic Natural Language Processing Conference: Shared Tasks
ANLPers at BAREC Shared Task 2025: Readability of Embeddings Training Neural Readability Classifiers on the BAREC Corpus
Serry Sibaee | Omer Nacar | Yasser Alhabashi | Adel Ammar | Wadii Boulila
Proceedings of The Third Arabic Natural Language Processing Conference: Shared Tasks
ANPLers at IqraEval Shared task: Adapting Whisper-large-v3 as Speech-to-Phoneme for Qur’anic Recitation Mispronunciation Detection
Nour Qandos | Serry Sibaee | Samar Ahmad | Omer Nacar | Adel Ammar | Wadii Boulila | Yasser Alhabashi
Proceedings of The Third Arabic Natural Language Processing Conference: Shared Tasks
ANLPers at MAHED2025: From Hate to Hope: Boosting Arabic Text Classification
Yasser Alhabashi | Serry Sibaee | Omer Nacar | Adel Ammar | Wadii Boulila
Proceedings of The Third Arabic Natural Language Processing Conference: Shared Tasks
ANLPers at QIAS: CoT for Islamic Inheritance
Serry Sibaee | Mahmoud Reda | Omer Nacar | Yasser Alhabashi | Adel Ammar | Wadii Boulila
Proceedings of The Third Arabic Natural Language Processing Conference: Shared Tasks
Pearl: A Multimodal Culturally-Aware Arabic Instruction Dataset
Fakhraddin Alwajih | Samar M. Magdy | Abdellah El Mekki | Omer Nacar | Youssef Nafea | Safaa Taher Abdelfadil | Abdulfattah Mohammed Yahya | Hamzah Luqman | Nada Almarwani | Samah Aloufi | Baraah Qawasmeh | Houdaifa Atou | Serry Sibaee | Hamzah A. Alsayadi | Walid Al-Dhabyani | Maged S. Al-shaibani | Aya El aatar | Nour Qandos | Rahaf Alhamouri | Samar Ahmad | Mohammed Anwar AL-Ghrawi | Aminetou Yacoub | Ruwa AbuHweidi | Vatimetou Mohamed Lemin | Reem Abdel-Salam | Ahlam Bashiti | Adel Ammar | Aisha Alansari | Ahmed Ashraf | Nora Alturayeif | Alcides Alcoba Inciarte | AbdelRahim A. Elmadany | Mohamedou Cheikh Tourad | Ismail Berrada | Mustafa Jarrar | Shady Shehata | Muhammad Abdul-Mageed
Findings of the Association for Computational Linguistics: EMNLP 2025
Mainstream large vision-language models (LVLMs) inherently encode cultural biases, highlighting the need for diverse multimodal datasets. To address this gap, we introduce PEARL, a large-scale Arabic multimodal dataset and benchmark explicitly designed for cultural understanding. Constructed through advanced agentic workflows and extensive human-in-the-loop annotation by 37 annotators from across the Arab world, PEARL comprises over 309K multimodal examples spanning ten culturally significant domains and covering all Arab countries. We further provide two robust evaluation benchmarks (PEARL and PEARL-LITE) along with a specialized subset (PEARL-X) explicitly developed to assess nuanced cultural variations. Comprehensive evaluations of state-of-the-art open and proprietary LVLMs demonstrate that reasoning-centric instruction alignment substantially improves models' cultural grounding compared to conventional scaling methods. PEARL establishes a foundational resource for advancing culturally informed multimodal modeling research. All datasets and benchmarks are publicly available.
Towards Inclusive Arabic LLMs: A Culturally Aligned Benchmark in Arabic Large Language Model Evaluation
Omer Nacar | Serry Taiseer Sibaee | Samar Ahmed | Safa Ben Atitallah | Adel Ammar | Yasser Alhabashi | Abdulrahman S. Al-Batati | Arwa Alsehibani | Nour Qandos | Omar Elshehy | Mohamed Abdelkader | Anis Koubaa
Proceedings of the First Workshop on Language Models for Low-Resource Languages
Arabic Large Language Models (LLMs) are usually evaluated with Western-centric benchmarks that overlook essential cultural contexts, making them less effective and culturally misaligned for Arabic-speaking communities. This study addresses this gap by evaluating the Arabic Massive Multitask Language Understanding (MMLU) Benchmark to assess its cultural alignment and relevance for Arabic LLMs across culturally sensitive topics. A team of eleven experts annotated over 2,500 questions, evaluating them for fluency, adequacy, cultural appropriateness, bias, religious sensitivity, and adherence to social norms. Through this human assessment, the study highlights significant cultural misalignments and biases, particularly in sensitive areas such as religion and morality. In response to these findings, we propose annotation guidelines and integrate culturally enriched data sources to enhance the benchmark's reliability and relevance. The research underscores the importance of cultural sensitivity in evaluating inclusive Arabic LLMs, fostering more widely accepted LLMs for Arabic-speaking communities.
Co-authors
- Omer Nacar 7
- Yasser Alhabashi 6
- Serry Sibaee 6
- Wadii Boulila 5
- Nour Qandos 3
- Samar Ahmad 2
- Mahmoud Reda 2
- Mohammed Anwar AL-Ghrawi 1
- Reem Abdel-Salam 1
- Safaa Taher Abdelfadil 1
- Mohamed Abdelkader 1
- Muhammad Abdul-Mageed 1
- Ruwa AbuHweidi 1
- Samar Ahmed 1
- Abdulrahman S. Al-Batati 1
- Walid Al-Dhabyani 1
- Maged S. Al-shaibani 1
- Aisha Alansari 1
- Alcides Alcoba Inciarte 1
- Rahaf Alhamouri 1
- Nada Almarwani 1
- Samah Aloufi 1
- Hamzah A. Alsayadi 1
- Arwa Alsehibani 1
- Nora Alturayeif 1
- Fakhraddin Alwajih 1
- Ahmed Ashraf 1
- Houdaifa Atou 1
- Ahlam Bashiti 1
- Safa Ben Atitallah 1
- Ismail Berrada 1
- Abdellah El Mekki 1
- Aya El aatar 1
- AbdelRahim A. Elmadany 1
- Omar Elshehy 1
- Mustafa Jarrar 1
- Anis Koubaa 1
- Vatimetou Mohamed Lemin 1
- Hamzah Luqman 1
- Samar Mohamed Magdy 1
- Youssef Nafea 1
- Baraah Qawasmeh 1
- Shady Shehata 1
- Serry Taiseer Sibaee 1
- Mohamedou Cheikh Tourad 1
- Aminetou Yacoub 1
- Abdulfattah Mohammed Yahya 1