Nora Altwairesh


2025

BALSAM: A Platform for Benchmarking Arabic Large Language Models
Rawan Nasser Almatham | Kareem Mohamed Darwish | Raghad Al-Rasheed | Waad Thuwaini Alshammari | Muneera Alhoshan | Amal Almazrua | Asma Al Wazrah | Mais Alheraki | Firoj Alam | Preslav Nakov | Norah A. Alzahrani | Eman Albilali | Nizar Habash | Abdelrahman Mustafa El-Sheikh | Muhammad Elmallah | Hamdy Mubarak | Zaid Alyafeai | Mohamed Anwar | Haonan Li | Ahmed Abdelali | Nora Altwairesh | Maram Hasanain | Abdulmohsen Al-Thubaity | Shady Shehata | Bashar Alhafni | Injy Hamed | Go Inoue | Khalid N. Elmadani | Ossama Obeid | Fatima Haouari | Tamer Elsayed | Emad A. Alghamdi | Khalid Almubarak | Saied Alshahrani | Ola Aljareh | Safa Alajlan | Areej Alshaqarawi | Maryam Alshihri | Sultana Alghurabi | Atikah Alzeghayer | Afrah Altamimi | Abdullah Alfaifi | Abdulrahman M Alosaimy
Proceedings of The Third Arabic Natural Language Processing Conference

The impressive advancement of Large Language Models (LLMs) in English has not been matched across all languages. In particular, LLM performance in Arabic lags behind, due to data scarcity, the linguistic diversity of Arabic and its dialects, and morphological complexity. Progress is further hindered by the quality of Arabic benchmarks, which typically rely on static, publicly available data, lack comprehensive task coverage, or do not provide dedicated platforms with blind test sets. This makes it challenging to measure actual progress and to mitigate data contamination. Here, we aim to bridge these gaps. In particular, we introduce BALSAM, a comprehensive, community-driven benchmark aimed at advancing Arabic LLM development and evaluation. It includes 78 NLP tasks from 14 broad categories, with 52K examples divided into 37K test and 15K development examples, and a centralized, transparent platform for blind evaluation. We envision BALSAM as a unifying platform that sets standards and promotes collaborative research to advance Arabic LLM capabilities.
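
To make the blind-evaluation idea concrete, here is a minimal Python sketch of the protocol the abstract describes: development examples ship with references and can be scored locally, while test references are withheld by the platform. The data layout, the exact-match metric, and the model stub are illustrative assumptions, not the actual BALSAM API.

    # Illustrative sketch of blind evaluation; data layout and metric are
    # hypothetical, not the actual BALSAM platform interface.

    def my_model(prompt: str) -> str:
        """Stand-in for a real LLM call; replace with an actual model."""
        return "Hello"

    def exact_match(prediction: str, reference: str) -> float:
        """1.0 if the stripped strings match exactly, else 0.0."""
        return float(prediction.strip() == reference.strip())

    # Development examples come with references, so they can be scored
    # locally (two toy examples stand in for the real development set):
    dev = [
        {"input": "Translate to English: مرحبا", "reference": "Hello"},
        {"input": "Translate to English: شكرا", "reference": "Thank you"},
    ]

    predictions = [my_model(ex["input"]) for ex in dev]
    dev_score = sum(
        exact_match(p, ex["reference"]) for p, ex in zip(predictions, dev)
    ) / len(dev)
    print(f"dev exact match: {dev_score:.2f}")

    # Test examples carry no references: predictions are uploaded to the
    # platform, which holds the test labels and scores submissions
    # server-side, mitigating data contamination.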

2021

What does BERT Learn from Arabic Machine Reading Comprehension Datasets?
Eman Albilali | Nora Altwairesh | Manar Hosny
Proceedings of the Sixth Arabic Natural Language Processing Workshop

In machine reading comprehension tasks, a model must extract an answer from a given passage in response to a question. Recently, transformer-based pre-trained language models have achieved state-of-the-art performance on several natural language processing tasks. However, it is unclear whether such performance reflects true language understanding. In this paper, we propose adversarial examples to probe an Arabic pre-trained language model (AraBERT), leading to a significant performance drop across four Arabic machine reading comprehension datasets. We present a layer-wise analysis of the transformer’s hidden states to offer insights into how AraBERT reasons to derive an answer. The experiments indicate that AraBERT relies on superficial cues and keyword matching rather than on text understanding. Furthermore, hidden-state visualization demonstrates that prediction errors can be recognized from vector representations in earlier layers.
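
As an illustration of the layer-wise analysis described above, the following sketch extracts per-layer hidden states from AraBERT using the Hugging Face Transformers library; the example text and the mean pooling over tokens are illustrative assumptions rather than the paper's exact probing setup.

    # Layer-wise hidden-state extraction from AraBERT; example text and
    # pooling are illustrative, not the paper's exact setup.
    import torch
    from transformers import AutoTokenizer, AutoModel

    name = "aubmindlab/bert-base-arabert"  # AraBERT checkpoint
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name, output_hidden_states=True)
    model.eval()

    question = "أين تقع الرياض؟"  # "Where is Riyadh located?"
    passage = "الرياض هي عاصمة المملكة العربية السعودية."  # "Riyadh is the capital of Saudi Arabia."

    inputs = tokenizer(question, passage, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)

    # outputs.hidden_states is a tuple holding the embedding layer plus one
    # tensor per transformer layer, each of shape (batch, seq_len, hidden_size).
    for layer_idx, states in enumerate(outputs.hidden_states):
        # Mean-pool over tokens to get one vector per layer, e.g. for
        # visualizing correct vs. incorrect predictions with PCA or t-SNE.
        pooled = states.mean(dim=1).squeeze(0)
        print(layer_idx, tuple(pooled.shape))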