Abdulrahman AlOsaimy
Also published as: Abdulrahman Alosaimy, Abdulrahman M AlOsaimy
2025
Evaluation of Large Language Models on Arabic Punctuation Prediction
Asma Ali Al Wazrah | Afrah Altamimi | Hawra Aljasim | Waad Alshammari | Rawan Al-Matham | Omar Elnashar | Mohamed Amin | Abdulrahman AlOsaimy
Proceedings of the 1st Workshop on NLP for Languages Using Arabic Script
The linguistic inclusivity of Large Language Models (LLMs) such as ChatGPT, Gemini, JAIS, and AceGPT has not been sufficiently explored, particularly in their handling of low-resource languages like Arabic compared to English. While these models have shown impressive performance across various tasks, their effectiveness in Arabic remains under-examined. Punctuation, critical for sentence structure and comprehension in tasks like speech analysis, synthesis, and machine translation, requires precise prediction. This paper assesses seven LLMs: GPT-4o, Gemini-1.5, JAIS, AceGPT, SILMA, ALLaM, and Command R+ for Arabic punctuation prediction. Additionally, the performance of fine-tuned AraBERT is compared with these models in zero-shot and few-shot settings using a proposed Arabic punctuation prediction corpus of 10,044 sentences. The experiments demonstrate that while AraBERT performs well for specific punctuation marks, LLMs show significant promise in zero-shot learning, with further improvements in few-shot scenarios. These findings highlight the potential of LLMs to enhance the automation and accuracy of Arabic text processing.
Evaluating RAG Pipelines for Arabic Lexical Information Retrieval: A Comparative Study of Embedding and Generation Models
Raghad Al-Rasheed | Abdullah Al Muaddi | Hawra Aljasim | Rawan Al-Matham | Muneera Alhoshan | Asma Al Wazrah | Abdulrahman AlOsaimy
Proceedings of the 1st Workshop on NLP for Languages Using Arabic Script
This paper investigates the effectiveness of retrieval-augmented generation (RAG) pipelines, focusing on Arabic lexical information retrieval. Specifically, it analyzes how embedding models affect the recall of Arabic lexical information and evaluates the ability of large language models (LLMs) to produce accurate and contextually relevant answers within RAG pipelines. We examine a dataset of over 88,000 words from the Riyadh dictionary and evaluate the models using metrics such as Top-K Recall, Mean Reciprocal Rank (MRR), F1 Score, Cosine Similarity, and Accuracy. The research assesses the capabilities of several embedding models, including E5-large, BGE, AraBERT, CAMeLBERT, and AraELECTRA, highlighting a disparity in performance between sentence embeddings and word embeddings. Sentence embedding with E5 achieved the best results, with a Top-5 Recall of 0.88 and an MRR of 0.48. For the generation models, we evaluated GPT-4, GPT-3.5, SILMA-9B, Gemini-1.5, Aya-8B, and AceGPT-13B based on their ability to generate accurate and contextually appropriate responses. GPT-4 demonstrated the best performance, achieving an F1 score of 0.90, an accuracy of 0.82, and a cosine similarity of 0.87. Our results emphasize the strengths and limitations of both embedding and generation models in Arabic tasks.
BALSAM: A Platform for Benchmarking Arabic Large Language Models
Rawan Al-Matham | Kareem Darwish | Raghad Al-Rasheed | Waad Alshammari | Muneera Alhoshan | Amal Almazrua | Asma Al Wazrah | Mais Alheraki | Firoj Alam | Preslav Nakov | Norah Alzahrani | Eman AlBilali | Nizar Habash | Abdelrahman El-Sheikh | Muhammad Elmallah | Haonan Li | Hamdy Mubarak | Mohamed Anwar | Zaid Alyafeai | Ahmed Abdelali | Nora Altwairesh | Maram Hasanain | Abdulmohsen Al Thubaity | Shady Shehata | Bashar Alhafni | Injy Hamed | Go Inoue | Khalid Elmadani | Ossama Obeid | Fatima Haouari | Tamer Elsayed | Emad Alghamdi | Khalid Almubarak | Saied Alshahrani | Ola Aljarrah | Safa Alajlan | Areej Alshaqarawi | Maryam Alshihri | Sultana Alghurabi | Atikah Alzeghayer | Afrah Altamimi | Abdullah Alfaifi | Abdulrahman AlOsaimy
Proceedings of The Third Arabic Natural Language Processing Conference
The impressive advancement of Large Language Models (LLMs) in English has not been matched across all languages. In particular, LLM performance in Arabic lags behind, due to data scarcity, linguistic diversity of Arabic and its dialects, morphological complexity, etc. Progress is further hindered by the quality of Arabic benchmarks, which typically rely on static, publicly available data, lack comprehensive task coverage, or do not provide dedicated platforms with blind test sets. This makes it challenging to measure actual progress and to mitigate data contamination. Here, we aim to bridge these gaps. In particular, we introduce BALSAM, a comprehensive, community-driven benchmark aimed at advancing Arabic LLM development and evaluation. It includes 78 NLP tasks from 14 broad categories, with 52K examples divided into 37K test and 15K development, and a centralized, transparent platform for blind evaluation. We envision BALSAM as a unifying platform that sets standards and promotes collaborative research to advance Arabic LLM capabilities.
2024
KSAA-CAD Shared Task: Contemporary Arabic Dictionary for Reverse Dictionary and Word Sense Disambiguation
Waad Alshammari | Amal Almazrua | Asma Al Wazrah | Rawan Almatham | Muneera Alhoshan | Abdulrahman Alosaimy
Proceedings of the Second Arabic Natural Language Processing Conference
This paper outlines the KSAA-CAD shared task, which builds on the Contemporary Arabic Language Dictionary to develop a Reverse Dictionary (RD) system and enhance Word Sense Disambiguation (WSD) capabilities. The first KSAA-RD (Al-Matham et al., 2023) highlighted significant gaps in the domain of RDs, which are designed to retrieve words by their meanings or definitions. This shared task comprises two tasks: RD and WSD. The RD task focuses on identifying word embeddings that most accurately match a given definition, termed a “gloss,” in Arabic. Conversely, the WSD task involves determining the specific meaning of a word in context, particularly when the word has multiple meanings. The winning team achieved the highest-ranking score of 0.0644 in RD using Electra embeddings. In this paper, we describe the methods employed by the participating teams and provide insights into the future direction of KSAA-CAD.
2023
KSAA-RD Shared Task: Arabic Reverse Dictionary
Rawan Al-Matham | Waad Alshammari | Abdulrahman AlOsaimy | Sarah Alhumoud | Asma Wazrah | Afrah Altamimi | Halah Alharbi | Abdullah Alaifi
Proceedings of ArabicNLP 2023
This paper outlines the KSAA-RD shared task, which aims to develop a Reverse Dictionary (RD) system for the Arabic language. RDs allow users to find words based on their meanings or definitions. This shared task, KSAA-RD, includes two subtasks: Arabic RD and cross-lingual reverse dictionaries (CLRD). Given a definition (referred to as a “gloss”) in either Arabic or English, the teams compete to find the word embeddings most similar to those of the corresponding word. The winning team achieved 24.20 and 12.70 for RD and CLRD, respectively, in terms of the rank metric. In this paper, we describe the methods employed by the participating teams and offer an outlook for KSAA-RD.
2018
Web-based Annotation Tool for Inflectional Language Resources
Abdulrahman Alosaimy | Eric Atwell
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)
2016
Arabic Language WEKA-Based Dialect Classifier for Arabic Automatic Speech Recognition Transcripts
Areej Alshutayri | Eric Atwell | Abdulrahman Alosaimy | James Dickins | Michael Ingleby | Janet Watson
Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial3)
This paper describes an Arabic dialect identification system which we developed for the Discriminating Similar Languages (DSL) 2016 shared task. We classified Arabic dialects using the Waikato Environment for Knowledge Analysis (WEKA) data-analytics tool, which contains many alternative filters and classifiers for machine learning. We experimented with several classifiers; the best accuracy was achieved using the Sequential Minimal Optimization (SMO) algorithm, trained and tested on three different feature sets for each test run. Our approach achieved an accuracy of 42.85%, which is considerably worse than both the evaluation scores on the training set (80–90%) and a 60:40 percentage split of the training set, which achieved an accuracy of around 50%. We observed that the Buckwalter transcripts from the Saarland Automatic Speech Recognition (ASR) system are given without short vowels, even though the Buckwalter system has notation for them. We elaborate on these observations, describe our methods, and analyse the training dataset.
Co-authors
- Rawan Al-Matham 4
- Waad Thuwaini Alshammari 4
- Asma Al Wazrah 3
- Muneera Alhoshan 3
- Afrah Altamimi 3
- Raghad Al-Rasheed 2
- Hawra Aljasim 2
- Amal Almazrua 2
- Eric Atwell 2
- Ahmed Abdelali 1
- Abdullah Al Muaddi 1
- Asma Ali Al Wazrah 1
- Abdulmohsen Al-Thubaity 1
- Abdullah Alaifi 1
- Safa Alajlan 1
- Firoj Alam 1
- Eman Albilali 1
- Abdullah Alfaifi 1
- Emad Alghamdi 1
- Sultana Alghurabi 1
- Bashar Alhafni 1
- Halah Alharbi 1
- Mais Alheraki 1
- Sarah Alhumoud 1
- Ola Aljarrah 1
- Rawan Almatham 1
- Khalid Almubarak 1
- Saied Alshahrani 1
- Areej Alshaqarawi 1
- Maryam Alshihri 1
- Areej Alshutayri 1
- Nora Altwairesh 1
- Zaid Alyafeai 1
- Norah A. Alzahrani 1
- Atikah Alzeghayer 1
- Mohamed Amin 1
- Mohamed Anwar 1
- Kareem Darwish 1
- James Dickins 1
- Abdelrahman El-Sheikh 1
- Khalid Elmadani 1
- Muhammad Elmallah 1
- Omar Elnashar 1
- Tamer Elsayed 1
- Nizar Habash 1
- Injy Hamed 1
- Fatima Haouari 1
- Maram Hasanain 1
- Michael Ingleby 1
- Go Inoue 1
- Haonan Li 1
- Hamdy Mubarak 1
- Preslav Nakov 1
- Ossama Obeid 1
- Shady Shehata 1
- Janet Watson 1
- Asma Wazrah 1