2025
HausaNLP: Current Status, Challenges and Future Directions for Hausa Natural Language Processing
Shamsuddeen Hassan Muhammad | Ibrahim Said Ahmad | Idris Abdulmumin | Falalu Ibrahim Lawan | Sukairaj Hafiz Imam | Yusuf Aliyu | Sani Abdullahi Sani | Ali Usman Umar | Tajuddeen Gwadabe | Kenneth Church | Vukosi Marivate
Proceedings of the Sixth Workshop on African Natural Language Processing (AfricaNLP 2025)
Hausa Natural Language Processing (NLP) has gained increasing attention in recent years, yet remains understudied as a low-resource language despite having over 120 million first-language (L1) and 80 million second-language (L2) speakers worldwide. While significant advances have been made in high-resource languages, Hausa NLP faces persistent challenges, including limited open-source datasets and inadequate model representation. This paper presents an overview of the current state of Hausa NLP, systematically examining existing resources, research contributions, and gaps across fundamental NLP tasks: text classification, machine translation, named entity recognition, speech recognition, and question answering. We introduce HausaNLP, a curated catalog that aggregates datasets, tools, and research works to enhance accessibility and drive further development. Furthermore, we discuss challenges in integrating Hausa into large language models (LLMs), addressing issues of suboptimal tokenization and dialectal variation. Finally, we propose strategic research directions emphasizing dataset expansion, improved language modeling approaches, and strengthened community collaboration to advance Hausa NLP. Our work provides both a foundation for accelerating Hausa NLP progress and valuable insights for broader multilingual NLP research.
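To make the tokenization issue concrete, the snippet below is a minimal sketch (not from the paper) that measures subword fertility, i.e. tokens per word, for a Hausa sentence under different Hugging Face tokenizers; the model names and the example sentence are illustrative assumptions.

```python
# Minimal sketch (not from the paper): see how aggressively different
# tokenizers fragment Hausa text. Model names and sentence are assumptions.
from transformers import AutoTokenizer

hausa_sentence = "Ina son harshen Hausa sosai"  # "I like the Hausa language very much"

for model_name in ["gpt2", "castorini/afriberta_large"]:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokens = tokenizer.tokenize(hausa_sentence)
    # Fertility = subword tokens per whitespace word; higher means more fragmentation.
    fertility = len(tokens) / len(hausa_sentence.split())
    print(f"{model_name}: {len(tokens)} tokens, fertility = {fertility:.2f}")
```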
Investigating the Impact of Language-Adaptive Fine-Tuning on Sentiment Analysis in Hausa Language Using AfriBERTa
Sani Abdullahi Sani | Shamsuddeen Hassan Muhammad | Devon Jarvis
Proceedings of the First Workshop on Language Models for Low-Resource Languages
Sentiment analysis (SA) plays a vital role in Natural Language Processing (NLP) by identifying sentiments expressed in text. Although significant advances have been made in SA for widely spoken languages, low-resource languages such as Hausa face unique challenges, primarily due to a lack of digital resources. This study investigates the effectiveness of Language-Adaptive Fine-Tuning (LAFT) for improving SA performance in Hausa. We first curate a diverse, unlabeled corpus to expand the model’s linguistic capabilities, and then apply LAFT to adapt AfriBERTa specifically to the nuances of the Hausa language. The adapted model is then fine-tuned on the labeled NaijaSenti sentiment dataset to evaluate its performance. Our findings demonstrate that LAFT yields modest improvements, which may be attributed to the use of formal Hausa text rather than informal social media data. Nevertheless, the pre-trained AfriBERTa model significantly outperformed models not specifically trained on Hausa, highlighting the importance of using pre-trained models in low-resource contexts. This research emphasizes the necessity of diverse data sources to advance NLP applications for low-resource African languages. We will publish the code and the dataset to encourage further research and facilitate reproducibility in low-resource NLP.
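As a rough illustration of the LAFT recipe the abstract describes (continued masked-language-model pretraining on unlabeled Hausa text before task fine-tuning), here is a minimal sketch; the checkpoint name, file path, and hyperparameters are assumptions, not the authors' code.

```python
# Minimal LAFT sketch (assumptions, not the authors' code): continue MLM
# pretraining of AfriBERTa on unlabeled Hausa text before task fine-tuning.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "castorini/afriberta_large"  # assumed AfriBERTa checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Unlabeled Hausa corpus, one sentence per line (hypothetical local file).
corpus = load_dataset("text", data_files={"train": "hausa_corpus.txt"})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True, remove_columns=["text"],
)

# Standard 15% random masking for the MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="afriberta-hausa-laft",
                         num_train_epochs=3, per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=tokenized["train"],
        data_collator=collator).train()
# The adapted checkpoint is then fine-tuned on NaijaSenti for sentiment classification.
```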
HausaNLP at SemEval-2025 Task 2: Entity-Aware Fine-tuning vs. Prompt Engineering in Entity-Aware Machine Translation
Abdulhamid Abubakar | Hamidatu Abdulkadir | Rabiu Ibrahim | Abubakar Auwal | Ahmad Wali | Amina Umar | Maryam Bala | Sani Abdullahi Sani | Ibrahim Said Ahmad | Shamsuddeen Hassan Muhammad | Idris Abdulmumin | Vukosi Marivate
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
This paper presents our findings for SemEval 2025 Task 2, a shared task on entity-aware machine translation (EA-MT). The goal of this task is to develop translation models that can accurately translate English sentences into target languages, with a particular focus on handling named entities, which often pose challenges for MT systems. The task covers 10 target languages with English as the source. In this paper, we describe the different systems we employed, detail our results, and discuss insights gained from our experiments.
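For the prompt-engineering side of EA-MT, the sketch below shows what an entity-aware prompt might look like; the template and example are my assumptions, not the paper's actual prompts.

```python
# Minimal sketch (an assumption, not the paper's exact prompt) of an
# entity-aware MT prompt: the prompt names the entity and its known
# target-language form so the model does not mistranslate it.
def build_ea_mt_prompt(sentence: str, entity: str, entity_translation: str,
                       target_language: str) -> str:
    return (
        f"Translate the following English sentence into {target_language}.\n"
        f'The named entity "{entity}" must be rendered as '
        f'"{entity_translation}" in the translation.\n\n'
        f"English: {sentence}\n"
        f"{target_language}:"
    )

prompt = build_ea_mt_prompt(
    sentence="The Louvre is the world's most-visited museum.",
    entity="The Louvre",
    entity_translation="Il Louvre",
    target_language="Italian",
)
print(prompt)  # Sent to any instruction-tuned LLM.
```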
HausaNLP at SemEval-2025 Task 3: Towards a Fine-Grained Model-Aware Hallucination Detection
Maryam Bala | Amina Abubakar | Abdulhamid Abubakar | Abdulkadir Bichi | Hafsa Ahmad | Sani Abdullahi Sani | Idris Abdulmumin | Shamsuddeen Hassan Muhammad | Ibrahim Said Ahmad
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
This paper presents our findings from the Multilingual Shared Task on Hallucinations and Related Observable Overgeneration Mistakes (MU-SHROOM), which focuses on identifying hallucinations and related overgeneration errors in large language models (LLMs). The shared task involves detecting the specific text spans that constitute hallucinations in LLM outputs across 14 languages. To address this task, we aim to provide a nuanced, model-aware understanding of hallucination occurrence and severity in English. We used natural language inference and fine-tuned a ModernBERT model on a synthetic dataset of 400 samples, achieving an Intersection over Union (IoU) score of 0.032 and a correlation score of 0.422. The correlation score indicates a moderately positive relationship between the model’s confidence scores and the actual presence of hallucinations, while the IoU score shows a relatively low overlap between the predicted hallucination spans and the ground-truth annotations. This performance is unsurprising given the intricate nature of hallucination detection: hallucinations often manifest subtly and depend heavily on context, making their exact boundaries difficult to pinpoint.
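For reference, the span-level IoU metric mentioned above can be computed at the character level as in the following sketch (my illustration, not the official task scorer):

```python
# Minimal sketch (not the official scorer): character-level Intersection over
# Union between a predicted hallucination span and the gold span.
def span_iou(pred: tuple[int, int], gold: tuple[int, int]) -> float:
    pred_chars = set(range(*pred))   # character offsets covered by the prediction
    gold_chars = set(range(*gold))   # character offsets covered by the gold span
    union = pred_chars | gold_chars
    # Empty prediction and empty gold span count as a perfect match.
    return len(pred_chars & gold_chars) / len(union) if union else 1.0

# A prediction that barely overlaps the gold span yields a low IoU,
# mirroring the low corpus-level score the abstract reports.
print(span_iou((10, 20), (18, 40)))  # ~0.07
```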
HausaNLP at SemEval-2025 Task 11: Advancing Hausa Text-based Emotion Detection
Sani Abdullahi Sani | Salim Abubakar | Falalu Ibrahim Lawan | Abdulhamid Abubakar | Maryam Bala
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
This paper presents our approach to multi-label emotion detection in Hausa, a low-resource African language, as part of SemEval-2025 Task 11 (Track A). We fine-tuned AfriBERTa, a transformer-based model pre-trained on African languages, to classify Hausa text into six emotions: anger, disgust, fear, joy, sadness, and surprise. Our methodology involved data preprocessing, tokenization, and model fine-tuning using the Hugging Face Trainer API. The system achieved a validation accuracy of 74.00% and an F1-score of 73.50%, demonstrating the effectiveness of transformer-based models for emotion detection in low-resource languages.
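A minimal sketch of the described setup, fine-tuning AfriBERTa for multi-label emotion classification with the Hugging Face Trainer API, follows; the checkpoint name, toy examples, and hyperparameters are assumptions rather than the authors' exact configuration.

```python
# Minimal sketch (assumptions, not the authors' code): multi-label emotion
# classification for Hausa with AfriBERTa and the Hugging Face Trainer API.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]
model_name = "castorini/afriberta_large"  # assumed AfriBERTa checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=len(EMOTIONS),
    problem_type="multi_label_classification",  # sigmoid + BCE loss per label
)

# Two invented toy examples with multi-hot float labels (joy, fear).
data = Dataset.from_dict({
    "text": ["Ina jin dadi sosai!", "Wannan labari ya ba ni tsoro."],
    "labels": [[0.0, 0.0, 0.0, 1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]],
})
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     padding="max_length", max_length=64))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="afriberta-hausa-emotions",
                           num_train_epochs=1, per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()
```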