Murtadha Ahmed
2024
AlclaM: Arabic Dialect Language Model
Murtadha Ahmed | Saghir Alfasly | Bo Wen | Jamal Addeen | Mohammed Ahmed | Yunfeng Liu
Proceedings of The Second Arabic Natural Language Processing Conference
Pre-trained Language Models (PLMs) are integral to many modern natural language processing (NLP) systems. Although multilingual models cover a wide range of languages, they often grapple with challenges like high inference costs and a lack of diverse non-English training data. Arabic-specific PLMs are trained predominantly on Modern Standard Arabic, which compromises their performance on regional dialects. To tackle this, we construct an Arabic dialectal corpus comprising 3.4M sentences gathered from social media platforms. We utilize this corpus to expand the vocabulary and retrain a BERT-based model from scratch. Named AlcLaM, our model was trained using only 13GB of text, which represents a fraction of the data used by existing models such as CAMeL, MARBERT, and ArBERT, compared to 7.8% and 21.3%, respectively. Remarkably, AlcLaM demonstrates superior performance on a variety of Arabic NLP tasks despite the limited training data. AlcLaM is available at: https://github.com/amurtadha/Alclam.
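The recipe described in the abstract (learn a dialect-aware vocabulary from the corpus, then pretrain a BERT-style encoder from scratch with masked language modeling) can be sketched with the Hugging Face tokenizers and transformers libraries. This is a hedged illustration rather than the authors' released training code: the corpus path dialect_corpus.txt, the 50K vocabulary size, and the training hyperparameters are assumptions for demonstration.

```python
# A minimal sketch, assuming a plain-text dialectal corpus in dialect_corpus.txt.
# Step 1: learn a WordPiece vocabulary; step 2: pretrain a BERT-base model from
# scratch with the standard masked language modeling objective.
from tokenizers import BertWordPieceTokenizer
from transformers import (BertConfig, BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

# 1. Train a vocabulary directly on the dialectal corpus (path is hypothetical).
wp = BertWordPieceTokenizer(lowercase=False)
wp.train(files=["dialect_corpus.txt"], vocab_size=50000,
         special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"])
wp.save_model(".", "alclam")  # writes ./alclam-vocab.txt

tokenizer = BertTokenizerFast(vocab_file="alclam-vocab.txt")

# 2. Initialize a BERT model from scratch with the new vocabulary size.
config = BertConfig(vocab_size=tokenizer.vocab_size)
model = BertForMaskedLM(config)

# 3. Tokenize the corpus and pretrain with 15% token masking.
dataset = load_dataset("text", data_files={"train": "dialect_corpus.txt"})["train"]
dataset = dataset.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                      batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
trainer = Trainer(model=model,
                  args=TrainingArguments("alclam-pretrain", per_device_train_batch_size=64),
                  train_dataset=dataset, data_collator=collator)
trainer.train()
```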
Naive Bayes-based Context Extension for Large Language Models
Jianlin Su | Murtadha Ahmed | Bo Wen | Luo Ao | Mingren Zhu | Yunfeng Liu
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Large Language Models (LLMs) have shown promising in-context learning abilities. However, conventional In-Context Learning (ICL) approaches are often impeded by the length limitations of the transformer architecture, which pose challenges when attempting to effectively integrate supervision from a substantial number of demonstration examples. In this paper, we introduce a novel framework, called Naive Bayes-based Context Extension (NBCE), to enable existing LLMs to perform ICL with an increased number of demonstrations by significantly expanding their context size. Importantly, this expansion does not require fine-tuning or dependence on particular model architectures, all while preserving linear efficiency. NBCE initially splits the context into equal-sized windows fitting the target LLM's maximum length. Then, it introduces a voting mechanism to select the most relevant window, regarded as the posterior context. Finally, it employs Bayes' theorem to generate the output for the test task. Our experimental results demonstrate that NBCE substantially enhances performance, particularly as the number of demonstration examples increases, consistently outperforming alternative methods. The NBCE code is publicly available at: https://github.com/amurtadha/NBCE-master
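The decoding step outlined in the abstract can be sketched against the Hugging Face transformers API. This is a hedged illustration, not the reference implementation (which lives in the repository linked above): the placeholder model gpt2, the beta value, and the entropy-based voting rule are assumptions for demonstration.

```python
# A minimal NBCE-style decoding sketch: score the next token under each context
# window independently, "vote" for the most confident window, and combine it with
# the context-free distribution via a naive Bayes decomposition.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder LLM; the method itself is architecture-agnostic
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def nbce_next_token_logprobs(windows, generated, beta=0.25):
    """Combine next-token distributions from several context windows with the
    unconditioned distribution, following the naive Bayes combination rule."""
    with torch.no_grad():
        cond = []
        for w in windows:  # each window is assumed to fit the model's max length
            ids = tok(w + generated, return_tensors="pt").input_ids
            cond.append(torch.log_softmax(model(ids).logits[0, -1], dim=-1))
        cond = torch.stack(cond)  # (num_windows, vocab)

        # Context-free "prior" over the generated text alone.
        ids = tok(generated or tok.bos_token, return_tensors="pt").input_ids
        prior = torch.log_softmax(model(ids).logits[0, -1], dim=-1)

    # Voting: pick the window with the lowest predictive entropy (most confident),
    # treated here as the posterior context.
    entropy = -(cond.exp() * cond).sum(dim=-1)
    pooled = cond[entropy.argmin()]

    # Naive Bayes combination: (beta + 1) * pooled - beta * prior.
    return (beta + 1) * pooled - beta * prior

# Greedy decoding over three hypothetical demonstration windows.
windows = ["demos part 1 ...", "demos part 2 ...", "demos part 3 ..."]
generated = "Question: ...\nAnswer:"
for _ in range(20):
    logp = nbce_next_token_logprobs(windows, generated)
    generated += tok.decode(int(logp.argmax()))
print(generated)
```

Because each window is scored independently of the others, the per-step cost grows linearly with the number of windows, which matches the linear-efficiency claim in the abstract.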
2021
DNN-driven Gradual Machine Learning for Aspect-term Sentiment Analysis
Murtadha Ahmed | Qun Chen | Yanyan Wang | Youcef Nafa | Zhanhuai Li | Tianyi Duan
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021