Ma Zhuoheng
2025
Second Language (Arabic) Acquisition of LLMs via Progressive Vocabulary Expansion
Jianqing Zhu | Huang Huang | Zhihang Lin | Juhao Liang | Zhengyang Tang | Khalid Almubarak | Mosen Alharthi | Bang An | Juncai He | Xiangbo Wu | Fei Yu | Junying Chen | Ma Zhuoheng | Yuhao Du | He Zhang | Saied Alshahrani | Emad A. Alghamdi | Lian Zhang | Ruoyu Sun | Haizhou Li | Benyou Wang | Jinchao Xu
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
This paper addresses the critical need for democratizing large language models (LLMs) in the Arab world, a region that has seen slower progress in developing models comparable to state-of-the-art offerings like GPT-4 or GPT-3.5, due to a predominant focus on mainstream languages (e.g., English and Chinese). One practical objective for Arabic LLMs is to use an Arabic-specific vocabulary in the tokenizer to accelerate decoding. However, switching to a different vocabulary often degrades the model's learned knowledge, since many words become out-of-vocabulary (OOV) at the beginning of training. Inspired by vocabulary learning during second language (Arabic) acquisition in humans, the released AraLLaMA employs progressive vocabulary expansion, implemented by a modified BPE algorithm that progressively extends the Arabic subwords in its dynamic vocabulary during training, thereby balancing the OOV ratio at every stage. An ablation study demonstrates the effectiveness of progressive vocabulary expansion. Moreover, AraLLaMA achieves performance comparable to the best Arabic LLMs across a variety of Arabic benchmarks. Our model weights are available at: https://github.com/FreedomIntelligence/AraLLaMa.
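For illustration, the sketch below shows what staged vocabulary expansion can look like in practice. It is a minimal, hypothetical example assuming a Hugging Face-style tokenizer and model API, not the authors' released implementation: new Arabic subwords (e.g., selected by a modified BPE pass) are added in stages between training runs, the embedding matrix is resized, and each new token's embedding is initialized from its old-vocabulary pieces so the OOV ratio stays bounded at every stage.

```python
# Hypothetical sketch of progressive vocabulary expansion (not the paper's code).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

def expand_vocabulary(model, tokenizer, new_tokens):
    """Add new subword tokens and initialize their embeddings from the
    embeddings of their tokenization under the *current* vocabulary."""
    # Tokenize each new token before it is added, so we get its old pieces.
    old_ids = [tokenizer(t, add_special_tokens=False)["input_ids"] for t in new_tokens]
    tokenizer.add_tokens(new_tokens)
    model.resize_token_embeddings(len(tokenizer))

    emb = model.get_input_embeddings().weight
    with torch.no_grad():
        for tok, ids in zip(new_tokens, old_ids):
            new_id = tokenizer.convert_tokens_to_ids(tok)
            if ids:
                # Mean of the old-piece embeddings as the new token's init.
                emb[new_id] = emb[torch.tensor(ids)].mean(dim=0)

# Hypothetical usage: expand in stages, continuing pre-training after each stage.
# tokenizer = AutoTokenizer.from_pretrained(base_model_name)
# model = AutoModelForCausalLM.from_pretrained(base_model_name)
# for stage_tokens in arabic_subword_stages:   # chunks of new Arabic subwords
#     expand_vocabulary(model, tokenizer, stage_tokens)
#     # ... continue training on Arabic data for this stage ...
```

Note that `add_tokens` treats the new entries as added tokens rather than true BPE merges; the paper's modified BPE algorithm updates the merge table itself, which this simplified sketch does not attempt to reproduce.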