Lian Zhang
2024
AceGPT, Localizing Large Language Models in Arabic
Huang Huang | Fei Yu | Jianqing Zhu | Xuening Sun | Hao Cheng | Dingjie Song | Zhihong Chen | Mosen Alharthi | Bang An | Juncai He | Ziche Liu | Junying Chen | Jianquan Li | Benyou Wang | Lian Zhang | Ruoyu Sun | Xiang Wan | Haizhou Li | Jinchao Xu
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
This paper is devoted to the development of a localized Large Language Model (LLM) specifically for Arabic, a language imbued with unique cultural characteristics that are inadequately addressed by current mainstream models. Significant concerns arise around cultural sensitivity and alignment with local values. To address these concerns, the paper proposes a comprehensive solution comprising further pre-training on Arabic texts, Supervised Fine-Tuning (SFT) utilizing native Arabic instructions and GPT-4 responses in Arabic, and Reinforcement Learning with AI Feedback (RLAIF) employing a reward model attuned to local culture and values. The goal is to cultivate culturally cognizant and value-aligned Arabic LLMs capable of accommodating the diverse, application-specific needs of Arabic-speaking communities. Comprehensive evaluations reveal that the resulting model, dubbed ‘AceGPT’, sets the state-of-the-art standard for open Arabic LLMs across various benchmarks. Code, data, and models are available at https://github.com/FreedomIntelligence/AceGPT.
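The released models can be run with the Hugging Face transformers library. Below is a minimal inference sketch; the checkpoint name "FreedomIntelligence/AceGPT-7B-chat" is an assumption based on the paper's GitHub organization, so verify the exact released checkpoints against the repository linked above.

```python
# Minimal sketch: loading an AceGPT checkpoint for Arabic text generation.
# The model ID below is assumed, not confirmed by the abstract; check the
# FreedomIntelligence GitHub/Hugging Face pages for the actual releases.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/AceGPT-7B-chat"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Arabic prompt: "What is the capital of Saudi Arabia?"
prompt = "ما هي عاصمة المملكة العربية السعودية؟"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```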