AfriInstruct: Instruction Tuning of African Languages for Diverse Tasks
Kosei Uemura, Mahe Chen, Alex Pejovic, Chika Maduabuchi, Yifei Sun, En-Shiun Annie Lee
Abstract
Large language models (LLMs) perform worse on African languages than on high-resource languages. To address this issue, we introduce AfriInstruct, which specializes in instruction tuning for multiple African languages covering various tasks. We trained LLaMA-2-7B using continual pretraining and instruction fine-tuning, and the resulting model demonstrates superior performance across multiple tasks. Our mixed-task evaluation shows that our model outperforms GPT-3.5-Turbo and other baseline models of similar size. Our contributions help close a critical gap in LLM performance between high-resource and African languages.
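The abstract describes a two-stage recipe of continual pretraining followed by instruction fine-tuning on LLaMA-2-7B. Below is a minimal sketch of what the instruction fine-tuning stage might look like with the Hugging Face Trainer; the dataset file, prompt template, and hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
# Illustrative sketch of instruction fine-tuning a LLaMA-2-7B checkpoint.
# The dataset path, prompt format, and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model_name = "meta-llama/Llama-2-7b-hf"  # base model named in the abstract
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical instruction dataset with "instruction" and "output" fields.
dataset = load_dataset("json", data_files="afri_instruct.jsonl", split="train")

def format_and_tokenize(example):
    # Concatenate prompt and response into one causal-LM training string.
    text = (f"### Instruction:\n{example['instruction']}\n"
            f"### Response:\n{example['output']}")
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(format_and_tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="afriinstruct-llama2-7b",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```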
- Anthology ID:
- 2024.findings-emnlp.793
- Volume:
- Findings of the Association for Computational Linguistics: EMNLP 2024
- Month:
- November
- Year:
- 2024
- Address:
- Miami, Florida, USA
- Editors:
- Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 13571–13585
- URL:
- https://preview.aclanthology.org/fix-sig-urls/2024.findings-emnlp.793/
- DOI:
- 10.18653/v1/2024.findings-emnlp.793
- Cite (ACL):
- Kosei Uemura, Mahe Chen, Alex Pejovic, Chika Maduabuchi, Yifei Sun, and En-Shiun Annie Lee. 2024. AfriInstruct: Instruction Tuning of African Languages for Diverse Tasks. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 13571–13585, Miami, Florida, USA. Association for Computational Linguistics.
- Cite (Informal):
- AfriInstruct: Instruction Tuning of African Languages for Diverse Tasks (Uemura et al., Findings 2024)
- PDF:
- https://preview.aclanthology.org/fix-sig-urls/2024.findings-emnlp.793.pdf