Command R7B Arabic: a small, enterprise-focused, multilingual, and culturally aware Arabic LLM
Yazeed Alnumay | Alexandre Barbet | Anna Bialas | William Michael Darling | Shaan Desai | Joan Devassy | Kyle Duffy | Stephanie Howe | Olivia Lasche | Justin Seonyong Lee | Anirudh Shrinivason | Jennifer Tracey
Proceedings of the Sixth Workshop on African Natural Language Processing (AfricaNLP 2025)

Building high-quality large language models (LLMs) for enterprise Arabic applications remains challenging due to the limited availability of digitized Arabic data. In this work, we present a data synthesis and refinement strategy to help address this problem, leveraging synthetic data generation and human-in-the-loop annotation to expand our Arabic training corpus. We further present our iterative post-training recipe, which is essential to achieving state-of-the-art performance in aligning the model with human preferences, a critical aspect of enterprise use cases. The culmination of this effort is the release of a small, 7B-parameter, open-weight model that outperforms similarly sized peers in head-to-head comparisons and on Arabic-focused benchmarks covering cultural knowledge, instruction following, RAG, and contextual faithfulness.
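As a rough illustration of the "synthetic generation plus human-in-the-loop annotation" strategy the abstract describes, the Python sketch below pairs a generation step with a review gate. It is a minimal sketch under assumed names (Example, synthesize_corpus, and the generate/review stubs are all hypothetical); the paper does not publish its pipeline code, and a real setup would call a teacher LLM and route candidates to human annotators rather than to the toy stubs shown here.

```python
# Minimal sketch of a "synthesize, then human-review" data loop.
# All names are hypothetical; this is not the authors' actual pipeline.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Example:
    prompt: str         # Arabic instruction or question
    response: str       # candidate answer from a teacher model
    approved: bool = False

def synthesize_corpus(
    seed_prompts: List[str],
    generate: Callable[[str], str],     # stand-in for a teacher-LLM call
    review: Callable[[Example], bool],  # stand-in for human annotation
) -> List[Example]:
    """Generate one candidate per seed prompt; keep only approved ones."""
    corpus: List[Example] = []
    for prompt in seed_prompts:
        ex = Example(prompt=prompt, response=generate(prompt))
        ex.approved = review(ex)  # human-in-the-loop quality gate
        if ex.approved:
            corpus.append(ex)
    return corpus

if __name__ == "__main__":
    # Toy usage: a canned generator and a reviewer that rejects short answers.
    kept = synthesize_corpus(
        seed_prompts=["ما هي عاصمة المغرب؟"],          # "What is the capital of Morocco?"
        generate=lambda p: "عاصمة المغرب هي الرباط.",  # "The capital of Morocco is Rabat."
        review=lambda ex: len(ex.response) > 10,
    )
    print(f"kept {len(kept)} of 1 candidate(s)")
```

In practice the review gate is where the refinement happens: rejected candidates can be edited by annotators or fed back as negative examples, which is one way an iterative preference-alignment recipe like the one the abstract mentions could consume this data.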