2025
Command R7B Arabic: a small, enterprise-focused, multilingual, and culturally aware Arabic LLM
Yazeed Alnumay
|
Alexandre Barbet
|
Anna Bialas
|
William Michael Darling
|
Shaan@cohere.com
|
Joan@cohere.com
|
Kyle Duffy
|
Stephaniehowe@cohere.com
|
Olivia Lasche
|
Justin Seonyong Lee
|
Anirudh@cohere.com
|
Jennifer@cohere.com
Proceedings of the Sixth Workshop on African Natural Language Processing (AfricaNLP 2025)
Building high-quality large language models (LLMs) for enterprise Arabic applications remains challenging due to the limited availability of digitized Arabic data. In this work, we present a data synthesis and refinement strategy to help address this problem, namely, leveraging synthetic data generation and human-in-the-loop annotation to expand our Arabic training corpus. We further present our iterative post-training recipe, which is essential to achieving state-of-the-art performance in aligning the model with human preferences, a critical aspect of enterprise use cases. The culmination of this effort is the release of a small, 7B, open-weight model that outperforms similarly sized peers in head-to-head comparisons and on Arabic-focused benchmarks covering cultural knowledge, instruction following, RAG, and contextual faithfulness.
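A minimal sketch of the synthesize-then-refine idea described in the abstract: prompt a teacher model for Arabic instruction/response pairs, apply an automatic quality filter, and route rejected samples to human annotators. The functions `call_teacher_model`, `passes_quality_filter`, and `request_human_review` are hypothetical placeholders for illustration only, not Cohere's actual pipeline or API.

```python
def call_teacher_model(topic: str) -> dict:
    """Hypothetical teacher-model call producing one synthetic sample."""
    return {
        "instruction": f"اكتب ملخصاً قصيراً عن {topic}",  # "Write a short summary about <topic>"
        "response": "ملخص تجريبي.",  # placeholder response text
    }

def passes_quality_filter(sample: dict) -> bool:
    """Hypothetical automatic filter (e.g., minimum length, language ID, dedup)."""
    return len(sample["response"].split()) >= 2

def request_human_review(sample: dict) -> None:
    """Hypothetical hook: queue a rejected sample for human-in-the-loop annotation."""
    print("queued for review:", sample["instruction"])

def synthesize(topics: list[str], n_per_topic: int = 2) -> list[dict]:
    """Generate samples per topic, keep those that pass the automatic filter,
    and send the rest through the human-review path."""
    corpus = []
    for topic in topics:
        for _ in range(n_per_topic):
            sample = call_teacher_model(topic)
            if passes_quality_filter(sample):
                corpus.append(sample)
            else:
                request_human_review(sample)
    return corpus

if __name__ == "__main__":
    print(len(synthesize(["التجارة الإلكترونية", "الخدمات المصرفية"])))
```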
2024
When Benchmarks are Targets: Revealing the Sensitivity of Large Language Model Leaderboards
Norah Alzahrani
|
Hisham Alyahya
|
Yazeed Alnumay
|
Sultan AlRashed
|
Shaykhah Alsubaie
|
Yousef Almushayqih
|
Faisal Mirza
|
Nouf Alotaibi
|
Nora Al-Twairesh
|
Areeb Alowisheq
|
M Saiful Bari
|
Haidar Khan
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large Language Model (LLM) leaderboards based on benchmark rankings are regularly used to guide practitioners in model selection. Often, the published leaderboard rankings are taken at face value; we show this is a (potentially costly) mistake. Under existing leaderboards, the relative performance of LLMs is highly sensitive to (often minute) details. We show that for popular multiple-choice question benchmarks (e.g., MMLU), minor perturbations to the benchmark, such as changing the order of choices or the method of answer selection, result in ranking changes of up to 8 positions. We explain this phenomenon by conducting systematic experiments over three broad categories of benchmark perturbations and identifying the sources of this behavior. Our analysis results in several best-practice recommendations, including the advantage of a *hybrid* scoring method for answer selection. Our study highlights the dangers of relying on simple benchmark evaluations and charts the path toward more robust evaluation schemes on existing benchmarks. The code for this paper is available at [https://github.com/National-Center-for-AI-Saudi-Arabia/lm-evaluation-harness](https://github.com/National-Center-for-AI-Saudi-Arabia/lm-evaluation-harness).
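The choice-order perturbation mentioned in the abstract can be illustrated with a short sketch: present the same multiple-choice item twice, differing only in how the choices are ordered in the prompt, and compare the selected answers. The `score_letter` function below is a hypothetical stand-in for a model's log-likelihood scorer (a real harness would query an LLM), and the toy item and any flipped predictions are illustrative, not results from the paper or its evaluation harness.

```python
import random

# Toy MMLU-style item: a question, labelled choices, and the gold answer.
item = {
    "question": "Which normalization variant drops the mean-centering step?",
    "choices": ["BatchNorm", "RMSNorm", "GroupNorm", "WeightNorm"],
    "answer": "RMSNorm",
}

LETTERS = "ABCD"

def build_prompt(question: str, choices: list[str]) -> str:
    """Format the question and choices as a letter-labelled MCQ prompt."""
    lines = [question] + [f"{l}. {c}" for l, c in zip(LETTERS, choices)]
    return "\n".join(lines) + "\nAnswer:"

def score_letter(prompt: str, letter: str) -> float:
    """Hypothetical stand-in for the model's score of answering `letter`
    given `prompt`; deterministic pseudo-random values for illustration."""
    return random.Random(hash((prompt, letter))).random()

def evaluate(order: list[int]) -> bool:
    """Present choices in `order`, pick the highest-scoring letter,
    and check whether it maps back to the gold answer."""
    presented = [item["choices"][i] for i in order]
    prompt = build_prompt(item["question"], presented)
    scores = {l: score_letter(prompt, l) for l in LETTERS[: len(presented)]}
    best_letter = max(scores, key=scores.get)
    return presented[LETTERS.index(best_letter)] == item["answer"]

# The only difference between the two runs is the order of the choices in
# the prompt, yet the selected letter (and hence correctness) can change.
original = list(range(len(item["choices"])))
shuffled = original[:]
random.Random(0).shuffle(shuffled)
print("original order correct:", evaluate(original))
print("shuffled order correct:", evaluate(shuffled))
```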