Implicit Cross-Lingual Rewarding for Efficient Multilingual Preference Alignment

Wen Yang, Junhong Wu, Chen Wang, Chengqing Zong, Jiajun Zhang


Abstract
Direct Preference Optimization (DPO) has become a prominent method for aligning Large Language Models (LLMs) with human preferences. While DPO has enabled significant progress in aligning English LLMs, multilingual preference alignment is hampered by data scarcity. To address this, we propose a novel approach that captures the preferences learned by well-aligned English models through their implicit rewards and transfers them to other languages via iterative training. Specifically, we derive an implicit reward model from the logits of an English DPO-aligned model and its corresponding reference model. This reward model is then used to annotate preference relations in cross-lingual instruction-following pairs, using English instructions to evaluate multilingual responses. The annotated data is subsequently used for multilingual DPO fine-tuning, enabling preference knowledge to transfer from English to other languages. Fine-tuning Llama3 for two iterations yields an average improvement of 12.72% in Win Rate and 5.97% in Length-Controlled Win Rate across all training languages on the X-AlpacaEval leaderboard. Our findings demonstrate that leveraging existing English-aligned models can enable efficient and effective multilingual preference alignment, significantly reducing the need for extensive multilingual preference data.
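The core mechanism described in the abstract is scoring each multilingual response with the implicit reward implied by an English DPO-aligned model and its reference model, r(x, y) = β · (log π_DPO(y | x) − log π_ref(y | x)), and then forming chosen/rejected pairs from the highest- and lowest-scoring responses. The sketch below illustrates this idea only; it is not the authors' released code, and the model names, β value, and single-pair selection rule are placeholder assumptions.

```python
# Minimal sketch of implicit cross-lingual rewarding (assumptions labeled below):
# score a response y to an English instruction x with
#   r(x, y) = beta * (log pi_dpo(y | x) - log pi_ref(y | x)),
# then take the best/worst multilingual responses as a DPO preference pair.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

DPO_MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"  # stand-in for the English DPO-aligned model (assumption)
REF_MODEL = "meta-llama/Meta-Llama-3-8B"           # stand-in for its reference model (assumption)
BETA = 0.1                                         # DPO temperature, assumed value

tok = AutoTokenizer.from_pretrained(DPO_MODEL)
policy = AutoModelForCausalLM.from_pretrained(DPO_MODEL, torch_dtype=torch.bfloat16)
ref = AutoModelForCausalLM.from_pretrained(REF_MODEL, torch_dtype=torch.bfloat16)
policy.eval(); ref.eval()

@torch.no_grad()
def response_logprob(model, prompt: str, response: str) -> float:
    """Sum of token log-probabilities of `response` conditioned on `prompt`."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + response, return_tensors="pt").input_ids
    logits = model(full_ids).logits[:, :-1, :]      # logits at position i predict token i+1
    targets = full_ids[:, 1:]
    logprobs = torch.log_softmax(logits, dim=-1).gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # keep only the positions that generate response tokens
    return logprobs[:, prompt_ids.shape[1] - 1:].sum().item()

def implicit_reward(prompt: str, response: str) -> float:
    return BETA * (response_logprob(policy, prompt, response)
                   - response_logprob(ref, prompt, response))

# English instruction paired with candidate responses in a target language (toy data).
english_instruction = "Explain why the sky is blue."
candidates = ["El cielo es azul debido a la dispersión de Rayleigh de la luz solar.",
              "El cielo es azul porque refleja el color del océano."]
ranked = sorted(candidates, key=lambda r: implicit_reward(english_instruction, r), reverse=True)
chosen, rejected = ranked[0], ranked[-1]  # preference pair for multilingual DPO fine-tuning
```

In this reading, the English-aligned policy never needs multilingual preference labels: the reward gap between the policy and its reference model supplies the preference signal, and the resulting cross-lingual pairs feed the next DPO iteration.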
Anthology ID:
2025.findings-acl.1088
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
21125–21147
URL:
https://preview.aclanthology.org/transition-to-people-yaml/2025.findings-acl.1088/
DOI:
10.18653/v1/2025.findings-acl.1088
Cite (ACL):
Wen Yang, Junhong Wu, Chen Wang, Chengqing Zong, and Jiajun Zhang. 2025. Implicit Cross-Lingual Rewarding for Efficient Multilingual Preference Alignment. In Findings of the Association for Computational Linguistics: ACL 2025, pages 21125–21147, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Implicit Cross-Lingual Rewarding for Efficient Multilingual Preference Alignment (Yang et al., Findings 2025)
PDF:
https://preview.aclanthology.org/transition-to-people-yaml/2025.findings-acl.1088.pdf