Training of LLM-Based List-Wise Multilingual Reranker

Hao Yu, David Ifeoluwa Adelani


Abstract
Multilingual retrieval-augmented generation (MRAG) systems rely heavily on robust Information Retrieval (IR). Reranking, as a key component, optimizes the initially retrieved document set to present the most pertinent information to the generative model, addressing context limitations and minimizing hallucinations. We propose an approach that trains Large Language Models (LLMs) as multilingual listwise rerankers through supervised fine-tuning (SFT) on a diverse mixture of multilingual and extended English ranking examples, and enhances their reasoning capabilities through Direct Preference Optimization (DPO) on translated task-specific reasoning processes. Experiments demonstrate that the approach improves accuracy@5 by 20-30% over BM25 across all six high-, medium-, and low-resource languages. The post-trained 1B models achieve performance comparable to 7B baseline models while enabling faster inference. Finally, we investigate the effectiveness of different reasoning strategies in DPO with cross-lingual and monolingual thinking processes.
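To make the training setup described in the abstract concrete, below is a minimal sketch (not the authors' released code) of how listwise reranking SFT examples and DPO preference pairs might be constructed. The prompt format, the function names (build_prompt, build_sft_example, build_dpo_pair), and the use of a weaker reasoning trace as the rejected completion are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of listwise-reranking data construction for SFT and DPO.
# All identifiers and the prompt format are hypothetical illustrations.

from dataclasses import dataclass
from typing import List


@dataclass
class RankingExample:
    query: str
    passages: List[str]          # candidate passages, any language
    gold_order: List[int]        # passage indices, most relevant first


def build_prompt(ex: RankingExample) -> str:
    """Format a listwise prompt: the query followed by numbered passages."""
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(ex.passages))
    return (
        f"Query: {ex.query}\n"
        f"Passages:\n{numbered}\n"
        "Rank the passages from most to least relevant, "
        "answering only with their numbers."
    )


def build_sft_example(ex: RankingExample) -> dict:
    """SFT target: the gold permutation rendered as text."""
    target = " > ".join(f"[{i + 1}]" for i in ex.gold_order)
    return {"prompt": build_prompt(ex), "completion": target}


def build_dpo_pair(ex: RankingExample,
                   good_reasoning: str,
                   bad_reasoning: str) -> dict:
    """DPO pair: same prompt, a preferred completion (e.g. a translated
    task-specific reasoning trace plus the gold ranking) and a rejected
    completion (a weaker trace plus a wrong ranking)."""
    gold = " > ".join(f"[{i + 1}]" for i in ex.gold_order)
    wrong = " > ".join(f"[{i + 1}]" for i in reversed(ex.gold_order))
    return {
        "prompt": build_prompt(ex),
        "chosen": f"{good_reasoning}\nFinal ranking: {gold}",
        "rejected": f"{bad_reasoning}\nFinal ranking: {wrong}",
    }


if __name__ == "__main__":
    ex = RankingExample(
        query="What causes tides?",
        passages=["Tides are caused by the Moon's gravity.",
                  "The Moon orbits the Earth.",
                  "Gravity bends light."],
        gold_order=[0, 1, 2],
    )
    print(build_sft_example(ex)["completion"])   # [1] > [2] > [3]
```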
Anthology ID:
2025.mrl-main.42
Volume:
Proceedings of the 5th Workshop on Multilingual Representation Learning (MRL 2025)
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
David Ifeoluwa Adelani, Catherine Arnett, Duygu Ataman, Tyler A. Chang, Hila Gonen, Rahul Raja, Fabian Schmidt, David Stap, Jiayi Wang
Venues:
MRL | WS
Publisher:
Association for Computational Linguistics
Pages:
652–663
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.mrl-main.42/
Cite (ACL):
Hao Yu and David Ifeoluwa Adelani. 2025. Training of LLM-Based List-Wise Multilingual Reranker. In Proceedings of the 5th Workshop on Multilingual Representation Learning (MRL 2025), pages 652–663, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Training of LLM-Based List-Wise Multilingual Reranker (Yu & Adelani, MRL 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.mrl-main.42.pdf