@inproceedings{yu-adelani-2025-training,
    title = "Training of {LLM}-Based List-Wise Multilingual Reranker",
    author = "Yu, Hao  and
      Adelani, David Ifeoluwa",
    editor = "Adelani, David Ifeoluwa  and
      Arnett, Catherine  and
      Ataman, Duygu  and
      Chang, Tyler A.  and
      Gonen, Hila  and
      Raja, Rahul  and
      Schmidt, Fabian  and
      Stap, David  and
      Wang, Jiayi",
    booktitle = "Proceedings of the 5th Workshop on Multilingual Representation Learning (MRL 2025)",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.mrl-main.42/",
    pages = "652--663",
    ISBN = "979-8-89176-345-6",
    abstract = "Multilingual retrieval-augmented generation (MRAG) systems heavily rely on robust Information Retrieval (IR). Reranking, as a key component, optimizes the initially retrieved document set to present the most pertinent information to the generative model, addressing context limitations and minimizing hallucinations. We propose an approach that trains Large Language Models (LLMs) as multilingual listwise rerankers through supervised fine-tuning (SFT) on a diverse mixture of multilingual and extended English ranking examples, and enhances reasoning capabilities through Direct Preference Optimization (DPO) on translated task-specific reasoning processes. Experiments demonstrate that the approach improves accuracy@5 by 20-30{\%} across all six high-, medium-, and low-resource languages compared to the BM25 baseline. The post-trained 1B models achieve comparable performance to 7B baseline models while enabling faster inference. Finally, we investigate the effectiveness of different reasoning strategies in DPO with cross-lingual and monolingual thinking processes."
}