Explain then Rank: Scale Calibration of Neural Rankers Using Natural Language Explanations from LLMs

Puxuan Yu, Daniel Cohen, Hemank Lamba, Joel R. Tetreault, Alejandro Jaimes


Abstract
In search settings, calibrating the scores produced during ranking to quantities such as click-through rates or relevance levels enhances a system’s usefulness and trustworthiness for downstream users. While previous research has improved this notion of calibration for low-complexity learning-to-rank models, the larger data demands and parameter counts of modern neural text rankers create unique obstacles that hamper the efficacy of methods intended for the learning-to-rank setting. This paper proposes exploiting large language models (LLMs) to provide relevance and uncertainty signals for these neural text rankers, producing scale-calibrated scores through Monte Carlo sampling of natural language explanations (NLEs). Our approach transforms the neural ranking task from ranking textual query-document pairs to ranking corresponding synthesized NLEs. Comprehensive experiments on two popular document ranking datasets show that the NLE-based calibration approach consistently outperforms past calibration methods and LLM-based methods on ranking, calibration, and query performance prediction tasks.
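The abstract describes estimating calibrated scores and uncertainty via Monte Carlo sampling of NLEs. A minimal sketch of that idea is below; `generate_nle` and `score_nle` are hypothetical stand-ins (the paper's actual LLM prompting and neural ranker are not specified here), and the mean/standard-deviation aggregation is one plausible reading of the Monte Carlo step, not the authors' exact method.

```python
import statistics
from typing import Callable, List, Tuple


def mc_nle_score(
    query: str,
    document: str,
    generate_nle: Callable[[str, str], str],  # hypothetical: LLM samples one explanation
    score_nle: Callable[[str], float],        # hypothetical: ranker scores an NLE
    num_samples: int = 8,
) -> Tuple[float, float]:
    """Monte Carlo estimate of a calibrated relevance score.

    Samples several natural language explanations (NLEs) for the
    query-document pair and ranks the NLEs instead of the raw text:
    the mean NLE score serves as the calibrated relevance estimate
    and the standard deviation as an uncertainty signal.
    """
    scores: List[float] = [
        score_nle(generate_nle(query, document)) for _ in range(num_samples)
    ]
    mean = statistics.fmean(scores)
    std = statistics.stdev(scores) if num_samples > 1 else 0.0
    return mean, std


# Toy usage with stubbed components (no LLM calls):
if __name__ == "__main__":
    fake_nles = iter(["relevant because ...", "partially relevant ...", "relevant ..."])
    mean, std = mc_nle_score(
        "what is score calibration",
        "Calibration aligns ranker scores with relevance levels.",
        generate_nle=lambda q, d: next(fake_nles),
        score_nle=lambda nle: float(len(nle)),  # stand-in scorer for illustration
        num_samples=3,
    )
    print(round(mean, 2), round(std, 2))
```

Under this reading, a low standard deviation across sampled explanations would indicate the model is consistent about why a document is relevant, which is what makes the aggregated score useful as an uncertainty-aware, calibrated signal.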
Anthology ID: 2025.findings-acl.1167
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 22716–22730
URL: https://preview.aclanthology.org/display_plenaries/2025.findings-acl.1167/
Cite (ACL): Puxuan Yu, Daniel Cohen, Hemank Lamba, Joel R. Tetreault, and Alejandro Jaimes. 2025. Explain then Rank: Scale Calibration of Neural Rankers Using Natural Language Explanations from LLMs. In Findings of the Association for Computational Linguistics: ACL 2025, pages 22716–22730, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Explain then Rank: Scale Calibration of Neural Rankers Using Natural Language Explanations from LLMs (Yu et al., Findings 2025)
PDF: https://preview.aclanthology.org/display_plenaries/2025.findings-acl.1167.pdf