Batched Self-Consistency Improves LLM Relevance Assessment and Ranking

Anton Korikov, Pan Du, Scott Sanner, Navid Rekabsaz


Abstract
LLM query-passage relevance assessment is typically studied using a one-by-one pointwise (PW) strategy where each LLM call judges one passage at a time. However, this strategy requires as many LLM calls as there are passages while also preventing information sharing between passages. We thus hypothesize that batched PW methods, which evaluate multiple passages per LLM call, can improve not only efficiency but also judgment quality — by enabling content from multiple passages to be seen jointly. Moreover, batched PW methods may be better suited to harness the test-time scaling benefits of self-consistency — the ensembling technique of repeating (potentially perturbed) LLM tasks in parallel and aggregating results — since batching can naturally enable prompt diversification through varied batch permutations and compositions to create more robust ensembles. We evaluate several batched PW methods against one-by-one PW and listwise ranking baselines on LLM relevance assessment and ranking tasks, using three passage retrieval datasets and GPT-4o, Claude Sonnet 3, and Amazon Nova Pro. We show that batching can greatly amplify self-consistency benefits, making batched PW methods achieve the best performance while often reducing latency by an order of magnitude or more compared to one-by-one PW methods. For instance, on legal search, batched PW ranking with GPT-4o improves from 43.8% to 51.3% NDCG@10 when using 1 vs. 15 self-consistency calls, compared to one-by-one PW ranking improving from 44.9% to 46.8% and being 15.3x slower.
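The core idea in the abstract — judging several passages per LLM call, repeating the call over shuffled batch permutations, and aggregating the per-passage judgments — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: `judge_batch` is a hypothetical stand-in for a real batched LLM relevance call, and all names and parameters are assumptions.

```python
import random
from collections import defaultdict
from statistics import mean

def judge_batch(query, batch):
    # Hypothetical stand-in for ONE batched LLM call that returns a
    # relevance score for every passage in the batch. A real version
    # would prompt the model with the query plus all batched passages
    # and parse one judgment per passage from the response.
    return {pid: float(query.lower() in text.lower()) for pid, text in batch}

def batched_self_consistency(query, passages, batch_size=4, n_calls=5, seed=0):
    """Aggregate relevance scores across several permuted-batch calls.

    Each self-consistency call reshuffles the passages, so batch
    composition and within-batch order vary between calls -- the
    prompt diversification the abstract describes.
    """
    rng = random.Random(seed)
    votes = defaultdict(list)
    for _ in range(n_calls):
        order = list(passages.items())
        rng.shuffle(order)  # new permutation -> new batch compositions
        for i in range(0, len(order), batch_size):
            for pid, score in judge_batch(query, order[i:i + batch_size]).items():
                votes[pid].append(score)
    # Mean vote per passage; sorting descending yields a ranking.
    return sorted(((pid, mean(v)) for pid, v in votes.items()),
                  key=lambda x: -x[1])
```

With `batch_size` passages per call, a corpus of `n` passages needs only `ceil(n / batch_size) * n_calls` LLM calls, versus `n * n_calls` for one-by-one pointwise judging with the same ensemble size — the source of the latency reduction reported in the abstract.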
Anthology ID:
2025.emnlp-main.1661
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
32675–32691
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1661/
Cite (ACL):
Anton Korikov, Pan Du, Scott Sanner, and Navid Rekabsaz. 2025. Batched Self-Consistency Improves LLM Relevance Assessment and Ranking. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 32675–32691, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Batched Self-Consistency Improves LLM Relevance Assessment and Ranking (Korikov et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1661.pdf
Checklist:
2025.emnlp-main.1661.checklist.pdf