Beyond Sampling: Self-Sorting for Long-Context Ranking

Juseon Do, Sungwoo Han, Jingun Kwon, Hidetaka Kamigaito, Katsuhiko Hayashi, Taro Watanabe


Abstract
Ranking is a fundamental component in a wide range of AI applications. However, large language models (LLMs) remain unstable on long-context ranking: sliding-window processing is costly, and listwise prompting over the full candidate set still yields inconsistent orders. We show that sampling alone, even with selection-based methods, cannot stabilize ranking, because LLM consistency decomposes into within-list order and cross-list preference, two signals that a single stochastic process cannot align. To address this, we introduce Self-Sorting (SS), which generates m candidate lists and performs n selection-time re-rankings over those lists. SS fuses explicit within-list positions with implicit cross-list preferences to score entities and return a top-k set. Experimental results on five widely used ranking benchmarks show significant improvements in nDCG@1, 5, and 10, highlighting the critical role of implicit consistency.
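The abstract does not specify the fusion rule, so the following is a minimal illustrative sketch rather than the authors' implementation: it assumes a reciprocal-rank score for the explicit within-list signal, a top-k survival score over the n re-rankings for the implicit cross-list signal, and an interpolation weight alpha. The function name self_sort and all parameters are hypothetical.

```python
# Hypothetical sketch of Self-Sorting (SS) score fusion. The paper's exact
# scoring rule is not given in the abstract; we assume reciprocal-rank
# scoring for both signals, combined by linear interpolation.

from collections import defaultdict

def self_sort(candidate_lists, reranked_lists, k, alpha=0.5):
    """Fuse explicit within-list positions with implicit cross-list
    preferences to score entities and return a top-k list.

    candidate_lists: m sampled rankings, each a list of entity ids.
    reranked_lists:  n selection-time re-rankings over those lists.
    alpha:           assumed interpolation weight (not from the paper).
    """
    scores = defaultdict(float)

    # Explicit signal: average reciprocal rank across the m candidate lists.
    for ranking in candidate_lists:
        for pos, entity in enumerate(ranking):
            scores[entity] += alpha * (1.0 / (pos + 1)) / len(candidate_lists)

    # Implicit signal: how often (and how highly) each entity survives in
    # the top k of the n selection-time re-rankings (cross-list preference).
    for ranking in reranked_lists:
        for pos, entity in enumerate(ranking[:k]):
            scores[entity] += (1 - alpha) * (1.0 / (pos + 1)) / len(reranked_lists)

    # Return the k highest-scoring entities.
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

For example, with m = 4 sampled candidate lists and n = 4 re-rankings, self_sort(lists, reranks, k=10) returns the ten highest-scoring entities under this assumed fusion.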
Anthology ID:
2026.findings-eacl.256
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Màrquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4901–4910
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.256/
Cite (ACL):
Juseon Do, Sungwoo Han, Jingun Kwon, Hidetaka Kamigaito, Katsuhiko Hayashi, and Taro Watanabe. 2026. Beyond Sampling: Self-Sorting for Long-Context Ranking. In Findings of the Association for Computational Linguistics: EACL 2026, pages 4901–4910, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Beyond Sampling: Self-Sorting for Long-Context Ranking (Do et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.256.pdf
Checklist:
 2026.findings-eacl.256.checklist.pdf