Wenhan Liu
2025
Sliding Windows Are Not the End: Exploring Full Ranking with Long-Context Large Language Models
Wenhan Liu | Xinyu Ma | Yutao Zhu | Ziliang Zhao | Shuaiqiang Wang | Dawei Yin | Zhicheng Dou
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large Language Models (LLMs) have shown impressive performance in listwise passage ranking. Due to their limited input length, existing methods often adopt a sliding window strategy. Such a strategy, though effective, is inefficient: it involves repetitive, serialized processing that typically re-evaluates relevant passages multiple times, incurring redundant API costs proportional to the number of inference tokens. The development of long-context LLMs enables the full ranking of all passages within a single inference, avoiding these redundant costs. In this paper, we conduct a comprehensive study of long-context LLMs for ranking tasks in terms of efficiency and effectiveness. Surprisingly, our experiments reveal that full ranking with long-context LLMs delivers superior performance in the supervised fine-tuning setting, with a substantial efficiency improvement. Furthermore, we identify two limitations of fine-tuning a full ranking model with existing methods: (1) the sliding window strategy cannot produce a full ranking list to serve as a training label, and (2) the language modeling loss does not emphasize the top-ranked passage IDs in the label. To alleviate these issues, we propose a complete listwise label construction approach and a novel importance-aware learning objective for full ranking. Experiments show the superior performance of our method over baselines.
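The efficiency contrast between the two inference patterns can be sketched in a few lines. The snippet below is an illustrative sketch only, not the paper's code: it assumes a hypothetical `llm_rank(query, passages)` call that returns passage indices in relevance order, and shows why the back-to-front sliding window issues many overlapping LLM calls while full ranking issues exactly one.

```python
def sliding_window_rank(query, passages, llm_rank, window=20, stride=10):
    """Rerank by sorting overlapping windows from the back of the list to the
    front; passages near the top are re-evaluated in several windows."""
    ranked = list(passages)
    start = max(len(ranked) - window, 0)
    while True:
        chunk = ranked[start:start + window]
        order = llm_rank(query, chunk)               # one LLM inference per window
        ranked[start:start + window] = [chunk[i] for i in order]
        if start == 0:
            break
        start = max(start - stride, 0)
    return ranked

def full_rank(query, passages, llm_rank):
    """Rank all passages with a long-context LLM in a single inference."""
    order = llm_rank(query, passages)                # one LLM inference total
    return [passages[i] for i in order]
```

With 100 candidates, a window of 20, and a stride of 10, the sliding window makes nine LLM calls and feeds most passages to the model twice; full ranking makes one call over the whole list.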
CoRanking: Collaborative Ranking with Small and Large Ranking Agents
Wenhan Liu | Xinyu Ma | Yutao Zhu | Lixin Su | Shuaiqiang Wang | Dawei Yin | Zhicheng Dou
Findings of the Association for Computational Linguistics: EMNLP 2025
Listwise ranking based on Large Language Models (LLMs) has achieved state-of-the-art performance in Information Retrieval (IR). However, its effectiveness often depends on LLMs with massive parameter scales and computationally expensive sliding window processing, leading to substantial efficiency bottlenecks. In this paper, we propose a Collaborative Ranking framework (CoRanking) for LLM-based listwise ranking. Specifically, we strategically combine an efficient small reranker and an effective large reranker. The small reranker performs the initial passage ranking, filtering the candidate set down to a condensed top-k list (e.g., the top 20 passages), and the large reranker (with stronger ranking capability) then reranks only this condensed subset rather than the full list, significantly improving efficiency. We further observe that directly passing the small reranker's top-ranked passages to the large reranker is suboptimal because of the LLM's strong positional bias when processing input sequences. To resolve this issue, we propose a passage order adjuster, trained with reinforcement learning (RL), that dynamically reorders the top passages returned by the small reranker to better align with the large LLM's input preferences. Extensive experiments across three IR benchmarks demonstrate that, compared with the standalone large reranker, CoRanking reduces ranking latency by approximately 70% while simultaneously improving effectiveness.
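A rough sketch of the three-stage pipeline the abstract describes, with hypothetical `small_rerank`, `adjust_order`, and `large_rerank` callables standing in for the actual models (all names here are assumptions for illustration, not the released implementation):

```python
def co_rank(query, passages, small_rerank, adjust_order, large_rerank, k=20):
    # Stage 1: the efficient small reranker orders the full candidate list.
    initial = small_rerank(query, passages)
    top_k, rest = initial[:k], initial[k:]
    # Stage 2: the RL-trained order adjuster reorders the top-k passages to
    # counter the large LLM's positional bias over its input sequence.
    adjusted = adjust_order(query, top_k)
    # Stage 3: the expensive large reranker sees only the k-passage subset.
    reranked = large_rerank(query, adjusted)
    # Passages outside the top-k keep the small reranker's order.
    return reranked + rest
```

The design choice is that the large model's cost scales with k rather than with the full candidate count, which is where the reported latency reduction comes from.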
Co-authors
- Zhicheng Dou (窦志成) 2
- Xinyu Ma 2
- Shuaiqiang Wang 2
- Dawei Yin 2
- Yutao Zhu (朱余韬) 2