Distance between Relevant Information Pieces Causes Bias in Long-Context LLMs
Runchu Tian | Yanghao Li | Yuepeng Fu | Siyang Deng | Qinyu Luo | Cheng Qian | Shuo Wang | Xin Cong | Zhong Zhang | Yesai Wu | Yankai Lin | Huadong Wang | Xiaojiang Liu
Findings of the Association for Computational Linguistics: ACL 2025
Positional bias in large language models (LLMs) hinders their ability to process long inputs effectively. A prominent example is the “lost in the middle” phenomenon, where LLMs struggle to utilize relevant information situated in the middle of the input. While prior research has focused primarily on a single piece of relevant information, real-world applications often involve multiple relevant information pieces. To bridge this gap, we present LongPiBench, a benchmark designed to assess positional bias involving multiple pieces of relevant information, spanning a range of tasks and input lengths. Thorough experiments are conducted with three commercial and six open-source models. They reveal that while most current models are more robust against the “lost in the middle” issue, noticeable biases related to the spacing of relevant information pieces remain. These findings highlight the importance of evaluating and reducing positional biases in long-context LLMs.