Mitigating Posterior Salience Attenuation in Long-Context LLMs with Positional Contrastive Decoding
Zikai Xiao | Ziyang Wang | Wen Ma | Yan Zhang | Wei Shen | WangYan WangYan | Luqi Gong | Zuozhu Liu
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2025
While Large Language Models (LLMs) support long contexts, they suffer from performance degradation within the context window. Current solutions incur prohibitive training costs, leaving statistical behaviors and cost-effective approaches underexplored. From the decoding perspective, we identify the Posterior Salience Attenuation (PSA) phenomenon, in which the salience ratio correlates with long-text performance degradation. Notably, despite the attenuation, gold tokens still occupy high-ranking positions in the decoding space. Motivated by this observation, we propose training-free Positional Contrastive Decoding (PCD), which contrasts the logits derived from long-aware attention with those from a designed local-aware attention, enabling the model to focus on the gains introduced by large-scale short-to-long training. Through an analysis of long-term decay simulation, we demonstrate that PCD effectively alleviates attention score degradation. Experimental results show that PCD achieves state-of-the-art performance on long-context benchmarks.
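A minimal sketch of the contrastive-decoding idea described in the abstract, assuming `logits_long` come from a forward pass with the model's standard long-aware attention and `logits_local` from a pass restricted to a local attention window; the combination rule, the `alpha` weight, and the plausibility cutoff are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def positional_contrastive_decode(logits_long: torch.Tensor,
                                   logits_local: torch.Tensor,
                                   alpha: float = 1.0,
                                   plausibility: float = 0.1) -> int:
    """Pick the next token by contrasting long-aware and local-aware logits."""
    log_p_long = torch.log_softmax(logits_long, dim=-1)
    log_p_local = torch.log_softmax(logits_local, dim=-1)

    # Adaptive plausibility mask: keep only tokens reasonably likely under the
    # long-aware distribution (a common guard in contrastive decoding).
    cutoff = torch.log(plausibility * torch.exp(log_p_long).max())
    mask = log_p_long >= cutoff

    # Amplify what the long-aware pass predicts beyond the local-aware pass,
    # i.e., the gains attributable to long-context training.
    scores = log_p_long + alpha * (log_p_long - log_p_local)
    scores = scores.masked_fill(~mask, float("-inf"))
    return int(scores.argmax().item())

# Toy usage with random logits over a 32k-entry vocabulary.
vocab = 32000
next_token = positional_contrastive_decode(torch.randn(vocab), torch.randn(vocab))
```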