Can Calibration of Positional Encodings Enhance Long Context Utilization?

Tom Zehle, Matthias Aßenmacher


Abstract
Large language models suffer from positional biases like the "Lost in the Middle" (LiM) phenomenon and recency bias, which reduce the effective utilization of long contexts. In this work, we investigate the role of positional encodings in these biases. Our empirical study confirms their persistence in modern large language models. Drawing on these findings, we introduce Caliope, a training-free framework for calibrating positional encodings at inference time. Our calibrators yield substantial improvements on needle-in-a-haystack and cross-chunk reasoning benchmarks, and offer a practical, lightweight method for improving long-context utilization.
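The abstract does not specify how Caliope's calibrators work. As a generic, hedged illustration of what "calibrating positional encodings at inference time" can mean, the sketch below implements position interpolation for rotary positional encodings (RoPE): rescaling token positions by a factor before computing rotation angles, so that positions beyond the trained range map back inside it. This is a well-known training-free technique, not the paper's method; all function names here are illustrative.

```python
import math

def rope_angles(pos, dim, base=10000.0, scale=1.0):
    """Rotation angles for a token at position `pos` under RoPE.
    `scale` < 1 compresses positions (position interpolation), one
    generic way to recalibrate positional encodings at inference time."""
    return [(pos * scale) / (base ** (2 * i / dim)) for i in range(dim // 2)]

def rotate(vec, pos, base=10000.0, scale=1.0):
    """Apply the RoPE rotation to an even-length vector."""
    out = []
    for i, theta in enumerate(rope_angles(pos, len(vec), base, scale)):
        x, y = vec[2 * i], vec[2 * i + 1]
        c, s = math.cos(theta), math.sin(theta)
        out += [x * c - y * s, x * s + y * c]
    return out

# With scale=0.5, a token at position 8192 receives the same encoding
# as a token at position 4096 under the unscaled model.
v = [1.0, 0.0, 1.0, 0.0]
assert rotate(v, 8192, scale=0.5) == rotate(v, 4096, scale=1.0)
```

The key design point is that only the angle computation changes; the attention mechanism itself is untouched, which is what makes such calibration cheap to apply at inference time.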
Anthology ID:
2026.findings-eacl.120
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2268–2280
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.120/
Cite (ACL):
Tom Zehle and Matthias Aßenmacher. 2026. Can Calibration of Positional Encodings Enhance Long Context Utilization?. In Findings of the Association for Computational Linguistics: EACL 2026, pages 2268–2280, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Can Calibration of Positional Encodings Enhance Long Context Utilization? (Zehle & Aßenmacher, Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.120.pdf
Checklist:
2026.findings-eacl.120.checklist.pdf