Temporal Referential Consistency: Do LLMs Favor Sequences Over Absolute Time References?

Ashutosh Bajpai, Tanmoy Chakraborty


Abstract
The increasing acceptance of large language models (LLMs) as an alternative to traditional knowledge sources marks a significant paradigm shift across various domains, including time-sensitive fields such as law, healthcare, and finance. To fulfill this expanded role, LLMs must not only be factually accurate but also remain consistent across temporal dimensions, which requires robust temporal reasoning capabilities. Despite this critical requirement, efforts to ensure the temporal consistency of LLMs remain scarce, including a noticeable absence of work evaluating or augmenting LLMs across temporal references in time-sensitive inquiries. In this paper, we seek to address this gap by introducing a novel benchmark, temporal referential consistency, accompanied by a resource, TEMP-ReCon, designed to benchmark a wide range of both open-source and closed-source LLMs across linguistic contexts of differing resource richness (English, French, and Romanian). The findings emphasize that LLMs do exhibit insufficient temporal referential consistency. To address this, we propose UnTRaP, a reasoning path alignment-based model that aims to enhance the temporal referential consistency of LLMs. Our empirical experiments substantiate the efficacy of UnTRaP compared to several baseline models.
Anthology ID:
2025.emnlp-main.889
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
17629–17647
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.889/
Cite (ACL):
Ashutosh Bajpai and Tanmoy Chakraborty. 2025. Temporal Referential Consistency: Do LLMs Favor Sequences Over Absolute Time References?. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 17629–17647, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Temporal Referential Consistency: Do LLMs Favor Sequences Over Absolute Time References? (Bajpai & Chakraborty, EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.889.pdf
Checklist:
2025.emnlp-main.889.checklist.pdf