Temporal Alignment of Time Sensitive Facts with Activation Engineering

Sanjay Govindan, Maurice Pagnucco, Yang Song


Abstract
Large Language Models (LLMs) are trained on diverse and often conflicting knowledge spanning multiple domains and time periods. Some of this knowledge is valid only within specific temporal contexts, such as when answering the question, “Who is the President of the United States in 2022?” Ensuring that LLMs generate time-appropriate responses is crucial for maintaining relevance and accuracy. In this work, we explore activation engineering as a method for temporally aligning LLMs to improve factual recall without any training. Activation engineering has predominantly been used to steer subjective and qualitative outcomes such as toxicity or behavior; our research is one of the few that probes its limits on objective outcomes. We apply an activation engineering technique to anchor LLaMA 2, LLaMA 3.1, Qwen 2, and Gemma 2 to specific points in time, and we examine the effects of varying injection layers and prompting strategies. Our experiments demonstrate improvements of up to 44% and 16% under relative and explicit prompting, respectively, achieving performance comparable to the fine-tuning method proposed by Zhao et al. (2024). Notably, for LLaMA 2 and LLaMA 3.1, our approach matches the fine-tuning baseline while being significantly more computationally efficient and requiring no pre-aligned datasets.
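To make the mechanism concrete, here is a minimal sketch of contrastive activation steering for temporal anchoring, assuming a Hugging Face causal LM and a hook-based injection at a single decoder layer. The model name, injection layer, steering strength, and year-contrast prompts are all illustrative assumptions, not the authors' exact recipe.

```python
# Minimal sketch: contrastive activation steering for temporal anchoring.
# Assumptions (illustrative, not the paper's exact method): model choice,
# injection layer, steering strength, and the year-contrast prompts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-hf"  # hypothetical model choice
LAYER = 14                          # injection layer (a hyperparameter the paper varies)
ALPHA = 4.0                         # steering strength (illustrative)

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def mean_activation(prompt: str) -> torch.Tensor:
    """Mean residual-stream activation at the output of decoder layer LAYER."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so decoder layer LAYER's
    # output is hidden_states[LAYER + 1]; shape (1, seq_len, hidden_dim).
    return out.hidden_states[LAYER + 1][0].mean(dim=0)

# Steering vector: activations for the target year minus a reference year.
steer = mean_activation("The year is 2022.") - mean_activation("The year is 2019.")

def inject(module, inputs, output):
    # LLaMA decoder layers return a tuple; hidden states are element 0.
    hidden = output[0] + ALPHA * steer.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.model.layers[LAYER].register_forward_hook(inject)
try:
    ids = tok("Who is the President of the United States?", return_tensors="pt")
    gen = model.generate(**ids, max_new_tokens=16)
    print(tok.decode(gen[0], skip_special_tokens=True))
finally:
    handle.remove()  # restore the unsteered model
```

In practice, the injection layer and scaling factor would be swept per model, mirroring the paper's study of injection layers and prompting strategies.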
Anthology ID:
2025.findings-emnlp.404
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7640–7657
URL:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.findings-emnlp.404/
DOI:
10.18653/v1/2025.findings-emnlp.404
Cite (ACL):
Sanjay Govindan, Maurice Pagnucco, and Yang Song. 2025. Temporal Alignment of Time Sensitive Facts with Activation Engineering. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 7640–7657, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Temporal Alignment of Time Sensitive Facts with Activation Engineering (Govindan et al., Findings 2025)
PDF:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.findings-emnlp.404.pdf
Checklist:
 2025.findings-emnlp.404.checklist.pdf