Path Drift in Large Reasoning Models: How First-Person Commitments Override Safety

Yuyi Huang, Runzhe Zhan, Lidia S. Chao, Ailin Tao, Derek F. Wong


Abstract
As large language models (LLMs) are increasingly deployed for complex reasoning tasks, Long Chain-of-Thought (Long-CoT) prompting has emerged as a key paradigm for structured inference. Despite early-stage safeguards enabled by alignment techniques such as RLHF, we identify a previously underexplored vulnerability: reasoning trajectories in Long-CoT models can drift from aligned paths, resulting in content that violates safety constraints. We term this phenomenon Path Drift. Through empirical analysis, we uncover three behavioral triggers of Path Drift: (1) first-person commitments that induce goal-driven reasoning and delay refusal signals; (2) ethical evaporation, where surface-level disclaimers bypass alignment checkpoints; and (3) condition chain escalation, where layered cues progressively steer models toward unsafe completions. Building on these insights, we introduce a three-stage Path Drift Induction Framework comprising cognitive load amplification, self-role priming, and condition chain hijacking. Each stage independently reduces refusal rates, and their combination compounds the effect further. To mitigate these risks, we propose a path-level defense strategy incorporating role attribution correction and metacognitive reflection (reflective safety cues). Our findings highlight the need for trajectory-level alignment oversight in long-form reasoning, beyond token-level alignment.
Anthology ID:
2025.emnlp-main.990
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
19613–19627
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.990/
Cite (ACL):
Yuyi Huang, Runzhe Zhan, Lidia S. Chao, Ailin Tao, and Derek F. Wong. 2025. Path Drift in Large Reasoning Models: How First-Person Commitments Override Safety. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 19613–19627, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Path Drift in Large Reasoning Models: How First-Person Commitments Override Safety (Huang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.990.pdf
Checklist:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.990.checklist.pdf