Robust Knowledge Editing via Explicit Reasoning Chains for Distractor-Resilient Multi-Hop QA

Yuchen Wu, Liang Ding, Li Shen, Dacheng Tao


Abstract
Large language models (LLMs) encode vast amounts of world knowledge but remain static once trained, making timely integration of emerging facts prohibitively expensive via full retraining. Knowledge-editing techniques have thus emerged to inject or overwrite specific facts into LLMs, yet they either over-rely on superficial cues or incur complex, iterative pipelines that collapse under noisy, multi-hop conditions. We introduce Reason-KE, an end-to-end reasoning-chain-based editing framework that steers a pretrained LLM through four structured stages (fact acknowledgment, relevance determination, selective application, and final reasoning) to filter distractors in a single pass. Trained on MQuAKE-CF with up to four irrelevant facts, Reason-KE raises Qwen2.5-7B's multi-hop QA accuracy to 90.2% (a 17.6 pp gain) while suffering only a 6.3% drop under heavy distraction and less than 1% when answers are leaked. Our quantitative analysis confirms Reason-KE's resilience and efficiency, establishing a new state of the art for reliable LLM knowledge updates. The code will be released.
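The released code is not yet available, so the following is only an illustrative sketch of how the four-stage reasoning chain named in the abstract could be driven by a prompt template; the stage wording, function names, and model call are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a four-stage reasoning-chain prompt for distractor-resilient
# knowledge editing. Stage names follow the abstract; everything else is assumed.
from transformers import pipeline

STAGE_TEMPLATE = """You are given newly edited facts, some of which may be irrelevant distractors.

Edited facts:
{facts}

Question: {question}

Answer by reasoning through four stages:
1. Fact acknowledgment: restate each edited fact.
2. Relevance determination: mark each fact as relevant or irrelevant to the question.
3. Selective application: apply only the relevant facts, hop by hop.
4. Final reasoning: give the final answer on a line starting with "Answer:".
"""


def build_prompt(edited_facts, question):
    # Format the edited facts (including distractors) into the staged prompt.
    facts = "\n".join(f"- {f}" for f in edited_facts)
    return STAGE_TEMPLATE.format(facts=facts, question=question)


def answer(edited_facts, question, model_name="Qwen/Qwen2.5-7B-Instruct"):
    # Single forward pass: the model filters distractors inside its reasoning chain.
    generator = pipeline("text-generation", model=model_name)
    output = generator(build_prompt(edited_facts, question), max_new_tokens=512)
    text = output[0]["generated_text"]
    # Keep only what follows the last "Answer:" marker produced in stage 4.
    return text.split("Answer:")[-1].strip()


if __name__ == "__main__":
    # Toy example: one relevant edit plus one distractor.
    facts = [
        "The president of the United States is John Doe.",
        "The capital of Australia is Melbourne.",
    ]
    print(answer(facts, "Who is the spouse of the president of the United States?"))
```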
Anthology ID:
2025.findings-emnlp.786
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
14578–14586
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.786/
DOI:
10.18653/v1/2025.findings-emnlp.786
Cite (ACL):
Yuchen Wu, Liang Ding, Li Shen, and Dacheng Tao. 2025. Robust Knowledge Editing via Explicit Reasoning Chains for Distractor-Resilient Multi-Hop QA. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 14578–14586, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Robust Knowledge Editing via Explicit Reasoning Chains for Distractor-Resilient Multi-Hop QA (Wu et al., Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.786.pdf
Checklist:
 2025.findings-emnlp.786.checklist.pdf