RuleEdit: Towards Rule-Level Knowledge Generalization to Mitigate Over-Editing in Large Language Models
Bihan Zhou, HaoPeng Ren, Li Yuan, Yi Cai, Liuwen Cao, Zikun Deng
Abstract
Knowledge editing has emerged as a promising approach for updating target knowledge in Large Language Models (LLMs) in a timely manner, thereby preventing undesirable behaviors that stem from outdated, inaccurate, or incomplete knowledge. However, existing methods mainly focus on instance-level editing, whose redundant instance-specific modifications expose models to the risk of over-editing: degradation of the edited knowledge and deterioration of general abilities. To mitigate this risk, we explore the rule-level editing problem, which avoids case-by-case modification by generalizing rule-level knowledge to update rule-derived instances. We further construct a benchmark, RuleEdit, for the systematic evaluation of rule-level editing, and propose a Rule-Transfer Editing (RTE) method to facilitate effective updates and generalization of rule-level knowledge in LLMs. Experimental results highlight significant improvements: gains of 28.1% in portability and 8.1% in average performance over the best-performing baselines for LLaMA-2-7B on RULEmix.
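To make the instance-level vs. rule-level distinction concrete, here is a minimal, hypothetical Python sketch. The names below (`Rule`, `derive_instances`) are illustrative assumptions, not the RuleEdit benchmark's or the RTE method's actual API.

```python
# Minimal, hypothetical sketch of instance-level vs. rule-level editing.
# `Rule` and `derive_instances` are illustrative names only; they are not
# the paper's actual API.
from dataclasses import dataclass


@dataclass
class Rule:
    """One rule-level edit that entails many instance-level facts."""
    premise: str      # template with placeholder X, e.g. "X is a regional capital"
    conclusion: str   # entailed fact template, e.g. "X hosts the regional government"


def derive_instances(rule: Rule, entities: list[str]) -> list[tuple[str, str]]:
    """Expand one rule into the (prompt, target) pairs it entails."""
    return [(rule.premise.replace("X", e), rule.conclusion.replace("X", e))
            for e in entities]


rule = Rule("X is a regional capital", "X hosts the regional government")
instances = derive_instances(rule, ["Lyon", "Porto", "Graz"])

# Instance-level editing would write each derived pair into the model one
# by one (many redundant weight updates, hence the over-editing risk).
# Rule-level editing injects the rule once and tests whether the model
# generalizes it to held-out, rule-derived instances, which is what
# portability-style metrics evaluate.
for prompt, target in instances:
    print(f"{prompt} -> {target}")
```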
- Anthology ID: 2025.findings-acl.164
- Volume: Findings of the Association for Computational Linguistics: ACL 2025
- Month: July
- Year: 2025
- Address: Vienna, Austria
- Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 3159–3175
- URL: https://preview.aclanthology.org/corrections-2025-08/2025.findings-acl.164/
- DOI: 10.18653/v1/2025.findings-acl.164
- Cite (ACL): Bihan Zhou, HaoPeng Ren, Li Yuan, Yi Cai, Liuwen Cao, and Zikun Deng. 2025. RuleEdit: Towards Rule-Level Knowledge Generalization to Mitigate Over-Editing in Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2025, pages 3159–3175, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal): RuleEdit: Towards Rule-Level Knowledge Generalization to Mitigate Over-Editing in Large Language Models (Zhou et al., Findings 2025)
- PDF: https://preview.aclanthology.org/corrections-2025-08/2025.findings-acl.164.pdf