Lifelong Knowledge Editing requires Better Regularization
Akshat Gupta | Phudish Prateepamornkul | Maochuan Lu | Ahmed Alaa | Thomas Hartvigsen | Gopala Anumanchipalli
Findings of the Association for Computational Linguistics: EMNLP 2025
Knowledge editing is a promising way to improve factuality in large language models, but recent studies have shown significant model degradation during sequential editing. In this paper, we formalize the popular locate-then-edit methods as a two-step fine-tuning process, allowing us to precisely identify the root cause of this degradation. We show that model degradation occurs due to (1) over-optimization of internal activations and (2) continual norm growth of the edited matrices. To mitigate these issues, we introduce two regularization techniques: (1) Most-Probable Early Stopping (MPES) and (2) an explicit Frobenius norm constraint. We demonstrate that applying these simple yet effective regularization techniques at key points in the editing process can substantially mitigate model degradation. Combining these regularization methods enables scaling locate-then-edit methods to 10,000 edits while reducing editing time by 42-61%. These results show that targeted regularization is essential for lifelong knowledge editing.
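To make the two regularizers concrete, here is a minimal PyTorch sketch of one plausible reading of the abstract: MPES halts the activation optimization once every target token is already the most probable prediction, and the norm constraint rescales the weight update so it stays within a fixed Frobenius-norm budget. All function names, shapes, and the `max_ratio` parameter are hypothetical illustrations, not the authors' implementation.

```python
import torch


def mpes_should_stop(logits: torch.Tensor, target_ids: torch.Tensor) -> bool:
    # MPES criterion (as we read it): stop optimizing internal
    # activations once every target token is the argmax at its
    # position, i.e. already "most probable".
    # logits: (seq_len, vocab_size), target_ids: (seq_len,)
    return bool((logits.argmax(dim=-1) == target_ids).all())


def frobenius_project(
    w_edited: torch.Tensor, w_orig: torch.Tensor, max_ratio: float = 0.1
) -> torch.Tensor:
    # Explicit Frobenius norm constraint (hypothetical form): rescale
    # the cumulative weight update so its Frobenius norm never exceeds
    # a fixed fraction of the original matrix's norm, preventing the
    # norm growth that accumulates over sequential edits.
    delta = w_edited - w_orig
    budget = max_ratio * torch.linalg.matrix_norm(w_orig, ord="fro")
    norm = torch.linalg.matrix_norm(delta, ord="fro")
    if norm > budget:
        delta = delta * (budget / norm)
    return w_orig + delta
```

In a sequential-editing loop, `frobenius_project` would be applied after each edit to the updated MLP matrix, while `mpes_should_stop` would be checked inside the per-edit gradient loop; stopping early both limits over-optimization and shortens editing time.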