AdaEdit: Advancing Continuous Knowledge Editing For Large Language Models

Qi Li, Xiaowen Chu


Abstract
Knowledge editing (KE) has emerged as a prominent alternative that enables efficient and precise modification of information inside language models. However, a critical challenge arises in continuous editing of language models: a significant decline in both knowledge update and knowledge retention performance as the number of edits increases. By dissecting the weight perturbations of language models under continuous KE, we find that disentangled and sparsified knowledge representations can significantly alleviate this performance decline. Building on these insights, we introduce AdaEdit, a novel knowledge editing method. Extensive empirical evaluations on multiple LLMs demonstrate that our proposed method enhances the performance of edited LLMs in large-scale continuous editing regimes, outperforming existing methods without substantially compromising the general abilities of these models.
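To make the continuous-editing setting concrete, below is a minimal toy sketch (not the paper's AdaEdit algorithm) of applying a sequence of rank-one weight edits and optionally sparsifying each update, as one simple way to limit interference between edits. All function names, the sparsification scheme, and the thresholds are illustrative assumptions.

```python
# Toy sketch of continuous knowledge editing on a single weight matrix.
# NOT the AdaEdit method: a generic rank-one edit with an assumed
# top-k sparsification of the perturbation, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def rank_one_edit(W, key, value, sparsify_frac=None):
    """Return W plus a rank-one perturbation steering `key` toward `value`.

    If `sparsify_frac` is given, keep only the largest-magnitude entries
    of the perturbation (a crude stand-in for a sparsified knowledge
    representation) to reduce interference across sequential edits.
    """
    residual = value - W @ key                      # what the edit must add
    delta = np.outer(residual, key) / (key @ key)   # least-norm rank-one update
    if sparsify_frac is not None:
        k = max(1, int(sparsify_frac * delta.size))
        thresh = np.sort(np.abs(delta), axis=None)[-k]
        delta = np.where(np.abs(delta) >= thresh, delta, 0.0)
    return W + delta

# Continuous-editing loop: apply many edits, then measure drift on an
# unedited probe direction as a rough proxy for knowledge retention.
d = 64
W = rng.standard_normal((d, d)) / np.sqrt(d)
W0 = W.copy()
probe = rng.standard_normal(d)
for _ in range(200):
    key = rng.standard_normal(d)
    value = rng.standard_normal(d)
    W = rank_one_edit(W, key, value, sparsify_frac=0.05)

drift = np.linalg.norm(W @ probe - W0 @ probe) / np.linalg.norm(W0 @ probe)
print(f"relative drift on an unedited probe after 200 edits: {drift:.3f}")
```

In this toy setup, dense updates accumulate interference across edits, while sparsifying each perturbation keeps more of the original mapping intact; the paper's actual analysis and method operate on real LLM weights and are detailed in the full text.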
Anthology ID:
2025.acl-long.208
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
4127–4149
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.208/
Cite (ACL):
Qi Li and Xiaowen Chu. 2025. AdaEdit: Advancing Continuous Knowledge Editing For Large Language Models. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4127–4149, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
AdaEdit: Advancing Continuous Knowledge Editing For Large Language Models (Li & Chu, ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.208.pdf