LyapLock: Bounded Knowledge Preservation in Sequential Large Language Model Editing

Peng Wang, Biyu Zhou, Xuehai Tang, Jizhong Han, Songlin Hu


Abstract
Large Language Models often contain factually incorrect or outdated knowledge, which has given rise to model editing methods for precise knowledge updates. However, current mainstream locate-then-edit approaches exhibit progressive performance decline during sequential editing, due to inadequate mechanisms for long-term knowledge preservation. To tackle this, we model sequential editing as a constrained stochastic programming problem. Given the challenges posed by the cumulative preservation-error constraint and the gradually revealed editing tasks, we propose LyapLock. It integrates queuing theory and Lyapunov optimization to decompose the long-term constrained program into tractable stepwise subproblems for efficient solving. This is the first model editing framework with rigorous theoretical guarantees, achieving asymptotically optimal editing performance while meeting the constraints of long-term knowledge preservation. Experimental results show that our framework scales sequential editing capacity to over 10,000 edits while stabilizing general capabilities and boosting average editing efficacy by 11.89% over SOTA baselines. Furthermore, it can be leveraged to enhance the performance of baseline methods. Our code is released at https://github.com/caskcsg/LyapLock.
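The core mechanism the abstract describes, turning a long-term constrained program into per-step subproblems via a virtual queue and Lyapunov drift-plus-penalty, can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not LyapLock's actual algorithm: the "edits" are scalars, the editing and preservation losses are quadratic (so each stepwise subproblem has a closed-form minimizer), and the budget and trade-off weight V are arbitrary toy values.

```python
"""Toy Lyapunov drift-plus-penalty loop for sequential editing (hypothetical sketch)."""

def run_sequential_edits(targets, budget=0.05, V=10.0):
    """Process editing tasks one at a time under a long-term preservation budget.

    targets: editing tasks, revealed sequentially (here, desired scalar updates).
    budget:  per-step allowance on the preservation error, enforced on average.
    V:       drift-plus-penalty weight trading edit quality against preservation.
    """
    Q = 0.0  # virtual queue: accumulated violation of the preservation budget
    deltas = []
    for target in targets:
        # Stepwise subproblem: minimize  V * (d - target)^2  +  Q * d^2,
        # i.e. V times the editing loss plus the queue-weighted preservation
        # loss. Setting the derivative to zero gives the closed form below.
        d = V * target / (V + Q)
        preservation_err = d ** 2  # toy stand-in for error on preserved knowledge
        # Queue update: grows when a step overspends the budget, drains otherwise.
        # Keeping Q stable is what enforces the long-term constraint.
        Q = max(Q + preservation_err - budget, 0.0)
        deltas.append(d)
    return deltas

if __name__ == "__main__":
    # Early edits apply almost fully; later ones shrink as the queue builds up.
    print(run_sequential_edits([0.3, -0.2, 0.5, 0.1]))
```

Larger V favors editing efficacy at the cost of a longer-lived queue backlog; this is the standard drift-plus-penalty trade-off from Lyapunov optimization that the paper's theoretical guarantees are built on.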
Anthology ID:
2025.emnlp-main.327
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6445–6470
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.327/
Cite (ACL):
Peng Wang, Biyu Zhou, Xuehai Tang, Jizhong Han, and Songlin Hu. 2025. LyapLock: Bounded Knowledge Preservation in Sequential Large Language Model Editing. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 6445–6470, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
LyapLock: Bounded Knowledge Preservation in Sequential Large Language Model Editing (Wang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.327.pdf
Checklist:
2025.emnlp-main.327.checklist.pdf