Xiaoliang Yang
2025
One for All: Update Parameterized Knowledge Across Multiple Models with Once Edit
Weitao Ma | Xiyuan Du | Xiaocheng Feng | Lei Huang | Yichong Huang | Huiyi Zhang | Xiaoliang Yang | Baohang Li | Xiachong Feng | Ting Liu | Bing Qin
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) encode vast world knowledge but struggle to stay up-to-date, often leading to errors and hallucinations. Knowledge editing offers an efficient alternative to retraining, enabling targeted modifications by updating specific model parameters. However, existing methods primarily focus on individual models, posing challenges in efficiently updating multiple models and adapting to new models. To address this, we propose OnceEdit, a novel ensemble-based approach that employs a plug-in model as the editing module, enabling stable knowledge updates across multiple models. Building on the model ensemble, OnceEdit introduces two key mechanisms to enhance its effectiveness. First, we introduce a dynamic weight mechanism through a weight token for distinguishing between edit-related and non-edit-related instances, ensuring the appropriate utilization of knowledge from integrated models. Second, we incorporate an ensemble enhancement mechanism to mitigate the excessive reliance on the central model inherent in the model ensemble technique, making it more suitable for knowledge editing. Extensive experiments on diverse LLMs demonstrate that OnceEdit consistently outperforms existing methods while achieving superior editing efficiency. Further analysis confirms its adaptability and stability in multi-model editing scenarios.
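The abstract describes OnceEdit only at a high level. As a rough illustration of the dynamic-weight ensemble idea it mentions, the sketch below blends a deployed model's next-token distribution with a plug-in editor's distribution using a per-instance edit weight (near 1 for edit-related queries, near 0 otherwise). The function name, tensor shapes, and the specific mixing rule are illustrative assumptions, not the paper's actual formulation.

```python
import torch


def ensemble_next_token_logits(
    base_logits: torch.Tensor,     # (vocab,) logits from one deployed LLM
    editor_logits: torch.Tensor,   # (vocab,) logits from the shared plug-in editor
    edit_weight: float,            # scalar in [0, 1], e.g. predicted from a weight token
) -> torch.Tensor:
    """Blend a base model's prediction with the plug-in editor's prediction.

    This is a minimal sketch of a weighted ensemble, not OnceEdit's exact method:
    the edit weight down-weights the editor on non-edit-related instances so the
    base model's original behavior is preserved.
    """
    base_probs = torch.softmax(base_logits, dim=-1)
    editor_probs = torch.softmax(editor_logits, dim=-1)
    # Convex combination of the two distributions, controlled per instance.
    mixed = (1.0 - edit_weight) * base_probs + edit_weight * editor_probs
    return torch.log(mixed + 1e-12)


# Hypothetical usage: a query touching edited knowledge gets a high edit weight.
vocab = 32000
base = torch.randn(vocab)
editor = torch.randn(vocab)
logits_for_edited_fact = ensemble_next_token_logits(base, editor, edit_weight=0.9)
logits_for_unrelated_query = ensemble_next_token_logits(base, editor, edit_weight=0.1)
```

Because the same plug-in editor can be paired with any base model in this way, a single edit to the editor can, in principle, update all deployed models at once, which is the multi-model setting the abstract targets.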