DocMEdit: Towards Document-Level Model Editing

Li Zeng, Zeming Liu, Chong Feng, Heyan Huang, Yuhang Guo


Abstract
Model editing aims to correct errors and outdated knowledge in large language models (LLMs) at minimal cost. Prior research has proposed a variety of datasets to assess the effectiveness of model editing methods. However, most existing datasets only require models to output short phrases or sentences, overlooking the widespread existence of document-level tasks in the real world and raising doubts about their practical usability. To address this limitation and promote the application of model editing in real-world scenarios, we propose the task of document-level model editing. To tackle its challenges and enhance model capabilities in practical settings, we introduce DocMEdit, a dataset focused on document-level model editing, characterized by document-level inputs and outputs, extrapolative evaluation, and multiple facts within a single edit. We propose a series of evaluation metrics and conduct experiments. The results show that the difficulties inherent in document-level model editing pose challenges for existing model editing methods.
Anthology ID:
2025.findings-acl.1012
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venues:
Findings | WS
Publisher:
Association for Computational Linguistics
Pages:
19725–19743
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.1012/
Cite (ACL):
Li Zeng, Zeming Liu, Chong Feng, Heyan Huang, and Yuhang Guo. 2025. DocMEdit: Towards Document-Level Model Editing. In Findings of the Association for Computational Linguistics: ACL 2025, pages 19725–19743, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
DocMEdit: Towards Document-Level Model Editing (Zeng et al., Findings 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.1012.pdf