MAKE: Memory-Associated Knowledge Editing

Seongsik Park, Sangmin Park, Jaieun Kim, Harksoo Kim

Abstract
Since their emergence, large language models (LLMs) have advanced rapidly and now exert substantial influence across many domains. Consequently, model editing techniques, which aim to locally correct outdated or incorrect knowledge within a language model, have grown significantly in importance. However, traditional model editing methods face two limitations: they cannot guarantee that knowledge highly related to an edit will transfer to the post-edited model, and they often rely on external knowledge bases to address this issue. In this paper, we propose a novel approach that leverages the internal knowledge of the language model itself to overcome these shortcomings. First, we explore how to recall indirectly associated knowledge from the model, which can then be utilized in the editing process. Building on this, we propose MAKE (Memory-Associated Knowledge Editing), an editing method that takes the transfer of associated knowledge into account. As a result, MAKE successfully updates associated knowledge and achieves state-of-the-art performance in experiments on the zsRE+, CounterFact+, and MQuAKE datasets.
Anthology ID:
2025.tacl-1.44
Volume:
Transactions of the Association for Computational Linguistics, Volume 13
Year:
2025
Address:
Cambridge, MA
Venue:
TACL
Publisher:
MIT Press
Pages:
938–952
URL:
https://preview.aclanthology.org/ingest-eacl/2025.tacl-1.44/
DOI:
10.1162/tacl.a.26
Cite (ACL):
Seongsik Park, Sangmin Park, Jaieun Kim, and Harksoo Kim. 2025. MAKE: Memory-Associated Knowledge Editing. Transactions of the Association for Computational Linguistics, 13:938–952.
Cite (Informal):
MAKE: Memory-Associated Knowledge Editing (Park et al., TACL 2025)
PDF:
https://preview.aclanthology.org/ingest-eacl/2025.tacl-1.44.pdf