Efficient Knowledge Editing via Minimal Precomputation

Akshat Gupta, Maochuan Lu, Thomas Hartvigsen, Gopala Anumanchipalli


Abstract
Knowledge editing methods like MEMIT make data- and compute-efficient updates of factual knowledge: a single sentence suffices to update a fact and its consequences. However, what is often overlooked is a “precomputation step” that carries a one-time but significant computational cost. The authors of MEMIT originally precompute approximately 44 million hidden vectors per edited layer, which requires a forward pass over 44 million tokens. For GPT-J (6B) this precomputation takes 36 hours on a single GPU, and approximately 40 hours for Llama2-7B; the time grows further with model size. In this paper, we show that this excessive computational cost is unnecessary. Knowledge editing using MEMIT and related methods, such as ROME and EMMET, can be performed after precomputing only a very small fraction of the 44 million hidden vectors. We first derive the theoretical minimum number of precomputed hidden vectors required for the solutions of these editing methods to exist. We then empirically show that knowledge editing using these methods can be done with significantly fewer precomputed hidden vectors: less than 0.3% of the originally stipulated number suffices. This saves a significant amount of precomputation time and allows users to begin editing new models within a few minutes.
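To make the precomputation step concrete, the sketch below accumulates the second-moment statistic of one layer's MLP hidden vectors from a small text sample instead of 44 million tokens. It is a minimal illustration, not the authors' released implementation: the GPT-J-style module path (`model.transformer.h[layer_idx].mlp.fc_out`), the helper name `estimate_key_covariance`, and the choice of sample size are assumptions for illustration only.

```python
# Minimal sketch (assumed names, not the authors' code) of MEMIT-style
# precomputation: estimating E[k k^T] over a layer's MLP "key" vectors
# from a small token sample rather than ~44M tokens.
import torch

@torch.no_grad()
def estimate_key_covariance(model, tokenizer, texts, layer_idx, device="cuda"):
    """Accumulate the mean outer product of hidden vectors entering one
    MLP down-projection. `texts` is a small sample, e.g. a few thousand
    sentences (an assumption; the paper reports <0.3% of 44M vectors)."""
    cov, count = None, 0
    # Module path below matches GPT-J-style Hugging Face checkpoints;
    # it is an assumption for other architectures.
    target = model.transformer.h[layer_idx].mlp.fc_out

    def hook(_module, inputs, _output):
        nonlocal cov, count
        # Flatten (batch, seq, dim) -> (tokens, dim); double for stability.
        k = inputs[0].reshape(-1, inputs[0].shape[-1]).double()
        c = k.T @ k
        cov = c if cov is None else cov + c
        count += k.shape[0]

    handle = target.register_forward_hook(hook)
    try:
        for text in texts:
            batch = tokenizer(text, return_tensors="pt").to(device)
            model(**batch)  # forward pass only; the hook does the work
    finally:
        handle.remove()
    return cov / max(count, 1)  # estimated E[k k^T]
```

Under these assumptions, the returned matrix plays the role of the key-covariance statistic that MEMIT, ROME, and EMMET use when solving for weight updates; the paper's point is that a much smaller token sample than the original 44 million still yields a usable estimate.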
Anthology ID:
2025.acl-short.65
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
829–840
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-short.65/
Cite (ACL):
Akshat Gupta, Maochuan Lu, Thomas Hartvigsen, and Gopala Anumanchipalli. 2025. Efficient Knowledge Editing via Minimal Precomputation. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 829–840, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Efficient Knowledge Editing via Minimal Precomputation (Gupta et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-short.65.pdf