Understanding Verbatim Memorization in LLMs Through Circuit Discovery

Ilya Lasy, Peter Knees, Stefan Woltran


Abstract
The mechanisms underlying memorization in LLMs, the verbatim reproduction of training data, remain poorly understood. What exact part of the network decides to retrieve a token that we would consider the start of a memorized sequence? How exactly does the model's behaviour differ when it produces a memorized sentence rather than a non-memorized one? In this work we approach these questions from a mechanistic interpretability standpoint by utilizing transformer circuits: the minimal computational subgraphs that perform specific functions within the model. Through carefully constructed contrastive datasets, we identify points where model generation diverges from memorized content and isolate the specific circuits responsible for two distinct aspects of memorization. We find that circuits that initiate memorization can also maintain it once started, whereas circuits that only maintain memorization cannot trigger its initiation. Intriguingly, memorization-prevention mechanisms transfer robustly across different text domains, while memorization induction appears more context-dependent.
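
As a concrete illustration of the contrastive setup described in the abstract, below is a minimal sketch (not the authors' code) of locating the token position where a model's greedy prediction first diverges from a memorized continuation. The model name, example strings, and helper function are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: find where greedy generation diverges from a memorized
# continuation. Assumes any Hugging Face causal LM; "gpt2" is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption; the paper's model may differ
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def divergence_index(prefix: str, memorized_continuation: str) -> int:
    """Index of the first continuation token where the model's greedy
    prediction differs from the memorized text (-1 if none differ)."""
    # Simplification: tokenizing prefix and continuation separately can
    # differ from tokenizing the joined string.
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    cont_ids = tokenizer(memorized_continuation, return_tensors="pt").input_ids
    ids = torch.cat([prefix_ids, cont_ids], dim=1)
    with torch.no_grad():
        logits = model(ids).logits  # shape: [1, seq_len, vocab]
    # The prediction for continuation token t comes from position
    # (prefix_len + t - 1), so slice logits from prefix_len - 1 up to the end.
    preds = logits[0, prefix_ids.shape[1] - 1 : -1].argmax(dim=-1)
    for t, (pred, target) in enumerate(zip(preds, cont_ids[0])):
        if pred.item() != target.item():
            return t
    return -1

# Usage with an illustrative prefix/continuation pair:
print(divergence_index("The quick brown fox jumps over", " the lazy dog."))
```
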
Anthology ID:
2025.l2m2-1.7
Volume:
Proceedings of the First Workshop on Large Language Model Memorization (L2M2)
Month:
August
Year:
2025
Address:
Vienna, Austria
Editors:
Robin Jia, Eric Wallace, Yangsibo Huang, Tiago Pimentel, Pratyush Maini, Verna Dankers, Johnny Wei, Pietro Lesci
Venues:
L2M2 | WS
Publisher:
Association for Computational Linguistics
Pages:
83–94
URL:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.l2m2-1.7/
Cite (ACL):
Ilya Lasy, Peter Knees, and Stefan Woltran. 2025. Understanding Verbatim Memorization in LLMs Through Circuit Discovery. In Proceedings of the First Workshop on Large Language Model Memorization (L2M2), pages 83–94, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Understanding Verbatim Memorization in LLMs Through Circuit Discovery (Lasy et al., L2M2 2025)
PDF:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.l2m2-1.7.pdf