Controllable Memorization in LLMs via Weight Pruning

Chenjie Ni, Zhepeng Wang, Runxue Bao, Shangqian Gao, Yanfu Zhang


Abstract
The evolution of pre-trained large language models (LLMs) has significantly transformed natural language processing. However, these advancements pose challenges, particularly the unintended memorization of training data, which raises ethical and privacy concerns. While prior research has largely focused on mitigating memorization or extracting memorized information, the deliberate control of memorization has been underexplored. This study addresses this gap by introducing a novel and unified gradient-based weight pruning framework to freely control memorization rates in LLMs. Our method enables fine-grained control over pruning parameters, allowing models to suppress or enhance memorization based on application-specific requirements. Experimental results demonstrate that our approach effectively balances the trade-offs between memorization and generalization, with an increase of up to 89.3% in Fractional ER suppression and 40.9% in Exact ER amplification compared to the original models.
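As a rough illustration of what "gradient-based weight pruning" for memorization control might look like in practice, the sketch below scores each weight by |w · ∂L/∂w| on a reference batch and zeroes out the lowest-scoring fraction. The scoring rule, the loss used to compute gradients, and the idea that the pruning ratio acts as the memorization control knob are assumptions made for illustration only; they are not the authors' actual framework.

# Minimal sketch (assumption, not the paper's algorithm): score every weight
# by |w * dL/dw| on a reference batch and zero the lowest-scoring fraction.
# Raising or lowering `ratio` is the hypothetical control knob.
import torch
import torch.nn as nn

def gradient_prune(model: nn.Module, loss: torch.Tensor, ratio: float) -> None:
    """Zero the `ratio` fraction of weight entries with the smallest |w * grad|."""
    loss.backward()  # populate .grad on every parameter
    weights = [p for p in model.parameters() if p.grad is not None and p.dim() > 1]
    scores = torch.cat([(p.detach() * p.grad.detach()).abs().flatten() for p in weights])
    k = int(ratio * scores.numel())
    if k == 0:
        return
    threshold = torch.kthvalue(scores, k).values  # k-th smallest importance score
    with torch.no_grad():
        for p in weights:
            keep = (p.detach() * p.grad.detach()).abs() > threshold
            p.mul_(keep.to(p.dtype))  # zero out weights at or below the threshold

# Toy usage: a small MLP stands in for an LLM so the sketch stays self-contained.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
gradient_prune(model, nn.functional.cross_entropy(model(x), y), ratio=0.3)

In this toy version a higher ratio removes more low-importance weights (suppressing memorized content at some cost to generalization), while a lower ratio leaves the model closer to its original behavior; how the actual method maps pruning decisions to Fractional and Exact extraction rates is described in the paper itself.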
Anthology ID: 2025.emnlp-main.765
Volume: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 15141–15156
URL: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.765/
Cite (ACL): Chenjie Ni, Zhepeng Wang, Runxue Bao, Shangqian Gao, and Yanfu Zhang. 2025. Controllable Memorization in LLMs via Weight Pruning. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 15141–15156, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Controllable Memorization in LLMs via Weight Pruning (Ni et al., EMNLP 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.765.pdf
Checklist: 2025.emnlp-main.765.checklist.pdf