Learning What to Remember: Adaptive Probabilistic Memory Retention for Memory-Efficient Language Models

S M Rafiuddin, Muntaha Nujat Khan


Abstract
Transformer attention scales quadratically with sequence length, O(n²), limiting long-context use. We propose Adaptive Retention, a probabilistic, layer-wise token selection mechanism that learns which representations to keep under a strict global budget M. Retention is modeled with Bernoulli gates trained via a Hard-Concrete/variational relaxation and enforced with a simple top-M rule at inference, making the method differentiable and drop-in for standard encoders. Across classification, extractive QA, and long-document summarization, keeping only 30–50% of tokens preserves ≥ 95% of full-model performance while cutting peak memory by ∼ 35–45% and improving throughput by up to ∼ 1.8×. This architecture-agnostic approach delivers practical long-context efficiency without modifying base attention or task heads.
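The gating scheme described in the abstract can be illustrated with a minimal PyTorch sketch (not the authors' implementation): Hard-Concrete gates provide differentiable Bernoulli-like retention decisions during training, and a deterministic top-M rule enforces the global budget at inference. The linear scorer, parameter names (log_alpha, beta, gamma, zeta), and per-sequence budget handling are assumptions for illustration only.

import torch
import torch.nn as nn

class AdaptiveRetentionGate(nn.Module):
    """Illustrative per-token retention gate (assumed design, not the paper's code)."""

    def __init__(self, hidden_size, beta=2/3, gamma=-0.1, zeta=1.1):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)   # token-wise retention logits (assumed scorer)
        self.beta, self.gamma, self.zeta = beta, gamma, zeta  # standard Hard-Concrete constants

    def forward(self, h, budget_m):
        # h: (batch, seq_len, hidden_size); budget_m: global token budget M (applied per sequence here)
        log_alpha = self.scorer(h).squeeze(-1)     # (batch, seq_len)
        if self.training:
            # Hard-Concrete relaxation: differentiable surrogate for Bernoulli keep/drop gates
            u = torch.rand_like(log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + log_alpha) / self.beta)
            s = s * (self.zeta - self.gamma) + self.gamma
            gates = s.clamp(0.0, 1.0)              # soft keep-probabilities in [0, 1]
        else:
            # Inference: deterministic top-M selection under the budget
            scores = torch.sigmoid(log_alpha)
            idx = scores.topk(budget_m, dim=-1).indices
            gates = torch.zeros_like(scores).scatter_(-1, idx, 1.0)
        # Gated hidden states; dropped tokens are zeroed (pruning them outright is another option)
        return h * gates.unsqueeze(-1), gates

In practice a sparsity penalty on the expected gate values would be added to the task loss to push retention toward the budget; the exact regularizer and layer-wise budget allocation follow the paper, not this sketch.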
Anthology ID:
2025.findings-emnlp.212
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3969–3981
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.212/
DOI:
10.18653/v1/2025.findings-emnlp.212
Cite (ACL):
S M Rafiuddin and Muntaha Nujat Khan. 2025. Learning What to Remember: Adaptive Probabilistic Memory Retention for Memory-Efficient Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 3969–3981, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Learning What to Remember: Adaptive Probabilistic Memory Retention for Memory-Efficient Language Models (Rafiuddin & Khan, Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.212.pdf
Checklist:
2025.findings-emnlp.212.checklist.pdf