PRIME: Large Language Model Personalization with Cognitive Dual-Memory and Personalized Thought Process

Xinliang Frederick Zhang, Nicholas Beauchamp, Lu Wang
Abstract
Large language model (LLM) personalization aims to align model outputs with individuals’ unique preferences and opinions. While recent efforts have implemented various personalization methods, a unified theoretical framework that can systematically explain the drivers of effective personalization is still lacking. In this work, we integrate the well-established cognitive dual-memory model into LLM personalization, mirroring episodic memory with historical user engagements and semantic memory with long-term, evolving user beliefs. Specifically, we systematically investigate memory instantiations and introduce a unified framework, PRIME, built on episodic and semantic memory mechanisms. We further augment PRIME with a novel personalized thinking capability inspired by the slow-thinking strategy. Moreover, recognizing the absence of suitable benchmarks, we introduce a dataset drawn from Change My View (CMV) on Reddit, specifically designed to evaluate long-context personalization. Extensive experiments validate PRIME’s effectiveness across both long- and short-context scenarios. Further analysis confirms that PRIME effectively captures dynamic personalization beyond mere popularity biases.
Anthology ID:
2025.emnlp-main.1711
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
33695–33724
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1711/
Cite (ACL):
Xinliang Frederick Zhang, Nicholas Beauchamp, and Lu Wang. 2025. PRIME: Large Language Model Personalization with Cognitive Dual-Memory and Personalized Thought Process. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 33695–33724, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
PRIME: Large Language Model Personalization with Cognitive Dual-Memory and Personalized Thought Process (Zhang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1711.pdf
Checklist:
 2025.emnlp-main.1711.checklist.pdf