On the Way to LLM Personalization: Learning to Remember User Conversations
Lucie Charlotte Magister, Katherine Metcalf, Yizhe Zhang, Maartje Ter Hoeve
Abstract
Large Language Models (LLMs) have quickly become invaluable assistants for a variety of tasks. However, their effectiveness is constrained by their ability to tailor responses to human preferences and behaviors via personalization. Prior work in LLM personalization has largely focused on style transfer or on incorporating small factoids about the user, as knowledge injection remains an open challenge. In this paper, we explore injecting knowledge of prior conversations into LLMs to enable future work on less redundant, personalized conversations. We identify two real-world constraints: (1) conversations are sequential in time and must be treated as such during training, and (2) per-user personalization is only viable in parameter-efficient settings. To this end, we propose PLUM, a pipeline that performs data augmentation to up-sample conversations as question-answer pairs, which are then used to fine-tune a low-rank adaptation (LoRA) adapter with a weighted cross-entropy loss. Even in this first exploration of the problem, we perform competitively with baselines such as retrieval-augmented generation (RAG), attaining an accuracy of 81.5% across 100 conversations.
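To make the abstract's pipeline concrete, here is a minimal sketch of the fine-tuning step it describes: training a LoRA adapter on a conversation-derived question-answer pair with a per-token weighted cross-entropy loss. The base model ("gpt2"), the 2x answer-token weighting, the example QA pair, and the LoRA hyperparameters are illustrative assumptions, not the paper's actual configuration.

```python
# Illustrative sketch only, not the authors' implementation.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder base model
model = AutoModelForCausalLM.from_pretrained("gpt2")
# A low-rank adapter keeps per-user personalization parameter-efficient.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

def weighted_ce(logits, labels, weights):
    # Shift logits/labels so each position predicts the next token.
    logits, labels, weights = logits[:, :-1], labels[:, 1:], weights[:, 1:]
    per_token = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), labels.reshape(-1), reduction="none"
    )
    return (per_token * weights.reshape(-1)).sum() / weights.sum()

# One QA pair up-sampled from a prior user conversation (hypothetical example).
question = "Q: Which city did I say I was moving to? A:"
answer = " You mentioned you were moving to Vienna."
enc = tokenizer(question + answer, return_tensors="pt")
labels = enc.input_ids
# Up-weight answer tokens relative to question tokens (assumed scheme).
weights = torch.ones_like(labels, dtype=torch.float)
weights[:, len(tokenizer(question).input_ids):] = 2.0

loss = weighted_ce(model(**enc).logits, labels, weights)
loss.backward()  # gradients flow only into the LoRA adapter weights
```

Per the constraints named above, the full pipeline would present such pairs in the order the conversations occurred and train a separate adapter per user.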
- Anthology ID: 2025.l2m2-1.5
- Volume: Proceedings of the First Workshop on Large Language Model Memorization (L2M2)
- Month: August
- Year: 2025
- Address: Vienna, Austria
- Editors: Robin Jia, Eric Wallace, Yangsibo Huang, Tiago Pimentel, Pratyush Maini, Verna Dankers, Johnny Wei, Pietro Lesci
- Venues: L2M2 | WS
- Publisher: Association for Computational Linguistics
- Pages: 61–77
- URL: https://preview.aclanthology.org/landing_page/2025.l2m2-1.5/
- Cite (ACL): Lucie Charlotte Magister, Katherine Metcalf, Yizhe Zhang, and Maartje Ter Hoeve. 2025. On the Way to LLM Personalization: Learning to Remember User Conversations. In Proceedings of the First Workshop on Large Language Model Memorization (L2M2), pages 61–77, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal): On the Way to LLM Personalization: Learning to Remember User Conversations (Magister et al., L2M2 2025)
- PDF: https://preview.aclanthology.org/landing_page/2025.l2m2-1.5.pdf