PROPER: A Progressive Learning Framework for Personalized Large Language Models with Group-Level Adaptation

Linhai Zhang, Jialong Wu, Deyu Zhou, Yulan He


Abstract
Personalized large language models (LLMs) aim to tailor their outputs to user preferences. Recent advances in parameter-efficient fine-tuning (PEFT) have highlighted the effectiveness of adapting population-level LLMs into personalized LLMs by fine-tuning user-specific parameters on user history. However, user data is typically sparse, making it challenging to adapt LLMs to individual user patterns. To address this challenge, we propose PROgressive PERsonalization (PROPER), a novel progressive learning framework inspired by meso-level theory in social science. PROPER bridges population-level and user-level models by grouping users according to their preferences and adapting LLMs in stages. It combines a Mixture-of-Experts (MoE) structure with Low-Rank Adaptation (LoRA), using a user-aware router to assign users to appropriate groups automatically. Additionally, a LoRA-aware router is proposed to facilitate the integration of individual user LoRAs with the group-level LoRA. Experimental results show that PROPER significantly outperforms state-of-the-art models across multiple tasks, demonstrating the effectiveness of our approach.
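The abstract's group-level adaptation idea can be illustrated with a rough sketch: a user-aware router produces a distribution over group-level LoRA experts, and the gated mixture of their low-rank updates is added to a frozen base weight. All names, shapes, and the plain-NumPy formulation below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_groups = 16, 4, 3  # hidden dim, LoRA rank, number of user groups (hypothetical)

# Group-level LoRA factors: the update for group g is B[g] @ A[g] (rank r)
A = rng.normal(size=(n_groups, r, d)) * 0.01
B = rng.normal(size=(n_groups, d, r)) * 0.01

# User-aware router: maps a user embedding to logits over the groups
W_route = rng.normal(size=(n_groups, d)) * 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def route_and_adapt(W0, x, user_emb):
    """Gate the group LoRA updates by the router, apply the adapted weight to x."""
    gate = softmax(W_route @ user_emb)                       # (n_groups,)
    delta = sum(g * (B[i] @ A[i]) for i, g in enumerate(gate))
    return (W0 + delta) @ x, gate

W0 = rng.normal(size=(d, d)) * 0.1   # frozen population-level weight
x = rng.normal(size=d)
user_emb = rng.normal(size=d)
y, gate = route_and_adapt(W0, x, user_emb)
```

In the paper's staged setup, an individual user LoRA would additionally be merged with the selected group-level LoRA via the LoRA-aware router; the sketch above covers only the group-routing step.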
Anthology ID:
2025.acl-long.800
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
16399–16411
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.800/
Cite (ACL):
Linhai Zhang, Jialong Wu, Deyu Zhou, and Yulan He. 2025. PROPER: A Progressive Learning Framework for Personalized Large Language Models with Group-Level Adaptation. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16399–16411, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
PROPER: A Progressive Learning Framework for Personalized Large Language Models with Group-Level Adaptation (Zhang et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.800.pdf