A Survey on Personalized Alignment—The Missing Piece for Large Language Models in Real-World Applications

Jian Guan, Junfei Wu, Jia-Nan Li, Chuanqi Cheng, Wei Wu


Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities, yet their transition to real-world applications reveals a critical limitation: the inability to adapt to individual preferences while maintaining alignment with universal human values. Current alignment techniques adopt a one-size-fits-all approach that fails to accommodate users’ diverse backgrounds and needs. This paper presents the first comprehensive survey of personalized alignment—a paradigm that enables LLMs to adapt their behavior within ethical boundaries based on individual preferences. We propose a unified framework comprising preference memory management, personalized generation, and feedback-based alignment, systematically analyzing implementation approaches and evaluating their effectiveness across various scenarios. By examining current techniques, potential risks, and future challenges, this survey provides a structured foundation for developing more adaptable and ethically aligned LLMs.
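
To make the three-component framework named in the abstract concrete, the minimal Python sketch below models one loop of preference memory management, personalized generation, and feedback-based alignment. It is an illustrative assumption only: the class and function names (PreferenceMemory, personalized_generate, align_from_feedback) are hypothetical and do not correspond to an interface defined in the survey, and the generation step is a placeholder for an actual LLM call.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three-component loop described in the abstract.
# Names and interfaces are assumptions for illustration, not the survey's API.


@dataclass
class PreferenceMemory:
    """Stores and retrieves a user's stated or inferred preferences."""

    preferences: dict = field(default_factory=dict)

    def update(self, key: str, value: str) -> None:
        # Preference memory management: record or overwrite a preference.
        self.preferences[key] = value

    def retrieve(self) -> dict:
        # Return a copy so callers cannot mutate the stored memory directly.
        return dict(self.preferences)


def personalized_generate(prompt: str, memory: PreferenceMemory) -> str:
    """Personalized generation: condition output on retrieved preferences.

    A real system would pass the preferences to an LLM; here we only
    format a placeholder string.
    """
    prefs = "; ".join(f"{k}={v}" for k, v in memory.retrieve().items())
    return f"[response to '{prompt}' conditioned on preferences: {prefs}]"


def align_from_feedback(memory: PreferenceMemory, feedback: dict) -> None:
    """Feedback-based alignment: fold explicit user feedback back into memory."""
    for key, value in feedback.items():
        memory.update(key, value)


if __name__ == "__main__":
    memory = PreferenceMemory()
    memory.update("tone", "concise")
    print(personalized_generate("Summarize this paper", memory))

    # The user asks for more detail; feedback updates the stored preference.
    align_from_feedback(memory, {"tone": "detailed"})
    print(personalized_generate("Summarize this paper", memory))
```

In this sketch the feedback loop only rewrites stored preference values; the surveyed approaches may instead update model parameters or retrieval policies, which this placeholder does not attempt to capture.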
Anthology ID:
2025.findings-acl.277
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5313–5333
URL:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.277/
DOI:
10.18653/v1/2025.findings-acl.277
Cite (ACL):
Jian Guan, Junfei Wu, Jia-Nan Li, Chuanqi Cheng, and Wei Wu. 2025. A Survey on Personalized Alignment—The Missing Piece for Large Language Models in Real-World Applications. In Findings of the Association for Computational Linguistics: ACL 2025, pages 5313–5333, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
A Survey on Personalized Alignment—The Missing Piece for Large Language Models in Real-World Applications (Guan et al., Findings 2025)
PDF:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.277.pdf