Cultural Learning-Based Culture Adaptation of Language Models

Chen Cecilia Liu, Anna Korhonen, Iryna Gurevych


Abstract
Adapting large language models (LLMs) to diverse cultural values is a challenging task, as existing LLMs often reflect the values of specific groups by default and can potentially cause harm to others. In this paper, we present CLCA, a novel framework for enhancing LLM alignment with cultural values based on cultural learning. The framework leverages simulated social interactions to generate conversations in which LLMs engage in role-playing within culturally adapted social scenarios, capturing implicit cultural norms for model fine-tuning. CLCA improves cultural value alignment across various model architectures, as measured using World Values Survey data, demonstrating the effectiveness of our proposed approach. Our results provide early evidence that understanding intent and social interactions can enhance cultural value adaptation in LLMs, highlighting the promise of training approaches based on cultural learning.
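The abstract describes a two-step pipeline: simulate culturally situated role-play conversations, then fine-tune on the resulting data. The sketch below illustrates that general idea only; the `Scenario` structure, `simulate_dialogue`, `to_sft_examples`, and all prompt wording are hypothetical assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of a cultural-learning data pipeline of the kind the
# abstract describes: simulate role-play conversations in culturally adapted
# scenarios, then convert them into supervised fine-tuning examples.
# Names, prompts, and data fields are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Scenario:
    culture: str            # e.g., a country/culture label tied to survey data
    setting: str            # short description of the social situation
    roles: tuple[str, str]  # the two personas who will converse


def simulate_dialogue(llm, scenario: Scenario, turns: int = 6) -> list[dict]:
    """Have an LLM role-play both personas within a culturally adapted scenario."""
    history: list[dict] = []
    for t in range(turns):
        speaker = scenario.roles[t % 2]
        prompt = (
            f"You are {speaker} in {scenario.culture}. Setting: {scenario.setting}.\n"
            f"Conversation so far: {history}\n"
            "Reply in character, following local social norms."
        )
        history.append({"speaker": speaker, "text": llm(prompt)})
    return history


def to_sft_examples(dialogue: list[dict], scenario: Scenario) -> list[dict]:
    """Turn each turn into an (instruction, response) pair for fine-tuning."""
    examples = []
    for i in range(1, len(dialogue)):
        context = " ".join(turn["text"] for turn in dialogue[:i])
        examples.append({
            "instruction": f"[{scenario.culture}] Continue this conversation: {context}",
            "response": dialogue[i]["text"],
        })
    return examples
```

Under these assumptions, the resulting pairs would feed a standard supervised fine-tuning loop; the concrete scenario construction and training setup used by CLCA are detailed in the paper itself.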
Anthology ID: 2025.acl-long.156
Volume: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 3114–3134
URL: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.156/
Cite (ACL): Chen Cecilia Liu, Anna Korhonen, and Iryna Gurevych. 2025. Cultural Learning-Based Culture Adaptation of Language Models. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3114–3134, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Cultural Learning-Based Culture Adaptation of Language Models (Liu et al., ACL 2025)
PDF: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.156.pdf