Human Alignment: How Much Do We Adapt to LLMs?

Cazalets Tanguy, Ruben Janssens, Tony Belpaeme, Joni Dambre


Abstract
Large Language Models (LLMs) are becoming a common part of our lives, yet few studies have examined how they influence our behavior. Using a cooperative language game in which players aim to converge on a shared word, we investigate how people adapt their communication strategies when paired with either an LLM or another human. Our study demonstrates that LLMs exert a measurable influence on human communication strategies and that humans notice and adapt to these differences irrespective of whether they are aware they are interacting with an LLM. These findings highlight the reciprocal influence of human–AI dialogue and raise important questions about the long-term implications of embedding LLMs in everyday communication.
Anthology ID: 2025.acl-short.47
Volume: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 603–613
URL: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-short.47/
Cite (ACL): Cazalets Tanguy, Ruben Janssens, Tony Belpaeme, and Joni Dambre. 2025. Human Alignment: How Much Do We Adapt to LLMs?. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 603–613, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Human Alignment: How Much Do We Adapt to LLMs? (Tanguy et al., ACL 2025)
PDF: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-short.47.pdf