LLMs syntactically adapt their language use to their conversational partner

Florian Kandra, Vera Demberg, Alexander Koller


Abstract
It has been frequently observed that human speakers align their language use with each other during conversations. In this paper, we study empirically whether large language models (LLMs) exhibit the same behavior of conversational adaptation. We construct a corpus of conversations between LLMs and find that two LLM agents end up making more similar syntactic choices as conversations go on, confirming that modern LLMs adapt their language use to their conversational partners in at least a rudimentary way.
Anthology ID:
2025.acl-short.68
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
873–886
URL:
https://preview.aclanthology.org/landing_page/2025.acl-short.68/
Cite (ACL):
Florian Kandra, Vera Demberg, and Alexander Koller. 2025. LLMs syntactically adapt their language use to their conversational partner. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 873–886, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
LLMs syntactically adapt their language use to their conversational partner (Kandra et al., ACL 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.acl-short.68.pdf