LLM Agents in Interaction: Measuring Personality Consistency and Linguistic Alignment in Interacting Populations of Large Language Models

Ivar Frisch, Mario Giulianelli


Abstract
Agent interaction has long been a key topic in psychology, philosophy, and artificial intelligence, and it is now gaining traction in large language model (LLM) research. This experimental study seeks to lay the groundwork for our understanding of dialogue-based interaction between LLMs: Do persona-prompted LLMs show consistent personality and language use in interaction? We condition GPT-3.5 on asymmetric personality profiles to create a population of LLM agents, administer personality tests to these agents, and submit them to a collaborative writing task. We find that different profiles exhibit different degrees of personality consistency and linguistic alignment in interaction.
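As an illustration of the persona-prompting setup the abstract describes, here is a minimal sketch (not the authors' code) of conditioning a chat model on a personality profile and asking it one Likert-style questionnaire item. It assumes the `openai` Python client (v1+) with an `OPENAI_API_KEY` in the environment; the persona text and test item are illustrative placeholders, not the paper's materials.

```python
# Minimal sketch: persona-prompt a chat model and administer one Big-Five-style item.
# Assumptions: the `openai` Python client (>=1.0), gpt-3.5-turbo, and illustrative
# persona/item strings that stand in for the paper's actual prompts.
from openai import OpenAI

client = OpenAI()

# Hypothetical personality profile used as a system prompt (persona conditioning).
persona = (
    "You are a person who is extraverted, agreeable, conscientious, "
    "emotionally stable, and open to experience."
)

# One illustrative questionnaire item, answered on a 1-5 Likert scale.
item = (
    "I see myself as someone who is talkative. "
    "Answer with a single number from 1 (disagree strongly) to 5 (agree strongly)."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": item},
    ],
    temperature=0.7,  # non-zero temperature induces variability across sampled agents
)

print(response.choices[0].message.content)  # e.g. "4"
```

Repeating such calls across items and across differently prompted agents yields the per-profile test scores whose consistency the study measures.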
Anthology ID: 2024.personalize-1.9
Volume: Proceedings of the 1st Workshop on Personalization of Generative AI Systems (PERSONALIZE 2024)
Month: March
Year: 2024
Address: St. Julians, Malta
Editors: Ameet Deshpande, EunJeong Hwang, Vishvak Murahari, Joon Sung Park, Diyi Yang, Ashish Sabharwal, Karthik Narasimhan, Ashwin Kalyan
Venues: PERSONALIZE | WS
Publisher: Association for Computational Linguistics
Pages: 102–111
URL: https://aclanthology.org/2024.personalize-1.9
Cite (ACL): Ivar Frisch and Mario Giulianelli. 2024. LLM Agents in Interaction: Measuring Personality Consistency and Linguistic Alignment in Interacting Populations of Large Language Models. In Proceedings of the 1st Workshop on Personalization of Generative AI Systems (PERSONALIZE 2024), pages 102–111, St. Julians, Malta. Association for Computational Linguistics.
Cite (Informal): LLM Agents in Interaction: Measuring Personality Consistency and Linguistic Alignment in Interacting Populations of Large Language Models (Frisch & Giulianelli, PERSONALIZE-WS 2024)
PDF: https://preview.aclanthology.org/nschneid-patch-4/2024.personalize-1.9.pdf