Amitava Datta
2025
Can LLM Agents Maintain a Persona in Discourse?
Pranav Bhandari | Nicolas Fay | Michael J Wise | Amitava Datta | Stephanie Meek | Usman Naseem | Mehwish Nasim
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large Language Models (LLMs) are widely used as conversational agents, exploiting their capabilities in sectors such as education, law, and medicine. However, LLMs often exhibit context-shifting behaviour, resulting in a lack of consistent, interpretable, personality-aligned interactions. Adherence to assigned psychological traits has not been comprehensively analysed, especially in the case of dyadic (pairwise) conversations. We examine this challenge from two viewpoints. First, two conversational agents generate a discourse on a given topic, each assigned a personality from the OCEAN framework (Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism), with each trait set to High or Low. Second, multiple judge agents infer the originally assigned traits, allowing us to explore prediction consistency, inter-model agreement, and alignment with the assigned personality. Our findings indicate that while LLMs can be guided toward personality-driven dialogue, their ability to maintain personality traits varies significantly depending on the combination of models and discourse settings. These inconsistencies emphasise the challenge of achieving stable, interpretable, personality-aligned interactions in LLMs.
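The judging step described in the abstract (recovering assigned High/Low OCEAN traits and measuring inter-judge agreement) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual metrics or data: the trait labels, judge predictions, function names, and the scoring choices (fraction of matching traits; mean pairwise agreement) are all assumptions for demonstration.

```python
# Hypothetical sketch of trait-recovery scoring. In the real study the
# labels would come from LLM judge agents reading a generated discourse;
# here they are hard-coded to illustrate the two measurements.
from itertools import combinations

TRAITS = ["O", "C", "E", "A", "N"]  # the five OCEAN dimensions

def alignment(assigned: dict, predicted: dict) -> float:
    """Fraction of OCEAN traits where a judge recovers the assigned level."""
    return sum(assigned[t] == predicted[t] for t in TRAITS) / len(TRAITS)

def inter_judge_agreement(predictions: list) -> float:
    """Mean pairwise agreement between judge models, averaged over traits."""
    pairs = list(combinations(predictions, 2))
    total = sum(
        sum(p[t] == q[t] for t in TRAITS) / len(TRAITS) for p, q in pairs
    )
    return total / len(pairs)

# Illustrative assigned persona and three judges' inferred labels.
assigned = {"O": "High", "C": "Low", "E": "High", "A": "Low", "N": "Low"}
judges = [
    {"O": "High", "C": "Low", "E": "High", "A": "High", "N": "Low"},
    {"O": "High", "C": "Low", "E": "Low", "A": "High", "N": "Low"},
    {"O": "High", "C": "High", "E": "High", "A": "High", "N": "Low"},
]

scores = [alignment(assigned, j) for j in judges]
print(scores)                         # per-judge alignment with assignment
print(inter_judge_agreement(judges))  # consistency across the judges
```

Separating alignment (judge vs. assignment) from agreement (judge vs. judge) matters because judges can agree with each other while all drifting from the assigned persona.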