Tianyi Niu


2025

Chameleon LLMs: User Personas Influence Chatbot Personality Shifts
Jane Xing | Tianyi Niu | Shashank Srivastava
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

As large language models (LLMs) integrate into society, their ability to adapt to users is as critical as their accuracy. While prior work has used personality tests to examine the perceived personalities of LLMs, little research has explored whether LLMs adapt their perceived personalities in response to user interactions. We investigate whether and how LLMs exhibit conversational adaptations over prolonged interactions. Using controlled simulations in which a user and a chatbot engage in dialogue, we measure the chatbot's personality shift before and after the conversation. Across multiple models, we find that traits such as Agreeableness, Extraversion, and Conscientiousness are highly susceptible to user influence, whereas Emotional Stability and Intellect remain relatively more stable. Our results suggest that LLMs dynamically adjust their conversational style in response to user personas, raising important implications for AI alignment, trust, and safety.

Probing Neural Network Generalization using Default Patterns
Brandon Prickett | Tianyi Niu | Katya Pertsova
Proceedings of the 22nd SIGMORPHON workshop on Computational Morphology, Phonology, and Phonetics

Whether neural-net models can learn minority-default patterns has been a matter of some controversy. Results based on modeling real human language data are hard to interpret due to their complexity. Therefore, we examine the learning of a simple artificial language pattern involving defaults using three computational models: an Encoder-Decoder RNN, a Transformer Encoder, and a logistic regression model. Overall, we find that the models have the hardest time with minority defaults, but can eventually learn them and apply them to novel words (although they do not always extend them to completely novel segments or novel CV-sequences). Type frequency has the largest effect on learning in all models, trumping the effect of distribution. We examine the weights of two models to provide further insights into how defaults are represented inside the models.