Dissecting Persona-Driven Reasoning in Language Models via Activation Patching

Ansh Poonia, Maeghal Jain


Abstract
Large language models (LLMs) exhibit remarkable versatility in adopting diverse personas. In this study, we examine how assigning a persona influences a model’s reasoning on an objective task. Using activation patching, we take a first step toward understanding how key components of the model encode persona-specific information. Our findings reveal that the early Multi-Layer Perceptron (MLP) layers not only attend to the syntactic structure of the input but also process its semantic content. These layers transform persona tokens into richer representations, which are then used by the middle Multi-Head Attention (MHA) layers to shape the model’s output. Additionally, we identify specific attention heads that disproportionately attend to racial and color-based identities.
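The core technique in the abstract, activation patching, runs a model on two inputs (e.g. a persona prompt and a baseline prompt), caches the internal activations from one run, and overwrites a chosen layer's activation with the cached value during the other run; the resulting change in output measures how much that layer carries the signal of interest. The sketch below illustrates the mechanic on a toy stack of linear layers using PyTorch forward hooks; the model, layer index, and inputs are hypothetical stand-ins, not the paper's actual setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a transformer: a stack of linear "layers".
# The paper patches real LLM components (MLP / attention layers);
# this only demonstrates the patching mechanic itself.
model = nn.Sequential(*[nn.Linear(8, 8) for _ in range(4)])

def run_with_cache(x):
    """Run the model, recording each layer's output activation."""
    cache, hooks = {}, []
    for i, layer in enumerate(model):
        def save(mod, inp, out, i=i):
            cache[i] = out.detach()
        hooks.append(layer.register_forward_hook(save))
    out = model(x)
    for h in hooks:
        h.remove()
    return out, cache

def run_with_patch(x, layer_idx, patched_act):
    """Run the model, overwriting one layer's output with a cached activation."""
    def patch(mod, inp, out):
        return patched_act  # returning a tensor replaces the layer's output
    h = model[layer_idx].register_forward_hook(patch)
    out = model(x)
    h.remove()
    return out

clean_x = torch.randn(1, 8)    # e.g. prompt containing the persona token
corrupt_x = torch.randn(1, 8)  # e.g. prompt with a neutral baseline token

_, clean_cache = run_with_cache(clean_x)

# Patch layer 1's clean activation into the corrupted run; comparing
# patched_out to the unpatched output localizes that layer's contribution.
patched_out = run_with_patch(corrupt_x, layer_idx=1,
                             patched_act=clean_cache[1])
```

In practice this is done per-layer (and per attention head) across the model, and the output difference is aggregated into an importance score for each component.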
Anthology ID:
2025.findings-emnlp.1335
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
24553–24566
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.1335/
DOI:
10.18653/v1/2025.findings-emnlp.1335
Cite (ACL):
Ansh Poonia and Maeghal Jain. 2025. Dissecting Persona-Driven Reasoning in Language Models via Activation Patching. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 24553–24566, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Dissecting Persona-Driven Reasoning in Language Models via Activation Patching (Poonia & Jain, Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.1335.pdf
Checklist:
2025.findings-emnlp.1335.checklist.pdf