Modeling Motivated Reasoning in Law: Evaluating Strategic Role Conditioning in LLM Summarization

Eunjung Cho, Alexander Miserlis Hoyle, Yoan Hermstrüwer


Abstract
Large Language Models (LLMs) are increasingly used to generate user-tailored summaries, adapting outputs to specific stakeholders. In legal contexts, this raises important questions about motivated reasoning — how models strategically frame information to align with a stakeholder’s position within the legal system. Building on theories of legal realism and recent trends in legal practice, we investigate how LLMs respond to prompts conditioned on different legal roles (e.g., judges, prosecutors, attorneys) when summarizing judicial decisions. We introduce an evaluation framework grounded in legal fact and reasoning inclusion, also considering favorability towards stakeholders. Our results show that even when prompts include balancing instructions, models exhibit selective inclusion patterns that reflect role-consistent perspectives. These findings raise broader concerns about how similar alignment may emerge as LLMs begin to infer user roles from prior interactions or context, even without explicit role instructions. Our results underscore the need for role-aware evaluation of LLM summarization behavior in high-stakes legal settings.
Anthology ID:
2025.nllp-1.7
Volume:
Proceedings of the Natural Legal Language Processing Workshop 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Nikolaos Aletras, Ilias Chalkidis, Leslie Barrett, Cătălina Goanță, Daniel Preoțiuc-Pietro, Gerasimos Spanakis
Venues:
NLLP | WS
Publisher:
Association for Computational Linguistics
Pages:
68–112
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.nllp-1.7/
Cite (ACL):
Eunjung Cho, Alexander Miserlis Hoyle, and Yoan Hermstrüwer. 2025. Modeling Motivated Reasoning in Law: Evaluating Strategic Role Conditioning in LLM Summarization. In Proceedings of the Natural Legal Language Processing Workshop 2025, pages 68–112, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Modeling Motivated Reasoning in Law: Evaluating Strategic Role Conditioning in LLM Summarization (Cho et al., NLLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.nllp-1.7.pdf