Better Aligned with Survey Respondents or Training Data? Unveiling Political Leanings of LLMs on U.S. Supreme Court Cases
Shanshan Xu, Santosh T.y.s.s, Yanai Elazar, Quirin Vogel, Barbara Plank, Matthias Grabmair
Abstract
Recent works have shown that Large Language Models (LLMs) tend to memorize patterns and biases present in their training data, raising important questions about how such memorized content influences model behavior. One such concern is the emergence of political bias in LLM outputs. In this paper, we investigate the extent to which LLMs’ political leanings reflect memorized patterns from their pretraining corpora. We propose a method to quantitatively evaluate political leanings embedded in large pretraining corpora. Subsequently, we investigate whether LLMs’ political leanings are more closely aligned with their pretraining corpora or with surveyed human opinions. As a case study, we probe the political leanings of LLMs on 32 U.S. Supreme Court cases addressing contentious topics such as abortion and voting rights. Our findings reveal that LLMs strongly reflect the political leanings in their training data, while no strong correlation is observed with human opinions as expressed in surveys. These results underscore the importance of responsibly curating training data and of methodologies for auditing memorization in LLMs to ensure human-AI alignment.
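As a rough illustration of the comparison described in the abstract, the sketch below contrasts an LLM's per-case leanings with leanings estimated from its pretraining corpus and with survey-based public opinion. This is a minimal, hypothetical sketch, not the authors' released code: the use of Spearman rank correlation, the variable names, and all numeric values are assumptions introduced here for illustration only.

```python
# Hypothetical sketch: correlate an LLM's per-case political-leaning scores
# with (a) leanings estimated from its pretraining corpus and (b) survey-based
# public opinion. Placeholder data; not the paper's actual method or results.
from scipy.stats import spearmanr

# Illustrative leaning scores for a handful of Supreme Court cases
# (e.g., 0 = conservative-leaning, 1 = liberal-leaning).
llm_leaning    = [0.8, 0.3, 0.6, 0.9, 0.2]   # probed from the LLM
corpus_leaning = [0.7, 0.4, 0.5, 0.8, 0.3]   # estimated from pretraining data
survey_leaning = [0.2, 0.6, 0.9, 0.4, 0.5]   # from public opinion surveys

# Rank correlation of the LLM's leanings with each reference signal.
rho_corpus, p_corpus = spearmanr(llm_leaning, corpus_leaning)
rho_survey, p_survey = spearmanr(llm_leaning, survey_leaning)

print(f"LLM vs. corpus:  rho={rho_corpus:.2f} (p={p_corpus:.3f})")
print(f"LLM vs. surveys: rho={rho_survey:.2f} (p={p_survey:.3f})")
```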
- Anthology ID:
- 2025.l2m2-1.16
- Volume:
- Proceedings of the First Workshop on Large Language Model Memorization (L2M2)
- Month:
- August
- Year:
- 2025
- Address:
- Vienna, Austria
- Editors:
- Robin Jia, Eric Wallace, Yangsibo Huang, Tiago Pimentel, Pratyush Maini, Verna Dankers, Johnny Wei, Pietro Lesci
- Venues:
- L2M2 | WS
- Publisher:
- Association for Computational Linguistics
- Pages:
- 205–226
- URL:
- https://preview.aclanthology.org/acl25-workshop-ingestion/2025.l2m2-1.16/
- Cite (ACL):
- Shanshan Xu, Santosh T.y.s.s, Yanai Elazar, Quirin Vogel, Barbara Plank, and Matthias Grabmair. 2025. Better Aligned with Survey Respondents or Training Data? Unveiling Political Leanings of LLMs on U.S. Supreme Court Cases. In Proceedings of the First Workshop on Large Language Model Memorization (L2M2), pages 205–226, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal):
- Better Aligned with Survey Respondents or Training Data? Unveiling Political Leanings of LLMs on U.S. Supreme Court Cases (Xu et al., L2M2 2025)
- PDF:
- https://preview.aclanthology.org/acl25-workshop-ingestion/2025.l2m2-1.16.pdf