POW: Political Overton Windows of Large Language Models

Leif Azzopardi, Yashar Moshfeghi


Abstract
Political bias in Large Language Models (LLMs) presents a growing concern for the responsible deployment of AI systems. Traditional audits often attempt to locate a model's political position as a point estimate, masking the broader set of ideological boundaries that shape what a model is willing or unwilling to say. In this paper, we draw upon the concept of the Overton Window as a framework for mapping these boundaries: the range of political views that a given LLM will espouse, remain neutral on, or refuse to endorse. To uncover these windows, we apply an auditing methodology called PRISM, which probes LLMs through task-driven prompts designed to elicit political stances indirectly. Using the Political Compass Test, we evaluate twenty-eight LLMs from eight providers to reveal their distinct Overton Windows. While many models default to economically left and socially liberal positions, we show that their willingness to express or reject certain positions varies considerably, with DeepSeek models tending to be the most restrictive in what they will discuss and Gemini models the most expansive. Our findings demonstrate that Overton Windows offer a richer, more nuanced view of political bias in LLMs and provide a new lens for auditing their normative boundaries.
Anthology ID:
2025.findings-emnlp.1347
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
24767–24773
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.1347/
DOI:
10.18653/v1/2025.findings-emnlp.1347
Cite (ACL):
Leif Azzopardi and Yashar Moshfeghi. 2025. POW: Political Overton Windows of Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 24767–24773, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
POW: Political Overton Windows of Large Language Models (Azzopardi & Moshfeghi, Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.1347.pdf
Checklist:
2025.findings-emnlp.1347.checklist.pdf