The Hidden Bias: A Study on Explicit and Implicit Political Stereotypes in Large Language Models

Konrad Löhr, Shuzhou Yuan, Michael Färber


Abstract
Large Language Models (LLMs) are increasingly integral to information dissemination and decision-making processes. Given their growing societal influence, understanding potential biases, particularly within the political domain, is crucial to prevent undue influence on public opinion and democratic processes. This work investigates political bias and stereotype propagation across eight prominent LLMs using the two-dimensional Political Compass Test (PCT). Initially, the PCT is employed to assess the inherent political leanings of these models. Subsequently, persona prompting with the PCT is used to explore explicit stereotypes across various social dimensions. In a final step, implicit stereotypes are uncovered by evaluating models with multilingual versions of the PCT. Key findings reveal a consistent left-leaning political alignment across all investigated models. Furthermore, while the nature and extent of stereotypes vary considerably between models, implicit stereotypes elicited through language variation are more pronounced than those identified via explicit persona prompting. Interestingly, for most models, implicit and explicit stereotypes show a notable alignment, suggesting a degree of transparency or "awareness" regarding their inherent biases. This study underscores the complex interplay of political bias and stereotypes in LLMs.
Anthology ID:
2026.findings-eacl.118
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2235–2252
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.118/
Cite (ACL):
Konrad Löhr, Shuzhou Yuan, and Michael Färber. 2026. The Hidden Bias: A Study on Explicit and Implicit Political Stereotypes in Large Language Models. In Findings of the Association for Computational Linguistics: EACL 2026, pages 2235–2252, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
The Hidden Bias: A Study on Explicit and Implicit Political Stereotypes in Large Language Models (Löhr et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.118.pdf
Checklist:
2026.findings-eacl.118.checklist.pdf