Are LLMs (Really) Ideological? An IRT-based Analysis and Alignment Tool for Perceived Socio-Economic Bias in LLMs
Jasmin Wachter, Michael Radloff, Maja Smolej, Katharina Kinder-Kurlanda
Abstract
We introduce an Item Response Theory (IRT)-based framework to detect and quantify ideological bias in large language models (LLMs) without relying on subjective human judgments. Unlike prior work, our two-stage approach distinguishes between response avoidance and expressed bias by modeling ‘Prefer Not to Answer’ (PNA) behaviors and calibrating ideological leanings based on open-ended responses. We fine-tune two LLM families to represent liberal and conservative baselines, and validate our approach using a 105-item ideological test inventory. Our results show that off-the-shelf LLMs frequently avoid engagement with ideological prompts, calling into question previous claims of partisan bias. This framework provides a statistically grounded and scalable tool for LLM alignment and fairness assessment. The general methodology can also be applied to other forms of bias and to other languages.
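To make the two-stage idea concrete, here is a minimal sketch assuming a standard two-parameter logistic (2PL) IRT model: Stage 1 sets PNA responses aside rather than scoring them, and Stage 2 estimates an ideology score from the remaining answers by maximum likelihood. The function names, item parameters, and estimator below are illustrative assumptions, not the paper's exact specification.

```python
# Hedged sketch of two-stage IRT scoring. Assumes a standard 2PL model;
# the paper's actual model specification and estimation method may differ.
import numpy as np
from scipy.optimize import minimize_scalar

def irt_2pl(theta, a, b):
    """P(endorse item | ideology theta) under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def estimate_theta(responses, a, b):
    """Maximum-likelihood estimate of theta from binary item responses.

    Stage 1: np.nan entries mark 'Prefer Not to Answer' (PNA) and are
    excluded, mirroring the separation of avoidance from expressed bias.
    Stage 2: the remaining responses calibrate the ideology score.
    """
    mask = ~np.isnan(responses)
    r, a_m, b_m = responses[mask], a[mask], b[mask]

    def neg_log_lik(theta):
        p = irt_2pl(theta, a_m, b_m)
        return -np.sum(r * np.log(p) + (1 - r) * np.log(1 - p))

    return minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x

# Hypothetical item parameters and one model's responses (np.nan = PNA).
a = np.array([1.2, 0.8, 1.5, 1.0])    # discrimination parameters (assumed)
b = np.array([-0.5, 0.3, 1.0, -1.2])  # difficulty parameters (assumed)
responses = np.array([1.0, np.nan, 0.0, 1.0])
print(f"estimated ideology score: {estimate_theta(responses, a, b):.2f}")
```

Keeping the PNA stage separate from the scoring stage is what allows avoidance and leaning to be reported as distinct quantities, which is the abstract's core argument against reading refusals as partisan bias.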
- Anthology ID: 2025.gem-1.9
- Volume: Proceedings of the Fourth Workshop on Generation, Evaluation and Metrics (GEM²)
- Month: July
- Year: 2025
- Address: Vienna, Austria and virtual meeting
- Editors: Kaustubh Dhole, Miruna Clinciu
- Venues: GEM | WS
- Publisher: Association for Computational Linguistics
- Pages: 99–120
- URL: https://preview.aclanthology.org/corrections-2025-08/2025.gem-1.9/
- Cite (ACL): Jasmin Wachter, Michael Radloff, Maja Smolej, and Katharina Kinder-Kurlanda. 2025. Are LLMs (Really) Ideological? An IRT-based Analysis and Alignment Tool for Perceived Socio-Economic Bias in LLMs. In Proceedings of the Fourth Workshop on Generation, Evaluation and Metrics (GEM²), pages 99–120, Vienna, Austria and virtual meeting. Association for Computational Linguistics.
- Cite (Informal): Are LLMs (Really) Ideological? An IRT-based Analysis and Alignment Tool for Perceived Socio-Economic Bias in LLMs (Wachter et al., GEM 2025)
- PDF: https://preview.aclanthology.org/corrections-2025-08/2025.gem-1.9.pdf