Pia Wenzel Neves
2025
PolBiX: Detecting LLMs’ Political Bias in Fact-Checking through X-phemisms
Charlott Jakob | David Harbecke | Patrick Parschan | Pia Wenzel Neves | Vera Schmitt
Findings of the Association for Computational Linguistics: EMNLP 2025
Large Language Models are increasingly used in applications requiring objective assessment, which could be compromised by political bias. Many studies have found preferences for left-leaning positions in LLMs, but downstream effects on tasks like fact-checking remain underexplored. In this study, we systematically investigate political bias by exchanging words in German claims with euphemisms or dysphemisms. We construct minimal pairs of factually equivalent claims that differ in political connotation to assess how consistently LLMs classify them as true or false. We evaluate six LLMs and find that, more than political leaning, the presence of judgmental words significantly influences truthfulness assessment. While a few models show tendencies of political bias, this is not mitigated by explicitly calling for objectivity in prompts. Warning: This paper contains content that may be offensive or upsetting.
Overview of the SustainEval 2025 Shared Task: Identifying the Topic and Verifiability of Sustainability Report Excerpts
Jakob Prange | Charlott Jakob | Patrick Göttfert | Raphael Huber | Pia Wenzel Neves | Annemarie Friedrich
Proceedings of the 21st Conference on Natural Language Processing (KONVENS 2025): Workshops