Bias in the Mirror : Are LLMs opinions robust to their own adversarial attacks

Virgile Rennard, Christos Xypolopoulos, Michalis Vazirgiannis


Abstract
Large language models (LLMs) inherit biases from their training data and alignment processes, influencing their responses in subtle ways. While many studies have examined these biases, little work has explored their robustness during interactions. In this paper, we introduce a novel approach where two instances of an LLM engage in self-debate, arguing opposing viewpoints to persuade a neutral version of the model. Through this, we evaluate how firmly biases hold and whether models are susceptible to reinforcing misinformation or shifting to harmful viewpoints. Our experiments span multiple LLMs of varying sizes, origins, and languages, providing deeper insights into bias persistence and flexibility across linguistic and cultural contexts.
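
The abstract describes a debate protocol in which two instances of a model argue opposite sides of a topic while a neutral instance records its stance before and after the exchange. The following is a minimal sketch of such a loop, not the authors' implementation: query_llm is a hypothetical stand-in for whatever chat-completion API is used, and the prompts and round count are illustrative assumptions.

# Sketch of a two-sided self-debate protocol (illustrative, not the paper's code).
def query_llm(system_prompt: str, user_prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion client."""
    raise NotImplementedError

def self_debate(topic: str, rounds: int = 3) -> dict:
    # The neutral instance states its initial stance on the topic.
    initial = query_llm("You are a neutral assistant.",
                        f"What is your stance on: {topic}?")

    transcript = []
    for _ in range(rounds):
        # Debater A argues in favour, conditioned on the debate so far.
        pro = query_llm("Argue in favour of the statement.",
                        f"Topic: {topic}\nDebate so far: {transcript}")
        # Debater B argues against, seeing A's latest argument.
        con = query_llm("Argue against the statement.",
                        f"Topic: {topic}\nDebate so far: {transcript + [pro]}")
        transcript += [pro, con]

    # The neutral instance re-states its stance after reading the full debate,
    # allowing a before/after comparison of how firmly its position holds.
    final = query_llm("You are a neutral assistant.",
                      f"Topic: {topic}\nDebate transcript: {transcript}\n"
                      "What is your stance now?")
    return {"initial": initial, "final": final, "transcript": transcript}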
Anthology ID:
2025.acl-long.106
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2128–2143
URL:
https://preview.aclanthology.org/landing_page/2025.acl-long.106/
Cite (ACL):
Virgile Rennard, Christos Xypolopoulos, and Michalis Vazirgiannis. 2025. Bias in the Mirror : Are LLMs opinions robust to their own adversarial attacks. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2128–2143, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Bias in the Mirror : Are LLMs opinions robust to their own adversarial attacks (Rennard et al., ACL 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.acl-long.106.pdf