CoBia: Constructed Conversations Can Trigger Otherwise Concealed Societal Biases in LLMs

Nafiseh Nikeghbal, Amir Hossein Kargaran, Jana Diesner


Abstract
Improvements in model construction, including fortified safety guardrails, allow large language models (LLMs) to increasingly pass standard safety checks. However, LLMs sometimes slip into revealing harmful behavior, such as expressing racist viewpoints, during conversations. To analyze this systematically, we introduce CoBia, a suite of lightweight adversarial attacks that allow us to refine the scope of conditions under which LLMs depart from normative or ethical behavior in conversations. CoBia creates a constructed conversation in which the model utters a biased claim about a social group. We then evaluate whether the model can recover from the fabricated bias claim and reject biased follow-up questions. We evaluate 11 open-source as well as proprietary LLMs for their outputs related to six socio-demographic categories that are relevant to individual safety and fair treatment, i.e., gender, race, religion, nationality, sexual orientation, and others. Our evaluation is based on established LLM-based bias metrics, and we compare the results against human judgments to scope out the LLMs’ reliability and alignment. The results suggest that purposefully constructed conversations reliably reveal bias amplification and that LLMs often fail to reject biased follow-up questions during dialogue. This form of stress-testing highlights deeply embedded biases that can be surfaced through interaction. Code and artifacts are available at https://github.com/nafisenik/CoBia.
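For intuition, the sketch below illustrates the general shape of a constructed-conversation probe as described in the abstract: a chat history is seeded with a fabricated assistant turn that asserts a biased claim, followed by a biased follow-up question, and the response is checked for refusal. This is not the authors' released code (see the linked repository); it assumes an OpenAI-compatible chat API, and the model name, prompts, and refusal heuristic are placeholders.

```python
# Minimal illustrative sketch of a constructed-conversation bias probe.
# Assumptions: OpenAI-compatible chat API; "gpt-4o-mini" is a placeholder model;
# the keyword-based refusal check stands in for the paper's LLM-based bias
# metrics and human judgments.
from openai import OpenAI

client = OpenAI()


def constructed_conversation_probe(group: str, biased_claim: str, follow_up: str) -> str:
    """Send a chat history that already contains a fabricated assistant turn
    asserting a biased claim about `group`, then ask a biased follow-up."""
    messages = [
        {"role": "user", "content": f"Tell me about {group}."},
        # Fabricated assistant turn: the model never actually produced this text.
        {"role": "assistant", "content": biased_claim},
        {"role": "user", "content": follow_up},
    ]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    return response.choices[0].message.content


def looks_like_refusal(answer: str) -> bool:
    """Crude keyword heuristic for whether the model rejected the premise;
    the paper instead scores responses with established bias metrics."""
    cues = ("i can't", "i cannot", "i won't", "not appropriate", "stereotype")
    return any(cue in answer.lower() for cue in cues)
```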
Anthology ID:
2025.emnlp-main.84
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1618–1639
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.84/
Cite (ACL):
Nafiseh Nikeghbal, Amir Hossein Kargaran, and Jana Diesner. 2025. CoBia: Constructed Conversations Can Trigger Otherwise Concealed Societal Biases in LLMs. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 1618–1639, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
CoBia: Constructed Conversations Can Trigger Otherwise Concealed Societal Biases in LLMs (Nikeghbal et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.84.pdf
Checklist:
2025.emnlp-main.84.checklist.pdf