Conservative Bias in Large Language Models: Measuring Relation Predictions

Toyin Aguda, Erik Wilson, Allan Anzagira, Simerjot Kaur, Charese Smiley


Abstract
Large language models (LLMs) exhibit pronounced conservative bias in relation extraction tasks, frequently defaulting to the no_relation label when an appropriate option is unavailable. While this behavior helps prevent incorrect relation assignments, our analysis reveals that it also leads to significant information loss when reasoning is not explicitly included in the output. We systematically evaluate this trade-off across multiple prompts, datasets, and relation types, introducing the concept of Hobson’s choice to capture scenarios where models opt for safe but uninformative labels over hallucinated ones. Our findings suggest that conservative bias occurs twice as often as hallucination. To quantify this effect, we use SBERT and LLM prompts to capture the semantic similarity between conservative-bias behaviors in constrained prompts and the labels generated from semi-constrained and open-ended prompts.
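The abstract's SBERT-based comparison can be illustrated with a minimal sketch: embed the label a model produces under an open-ended prompt and the gold label it avoided under a constrained prompt, then score their cosine similarity. The checkpoint name and example strings below are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the SBERT similarity check described in the abstract.
# The model checkpoint and label strings are placeholders, not the paper's setup.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed SBERT checkpoint

gold_label = "org:founded_by"          # relation missing from the constrained option set
open_ended_label = "company founder"   # what the model produced when unconstrained

embeddings = model.encode([gold_label, open_ended_label], convert_to_tensor=True)
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()

# A high score suggests the constrained no_relation answer reflects conservative
# bias (the model recognized the relation) rather than a genuinely absent relation.
print(f"cosine similarity: {similarity:.3f}")
```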
Anthology ID:
2025.findings-acl.973
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
18989–18998
URL:
https://preview.aclanthology.org/corrections-2025-08/2025.findings-acl.973/
DOI:
10.18653/v1/2025.findings-acl.973
Cite (ACL):
Toyin Aguda, Erik Wilson, Allan Anzagira, Simerjot Kaur, and Charese Smiley. 2025. Conservative Bias in Large Language Models: Measuring Relation Predictions. In Findings of the Association for Computational Linguistics: ACL 2025, pages 18989–18998, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Conservative Bias in Large Language Models: Measuring Relation Predictions (Aguda et al., Findings 2025)
PDF:
https://preview.aclanthology.org/corrections-2025-08/2025.findings-acl.973.pdf