What’s Not Said Still Hurts: A Description-Based Evaluation Framework for Measuring Social Bias in LLMs

Jinhao Pan, Chahat Raj, Ziyu Yao, Ziwei Zhu

Abstract
Large Language Models (LLMs) often exhibit social biases inherited from their training data. While existing benchmarks evaluate bias in a term-based manner, through direct associations between demographic terms and bias terms, LLMs have become increasingly adept at avoiding such overtly biased responses, producing seemingly low levels of bias. However, biases persist in subtler, contextually hidden forms that traditional benchmarks fail to capture. We introduce the Description-based Bias Benchmark (DBB), a novel dataset designed to assess bias at the semantic level, where bias concepts are hidden within naturalistic, subtly framed real-world scenarios rather than surfaced through superficial terms. We analyze six state-of-the-art LLMs and find that while they reduce bias at the term level, they continue to reinforce biases in these nuanced settings. Data, code, and results are available at https://github.com/JP-25/Description-based-Bias-Benchmark.
Anthology ID:
2025.findings-emnlp.76
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1438–1459
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.76/
DOI:
10.18653/v1/2025.findings-emnlp.76
Cite (ACL):
Jinhao Pan, Chahat Raj, Ziyu Yao, and Ziwei Zhu. 2025. What’s Not Said Still Hurts: A Description-Based Evaluation Framework for Measuring Social Bias in LLMs. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 1438–1459, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
What’s Not Said Still Hurts: A Description-Based Evaluation Framework for Measuring Social Bias in LLMs (Pan et al., Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.76.pdf
Checklist:
2025.findings-emnlp.76.checklist.pdf