Causally Testing Gender Bias in LLMs: A Case Study on Occupational Bias

Yuen Chen, Vethavikashini Chithrra Raghuram, Justus Mattern, Rada Mihalcea, Zhijing Jin

Abstract
Text generated by large language models (LLMs) has been shown to exhibit a variety of harmful, human-like biases against different demographic groups. These findings motivate research efforts aimed at understanding and measuring such effects. This paper introduces a causal formulation for bias measurement in generative language models. Building on this theoretical foundation, we outline a list of desiderata for designing robust bias benchmarks. We then propose a benchmark, OccuGender, with a bias-measuring procedure to investigate occupational gender bias. We test several state-of-the-art open-source LLMs on OccuGender, including Llama, Mistral, and their instruction-tuned versions. The results show that these models exhibit substantial occupational gender bias. Lastly, we discuss prompting strategies for bias mitigation and an extension of our causal formulation to illustrate the generalizability of our framework.
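The abstract does not spell out the bias-measuring procedure, so the following is only a minimal sketch of one common way to probe occupational gender associations in a causal LM: compare the model's next-token probabilities for gendered pronouns after an occupation-conditioned prompt. The prompt template, occupation list, pronoun pair, and the use of GPT-2 as a stand-in model are illustrative assumptions, not the paper's actual OccuGender setup.

```python
# Hedged sketch: probe pronoun probabilities conditioned on an occupation.
# This is NOT the authors' OccuGender procedure; prompt and word lists are
# assumptions made for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # small stand-in; the paper evaluates Llama, Mistral, etc.
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def pronoun_probs(occupation: str) -> dict:
    # Hypothetical fill-in prompt; the benchmark's real prompts differ.
    prompt = f"My neighbor is a {occupation}. When I saw"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        # Logits at the last position give the next-token distribution.
        logits = model(**inputs).logits[0, -1]
    probs = torch.softmax(logits, dim=-1)
    # " him" and " her" are each a single token in GPT-2's vocabulary.
    return {w.strip(): probs[tokenizer.encode(w)[0]].item()
            for w in (" him", " her")}

for job in ["nurse", "carpenter", "secretary", "engineer"]:
    p = pronoun_probs(job)
    print(f"{job:>10}: P(him)={p['him']:.4f}  P(her)={p['her']:.4f}  "
          f"ratio={p['him'] / p['her']:.2f}")
```

A ratio far from 1 for a given occupation would suggest a gendered association under this particular probe; a robust benchmark like the one proposed here would control for confounds that a single prompt template cannot.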
Anthology ID: 2025.findings-naacl.281
Volume: Findings of the Association for Computational Linguistics: NAACL 2025
Month: April
Year: 2025
Address: Albuquerque, New Mexico
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 4984–5004
URL: https://preview.aclanthology.org/moar-dois/2025.findings-naacl.281/
DOI: 10.18653/v1/2025.findings-naacl.281
Cite (ACL):
Yuen Chen, Vethavikashini Chithrra Raghuram, Justus Mattern, Rada Mihalcea, and Zhijing Jin. 2025. Causally Testing Gender Bias in LLMs: A Case Study on Occupational Bias. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 4984–5004, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Causally Testing Gender Bias in LLMs: A Case Study on Occupational Bias (Chen et al., Findings 2025)
PDF: https://preview.aclanthology.org/moar-dois/2025.findings-naacl.281.pdf