Intersectional Bias in Japanese Large Language Models from a Contextualized Perspective
Hitomi Yanaka, Xinqi He, Lu Jie, Namgi Han, Sunjin Oh, Ryoma Kumon, Yuma Matsuoka, Kazuhiko Watabe, Yuko Itatsu
Abstract
A growing number of studies have examined social bias in rapidly developing large language models (LLMs). Although most of these studies have focused on bias arising from a single social attribute, research in social science has shown that social bias often takes the form of intersectionality: a constitutive and contextualized perspective on bias arising from combinations of social attributes. In this study, we construct the Japanese benchmark inter-JBBQ, designed to evaluate intersectional bias in LLMs in a question-answering setting. Using inter-JBBQ to analyze GPT-4o and Swallow, we find that biased outputs vary with context even for the same combination of social attributes.
- Anthology ID:
- 2025.gebnlp-1.2
- Volume:
- Proceedings of the 6th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
- Month:
- August
- Year:
- 2025
- Address:
- Vienna, Austria
- Editors:
- Agnieszka Faleńska, Christine Basta, Marta Costa-jussà, Karolina Stańczak, Debora Nozza
- Venues:
- GeBNLP | WS
- Publisher:
- Association for Computational Linguistics
- Pages:
- 18–32
- URL:
- https://preview.aclanthology.org/landing_page/2025.gebnlp-1.2/
- DOI:
- 10.18653/v1/2025.gebnlp-1.2
- Cite (ACL):
- Hitomi Yanaka, Xinqi He, Lu Jie, Namgi Han, Sunjin Oh, Ryoma Kumon, Yuma Matsuoka, Kazuhiko Watabe, and Yuko Itatsu. 2025. Intersectional Bias in Japanese Large Language Models from a Contextualized Perspective. In Proceedings of the 6th Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 18–32, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal):
- Intersectional Bias in Japanese Large Language Models from a Contextualized Perspective (Yanaka et al., GeBNLP 2025)
- PDF:
- https://preview.aclanthology.org/landing_page/2025.gebnlp-1.2.pdf