Towards Region-aware Bias Evaluation Metrics

Angana Borah, Aparna Garimella, Rada Mihalcea


Abstract
When exposed to human-generated data, language models are known to learn and amplify societal biases. While previous work has introduced metrics to assess bias in these models, these metrics rely on assumptions that may not hold universally. For instance, a gender bias dimension commonly used by these metrics is family–career, yet this may not be the only common bias dimension in certain regions of the world. In this paper, we identify topical differences in gender bias across regions and propose a region-aware, bottom-up approach to bias assessment. Several of our proposed region-aware gender bias dimensions are found to align with human perceptions of gender bias in those regions.
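The family–career dimension mentioned in the abstract comes from the family of association tests popularized by WEAT (Caliskan et al., 2017), which measures how strongly two target word sets (e.g., female vs. male terms) associate with two attribute word sets (e.g., family vs. career terms) in an embedding space. Below is a minimal sketch of such a WEAT-style effect size, not the paper's actual method; the word lists and the embedding lookup `emb` are illustrative placeholders, and a region-aware variant would swap in bias dimensions discovered bottom-up from regional data.

```python
# Minimal sketch of a WEAT-style association test (Caliskan et al., 2017).
# `emb` is assumed to map words to numpy vectors (e.g., from word2vec);
# the word lists below are hypothetical placeholders.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, emb):
    """s(w, A, B): mean similarity of w to attribute set A minus set B."""
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    """Cohen's-d-style effect size over target sets X, Y and attributes A, B."""
    sx = [association(x, A, B, emb) for x in X]
    sy = [association(y, A, B, emb) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)

# Hypothetical usage: a region-specific topic pair would replace the
# conventional family--career attribute sets.
# X, Y = ["she", "woman", "her"], ["he", "man", "his"]
# A, B = region_topic_words_1, region_topic_words_2
# print(weat_effect_size(X, Y, A, B, emb))
```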
Anthology ID:
2025.c3nlp-1.9
Volume:
Proceedings of the 3rd Workshop on Cross-Cultural Considerations in NLP (C3NLP 2025)
Month:
May
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Vinodkumar Prabhakaran, Sunipa Dev, Luciana Benotti, Daniel Hershcovich, Yong Cao, Li Zhou, Laura Cabello, Ife Adebara
Venues:
C3NLP | WS
Publisher:
Association for Computational Linguistics
Pages:
108–131
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.c3nlp-1.9/
Cite (ACL):
Angana Borah, Aparna Garimella, and Rada Mihalcea. 2025. Towards Region-aware Bias Evaluation Metrics. In Proceedings of the 3rd Workshop on Cross-Cultural Considerations in NLP (C3NLP 2025), pages 108–131, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Towards Region-aware Bias Evaluation Metrics (Borah et al., C3NLP 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.c3nlp-1.9.pdf