You reap what you sow: On the Challenges of Bias Evaluation Under Multilingual Settings

Zeerak Talat, Aurélie Névéol, Stella Biderman, Miruna Clinciu, Manan Dey, Shayne Longpre, Sasha Luccioni, Maraim Masoud, Margaret Mitchell, Dragomir Radev, Shanya Sharma, Arjun Subramonian, Jaesung Tae, Samson Tan, Deepak Tunuguntla, Oskar Van Der Wal


Abstract
Evaluating bias, fairness, and social impact in monolingual language models is a difficult task. This challenge is further compounded when language modeling occurs in a multilingual context. Considering the implications of evaluation biases for large multilingual language models, we situate the discussion of bias evaluation within a wider context of social scientific research with computational work. We highlight three dimensions of developing multilingual bias evaluation frameworks: (1) increasing transparency through documentation, (2) expanding targets of bias beyond gender, and (3) addressing cultural differences that exist between languages. We further discuss the power dynamics and consequences of training large language models and recommend that researchers remain cognizant of the ramifications of developing such technologies.
Anthology ID:
2022.bigscience-1.3
Volume:
Proceedings of BigScience Episode #5 -- Workshop on Challenges & Perspectives in Creating Large Language Models
Month:
May
Year:
2022
Address:
virtual+Dublin
Editors:
Angela Fan, Suzana Ilic, Thomas Wolf, Matthias Gallé
Venue:
BigScience
Publisher:
Association for Computational Linguistics
Pages:
26–41
URL:
https://aclanthology.org/2022.bigscience-1.3
DOI:
10.18653/v1/2022.bigscience-1.3
Cite (ACL):
Zeerak Talat, Aurélie Névéol, Stella Biderman, Miruna Clinciu, Manan Dey, Shayne Longpre, Sasha Luccioni, Maraim Masoud, Margaret Mitchell, Dragomir Radev, Shanya Sharma, Arjun Subramonian, Jaesung Tae, Samson Tan, Deepak Tunuguntla, and Oskar Van Der Wal. 2022. You reap what you sow: On the Challenges of Bias Evaluation Under Multilingual Settings. In Proceedings of BigScience Episode #5 -- Workshop on Challenges & Perspectives in Creating Large Language Models, pages 26–41, virtual+Dublin. Association for Computational Linguistics.
Cite (Informal):
You reap what you sow: On the Challenges of Bias Evaluation Under Multilingual Settings (Talat et al., BigScience 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-1/2022.bigscience-1.3.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-1/2022.bigscience-1.3.mp4
Data
CrowS-Pairs