Bias assessment for experts in discrimination, not in computer science

Laura Alonso Alemany, Luciana Benotti, Hernán Maina, Lucía Gonzalez, Lautaro Martínez, Beatriz Busaniche, Alexia Halvorsen, Amanda Rojo, Mariela Rajngewerc


Abstract
Approaches to bias assessment usually require such technical skills that, by design, they leave discrimination experts out. In this paper we present EDIA, a tool that enables experts in discrimination to explore social biases in word embeddings and masked language models. Experts can then characterize those biases so that their presence can be assessed more systematically and actions can be planned to address them. They can work interactively to assess the effects of different characterizations of bias in a given word embedding or language model, which helps turn informal intuitions into concrete resources for systematic testing.
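As a rough illustration of the kind of probe a tool like EDIA automates for non-programmers, the following minimal sketch (not EDIA's actual code; the model name and example sentence are illustrative assumptions) compares the probabilities a masked language model assigns to two contrasting fill-ins for the same context, using Hugging Face's fill-mask pipeline:

```python
# Minimal sketch (not EDIA's implementation): probe a masked language
# model by comparing the probabilities it assigns to contrasting
# fill-ins for the same context. Model and sentence are illustrative.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

sentence = "The nurse said that [MASK] would be back soon."
# Restrict predictions to the two words being contrasted.
for result in fill(sentence, targets=["he", "she"]):
    print(f"{result['token_str']:>4}: p = {result['score']:.4f}")
```

A large gap between the two probabilities would suggest a gendered association for this context; EDIA's contribution is letting discrimination experts run and refine such comparisons interactively, without writing code like the above.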
Anthology ID:
2023.c3nlp-1.10
Volume:
Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP)
Month:
May
Year:
2023
Address:
Dubrovnik, Croatia
Editors:
Sunipa Dev, Vinodkumar Prabhakaran, David Adelani, Dirk Hovy, Luciana Benotti
Venue:
C3NLP
Publisher:
Association for Computational Linguistics
Pages:
91–106
URL:
https://aclanthology.org/2023.c3nlp-1.10
DOI:
10.18653/v1/2023.c3nlp-1.10
Cite (ACL):
Laura Alonso Alemany, Luciana Benotti, Hernán Maina, Lucía Gonzalez, Lautaro Martínez, Beatriz Busaniche, Alexia Halvorsen, Amanda Rojo, and Mariela Rajngewerc. 2023. Bias assessment for experts in discrimination, not in computer science. In Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP), pages 91–106, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal):
Bias assessment for experts in discrimination, not in computer science (Alonso Alemany et al., C3NLP 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/2023.c3nlp-1.10.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-2/2023.c3nlp-1.10.mp4