Benchmarking Intersectional Biases in NLP

John Lalor, Yi Yang, Kendall Smith, Nicole Forsgren, Ahmed Abbasi


Abstract
There has been a recent wave of work assessing the fairness of machine learning models in general, and more specifically, of natural language processing (NLP) models built using machine learning techniques. While much work has highlighted biases embedded in state-of-the-art language models, and more recent efforts have focused on how to debias them, research assessing the fairness and performance of biased/debiased models on downstream prediction tasks has been limited. Moreover, most prior work has emphasized bias along a single dimension such as gender or race. In this work, we benchmark multiple NLP models with regard to their fairness and predictive performance across a variety of NLP tasks. In particular, we assess intersectional bias: fairness across multiple demographic dimensions. The results show that while current debiasing strategies fare well in terms of the fairness-accuracy trade-off (generally preserving predictive power in debiased models), they are unable to effectively alleviate bias in downstream tasks. Furthermore, this bias is often amplified across dimensions (i.e., intersections). We conclude by highlighting possible causes and making recommendations for future NLP debiasing research.
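To make the notion of intersectional bias concrete, the sketch below (a minimal illustration, not the paper's method or released code) computes a simple subgroup-disparity measure: the largest difference in true positive rate across the subgroups formed by crossing two demographic attributes. All function names, the choice of metric, and the toy data are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): measure an
# intersectional TPR gap across subgroups formed by crossing two
# demographic attributes. A single-attribute audit would use only
# set(attr_a) or set(attr_b); crossing them exposes gaps that can
# be hidden within each marginal group.
from itertools import product

def tpr(y_true, y_pred):
    """True positive rate over a subgroup; None if it has no positives."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return None
    return sum(p for _, p in positives) / len(positives)

def intersectional_tpr_gap(y_true, y_pred, attr_a, attr_b):
    """Max minus min TPR over all (attr_a x attr_b) intersections."""
    rates = []
    for a, b in product(set(attr_a), set(attr_b)):
        idx = [i for i in range(len(y_true))
               if attr_a[i] == a and attr_b[i] == b]
        r = tpr([y_true[i] for i in idx], [y_pred[i] for i in idx])
        if r is not None:
            rates.append(r)
    return max(rates) - min(rates) if rates else 0.0

# Hypothetical toy example: gap across gender x race intersections.
y_true = [1, 1, 1, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 1, 0, 1, 0, 1]
gender = ["f", "f", "m", "m", "f", "m", "f", "m"]
race   = ["a", "b", "a", "b", "a", "b", "b", "a"]
print(intersectional_tpr_gap(y_true, y_pred, gender, race))  # 1.0
```

In this toy example every marginal group looks reasonable, but one intersection (f, b) has a TPR of zero, which is the kind of amplified cross-dimensional disparity the abstract describes.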
Anthology ID:
2022.naacl-main.263
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
3598–3609
URL:
https://aclanthology.org/2022.naacl-main.263
DOI:
10.18653/v1/2022.naacl-main.263
Cite (ACL):
John Lalor, Yi Yang, Kendall Smith, Nicole Forsgren, and Ahmed Abbasi. 2022. Benchmarking Intersectional Biases in NLP. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3598–3609, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Benchmarking Intersectional Biases in NLP (Lalor et al., NAACL 2022)
PDF:
https://preview.aclanthology.org/ingest-acl-2023-videos/2022.naacl-main.263.pdf
Software:
2022.naacl-main.263.software.zip
Video:
https://preview.aclanthology.org/ingest-acl-2023-videos/2022.naacl-main.263.mp4
Code:
nd-hal/naacl-2022
Data:
Psychometric NLP