Are fairness metric scores enough to assess discrimination biases in machine learning?
Fanny Jourdan, Laurent Risser, Jean-Michel Loubes, Nicholas Asher
Abstract
This paper presents novel experiments that shed light on the shortcomings of current metrics for assessing gender discrimination biases produced by machine learning algorithms on textual data. We focus on the Bios dataset, where the learning task is to predict an individual's occupation from their biography. Such prediction tasks are common in commercial Natural Language Processing (NLP) applications such as automatic job recommendations. We address an important limitation of theoretical discussions of group-wise fairness metrics: they focus on large datasets, although the norm in many industrial NLP applications is to use small to reasonably large linguistic datasets, for which the main practical constraint is to obtain good prediction accuracy. We then question how reliable different popular measures of bias are when the training set is just large enough to learn reasonably accurate predictions. Our experiments sample the Bios dataset and train more than 200 models on samples of different sizes. This allows us to study our results statistically and to confirm that common gender bias indices provide diverging and sometimes unreliable results when applied to relatively small training and test samples. This highlights the crucial importance of variance calculations for producing sound results in this field.
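As an illustration of the kind of variance calculation the abstract argues for, the sketch below bootstraps a common group-wise fairness score, the true-positive-rate (TPR) gap between two gender groups (an equality-of-opportunity style measure), on synthetic evaluation sets of different sizes. This is not the paper's code: the data, the classifier noise level, and the helper names are placeholders chosen only to show how the spread of such a metric widens on small samples.

```python
# Minimal sketch (not the paper's code): bootstrap the variability of a
# group-wise fairness metric -- here the true-positive-rate (TPR) gap
# between two gender groups -- on synthetic data of different sizes.
import numpy as np

rng = np.random.default_rng(0)

def tpr_gap(y_true, y_pred, group):
    """TPR(group=0) - TPR(group=1): an equality-of-opportunity style gap."""
    tprs = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean() if mask.any() else np.nan)
    return tprs[0] - tprs[1]

def bootstrap_gap(y_true, y_pred, group, n_boot=1000):
    """Resample the evaluation set to estimate the gap's sampling spread."""
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample with replacement
        stats.append(tpr_gap(y_true[idx], y_pred[idx], group[idx]))
    stats = np.array(stats)
    return np.nanmean(stats), np.nanstd(stats)

# Synthetic test sets: smaller samples yield a noticeably wider spread.
for n in (200, 2000, 20000):
    group = rng.integers(0, 2, size=n)       # placeholder binary gender label
    y_true = rng.integers(0, 2, size=n)      # placeholder binary target
    noise = rng.random(n) < 0.15             # imperfect classifier predictions
    y_pred = np.where(noise, 1 - y_true, y_true)
    mean_gap, std_gap = bootstrap_gap(y_true, y_pred, group)
    print(f"n={n:6d}  TPR gap = {mean_gap:+.3f} +/- {std_gap:.3f}")
```

Reporting the bootstrap standard deviation alongside the point estimate, as in this sketch, is one simple way to make a fairness score interpretable on the small-to-moderate sample sizes the paper studies.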
- Anthology ID:
- 2023.trustnlp-1.15
- Volume:
- Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)
- Month:
- July
- Year:
- 2023
- Address:
- Toronto, Canada
- Editors:
- Anaelia Ovalle, Kai-Wei Chang, Ninareh Mehrabi, Yada Pruksachatkun, Aram Galstyan, Jwala Dhamala, Apurv Verma, Trista Cao, Anoop Kumar, Rahul Gupta
- Venue:
- TrustNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 163–174
- URL:
- https://aclanthology.org/2023.trustnlp-1.15
- DOI:
- 10.18653/v1/2023.trustnlp-1.15
- Cite (ACL):
- Fanny Jourdan, Laurent Risser, Jean-Michel Loubes, and Nicholas Asher. 2023. Are fairness metric scores enough to assess discrimination biases in machine learning?. In Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023), pages 163–174, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal):
- Are fairness metric scores enough to assess discrimination biases in machine learning? (Jourdan et al., TrustNLP 2023)
- PDF:
- https://aclanthology.org/2023.trustnlp-1.15.pdf