Defining and Evaluating Fair Natural Language Generation
Abstract
Our work focuses on the biases that emerge in the natural language generation (NLG) task of sentence completion. In this paper, we introduce a mathematical framework of fairness for NLG followed by an evaluation of gender biases in two state-of-the-art language models. Our analysis provides a theoretical formulation for biases in NLG and empirical evidence that existing language generation models embed gender bias.
- Anthology ID:
- 2020.winlp-1.27
- Volume:
- Proceedings of the Fourth Widening Natural Language Processing Workshop
- Month:
- July
- Year:
- 2020
- Address:
- Seattle, USA
- Venue:
- WiNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 107–109
- URL:
- https://aclanthology.org/2020.winlp-1.27
- DOI:
- 10.18653/v1/2020.winlp-1.27
- Cite (ACL):
- Catherine Yeo and Alyssa Chen. 2020. Defining and Evaluating Fair Natural Language Generation. In Proceedings of the Fourth Widening Natural Language Processing Workshop, pages 107–109, Seattle, USA. Association for Computational Linguistics.
- Cite (Informal):
- Defining and Evaluating Fair Natural Language Generation (Yeo & Chen, WiNLP 2020)