Abstract
Using topic modeling and lexicon-based word similarity, we find that stories generated by GPT-3 exhibit many known gender stereotypes. Generated stories depict different topics and descriptions depending on GPT-3’s perceived gender of the character in a prompt, with feminine characters more likely to be associated with family and appearance, and described as less powerful than masculine characters, even when associated with high power verbs in a prompt. Our study raises questions on how one can avoid unintended social biases when using large language models for storytelling.
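As a rough illustration of the lexicon-based word similarity idea mentioned in the abstract, the sketch below scores the words attached to a story character against small topical lexicons (family, appearance, power) using pretrained GloVe vectors loaded through gensim. The lexicons, the embedding model choice, and the example word lists are illustrative assumptions, not the paper's actual lexicons or implementation (see the linked code repository for that).

```python
# Minimal sketch of lexicon-based word similarity for character descriptions.
# NOTE: the lexicons below are hypothetical placeholders, not the paper's lexicons.
import gensim.downloader as api
import numpy as np

wv = api.load("glove-wiki-gigaword-100")  # pretrained word vectors

LEXICONS = {
    "family": ["mother", "children", "wedding", "husband", "home"],
    "appearance": ["beautiful", "hair", "dress", "slender", "smile"],
    "power": ["commanded", "seized", "led", "conquered", "ruled"],
}

def lexicon_scores(character_words):
    """Mean cosine similarity between a character's descriptive words
    and each lexicon category (out-of-vocabulary words are skipped)."""
    scores = {}
    for category, lexicon in LEXICONS.items():
        sims = [wv.similarity(w, l)
                for w in character_words if w in wv
                for l in lexicon if l in wv]
        scores[category] = float(np.mean(sims)) if sims else 0.0
    return scores

# Toy example: words describing two generated characters
print(lexicon_scores(["gown", "kitchen", "daughter", "lovely"]))
print(lexicon_scores(["sword", "ordered", "fortress", "commander"]))
```

Comparing such per-category scores across characters whose prompts imply different genders is one simple way to surface the associations the abstract describes.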
- Anthology ID: 2021.nuse-1.5
- Volume: Proceedings of the Third Workshop on Narrative Understanding
- Month: June
- Year: 2021
- Address: Virtual
- Editors: Nader Akoury, Faeze Brahman, Snigdha Chaturvedi, Elizabeth Clark, Mohit Iyyer, Lara J. Martin
- Venues: NUSE | WNU
- Publisher: Association for Computational Linguistics
- Pages: 48–55
- URL: https://aclanthology.org/2021.nuse-1.5
- DOI: 10.18653/v1/2021.nuse-1.5
- Cite (ACL): Li Lucy and David Bamman. 2021. Gender and Representation Bias in GPT-3 Generated Stories. In Proceedings of the Third Workshop on Narrative Understanding, pages 48–55, Virtual. Association for Computational Linguistics.
- Cite (Informal): Gender and Representation Bias in GPT-3 Generated Stories (Lucy & Bamman, NUSE-WNU 2021)
- PDF: https://preview.aclanthology.org/nschneid-patch-3/2021.nuse-1.5.pdf
- Code: lucy3/gpt3_gender