Power(ful) Associations: Rethinking “Stereotype” for NLP

Hannah Devinney


Abstract
The tendency for Natural Language Processing (NLP) technologies to reproduce stereotypical associations, such as associating Black people with criminality or women with care professions, is a site of major concern and, therefore, much study. Stereotyping is a powerful tool of oppression, but the social and linguistic mechanisms behind it are largely ignored in the NLP field. Thus, we fail to effectively challenge stereotypes and the power asymmetries they reinforce. This opinion paper problematizes several common aspects of current work addressing stereotyping in NLP, and offers practicable suggestions for potential forward directions.
Anthology ID:
2025.gebnlp-1.4
Volume:
Proceedings of the 6th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
Month:
August
Year:
2025
Address:
Vienna, Austria
Editors:
Agnieszka Faleńska, Christine Basta, Marta Costa-jussà, Karolina Stańczak, Debora Nozza
Venues:
GeBNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
52–58
URL:
https://preview.aclanthology.org/landing_page/2025.gebnlp-1.4/
DOI:
10.18653/v1/2025.gebnlp-1.4
Cite (ACL):
Hannah Devinney. 2025. Power(ful) Associations: Rethinking “Stereotype” for NLP. In Proceedings of the 6th Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 52–58, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Power(ful) Associations: Rethinking “Stereotype” for NLP (Devinney, GeBNLP 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.gebnlp-1.4.pdf