When does a compliment become sexist? Analysis and classification of ambivalent sexism using twitter data

Akshita Jha, Radhika Mamidi


Abstract
Sexism is prevalent in today’s society, both offline and online, and poses a credible threat to social equality with respect to gender. According to ambivalent sexism theory (Glick and Fiske, 1996), it comes in two forms: Hostile and Benevolent. While hostile sexism is characterized by an explicitly negative attitude, benevolent sexism is more subtle. Previous work on computationally detecting online sexism is restricted to identifying the hostile form. Our objective is to investigate the less pronounced form of sexism demonstrated online. We achieve this by creating and analyzing a dataset of tweets that exhibit benevolent sexism. Using Support Vector Machines (SVM), sequence-to-sequence models and the FastText classifier, we classify tweets into the ‘Hostile’, ‘Benevolent’ or ‘Others’ class depending on the kind of sexism they exhibit. We achieve an F1-score of 87.22% using the FastText classifier. Our work helps analyze and understand the highly prevalent ambivalent sexism on social media.
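The three-way classification setup described in the abstract can be illustrated with a minimal sketch. This is not the authors’ code (their implementation is in the repository linked below); it assumes a TF-IDF + linear SVM pipeline from scikit-learn, and the training strings here are hypothetical placeholders standing in for the labeled tweets in the paper’s dataset.

```python
# Toy sketch of a three-class (Hostile / Benevolent / Others) tweet
# classifier, assuming a TF-IDF + linear SVM pipeline. The example
# strings are placeholders, not data from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder training examples, one discriminative token per class.
train_texts = [
    "hostile example tweet alpha",
    "hostile example tweet beta",
    "benevolent example tweet gamma",
    "benevolent example tweet delta",
    "neutral example tweet epsilon",
    "neutral example tweet zeta",
]
train_labels = ["Hostile", "Hostile", "Benevolent",
                "Benevolent", "Others", "Others"]

# TF-IDF features (word unigrams and bigrams) feeding a linear SVM.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC(C=10))
clf.fit(train_texts, train_labels)

# Predict the class of a new tweet.
print(clf.predict(["hostile example tweet omega"])[0])
```

In the paper itself, FastText (a subword-aware linear classifier) outperformed this kind of SVM baseline, reaching the reported 87.22% F1.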
Anthology ID:
W17-2902
Volume:
Proceedings of the Second Workshop on NLP and Computational Social Science
Month:
August
Year:
2017
Address:
Vancouver, Canada
Editors:
Dirk Hovy, Svitlana Volkova, David Bamman, David Jurgens, Brendan O’Connor, Oren Tsur, A. Seza Doğruöz
Venue:
NLP+CSS
Publisher:
Association for Computational Linguistics
Pages:
7–16
URL:
https://aclanthology.org/W17-2902
DOI:
10.18653/v1/W17-2902
Bibkey:
Cite (ACL):
Akshita Jha and Radhika Mamidi. 2017. When does a compliment become sexist? Analysis and classification of ambivalent sexism using twitter data. In Proceedings of the Second Workshop on NLP and Computational Social Science, pages 7–16, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
When does a compliment become sexist? Analysis and classification of ambivalent sexism using twitter data (Jha & Mamidi, NLP+CSS 2017)
PDF:
https://preview.aclanthology.org/ingest-acl-2023-videos/W17-2902.pdf
Code:
 AkshitaJha/NLP_CSS_2017