Change My Mind: How Syntax-based Hate Speech Recognizer Can Uncover Hidden Motivations Based on Different Viewpoints

Michele Mastromattei, Valerio Basile, Fabio Massimo Zanzotto


Abstract
Hate speech recognizers may mislabel sentences by failing to account for the different opinions that society holds on selected topics. In this paper, we show how explainable machine learning models based on syntax can help to understand the motivations that lead a sentence to be perceived as offensive by a certain demographic group. By comparing and contrasting the results, we highlight the key points that cause a sentence to be labeled as hate speech and how these vary across different ethnic groups.
Anthology ID:
2022.nlperspectives-1.15
Volume:
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
Month:
June
Year:
2022
Address:
Marseille, France
Editors:
Gavin Abercrombie, Valerio Basile, Sara Tonelli, Verena Rieser, Alexandra Uma
Venue:
NLPerspectives
Publisher:
European Language Resources Association
Pages:
117–125
URL:
https://aclanthology.org/2022.nlperspectives-1.15
Cite (ACL):
Michele Mastromattei, Valerio Basile, and Fabio Massimo Zanzotto. 2022. Change My Mind: How Syntax-based Hate Speech Recognizer Can Uncover Hidden Motivations Based on Different Viewpoints. In Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022, pages 117–125, Marseille, France. European Language Resources Association.
Cite (Informal):
Change My Mind: How Syntax-based Hate Speech Recognizer Can Uncover Hidden Motivations Based on Different Viewpoints (Mastromattei et al., NLPerspectives 2022)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/2022.nlperspectives-1.15.pdf