Beyond Negative Stereotypes – Non-Negative Abusive Utterances about Identity Groups and Their Semantic Variants

Tina Lommel, Elisabeth Eder, Josef Ruppenhofer, Michael Wiegand


Abstract
We study a subtype of implicitly abusive language, namely non-negative sentences about identity groups (e.g. "Women make good cooks"), and introduce a novel dataset of such utterances. Beyond profiling these abusive sentences, our dataset includes different semantic variants of the same characteristic attributed to an identity group, allowing us to systematically study the impact of varying degrees of generalization and perspective framing. Similarly, we switch identity groups to assess whether the characteristic described in a sentence is inherently abusive. We also report on classification experiments.
Anthology ID:
2025.acl-long.1363
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
28102–28120
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1363/
Cite (ACL):
Tina Lommel, Elisabeth Eder, Josef Ruppenhofer, and Michael Wiegand. 2025. Beyond Negative Stereotypes – Non-Negative Abusive Utterances about Identity Groups and Their Semantic Variants. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 28102–28120, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Beyond Negative Stereotypes – Non-Negative Abusive Utterances about Identity Groups and Their Semantic Variants (Lommel et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1363.pdf