Don’t be a Fool: Pooling Strategies in Offensive Language Detection from User-Intended Adversarial Attacks

Seunguk Yu, Juhwan Choi, YoungBin Kim


Abstract
Offensive language detection is an important task for filtering out abusive expressions and improving online user experiences. However, malicious users often attempt to evade filtering systems by introducing textual noise. In this paper, we characterize these evasions as user-intended adversarial attacks that insert special symbols or exploit distinctive features of the Korean language. Furthermore, we introduce simple yet effective layer-wise pooling strategies to defend against the proposed attacks, focusing on the preceding layers rather than only the last layer to capture both offensiveness and token embeddings. We demonstrate that these pooling strategies are more robust to performance degradation even as the attack rate increases, without training directly on such patterns. Notably, we found that by employing these pooling strategies, models pre-trained on clean texts can achieve performance comparable to models pre-trained on noisy texts when detecting attacked offensive language.
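The layer-wise pooling idea described in the abstract can be illustrated with a short sketch: instead of classifying from the final encoder layer alone, representations are pooled across the preceding layers as well. The snippet below averages the [CLS] vector over all hidden layers of a pre-trained encoder. The model name `klue/bert-base`, the function `layerwise_pooled_embedding`, and the choice of mean pooling over [CLS] vectors are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of layer-wise pooling over transformer hidden states
# (illustrative only; the paper's pooling strategies may differ).
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "klue/bert-base"  # assumed Korean encoder, not confirmed by the abstract
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True)

def layerwise_pooled_embedding(text: str) -> torch.Tensor:
    """Average the [CLS] representation across all encoder layers,
    rather than using only the last layer."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states: tuple of (num_layers + 1) tensors, each (1, seq_len, hidden)
    hidden_states = torch.stack(outputs.hidden_states, dim=0)
    cls_per_layer = hidden_states[:, 0, 0, :]  # (num_layers + 1, hidden)
    return cls_per_layer.mean(dim=0)           # layer-wise mean pooling

embedding = layerwise_pooled_embedding("예시 문장입니다.")
print(embedding.shape)
```

A sentence-level classifier head could then be trained on this pooled vector; the intuition from the abstract is that earlier layers retain token-level information that helps when surface forms are perturbed by user-intended attacks.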
Anthology ID:
2024.findings-naacl.219
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3456–3467
URL:
https://aclanthology.org/2024.findings-naacl.219
DOI:
10.18653/v1/2024.findings-naacl.219
Cite (ACL):
Seunguk Yu, Juhwan Choi, and YoungBin Kim. 2024. Don’t be a Fool: Pooling Strategies in Offensive Language Detection from User-Intended Adversarial Attacks. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 3456–3467, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Don’t be a Fool: Pooling Strategies in Offensive Language Detection from User-Intended Adversarial Attacks (Yu et al., Findings 2024)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2024.findings-naacl.219.pdf