RedHerring Attack: Testing the Reliability of Attack Detection

Jonathan Rusert


Abstract
In response to adversarial text attacks, attack detection models have been proposed and shown to successfully identify text modified by adversaries. Attack detection models can be leveraged to provide an additional check for NLP models and to signal when human input is needed. However, the reliability of these models has not yet been thoroughly explored. Thus, we propose and test a novel attack setting and attack, RedHerring. RedHerring aims to make attack detection models unreliable by modifying a text so that the detection model predicts an attack while the classifier remains correct. This creates a tension between the classifier and the detector. If a human sees that the detector is giving an “incorrect” prediction while the classifier gives a correct one, the human will see the detector as unreliable. We test this novel threat model on 4 datasets against 3 detectors defending 4 classifiers. We find that RedHerring is able to drop detection accuracy by 20–71 points while maintaining (or improving) classifier accuracy. As an initial defense, we propose a simple confidence check which requires no retraining of the classifier or detector and greatly increases detection accuracy. This novel threat model offers new insights into how adversaries may target detection models.
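The threat model and defense described in the abstract can be caricatured as a post-hoc decision rule. The sketch below is purely illustrative and hypothetical, not the paper's implementation: it assumes the confidence check compares the classifier's top-class probability against a fixed threshold and suppresses the detector's "attack" flag when the classifier still looks healthy (the actual check in the paper may differ).

```python
# Hypothetical sketch of a confidence-check defense against a
# RedHerring-style decoy: the adversary perturbs text so the detector
# fires while the classifier stays correct. All names and the threshold
# value are illustrative assumptions, not taken from the paper.

def confidence_check(classifier_conf: float,
                     detector_flags_attack: bool,
                     threshold: float = 0.9) -> bool:
    """Return True if the input should be treated as attacked.

    If the detector flags an attack but the classifier remains highly
    confident, the flag is treated as a likely false alarm (a decoy
    aimed at undermining trust in the detector) and suppressed.
    """
    if detector_flags_attack and classifier_conf >= threshold:
        return False  # likely decoy: classifier confidence is still high
    return detector_flags_attack


# Example usage with made-up confidence values:
print(confidence_check(0.97, True))   # high confidence, flag suppressed
print(confidence_check(0.55, True))   # low confidence, flag kept
print(confidence_check(0.97, False))  # no flag, nothing to suppress
```

The appeal of such a rule, as the abstract notes, is that it requires no retraining of either model: it only post-processes their existing outputs.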
Anthology ID:
2025.emnlp-main.591
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
11704–11719
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.591/
Cite (ACL):
Jonathan Rusert. 2025. RedHerring Attack: Testing the Reliability of Attack Detection. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 11704–11719, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
RedHerring Attack: Testing the Reliability of Attack Detection (Rusert, EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.591.pdf
Checklist:
 2025.emnlp-main.591.checklist.pdf