On the Transferability of Adversarial Attacks against Neural Text Classifier

Liping Yuan, Xiaoqing Zheng, Yi Zhou, Cho-Jui Hsieh, Kai-Wei Chang


Abstract
Deep neural networks are vulnerable to adversarial attacks, where a small perturbation to an input alters the model prediction. In many cases, malicious inputs intentionally crafted for one model can fool another model. In this paper, we present the first study to systematically investigate the transferability of adversarial examples for text classification models and explore how various factors, including network architecture, tokenization scheme, word embedding, and model capacity, affect the transferability of adversarial examples. Based on these studies, we propose a genetic algorithm to find an ensemble of models that can be used to induce adversarial examples to fool almost all existing models. Such adversarial examples reflect the defects of the learning process and the data bias in the training set. Finally, we derive word replacement rules that can be used for model diagnostics from these adversarial examples.
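The abstract's ensemble search can be pictured with a small sketch. The following is a hypothetical illustration only, not the authors' implementation: a toy genetic algorithm over binary masks that select a subset of source models, with a placeholder fitness function standing in for the paper's actual transferability objective (which would require running attacks and measuring victim-model error).

```python
import random

# Illustrative sketch (assumed, not the paper's code): evolve a binary mask
# over a pool of candidate source models; the mask encodes which models form
# the attack ensemble.
NUM_MODELS = 12      # size of the candidate model pool (assumed)
POP_SIZE = 20        # candidate ensembles per generation
GENERATIONS = 30
MUTATION_RATE = 0.1

def fitness(mask):
    """Placeholder transferability score for the ensemble encoded by `mask`.
    In practice this would craft adversarial examples on the selected models
    and measure how often they fool held-out victim models."""
    size = sum(mask)
    return size * (NUM_MODELS - size)  # toy surrogate rewarding mid-sized ensembles

def crossover(a, b):
    point = random.randint(1, NUM_MODELS - 1)
    return a[:point] + b[point:]

def mutate(mask):
    return [bit ^ (random.random() < MUTATION_RATE) for bit in mask]

def genetic_search():
    population = [[random.randint(0, 1) for _ in range(NUM_MODELS)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]  # keep the fittest half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = genetic_search()
    print("selected model indices:", [i for i, bit in enumerate(best) if bit])
```

With a real fitness function, the selected indices would name the source models whose joint adversarial examples transfer best to unseen classifiers.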
Anthology ID:
2021.emnlp-main.121
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1612–1625
URL:
https://aclanthology.org/2021.emnlp-main.121
DOI:
10.18653/v1/2021.emnlp-main.121
Cite (ACL):
Liping Yuan, Xiaoqing Zheng, Yi Zhou, Cho-Jui Hsieh, and Kai-Wei Chang. 2021. On the Transferability of Adversarial Attacks against Neural Text Classifier. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1612–1625, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
On the Transferability of Adversarial Attacks against Neural Text Classifier (Yuan et al., EMNLP 2021)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2021.emnlp-main.121.pdf
Video:
https://preview.aclanthology.org/ingestion-script-update/2021.emnlp-main.121.mp4
Data
AG News, SNLI