Exploring the Vulnerability of Natural Language Processing Models via Universal Adversarial Texts

Xinzhe Li, Ming Liu, Xingjun Ma, Longxiang Gao


Abstract
Universal adversarial texts (UATs) are short text units that can substantially alter the predictions of NLP models. Recent studies on universal adversarial attacks assume access to a dataset for the task, which is unrealistic. We propose two types of Data-Free Adjusted Gradient (DFAG) attacks to show that effective UATs can be generated with only one arbitrary example, which could be manually crafted. Based on the proposed DFAG attacks, this paper explores the vulnerability of commonly used NLP models with respect to two factors: network architecture and pre-trained embeddings. Our empirical studies on three text classification datasets reveal that: 1) CNN-based models are the most vulnerable to UATs while self-attention models are the most robust, 2) the vulnerability of CNN and LSTM models and the robustness of self-attention models can be attributed to whether they rely on training-data artifacts for their predictions, and 3) pre-trained embeddings expose models to both the universal adversarial attack and the UAT transfer attack.
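To make the attack family concrete, the following is a minimal, illustrative sketch of the generic gradient-guided trigger search that universal adversarial attacks build on (a HotFlip-style first-order token swap), not the paper's exact DFAG algorithm: each candidate replacement token is scored by the first-order Taylor approximation of the loss change, `(e_w - e_cur) · grad`, and the token that most increases the loss is chosen. The embedding matrix and gradient here are random stand-ins for values a real model would supply.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 50, 8

# Toy embedding matrix; in practice this is the model's input embedding table.
embeddings = rng.normal(size=(vocab_size, dim))

def best_replacement(cur_token: int, grad: np.ndarray,
                     embeddings: np.ndarray) -> int:
    """Pick the vocab token whose swap most increases the loss,
    using the first-order approximation (e_w - e_cur) . grad."""
    scores = (embeddings - embeddings[cur_token]) @ grad
    return int(np.argmax(scores))

# Stand-in for dLoss/d(embedding of the trigger token) from backprop.
grad = rng.normal(size=dim)
new_token = best_replacement(3, grad, embeddings)
```

Iterating this swap over each position of a short trigger, and re-computing gradients after every change, yields a universal trigger; data-free variants such as DFAG replace the task dataset in that loop with a single (possibly hand-crafted) example.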
Anthology ID:
2021.alta-1.14
Volume:
Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association
Month:
December
Year:
2021
Address:
Online
Venue:
ALTA
Publisher:
Australasian Language Technology Association
Pages:
138–148
URL:
https://aclanthology.org/2021.alta-1.14
Cite (ACL):
Xinzhe Li, Ming Liu, Xingjun Ma, and Longxiang Gao. 2021. Exploring the Vulnerability of Natural Language Processing Models via Universal Adversarial Texts. In Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association, pages 138–148, Online. Australasian Language Technology Association.
Cite (Informal):
Exploring the Vulnerability of Natural Language Processing Models via Universal Adversarial Texts (Li et al., ALTA 2021)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2021.alta-1.14.pdf
Code
xinzhel/attack_alta
Data
AG News, SST