Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework

Lifan Yuan, YiChi Zhang, Yangyi Chen, Wei Wei


Abstract
Despite recent success on various tasks, deep learning techniques still perform poorly on adversarial examples with small perturbations. While optimization-based methods for adversarial attacks are well-explored in the field of computer vision, it is impractical to directly apply them in natural language processing due to the discrete nature of the text. To address the problem, we propose a unified framework to extend the existing optimization-based adversarial attack methods in the vision domain to craft textual adversarial samples. In this framework, continuously optimized perturbations are added to the embedding layer and amplified in the forward propagation process. Then the final perturbed latent representations are decoded with a masked language model head to obtain potential adversarial samples. In this paper, we instantiate our framework with an attack algorithm named Textual Projected Gradient Descent (T-PGD). We find our algorithm effective even using proxy gradient information. Therefore, we perform the more challenging transfer black-box attack and conduct comprehensive experiments to evaluate our attack algorithm with several models on three benchmark datasets. Experimental results demonstrate that our method achieves overall better performance and produces more fluent and grammatical adversarial samples compared to strong baseline methods. The code and data are available at https://github.com/Phantivia/T-PGD.
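The abstract outlines the core attack loop: a perturbation is optimized in the continuous embedding space with PGD-style gradient steps, and the perturbed latent representation is decoded back to text with a masked language model head. Below is a minimal sketch of that loop, assuming PyTorch and Hugging Face Transformers; the `attack_loss` callback, step size `alpha`, radius `epsilon`, and `num_steps` are illustrative placeholders, not the authors' exact objective or hyperparameters.

```python
# Sketch of an embedding-space PGD attack decoded with an MLM head (T-PGD-like idea).
# Assumptions: PyTorch + transformers; `attack_loss` stands in for the adversarial
# objective (e.g., the loss of a victim or proxy classifier) and is not defined here.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
mlm.eval()


def t_pgd_sketch(text, attack_loss, epsilon=1.0, alpha=0.1, num_steps=20):
    """Optimize a continuous perturbation on the embedding layer, then decode
    the perturbed representation with the masked language model head."""
    inputs = tokenizer(text, return_tensors="pt")
    input_ids, attention_mask = inputs["input_ids"], inputs["attention_mask"]

    # Original input embeddings; only the perturbation `delta` is optimized.
    embeds = mlm.get_input_embeddings()(input_ids).detach()
    delta = torch.zeros_like(embeds, requires_grad=True)

    for _ in range(num_steps):
        outputs = mlm(inputs_embeds=embeds + delta, attention_mask=attention_mask)
        # Hypothetical attack objective computed from the perturbed forward pass.
        loss = attack_loss(outputs.logits, input_ids)
        loss.backward()

        # Gradient-ascent step followed by a simple box projection (PGD style).
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
        delta.grad.zero_()

    # Decode the perturbed latent representation into a candidate adversarial text.
    with torch.no_grad():
        logits = mlm(inputs_embeds=embeds + delta, attention_mask=attention_mask).logits
    adv_ids = logits.argmax(dim=-1)
    return tokenizer.decode(adv_ids[0], skip_special_tokens=True)
```

In the transfer black-box setting described above, the attack objective would be driven by a proxy model whose gradients substitute for the victim's; the sketch leaves that objective as a callback and keeps only the embedding-space PGD loop and the MLM decoding step.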
Anthology ID:
2023.findings-acl.446
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7132–7146
URL:
https://aclanthology.org/2023.findings-acl.446
DOI:
10.18653/v1/2023.findings-acl.446
Cite (ACL):
Lifan Yuan, YiChi Zhang, Yangyi Chen, and Wei Wei. 2023. Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework. In Findings of the Association for Computational Linguistics: ACL 2023, pages 7132–7146, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework (Yuan et al., Findings 2023)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/2023.findings-acl.446.pdf