A Classification-Guided Approach for Adversarial Attacks against Neural Machine Translation

Sahar Sadrizadeh, Ljiljana Dolamic, Pascal Frossard


Abstract
Neural Machine Translation (NMT) models have been shown to be vulnerable to adversarial attacks, wherein carefully crafted perturbations of the input can mislead the target model. In this paper, we introduce ACT, a novel classifier-guided adversarial attack framework against NMT systems. In our attack, the adversary aims to craft meaning-preserving adversarial examples whose translations into the target language by the NMT model belong to a different class than those of the original sentences. Unlike previous attacks, our approach has a more substantial effect on the translation: it alters the overall meaning, which in turn leads to a different class as determined by an oracle classifier. To evaluate the robustness of NMT models to our attack, we propose enhancements to existing black-box word-replacement attacks that incorporate the output translations of the target NMT model and the output logits of a classifier into the attack process. Extensive experiments, including a comparison with existing untargeted attacks, show that our attack is considerably more successful in altering the class of the output translation and has a greater effect on the translation itself. This new paradigm reveals vulnerabilities of NMT systems by focusing on the class of the output translation rather than on translation quality alone, as traditionally studied.
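To make the attack setting concrete, the sketch below shows a minimal greedy black-box word-replacement loop in the spirit described above: the source sentence is perturbed one word at a time, each candidate is translated by the target NMT model, and the substitution that most reduces the classifier's probability for the original translation's class is kept until the predicted class flips. This is an illustrative sketch only, not the authors' released implementation; the model names (Helsinki-NLP/opus-mt-en-fr, nlptown/bert-base-multilingual-uncased-sentiment), the candidates dictionary of substitute words, and the translate/class_probs/attack helpers are assumptions, and the semantic-similarity constraints that enforce meaning preservation in the paper are omitted here.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Target NMT model and oracle classifier (illustrative choices, not those used in the paper).
translator = pipeline("translation_en_to_fr", model="Helsinki-NLP/opus-mt-en-fr")
clf_name = "nlptown/bert-base-multilingual-uncased-sentiment"
clf_tok = AutoTokenizer.from_pretrained(clf_name)
clf = AutoModelForSequenceClassification.from_pretrained(clf_name)

def translate(sentence):
    # Query the black-box NMT model for the target-language translation.
    return translator(sentence)[0]["translation_text"]

def class_probs(text):
    # Softmax over the classifier's logits for the translated text.
    with torch.no_grad():
        logits = clf(**clf_tok(text, return_tensors="pt", truncation=True)).logits[0]
    return torch.softmax(logits, dim=-1)

def attack(sentence, candidates, max_swaps=3):
    # `candidates` maps a source word to hypothetical meaning-preserving
    # substitutes; building and filtering this set by semantic similarity
    # is the part of the attack this sketch leaves out.
    orig_class = int(class_probs(translate(sentence)).argmax())
    words = sentence.split()
    for _ in range(max_swaps):
        best = None  # (probability of original class, position, substitute)
        for i, word in enumerate(words):
            for sub in candidates.get(word, []):
                perturbed = " ".join(words[:i] + [sub] + words[i + 1:])
                p_orig = class_probs(translate(perturbed))[orig_class].item()
                if best is None or p_orig < best[0]:
                    best = (p_orig, i, sub)
        if best is None:
            return " ".join(words)  # no candidate substitutions available
        _, i, sub = best
        words[i] = sub  # keep the swap that most lowers the original class probability
        adversarial = " ".join(words)
        if int(class_probs(translate(adversarial)).argmax()) != orig_class:
            return adversarial  # class of the output translation has changed
    return " ".join(words)

Under these assumptions, an evaluation would call attack(src, candidates) for each source sentence and report how often the class of the output translation flips, together with a similarity measure between the original and adversarial source sentences.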
Anthology ID: 2024.eacl-long.70
Volume: Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: March
Year: 2024
Address: St. Julian’s, Malta
Editors: Yvette Graham, Matthew Purver
Venue: EACL
Publisher: Association for Computational Linguistics
Pages: 1160–1177
URL: https://aclanthology.org/2024.eacl-long.70
Cite (ACL): Sahar Sadrizadeh, Ljiljana Dolamic, and Pascal Frossard. 2024. A Classification-Guided Approach for Adversarial Attacks against Neural Machine Translation. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1160–1177, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal): A Classification-Guided Approach for Adversarial Attacks against Neural Machine Translation (Sadrizadeh et al., EACL 2024)
PDF: https://preview.aclanthology.org/emnlp-22-attachments/2024.eacl-long.70.pdf