Model-tuning Via Prompts Makes NLP Models Adversarially Robust
Mrigank Raman, Pratyush Maini, J. Zico Kolter, Zachary Lipton, Danish Pruthi
Abstract
In recent years, NLP practitioners have converged on the following practice: (i) import an off-the-shelf pretrained (masked) language model; (ii) append a multilayer perceptron atop the CLS token’s hidden representation (with randomly initialized weights); and (iii) fine-tune the entire model on a downstream task (MLP-FT). This procedure has produced massive gains on standard NLP benchmarks, but these models remain brittle, even to mild adversarial perturbations. In this work, we demonstrate surprising gains in adversarial robustness enjoyed by Model-tuning Via Prompts (MVP), an alternative method of adapting to downstream tasks. Rather than appending an MLP head to make output predictions, MVP appends a prompt template to the input and makes predictions via text infilling/completion. Across 5 NLP datasets, 4 adversarial attacks, and 3 different models, MVP improves performance against adversarial substitutions by an average of 8% over standard methods and even outperforms state-of-the-art adversarial-training-based defenses by 3.5%. By combining MVP with adversarial training, we achieve further improvements in adversarial robustness while maintaining performance on unperturbed examples. Finally, we conduct ablations to investigate the mechanism underlying these gains. Notably, we find that the vulnerability of MLP-FT can be attributed chiefly to the misalignment between pre-training and fine-tuning tasks, and to the randomly initialized MLP parameters.
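To make the contrast with MLP-FT concrete, here is a minimal sketch of MVP-style prediction using Hugging Face transformers. It is an illustration under assumed choices (a roberta-base backbone, the template "It was <mask>.", and the verbalizers "terrible"/"great"), not necessarily the paper's exact configuration; during MVP fine-tuning the entire model is still trained, with the classification loss computed over these verbalizer logits rather than over a new MLP head.

```python
# Minimal sketch of MVP-style prediction with a masked language model.
# Template and verbalizers below are illustrative assumptions, not
# necessarily the paper's exact choices.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
model.eval()

def mvp_predict(text: str, verbalizers=("terrible", "great")) -> int:
    # Append a cloze-style prompt template to the input instead of
    # adding a randomly initialized MLP head on top of the CLS token.
    prompt = f"{text} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos, :]  # shape: (1, vocab_size)
    # Read the class off the MLM's scores for each verbalizer token
    # (the leading space matters for RoBERTa's BPE vocabulary).
    verbalizer_ids = [
        tokenizer.convert_tokens_to_ids(tokenizer.tokenize(" " + word))[0]
        for word in verbalizers
    ]
    return int(logits[0, verbalizer_ids].argmax())  # index into verbalizers

print(mvp_predict("A delight from start to finish."))  # expect 1 ("great")
```

Because prediction here reuses the pre-trained MLM head rather than freshly initialized parameters, the fine-tuning task stays aligned with pre-training, which is the mechanism the abstract identifies as the source of the robustness gains.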
- Anthology ID: 2023.emnlp-main.576
- Volume: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Houda Bouamor, Juan Pino, Kalika Bali
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 9266–9286
- URL: https://aclanthology.org/2023.emnlp-main.576
- DOI: 10.18653/v1/2023.emnlp-main.576
- Cite (ACL): Mrigank Raman, Pratyush Maini, J. Zico Kolter, Zachary Lipton, and Danish Pruthi. 2023. Model-tuning Via Prompts Makes NLP Models Adversarially Robust. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9266–9286, Singapore. Association for Computational Linguistics.
- Cite (Informal): Model-tuning Via Prompts Makes NLP Models Adversarially Robust (Raman et al., EMNLP 2023)
- PDF: https://aclanthology.org/2023.emnlp-main.576.pdf