Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models

Kun Zhou, Wayne Xin Zhao, Sirui Wang, Fuzheng Zhang, Wei Wu, Ji-Rong Wen


Abstract
Recent works have shown that powerful pre-trained language models (PLMs) can be fooled by small perturbations or intentional attacks. To solve this issue, various data augmentation techniques have been proposed to improve the robustness of PLMs. However, it is still challenging to augment semantically relevant examples with sufficient diversity. In this work, we present Virtual Data Augmentation (VDA), a general framework for robustly fine-tuning PLMs. Based on the original token embeddings, we construct a multinomial mixture for augmenting virtual data embeddings, where a masked language model guarantees the semantic relevance and Gaussian noise provides the augmentation diversity. Furthermore, a regularized training strategy is proposed to balance the two aspects. Extensive experiments on six datasets show that our approach is able to improve the robustness of PLMs and alleviate the performance degradation under adversarial attacks. Our code and data are publicly available at https://github.com/RUCAIBox/VDA.
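To make the augmentation idea concrete, below is a minimal sketch of the mechanism the abstract describes, assuming PyTorch and Hugging Face Transformers: a masked language model scores each position over the vocabulary, Gaussian noise perturbs that distribution, and the virtual embedding is the resulting multinomial mixture of token embeddings. The function name, the noise scale, the injection of noise at the logit level, and the choice of bert-base-uncased are illustrative assumptions, not the authors' exact implementation; see the released code at https://github.com/RUCAIBox/VDA for the real one.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative sketch only; hyperparameters and model choice are assumptions.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
embedding_matrix = mlm.get_input_embeddings().weight  # (vocab_size, hidden)

def virtual_embeddings(text: str, noise_std: float = 0.01) -> torch.Tensor:
    """Return virtual data embeddings for one sentence.

    Each position's MLM distribution over the vocabulary is perturbed
    with Gaussian noise (diversity) and renormalized; the virtual
    embedding is the probability-weighted mixture of token embeddings
    (semantic relevance). Noise is added to the logits here for
    simplicity; the paper specifies the exact injection point.
    """
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = mlm(**inputs).logits            # (1, seq_len, vocab_size)
    noisy = logits + noise_std * torch.randn_like(logits)
    probs = torch.softmax(noisy, dim=-1)         # multinomial mixture weights
    return probs @ embedding_matrix              # (1, seq_len, hidden)

# Example: the result can be fed to a downstream model via inputs_embeds.
augmented = virtual_embeddings("The movie was surprisingly good.")
print(augmented.shape)  # torch.Size([1, seq_len, hidden])

In the paper, such virtual embeddings are consumed alongside the original examples during fine-tuning, with the regularized training strategy balancing semantic relevance against augmentation diversity.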
Anthology ID:
2021.emnlp-main.315
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
3875–3887
URL:
https://aclanthology.org/2021.emnlp-main.315
DOI:
10.18653/v1/2021.emnlp-main.315
Cite (ACL):
Kun Zhou, Wayne Xin Zhao, Sirui Wang, Fuzheng Zhang, Wei Wu, and Ji-Rong Wen. 2021. Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3875–3887, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models (Zhou et al., EMNLP 2021)
PDF:
https://preview.aclanthology.org/naacl24-info/2021.emnlp-main.315.pdf
Video:
https://preview.aclanthology.org/naacl24-info/2021.emnlp-main.315.mp4
Code:
rucaibox/vda
Data:
GLUE, MRPC, QNLI