Multi-Layer Random Perturbation Training for improving Model Generalization Efficiently

Lis Kanashiro Pereira, Yuki Taya, Ichiro Kobayashi


Abstract
We propose a simple yet effective Multi-Layer RAndom Perturbation Training algorithm (RAPT) to enhance model robustness and generalization. The key idea is to apply randomly sampled noise to each input to generate label-preserving artificial input points. To encourage the model to generate more diverse examples, the noise is added to a combination of the model layers. The model then regularizes the posterior difference between clean and noisy inputs. We apply RAPT to robust and efficient BERT training and conduct comprehensive fine-tuning experiments on GLUE tasks. Our results show that RAPT outperforms both the standard fine-tuning approach and an adversarial training method, while requiring 22% less training time.
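The abstract outlines the training objective: perturb a randomly chosen model layer with random noise and keep the model's predictions on clean and perturbed inputs close. The snippet below is a minimal PyTorch-style sketch of that idea, not the authors' implementation; the `model` interface, the `perturb_layer` and `noise_scale` arguments, and the choice of symmetric KL divergence are illustrative assumptions.

```python
# Minimal sketch of multi-layer random perturbation training (assumptions noted above).
import random
import torch.nn.functional as F

def rapt_loss(model, inputs, labels, noise_scale=1e-3, num_layers=12):
    # Standard task loss on the clean input.
    clean_logits = model(inputs)
    task_loss = F.cross_entropy(clean_logits, labels)

    # Pick a random layer and add random noise to its hidden states
    # (assumes the model exposes hypothetical `perturb_layer`/`noise_scale` hooks).
    layer = random.randint(0, num_layers - 1)
    noisy_logits = model(inputs, perturb_layer=layer, noise_scale=noise_scale)

    # Symmetric KL between clean and noisy posteriors encourages the
    # perturbation to be label-preserving.
    p = F.log_softmax(clean_logits, dim=-1)
    q = F.log_softmax(noisy_logits, dim=-1)
    consistency = (F.kl_div(p, q, log_target=True, reduction="batchmean")
                   + F.kl_div(q, p, log_target=True, reduction="batchmean"))

    return task_loss + consistency
```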
Anthology ID:
2021.blackboxnlp-1.23
Volume:
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Venue:
BlackboxNLP
Publisher:
Association for Computational Linguistics
Pages:
303–310
URL:
https://aclanthology.org/2021.blackboxnlp-1.23
DOI:
10.18653/v1/2021.blackboxnlp-1.23
Cite (ACL):
Lis Kanashiro Pereira, Yuki Taya, and Ichiro Kobayashi. 2021. Multi-Layer Random Perturbation Training for improving Model Generalization Efficiently. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 303–310, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Multi-Layer Random Perturbation Training for improving Model Generalization Efficiently (Kanashiro Pereira et al., BlackboxNLP 2021)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2021.blackboxnlp-1.23.pdf
Data
CoLA, MRPC, MultiNLI, SST