Abstract
Effectively finetuning pretrained language models (PLMs) is critical for their success on downstream tasks. However, PLMs risk overfitting the pretraining tasks and data, which usually differ from the target downstream tasks. Such gaps can be difficult for existing PLM finetuning methods to overcome and may lead to suboptimal performance. In this paper, we propose a very simple yet effective method named NoisyTune that helps finetune PLMs better on downstream tasks by adding a small amount of noise to the PLM parameters before finetuning. More specifically, we propose a matrix-wise perturbation method that adds uniform noise to each parameter matrix, scaled by its standard deviation, so that the varied characteristics of different types of parameters in PLMs are taken into account. Extensive experiments on both the GLUE English benchmark and the XTREME multilingual benchmark show that NoisyTune can consistently improve the finetuning of different PLMs on different downstream tasks.
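The matrix-wise perturbation described in the abstract amounts to W̃ = W + U(-λ, λ) · std(W) for each parameter matrix W, where λ controls the relative noise intensity. Below is a minimal PyTorch sketch of that idea based only on the abstract's description; the function name `noisy_tune`, the default `noise_intensity`, and the scalar-parameter guard are illustrative choices, not the authors' released implementation.

```python
import torch


def noisy_tune(model: torch.nn.Module, noise_intensity: float = 0.15) -> torch.nn.Module:
    """Perturb each parameter matrix with uniform noise scaled by its own
    standard deviation before finetuning (matrix-wise perturbation).

    `noise_intensity` plays the role of the relative noise strength lambda;
    the default value here is illustrative, not taken from the paper.
    """
    with torch.no_grad():
        for param in model.parameters():
            if param.numel() <= 1:
                # Skip scalar parameters, whose standard deviation is undefined.
                continue
            # Uniform noise in [-lambda, lambda], scaled by this matrix's std,
            # so parameters of different magnitudes receive proportionate noise.
            noise = (torch.rand_like(param) * 2.0 - 1.0) * noise_intensity * param.std()
            param.add_(noise)
    return model
```

Finetuning then proceeds as usual on the perturbed model, e.g. `model = noisy_tune(AutoModel.from_pretrained("bert-base-uncased"))` followed by a standard training loop (the Hugging Face call here is just one way to obtain a PLM).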
- Anthology ID: 2022.acl-short.76
- Volume: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
- Month: May
- Year: 2022
- Address: Dublin, Ireland
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 680–685
- URL: https://aclanthology.org/2022.acl-short.76
- DOI: 10.18653/v1/2022.acl-short.76
- Cite (ACL): Chuhan Wu, Fangzhao Wu, Tao Qi, and Yongfeng Huang. 2022. NoisyTune: A Little Noise Can Help You Finetune Pretrained Language Models Better. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 680–685, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal): NoisyTune: A Little Noise Can Help You Finetune Pretrained Language Models Better (Wu et al., ACL 2022)
- PDF: https://preview.aclanthology.org/paclic-22-ingestion/2022.acl-short.76.pdf
- Data: GLUE, XTREME