Abstract
Parameter-efficient fine-tuning has attracted considerable attention in recent studies. In this work, we investigate the capability of different transformer modules in transferring knowledge from a pre-trained model to a downstream task. Our empirical results suggest that every transformer module is a winning ticket: fine-tuning a specific module while keeping the rest of the network frozen achieves performance comparable to full fine-tuning. Among the different modules in LMs, LayerNorms exhibit a significant capacity for transfer learning; with only 0.003% of parameters updateable in the layer-wise analysis, they achieve acceptable performance on various target tasks. We argue that the performance of LayerNorms can be attributed to their high-magnitude weights compared to other components in a pre-trained model.
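To make the setup described in the abstract concrete, the sketch below illustrates one way to fine-tune only the LayerNorm modules of a pre-trained transformer while the rest of the network stays frozen. This is a minimal illustration, not the authors' released code; the model name (`bert-base-uncased`), the task head name (`classifier`), and the optimizer settings are assumptions made for the example.

```python
# Illustrative sketch: freeze every parameter of a pre-trained model except the
# LayerNorm weights/biases (and the randomly initialized task head), then fine-tune
# as usual. Model, head name, and hyperparameters are assumptions, not the paper's setup.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Freeze everything first.
for param in model.parameters():
    param.requires_grad = False

# Unfreeze only LayerNorm modules, plus the task head so the classifier can be learned.
for name, module in model.named_modules():
    if isinstance(module, torch.nn.LayerNorm) or name == "classifier":
        for param in module.parameters():
            param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable} / {total} ({100 * trainable / total:.4f}%)")

# The optimizer only receives the unfrozen parameters.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

The same pattern applies to any other module studied in the paper (e.g., attention or feed-forward blocks) by swapping the `isinstance` check for the corresponding module class.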
- Anthology ID: 2022.emnlp-main.726
- Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
- Month: December
- Year: 2022
- Address: Abu Dhabi, United Arab Emirates
- Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 10617–10625
- URL: https://aclanthology.org/2022.emnlp-main.726
- DOI: 10.18653/v1/2022.emnlp-main.726
- Cite (ACL): Mohammad AkbarTajari, Sara Rajaee, and Mohammad Taher Pilehvar. 2022. An Empirical Study on the Transferability of Transformer Modules in Parameter-efficient Fine-tuning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10617–10625, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal): An Empirical Study on the Transferability of Transformer Modules in Parameter-efficient Fine-tuning (AkbarTajari et al., EMNLP 2022)
- PDF: https://preview.aclanthology.org/emnlp22-frontmatter/2022.emnlp-main.726.pdf