Hidden State Variability of Pretrained Language Models Can Guide Computation Reduction for Transfer Learning

Shuo Xie, Jiahao Qiu, Ankita Pasad, Li Du, Qing Qu, Hongyuan Mei


Abstract
When transferring a pretrained language model, conventional approaches attach a task-specific classifier to the top layer and adapt all the pretrained layers. We investigate whether one could make a task-specific selection of which subset of layers to adapt and where to place the classifier. The goal is to reduce the computation cost of transfer learning methods (e.g., fine-tuning or adapter-tuning) without sacrificing performance. We propose to select layers based on the variability of their hidden states given a task-specific corpus. We say a layer is already “well-specialized” in a task if the within-class variability of its hidden states is low relative to the between-class variability. Our variability metric is cheap to compute and does not need any training or hyperparameter tuning. It is robust to data imbalance and data scarcity. Extensive experiments on the GLUE benchmark demonstrate that selecting layers based on our metric can yield significantly stronger performance than using the same number of top layers and often matches the performance of fine-tuning or adapter-tuning the entire language model.
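As a rough illustration of the idea in the abstract (not the paper's exact formulation), a per-layer score could compare within-class to between-class scatter of that layer's hidden states; layers with a low ratio would count as "well-specialized". The function name, the use of mean-pooled per-example representations, and the trace-ratio form below are assumptions made for this sketch.

# Hypothetical sketch: score one layer by within-class vs. between-class variability.
import numpy as np

def layer_variability_ratio(hidden_states, labels):
    """hidden_states: (num_examples, hidden_dim) per-example representations for one
    layer (e.g., mean-pooled token states); labels: (num_examples,) class ids.
    Returns trace(within-class scatter) / trace(between-class scatter);
    a lower value suggests the layer is better specialized for the task."""
    hidden_states = np.asarray(hidden_states, dtype=np.float64)
    labels = np.asarray(labels)
    global_mean = hidden_states.mean(axis=0)
    within, between = 0.0, 0.0
    for c in np.unique(labels):
        class_states = hidden_states[labels == c]
        class_mean = class_states.mean(axis=0)
        # within-class scatter: spread of examples around their class mean
        within += ((class_states - class_mean) ** 2).sum()
        # between-class scatter: spread of class means around the global mean
        between += len(class_states) * ((class_mean - global_mean) ** 2).sum()
    return within / max(between, 1e-12)

Computing this score for every layer over a task-specific corpus requires only forward passes, which is consistent with the abstract's claim that the metric needs no training or hyperparameter tuning.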
Anthology ID:
2022.findings-emnlp.422
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5750–5768
URL:
https://aclanthology.org/2022.findings-emnlp.422
DOI:
10.18653/v1/2022.findings-emnlp.422
Cite (ACL):
Shuo Xie, Jiahao Qiu, Ankita Pasad, Li Du, Qing Qu, and Hongyuan Mei. 2022. Hidden State Variability of Pretrained Language Models Can Guide Computation Reduction for Transfer Learning. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5750–5768, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Hidden State Variability of Pretrained Language Models Can Guide Computation Reduction for Transfer Learning (Xie et al., Findings 2022)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/2022.findings-emnlp.422.pdf