AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks

Chin-Lun Fu, Zih-Ching Chen, Yun-Ru Lee, Hung-yi Lee


Abstract
Transformer-based pre-trained models with millions of parameters require large storage. Recent approaches tackle this shortcoming by training adapters, but these approaches still require a relatively large number of parameters. In this study, AdapterBias, a surprisingly simple yet effective adapter architecture, is proposed. AdapterBias adds a token-dependent shift to the hidden output of transformer layers to adapt to downstream tasks with only a vector and a linear layer. Extensive experiments are conducted to demonstrate the effectiveness of AdapterBias. The experiments show that our proposed method can dramatically reduce trainable parameters compared to previous works, with only a minimal decrease in task performance relative to fully fine-tuned pre-trained models. We further find that AdapterBias automatically learns to assign more significant representation shifts to the tokens related to the task in consideration.
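The abstract describes the adapter as a shared vector plus a linear layer that produces a token-dependent weight for the shift. The following PyTorch sketch illustrates that idea under our own assumptions about shapes and placement; the class and parameter names are hypothetical, and the exact insertion point inside the transformer block follows the paper rather than this snippet.

import torch
import torch.nn as nn

class AdapterBias(nn.Module):
    # Minimal sketch (not the authors' released code): a shared shift vector v
    # and a linear layer producing one weight per token; the token-dependent
    # shift alpha_i * v is added to each token's hidden representation.
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.v = nn.Parameter(torch.zeros(hidden_dim))   # shared shift vector
        self.alpha = nn.Linear(hidden_dim, 1)            # token-dependent weight

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim)
        weights = self.alpha(hidden_states)              # (batch, seq_len, 1)
        return hidden_states + weights * self.v          # broadcast over hidden_dim

# Example usage (hypothetical sizes): only the adapter's parameters would be
# trained while the pre-trained transformer stays frozen.
adapter = AdapterBias(hidden_dim=768)
x = torch.randn(2, 16, 768)   # e.g., one layer's output for a small batch
y = adapter(x)                # same shape, shifted per token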
Anthology ID:
2022.findings-naacl.199
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2608–2621
URL:
https://aclanthology.org/2022.findings-naacl.199
DOI:
10.18653/v1/2022.findings-naacl.199
Cite (ACL):
Chin-Lun Fu, Zih-Ching Chen, Yun-Ru Lee, and Hung-yi Lee. 2022. AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 2608–2621, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks (Fu et al., Findings 2022)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/2022.findings-naacl.199.pdf
Video:
https://preview.aclanthology.org/naacl-24-ws-corrections/2022.findings-naacl.199.mp4
Code:
Allen0307/AdapterBias
Data:
CoLA, GLUE, MultiNLI, QNLI, SQuAD, SST, SST-2