Empowering parameter-efficient transfer learning by recognizing the kernel structure in self-attention

Yifan Chen, Devamanyu Hazarika, Mahdi Namazifar, Yang Liu, Di Jin, Dilek Hakkani-Tur


Abstract
The massive number of trainable parameters in pre-trained language models (PLMs) makes it hard to deploy them to multiple downstream tasks. To address this issue, parameter-efficient transfer learning methods have been proposed to tune only a few parameters during fine-tuning while freezing the rest. This paper examines existing methods along this line through the kernel lens. Motivated by the connection between self-attention in transformer-based PLMs and kernel learning, we propose kernel-wise adapters, namely Kernel-mix, that utilize the kernel structure in self-attention to guide the assignment of the tunable parameters. These adapters use guidelines found in classical kernel learning and enable separate parameter tuning for each attention head. Our empirical results, over a diverse set of natural language generation and understanding tasks, show that our proposed adapters can match or improve upon the strong performance of existing baselines.
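To make the idea of per-head tunable parameters concrete, here is a minimal PyTorch sketch of self-attention with low-rank adapter updates assigned separately to each attention head, while the pre-trained projections stay frozen. This is an illustrative assumption in the spirit of the abstract, not the authors' actual Kernel-mix implementation (see the linked repository); the class and parameter names are hypothetical.

```python
# Illustrative sketch only: per-head low-rank adapters on the query/value
# projections of multi-head self-attention (assumed structure, not the
# paper's exact Kernel-mix code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PerHeadAdapterAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int, rank: int = 4):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        # Stand-ins for frozen pre-trained projections.
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        for p in (*self.q_proj.parameters(), *self.k_proj.parameters(),
                  *self.v_proj.parameters()):
            p.requires_grad = False
        # Tunable low-rank factors, one pair per head, initialized so the
        # correction starts at zero (LoRA-style convention).
        self.q_down = nn.Parameter(torch.randn(n_heads, d_model, rank) * 0.02)
        self.q_up = nn.Parameter(torch.zeros(n_heads, rank, self.d_head))
        self.v_down = nn.Parameter(torch.randn(n_heads, d_model, rank) * 0.02)
        self.v_up = nn.Parameter(torch.zeros(n_heads, rank, self.d_head))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        split = lambda z: z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        q, k, v = split(self.q_proj(x)), split(self.k_proj(x)), split(self.v_proj(x))
        # Per-head low-rank corrections added to the frozen projections.
        q = q + torch.einsum("btd,hdr,hre->bhte", x, self.q_down, self.q_up)
        v = v + torch.einsum("btd,hdr,hre->bhte", x, self.v_down, self.v_up)
        # Softmax attention, i.e. an exponential kernel over query/key pairs.
        attn = F.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        out = attn @ v
        return out.transpose(1, 2).reshape(b, t, -1)


# Usage: only the per-head adapter factors would be updated during fine-tuning.
layer = PerHeadAdapterAttention(d_model=768, n_heads=12, rank=4)
y = layer(torch.randn(2, 16, 768))
print(y.shape)  # torch.Size([2, 16, 768])
```

Because each head owns its own down/up factors, the budget of tunable parameters can be distributed unevenly across heads, which is the kind of assignment decision the kernel view is used to guide.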
Anthology ID:
2022.findings-naacl.102
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1375–1388
URL:
https://aclanthology.org/2022.findings-naacl.102
DOI:
10.18653/v1/2022.findings-naacl.102
Cite (ACL):
Yifan Chen, Devamanyu Hazarika, Mahdi Namazifar, Yang Liu, Di Jin, and Dilek Hakkani-Tur. 2022. Empowering parameter-efficient transfer learning by recognizing the kernel structure in self-attention. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1375–1388, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Empowering parameter-efficient transfer learning by recognizing the kernel structure in self-attention (Chen et al., Findings 2022)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/2022.findings-naacl.102.pdf
Video:
https://preview.aclanthology.org/naacl-24-ws-corrections/2022.findings-naacl.102.mp4
Code:
ychen-stat-ml/kernel-adapters
Data:
CoQA, GLUE, MultiNLI, SST, SST-2