Abstract
Parameter-efficient fine-tuning (PEFT) has shown its effectiveness in adapting pre-trained language models to downstream tasks while updating only a small number of parameters. Despite this success, most existing methods adapt to each task independently, without considering knowledge transfer between tasks, and are limited in low-data regimes. To overcome this issue, we propose Prototype-based HyperAdapter (PHA), a novel framework built on adapter-tuning and hypernetworks. It introduces an instance-dense retriever and a prototypical hypernetwork to generate the conditional modules in a sample-efficient manner. This yields performance comparable to or better than existing PEFT methods on multi-task learning and few-shot transfer learning. More importantly, as the available data size gets smaller, our method outperforms other strong baselines by a large margin. Based on our extensive empirical experiments across various datasets, we demonstrate that PHA strikes a better trade-off between trainable parameters, accuracy on downstream tasks, and sample efficiency. Our code is publicly available at https://github.com/Bumble666/PHA.
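The abstract only describes the mechanism at a high level. As a rough illustration of the idea of conditioning adapter generation on task prototypes, the sketch below shows a hypernetwork that maps a learned prototype embedding to the weights of a bottleneck adapter. All names (e.g. PrototypeHyperAdapter), dimensions, and the retrieval step are our own assumptions for illustration; this is not the authors' implementation.

import torch
import torch.nn as nn

class PrototypeHyperAdapter(nn.Module):
    """Minimal sketch: a hypernetwork that maps a task prototype
    embedding to the weights of a bottleneck adapter.
    Sizes and names are illustrative, not taken from the paper."""

    def __init__(self, d_model=768, bottleneck=64, proto_dim=128):
        super().__init__()
        self.d_model, self.bottleneck = d_model, bottleneck
        # Hypernetwork: prototype -> flattened adapter parameters.
        n_params = 2 * d_model * bottleneck  # down- and up-projection
        self.hyper = nn.Sequential(
            nn.Linear(proto_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_params),
        )

    def forward(self, hidden, prototype):
        # hidden: (batch, seq, d_model); prototype: (proto_dim,)
        w = self.hyper(prototype)
        w_down = w[: self.d_model * self.bottleneck].view(self.d_model, self.bottleneck)
        w_up = w[self.d_model * self.bottleneck :].view(self.bottleneck, self.d_model)
        # Bottleneck adapter with a residual connection.
        return hidden + torch.relu(hidden @ w_down) @ w_up


# Usage sketch: pick the nearest prototype for a batch of instances,
# then adapt the frozen backbone's hidden states with generated weights.
prototypes = nn.Embedding(8, 128)           # one learned prototype per task/cluster
adapter = PrototypeHyperAdapter()
hidden = torch.randn(4, 16, 768)            # hidden states from a frozen encoder layer
instance_emb = torch.randn(4, 128)          # pooled instance representations
sims = instance_emb @ prototypes.weight.T   # instance-to-prototype similarity
proto = prototypes(sims.mean(0).argmax())   # nearest prototype for this batch
out = adapter(hidden, proto)
print(out.shape)  # torch.Size([4, 16, 768])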
- Anthology ID: 2023.emnlp-main.280
- Volume: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Houda Bouamor, Juan Pino, Kalika Bali
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 4603–4615
- URL: https://aclanthology.org/2023.emnlp-main.280
- DOI: 10.18653/v1/2023.emnlp-main.280
- Cite (ACL): Hao Zhao, Jie Fu, and Zhaofeng He. 2023. Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4603–4615, Singapore. Association for Computational Linguistics.
- Cite (Informal): Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning (Zhao et al., EMNLP 2023)
- PDF: https://preview.aclanthology.org/corrections-2024-07/2023.emnlp-main.280.pdf