Abstract
NLP has advanced greatly alongside the proliferation of Transformer-based pre-trained language models. To adapt to a downstream task, a pre-trained language model must be fine-tuned on a sufficient supply of annotated examples. In recent years, Adapter-based fine-tuning methods have expanded the applicability of pre-trained language models by substantially lowering the number of annotated examples required. However, existing Adapter-based methods still fail to yield meaningful results in the few-shot regime, where only a few annotated examples are provided. In this study, we present a meta-learning-driven low-rank adapter pooling method, called AMAL, for leveraging pre-trained language models even with just a few data points. We evaluate our method on five text classification benchmark datasets. The results show that AMAL significantly outperforms previous few-shot learning methods and achieves a new state-of-the-art.
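This page does not detail AMAL's architecture, but the abstract's core ingredient, a low-rank adapter, has a standard form. The sketch below is a minimal, hypothetical illustration of a low-rank (bottleneck) adapter as a small trainable residual added to a frozen Transformer's hidden states; the dimensions (`hidden_dim=768`, `rank=8`), the GELU nonlinearity, and the class name are assumptions for illustration, not the paper's actual code, and the meta-learned adapter pooling that defines AMAL is not shown.

```python
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """Hypothetical bottleneck adapter: a low-rank residual update applied
    to the hidden states of a frozen pre-trained Transformer layer.
    (Illustrative sketch only; see the paper for AMAL's actual design.)"""

    def __init__(self, hidden_dim: int = 768, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(hidden_dim, rank)  # project down to low rank
        self.up = nn.Linear(rank, hidden_dim)    # project back up
        self.act = nn.GELU()
        # Zero-init the up-projection so the adapter starts as the identity.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Only the adapter's ~2 * hidden_dim * rank parameters are trained;
        # the pre-trained model's weights stay frozen.
        return h + self.up(self.act(self.down(h)))

# Usage: wrap the output of a frozen Transformer layer.
h = torch.randn(4, 128, 768)   # (batch, seq_len, hidden)
adapter = LowRankAdapter()
out = adapter(h)               # same shape, with a small trainable residual
```

The low rank keeps the number of trainable parameters per layer tiny relative to the backbone, which is what makes adapter-based fine-tuning attractive when annotated examples are scarce.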
- Anthology ID: 2022.emnlp-main.709
- Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
- Month: December
- Year: 2022
- Address: Abu Dhabi, United Arab Emirates
- Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 10381–10389
- URL: https://aclanthology.org/2022.emnlp-main.709
- DOI: 10.18653/v1/2022.emnlp-main.709
- Cite (ACL): S. K. Hong and Tae Young Jang. 2022. AMAL: Meta Knowledge-Driven Few-Shot Adapter Learning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10381–10389, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal): AMAL: Meta Knowledge-Driven Few-Shot Adapter Learning (Hong & Jang, EMNLP 2022)
- PDF: https://preview.aclanthology.org/nschneid-patch-2/2022.emnlp-main.709.pdf