LEA: Meta Knowledge-Driven Self-Attentive Document Embedding for Few-Shot Text Classification

S. K. Hong, Tae Young Jang


Abstract
Text classification has achieved great success with the prosperity of deep learning and pre-trained language models. However, we often encounter labeled-data deficiency in real-world text-classification tasks. To overcome such challenging scenarios, interest in few-shot learning has increased, yet most few-shot text classification studies have difficulty utilizing pre-trained language models. In this study, we propose a novel method for learning how to attend, called LEA, through which meta-level attention aspects are derived based on our meta-learning strategy. This enables the generation of task-specific document embeddings by leveraging pre-trained language models even when only a few labeled instances are given. We evaluate the proposed method on five benchmark datasets. The results show that LEA robustly provides competitive performance compared with recent few-shot learning methods across all datasets.
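For intuition, below is a minimal PyTorch-style sketch of a multi-aspect self-attentive pooling layer of the general kind the abstract describes: several learned attention "aspects" pool token states from a pre-trained encoder into one document embedding. The module name, the number of aspects, and the frozen-encoder assumption are illustrative choices, not the authors' released implementation.

```python
# Hypothetical sketch: multi-aspect self-attentive pooling over
# pre-trained LM token states. Names and dimensions are illustrative;
# this is NOT the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiAspectAttentivePooling(nn.Module):
    """Pools token embeddings into one document embedding using
    K learned attention aspects (an assumed reading of the paper's
    meta-level attention aspects)."""

    def __init__(self, hidden_dim: int = 768, num_aspects: int = 4):
        super().__init__()
        # One learnable query vector per aspect; in a meta-learning
        # setup these would be the parameters shared across episodes.
        self.aspect_queries = nn.Parameter(torch.randn(num_aspects, hidden_dim))

    def forward(self, token_states: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # token_states: (batch, seq_len, hidden_dim) from a frozen
        # pre-trained encoder; mask: (batch, seq_len), 1 for real tokens.
        scores = torch.einsum("bsh,kh->bks", token_states, self.aspect_queries)
        scores = scores.masked_fill(mask.unsqueeze(1) == 0, float("-inf"))
        weights = F.softmax(scores, dim=-1)        # (batch, K, seq_len)
        pooled = torch.einsum("bks,bsh->bkh", weights, token_states)
        # Concatenate the K aspect views into one document embedding.
        return pooled.flatten(start_dim=1)         # (batch, K * hidden_dim)

# Usage sketch: embed two documents in a few-shot episode.
pool = MultiAspectAttentivePooling(hidden_dim=768, num_aspects=4)
states = torch.randn(2, 16, 768)                   # stand-in for BERT outputs
mask = torch.ones(2, 16, dtype=torch.long)
doc_emb = pool(states, mask)                       # shape: (2, 3072)
```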
Anthology ID:
2022.naacl-main.7
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
99–106
URL:
https://aclanthology.org/2022.naacl-main.7
DOI:
10.18653/v1/2022.naacl-main.7
Cite (ACL):
S. K. Hong and Tae Young Jang. 2022. LEA: Meta Knowledge-Driven Self-Attentive Document Embedding for Few-Shot Text Classification. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 99–106, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
LEA: Meta Knowledge-Driven Self-Attentive Document Embedding for Few-Shot Text Classification (Hong & Jang, NAACL 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.naacl-main.7.pdf
Video:
https://preview.aclanthology.org/ingestion-script-update/2022.naacl-main.7.mp4
Data
RCV1