Visual Prompt Tuning for Few-Shot Text Classification
Jingyuan Wen | Yutian Luo | Nanyi Fei | Guoxing Yang | Zhiwu Lu | Hao Jiang | Jie Jiang | Zhao Cao
Proceedings of the 29th International Conference on Computational Linguistics (COLING 2022)
Deploying large-scale pre-trained models in the prompt-tuning paradigm has demonstrated promising performance in few-shot learning. In particular, vision-language pre-training models (VL-PTMs) have been intensively explored in various few-shot downstream tasks. However, most existing works apply VL-PTMs only to visual tasks like image classification, with few attempts made on language tasks like text classification. In few-shot text classification, a feasible paradigm for deploying VL-PTMs is to align the input samples and their category names via the text encoders. However, this wastes the visual information learned by the image encoders of VL-PTMs. To overcome this drawback, we propose a novel method named Visual Prompt Tuning (VPT). To the best of our knowledge, this is the first attempt to deploy a VL-PTM in the few-shot text classification task. The main idea is to generate image embeddings w.r.t. the category names as visual prompts and then add them to the aligning process. Extensive experiments show that our VPT achieves significant improvements under both zero-shot and few-shot settings. Notably, our VPT even outperforms the most recent prompt-tuning methods on five public text classification datasets.
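The aligning process described in the abstract can be illustrated with a short sketch. This is a minimal illustration only, assuming a CLIP-style VL-PTM exposing `encode_text` and `encode_image` methods on already tokenized/preprocessed inputs; the source of the per-category images and the fusion weight `alpha` are assumptions made for illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def vpt_zero_shot_scores(model, text_tokens, name_tokens, name_images, alpha=0.5):
    """Score input texts against categories whose name embeddings are
    fused with image embeddings (the 'visual prompts').

    text_tokens:  tokenized input sentences, shape (B, L)
    name_tokens:  tokenized category names, shape (C, L)
    name_images:  preprocessed images associated with each category, (C, 3, H, W)
    alpha:        assumed text/visual fusion weight (not from the paper)
    """
    with torch.no_grad():
        x = F.normalize(model.encode_text(text_tokens), dim=-1)   # (B, d)
        t = F.normalize(model.encode_text(name_tokens), dim=-1)   # (C, d)
        v = F.normalize(model.encode_image(name_images), dim=-1)  # (C, d)
        # Add the visual prompt to the category-name embedding, then re-normalize,
        # so alignment uses both modalities of the VL-PTM.
        proto = F.normalize(alpha * t + (1.0 - alpha) * v, dim=-1)  # (C, d)
        return x @ proto.t()  # cosine-similarity logits, shape (B, C)

# Usage (hypothetical CLIP-style model and preprocessed inputs):
# predictions = vpt_zero_shot_scores(clip_model, texts, names, images).argmax(-1)
```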