CLIP Models are Few-Shot Learners: Empirical Studies on VQA and Visual Entailment

Haoyu Song, Li Dong, Weinan Zhang, Ting Liu, Furu Wei


Abstract
CLIP has shown a remarkable zero-shot capability on a wide range of vision tasks. Previously, CLIP was regarded only as a powerful visual encoder. However, after being pre-trained with language supervision on a large number of image-caption pairs, CLIP itself should also have acquired some few-shot abilities for vision-language tasks. In this work, we empirically show that CLIP can be a strong vision-language few-shot learner by leveraging the power of language. We first evaluate CLIP’s zero-shot performance on a typical visual question answering task and demonstrate a zero-shot cross-modality transfer capability of CLIP on the visual entailment task. Then we propose a parameter-efficient fine-tuning strategy to boost the few-shot performance on the VQA task. We achieve competitive zero/few-shot results on the visual question answering and visual entailment tasks without introducing any additional pre-training procedure.
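As a rough illustration of the zero-shot VQA setup sketched in the abstract (a minimal sketch, not the authors' released code; the prompt template, image path, and answer candidates below are placeholders), each candidate answer can be paired with the question to form a caption-like prompt, and CLIP's image-text similarity can then rank the candidates:

import torch
import clip  # open-source OpenAI CLIP package
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)

# Placeholder image and question; in practice the VQA answer vocabulary would supply the candidates.
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
question = "what color is the umbrella"
answer_candidates = ["red", "blue", "yellow", "black"]

# Turn each question-answer pair into a text prompt (template wording is illustrative only).
prompts = [f"question: {question} answer: {a}" for a in answer_candidates]
text = clip.tokenize(prompts).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)  # similarity of the image to each prompt
    probs = logits_per_image.softmax(dim=-1)

print(answer_candidates[probs.argmax().item()])

The few-shot, parameter-efficient fine-tuning described in the paper updates only a small subset of parameters on top of this frozen backbone; see the released software package linked below for the actual implementation.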
Anthology ID:
2022.acl-long.421
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
6088–6100
URL:
https://preview.aclanthology.org/build-pipeline-with-new-library/2022.acl-long.421/
DOI:
10.18653/v1/2022.acl-long.421
Cite (ACL):
Haoyu Song, Li Dong, Weinan Zhang, Ting Liu, and Furu Wei. 2022. CLIP Models are Few-Shot Learners: Empirical Studies on VQA and Visual Entailment. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6088–6100, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
CLIP Models are Few-Shot Learners: Empirical Studies on VQA and Visual Entailment (Song et al., ACL 2022)
PDF:
https://preview.aclanthology.org/build-pipeline-with-new-library/2022.acl-long.421.pdf
Software:
 2022.acl-long.421.software.zip
Data:
SNLI-VE, Visual Question Answering