CPL: Counterfactual Prompt Learning for Vision and Language Models
Xuehai He | Diji Yang | Weixi Feng | Tsu-Jui Fu | Arjun Akula | Varun Jampani | Pradyumna Narayana | Sugato Basu | William Yang Wang | Xin Wang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Prompt tuning is a new few-shot transfer learning technique that only tunes the learnable prompt for pre-trained vision and language models such as CLIP. However, existing prompt tuning methods tend to learn spurious or entangled representations, which leads to poor generalization to unseen concepts. Towards non-spurious and efficient prompt learning from limited examples, this paper presents a novel Counterfactual Prompt Learning (CPL) method for vision and language models, which simultaneously employs counterfactual generation and contrastive learning in a joint optimization framework. In particular, CPL constructs counterfactuals by identifying the minimal non-spurious feature change between semantically similar positive and negative samples that causes a concept change, and learns a more generalizable prompt representation from both factual and counterfactual examples via contrastive learning. Extensive experiments demonstrate that CPL obtains superior few-shot performance on different vision and language tasks compared with previous prompt tuning methods on CLIP. On image classification, we achieve a 3.55% average relative improvement on unseen classes across seven datasets; on image-text retrieval and visual question answering, we gain up to 4.09% and 25.08% relative improvements across three few-shot scenarios on unseen test sets, respectively.
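To make the abstract's mechanism concrete, here is a minimal PyTorch sketch of the two ideas it names: building a counterfactual feature as a minimal blend of a positive and a semantically similar negative image feature, and a contrastive loss that pulls the prompt-conditioned text feature toward the factual feature and away from the counterfactual one. This is an illustrative assumption of how such a pipeline could look, not the authors' released implementation; the mixing weight `u`, the feature dimension, and the single-negative InfoNCE loss are all stand-ins.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of the CPL idea, not the paper's actual code.
# Feature dim, mixing weight `u`, and loss form are assumptions.

def counterfactual_feature(pos_feat, neg_feat, u):
    """Blend a positive image feature with a semantically similar
    negative one; small `u` keeps the feature change minimal."""
    u = u.clamp(0.0, 1.0)
    return (1.0 - u) * pos_feat + u * neg_feat

def contrastive_prompt_loss(img_feat, cf_feat, text_feat, temperature=0.07):
    """InfoNCE with one negative: align the prompt-conditioned text
    feature with the factual image feature, not the counterfactual."""
    img_feat = F.normalize(img_feat, dim=-1)
    cf_feat = F.normalize(cf_feat, dim=-1)
    text_feat = F.normalize(text_feat, dim=-1)
    pos_logit = (img_feat * text_feat).sum(-1) / temperature
    neg_logit = (cf_feat * text_feat).sum(-1) / temperature
    logits = torch.stack([pos_logit, neg_logit], dim=-1)
    labels = torch.zeros(img_feat.size(0), dtype=torch.long)  # index 0 = factual
    return F.cross_entropy(logits, labels)

# Toy usage with random stand-ins for CLIP features (batch 4, dim 512).
torch.manual_seed(0)
pos = torch.randn(4, 512)
neg = torch.randn(4, 512)
text = torch.randn(4, 512, requires_grad=True)  # prompt-conditioned text feats
u = torch.full((4, 1), 0.1)  # small u -> minimal feature change
cf = counterfactual_feature(pos, neg, u)
loss = contrastive_prompt_loss(pos, cf, text)
loss.backward()  # gradients flow to the learnable prompt via `text`
```

In the paper's joint optimization, gradients from a loss of this shape would reach only the learnable prompt parameters, with the CLIP encoders kept frozen, which is what makes the method a few-shot prompt tuning technique rather than full fine-tuning.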