UniFine: A Unified and Fine-grained Approach for Zero-shot Vision-Language Understanding

Rui Sun, Zhecan Wang, Haoxuan You, Noel Codella, Kai-Wei Chang, Shih-Fu Chang


Abstract
Vision-language tasks, such as VQA, SNLI-VE, and VCR, are challenging because they require a model to reason about the semantics of both the visual world and natural language. Supervised methods for vision-language tasks have been well studied, but solving these tasks in a zero-shot setting remains less explored. Since Contrastive Language-Image Pre-training (CLIP) has shown remarkable zero-shot performance on image-text matching, previous works exploited this strong zero-shot ability by converting vision-language tasks into image-text matching problems, mainly considering global-level matching (e.g., the whole image or sentence). However, we find that visual and textual fine-grained information, e.g., keywords in the sentence and objects in the image, can be highly informative for semantic understanding. Inspired by this, we propose a unified framework that takes advantage of fine-grained information for zero-shot vision-language learning, covering multiple tasks such as VQA, SNLI-VE, and VCR. Our experiments show that our framework outperforms previous zero-shot methods on VQA and achieves substantial improvements on SNLI-VE and VCR. Furthermore, our ablation studies confirm the effectiveness and generalizability of the proposed method.
Anthology ID:
2023.findings-acl.49
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
778–793
URL:
https://aclanthology.org/2023.findings-acl.49
DOI:
10.18653/v1/2023.findings-acl.49
Cite (ACL):
Rui Sun, Zhecan Wang, Haoxuan You, Noel Codella, Kai-Wei Chang, and Shih-Fu Chang. 2023. UniFine: A Unified and Fine-grained Approach for Zero-shot Vision-Language Understanding. In Findings of the Association for Computational Linguistics: ACL 2023, pages 778–793, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
UniFine: A Unified and Fine-grained Approach for Zero-shot Vision-Language Understanding (Sun et al., Findings 2023)
PDF:
https://preview.aclanthology.org/improve-issue-templates/2023.findings-acl.49.pdf
Video:
https://preview.aclanthology.org/improve-issue-templates/2023.findings-acl.49.mp4