VerbCLIP: Improving Verb Understanding in Vision-Language Models with Compositional Structures

Hadi Wazni, Kin Ian Lo, Mehrnoosh Sadrzadeh


Abstract
Verbs describe the dynamics of interactions between people, objects, and their environments, and they play a crucial role in language formation and understanding. Nonetheless, recent vision-language models like CLIP rely predominantly on nouns and offer only a limited account of verbs. This limitation affects their performance in tasks requiring action recognition and scene understanding. In this work, we introduce VerbCLIP, a verb-centric vision-language model that learns the meanings of verbs based on a compositional approach to statistical machine learning. Our methods significantly outperform CLIP in zero-shot performance on the VALSE, VL-Checklist, and SVO-Probes datasets, with improvements of +2.38%, +3.14%, and +1.47%, without fine-tuning. Fine-tuning resulted in further improvements, with gains of +2.85% and +9.2% on the VALSE and VL-Checklist datasets.
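The abstract does not spell out the compositional operator used by VerbCLIP. As a rough, hedged illustration of the kind of zero-shot evaluation these benchmarks involve, the sketch below scores an image against a subject-verb-object caption with off-the-shelf CLIP, composing separately encoded subject, verb, and object vectors by element-wise multiplication before comparing with the image embedding. The model checkpoint, the multiplication operator, and the helper names (`compose_svo`, `score`) are assumptions chosen for illustration, not the paper's actual method.

```python
# Illustrative sketch only: compositional SVO scoring with off-the-shelf CLIP.
# The composition operator (element-wise product) is a placeholder assumption,
# not the method proposed in the paper.
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def compose_svo(subject: str, verb: str, obj: str) -> torch.Tensor:
    # Encode subject, verb, and object phrases separately with the CLIP text
    # encoder, then combine them element-wise and L2-normalise the result.
    tokens = clip.tokenize([subject, verb, obj]).to(device)
    with torch.no_grad():
        s, v, o = model.encode_text(tokens).float()
    composed = s * v * o
    return composed / composed.norm()

def score(image_path: str, subject: str, verb: str, obj: str) -> float:
    # Cosine similarity between the composed caption vector and the image.
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    with torch.no_grad():
        img = model.encode_image(image).float()
    img = img / img.norm(dim=-1, keepdim=True)
    return (img @ compose_svo(subject, verb, obj)).item()

# A caption with the correct verb should score higher than a foiled one, e.g.
# score("dog.jpg", "a dog", "runs", "in the park") vs.
# score("dog.jpg", "a dog", "sleeps", "in the park")
```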
Anthology ID:
2024.alvr-1.17
Volume:
Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Jing Gu, Tsu-Jui (Ray) Fu, Drew Hudson, Asli Celikyilmaz, William Wang
Venues:
ALVR | WS
Publisher:
Association for Computational Linguistics
Pages:
195–201
URL:
https://aclanthology.org/2024.alvr-1.17
DOI:
10.18653/v1/2024.alvr-1.17
Cite (ACL):
Hadi Wazni, Kin Ian Lo, and Mehrnoosh Sadrzadeh. 2024. VerbCLIP: Improving Verb Understanding in Vision-Language Models with Compositional Structures. In Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR), pages 195–201, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
VerbCLIP: Improving Verb Understanding in Vision-Language Models with Compositional Structures (Wazni et al., ALVR-WS 2024)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/2024.alvr-1.17.pdf