Abstract
Vision-language models (VLMs) have shown remarkable performance on visual reasoning tasks (e.g., attributes, location). While such tasks measure the requisite knowledge to ground and reason over a given visual instance, they do not measure VLMs’ ability to retain and generalize that knowledge. In this work, we evaluate VLMs’ ability to acquire “visible” physical knowledge, i.e., information that is readily accessible from images of static scenes, particularly along the dimensions of object color, size, and space. We build an automatic pipeline to derive a comprehensive knowledge resource for calibrating and probing these models. Our results indicate a severe gap between model and human performance across all three dimensions. Furthermore, we demonstrate that a caption-pretrained LM significantly outperforms VLMs on both the size and spatial tasks, highlighting that despite having sufficient means to ground language in the visual modality, VLMs struggle to retain such knowledge.
- Anthology ID:
- 2023.findings-emnlp.473
- Volume:
- Findings of the Association for Computational Linguistics: EMNLP 2023
- Month:
- December
- Year:
- 2023
- Address:
- Singapore
- Editors:
- Houda Bouamor, Juan Pino, Kalika Bali
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 7113–7128
- URL:
- https://aclanthology.org/2023.findings-emnlp.473
- DOI:
- 10.18653/v1/2023.findings-emnlp.473
- Cite (ACL):
- Shikhar Singh, Ehsan Qasemi, and Muhao Chen. 2023. VIPHY: Probing “Visible” Physical Commonsense Knowledge. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7113–7128, Singapore. Association for Computational Linguistics.
- Cite (Informal):
- VIPHY: Probing “Visible” Physical Commonsense Knowledge (Singh et al., Findings 2023)
- PDF:
- https://preview.aclanthology.org/jeptaln-2024-ingestion/2023.findings-emnlp.473.pdf