VIBE: Can a VLM Read the Room?

Tania Chakraborty, Eylon Caplan, Dan Goldwasser


Abstract
Understanding human social behavior, such as recognizing emotions and the social dynamics that cause them, is an important and challenging problem. While LLMs have made remarkable advances, they are limited to the textual domain and cannot account for the major role that non-verbal cues play in understanding social situations. Vision Language Models (VLMs) can potentially address this gap; however, their ability to make correct inferences over such social cues has received little attention. In this paper, we explore the capabilities of VLMs at social reasoning. We identify a previously overlooked limitation in VLMs: the Visual Social-Pragmatic Inference gap. To target this gap, we propose a new task for VLMs: Visual Social-Pragmatic Inference. We construct a high-quality dataset to test the abilities of a VLM on this task and benchmark the performance of several VLMs on it.
Anthology ID:
2025.findings-emnlp.1252
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
22992–23008
URL:
https://preview.aclanthology.org/ingest-luhme/2025.findings-emnlp.1252/
DOI:
10.18653/v1/2025.findings-emnlp.1252
Cite (ACL):
Tania Chakraborty, Eylon Caplan, and Dan Goldwasser. 2025. VIBE: Can a VLM Read the Room?. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 22992–23008, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
VIBE: Can a VLM Read the Room? (Chakraborty et al., Findings 2025)
PDF:
https://preview.aclanthology.org/ingest-luhme/2025.findings-emnlp.1252.pdf
Checklist:
2025.findings-emnlp.1252.checklist.pdf