Probing Vision-Language Understanding through the Visual Entailment Task: promises and pitfalls

Elena Pitta, Tom Kouwenhoven, Tessa Verhoef


Abstract
This study investigates the extent to which the Visual Entailment (VE) task serves as a reliable probe of vision-language understanding in multimodal language models, using the LLaMA 3.2 11B Vision model as a test case. Beyond reporting performance metrics, we aim to interpret what these results reveal about the underlying possibilities and limitations of the VE task. We conduct a series of experiments across zero-shot, few-shot, and fine-tuning settings, exploring how factors such as prompt design, the number and order of in-context examples, and access to visual information affect VE performance. To further probe the model's reasoning processes, we use explanation-based evaluations. Results indicate that three-shot inference outperforms the zero-shot baselines, but that additional examples introduce more noise than benefit. The order of the labels in the prompt is also a critical factor influencing the predictions. In the absence of visual information, the model shows a strong tendency to hallucinate and imagine content, raising questions about its over-reliance on linguistic priors. Fine-tuning yields strong results, achieving an accuracy of 83.3% on the e-SNLI-VE dataset and outperforming the state-of-the-art OFA-X model. The explanation evaluation further demonstrates that the fine-tuned model provides semantically meaningful explanations similar to those of humans, with a BERTScore F1 of 89.2%. We do, however, find comparable BERTScore results in experiments with limited visual input, calling the visual grounding of this task into question. Overall, our results highlight both the utility and the limitations of VE as a diagnostic task for vision-language understanding and point to directions for refining multimodal evaluation methods.
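To make the evaluation pipeline described in the abstract concrete, the sketch below shows (a) how a three-shot VE prompt with an explicit label order might be assembled, and (b) how model-generated explanations can be scored against human references with BERTScore. This is a minimal sketch, not the authors' code: the prompt wording, the example data, and the omission of the image input are assumptions for illustration; only the e-SNLI-VE label set (entailment / neutral / contradiction) and the use of BERTScore F1 follow the paper.

```python
# Minimal sketch of a three-shot Visual Entailment (VE) evaluation setup.
# Assumptions: prompt wording, example data, and the text-only formulation
# are illustrative placeholders; a real run would pass the image to the
# vision-language model alongside this prompt.
from bert_score import score  # pip install bert-score

VE_LABELS = ["entailment", "neutral", "contradiction"]  # e-SNLI-VE label set


def build_few_shot_prompt(examples, hypothesis):
    """Compose a three-shot VE prompt; per the paper, label order matters."""
    parts = [
        "Decide whether the image entails, is neutral to, or contradicts "
        "the hypothesis. Answer with one label: " + ", ".join(VE_LABELS) + "."
    ]
    for ex in examples:  # in-context demonstrations (format is an assumption)
        parts.append(f"Hypothesis: {ex['hypothesis']}\nLabel: {ex['label']}")
    parts.append(f"Hypothesis: {hypothesis}\nLabel:")
    return "\n\n".join(parts)


# Hypothetical model outputs vs. human-written reference explanations.
candidates = ["A dog is running through the grass, so the hypothesis holds."]
references = ["The image shows a dog running on grass, which entails it."]

# BERTScore measures semantic similarity between generated and gold
# explanations; the paper reports the F1 component.
P, R, F1 = score(candidates, references, lang="en")
print(f"BERTScore F1: {F1.mean().item():.3f}")
```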
Anthology ID: 2025.luhme-1.8
Volume: Proceedings of the 2nd LUHME Workshop
Month: October
Year: 2025
Address: Bologna, Italy
Editors: Henrique Lopes Cardoso, Rui Sousa-Silva, Maarit Koponen, Antonio Pareja-Lora
Venue: LUHME
Publisher: LUHME
Pages: 74–83
URL: https://preview.aclanthology.org/ingest-luhme/2025.luhme-1.8/
Cite (ACL): Elena Pitta, Tom Kouwenhoven, and Tessa Verhoef. 2025. Probing Vision-Language Understanding through the Visual Entailment Task: promises and pitfalls. In Proceedings of the 2nd LUHME Workshop, pages 74–83, Bologna, Italy. LUHME.
Cite (Informal): Probing Vision-Language Understanding through the Visual Entailment Task: promises and pitfalls (Pitta et al., LUHME 2025)
PDF: https://preview.aclanthology.org/ingest-luhme/2025.luhme-1.8.pdf