CausalVLBench: Benchmarking Visual Causal Reasoning in Large Vision-Language Models

Aneesh Komanduri, Karuna Bhaila, Xintao Wu


Abstract
Large language models (LLMs) have shown remarkable ability in various language tasks, especially with their emergent in-context learning capability. Extending LLMs to incorporate visual inputs, large vision-language models (LVLMs) have shown impressive performance in tasks such as recognition and visual question answering (VQA). Despite increasing interest in the utility of LLMs in causal reasoning tasks such as causal discovery and counterfactual reasoning, there has been relatively little work showcasing the abilities of LVLMs on visual causal reasoning tasks. We take this opportunity to formally introduce a comprehensive causal reasoning benchmark for multi-modal in-context learning with LVLMs. Our CausalVLBench encompasses three representative tasks: causal structure inference, intervention target prediction, and counterfactual prediction. We evaluate the ability of state-of-the-art open-source LVLMs on our causal reasoning tasks across three causal representation learning datasets and demonstrate their fundamental strengths and weaknesses. We hope that our benchmark elucidates the drawbacks of existing vision-language models and motivates new directions and paradigms for improving the visual causal reasoning abilities of LVLMs.
Anthology ID:
2025.emnlp-main.1561
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
30648–30668
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1561/
Cite (ACL):
Aneesh Komanduri, Karuna Bhaila, and Xintao Wu. 2025. CausalVLBench: Benchmarking Visual Causal Reasoning in Large Vision-Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 30648–30668, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
CausalVLBench: Benchmarking Visual Causal Reasoning in Large Vision-Language Models (Komanduri et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1561.pdf
Checklist:
 2025.emnlp-main.1561.checklist.pdf