Probing Logical Reasoning of MLLMs in Scientific Diagrams

Yufei Wang, Adriana Kovashka


Abstract
We examine how multimodal large language models (MLLMs) perform logical inference grounded in visual information. We first construct a dataset of food web/chain images, along with questions that follow seven structured templates involving progressively more complex reasoning. We show that complex reasoning about entities in the images remains challenging (even with elaborate prompts) and that visual information is underutilized.
Anthology ID: 2025.emnlp-main.542
Volume: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 10717–10729
URL: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.542/
Cite (ACL): Yufei Wang and Adriana Kovashka. 2025. Probing Logical Reasoning of MLLMs in Scientific Diagrams. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 10717–10729, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Probing Logical Reasoning of MLLMs in Scientific Diagrams (Wang & Kovashka, EMNLP 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.542.pdf
Checklist: 2025.emnlp-main.542.checklist.pdf