GeoChain: Multimodal Chain-of-Thought for Geographic Reasoning

Sahiti Yerramilli, Nilay Pande, Rynaa Grover, Jayant Sravan Tamarapalli


Abstract
This paper introduces GeoChain, a large-scale benchmark for evaluating step-by-step geographic reasoning in multimodal large language models (MLLMs). Leveraging 1.46 million Mapillary street-level images, GeoChain pairs each image with a 21-step chain-of-thought (CoT) question sequence (over 30 million Q&A pairs). These sequences guide models from coarse attributes to fine-grained localization across four reasoning categories (visual, spatial, cultural, and precise geolocation), annotated by difficulty. Images are also enriched with semantic segmentation (150 classes) and a visual locatability score. Our benchmarking of frontier MLLMs on a diverse 2,088-image subset reveals consistent challenges: models frequently exhibit weaknesses in visual grounding, display erratic reasoning, and struggle to achieve accurate localization, especially as the reasoning complexity escalates. GeoChain offers a robust diagnostic methodology, critical for fostering significant advancements in complex geographic reasoning within MLLMs.
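
As a rough illustration of the benchmark structure described in the abstract, the sketch below shows one plausible way a single GeoChain record could be organized: an image paired with a 21-step question sequence spanning four reasoning categories, each step annotated by difficulty, plus segmentation labels and a visual locatability score. The field names, difficulty labels, and types are assumptions for illustration only, not the paper's released schema.

```python
# Hypothetical sketch (not the released GeoChain schema): a minimal data
# structure for one benchmark record, assuming field names and difficulty
# labels that are not specified in the abstract.
from dataclasses import dataclass
from typing import List

@dataclass
class ReasoningStep:
    index: int            # 1..21, ordered from coarse attributes to fine-grained localization
    category: str         # "visual" | "spatial" | "cultural" | "precise_geolocation"
    difficulty: str       # difficulty annotation, e.g. "easy" | "medium" | "hard" (assumed labels)
    question: str
    answer: str

@dataclass
class GeoChainSample:
    image_id: str                    # Mapillary street-level image identifier
    latitude: float                  # ground-truth location for the geolocation steps
    longitude: float
    locatability_score: float        # visual locatability score described in the abstract
    segmentation_classes: List[str]  # subset of the 150 semantic segmentation classes present
    steps: List[ReasoningStep]       # the 21-step chain-of-thought question sequence
```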
Anthology ID:
2025.findings-emnlp.1284
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
23624–23639
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.1284/
DOI:
10.18653/v1/2025.findings-emnlp.1284
Cite (ACL):
Sahiti Yerramilli, Nilay Pande, Rynaa Grover, and Jayant Sravan Tamarapalli. 2025. GeoChain: Multimodal Chain-of-Thought for Geographic Reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 23624–23639, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
GeoChain: Multimodal Chain-of-Thought for Geographic Reasoning (Yerramilli et al., Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.1284.pdf
Checklist:
2025.findings-emnlp.1284.checklist.pdf