ViGiL3D: A Linguistically Diverse Dataset for 3D Visual Grounding

Austin Wang, ZeMing Gong, Angel X Chang


Abstract
3D visual grounding (3DVG) is the task of localizing entities in a 3D scene that are referred to by natural language text. Such models are useful for embodied AI and scene retrieval applications, which involve searching for objects or patterns described in natural language. While recent works have focused on LLM-based scaling of 3DVG datasets, these datasets do not capture the full range of prompts that could be specified in English. To ensure that we scale up and test against a useful and representative set of prompts, we propose a framework for linguistically analyzing 3DVG prompts and introduce Visual Grounding with Diverse Language in 3D (ViGiL3D), a diagnostic dataset for evaluating visual grounding methods against a diverse set of language patterns. We evaluate existing open-vocabulary 3DVG methods and show that they are not yet proficient at understanding and localizing the targets of more challenging, out-of-distribution prompts, limiting their suitability for real-world applications.
Anthology ID:
2025.acl-long.1470
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
30453–30475
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1470/
Cite (ACL):
Austin Wang, ZeMing Gong, and Angel X Chang. 2025. ViGiL3D: A Linguistically Diverse Dataset for 3D Visual Grounding. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 30453–30475, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
ViGiL3D: A Linguistically Diverse Dataset for 3D Visual Grounding (Wang et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1470.pdf