Angel X Chang
2025
ViGiL3D: A Linguistically Diverse Dataset for 3D Visual Grounding
Austin Wang | ZeMing Gong | Angel X Chang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
3D visual grounding (3DVG) involves localizing entities in a 3D scene referred to by natural language text. Such models are useful for embodied AI and scene retrieval applications, which involve searching for objects or patterns using natural language descriptions. While recent works have focused on LLM-based scaling of 3DVG datasets, these datasets do not capture the full range of potential prompts which could be specified in the English language. To ensure that we are scaling up and testing against a useful and representative set of prompts, we propose a framework for linguistically analyzing 3DVG prompts and introduce Visual Grounding with Diverse Language in 3D (ViGiL3D), a diagnostic dataset for evaluating visual grounding methods against a diverse set of language patterns. We evaluate existing open-vocabulary 3DVG methods to demonstrate that these methods are not yet proficient in understanding and identifying the targets of more challenging, out-of-distribution prompts, toward real-world applications.
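As a purely illustrative aside, the sketch below shows what a single 3DVG prompt entry and a standard box-overlap check might look like in code; the field names, scene ID, and IoU threshold are assumptions for illustration, not taken from the ViGiL3D release.

```python
import numpy as np

# Hypothetical example of a 3DVG prompt entry: a free-form referring
# expression plus the ID of the target object in an annotated 3D scene
# (field names and values are illustrative only).
prompt = {
    "scene_id": "scene0000_00",
    "text": "the lamp on the nightstand closest to the window",
    "target_object_id": 17,
}

def box_iou_3d(box_a, box_b):
    """Axis-aligned 3D IoU between two boxes given as (min_xyz, max_xyz)."""
    min_a, max_a = box_a
    min_b, max_b = box_b
    inter_min = np.maximum(min_a, min_b)
    inter_max = np.minimum(max_a, max_b)
    inter = np.prod(np.clip(inter_max - inter_min, 0, None))
    vol_a = np.prod(max_a - min_a)
    vol_b = np.prod(max_b - min_b)
    return inter / (vol_a + vol_b - inter)

# A prediction is typically counted as correct if its box overlaps the
# ground-truth target box above a threshold (0.25 is a common choice).
pred_box = (np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0]))
gt_box = (np.array([0.1, 0.0, 0.0]), np.array([1.1, 1.0, 1.0]))
print(box_iou_3d(pred_box, gt_box) >= 0.25)
```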
2018
Linking WordNet to 3D Shapes
Angel X Chang | Rishi Mago | Pranav Krishna | Manolis Savva | Christiane Fellbaum
Proceedings of the 9th Global Wordnet Conference
We describe a project to link the Princeton WordNet to 3D representations of real objects and scenes. The goal is to establish a dataset that helps us to understand how people categorize everyday common objects via their parts, attributes, and context. This paper describes the annotation and data collection effort so far as well as ideas for future work.
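As a rough illustration of the kind of linking the paper describes, the sketch below uses NLTK's WordNet interface to look up a synset for an everyday object and attach placeholder 3D model identifiers; the mapping and model IDs are hypothetical, not the paper's actual annotation format.

```python
from nltk.corpus import wordnet as wn  # pip install nltk; nltk.download('wordnet')

# Look up the WordNet synset for an everyday object category.
chair = wn.synset("chair.n.01")

# Hypothetical link from the synset to 3D instances of that category
# (the model identifiers below are placeholders, not real dataset IDs).
synset_to_shapes = {
    chair.name(): ["model_0001", "model_0002"],
}

# Parts and hypernyms from WordNet supply the categorical structure
# (parts, attributes, context) that such annotations can be grounded in.
print(chair.part_meronyms())  # e.g. [Synset('leg.n.03'), ...]
print(chair.hypernyms())      # e.g. [Synset('seat.n.03')]
```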
Co-authors
- Christiane Fellbaum 1
- ZeMing Gong 1
- Pranav Krishna 1
- Rishi Mago 1
- Manolis Savva 1