CLEVR-3D-DeRef

Mary Lynn Martin, Martha Palmer, Maria Leonor Pacheco


Abstract
Vision-language models (VLMs) often struggle to interpret spatial referring expressions that require relational reasoning rather than reliance on surface-level cues. These models frequently identify referents through explicit visual attributes such as color or shape rather than by understanding spatial relationships (e.g., "to the left of the red cube"). To systematically analyze these limitations, we introduce CLEVR-3D-DeRef, a synthetic and extensible benchmark dataset modeled after CLEVR-Ref+ and designed to evaluate spatial reasoning in multi-modal systems. CLEVR-3D-DeRef extends the original framework by incorporating depth information for 3D spatial reasoning, introducing de-identified, context-dependent referring expressions that require relational inference to disambiguate referent objects, and expanding the range of spatial relations beyond the original four. We further extend the dataset by producing expressions with and without ordinal language and by diversifying the language and structure of expressions while preserving meaning.
Anthology ID:
2026.lrec-main.745
Volume:
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Month:
May
Year:
2026
Address:
Palma de Mallorca, Spain
Editors:
Stelios Piperidis, Núria Bel, Henk van den Heuvel, Nancy Ide, Simon Krek, Antonio Toral
Venue:
LREC
Publisher:
ELRA Language Resources Association
Pages:
9490–9503
URL:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.745/
Cite (ACL):
Mary Lynn Martin, Martha Palmer, and Maria Leonor Pacheco. 2026. CLEVR-3D-DeRef. In Proceedings of the Fifteenth Language Resources and Evaluation Conference, pages 9490–9503, Palma de Mallorca, Spain. ELRA Language Resources Association.
Cite (Informal):
CLEVR-3D-DeRef (Martin et al., LREC 2026)
PDF:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.745.pdf