MM-Conv: A Multimodal Dataset and Benchmark for Context-Aware Grounding in 3D Dialogue

Anna Deichler, Jim O'Regan, Fethiye Irmak Dogan, Anna Klezovich, Lubos Marcinek, Iolanda Leite, Jonas Beskow


Abstract
Grounding language in the physical world requires AI systems to interpret references that emerge dynamically during conversation. While current vision-language models (VLMs) excel at static image tasks, they struggle to resolve ambiguous expressions in spontaneous, multi-turn dialogue. We address this gap by introducing MM-Conv—speak, point, look—a benchmark for referential communication in dynamic 3D environments, built from 6.7 hours of egocentric VR interaction with synchronized speech, motion, gaze, and 3D scene geometry. The benchmark includes over 4,200 manually verified referring expressions spanning full, partitive, and pronominal types, enabling systematic evaluation of multimodal reference resolution.
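To make the structure of such a corpus concrete, the following minimal sketch shows one way a synchronized capture frame and a referring-expression annotation could be represented, along with a simple resolution-accuracy metric. The field names and the resolve_accuracy helper are illustrative assumptions, not the dataset's actual schema or evaluation API.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical record layout for one synchronized capture frame.
# Field names are illustrative; the released dataset may differ.
@dataclass
class Frame:
    timestamp: float                # seconds from session start
    speaker_id: str                 # which participant is speaking/acting
    transcript: str                 # transcript for this time span
    gaze_target: Optional[str]      # object ID hit by the gaze ray, if any
    pointing_target: Optional[str]  # object ID hit by the pointing ray, if any
    head_pose: List[float] = field(default_factory=list)  # e.g. 6-DoF pose

# Hypothetical annotation for one referring expression.
@dataclass
class ReferringExpression:
    text: str                   # surface form, e.g. "the red mug next to it"
    ref_type: str                # "full", "partitive", or "pronominal"
    gold_object_id: str          # annotated referent in the 3D scene
    frame_span: Tuple[float, float]  # (start, end) timestamps of the utterance

def resolve_accuracy(predictions: List[str],
                     expressions: List[ReferringExpression]) -> float:
    """Fraction of expressions whose predicted object ID matches the gold referent."""
    if not expressions:
        return 0.0
    correct = sum(p == e.gold_object_id for p, e in zip(predictions, expressions))
    return correct / len(expressions)

# Toy usage with made-up object IDs.
exprs = [ReferringExpression("the red mug", "full", "mug_03", (12.4, 13.1))]
print(resolve_accuracy(["mug_03"], exprs))  # -> 1.0
```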
Anthology ID:
2026.lrec-main.726
Volume:
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Month:
May
Year:
2026
Address:
Palma de Mallorca, Spain
Editors:
Stelios Piperidis, Núria Bel, Henk van den Heuvel, Nancy Ide, Simon Krek, Antonio Toral
Venue:
LREC
Publisher:
ELRA Language Resources Association
Note:
Pages:
9240–9253
URL:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.726/
Cite (ACL):
Anna Deichler, Jim O'Regan, Fethiye Irmak Dogan, Anna Klezovich, Lubos Marcinek, Iolanda Leite, and Jonas Beskow. 2026. MM-Conv: A Multimodal Dataset and Benchmark for Context-Aware Grounding in 3D Dialogue. In Proceedings of the Fifteenth Language Resources and Evaluation Conference, pages 9240–9253, Palma de Mallorca, Spain. ELRA Language Resources Association.
Cite (Informal):
MM-Conv: A Multimodal Dataset and Benchmark for Context-Aware Grounding in 3D Dialogue (Deichler et al., LREC 2026)
PDF:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.726.pdf