Revisiting 3D LLM Benchmarks: Are We Really Testing 3D Capabilities?

Jiahe Jin, Yanheng He, Mingyan Yang


Abstract
In this work, we identify the “2D-Cheating” problem in 3D LLM evaluation: many benchmark tasks can be easily solved by VLMs given only rendered images of the point clouds, which exposes that such benchmarks fail to test the unique 3D capabilities of 3D LLMs. We evaluate VLM performance across multiple 3D LLM benchmarks and, using the results as a reference, propose principles for better assessing genuine 3D understanding. We also advocate explicitly separating 3D abilities from 1D or 2D aspects when evaluating 3D LLMs.
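
The probe described in the abstract (rendering a point cloud to an image and querying a VLM) can be sketched in a few lines. The following is a minimal illustration, not the authors' actual pipeline: rendering is done with matplotlib, the `scene_points.npy` / `scene_colors.npy` inputs are assumed placeholder files, and `query_vlm` is a hypothetical stub for whichever vision-language model API one plugs in.

```python
# Minimal sketch of a "2D-Cheating" probe: render a point cloud to a
# 2D image, then ask a VLM the benchmark question about the scene.
import numpy as np
import matplotlib.pyplot as plt


def render_point_cloud(points: np.ndarray, colors: np.ndarray, path: str) -> None:
    """Project an (N, 3) point cloud with (N, 3) RGB colors in [0, 1] to a PNG."""
    fig = plt.figure(figsize=(4, 4))
    ax = fig.add_subplot(projection="3d")
    ax.scatter(points[:, 0], points[:, 1], points[:, 2], c=colors, s=1)
    ax.set_axis_off()
    fig.savefig(path, dpi=200, bbox_inches="tight")
    plt.close(fig)


def query_vlm(image_path: str, question: str) -> str:
    """Hypothetical stub: replace with a call to any vision-language model API."""
    raise NotImplementedError("plug in a VLM client here")


# Hypothetical benchmark sample: file names are placeholders, not from the paper.
pts = np.load("scene_points.npy")    # assumed (N, 3) xyz coordinates
rgb = np.load("scene_colors.npy")    # assumed (N, 3) RGB values in [0, 1]
render_point_cloud(pts, rgb, "scene.png")
answer = query_vlm("scene.png", "How many chairs are in the room?")
```

If a VLM answers benchmark questions well from such renderings alone, the benchmark is likely measuring 2D rather than genuinely 3D understanding.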
Anthology ID: 2025.findings-acl.1222
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 23858–23869
URL: https://preview.aclanthology.org/corrections-2025-08/2025.findings-acl.1222/
DOI: 10.18653/v1/2025.findings-acl.1222
Cite (ACL): Jiahe Jin, Yanheng He, and Mingyan Yang. 2025. Revisiting 3D LLM Benchmarks: Are We Really Testing 3D Capabilities?. In Findings of the Association for Computational Linguistics: ACL 2025, pages 23858–23869, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Revisiting 3D LLM Benchmarks: Are We Really Testing 3D Capabilities? (Jin et al., Findings 2025)
PDF: https://preview.aclanthology.org/corrections-2025-08/2025.findings-acl.1222.pdf