A Visuospatial Dataset for Naturalistic Verb Learning

Dylan Ebert, Ellie Pavlick


Abstract
We introduce a new dataset for training and evaluating grounded language models. Our data is collected within a virtual reality environment and is designed to emulate the quality of language data to which a pre-verbal child is likely to have access: that is, naturalistic, spontaneous speech paired with richly grounded visuospatial context. We use the collected data to compare several distributional semantics models for verb learning. We evaluate neural models based on 2D (pixel) features as well as feature-engineered models based on 3D (symbolic, spatial) features, and show that neither modeling approach achieves satisfactory performance. Our results are consistent with evidence from child language acquisition that emphasizes the difficulty of learning verbs from naive distributional data. We discuss avenues for future work on cognitively-inspired grounded language learning, and release our corpus with the intent of facilitating research on the topic.
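The abstract contrasts neural models over 2D pixel features with feature-engineered models over 3D spatial features. As a minimal sketch of what a feature-engineered 3D verb classifier might look like, the Python below trains a classifier on synthetic hand-position trajectories. Everything here (the featurize descriptors, the synthetic data, the logistic-regression model) is a hypothetical illustration, not the authors' pipeline; the actual code is in the dylanebert/nbc_starsem repository linked below.

```python
# A minimal sketch (not the authors' code): classifying verbs from
# 3D spatial trajectories using hand-engineered features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def featurize(traj: np.ndarray) -> np.ndarray:
    """Collapse a (T, 3) position trajectory into fixed-length
    descriptors: mean/std of speed and of height (assuming y-up)."""
    vel = np.diff(traj, axis=0)          # frame-to-frame displacement
    speed = np.linalg.norm(vel, axis=1)  # scalar speed per frame
    return np.array([speed.mean(), speed.std(),
                     traj[:, 1].mean(), traj[:, 1].std()])

# Hypothetical stand-in data: 200 trajectories of 50 frames, 5 verb labels.
rng = np.random.default_rng(0)
trajs = rng.normal(size=(200, 50, 3))
labels = rng.integers(0, 5, size=200)

X = np.stack([featurize(t) for t in trajs])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```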
Anthology ID:
2020.starsem-1.16
Volume:
Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Venue:
*SEM
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
143–153
URL:
https://aclanthology.org/2020.starsem-1.16
Cite (ACL):
Dylan Ebert and Ellie Pavlick. 2020. A Visuospatial Dataset for Naturalistic Verb Learning. In Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics, pages 143–153, Barcelona, Spain (Online). Association for Computational Linguistics.
Cite (Informal):
A Visuospatial Dataset for Naturalistic Verb Learning (Ebert & Pavlick, *SEM 2020)
PDF:
https://aclanthology.org/2020.starsem-1.16.pdf
Code:
dylanebert/nbc_starsem
Data:
New Brown Corpus