Seeing the advantage: visually grounding word embeddings to better capture human semantic knowledge

Danny Merkx, Stefan Frank, Mirjam Ernestus


Abstract
Distributional semantic models capture word-level meaning that is useful in many natural language processing tasks and have even been shown to capture cognitive aspects of word meaning. The majority of these models are purely text-based, even though the human sensory experience is much richer. In this paper we create visually grounded word embeddings by combining English text and images, and compare them to popular text-based methods to see whether visual information allows our model to better capture cognitive aspects of word meaning. Our analysis shows that visually grounded embedding similarities are more predictive of human reaction times in a large priming experiment than purely text-based embeddings. The visually grounded embeddings also correlate well with human word similarity ratings. Importantly, in both experiments we show that the grounded embeddings account for a unique portion of explained variance, even when we include text-based embeddings trained on huge corpora. This shows that visual grounding allows our model to capture information that cannot be extracted using text as the only source of information.
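The abstract's evaluation rests on comparing pairwise embedding similarities against human judgments. A minimal sketch of that core computation, using toy three-dimensional vectors (hypothetical values for illustration, not the paper's actual grounded embeddings), might look like:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy word embeddings (made-up values, for illustration only).
embeddings = {
    "dog": [0.9, 0.1, 0.3],
    "cat": [0.8, 0.2, 0.4],
    "car": [0.1, 0.9, 0.2],
}

# Model similarities for word pairs; in the paper these are
# correlated with human similarity ratings and priming reaction times.
sim_dog_cat = cosine(embeddings["dog"], embeddings["cat"])
sim_dog_car = cosine(embeddings["dog"], embeddings["car"])
```

In the toy example, the semantically related pair ("dog", "cat") receives a higher similarity than the unrelated pair ("dog", "car"); the paper's analyses correlate such model similarities with human ratings and reaction times.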
Anthology ID:
2022.cmcl-1.1
Volume:
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Emmanuele Chersoni, Nora Hollenstein, Cassandra Jacobs, Yohei Oseki, Laurent Prévot, Enrico Santus
Venue:
CMCL
Publisher:
Association for Computational Linguistics
Pages:
1–11
URL:
https://aclanthology.org/2022.cmcl-1.1
DOI:
10.18653/v1/2022.cmcl-1.1
Cite (ACL):
Danny Merkx, Stefan Frank, and Mirjam Ernestus. 2022. Seeing the advantage: visually grounding word embeddings to better capture human semantic knowledge. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 1–11, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Seeing the advantage: visually grounding word embeddings to better capture human semantic knowledge (Merkx et al., CMCL 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-3/2022.cmcl-1.1.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-3/2022.cmcl-1.1.mp4
Code
DannyMerkx/speech2image
Data
ImageNet
MS COCO