Using Grounded Word Representations to Study Theories of Lexical Concepts

Dylan Ebert, Ellie Pavlick


Abstract
The fields of cognitive science and philosophy have proposed many different theories for how humans represent "concepts". Multiple such theories are compatible with state-of-the-art NLP methods and could in principle be operationalized using neural networks. We focus on two particularly prominent theories, Classical Theory and Prototype Theory, in the context of visually-grounded lexical representations. We compare when and how models based on these theories differ in behavior on categorization and entailment tasks. Our preliminary results suggest that Classical-based representations perform better for entailment and Prototype-based representations perform better for categorization. We discuss plans for additional experiments needed to confirm these initial observations.
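To make the contrast concrete, the sketch below (not from the paper; all names, features, and numbers are hypothetical illustrations) shows one common way the two theories can be operationalized over grounded word vectors: a Prototype-style rule scores graded membership by similarity to a centroid of category exemplars, while a Classical-style rule checks a conjunction of necessary and jointly sufficient features.

    # Minimal sketch, assuming grounded word representations are vectors
    # derived from images; this is an illustration, not the authors' code.
    import numpy as np

    def prototype_for(category_vectors: np.ndarray) -> np.ndarray:
        """Prototype Theory: summarize a category as the centroid of the
        grounded vectors of its known members."""
        return category_vectors.mean(axis=0)

    def prototype_member(x: np.ndarray, prototype: np.ndarray,
                         threshold: float) -> bool:
        """Graded membership: x belongs if it is similar enough (cosine)
        to the category prototype."""
        cos = x @ prototype / (np.linalg.norm(x) * np.linalg.norm(prototype))
        return cos >= threshold

    def classical_member(features: dict, definition: dict) -> bool:
        """Classical Theory: membership is all-or-none, a conjunction of
        necessary and jointly sufficient features."""
        return all(features.get(f) == v for f, v in definition.items())

    # Toy usage with hypothetical 2-d "grounded" vectors for 'bird' exemplars.
    bird_vectors = np.array([[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]])
    proto = prototype_for(bird_vectors)
    print(prototype_member(np.array([0.87, 0.12]), proto, threshold=0.95))  # True

    # Hypothetical classical definition of 'bird' as a feature checklist.
    bird_def = {"has_feathers": True, "lays_eggs": True}
    print(classical_member({"has_feathers": True, "lays_eggs": True,
                            "flies": False}, bird_def))  # True

Under this framing, the graded prototype rule naturally yields a categorization score, while the checklist rule lends itself to entailment-style tests (one definition subsuming another), which is one way to read the paper's preliminary contrast.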
Anthology ID:
W19-2918
Volume:
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Editors:
Emmanuele Chersoni, Cassandra Jacobs, Alessandro Lenci, Tal Linzen, Laurent Prévot, Enrico Santus
Venue:
CMCL
Publisher:
Association for Computational Linguistics
Pages:
160–169
URL:
https://aclanthology.org/W19-2918
DOI:
10.18653/v1/W19-2918
Cite (ACL):
Dylan Ebert and Ellie Pavlick. 2019. Using Grounded Word Representations to Study Theories of Lexical Concepts. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 160–169, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
Using Grounded Word Representations to Study Theories of Lexical Concepts (Ebert & Pavlick, CMCL 2019)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/W19-2918.pdf
Data
HyperLex, ImageNet