Abstract
Multi-modal models that learn semantic representations from both linguistic and perceptual input outperform language-only models on a range of evaluations, and better reflect human concept acquisition. Most perceptual input to such models corresponds to concrete noun concepts and the superiority of the multi-modal approach has only been established when evaluating on such concepts. We therefore investigate which concepts can be effectively learned by multi-modal models. We show that concreteness determines both which linguistic features are most informative and the impact of perceptual input in such models. We then introduce ridge regression as a means of propagating perceptual information from concrete nouns to more abstract concepts that is more robust than previous approaches. Finally, we present weighted gram matrix combination, a means of combining representations from distinct modalities that outperforms alternatives when both modalities are sufficiently rich.

- Anthology ID: Q14-1023
- Volume: Transactions of the Association for Computational Linguistics, Volume 2
- Year: 2014
- Address: Cambridge, MA
- Editors: Dekang Lin, Michael Collins, Lillian Lee
- Venue: TACL
- Publisher: MIT Press
- Pages: 285–296
- URL: https://aclanthology.org/Q14-1023
- DOI: 10.1162/tacl_a_00183
- Cite (ACL): Felix Hill, Roi Reichart, and Anna Korhonen. 2014. Multi-Modal Models for Concrete and Abstract Concept Meaning. Transactions of the Association for Computational Linguistics, 2:285–296.
- Cite (Informal): Multi-Modal Models for Concrete and Abstract Concept Meaning (Hill et al., TACL 2014)
- PDF: https://preview.aclanthology.org/ingest-2024-clasp/Q14-1023.pdf
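The abstract names two techniques: ridge regression for propagating perceptual information from concrete nouns to abstract concepts, and weighted gram matrix combination for fusing modalities. The following is a minimal NumPy sketch of what such a pipeline could look like; it is not the paper's implementation, and all data, dimensions, and the weight `alpha` are toy assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data (dimensions are illustrative, not from the paper):
# linguistic vectors exist for all concepts, perceptual vectors only for
# the concrete nouns.
n_concrete, n_abstract, d_ling, d_perc = 20, 5, 10, 6
L_concrete = rng.normal(size=(n_concrete, d_ling))  # linguistic, concrete nouns
L_abstract = rng.normal(size=(n_abstract, d_ling))  # linguistic, abstract concepts
P_concrete = rng.normal(size=(n_concrete, d_perc))  # perceptual, concrete nouns only

# Ridge regression from linguistic to perceptual space:
#   W = (L^T L + lambda * I)^-1 L^T P
# fit on concrete nouns, then used to infer perceptual vectors for
# abstract concepts.
lam = 1.0
W = np.linalg.solve(L_concrete.T @ L_concrete + lam * np.eye(d_ling),
                    L_concrete.T @ P_concrete)
P_abstract = L_abstract @ W  # propagated (inferred) perceptual vectors

# Weighted gram matrix combination: instead of concatenating raw vectors,
# mix the two modalities' similarity (gram) matrices with a weight alpha.
X_ling = np.vstack([L_concrete, L_abstract])
X_perc = np.vstack([P_concrete, P_abstract])

def gram(X):
    G = X @ X.T
    return G / np.linalg.norm(G)  # normalize so the two grams are comparable

alpha = 0.5  # modality weight; a free parameter in this sketch
G_combined = alpha * gram(X_ling) + (1 - alpha) * gram(X_perc)
```

The combined gram matrix can then be used directly for similarity-based evaluations (e.g. correlating its entries with human similarity judgments), which is why mixing at the gram-matrix level sidesteps the question of how to align the two modalities' feature spaces.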