Associative Multichannel Autoencoder for Multimodal Word Representation

Shaonan Wang, Jiajun Zhang, Chengqing Zong


Abstract
In this paper we address the problem of learning multimodal word representations by integrating textual, visual and auditory inputs. Inspired by the reconstructive and associative nature of human memory, we propose a novel associative multichannel autoencoder (AMA). Our model first learns the associations between the textual and perceptual modalities, so as to predict the missing perceptual information of concepts. The textual and predicted perceptual representations are then fused by reconstructing their original and associated embeddings. Using a gating mechanism, our model assigns different weights to each modality depending on the concept. Results on six benchmark concept similarity tests show that the proposed method significantly outperforms strong unimodal baselines and state-of-the-art multimodal models.
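The sketch below illustrates the kind of architecture the abstract describes: text-to-perception association networks that predict missing visual and auditory embeddings, a gating mechanism that weights the modalities, and a multichannel autoencoder that fuses them by reconstructing each channel. It is a minimal PyTorch illustration under assumed dimensions and layer choices (TEXT_DIM, IMG_DIM, AUDIO_DIM, HIDDEN_DIM and the class name AssociativeMultichannelAE are hypothetical), not the authors' released implementation.

```python
# Minimal AMA-style sketch (assumed dimensions and layer sizes, not the paper's exact setup).
import torch
import torch.nn as nn

TEXT_DIM, IMG_DIM, AUDIO_DIM, HIDDEN_DIM = 300, 128, 128, 256

class AssociativeMultichannelAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Association networks: predict perceptual embeddings from text,
        # so concepts without images/sounds still receive perceptual input.
        self.text_to_img = nn.Sequential(nn.Linear(TEXT_DIM, IMG_DIM), nn.Tanh())
        self.text_to_audio = nn.Sequential(nn.Linear(TEXT_DIM, AUDIO_DIM), nn.Tanh())
        # Gating: per-modality weights conditioned on the concatenated inputs.
        self.gate = nn.Sequential(
            nn.Linear(TEXT_DIM + IMG_DIM + AUDIO_DIM, 3), nn.Softmax(dim=-1))
        # Multichannel encoder: fuse the gated modalities into one code.
        self.encoder = nn.Sequential(
            nn.Linear(TEXT_DIM + IMG_DIM + AUDIO_DIM, HIDDEN_DIM), nn.Tanh())
        # Decoders reconstruct each modality from the fused code.
        self.dec_text = nn.Linear(HIDDEN_DIM, TEXT_DIM)
        self.dec_img = nn.Linear(HIDDEN_DIM, IMG_DIM)
        self.dec_audio = nn.Linear(HIDDEN_DIM, AUDIO_DIM)

    def forward(self, text, img=None, audio=None):
        # Fall back to associated (predicted) embeddings when a modality is missing.
        img_hat = self.text_to_img(text)
        audio_hat = self.text_to_audio(text)
        img_in = img if img is not None else img_hat
        audio_in = audio if audio is not None else audio_hat

        gates = self.gate(torch.cat([text, img_in, audio_in], dim=-1))
        fused_in = torch.cat([gates[:, 0:1] * text,
                              gates[:, 1:2] * img_in,
                              gates[:, 2:3] * audio_in], dim=-1)
        code = self.encoder(fused_in)  # the multimodal word representation
        recons = (self.dec_text(code), self.dec_img(code), self.dec_audio(code))
        return code, recons, (img_hat, audio_hat)

def loss_fn(model, text, img, audio):
    # Reconstruction loss on all channels plus an association loss that ties
    # the text-predicted perceptual vectors to the observed ones.
    code, (t_rec, i_rec, a_rec), (img_hat, audio_hat) = model(text, img, audio)
    mse = nn.functional.mse_loss
    return (mse(t_rec, text) + mse(i_rec, img) + mse(a_rec, audio)
            + mse(img_hat, img) + mse(audio_hat, audio))

# Toy usage with random vectors standing in for real text/image/audio features.
model = AssociativeMultichannelAE()
text, img, audio = torch.randn(4, TEXT_DIM), torch.randn(4, IMG_DIM), torch.randn(4, AUDIO_DIM)
print(loss_fn(model, text, img, audio).item())
```

After training, the fused code (or the concatenation of the reconstructed channels) would serve as the multimodal word representation evaluated on the similarity benchmarks.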
Anthology ID:
D18-1011
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
115–124
URL:
https://aclanthology.org/D18-1011
DOI:
10.18653/v1/D18-1011
Cite (ACL):
Shaonan Wang, Jiajun Zhang, and Chengqing Zong. 2018. Associative Multichannel Autoencoder for Multimodal Word Representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 115–124, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Associative Multichannel Autoencoder for Multimodal Word Representation (Wang et al., EMNLP 2018)
PDF:
https://preview.aclanthology.org/fix-dup-bibkey/D18-1011.pdf
Video:
https://preview.aclanthology.org/fix-dup-bibkey/D18-1011.mp4
Code:
wangshaonan/Associative-multichannel-autoencoder