On Leveraging the Visual Modality for Neural Machine Translation
Vikas Raunak, Sang Keun Choe, Quanyang Lu, Yi Xu, Florian Metze
Abstract
Leveraging the visual modality effectively for Neural Machine Translation (NMT) remains an open problem in computational linguistics. Recently, Caglayan et al. posited that the observed gains are limited mainly because the very simple, short, repetitive sentences of the Multi30k dataset (the only multimodal MT dataset available at the time) render the source text sufficient for context. In this work, we further investigate this hypothesis on a new large-scale multimodal Machine Translation (MMT) dataset, How2, which has a mean sentence length 1.57 times that of Multi30k and no repetition. We propose and evaluate three novel fusion techniques, each designed to ensure the utilization of visual context at a different stage of the Sequence-to-Sequence transduction pipeline, even under full linguistic context. However, we still obtain only marginal gains under full linguistic context and posit that visual embeddings extracted from deep vision models (ResNet for Multi30k, ResNeXt for How2) do not lend themselves to increasing the discriminativeness between the vocabulary elements at token-level prediction in NMT. We demonstrate this qualitatively by analyzing attention distributions and quantitatively through Principal Component Analysis, arriving at the conclusion that it is the quality of the visual embeddings, rather than the length of the sentences, that needs to be improved in existing MMT datasets.
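The abstract mentions a quantitative probe of the visual embeddings via Principal Component Analysis. The sketch below illustrates one way such a probe could look: computing how many principal components are needed to capture most of the variance of pooled image/video features. This is a minimal illustration, not the authors' actual analysis; the array shapes, function names, and the 95% threshold are assumptions for demonstration.

```python
# Minimal sketch: probing the discriminativeness of visual embeddings with PCA.
# Assumes `features` is an (N, D) array of pooled visual features
# (e.g., 2048-d ResNet/ResNeXt vectors); all names and values here are illustrative.
import numpy as np
from sklearn.decomposition import PCA


def effective_dimensionality(features: np.ndarray, variance_threshold: float = 0.95) -> int:
    """Return the number of principal components needed to explain
    `variance_threshold` of the total variance. A small count relative to D
    suggests the embeddings occupy a low-variance subspace, offering little
    extra discriminative signal for token-level prediction."""
    pca = PCA(n_components=min(features.shape))
    pca.fit(features)
    cumulative = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cumulative, variance_threshold) + 1)


if __name__ == "__main__":
    # Stand-in random features; replace with the real embedding matrix.
    rng = np.random.default_rng(0)
    features = rng.normal(size=(1000, 2048)).astype(np.float32)
    print(effective_dimensionality(features))
```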
- Anthology ID:
- W19-8620
- Volume:
- Proceedings of the 12th International Conference on Natural Language Generation
- Month:
- October–November
- Year:
- 2019
- Address:
- Tokyo, Japan
- Editors:
- Kees van Deemter, Chenghua Lin, Hiroya Takamura
- Venue:
- INLG
- SIG:
- SIGGEN
- Publisher:
- Association for Computational Linguistics
- Pages:
- 147–151
- URL:
- https://aclanthology.org/W19-8620
- DOI:
- 10.18653/v1/W19-8620
- Cite (ACL):
- Vikas Raunak, Sang Keun Choe, Quanyang Lu, Yi Xu, and Florian Metze. 2019. On Leveraging the Visual Modality for Neural Machine Translation. In Proceedings of the 12th International Conference on Natural Language Generation, pages 147–151, Tokyo, Japan. Association for Computational Linguistics.
- Cite (Informal):
- On Leveraging the Visual Modality for Neural Machine Translation (Raunak et al., INLG 2019)
- PDF:
- https://preview.aclanthology.org/naacl-24-ws-corrections/W19-8620.pdf
- Data
- How2