Cross-lingual Visual Pre-training for Multimodal Machine Translation

Ozan Caglayan, Menekse Kuyu, Mustafa Sercan Amac, Pranava Madhyastha, Erkut Erdem, Aykut Erdem, Lucia Specia


Abstract
Pre-trained language models have been shown to substantially improve performance on many natural language tasks. Although the early focus of such models was single-language pre-training, recent advances have resulted in cross-lingual and visual pre-training methods. In this paper, we combine these two approaches to learn visually-grounded cross-lingual representations. Specifically, we extend translation language modelling (Lample and Conneau, 2019) with masked region classification and perform pre-training with three-way parallel vision & language corpora. We show that when fine-tuned for multimodal machine translation, these models obtain state-of-the-art performance. We also provide qualitative insights into the usefulness of the learned grounded representations.
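Below is a minimal PyTorch sketch of the kind of objective the abstract describes: translation language modelling (TLM) over a concatenated source-target sentence pair, extended with masked region classification (MRC) over detected image regions, trained jointly on three-way parallel (source sentence, target sentence, image) examples. It is not the authors' implementation; all module names, dimensions, and the toy masking below are illustrative assumptions.

# Sketch of a joint TLM + masked region classification objective.
# Assumptions (not from the paper): a single Transformer encoder over the
# concatenated [source tokens; target tokens; region features] sequence,
# 1601 detector object classes, 2048-d region features, and label value
# -100 to mark positions excluded from the loss.
import torch
import torch.nn as nn

class VTLMSketch(nn.Module):
    def __init__(self, vocab_size=30000, n_region_classes=1601,
                 d_model=512, n_heads=8, n_layers=6, region_feat_dim=2048):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.region_proj = nn.Linear(region_feat_dim, d_model)  # detector features -> model dim
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.tlm_head = nn.Linear(d_model, vocab_size)          # predict masked tokens
        self.mrc_head = nn.Linear(d_model, n_region_classes)    # classify masked regions

    def forward(self, src_ids, tgt_ids, region_feats):
        # TLM input: source and target sentences concatenated along the
        # sequence axis, so a masked token can attend to both languages.
        text = self.tok_emb(torch.cat([src_ids, tgt_ids], dim=1))
        regions = self.region_proj(region_feats)
        h = self.encoder(torch.cat([text, regions], dim=1))
        n_text = text.size(1)
        return self.tlm_head(h[:, :n_text]), self.mrc_head(h[:, n_text:])

# Toy forward/backward pass with random stand-ins for one batch of
# three-way parallel data (token ids are assumed to be already masked).
model = VTLMSketch()
src = torch.randint(0, 30000, (2, 12))        # English token ids
tgt = torch.randint(0, 30000, (2, 14))        # German token ids
feats = torch.randn(2, 36, 2048)              # 36 detected region features per image
tok_logits, reg_logits = model(src, tgt, feats)

# Cross-entropy is computed only at masked positions; -100 (the default
# ignore_index) marks everything else.
tok_labels = torch.full((2, 26), -100, dtype=torch.long)
tok_labels[:, 3] = 42                          # one masked token per sentence pair
reg_labels = torch.full((2, 36), -100, dtype=torch.long)
reg_labels[:, 5] = 7                           # one masked region, detector class 7
loss = (nn.functional.cross_entropy(tok_logits.reshape(-1, 30000), tok_labels.reshape(-1))
        + nn.functional.cross_entropy(reg_logits.reshape(-1, 1601), reg_labels.reshape(-1)))
loss.backward()

The design point the sketch illustrates is that a single encoder over the concatenated text-plus-region sequence lets masked tokens attend to the other language and to the image, which is what makes the learned cross-lingual representations visually grounded.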
Anthology ID:
2021.eacl-main.112
Original:
2021.eacl-main.112v1
Version 2:
2021.eacl-main.112v2
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Editors:
Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
1317–1324
URL:
https://preview.aclanthology.org/build-pipeline-with-new-library/2021.eacl-main.112/
DOI:
10.18653/v1/2021.eacl-main.112
Cite (ACL):
Ozan Caglayan, Menekse Kuyu, Mustafa Sercan Amac, Pranava Madhyastha, Erkut Erdem, Aykut Erdem, and Lucia Specia. 2021. Cross-lingual Visual Pre-training for Multimodal Machine Translation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1317–1324, Online. Association for Computational Linguistics.
Cite (Informal):
Cross-lingual Visual Pre-training for Multimodal Machine Translation (Caglayan et al., EACL 2021)
PDF:
https://preview.aclanthology.org/build-pipeline-with-new-library/2021.eacl-main.112.pdf
Data
Conceptual Captions
Flickr30k