Multimodal Simultaneous Machine Translation

Lucia Specia


Abstract
Simultaneous machine translation (SiMT) aims to translate a continuous input text stream into another language with the lowest latency and highest quality possible. Therefore, translation has to start with an incomplete source text, which is read progressively, creating the need for anticipation. In this talk I will present work where we seek to understand whether the addition of visual information can compensate for the missing source context. We analyse the impact of different multimodal approaches and visual features on state-of-the-art SiMT frameworks, including fixed and dynamic policy approaches using reinforcement learning. Our results show that visual context is helpful and that visually-grounded models based on explicit object region information perform the best. Our qualitative analysis illustrates cases where only the multimodal systems are able to translate correctly from English into gender-marked languages, as well as deal with differences in word order, such as adjective-noun placement between English and French.
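To make the "fixed policy" setup mentioned in the abstract concrete, below is a minimal sketch of a wait-k style decoding loop: read k source tokens, then alternate writing one target token and reading one more source token. This is an illustrative assumption about the fixed-policy family referenced in the talk, not the interface of the systems it describes; the names wait_k_decode and translate_step are hypothetical.

# Minimal sketch of a fixed wait-k simultaneous decoding policy.
# translate_step(source_prefix, target_so_far) is a hypothetical
# single-step decoder supplied by the caller; it returns the next
# target token, or None when the translation is complete.
def wait_k_decode(source_tokens, k, translate_step):
    target = []
    read = min(k, len(source_tokens))  # initial READ of k source tokens
    while True:
        # WRITE: emit one target token from the source prefix seen so far
        token = translate_step(source_tokens[:read], target)
        if token is None:
            break
        target.append(token)
        # READ: consume one more source token, if any remain
        if read < len(source_tokens):
            read += 1
    return target

A dynamic policy replaces this fixed READ/WRITE schedule with a learned agent that decides at each step whether to read or write, for instance trained with reinforcement learning as in the approaches analysed in the talk.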
Anthology ID:
2021.mmtlrl-1.5
Volume:
Proceedings of the First Workshop on Multimodal Machine Translation for Low Resource Languages (MMTLRL 2021)
Month:
September
Year:
2021
Address:
Online (Virtual Mode)
Editors:
Thoudam Doren Singh, Cristina España i Bonet, Sivaji Bandyopadhyay, Josef van Genabith
Venue:
MMTLRL
Publisher:
INCOMA Ltd.
Pages:
30
URL:
https://aclanthology.org/2021.mmtlrl-1.5
Cite (ACL):
Lucia Specia. 2021. Multimodal Simultaneous Machine Translation. In Proceedings of the First Workshop on Multimodal Machine Translation for Low Resource Languages (MMTLRL 2021), page 30, Online (Virtual Mode). INCOMA Ltd.
Cite (Informal):
Multimodal Simultaneous Machine Translation (Specia, MMTLRL 2021)
PDF:
https://preview.aclanthology.org/naacl24-info/2021.mmtlrl-1.5.pdf