Exploiting Multimodal Reinforcement Learning for Simultaneous Machine Translation

Julia Ive, Andy Mingren Li, Yishu Miao, Ozan Caglayan, Pranava Madhyastha, Lucia Specia


Abstract
This paper addresses simultaneous machine translation (SiMT) by exploring two main concepts: (a) adaptive policies that learn a good trade-off between high translation quality and low latency; and (b) visual information to support this process by providing additional (visual) contextual cues that may be available before the textual input is produced. To this end, we propose a multimodal approach to simultaneous machine translation using reinforcement learning, with strategies to integrate visual and textual information in both the agent and the environment. We explore how different types of visual information and integration strategies affect the quality and latency of simultaneous translation models, and demonstrate that visual cues lead to higher quality while keeping latency low.
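The adaptive-policy idea from the abstract — an agent that, at each step, chooses between READing more source input and WRITEing a target token, rewarded for quality and penalized for latency — can be illustrated with a toy sketch. All names, the quality proxy, and the latency penalty below are illustrative assumptions, not the authors' implementation:

```python
# Toy sketch of a READ/WRITE policy for simultaneous translation.
# The reward trades off a crude quality proxy against average lagging;
# a wait-k policy serves as a simple fixed baseline.
READ, WRITE = 0, 1

def rollout(source, policy, latency_weight=0.1):
    """Run one episode: reveal `source` tokens via READ, emit tokens via WRITE.

    Returns (output, reward), where reward = quality_proxy - latency penalty.
    """
    read_idx, output, delays = 0, [], []
    while len(output) < len(source):
        action = policy(read_idx, len(output), len(source))
        if action == READ and read_idx < len(source):
            read_idx += 1                # reveal one more source token
        else:
            # Toy "translation": copy the aligned source token already read.
            output.append(source[min(len(output), read_idx - 1)])
            delays.append(read_idx)      # source tokens read before this write
    # Quality proxy: fraction of outputs whose aligned source token was read.
    quality = sum(1 for i, d in enumerate(delays) if d > i) / len(source)
    avg_lag = sum(delays) / len(delays)  # crude average-lagging latency
    return output, quality - latency_weight * avg_lag

def wait_k(k):
    """Fixed wait-k policy: stay k source tokens ahead of the output."""
    return lambda r, w, n: READ if (r - w < k and r < n) else WRITE
```

An RL-trained adaptive policy would replace `wait_k` with a learned function of the (here, multimodal) agent state; in this toy setup, smaller `k` yields the same output at lower latency, hence higher reward.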
Anthology ID:
2021.eacl-main.281
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Editors:
Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
3222–3233
URL:
https://aclanthology.org/2021.eacl-main.281
DOI:
10.18653/v1/2021.eacl-main.281
Bibkey:
Cite (ACL):
Julia Ive, Andy Mingren Li, Yishu Miao, Ozan Caglayan, Pranava Madhyastha, and Lucia Specia. 2021. Exploiting Multimodal Reinforcement Learning for Simultaneous Machine Translation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3222–3233, Online. Association for Computational Linguistics.
Cite (Informal):
Exploiting Multimodal Reinforcement Learning for Simultaneous Machine Translation (Ive et al., EACL 2021)
PDF:
https://preview.aclanthology.org/naacl24-info/2021.eacl-main.281.pdf
Code:
ImperialNLP/pysimt