CMU’s Machine Translation System for IWSLT 2019

Tejas Srinivasan, Ramon Sanabria, Florian Metze


Abstract
In Neural Machine Translation (NMT) the usage of sub-words and characters as source and target units offers a simple and flexible solution for translation of rare and unseen words. However, selecting the optimal subword segmentation involves a trade-off between expressiveness and flexibility, and is language and dataset-dependent. We present Block Multitask Learning (BMTL), a novel NMT architecture that predicts multiple targets of different granularities simultaneously, removing the need to search for the optimal segmentation strategy. Our multi-task model exhibits improvements of up to 1.7 BLEU points on each decoder over single-task baseline models with the same number of parameters on datasets from two language pairs of IWSLT15 and one from IWSLT19. The multiple hypotheses generated at different granularities can also be combined as a post-processing step to give better translations.
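The core idea in the abstract — one model supervised at several target granularities at once, with the overall loss summed across the per-granularity decoders — can be illustrated with a minimal sketch. This is purely illustrative, not the paper's implementation: the `segment` function and the toy subword table are hypothetical stand-ins for a learned segmentation such as BPE.

```python
# Toy subword inventory standing in for a learned BPE merge table
# (hypothetical; the actual system learns its segmentation from data).
SUBWORDS = ["trans", "lat", "ion", "un", "seen"]

def segment(word, granularity):
    """Segment a word at the requested target granularity."""
    if granularity == "word":
        return [word]
    if granularity == "char":
        return list(word)
    if granularity == "subword":
        # Greedy longest-match over the toy subword table,
        # falling back to single characters for unknown pieces.
        pieces, i = [], 0
        while i < len(word):
            for sw in sorted(SUBWORDS, key=len, reverse=True):
                if word.startswith(sw, i):
                    pieces.append(sw)
                    i += len(sw)
                    break
            else:
                pieces.append(word[i])
                i += 1
        return pieces
    raise ValueError(f"unknown granularity: {granularity}")

# In the multi-task setup a shared encoder feeds one decoder per
# granularity; the training objective is the sum of per-decoder losses.
def multitask_loss(per_decoder_losses):
    return sum(per_decoder_losses)
```

For example, `segment("translation", "subword")` yields `["trans", "lat", "ion"]`, while the character decoder sees the same target spelled out letter by letter, so the model never has to commit to a single segmentation at training time.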
Anthology ID:
2019.iwslt-1.10
Volume:
Proceedings of the 16th International Conference on Spoken Language Translation
Month:
November 2-3
Year:
2019
Address:
Hong Kong
Editors:
Jan Niehues, Rolando Cattoni, Sebastian Stüker, Matteo Negri, Marco Turchi, Thanh-Le Ha, Elizabeth Salesky, Ramon Sanabria, Loic Barrault, Lucia Specia, Marcello Federico
Venue:
IWSLT
SIG:
SIGSLT
Publisher:
Association for Computational Linguistics
URL:
https://aclanthology.org/2019.iwslt-1.10
Cite (ACL):
Tejas Srinivasan, Ramon Sanabria, and Florian Metze. 2019. CMU’s Machine Translation System for IWSLT 2019. In Proceedings of the 16th International Conference on Spoken Language Translation, Hong Kong. Association for Computational Linguistics.
Cite (Informal):
CMU’s Machine Translation System for IWSLT 2019 (Srinivasan et al., IWSLT 2019)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2019.iwslt-1.10.pdf