Sequence to Sequence Mixture Model for Diverse Machine Translation

Xuanli He, Gholamreza Haffari, Mohammad Norouzi


Abstract
Sequence-to-sequence (SEQ2SEQ) models lack diversity in their generated translations. This can be attributed to their limitations in capturing lexical and syntactic variations in parallel corpora, which arise from differing styles, genres, and topics, as well as from the inherent ambiguity of the human translation process. In this paper, we develop a novel sequence-to-sequence mixture (S2SMIX) model that improves both translation diversity and quality by adopting a committee of specialized translation models rather than a single translation model. Each mixture component selects its own training dataset via optimization of the marginal log-likelihood, which leads to a soft clustering of the parallel corpus. Experiments on four language pairs demonstrate the superiority of our mixture model over the SEQ2SEQ model with both standard and diversity-encouraged beam search. Our mixture model adds a negligible number of parameters and incurs no extra computation at decoding time.
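To make the training objective concrete: marginalizing over a latent component assignment z gives log p(y | x) = logsumexp_k [ log p(z=k) + log p(y | x, z=k) ], and optimizing this marginal log-likelihood is what softly clusters the parallel corpus across components. Below is a minimal sketch of that objective, not the authors' code; it assumes PyTorch, a uniform prior over components, and a hypothetical `component_logps` tensor standing in for the per-component decoder log-likelihoods.

```python
import math
import torch

def mixture_marginal_nll(component_logps: torch.Tensor,
                         log_prior: torch.Tensor) -> torch.Tensor:
    """Negative marginal log-likelihood of a K-component mixture.

    component_logps: (batch, K) log p(y | x, z=k), the log-likelihood of each
                     target sentence under each of K specialized decoders
                     (hypothetical stand-in; not the paper's implementation).
    log_prior:       (K,) log p(z=k); uniform here, i.e. log(1/K).

    Marginalizing over the latent component z:
        log p(y | x) = logsumexp_k [ log p(z=k) + log p(y | x, z=k) ],
    so gradients softly assign each sentence pair to the components that
    explain it best -- the "soft clustering" described in the abstract.
    """
    return -torch.logsumexp(component_logps + log_prior, dim=-1).mean()

# Toy usage: a batch of 4 sentence pairs and K = 3 mixture components.
K = 3
component_logps = torch.randn(4, K, requires_grad=True)  # stand-in for decoder outputs
log_prior = torch.full((K,), math.log(1.0 / K))
loss = mixture_marginal_nll(component_logps, log_prior)
loss.backward()  # gradients flow most strongly to the best-fitting components
```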
Anthology ID:
K18-1056
Volume:
Proceedings of the 22nd Conference on Computational Natural Language Learning
Month:
October
Year:
2018
Address:
Brussels, Belgium
Editors:
Anna Korhonen, Ivan Titov
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
583–592
URL:
https://aclanthology.org/K18-1056
DOI:
10.18653/v1/K18-1056
Cite (ACL):
Xuanli He, Gholamreza Haffari, and Mohammad Norouzi. 2018. Sequence to Sequence Mixture Model for Diverse Machine Translation. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 583–592, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Sequence to Sequence Mixture Model for Diverse Machine Translation (He et al., CoNLL 2018)
PDF:
https://preview.aclanthology.org/add_acl24_videos/K18-1056.pdf