Laurent Sartran
2020
The DeepMind Chinese–English Document Translation System at WMT2020
Lei Yu | Laurent Sartran | Po-Sen Huang | Wojciech Stokowiec | Domenic Donato | Srivatsan Srinivasan | Alek Andreev | Wang Ling | Sona Mokra | Agustin Dal Lago | Yotam Doron | Susannah Young | Phil Blunsom | Chris Dyer
Proceedings of the Fifth Conference on Machine Translation
This paper describes the DeepMind submission to the Chinese→English constrained data track of the WMT2020 Shared Task on News Translation. The submission employs a noisy channel factorization as the backbone of a document translation system. This approach allows the flexible combination of a number of independent component models, which are further augmented with back-translation, distillation, fine-tuning with in-domain data, Monte-Carlo Tree Search decoding, and improved uncertainty estimation. To address persistent issues with the premature truncation of long sequences, we included specialized length models and sentence segmentation techniques. Our final system provides a 9.9 BLEU point improvement over a baseline Transformer on our test set (newstest 2019).
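As a rough illustration of the noisy channel backbone described in this abstract, the sketch below scores a candidate translation by combining a reverse (channel) translation model with a target-side language model. The callables, interpolation weight, and length bonus are hypothetical placeholders for illustration, not the submission's actual component models or settings.

```python
def noisy_channel_score(src, candidate, channel_logprob, lm_logprob,
                        lam_lm=1.0, len_bonus=0.0):
    """Illustrative noisy-channel score for a candidate translation y of source x:
    log p(x | y) + lam_lm * log p(y) + len_bonus * |y|.
    channel_logprob and lm_logprob are assumed callables returning log-probabilities;
    lam_lm and len_bonus are illustrative hyperparameters."""
    score = channel_logprob(src, candidate)        # channel model: log p(x | y)
    score += lam_lm * lm_logprob(candidate)        # language-model prior: log p(y)
    score += len_bonus * len(candidate.split())    # counteracts the bias toward short outputs
    return score


def rerank(src, candidates, channel_logprob, lm_logprob):
    """Pick the highest-scoring candidate under the noisy-channel objective."""
    return max(candidates,
               key=lambda y: noisy_channel_score(src, y, channel_logprob, lm_logprob))
```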
Better Document-Level Machine Translation with Bayes’ Rule
Lei Yu | Laurent Sartran | Wojciech Stokowiec | Wang Ling | Lingpeng Kong | Phil Blunsom | Chris Dyer
Transactions of the Association for Computational Linguistics, Volume 8
We show that Bayes’ rule provides an effective mechanism for creating document translation models that can be learned from only parallel sentences and monolingual documents, a compelling benefit because parallel documents are not always available. In our formulation, the posterior probability of a candidate translation is the product of the unconditional (prior) probability of the candidate output document and the “reverse translation probability” of translating the candidate output back into the source language. Our proposed model uses a powerful autoregressive language model as the prior on target language documents, but it assumes that each sentence is translated independently from the target to the source language. Crucially, at test time, when a source document is observed, the document language model prior induces dependencies between the translations of the source sentences in the posterior. The model’s independence assumption not only enables efficient use of available data, but it additionally admits a practical left-to-right beam-search algorithm for carrying out inference. Experiments show that our model benefits from using cross-sentence context in the language model, and it outperforms existing document translation approaches.
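To make the factorization described in this abstract concrete, the sketch below scores a candidate document translation as the sum of a document-level language-model prior on the output and per-sentence reverse translation probabilities. The two scoring callables are assumed interfaces for illustration, not the paper's actual models.

```python
def document_posterior_score(src_sents, cand_sents, doc_lm_logprob, reverse_tm_logprob):
    """Illustrative posterior score (up to a constant) for a candidate document translation:
    log p(y_doc | x_doc) is proportional to log p(y_doc) + sum_i log p(x_i | y_i).
    doc_lm_logprob scores the whole candidate document under a target-language
    document LM (the prior); reverse_tm_logprob scores one source sentence given
    its candidate translation, independently per sentence. Both are assumed callables."""
    assert len(src_sents) == len(cand_sents), "one candidate sentence per source sentence"
    score = doc_lm_logprob(cand_sents)             # prior over the full output document
    for x_i, y_i in zip(src_sents, cand_sents):
        score += reverse_tm_logprob(x_i, y_i)      # reverse translation: log p(x_i | y_i)
    return score
```

Because the prior is evaluated over the whole candidate document while the reverse translation terms factor per sentence, the translations of different source sentences become coupled only through the document language model at decoding time, which is what the left-to-right beam search exploits.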