Abstract
We seek to make maximal use of diverse data sources, such as parallel and monolingual data, to build an effective and efficient document-level translation system. In particular, we start by considering a noisy channel approach (CITATION) that combines a target-to-source translation model with a language model. By applying Bayes’ rule strategically, we reformulate this approach as a log-linear combination of translation, sentence-level language model, and document-level language model probabilities. In addition to using static coefficients for each term, this formulation allows dynamic per-token weights to be learned, more finely controlling the impact of the language models. Both static and dynamic coefficients lead to improvements over a context-agnostic baseline and a context-aware concatenation model.
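The abstract compresses the derivation, so here is a sketch of the reformulation it describes, under notation we assume for illustration (x: source sentence, y: target translation, C: document context) rather than the paper's own:

```latex
% Sketch of the reformulation (requires amsmath).
\begin{align*}
% Noisy channel objective: target-to-source channel model
% plus a document-level language model over the target.
\hat{y} &= \operatorname*{arg\,max}_{y} \; \log p(x \mid y) + \log p_{\text{doc}}(y \mid C) \\
% Bayes' rule on the channel term:
% p(x \mid y) = p(y \mid x)\,p(x)/p(y); p(x) is constant in y and drops out.
        &= \operatorname*{arg\,max}_{y} \; \log p(y \mid x) - \log p_{\text{sent}}(y) + \log p_{\text{doc}}(y \mid C) \\
% Log-linear generalization: the fixed signs become free coefficients,
% either static scalars or learned per-token weights.
        &\to \operatorname*{arg\,max}_{y} \; \log p(y \mid x) + \lambda_{s} \log p_{\text{sent}}(y) + \lambda_{d} \log p_{\text{doc}}(y \mid C)
\end{align*}
```

In the dynamic variant, the coefficients vary per target token. A minimal scoring sketch in PyTorch, with all names illustrative and not taken from the paper's code:

```python
import torch

def loglinear_score(
    log_p_tm: torch.Tensor,    # [T] log p(y_t | y_<t, x) from the translation model
    log_p_sent: torch.Tensor,  # [T] log p(y_t | y_<t) from the sentence-level LM
    log_p_doc: torch.Tensor,   # [T] log p(y_t | y_<t, C) from the document-level LM
    lam_sent,                  # scalar (static) or [T] tensor (dynamic per-token)
    lam_doc,                   # scalar (static) or [T] tensor (dynamic per-token)
) -> torch.Tensor:
    # Broadcasting covers both the static and the dynamic case;
    # the hypothesis score is the sum of per-token combined scores.
    return (log_p_tm + lam_sent * log_p_sent + lam_doc * log_p_doc).sum()

# The exact Bayes reformulation corresponds to lam_sent = -1.0 and lam_doc = +1.0;
# the log-linear view instead treats both coefficients as tunable or learned.
```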
- Anthology ID: 2020.spnlp-1.11
- Volume: Proceedings of the Fourth Workshop on Structured Prediction for NLP
- Month: November
- Year: 2020
- Address: Online
- Editors: Priyanka Agrawal, Zornitsa Kozareva, Julia Kreutzer, Gerasimos Lampouras, André Martins, Sujith Ravi, Andreas Vlachos
- Venue: spnlp
- Publisher: Association for Computational Linguistics
- Pages: 95–101
- URL: https://aclanthology.org/2020.spnlp-1.11
- DOI: 10.18653/v1/2020.spnlp-1.11
- Cite (ACL): Sébastien Jean and Kyunghyun Cho. 2020. Log-Linear Reformulation of the Noisy Channel Model for Document-Level Neural Machine Translation. In Proceedings of the Fourth Workshop on Structured Prediction for NLP, pages 95–101, Online. Association for Computational Linguistics.
- Cite (Informal): Log-Linear Reformulation of the Noisy Channel Model for Document-Level Neural Machine Translation (Jean & Cho, spnlp 2020)
- PDF: https://preview.aclanthology.org/nschneid-patch-4/2020.spnlp-1.11.pdf