Proceedings of the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019)

Andrei Popescu-Belis, Sharid Loáiciga, Christian Hardmeier, Deyi Xiong (Editors)


Anthology ID: D19-65
Month: November
Year: 2019
Address: Hong Kong, China
Venue: DiscoMT
Publisher: Association for Computational Linguistics
URL: https://aclanthology.org/D19-65
PDF: https://preview.aclanthology.org/emnlp22-frontmatter/D19-65.pdf

Analysing Coreference in Transformer Outputs
Ekaterina Lapshinova-Koltunski | Cristina España-Bonet | Josef van Genabith

We analyse coreference phenomena in three neural machine translation systems trained under different data settings, with or without access to explicit intra- and cross-sentential anaphoric information. We compare system performance on two different genres: news and TED talks. To do this, we manually annotate the (possibly incorrect) coreference chains in the MT outputs and evaluate the coreference chain translations. We define an error typology that aims to go beyond pronoun translation adequacy and includes types such as incorrect word selection or missing words. The features of coreference chains in automatic translations are also compared to those of the source texts and human translations. The analysis shows stronger potential translationese effects in machine-translated outputs than in human translations.
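
The chain-feature comparison can be pictured with a small sketch. The code below computes a few simple chain statistics (number of chains, mean and maximum chain length) for different text variants; the input format and this particular feature set are our own assumptions for illustration, not the authors' annotation scheme.

```python
# Minimal sketch of comparing coreference-chain features across text variants
# (source, human translation, MT output). Input format (assumed): a list of
# documents, each a list of chains, each chain a list of mention strings.
from statistics import mean

def chain_features(documents):
    """Compute simple per-document coreference-chain statistics."""
    feats = []
    for chains in documents:
        feats.append({
            "num_chains": len(chains),
            "mean_chain_length": mean(len(c) for c in chains) if chains else 0.0,
            "longest_chain": max((len(c) for c in chains), default=0),
        })
    return feats

# Toy example: one document per text variant.
source = [[["the president", "he", "his"], ["the law", "it"]]]
mt_out = [[["the president", "he"], ["the law", "it", "the law"]]]

for name, docs in [("source", source), ("MT output", mt_out)]:
    print(name, chain_features(docs))
```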

Context-Aware Neural Machine Translation Decoding
Eva Martínez Garcia | Carles Creus | Cristina España-Bonet

This work presents a decoding architecture that fuses the information from a neural translation model with the context semantics captured by a semantic-space language model based on word embeddings. The method extends the beam search decoding process and can therefore be applied to any neural machine translation framework. With this, we sidestep two drawbacks of current document-level systems: (i) we do not modify the training process, so there is no increase in training time, and (ii) we do not require document-level annotated data. We analyze the impact of the fusion approach and its parameters on the final translation quality for English–Spanish. We obtain consistent and statistically significant improvements in terms of BLEU and METEOR, and we observe that the fused systems are able to handle synonyms to propose more adequate translations, as well as to disambiguate among several translation candidates for a word.
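
As a rough illustration of this kind of fusion, the sketch below adds a weighted cosine-similarity term between each candidate word's embedding and a context vector to the NMT log-probability inside beam search. The scoring interface (`nmt_log_probs`), the embedding lookup (`embed`), and the fusion weight `lam` are assumptions, not the paper's actual implementation.

```python
# Minimal sketch of fusing NMT scores with a word-embedding context score
# inside beam search. nmt_log_probs(hyp) is assumed to return a dict mapping
# each vocabulary word to its log-probability given the hypothesis so far.
import heapq
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def fused_beam_search(nmt_log_probs, embed, context_vec, vocab,
                      beam_size=4, max_len=20, lam=0.3, eos="</s>"):
    """Beam search where each candidate word's score is its NMT log-prob
    plus lam times its cosine similarity to a context vector."""
    beams = [(0.0, ["<s>"])]
    for _ in range(max_len):
        candidates = []
        for score, hyp in beams:
            if hyp[-1] == eos:               # finished hypotheses carry over
                candidates.append((score, hyp))
                continue
            logp = nmt_log_probs(hyp)
            for word in vocab:
                sem = cosine(embed(word), context_vec)
                candidates.append((score + logp[word] + lam * sem, hyp + [word]))
        beams = heapq.nlargest(beam_size, candidates, key=lambda x: x[0])
        if all(h[-1] == eos for _, h in beams):
            break
    return beams[0][1]
```

Because the fusion happens purely at decoding time, it matches the abstract's point that training is untouched: any sentence-level model exposing per-step log-probabilities could be plugged in.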

When and Why is Document-level Context Useful in Neural Machine Translation?
Yunsu Kim | Duc Thanh Tran | Hermann Ney

Document-level context has received much attention as a way to compensate for the limitations of neural machine translation (NMT) of isolated sentences. However, recent advances in document-level NMT focus on sophisticated integration of the context, explaining its improvements with only a few selected examples or targeted test sets. We extensively quantify the causes of improvements by a document-level model on general test sets, clarifying the limits of the usefulness of document-level context in NMT. We show that most of the improvements are not interpretable as utilizing the context. We also show that a minimal encoding is sufficient for context modeling and that very long context is not helpful for NMT.
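
A "minimal encoding" of context, in the spirit described here, might look like the following sketch: the previous sentence is compressed into a single mean-pooled embedding that is prepended to the source encoding. The toy embedding table and dimensions are assumptions for illustration only.

```python
# Minimal sketch of a minimal context encoding: the previous sentence becomes
# one mean-pooled vector prepended to the source sentence's token embeddings.
import numpy as np

rng = np.random.default_rng(0)
emb = {}  # toy embedding table, filled on demand

def embed(tokens, dim=8):
    return np.stack([emb.setdefault(t, rng.standard_normal(dim)) for t in tokens])

def encode_with_context(src_tokens, prev_tokens):
    src = embed(src_tokens)                 # (len_src, dim)
    ctx = embed(prev_tokens).mean(axis=0)   # a single vector for the whole context
    return np.vstack([ctx, src])            # context prepended as one extra "token"

enc = encode_with_context(["she", "signed", "it"], ["the", "law", "passed"])
print(enc.shape)  # (4, 8): 1 context vector + 3 source tokens
```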

Data augmentation using back-translation for context-aware neural machine translation
Amane Sugiyama | Naoki Yoshinaga

A single sentence does not always convey enough information to translate it into other languages. Some target languages need to add or specialize words that are omitted or ambiguous in the source language (e.g., zero pronouns when translating Japanese into English, or epicene pronouns when translating English into French). To translate such ambiguous sentences, we need context beyond a single sentence, and context-aware neural machine translation (NMT) has so far been explored for this purpose. However, large parallel corpora are not easily available for training accurate context-aware NMT models. In this study, we first obtain large-scale pseudo-parallel corpora by back-translating monolingual data, and then investigate their impact on the translation accuracy of context-aware NMT models. We evaluated context-aware NMT models trained with small parallel corpora and the large-scale pseudo-parallel corpora on English-Japanese and English-French datasets, demonstrating the large impact of data augmentation on context-aware NMT models.
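
The augmentation pipeline can be sketched as follows: back-translate each sentence of a target-side monolingual document, then pair each pseudo-source sentence, together with its pseudo-source predecessor as context, with the original target sentence. The `backtranslate` stub below stands in for a trained target-to-source NMT system; everything else is an illustrative assumption.

```python
# Minimal sketch of building pseudo-parallel, context-aware training data by
# back-translating target-side monolingual documents.
def backtranslate(sentence):
    # Placeholder: a real system would translate target -> source here.
    return "[bt] " + sentence

def make_context_aware_pairs(monolingual_docs):
    """Yield ((source_context, source_sentence), target_sentence) tuples;
    the previous pseudo-source sentence in the same document is the context."""
    for doc in monolingual_docs:
        pseudo_src = [backtranslate(s) for s in doc]
        for i, tgt in enumerate(doc):
            ctx = pseudo_src[i - 1] if i > 0 else ""
            yield (ctx, pseudo_src[i]), tgt

docs = [["Elle est arrivée hier.", "Elle était fatiguée."]]
for (ctx, src), tgt in make_context_aware_pairs(docs):
    print(repr(ctx), "|", src, "->", tgt)
```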

Context-aware Neural Machine Translation with Coreference Information
Takumi Ohtani | Hidetaka Kamigaito | Masaaki Nagata | Manabu Okumura

We present neural machine translation models that translate a sentence within a text by using a graph-based encoder which can explicitly consider coreference relations provided within the text. The graph-based encoder can dynamically encode the source text without attending to all tokens in the text. In experiments, our proposed models provide statistically significant improvements of up to 0.9 BLEU points over the previous approach on the OpenSubtitle2018 English-to-Japanese dataset. Experimental results also show that the graph-based encoder handles longer texts well compared with the previous approach.
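
One simple way to picture such an encoder is as a sparse attention mask built from coreference links: each token attends to a local window plus any coreferent mentions, rather than to all tokens. The window size and mask convention in the sketch below are assumptions; the paper's actual graph encoder is more elaborate.

```python
# Minimal sketch: build a boolean attention mask from coreference links so
# that attention is local except for long-distance coreference edges.
import numpy as np

def coref_attention_mask(n_tokens, coref_links, window=2):
    """coref_links: (i, j) token-index pairs belonging to the same chain."""
    mask = np.zeros((n_tokens, n_tokens), dtype=bool)
    for i in range(n_tokens):
        lo, hi = max(0, i - window), min(n_tokens, i + window + 1)
        mask[i, lo:hi] = True              # local window
    for i, j in coref_links:
        mask[i, j] = mask[j, i] = True     # long-distance coreference edge
    return mask

# Tokens 1 ("president") and 7 ("he") corefer across a sentence boundary.
print(coref_attention_mask(9, [(1, 7)]).astype(int))
```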

Analysing concatenation approaches to document-level NMT in two different domains
Yves Scherrer | Jörg Tiedemann | Sharid Loáiciga

In this paper, we investigate how different aspects of discourse context affect the performance of recent neural MT systems. We describe two popular datasets covering news and movie subtitles and provide a thorough analysis of the distribution of various document-level features in their domains. Furthermore, we train a set of context-aware MT models on both datasets and propose a comparative evaluation scheme that contrasts coherent context with artificially scrambled documents and absent context, arguing that the impact of discourse-aware MT models becomes visible in this way. Our results show that the models are indeed affected by the manipulation of the test data, providing a different view of document-level translation quality than absolute sentence-level scores.
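
The contrastive evaluation scheme can be sketched as scoring each test sentence under three conditions: its true preceding context, context drawn from the same document after shuffling, and no context at all. The `score_with_context` placeholder below stands in for running a context-aware MT model and computing a quality score; the single-sentence context window is also an assumption.

```python
# Minimal sketch of coherent-vs-scrambled-vs-absent context evaluation.
# A genuinely context-sensitive model should score lower when its context
# is scrambled or removed.
import random

def score_with_context(sentence, context):
    # Placeholder: run a context-aware MT model and return a quality score
    # (e.g., sentence-level BLEU against a reference).
    return 0.0

def evaluate_conditions(documents, seed=0):
    rng = random.Random(seed)
    totals = {"coherent": 0.0, "scrambled": 0.0, "none": 0.0}
    n = 0
    for doc in documents:
        shuffled = doc[:]
        rng.shuffle(shuffled)               # destroy document coherence
        for i, sent in enumerate(doc):
            totals["coherent"] += score_with_context(sent, doc[max(0, i - 1):i])
            totals["scrambled"] += score_with_context(sent, shuffled[max(0, i - 1):i])
            totals["none"] += score_with_context(sent, [])
            n += 1
    return {k: v / max(n, 1) for k, v in totals.items()}
```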