Abstract
We show that the state-of-the-art Transformer MT model is not biased towards monotonic reordering (unlike previous recurrent neural network models), but that long-distance dependencies nevertheless remain a challenge for the model. Since most dependencies are short-distance, common evaluation metrics are little influenced by how well systems handle long-distance dependencies. We therefore propose an automatic approach for extracting challenge sets rich in long-distance dependencies, and argue that evaluation using this methodology provides a complementary perspective on system performance. To support our claim, we compile challenge sets for English-German and German-English that are much larger than any previously released challenge set for MT. The extracted sets are large enough to allow reliable automatic evaluation, which makes the proposed approach a scalable and practical solution for evaluating MT performance on the long tail of syntactic phenomena.
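The core idea of the extraction approach is to find sentences whose parses contain long-distance dependencies. The sketch below is not the paper's actual pipeline; it is a minimal illustration of filtering sentences by maximum head-dependent distance, assuming spaCy's dependency parser and a hypothetical distance threshold:

```python
# Minimal sketch (assumed, not the paper's method): mine sentences whose
# dependency parse contains at least one long-distance head-dependent arc.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this model is installed


def max_dependency_distance(sentence: str) -> int:
    """Return the longest head-dependent distance, in tokens, in the parse."""
    doc = nlp(sentence)
    # tok.i is the token's index in the doc; tok.head is its syntactic head.
    return max((abs(tok.i - tok.head.i) for tok in doc), default=0)


def select_challenge_sentences(sentences, min_distance=10):
    """Keep sentences with a dependency spanning >= min_distance tokens
    (the threshold of 10 is a hypothetical choice for illustration)."""
    return [s for s in sentences if max_dependency_distance(s) >= min_distance]


if __name__ == "__main__":
    corpus = [
        "The cat sat on the mat.",
        "The report that the committee had commissioned last year was finally published.",
    ]
    print(select_challenge_sentences(corpus))
```

The paper's actual procedure operates on parallel data and is more involved; this snippet only illustrates the general idea of selecting sentences by dependency length.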
- Anthology ID: K19-1028
- Volume: Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
- Month: November
- Year: 2019
- Address: Hong Kong, China
- Editors: Mohit Bansal, Aline Villavicencio
- Venue: CoNLL
- SIG: SIGNLL
- Publisher: Association for Computational Linguistics
- Pages: 291–303
- URL: https://preview.aclanthology.org/icon-24-ingestion/K19-1028/
- DOI: 10.18653/v1/K19-1028
- Cite (ACL): Leshem Choshen and Omri Abend. 2019. Automatically Extracting Challenge Sets for Non-Local Phenomena in Neural Machine Translation. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 291–303, Hong Kong, China. Association for Computational Linguistics.
- Cite (Informal): Automatically Extracting Challenge Sets for Non-Local Phenomena in Neural Machine Translation (Choshen & Abend, CoNLL 2019)
- PDF: https://preview.aclanthology.org/icon-24-ingestion/K19-1028.pdf