Tongran Liu
2020
Does Multi-Encoder Help? A Case Study on Context-Aware Neural Machine Translation
Bei Li | Hui Liu | Ziyang Wang | Yufan Jiang | Tong Xiao | Jingbo Zhu | Tongran Liu | Changliang Li
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
In encoder-decoder neural models, multiple encoders are commonly used to represent contextual information in addition to the individual sentence. In this paper, we investigate multi-encoder approaches in document-level neural machine translation (NMT). Surprisingly, we find that the context encoder not only encodes the surrounding sentences but also behaves as a noise generator. This makes us rethink the real benefits of the multi-encoder approach in context-aware translation: some of the improvements come from robust training. We compare several methods that introduce noise and/or a well-tuned dropout setup into the training of these encoders. Experimental results show that noisy training plays an important role in multi-encoder-based NMT, especially when the training data is small. Also, we establish a new state of the art on the IWSLT Fr-En task by careful use of noise generation and dropout methods.
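A minimal sketch (not the authors' code) of the multi-encoder idea described in this abstract: a second encoder represents the context sentence, and its output is gated into the source-sentence encoding; the noisy-training variant simply replaces the context representation with Gaussian noise, which is the kind of ablation used to separate the context signal from the regularization effect. The class and argument names (MultiEncoderFusion, noisy_training) and the gating fusion are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiEncoderFusion(nn.Module):
    """Toy two-encoder model: a source encoder plus a context encoder,
    fused with a learned gate."""
    def __init__(self, d_model=512, nhead=8, num_layers=2):
        super().__init__()
        self.src_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.ctx_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, src_emb, ctx_emb, noisy_training=False):
        h_src = self.src_encoder(src_emb)            # (B, T_src, d)
        if noisy_training:
            # Replace the context representation with Gaussian noise of the same shape.
            h_ctx = torch.randn_like(ctx_emb)
        else:
            h_ctx = self.ctx_encoder(ctx_emb)        # (B, T_ctx, d)
        # Pool the context and gate it into every source position.
        ctx_vec = h_ctx.mean(dim=1, keepdim=True).expand_as(h_src)
        g = torch.sigmoid(self.gate(torch.cat([h_src, ctx_vec], dim=-1)))
        return g * h_src + (1 - g) * ctx_vec
```

In this sketch, switching noisy_training on during training keeps the extra parameters and the regularization pressure but removes any document-level signal, which is the comparison the abstract refers to.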
Learning Architectures from an Extended Search Space for Language Modeling
Yinqiao Li | Chi Hu | Yuhao Zhang | Nuo Xu | Yufan Jiang | Tong Xiao | Jingbo Zhu | Tongran Liu | Changliang Li
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Neural architecture search (NAS) has advanced significantly in recent years, but most NAS systems restrict the search to learning architectures of a recurrent or convolutional cell. In this paper, we extend the search space of NAS. In particular, we present a general approach to learning both intra-cell and inter-cell architectures (which we call ESS). For a better search result, we design a joint learning method to perform intra-cell and inter-cell NAS simultaneously. We implement our model in a differentiable architecture search system. For recurrent neural language modeling, it significantly outperforms a strong baseline on the PTB and WikiText data, with a new state of the art on PTB. Moreover, the learned architectures show good transferability to other systems. For example, they improve state-of-the-art systems on the CoNLL and WNUT named entity recognition (NER) tasks and the CoNLL chunking task, indicating a promising line of research on large-scale pre-learned architectures.
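A toy DARTS-style mixed operation, sketched here only to make the differentiable-search setting concrete; it is not the ESS implementation. Each searchable edge mixes a few candidate primitives with softmax-normalized architecture weights, and extending the search space as in ESS amounts to placing such edges both inside a cell (intra-cell) and on the connections between cells (inter-cell), learning all the weights jointly. The MixedOp name and the primitive set are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One searchable edge: a softmax-weighted mixture of candidate operations."""
    def __init__(self, dim=64):
        super().__init__()
        # Candidate primitives; real systems use a richer set (identity, gates, convs, ...).
        self.ops = nn.ModuleList([
            nn.Identity(),
            nn.Sequential(nn.Linear(dim, dim), nn.Tanh()),
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()),
        ])
        # Architecture parameters, optimized jointly with the model weights.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=-1)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```

After the search, the primitive with the largest weight on each edge is kept to form the discrete architecture; stacking such edges within a recurrent cell and between consecutive cells gives the intra-cell/inter-cell picture the abstract describes.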
Co-authors
- Yufan Jiang 2
- Tong Xiao 2
- Jingbo Zhu 2
- Changliang Li 2
- Bei Li 1
Venues
- ACL 2