Max Meng
2020
Regularized Context Gates on Transformer for Machine Translation
Xintong Li | Lemao Liu | Rui Wang | Guoping Huang | Max Meng
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Context gates are effective at controlling the contributions of the source and target contexts in recurrent neural network (RNN) based neural machine translation (NMT). However, it is challenging to extend them to the Transformer architecture, which is more complex than an RNN. This paper first provides a method to identify the source and target contexts and then introduces a gate mechanism to control their contributions in the Transformer. In addition, to further reduce the bias problem in the gate mechanism, this paper proposes a regularization method that guides the learning of the gates with supervision automatically generated using pointwise mutual information. Extensive experiments on 4 translation datasets demonstrate that the proposed model obtains an average gain of 1.0 BLEU score over a strong Transformer baseline.
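To make the gate mechanism concrete, here is a minimal, hypothetical PyTorch sketch of a context gate that mixes a source context vector (e.g., the output of encoder-decoder attention) and a target context vector (e.g., the output of decoder self-attention) in a Transformer decoder layer. The module name, dimensions, and the reduced form of the PMI-based supervision are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ContextGate(nn.Module):
    """Learned gate that balances source and target contexts in a decoder layer."""

    def __init__(self, d_model: int):
        super().__init__()
        # The gate is computed from the concatenation of both contexts.
        self.gate_proj = nn.Linear(2 * d_model, d_model)

    def forward(self, source_ctx: torch.Tensor, target_ctx: torch.Tensor):
        # source_ctx, target_ctx: (batch, tgt_len, d_model)
        g = torch.sigmoid(self.gate_proj(torch.cat([source_ctx, target_ctx], dim=-1)))
        # Convex combination: g weights the source, (1 - g) weights the target.
        mixed = g * source_ctx + (1.0 - g) * target_ctx
        return mixed, g

def gate_regularizer(g: torch.Tensor, pmi_target: torch.Tensor) -> torch.Tensor:
    """Penalize gates that deviate from a supervision signal derived from
    pointwise mutual information, here assumed to be reduced to a scalar in
    [0, 1] per target position (a simplification for this sketch)."""
    return ((g.mean(dim=-1) - pmi_target) ** 2).mean()
```

In training, a regularizer of this kind would typically be added to the usual cross-entropy loss with a small weight, so that the gates are nudged toward the PMI-derived supervision without overriding the translation objective.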
2019
On the Word Alignment from Neural Machine Translation
Xintong Li | Guanlin Li | Lemao Liu | Max Meng | Shuming Shi
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Prior research suggests that neural machine translation (NMT) captures word alignment through its attention mechanism; however, this paper finds that attention can almost completely fail to capture word alignment for some NMT models. It therefore proposes two methods to induce word alignment that are general and agnostic to specific NMT models. Experiments show that both methods induce much better word alignment than attention. This paper further visualizes translations through the word alignment induced by NMT. In particular, it analyzes the effect of alignment errors on translation errors at the word level, and its quantitative analysis over many test examples consistently demonstrates that alignment errors are likely to lead to translation errors as measured by different metrics.
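One model-agnostic way to induce alignment, sketched below in the spirit of a prediction-difference approach (not necessarily the exact methods of the paper), is to measure how much masking each source word changes the model's probability of a target word. The scorer `nmt_prob` is a hypothetical interface; any NMT model that can rescore a target word given a perturbed source would fit.

```python
import numpy as np

def induce_alignment(src_tokens, tgt_tokens, nmt_prob):
    """Align each target word to the source word whose removal most reduces
    the model probability of that target word.

    nmt_prob(src_tokens, tgt_tokens, j) -> probability of tgt_tokens[j]
    given the (possibly perturbed) source sentence and the target prefix.
    """
    alignment = {}
    for j in range(len(tgt_tokens)):
        base = nmt_prob(src_tokens, tgt_tokens, j)
        drops = []
        for i in range(len(src_tokens)):
            # Replace source word i with an UNK-like placeholder and rescore.
            perturbed = src_tokens[:i] + ["<unk>"] + src_tokens[i + 1:]
            drops.append(base - nmt_prob(perturbed, tgt_tokens, j))
        # The most influential source position is taken as the aligned word.
        alignment[j] = int(np.argmax(drops))
    return alignment
```

Because the procedure only queries the model's output probabilities, it applies equally to RNN- and Transformer-based NMT, which is what makes alignment induction of this kind model-agnostic.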
2018
Target Foresight Based Attention for Neural Machine Translation
Xintong Li | Lemao Liu | Zhaopeng Tu | Shuming Shi | Max Meng
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
In neural machine translation, an attention model is used to identify the source words aligned to a target word (the target foresight word) in order to select the translation context, but it does not make use of any information about this target foresight word. Previous work proposed an approach that improves the attention model by explicitly accessing the target foresight word and demonstrated substantial gains on the alignment task. However, this approach cannot be applied to the translation task, in which the target foresight word is unavailable. In this paper, we propose a new attention model enhanced by implicit information about the target foresight word, oriented to both the alignment and translation tasks. Empirical experiments on Chinese-to-English and Japanese-to-English datasets show that the proposed attention model delivers significant improvements in terms of both alignment error rate and BLEU.
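A rough sketch of how implicit foresight information could enter attention: predict a distribution over the target vocabulary from the current decoder state, use its expected embedding as a proxy for the (unavailable) foresight word, and fold it into the attention query. All module and parameter names below are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ForesightAttention(nn.Module):
    """Additive attention whose query is enriched with an implicit
    (predicted) embedding of the target foresight word."""

    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.predict = nn.Linear(d_model, vocab_size)        # soft foresight prediction
        self.tgt_embed = nn.Embedding(vocab_size, d_model)   # target word embeddings
        self.query_proj = nn.Linear(2 * d_model, d_model)
        self.key_proj = nn.Linear(d_model, d_model)
        self.score = nn.Linear(d_model, 1)

    def forward(self, decoder_state, encoder_states):
        # decoder_state: (batch, d_model); encoder_states: (batch, src_len, d_model)
        soft = F.softmax(self.predict(decoder_state), dim=-1)   # (batch, vocab)
        foresight = soft @ self.tgt_embed.weight                # expected embedding
        query = self.query_proj(torch.cat([decoder_state, foresight], dim=-1))
        energy = self.score(torch.tanh(
            self.key_proj(encoder_states) + query.unsqueeze(1))).squeeze(-1)
        weights = F.softmax(energy, dim=-1)                     # (batch, src_len)
        context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)
        return context, weights
```

Since the foresight embedding is predicted rather than read from the reference, the same attention can be used at both training and decoding time, which is the property the abstract emphasizes.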
Co-authors
- Xintong Li 3
- Lemao Liu 3
- Shuming Shi 2
- Zhaopeng Tu 1
- Guanlin Li 1
- Rui Wang 1
- Guoping Huang 1