2020
Alibaba’s Submission for the WMT 2020 APE Shared Task: Improving Automatic Post-Editing with Pre-trained Conditional Cross-Lingual BERT
Jiayi Wang, Ke Wang, Kai Fan, Yuqi Zhang, Jun Lu, Xin Ge, Yangbin Shi, Yu Zhao
Proceedings of the Fifth Conference on Machine Translation
The goal of Automatic Post-Editing (APE) is to study automatic methods for correcting translation errors produced by an unknown machine translation (MT) system. This paper describes Alibaba’s submissions to the WMT 2020 APE Shared Task for the English-German language pair. We design a two-stage training pipeline. First, a BERT-like cross-lingual language model is pre-trained by randomly masking target sentences alone. Then, an additional neural decoder on top of the pre-trained model is jointly fine-tuned for the APE task. We also apply an imitation learning strategy to augment a reasonable amount of pseudo APE training data, which prevents the model from overfitting on the limited real training data and boosts performance on held-out data. To verify the proposed model and data augmentation, we evaluate our approach on the well-known English-German benchmark from the WMT 2017 APE task. The experimental results demonstrate that our system significantly outperforms all other baselines and achieves state-of-the-art performance. The final results on the WMT 2020 test dataset show that our submission achieves +5.56 BLEU and -4.57 TER with respect to the official MT baseline.
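The target-only masking used in the first pre-training stage can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name, the token-list input format, and the `[MASK]`/`[SEP]` symbols are assumptions chosen for clarity.

```python
import random

MASK, SEP = "[MASK]", "[SEP]"

def mask_target_only(src_tokens, tgt_tokens, mask_prob=0.15, seed=None):
    """Build a cross-lingual MLM input where only target-side tokens
    may be replaced by [MASK]; the source sentence is left intact."""
    rng = random.Random(seed)
    masked_tgt, labels = [], []
    for tok in tgt_tokens:
        if rng.random() < mask_prob:
            masked_tgt.append(MASK)
            labels.append(tok)       # the model must recover the original token
        else:
            masked_tgt.append(tok)
            labels.append(None)      # no MLM loss on unmasked positions
    return src_tokens + [SEP] + masked_tgt, labels
```

Because the source side is never masked, the model learns to predict target tokens conditioned on a fully visible source sentence, which matches the conditional nature of the APE task.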
Alibaba Submission to the WMT20 Parallel Corpus Filtering Task
Jun Lu, Xin Ge, Yangbin Shi, Yuqi Zhang
Proceedings of the Fifth Conference on Machine Translation
This paper describes the Alibaba Machine Translation Group submissions to the WMT 2020 Shared Task on Parallel Corpus Filtering and Alignment. In the filtering task, three main methods are applied to evaluate the quality of the parallel corpus: a) a Dual Bilingual GPT-2 model, b) a Dual Conditional Cross-Entropy model and c) an IBM word alignment model. The scores of these models are combined using a positive-unlabeled (PU) learning model and a brute-force search to obtain additional gains. In addition, a few simple but efficient rules are adopted to evaluate the quality and the diversity of the corpus. In the alignment-filtering task, the extraction pipeline for bilingual sentence pairs includes the following steps: bilingual lexicon mining, language identification, sentence segmentation and sentence alignment. The final results show that, in both the filtering and alignment tasks, our system significantly outperforms the LASER-based system.
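As a rough illustration of one of the model-based scores, the dual conditional cross-entropy criterion scores a sentence pair by averaging the per-token cross-entropies of the two translation directions and penalising disagreement between them (lower is better). The function below is a hypothetical sketch of that formula, not the submission's actual code; how the directional cross-entropies are obtained is left to the underlying MT models.

```python
def dual_conditional_ce_score(ce_fwd, ce_bwd):
    """Dual conditional cross-entropy score for one sentence pair.

    ce_fwd: per-token cross-entropy of the forward model, H(tgt | src)
    ce_bwd: per-token cross-entropy of the backward model, H(src | tgt)
    Returns the average of the two, plus a penalty for disagreement
    between the directions; lower scores indicate better pairs.
    """
    return 0.5 * (ce_fwd + ce_bwd) + abs(ce_fwd - ce_bwd)
```

The disagreement penalty is what distinguishes this from a plain average: a pair that one direction finds easy but the other finds hard (a common symptom of misalignment) is pushed toward a worse score.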
2018
Alibaba’s Neural Machine Translation Systems for WMT18
Yongchao Deng, Shanbo Cheng, Jun Lu, Kai Song, Jingang Wang, Shenglan Wu, Liang Yao, Guchun Zhang, Haibo Zhang, Pei Zhang, Changfeng Zhu, Boxing Chen
Proceedings of the Third Conference on Machine Translation: Shared Task Papers
This paper describes Alibaba’s submission systems for the WMT18 shared news translation task. We participated in 5 translation directions: English ↔ Russian and English ↔ Turkish in both directions, and English → Chinese. Our systems are based on Google’s Transformer model architecture, into which we integrated the most recent features from academic research. We also employed, at industrial scale, most techniques that have proven effective in past WMT editions, such as BPE, back-translation, data selection, model ensembling and reranking. For some morphologically rich languages, we also incorporated linguistic knowledge into our neural networks. Our resulting systems achieved the best case-sensitive BLEU score in all 5 directions in which we participated. Notably, our English → Russian system outperformed the second-ranked system by 5 BLEU points.
Alibaba Submission to the WMT18 Parallel Corpus Filtering Task
Jun Lu, Xiaoyu Lv, Yangbin Shi, Boxing Chen
Proceedings of the Third Conference on Machine Translation: Shared Task Papers
This paper describes the Alibaba Machine Translation Group submissions to the WMT 2018 Shared Task on Parallel Corpus Filtering. To evaluate the quality of the parallel corpus, three characteristics are investigated: 1) the bilingual/translation quality, 2) the monolingual quality and 3) the corpus diversity. Both rule-based and model-based methods are adopted to score the parallel sentence pairs. The final parallel corpus filtering system is reliable, easy to build and easy to adapt to other language pairs.
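A minimal sketch of the rule-based side of such a filter is shown below. The specific checks (non-empty sentences, bounded lengths, a bounded source/target length ratio) are typical of corpus-filtering pipelines, but the function name and threshold values are hypothetical choices for illustration, not the paper's settings.

```python
def passes_rules(src, tgt, max_ratio=3.0, min_len=1, max_len=200):
    """Rule-based check on a sentence pair: both sides must be
    non-empty, within length bounds, and of comparable length."""
    ls, lt = len(src.split()), len(tgt.split())
    # Reject empty or extreme-length sentences (also avoids division by zero).
    if not (min_len <= ls <= max_len and min_len <= lt <= max_len):
        return False
    # Reject pairs whose lengths diverge too much; such pairs are
    # often misaligned rather than genuine translations.
    return max(ls, lt) / min(ls, lt) <= max_ratio
```

In practice, pairs surviving these cheap rules would then be ranked by the model-based scores, so the rules act as a fast first-pass filter.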
2014
An Iterative Link-based Method for Parallel Web Page Mining
Le Liu, Yu Hong, Jun Lu, Jun Lang, Heng Ji, Jianmin Yao
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)
2012
Semi-supervised Chinese Word Segmentation for CLP2012
Saike He, Nan He, Songxiang Cen, Jun Lu
Proceedings of the Second CIPS-SIGHAN Joint Conference on Chinese Language Processing
Monolingual Data Optimisation for Bootstrapping SMT Engines
Jie Jiang, Andy Way, Nelson Ng, Rejwanul Haque, Mike Dillinger, Jun Lu
Workshop on Monolingual Machine Translation
Content localisation via machine translation (MT) is a sine qua non, especially for international online business. While most applications use rule-based solutions owing to the lack of suitable in-domain parallel corpora for statistical MT (SMT) training, in this paper we investigate the possibility of applying SMT where only huge amounts of monolingual content are available. We describe a case study in which a very large amount of monolingual online trading data from eBay is analysed by ALS with a view to reducing this corpus to its most representative sample, in order to ensure the widest possible coverage of the total data set. Furthermore, minimal yet optimal sets of sentences/words/terms are selected to generate initial translation units for future SMT system building.
2011
Factual or Satisfactory: What Search Results Are Better?
Yu Hong, Jun Lu, Shiqi Zhao
Proceedings of the 25th Pacific Asia Conference on Language, Information and Computation