Renshou Wu
2022
The AISP-SJTU Translation System for WMT 2022
Guangfeng Liu | Qinpei Zhu | Xingyu Chen | Renjie Feng | Jianxin Ren | Renshou Wu | Qingliang Miao | Rui Wang | Kai Yu
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper describes AISP-SJTU’s participation in the WMT 2022 General MT shared task. We participated in four translation directions: English-Chinese, Chinese-English, English-Japanese and Japanese-English. Our systems are based on the Transformer architecture with several novel and effective variants in network depth and internal structure. In our experiments, we employ data filtering, large-scale back-translation, knowledge distillation, forward-translation, iterative in-domain knowledge finetuning and model ensembling. The constrained systems achieve 48.8, 29.7, 39.3 and 22.0 case-sensitive BLEU scores on EN-ZH, ZH-EN, EN-JA and JA-EN, respectively.
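As a rough illustration of the back-translation step named in the abstract (a minimal sketch of the general technique, not the authors' implementation; the `translate` callable is a hypothetical stand-in for a trained target-to-source model):

```python
# Minimal back-translation sketch: turn monolingual target-side text into
# synthetic (source, target) training pairs using a reverse-direction model.
# `translate` is a hypothetical placeholder, not an API from the paper.
from typing import Callable, Iterable


def back_translate(
    monolingual_target: Iterable[str],
    translate: Callable[[list[str]], list[str]],
    batch_size: int = 32,
) -> list[tuple[str, str]]:
    """Generate synthetic (source, target) pairs from monolingual target text."""
    pairs: list[tuple[str, str]] = []
    batch: list[str] = []
    for sentence in monolingual_target:
        batch.append(sentence)
        if len(batch) == batch_size:
            # Translate target sentences back into the source language.
            pairs.extend(zip(translate(batch), batch))
            batch = []
    if batch:
        pairs.extend(zip(translate(batch), batch))
    return pairs
```

The synthetic pairs are then typically mixed with genuine parallel data, often with a tag marking them as synthetic, before training the forward model.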
The AISP-SJTU Simultaneous Translation System for IWSLT 2022
Qinpei Zhu | Renshou Wu | Guangfeng Liu | Xinyu Zhu | Xingyu Chen | Yang Zhou | Qingliang Miao | Rui Wang | Kai Yu
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
This paper describes AISP-SJTU’s submissions for the IWSLT 2022 Simultaneous Translation task. We participate in text-to-text and speech-to-text simultaneous translation from English to Mandarin Chinese. We improve the training of the CAAT by training across multiple right-context window sizes, which achieves good online performance without fixing a right-context window size before training. For the speech-to-text task, our best submitted model achieves 25.87, 26.21 and 26.45 BLEU in the low, medium and high latency regimes on tst-COMMON, corresponding to 27.94, 28.31 and 28.43 BLEU in the text-to-text task.
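To illustrate the idea of training across multiple right-context window sizes (a hedged sketch, not the system's actual code; `model`, `right_context_mask`, and the candidate window sizes are hypothetical placeholders), one could sample a different window per training step so a single model learns to operate under many latency budgets:

```python
# Sketch: vary the right-context window during training, as the abstract
# describes. All names below are illustrative placeholders.
import random

import torch


def right_context_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean mask letting position i attend up to `window` future steps."""
    idx = torch.arange(seq_len)
    # mask[i, j] is True iff j <= i + window.
    return idx.unsqueeze(0) <= idx.unsqueeze(1) + window


WINDOW_SIZES = [2, 4, 8, 16]  # hypothetical candidate right-context sizes


def train_step(model, batch, loss_fn, optimizer):
    # Sample a right-context window per step instead of fixing one a priori.
    window = random.choice(WINDOW_SIZES)
    mask = right_context_mask(batch["inputs"].size(1), window)
    outputs = model(batch["inputs"], attn_mask=mask)
    loss = loss_fn(outputs, batch["targets"])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time, the same model can then be run at whichever window size matches the desired latency regime.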