Bo Qin
2023
Understanding and Improving the Robustness of Terminology Constraints in Neural Machine Translation
Huaao Zhang | Qiang Wang | Bo Qin | Zelin Shi | Haibo Wang | Ming Chen
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
In this work, we study the robustness of two typical terminology translation methods, Placeholder (PH) and Code-Switch (CS), with respect to (1) the number of constraints and (2) the target constraint length. We find that existing terminology constraint test sets, such as IATE, Wiktionary, and TICO, are blind to this issue due to their oversimplified constraint settings. To remedy this, we create a new, more challenging English-German test set, raising the average constraint count per sentence from 1.1~1.7 to 6.1 and the average length per target constraint from 1.1~1.2 words to 3.4 words. We then observe that both PH and CS degrade as the number of constraints increases, but with complementary strengths: PH better retains high constraint accuracy but yields lower translation quality as measured by BLEU and COMET, whereas CS shows the opposite pattern. Based on these observations, we propose a simple but effective method that combines the advantages of PH and CS: the model is trained, as in PH, to predict term labels, and during inference those labels are replaced with the target terminology text, as in CS, so that subsequent generation is aware of the target term content. Extensive experiments show that this approach achieves high constraint accuracy and high translation quality simultaneously, regardless of the number or length of constraints.
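A minimal sketch of the combined decoding idea described in the abstract, assuming a greedy step-by-step decoder: the model predicts PH-style term labels, and whenever a label is emitted at inference time it is spliced out for the target-term tokens (CS-style), so later steps condition on the actual term content. All names here (constrained_decode, step_fn, term_table) are hypothetical illustrations, not the authors' implementation.

```python
def constrained_decode(step_fn, src_ids, term_table, eos_id, max_len=128):
    """Greedy decoding where PH-style term labels, once predicted, are
    replaced in place by target-term tokens (CS-style), so subsequent
    steps see the term content rather than an opaque placeholder.

    step_fn(src_ids, prefix) -> next token id (stands in for the NMT model)
    term_table: maps a term-label id (e.g. <term3>) to target-term token ids
    """
    prefix = []
    while len(prefix) < max_len:
        tok = step_fn(src_ids, prefix)
        if tok == eos_id:
            break
        if tok in term_table:
            # CS-style substitution: expand the label into the target term.
            prefix.extend(term_table[tok])
        else:
            prefix.append(tok)
    return prefix

# Toy demo: a fake model that emits label 100 first, then EOS (2).
demo = lambda src, prefix: 100 if not prefix else 2
print(constrained_decode(demo, [1, 2, 3], {100: [7, 8, 9]}, eos_id=2))
# -> [7, 8, 9]
```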
2022
The RoyalFlush System for the WMT 2022 Efficiency Task
Bo Qin | Aixin Jia | Qiang Wang | Jianning Lu | Shuqin Pan | Haibo Wang | Ming Chen
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper describes the RoyalFlush neural machine translation submission to the WMT 2022 translation efficiency task. Unlike the commonly used autoregressive translation systems, we adopt a two-stage translation paradigm called Hybrid Regression Translation (HRT) to combine the advantages of autoregressive and non-autoregressive translation. Specifically, HRT first autoregressively generates a discontinuous sequence (e.g., making a prediction every k tokens, k > 1) and then fills in all previously skipped tokens at once in a non-autoregressive manner. We can therefore trade off translation quality against speed simply by adjusting k. By integrating further modeling techniques (e.g., sequence-level knowledge distillation and a deep-encoder-shallow-decoder layer allocation strategy) together with substantial engineering effort, HRT improves inference speed by 80% while achieving translation performance equivalent to its same-capacity autoregressive counterpart. Our fastest system reaches 6k+ words/second in the GPU latency setting, estimated to be about 3.1x faster than last year's winner.
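A minimal sketch of the two-stage HRT decoding described above, under stated assumptions: ar_step and nar_fill are hypothetical stand-ins for the autoregressive and non-autoregressive model passes, and target length handling is simplified to a fixed length rather than EOS-driven stopping. This is illustrative only, not the RoyalFlush system code.

```python
MASK = -1  # placeholder id for positions skipped in stage 1

def hrt_decode(ar_step, nar_fill, src_ids, k, target_len):
    """Stage 1: autoregressively predict every k-th token, producing a
    sparse skeleton of the target. Stage 2: fill all remaining MASK
    positions in one parallel, non-autoregressive pass conditioned on
    that skeleton. Larger k means fewer sequential steps, i.e. faster
    decoding at some risk to quality.

    ar_step(src_ids, prefix) -> next skeleton token id
    nar_fill(src_ids, skeleton) -> fully filled-in token sequence
    """
    skeleton = [MASK] * target_len
    prefix = []
    for pos in range(0, target_len, k):
        tok = ar_step(src_ids, prefix)  # one sequential decoder step
        skeleton[pos] = tok
        prefix.append(tok)
    # Single parallel pass predicts every remaining MASK position at once.
    return nar_fill(src_ids, skeleton)
```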
2009
Lemmatic Machine Translation
Stephen Soderland | Christopher Lim | Mausam | Bo Qin | Oren Etzioni | Jonathan Pool
Proceedings of Machine Translation Summit XII: Papers
Co-authors
- Qiang Wang 2
- Haibo Wang 2
- Ming Chen 2
- Huaao Zhang 1
- Zelin Shi 1