2023
Leveraging Multilingual Knowledge Graph to Boost Domain-specific Entity Translation of ChatGPT
Min Zhang | Limin Liu | Zhao Yanqing | Xiaosong Qiao | Su Chang | Xiaofeng Zhao | Junhao Zhu | Ming Zhu | Song Peng | Yinglu Li | Yilun Liu | Wenbing Ma | Mengyao Piao | Shimin Tao | Hao Yang | Yanfei Jiang
Proceedings of Machine Translation Summit XIX, Vol. 2: Users Track
Recently, ChatGPT has shown promising results for Machine Translation (MT) in general domains and is becoming a new paradigm for translation. In this paper, we focus on how to apply ChatGPT to domain-specific translation and propose to leverage Multilingual Knowledge Graph (MKG) to help ChatGPT improve the domain entity translation quality. To achieve this, we extract the bilingual entity pairs from MKG for the domain entities that are recognized from source sentences. We then introduce these pairs into translation prompts, instructing ChatGPT to use the correct translations of the domain entities. To evaluate the novel MKG method for ChatGPT, we conduct comparative experiments on three Chinese-English (zh-en) test datasets constructed from three specific domains, of which one domain is from biomedical science, and the other two are from the Information and Communications Technology (ICT) industry — Visible Light Communication (VLC) and wireless domains. Experimental results demonstrate that both the overall translation quality of ChatGPT (+6.21, +3.13 and +11.25 in BLEU scores) and the translation accuracy of domain entities (+43.2%, +30.2% and +37.9% absolute points) are significantly improved with MKG on the three test datasets.
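As a minimal sketch of the approach described above, a translation prompt can embed the bilingual entity pairs extracted from the knowledge graph as constraints. The template wording here is an assumption for illustration, not the paper's actual prompt:

```python
def build_mkg_prompt(source_sentence, entity_pairs):
    """Compose a translation prompt that embeds bilingual entity pairs
    (extracted from a multilingual knowledge graph) as constraints.
    Hypothetical template, not the paper's exact wording."""
    constraints = "; ".join(
        f'"{src}" must be translated as "{tgt}"' for src, tgt in entity_pairs
    )
    return (
        "Translate the following Chinese sentence into English. "
        f"Use these domain entity translations: {constraints}.\n"
        f"Sentence: {source_sentence}"
    )

prompt = build_mkg_prompt(
    "可见光通信系统的信道模型",
    [("可见光通信", "Visible Light Communication")],
)
```

The constrained prompt is then sent to ChatGPT in place of a plain translation request.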
HW-TSC’s Participation in the WMT 2023 Automatic Post Editing Shared Task
Jiawei Yu | Min Zhang | Zhao Yanqing | Xiaofeng Zhao | Yuang Li | Su Chang | Yinglu Li | Ma Miaomiao | Shimin Tao | Hao Yang
Proceedings of the Eighth Conference on Machine Translation
The paper presents the submission by HW-TSC to the WMT 2023 Automatic Post Editing (APE) shared task for the English-Marathi (En-Mr) language pair. Our method encompasses several key steps. First, we pre-train an APE model on the synthetic APE data provided by the official task organizers. Then, we fine-tune the model on real APE data. For data augmentation, we incorporate candidate translations obtained from an external Machine Translation (MT) system. Furthermore, we integrate the En-Mr parallel corpus from the Flores-200 dataset into our training data. To address overfitting, we employ R-Drop during training. Given that APE systems tend to ‘over-correct’, we employ a sentence-level Quality Estimation (QE) system to select the final output, deciding between the original translation and the corresponding output generated by the APE model. Our experiments demonstrate that pre-trained APE models are effective when fine-tuned on an APE corpus of limited size, and that performance can be further improved with external MT augmentation. Our approach improves the TER and BLEU scores on the development set by -2.42 and +3.76 points, respectively.
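The QE-based selection step described above can be sketched as follows; `qe_score` stands in for a hypothetical sentence-level QE model, and the tie-breaking rule (prefer the original MT output, conservative against over-correction) is an assumption:

```python
def select_final_output(mt_output, ape_output, qe_score):
    """Return whichever candidate the sentence-level QE model scores
    higher; ties fall back to the original MT output, a conservative
    choice given APE's tendency to over-correct."""
    return mt_output if qe_score(mt_output) >= qe_score(ape_output) else ape_output

# Toy stand-in scorer for illustration only; a real QE system would be
# a trained model, not a word counter.
toy_qe = lambda sentence: len(sentence.split())
choice = select_final_output("a b c", "a b", toy_qe)
```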
2022
Partial Could Be Better than Whole. HW-TSC 2022 Submission for the Metrics Shared Task
Yilun Liu | Xiaosong Qiao | Zhanglin Wu | Su Chang | Min Zhang | Yanqing Zhao | Song Peng | Shimin Tao | Hao Yang | Ying Qin | Jiaxin Guo | Minghan Wang | Yinglu Li | Peng Li | Xiaofeng Zhao
Proceedings of the Seventh Conference on Machine Translation (WMT)
In this paper, we present the contribution of HW-TSC to the WMT 2022 Metrics Shared Task. We propose one reference-based metric, HWTSC-EE-BERTScore*, and four reference-free metrics: HWTSC-Teacher-Sim, HWTSC-TLM, KG-BERTScore and CROSS-QE. Among these metrics, HWTSC-Teacher-Sim and CROSS-QE are supervised, whereas HWTSC-EE-BERTScore*, HWTSC-TLM and KG-BERTScore are unsupervised. We use these metrics in the segment-level and system-level tracks. Overall, our systems achieve strong results for all language pairs on previous test sets and a new state-of-the-art in many system-level cases.
CrossQE: HW-TSC 2022 Submission for the Quality Estimation Shared Task
Shimin Tao | Su Chang | Ma Miaomiao | Hao Yang | Xiang Geng | Shujian Huang | Min Zhang | Jiaxin Guo | Minghan Wang | Yinglu Li
Proceedings of the Seventh Conference on Machine Translation (WMT)
Quality estimation (QE) investigates automatic methods for estimating the quality of machine translation output without reference translations. This paper presents Huawei Translation Services Center’s (HW-TSC’s) submission, CrossQE, to WMT 2022 QE shared tasks 1 and 2, namely sentence- and word-level quality prediction and explainable QE. For task 1, CrossQE employs the predictor-estimator framework, with a pre-trained cross-lingual XLM-RoBERTa-large model as the predictor and a task-specific classifier or regressor as the estimator. Extensive experimental results show that adding a bottleneck adapter layer, a mean teacher loss, a masked language modeling task loss and MC dropout to CrossQE improves performance to a certain extent. For task 2, CrossQE computes the cosine similarity between each word feature in the target and each word feature in the source using the task 1 sentence-level QE system’s predictor, and takes the inverse of the maximum similarity between each target word and the source words as that word’s translation error risk. Moreover, CrossQE achieves outstanding performance on the QE test sets of WMT 2022.
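The task 2 word-level risk computation can be illustrated in plain Python. The word features would come from the predictor in practice, and reading the abstract's "inverse value" as simple negation is an assumption:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors given as lists."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def word_error_risks(target_feats, source_feats):
    """Risk of each target word: the negated maximum cosine similarity
    between its feature and any source word feature, so a target word
    well-aligned to some source word gets low risk."""
    return [
        -max(cosine(t, s) for s in source_feats)
        for t in target_feats
    ]

risks = word_error_risks([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]])
```

A target word identical in feature space to a source word thus receives the minimum possible risk.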
2021
HI-CMLM: Improve CMLM with Hybrid Decoder Input
Minghan Wang | Guo Jiaxin | Yuxia Wang | Yimeng Chen | Su Chang | Daimeng Wei | Min Zhang | Shimin Tao | Hao Yang
Proceedings of the 14th International Conference on Natural Language Generation
Mask-predict CMLM (Ghazvininejad et al., 2019) has achieved stunning performance among non-autoregressive NMT models, but we find that the mechanism of predicting all target words based only on the hidden state of [MASK] is neither effective nor efficient in the initial iterations of refinement, resulting in ungrammatical repetitions and slow convergence. In this work, we mitigate this problem by combining the copied source with [MASK] embeddings in the decoder. Notably, this is not straightforward copying, which has been shown to be ineffective, but a novel heuristic hybrid strategy — fence-mask. Experimental results show that it gains consistent boosts on both the WMT14 En<->De and WMT16 En<->Ro corpora, by 0.5 BLEU on average and 1 BLEU for less-informative short sentences. This reveals that incorporating additional information through proper strategies benefits CMLM, particularly the translation quality of short texts, and speeds up early-stage convergence.
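The abstract does not spell out the exact fence-mask layout, so the following is only a hypothetical illustration of a hybrid decoder input that interleaves copied source tokens with [MASK] slots at the token level; the paper's actual pattern may differ:

```python
MASK = "[MASK]"

def fence_mask(source_tokens, target_length):
    """Hypothetical sketch of a fence-like hybrid decoder input:
    copy source tokens (cycled to the predicted target length) and
    keep every other position, masking the rest. Illustrative only."""
    repeats = target_length // max(len(source_tokens), 1) + 1
    copied = (source_tokens * repeats)[:target_length]
    return [tok if i % 2 == 0 else MASK for i, tok in enumerate(copied)]

hybrid = fence_mask(["a", "b", "c"], 4)
```

The idea is that copied tokens give the early refinement iterations more signal than an all-[MASK] input.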
How Length Prediction Influence the Performance of Non-Autoregressive Translation?
Minghan Wang | Guo Jiaxin | Yuxia Wang | Yimeng Chen | Su Chang | Hengchao Shang | Min Zhang | Shimin Tao | Hao Yang
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Length prediction is a special task in a series of NAT models, where the target length has to be determined before generation. However, the performance of length prediction and its influence on translation quality have seldom been discussed. In this paper, we present comprehensive analyses of the length prediction task in NAT, aiming to find the factors that influence its performance and how it relates to translation quality. We mainly perform experiments based on the Conditional Masked Language Model (CMLM) (Ghazvininejad et al., 2019), a representative NAT model, and evaluate it on two language pairs, En-De and En-Ro. We draw two conclusions: 1) The performance of length prediction is mainly influenced by properties of the language pair, such as alignment pattern, word order and intrinsic length ratio, and is also affected by the use of knowledge-distilled data. 2) There is a positive correlation between the performance of length prediction and the BLEU score.
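One of the language-pair properties mentioned above, the intrinsic length ratio, can be estimated from a parallel corpus with a small helper; this is an illustrative sketch under a whitespace-tokenization assumption, not the paper's code:

```python
def intrinsic_length_ratio(pairs):
    """Average target/source token-length ratio over a parallel corpus
    of (source, target) sentence pairs; language pairs with ratios far
    from 1 make target-length prediction harder."""
    ratios = [len(tgt.split()) / len(src.split()) for src, tgt in pairs]
    return sum(ratios) / len(ratios)

corpus = [("a b", "x y z w"), ("a b c", "x y z")]
ratio = intrinsic_length_ratio(corpus)
```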