Xinglin Lyu


2024

DeMPT: Decoding-enhanced Multi-phase Prompt Tuning for Making LLMs Be Better Context-aware Translators
Xinglin Lyu | Junhui Li | Yanqing Zhao | Min Zhang | Daimeng Wei | Shimin Tao | Hao Yang | Min Zhang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

2023

HW-TSC 2023 Submission for the Quality Estimation Shared Task
Yuang Li | Chang Su | Ming Zhu | Mengyao Piao | Xinglin Lyu | Min Zhang | Hao Yang
Proceedings of the Eighth Conference on Machine Translation

Quality estimation (QE) is an essential technique for assessing machine translation quality without reference translations. In this paper, we focus on Huawei Translation Services Center’s (HW-TSC’s) submission to the sentence-level QE shared task, named Ensemble-CrossQE. Our system uses CrossQE, the same model architecture as our last year’s submission, which consists of a multilingual base model and a task-specific downstream layer. The input is the concatenation of the source and the translated sentences. To enhance performance, we fine-tuned and ensembled multiple base models, such as XLM-R, InfoXLM, RemBERT and CometKiwi. Moreover, we introduce a new corruption-based data augmentation method, which generates deletion, substitution and insertion errors in the original translation and uses a reference-based QE model to obtain pseudo scores. Results show that our system achieves strong performance on the sentence-level QE test sets and ranks first for three language pairs: English-Hindi, English-Tamil and English-Telugu. In addition, we participated in the error span detection task, where the submitted model outperforms the baseline on the Chinese-English and Hebrew-English language pairs.
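
To make the corruption step concrete, here is a minimal Python sketch, not HW-TSC’s actual pipeline: `corrupt_translation` and the overlap-based `pseudo_score` are illustrative stand-ins (in the paper, pseudo scores come from a trained reference-based QE model, not a token-overlap proxy).

```python
import random

def corrupt_translation(tokens, num_edits=1, vocab=None, rng=None):
    """Inject random deletion, substitution, or insertion errors into a
    tokenized translation (the corruption-based augmentation idea)."""
    rng = rng or random.Random(0)
    vocab = vocab or ["the", "a", "of", "and"]  # toy replacement vocabulary
    tokens = list(tokens)
    for _ in range(num_edits):
        if not tokens:
            break
        op = rng.choice(["delete", "substitute", "insert"])
        pos = rng.randrange(len(tokens))
        if op == "delete":
            tokens.pop(pos)
        elif op == "substitute":
            tokens[pos] = rng.choice(vocab)
        else:
            tokens.insert(pos, rng.choice(vocab))
    return tokens

def pseudo_score(hypothesis, reference):
    """Stand-in for a reference-based QE model: a trivial token-overlap
    proxy that assigns lower scores to more heavily corrupted hypotheses."""
    hyp, ref = set(hypothesis), set(reference)
    return len(hyp & ref) / max(len(hyp | ref), 1)

reference = "the cat sat on the mat".split()
corrupted = corrupt_translation(reference, num_edits=2)
print(corrupted, pseudo_score(corrupted, reference))
```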

2022

Modeling Consistency Preference via Lexical Chains for Document-level Neural Machine Translation
Xinglin Lyu | Junhui Li | Shimin Tao | Hao Yang | Ying Qin | Min Zhang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

In this paper we aim to alleviate lexical translation inconsistency in document-level neural machine translation (NMT) by modeling consistency preference over lexical chains, which consist of repeated words in a source-side document and represent the document’s lexical consistency structure. Specifically, we first propose lexical-consistency attention to capture consistency context among words in the same lexical chain. Then, for each lexical chain, we define and learn a consistency-tailored latent variable, which guides the translation of the corresponding sentences to enhance lexical translation consistency. Experimental results on Chinese→English and French→English document-level translation tasks show that our approach not only significantly improves translation performance in BLEU, but also substantially alleviates lexical translation inconsistency.
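
The notion of a lexical chain used here is simple enough to sketch. The snippet below illustrates only the chain-extraction step under my own assumptions (repeated non-stopword tokens form a chain); the lexical-consistency attention and the latent variable are model components defined in the paper and not reproduced here.

```python
from collections import defaultdict

def build_lexical_chains(doc_sentences, min_occurrences=2, stopwords=frozenset()):
    """Group the (sentence, token) positions of each word repeated across a
    source-side document; each position list is one lexical chain."""
    positions = defaultdict(list)
    for sent_idx, sentence in enumerate(doc_sentences):
        for tok_idx, token in enumerate(sentence):
            if token.lower() not in stopwords:
                positions[token.lower()].append((sent_idx, tok_idx))
    return {word: chain for word, chain in positions.items()
            if len(chain) >= min_occurrences}

doc = [["the", "bank", "approved", "the", "loan"],
       ["the", "bank", "is", "closed"]]
print(build_lexical_chains(doc, stopwords={"the", "is"}))
# {'bank': [(0, 1), (1, 1)]}
```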

2021

Encouraging Lexical Translation Consistency for Document-Level Neural Machine Translation
Xinglin Lyu | Junhui Li | Zhengxian Gong | Min Zhang
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Recently, a number of approaches have been proposed to improve translation performance for document-level neural machine translation (NMT), but few focus on lexical translation consistency. In this paper we apply “one translation per discourse” to NMT, aiming to encourage lexical translation consistency for document-level NMT. This is done by first obtaining a word link for each source word in a document, which records the positions where that word appears. We then encourage the translations of the words within a link to be consistent in two ways: when encoding sentences within a document, we share context information among those words; and we propose an auxiliary loss function to constrain their translations to be consistent. Experimental results on Chinese↔English and English→French translation tasks show that our approach not only achieves state-of-the-art performance in BLEU scores, but also greatly improves lexical consistency in translation.
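
As a rough illustration of the auxiliary-loss idea, and not the paper’s exact formulation, the PyTorch sketch below penalizes disagreement among the output distributions predicted for occurrences of the same linked source word, pushing them toward a shared translation choice; the mean-distribution KL form is my own assumption.

```python
import torch
import torch.nn.functional as F

def link_consistency_loss(link_distributions):
    """Sketch of an auxiliary consistency loss for one word link.

    link_distributions: (n_occurrences, vocab_size) tensor of predicted
    target-word probability distributions, one row per occurrence of the
    linked source word. Each row's KL divergence from the mean distribution
    is penalized, so consistent translation choices yield a smaller loss.
    """
    mean_dist = link_distributions.mean(dim=0, keepdim=True).clamp_min(1e-9)
    return F.kl_div(mean_dist.log().expand_as(link_distributions),
                    link_distributions, reduction="batchmean")

# Toy check: identical distributions across occurrences give (near-)zero loss.
dists = torch.softmax(torch.randn(3, 8), dim=-1)
print(link_consistency_loss(dists))                    # disagreeing occurrences
print(link_consistency_loss(dists[:1].expand(3, 8)))   # identical occurrences -> ~0
```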