Xiangyu Duan


2024

pdf
Multimodal Cross-lingual Phrase Retrieval
Chuanqi Dong | Wenjie Zhou | Xiangyu Duan | Yuqi Zhang | Min Zhang
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Cross-lingual phrase retrieval aims to retrieve parallel phrases across languages. Current approaches deal only with the textual modality, and there has been a lack of data resources and explorations for multimodal cross-lingual phrase retrieval (MXPR). In this paper, we create the first MXPR data resource and propose a novel approach for MXPR to explore the effectiveness of multi-modality. The MXPR data resource is built by marrying the benchmark dataset for textual cross-lingual phrase retrieval with Wikimedia Commons, a media store containing a vast collection of texts and related images. In the built resource, the phrase pairs of the textual benchmark dataset are equipped with their related images. Based on this novel data resource, we introduce a strategy to bridge the gap between modalities through multimodal relation generation with a large multimodal pre-trained model and through consistency training. Experiments on the benchmark dataset covering eight language pairs show that our MXPR approach, which deals with multimodal phrases, performs significantly better than pure textual cross-lingual phrase retrieval.
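As a brief illustration of the consistency-training ingredient named above, here is a minimal sketch, assuming (since the abstract does not specify the exact objective) that consistency is enforced as a symmetric KL between retrieval score distributions computed from the text-only and the text+image views of a phrase; all function and argument names are hypothetical.

```python
# Minimal sketch of consistency training between textual and multimodal views.
# The symmetric-KL form and the interfaces are illustrative assumptions.
import torch.nn.functional as F

def consistency_loss(text_logits, multimodal_logits):
    """Both tensors: (batch, num_candidates) retrieval scores for the same
    phrases, one from the text-only encoder and one from the text+image
    encoder; the two candidate distributions are pushed to agree."""
    p = F.log_softmax(text_logits, dim=-1)
    q = F.log_softmax(multimodal_logits, dim=-1)
    return 0.5 * (F.kl_div(p, q, log_target=True, reduction="batchmean")
                  + F.kl_div(q, p, log_target=True, reduction="batchmean"))
```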

pdf
Revisiting the Self-Consistency Challenges in Multi-Choice Question Formats for Large Language Model Evaluation
Wenjie Zhou | Qiang Wang | Mingzhou Xu | Ming Chen | Xiangyu Duan
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Multi-choice questions (MCQ) are a common method for assessing the world knowledge of large language models (LLMs), as demonstrated by benchmarks such as MMLU and C-Eval. However, recent findings indicate that even top-tier LLMs, such as ChatGPT and GPT4, may display inconsistencies when faced with slightly varied inputs. This raises concerns about the credibility of MCQ-based evaluations. To address this issue, we introduce three knowledge-equivalent question variants: option position shuffle, option label replacement, and conversion to a True/False format. We rigorously test a range of LLMs, varying in model size (from 6B to 70B) and type: pretrained language model (PLM), supervised fine-tuning (SFT), and reinforcement learning from human feedback (RLHF). Our findings on MMLU and C-Eval reveal that accuracy on individual questions lacks robustness, particularly in smaller models (<30B) and PLMs. Consequently, we advocate that consistent accuracy may serve as a more reliable metric for evaluating and ranking LLMs.
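To make the three variants and the proposed metric concrete, here is a minimal sketch; the variant definitions follow the abstract, while the data layout and helper names are illustrative assumptions (a question counts toward consistent accuracy only if all of its variants are answered correctly).

```python
# Minimal sketch of knowledge-equivalent MCQ variants and consistent accuracy.
import random

def shuffle_options(question, options, answer_idx, seed=0):
    """Option position shuffle: permute the options, track the gold index."""
    rng = random.Random(seed)
    order = list(range(len(options)))
    rng.shuffle(order)
    return question, [options[i] for i in order], order.index(answer_idx)

def relabel_options(question, options, answer_idx, labels=("I", "II", "III", "IV")):
    """Option label replacement: swap A/B/C/D for a different label set."""
    labeled = [f"{labels[i]}. {opt}" for i, opt in enumerate(options)]
    return question, labeled, answer_idx

def to_true_false(question, options, answer_idx):
    """Conversion to True/False: one binary judgment per option."""
    return [(f"{question} Answer: {opt}. True or False?", i == answer_idx)
            for i, opt in enumerate(options)]

def consistent_accuracy(per_variant_correct):
    """per_variant_correct: question id -> list of bools, one per variant.
    A question is credited only if ALL of its variants are answered correctly."""
    return sum(all(v) for v in per_variant_correct.values()) / len(per_variant_correct)
```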

pdf
Submodular-based In-context Example Selection for LLMs-based Machine Translation
Baijun Ji | Xiangyu Duan | Zhenyu Qiu | Tong Zhang | Junhui Li | Hao Yang | Min Zhang
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Large Language Models (LLMs) have demonstrated impressive performance across various NLP tasks with just a few prompts via in-context learning. Previous studies have emphasized the pivotal role of well-chosen examples in in-context learning, as opposed to randomly selected instances that exhibit unstable results. A successful example selection scheme depends on multiple factors, yet in LLMs-based machine translation the common selection algorithms consider only a single factor, i.e., the similarity between the example source sentence and the input sentence. In this paper, we introduce a novel approach that uses multiple translational factors for in-context example selection via monotone submodular function maximization. The factors include surface/semantic similarity between examples and inputs on both source and target sides, as well as the diversity within examples. Importantly, our framework mathematically guarantees the coordination between these factors, which are different and challenging to reconcile. Additionally, our research uncovers a previously unexamined dimension: unlike in other NLP tasks, the translation part of an example is also crucial, a facet disregarded in prior studies. Experiments conducted on BLOOMZ-7.1B and LLAMA2-13B demonstrate that our approach significantly outperforms random selection and strong single-factor baselines across various machine translation tasks.
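A minimal sketch of greedy monotone submodular maximization in this setting is shown below; the paper's exact objective combines several similarity factors on both source and target sides, so the facility-location-style diversity term and the cosine scoring here are illustrative assumptions.

```python
# Minimal sketch: greedy selection of in-context examples under a
# relevance + coverage objective. The greedy algorithm enjoys a (1 - 1/e)
# approximation guarantee for monotone submodular objectives.
import numpy as np

def greedy_select(input_vec, ex_vecs, k, diversity_weight=0.5):
    """input_vec: embedding of the input sentence; ex_vecs: list of example
    embeddings; returns indices of the k selected examples."""
    def sim(a, b):  # cosine similarity
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    n = len(ex_vecs)
    relevance = np.array([sim(input_vec, v) for v in ex_vecs])
    selected, coverage = [], np.zeros(n)  # coverage[j] = max sim of j to chosen set
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            new_cov = np.maximum(coverage, [sim(ex_vecs[j], ex_vecs[i]) for j in range(n)])
            # Marginal gain: modular relevance + submodular coverage increase.
            gain = relevance[i] + diversity_weight * (new_cov.sum() - coverage.sum())
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
        coverage = np.maximum(coverage, [sim(ex_vecs[j], ex_vecs[best]) for j in range(n)])
    return selected
```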

2023

pdf
Disambiguated Lexically Constrained Neural Machine Translation
Jinpeng Zhang | Nini Xiao | Ke Wang | Chuanqi Dong | Xiangyu Duan | Yuqi Zhang | Min Zhang
Findings of the Association for Computational Linguistics: ACL 2023

Lexically constrained neural machine translation (LCNMT), which controls translation generation with pre-specified constraints, is important in many practical applications. Current approaches to LCNMT typically assume that the pre-specified lexicon constraints are contextually appropriate. This assumption limits their application to real-world scenarios where a source lexicon may have multiple target constraints, and disambiguation is needed to select the most suitable one. In this paper, we propose disambiguated LCNMT (D-LCNMT) to solve the problem. D-LCNMT is a robust and effective two-stage framework that first disambiguates the constraints based on context and then integrates the disambiguated constraints into LCNMT. Experimental results show that our approach outperforms strong baselines, including existing data augmentation based approaches, on benchmark datasets, and comprehensive experiments in scenarios where a source lexicon corresponds to multiple target constraints demonstrate the constraint disambiguation superiority of our approach.
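As an illustration of the first stage, the sketch below scores each candidate target constraint against an encoding of the source context and keeps the most compatible one; the cosine-over-encoder-states scoring is an assumption of this sketch, not necessarily the paper's exact disambiguation model.

```python
# Minimal sketch of constraint disambiguation by context compatibility.
import torch
import torch.nn.functional as F

def disambiguate_constraint(context_vec, candidate_vecs):
    """context_vec: (d,) encoding of the source sentence; candidate_vecs:
    (num_candidates, d) encodings of the candidate target constraints.
    Returns the index of the constraint most compatible with the context."""
    sims = F.cosine_similarity(context_vec.unsqueeze(0), candidate_vecs, dim=-1)
    return int(torch.argmax(sims))
```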

2022

pdf
Third-Party Aligner for Neural Word Alignments
Jinpeng Zhang | Chuanqi Dong | Xiangyu Duan | Yuqi Zhang | Min Zhang
Findings of the Association for Computational Linguistics: EMNLP 2022

Word alignment aims to find translationally equivalent words between source and target sentences. Previous work has demonstrated that self-training can achieve competitive word alignment results. In this paper, we propose to use word alignments generated by a third-party word aligner to supervise neural word alignment training. Specifically, the source word and target word of each word pair aligned by the third-party aligner are trained to be close neighbors in the contextualized embedding space when fine-tuning a pre-trained cross-lingual language model. Experiments on benchmarks of various language pairs show that our approach can, surprisingly, self-correct over the third-party supervision by finding more accurate word alignments and deleting wrong ones, leading to better performance than various third-party word aligners, including the current best one. When we integrate the supervisions from all third-party aligners, we achieve state-of-the-art word alignment performance, with alignment error rates on average more than two points lower than the best third-party aligner. We release our code at https://github.com/sdongchuanqi/Third-Party-Supervised-Aligner.
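The supervision signal can be pictured with the following minimal PyTorch sketch, which pulls together the contextualized embeddings of each third-party-aligned word pair; the cosine-based loss form and the names are illustrative assumptions.

```python
# Minimal sketch of third-party-supervised neural word alignment training.
import torch
import torch.nn.functional as F

def third_party_supervision_loss(src_hidden, tgt_hidden, alignments):
    """src_hidden: (src_len, d), tgt_hidden: (tgt_len, d) from the cross-lingual
    LM being fine-tuned; alignments: list of (src_idx, tgt_idx) pairs produced
    by the third-party aligner. Aligned pairs are trained to be close neighbors."""
    src_idx = torch.tensor([i for i, _ in alignments])
    tgt_idx = torch.tensor([j for _, j in alignments])
    s = F.normalize(src_hidden[src_idx], dim=-1)
    t = F.normalize(tgt_hidden[tgt_idx], dim=-1)
    # Maximize cosine similarity of aligned pairs (minimize 1 - cos).
    return (1.0 - (s * t).sum(-1)).mean()
```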

pdf
TSMind: Alibaba and Soochow University’s Submission to the WMT22 Translation Suggestion Task
Xin Ge | Ke Wang | Jiayi Wang | Nini Xiao | Xiangyu Duan | Yu Zhao | Yuqi Zhang
Proceedings of the Seventh Conference on Machine Translation (WMT)

This paper describes the joint submission of Alibaba and Soochow University to the WMT 2022 Shared Task on Translation Suggestion (TS). We participate in the English to/from German and English to/from Chinese tasks. We adopt the recently very successful paradigm of fine-tuning large-scale pre-trained models on the downstream task, choosing FAIR's WMT19 English to/from German news translation system and MBART50 for English to/from Chinese as our pre-trained models. Given the task's restriction on the use of training data, we follow the data augmentation strategies provided by Yang to boost our TS model performance, and we further employ a dual conditional cross-entropy model and a GPT-2 language model to filter the augmented data. The final leaderboard shows that our submissions rank first in three of the four language directions of the Naive TS track of the WMT22 Translation Suggestion task.
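For reference, the dual conditional cross-entropy filter commonly follows the formulation of Junczys-Dowmunt (2018); a minimal sketch under that assumption is given below, with hypothetical inputs (per-token cross-entropies from a forward and a backward translation model).

```python
# Minimal sketch of dual conditional cross-entropy scoring for filtering
# augmented sentence pairs; lower scores indicate likely-clean pairs.

def dual_xent_score(h_fwd, h_bwd):
    """h_fwd: per-token cross-entropy of tgt given src under a src->tgt model;
    h_bwd: per-token cross-entropy of src given tgt under a tgt->src model.
    Penalizes disagreement between directions and high absolute entropy."""
    return abs(h_fwd - h_bwd) + 0.5 * (h_fwd + h_bwd)
```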

2021

pdf
基于层间知识蒸馏的神经机器翻译(Inter-layer Knowledge Distillation for Neural Machine Translation)
Chang Jin (金畅) | Renchong Duan (段仁翀) | Nini Xiao (肖妮妮) | Xiangyu Duan (段湘煜)
Proceedings of the 20th Chinese National Conference on Computational Linguistics

Neural machine translation (NMT) typically adopts a multi-layer neural network architecture. As the network deepens, the features it extracts become increasingly abstract, yet in existing NMT models the abstract information of the higher layers is used only when predicting the output distribution. To make better use of this information, this paper proposes inter-layer knowledge distillation, which aims to transfer the abstract knowledge of higher layers to lower layers, enabling the lower layers to capture more useful information and thereby improving the translation quality of the whole model. Unlike traditional knowledge distillation between a teacher model and a student model, inter-layer knowledge distillation transfers knowledge between different layers within the same model. Experiments on Chinese-English, English-Romanian, and German-English datasets show that the inter-layer distillation method effectively improves translation performance, with BLEU gains of 1.19, 0.72, and 1.35 on Chinese-English, English-Romanian, and German-English respectively, and also demonstrate that effectively exploiting higher-layer information improves the translation quality of neural network models.
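A minimal PyTorch sketch of the inter-layer distillation loss is given below, assuming the lower layer's hidden states are projected through the same output layer as the top layer and trained to match the top layer's prediction distribution; the temperature and the shared projection are assumptions of this sketch.

```python
# Minimal sketch of inter-layer knowledge distillation within one model.
import torch.nn.functional as F

def inter_layer_kd_loss(low_layer_hidden, top_layer_hidden, output_proj, temperature=1.0):
    """Both hiddens: (batch, seq, d); output_proj maps d -> vocab size."""
    student = F.log_softmax(output_proj(low_layer_hidden) / temperature, dim=-1)
    teacher = F.softmax(output_proj(top_layer_hidden) / temperature, dim=-1).detach()
    # KL(teacher || student); teacher gradients are stopped so knowledge
    # flows from the top layer down, not the reverse.
    return F.kl_div(student, teacher, reduction="batchmean") * temperature ** 2
```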

pdf
Combining Static Word Embeddings and Contextual Representations for Bilingual Lexicon Induction
Jinpeng Zhang | Baijun Ji | Nini Xiao | Xiangyu Duan | Min Zhang | Yangbin Shi | Weihua Luo
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

pdf
Token Drop mechanism for Neural Machine Translation
Huaao Zhang | Shigui Qiu | Xiangyu Duan | Min Zhang
Proceedings of the 28th International Conference on Computational Linguistics

Neural machine translation with millions of parameters is vulnerable to unfamiliar inputs. We propose Token Drop to improve generalization and avoid overfitting in the NMT model. It is similar to word dropout, except that we replace the dropped tokens with a special token instead of zeroing out the word embeddings. We further introduce two self-supervised objectives: Replaced Token Detection and Dropped Token Prediction. Our method forces the model to generate the target translation with less information, so that it learns better textual representations. Experiments on Chinese-English and English-Romanian benchmarks demonstrate the effectiveness of our approach, and our model achieves significant improvements over a strong Transformer baseline.
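The corruption step can be sketched as follows in PyTorch; the mask and label layout for the two auxiliary objectives are illustrative assumptions, with -100 used as the standard ignore index for cross-entropy.

```python
# Minimal sketch of Token Drop: replace a random subset of input tokens with a
# special <drop> token (rather than zeroing embeddings as in word dropout),
# keeping a mask so Replaced Token Detection / Dropped Token Prediction can be
# trained as auxiliary objectives.
import torch

def token_drop(input_ids, drop_token_id, drop_prob=0.15, pad_token_id=0):
    """input_ids: (batch, seq). Returns corrupted ids, the drop mask, and
    labels for Dropped Token Prediction (original ids at dropped positions)."""
    droppable = input_ids.ne(pad_token_id)
    drop_mask = (torch.rand_like(input_ids, dtype=torch.float) < drop_prob) & droppable
    corrupted = input_ids.masked_fill(drop_mask, drop_token_id)
    labels = input_ids.masked_fill(~drop_mask, -100)  # ignore index for CE loss
    return corrupted, drop_mask, labels
```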

pdf
Bilingual Dictionary Based Neural Machine Translation without Using Parallel Sentences
Xiangyu Duan | Baijun Ji | Hao Jia | Min Tan | Min Zhang | Boxing Chen | Weihua Luo | Yue Zhang
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

In this paper, we propose a new machine translation (MT) task that uses no parallel sentences but can refer to a ground-truth bilingual dictionary. Motivated by a monolingual speaker's ability to learn to translate by looking up a bilingual dictionary, we propose the task to see how much potential an MT system can attain using the bilingual dictionary and large-scale monolingual corpora while being independent of parallel sentences. We propose anchored training (AT) to tackle the task. AT uses the bilingual dictionary to establish anchoring points that close the gap between the source language and the target language. Experiments on various language pairs show that our approaches are significantly better than various baselines, including dictionary-based word-by-word translation, dictionary-supervised cross-lingual word embedding transformation, and unsupervised MT. On distant language pairs on which unsupervised MT struggles, AT performs remarkably better, achieving performance comparable to supervised SMT trained on more than 4M parallel sentences.
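A minimal sketch of one plausible realization of anchoring is shown below: dictionary translations are substituted into monolingual sentences so that the two languages share surface anchors during training. This code-switching reading is an assumption for illustration; the paper's exact anchoring procedure may differ.

```python
# Minimal sketch of creating dictionary-based anchoring points.
import random

def anchor_sentence(tokens, bilingual_dict, anchor_prob=0.3, seed=0):
    """tokens: list of source-language words; bilingual_dict: source word ->
    list of target-language translations. Returns an 'anchored' copy in which
    some dictionary words are swapped for a target-language translation."""
    rng = random.Random(seed)
    return [rng.choice(bilingual_dict[w])
            if w in bilingual_dict and rng.random() < anchor_prob else w
            for w in tokens]
```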

pdf
Factorized Transformer for Multi-Domain Neural Machine Translation
Yongchao Deng | Hongfei Yu | Heng Yu | Xiangyu Duan | Weihua Luo
Findings of the Association for Computational Linguistics: EMNLP 2020

Multi-Domain Neural Machine Translation (NMT) aims at building a single system that performs well on a range of target domains. However, the extreme diversity of cross-domain wording and phrasing style, the imperfections of the training data distribution, and the inherent defects of the current sequential learning process all contribute to making multi-domain NMT very challenging. To mitigate these problems, we propose the Factorized Transformer, which factorizes the parameters of an NMT model, namely the Transformer in this paper, into two categories: domain-shared parameters that encode common cross-domain knowledge and domain-specific parameters that are private to each constituent domain. We experiment with various designs of our model and conduct extensive validations on an open English-to-French multi-domain dataset. Our approach achieves state-of-the-art performance and opens up new perspectives for multi-domain and open-domain applications.
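The parameter factorization can be pictured with the following PyTorch sketch of a single linear sublayer; the low-rank form of the domain-specific part is an illustrative assumption of this sketch.

```python
# Minimal sketch of a layer factorized into domain-shared and domain-specific
# parameters: the shared linear map carries cross-domain knowledge, while a
# small private factor pair per domain adds a domain-specific correction.
import torch
import torch.nn as nn

class FactorizedLinear(nn.Module):
    def __init__(self, d_in, d_out, num_domains, rank=8):
        super().__init__()
        self.shared = nn.Linear(d_in, d_out)  # domain-shared parameters
        self.dom_u = nn.Parameter(torch.randn(num_domains, d_in, rank) * 0.01)
        self.dom_v = nn.Parameter(torch.randn(num_domains, rank, d_out) * 0.01)

    def forward(self, x, domain_id):
        private = self.dom_u[domain_id] @ self.dom_v[domain_id]  # (d_in, d_out)
        return self.shared(x) + x @ private
```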

2019

pdf
Zero-Shot Cross-Lingual Abstractive Sentence Summarization through Teaching Generation and Attention
Xiangyu Duan | Mingming Yin | Min Zhang | Boxing Chen | Weihua Luo
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Abstractive Sentence Summarization (ASSUM) aims to grasp the core idea of the source sentence and present it as the summary. It has been extensively studied using statistical or neural models trained on large-scale monolingual source-summary parallel corpora. But there is no cross-lingual parallel corpus, in which the source sentence language differs from the summary language, on which to directly train a cross-lingual ASSUM system. We propose to solve this zero-shot problem by using a resource-rich monolingual ASSUM system to teach a zero-shot cross-lingual ASSUM system on both summary word generation and attention. This teaching process is accompanied by a back-translation process that simulates source-summary pairs. Experiments on the cross-lingual ASSUM task show that our proposed method is significantly better than pipeline baselines and previous work, and brings cross-lingual performance much closer to monolingual performance.
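The teaching objective can be sketched as below, with the monolingual teacher supervising the cross-lingual student's output distribution via KL and its attention via MSE; the specific MSE-on-attention choice and all names are assumptions of this sketch.

```python
# Minimal sketch of teaching both summary word generation and attention.
import torch.nn.functional as F

def teaching_loss(student_logits, teacher_logits, student_attn, teacher_attn,
                  attn_weight=1.0):
    """Logits: (batch, seq, vocab); attn: (batch, tgt_len, src_len)."""
    gen = F.kl_div(F.log_softmax(student_logits, dim=-1),
                   F.softmax(teacher_logits, dim=-1).detach(),
                   reduction="batchmean")
    attn = F.mse_loss(student_attn, teacher_attn.detach())
    return gen + attn_weight * attn
```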

pdf
Contrastive Attention Mechanism for Abstractive Sentence Summarization
Xiangyu Duan | Hongfei Yu | Mingming Yin | Min Zhang | Weihua Luo | Yue Zhang
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

We propose a contrastive attention mechanism to extend the sequence-to-sequence framework for the abstractive sentence summarization task, which aims to generate a brief summary of a given source sentence. The proposed contrastive attention mechanism accommodates two categories of attention: one is the conventional attention that attends to relevant parts of the source sentence; the other is the opponent attention that attends to irrelevant or less relevant parts. The two attentions are trained in opposite directions, so that the contribution from the conventional attention is encouraged and the contribution from the opponent attention is discouraged, through a novel softmax and softmin functionality. Experiments on benchmark datasets show that the proposed contrastive attention mechanism focuses more sharply on the parts relevant to the summary than the conventional attention mechanism does, and greatly advances the state-of-the-art performance on the abstractive sentence summarization task. We release the code at https://github.com/travel-go/Abstractive-Text-Summarization.
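A minimal sketch of the two attention branches: the conventional branch is a standard softmax over alignment scores, while the opponent branch applies softmax to the negated scores (a softmin), concentrating on the least relevant source parts; how the two contributions enter the training loss is left out, and the shapes are illustrative assumptions.

```python
# Minimal sketch of conventional vs. opponent (softmin) attention.
import torch
import torch.nn.functional as F

def contrastive_attention(scores, values):
    """scores: (batch, tgt_len, src_len) alignment scores;
    values: (batch, src_len, d) source representations."""
    conventional = F.softmax(scores, dim=-1) @ values   # attends to relevant parts
    opponent = F.softmax(-scores, dim=-1) @ values      # softmin: least relevant parts
    return conventional, opponent
```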

2018

pdf bib
Proceedings of the Seventh Named Entities Workshop
Nancy Chen | Rafael E. Banchs | Xiangyu Duan | Min Zhang | Haizhou Li
Proceedings of the Seventh Named Entities Workshop

pdf
NEWS 2018 Whitepaper
Nancy Chen | Xiangyu Duan | Min Zhang | Rafael E. Banchs | Haizhou Li
Proceedings of the Seventh Named Entities Workshop

Transliteration is defined as the phonetic translation of names across languages. Transliteration of Named Entities (NEs) is necessary in many applications, such as machine translation, corpus alignment, cross-language IR, information extraction, and automatic lexicon acquisition. All such systems call for high-performance transliteration, which is the focus of the shared task at the NEWS 2018 workshop. The objective of the shared task is to promote machine transliteration research by providing a common benchmarking platform for the community to evaluate state-of-the-art technologies.

pdf
Report of NEWS 2018 Named Entity Transliteration Shared Task
Nancy Chen | Rafael E. Banchs | Min Zhang | Xiangyu Duan | Haizhou Li
Proceedings of the Seventh Named Entities Workshop

This report presents the results of the Named Entity Transliteration Shared Task conducted as part of the Seventh Named Entities Workshop (NEWS 2018) held at ACL 2018 in Melbourne, Australia. As in previous editions of NEWS, the Shared Task featured 19 tasks on proper name transliteration, covering 13 different languages and two different Japanese scripts. A total of 6 teams from 8 different institutions participated in the evaluation, submitting 424 runs involving different transliteration methodologies. Four performance metrics were used to report the evaluation results. The NEWS shared task on machine transliteration has successfully achieved its objectives by providing common ground for the research community to conduct comparative evaluations of state-of-the-art technologies that will benefit future research and development in this area.

2016

pdf bib
Proceedings of the Sixth Named Entity Workshop
Xiangyu Duan | Rafael E. Banchs | Min Zhang | Haizhou Li | A Kumaran
Proceedings of the Sixth Named Entity Workshop

pdf
Whitepaper of NEWS 2016 Shared Task on Machine Transliteration
Xiangyu Duan | Min Zhang | Haizhou Li | Rafael Banchs | A Kumaran
Proceedings of the Sixth Named Entity Workshop

pdf
Report of NEWS 2016 Machine Transliteration Shared Task
Xiangyu Duan | Rafael Banchs | Min Zhang | Haizhou Li | A. Kumaran
Proceedings of the Sixth Named Entity Workshop

2015

pdf bib
Proceedings of the Fifth Named Entity Workshop
Xiangyu Duan | Rafael E. Banchs | Min Zhang | Haizhou Li | A Kumaran
Proceedings of the Fifth Named Entity Workshop

pdf bib
Report of NEWS 2015 Machine Transliteration Shared Task
Rafael E. Banchs | Min Zhang | Xiangyu Duan | Haizhou Li | A. Kumaran
Proceedings of the Fifth Named Entity Workshop

2014

pdf
Synchronous Constituent Context Model for Inducing Bilingual Synchronous Structures
Xiangyu Duan | Min Zhang | Qiaoming Zhu
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

2011

pdf
Joint Alignment and Artificial Data Generation: An Empirical Study of Pivot-based Machine Transliteration
Min Zhang | Xiangyu Duan | Ming Liu | Yunqing Xia | Haizhou Li
Proceedings of 5th International Joint Conference on Natural Language Processing

2010

pdf
Pseudo-Word for Phrase-Based Machine Translation
Xiangyu Duan | Min Zhang | Haizhou Li
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

pdf
I2R’s machine translation system for IWSLT 2010
Xiangyu Duan | Rafael Banchs | Jun Lang | Deyi Xiong | Aiti Aw | Min Zhang | Haizhou Li
Proceedings of the 7th International Workshop on Spoken Language Translation: Evaluation Campaign

pdf
Machine Transliteration: Leveraging on Third Languages
Min Zhang | Xiangyu Duan | Vladimir Pervouchine | Haizhou Li
Coling 2010: Posters

2009

pdf
I2R’s machine translation system for IWSLT 2009
Xiangyu Duan | Deyi Xiong | Hui Zhang | Min Zhang | Haizhou Li
Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign

In this paper, we describe the system and approach used by the Institute for Infocomm Research (I2R) for the IWSLT 2009 spoken language translation evaluation campaign. Two kinds of machine translation systems are applied, namely a phrase-based machine translation system and a syntax-based machine translation system. To test the syntax-based system on spoken language translation, several system variants are explored. On top of both the phrase-based and syntax-based single systems, we further use rescoring to improve individual system performance and system combination to combine the strengths of the different individual systems. Rescoring is applied to each single system output, and system combination is applied to all rescoring outputs. Finally, our system combination framework shows better performance on the Chinese-English BTEC task.

2007

pdf
Probabilistic Parsing Action Models for Multi-Lingual Dependency Parsing
Xiangyu Duan | Jun Zhao | Bo Xu
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)