2023
Better Simultaneous Translation with Monotonic Knowledge Distillation
Shushu Wang | Jing Wu | Kai Fan | Wei Luo | Jun Xiao | Zhongqiang Huang
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Simultaneous machine translation (SiMT) presents a unique challenge as it requires generating target tokens before the source sentence is fully consumed. This can lead to the hallucination problem, where target tokens are generated without support from the source sentence. The prefix-to-prefix training data used to train SiMT models are not always parallel, due to divergent word order between the source and target languages, and can contribute to the problem. In this paper, we propose a novel approach that leverages traditional translation models as teachers and employs a two-stage beam search algorithm to generate monotonic yet accurate reference translations for sequence-level knowledge distillation. Experimental results demonstrate the significant improvements achieved by our approach over multiple strong SiMT baselines, leading to new state-of-the-art performance across various language pairs. Notably, when evaluated on a monotonic version of the WMT15 De-En test set, which includes references generated in a more monotonic style by professional translators, our approach achieves even more substantial improvement over the baselines. The source code and data are publicly available for further exploration.
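As a rough illustration of the sequence-level distillation setup this abstract describes, the sketch below trains a student SiMT model on teacher-generated references. The names and the `teacher_generate_monotonic` helper are hypothetical stand-ins; the paper's actual two-stage beam search for producing monotonic references is not reproduced here.

```python
# Minimal sketch of sequence-level knowledge distillation for SiMT.
# `teacher_generate_monotonic` is a hypothetical stand-in for the
# paper's two-stage beam search over a traditional MT teacher.
import torch
import torch.nn.functional as F

def distill_step(student, optimizer, src_batch, teacher_generate_monotonic):
    # 1) The teacher produces a monotonic yet accurate reference translation.
    with torch.no_grad():
        distilled_ref = teacher_generate_monotonic(src_batch)   # (B, T) token ids
    # 2) The student (a prefix-to-prefix SiMT model) is trained on the
    #    distilled reference with ordinary teacher-forced cross-entropy.
    logits = student(src_batch, distilled_ref[:, :-1])          # (B, T-1, V)
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        distilled_ref[:, 1:].reshape(-1),
        ignore_index=0,  # assumes 0 is the padding id
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```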
2022
Discrete Cross-Modal Alignment Enables Zero-Shot Speech Translation
Chen Wang | Yuchen Liu | Boxing Chen | Jiajun Zhang | Wei Luo | Zhongqiang Huang | Chengqing Zong
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
End-to-end Speech Translation (ST) aims at translating the source language speech into target language text without generating the intermediate transcriptions. However, the training of end-to-end methods relies on parallel ST data, which are difficult and expensive to obtain. Fortunately, the supervised data for automatic speech recognition (ASR) and machine translation (MT) are usually more accessible, making zero-shot speech translation a potential direction. Existing zero-shot methods fail to align the two modalities of speech and text into a shared semantic space, resulting in much worse performance compared to the supervised ST methods. In order to enable zero-shot ST, we propose a novel Discrete Cross-Modal Alignment (DCMA) method that employs a shared discrete vocabulary space to accommodate and match both modalities of speech and text. Specifically, we introduce a vector quantization module to discretize the continuous representations of speech and text into a finite set of virtual tokens, and use ASR data to map corresponding speech and text to the same virtual token in a shared codebook. This way, source language speech can be embedded in the same semantic space as the source language text, which can then be transformed into target language text with an MT module. Experiments on multiple language pairs demonstrate that our zero-shot ST method significantly improves over the state of the art, and even performs on par with strong supervised ST baselines.
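A minimal sketch of the shared-codebook quantization idea from this abstract, assuming hypothetical encoder outputs; the paper's full DCMA training (the ASR-supervised code matching that ties the two modalities together) is omitted.

```python
# Sketch of a shared codebook that maps continuous speech or text
# representations to discrete "virtual tokens". Shapes and sizes are
# illustrative, not the paper's actual configuration.
import torch

class SharedCodebook(torch.nn.Module):
    def __init__(self, num_codes=1024, dim=512):
        super().__init__()
        self.codebook = torch.nn.Embedding(num_codes, dim)

    def forward(self, h):
        # h: (B, T, dim) continuous representations from either modality.
        # Assign every position to its nearest codebook entry.
        book = self.codebook.weight.unsqueeze(0).expand(h.size(0), -1, -1)
        codes = torch.cdist(h, book).argmin(dim=-1)   # (B, T) virtual tokens
        quantized = self.codebook(codes)              # back to vectors
        # Straight-through estimator so gradients still reach the encoder.
        quantized = h + (quantized - h).detach()
        return codes, quantized
```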
2021
Mutual-Learning Improves End-to-End Speech Translation
Jiawei Zhao | Wei Luo | Boxing Chen | Andrew Gilman
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
A currently popular research area in end-to-end speech translation is the use of knowledge distillation from a machine translation (MT) task to improve the speech translation (ST) task. However, such a scenario only allows one-way transfer and is limited by the performance of the teacher model. We therefore hypothesize that knowledge distillation-based approaches are sub-optimal. In this paper, we propose an alternative: a trainable mutual-learning scenario, where the MT and the ST models are collaboratively trained and are considered peers, rather than teacher and student. This allows us to improve the performance of end-to-end ST more effectively than with a teacher-student paradigm. As a side benefit, the performance of the MT model also improves. Experimental results show that in our mutual-learning scenario, models can effectively utilise the auxiliary information from peer models and achieve compelling results on the MuST-C dataset.
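One way to read the mutual-learning objective described above: each model keeps its own cross-entropy loss and additionally matches its peer's output distribution, in both directions, unlike one-way teacher-student distillation. The sketch below assumes both models emit logits over a shared target vocabulary; names and the weighting are illustrative, not the paper's actual formulation.

```python
# Hypothetical sketch of a symmetric mutual-learning loss between an
# MT model and an ST model over a shared target vocabulary.
import torch
import torch.nn.functional as F

def mutual_learning_loss(st_logits, mt_logits, targets, alpha=0.5, pad_id=0):
    # st_logits, mt_logits: (B, T, V); targets: (B, T) reference token ids.
    # Each model keeps its ordinary supervised cross-entropy ...
    ce_st = F.cross_entropy(st_logits.transpose(1, 2), targets, ignore_index=pad_id)
    ce_mt = F.cross_entropy(mt_logits.transpose(1, 2), targets, ignore_index=pad_id)
    # ... and additionally matches its peer's distribution (both directions).
    kl_st = F.kl_div(F.log_softmax(st_logits, dim=-1),
                     F.softmax(mt_logits, dim=-1).detach(), reduction="batchmean")
    kl_mt = F.kl_div(F.log_softmax(mt_logits, dim=-1),
                     F.softmax(st_logits, dim=-1).detach(), reduction="batchmean")
    return ce_st + ce_mt + alpha * (kl_st + kl_mt)
```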
2018
IRCMS at SemEval-2018 Task 7: Evaluating a basic CNN Method and Traditional Pipeline Method for Relation Classification
Zhongbo Yin | Zhunchen Luo | Wei Luo | Mao Bin | Changhai Tian | Yuming Ye | Shuai Wu
Proceedings of the 12th International Workshop on Semantic Evaluation
This paper presents our participation in subtasks 1.1 and 1.2 of SemEval 2018 Task 7: Semantic Relation Extraction and Classification in Scientific Papers (Gábor et al., 2018). We experimented with two methods for this task: a CNN method and a traditional pipeline method. For both methods, we use only the context between the two entities (inclusive) as input, which greatly reduces the effect of noise. For the CNN method, we construct a simple convolutional neural network that automatically learns features from raw text without any manual processing, and use a softmax function to classify each entity pair into a specific relation category. For the traditional pipeline method, we use the Hackabout method as a representation, as described in Section 3.5. The CNN method performs much better than the traditional pipeline method (49.1% vs. 42.3% and 71.1% vs. 54.6%).
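A rough sketch of the kind of CNN classifier this abstract describes: a single convolution over the token span between (and including) the two entity mentions, max-over-time pooling, and a softmax over relation categories. All hyperparameters and names are illustrative, not the system's actual configuration.

```python
# Simple CNN relation classifier over the inter-entity context.
import torch

class RelationCNN(torch.nn.Module):
    def __init__(self, vocab_size, num_relations, emb_dim=100, filters=128, width=3):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, emb_dim)
        self.conv = torch.nn.Conv1d(emb_dim, filters, kernel_size=width,
                                    padding=width // 2)
        self.out = torch.nn.Linear(filters, num_relations)

    def forward(self, token_ids):
        # token_ids: (B, T), the context between the two entity mentions.
        x = self.embed(token_ids).transpose(1, 2)        # (B, emb_dim, T)
        x = torch.relu(self.conv(x)).max(dim=2).values   # max-over-time pooling
        return self.out(x)  # relation logits; softmax is applied in the loss
```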
2016
Speculation and Negation Scope Detection via Convolutional Neural Networks
Zhong Qian | Peifeng Li | Qiaoming Zhu | Guodong Zhou | Zhunchen Luo | Wei Luo
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
2010
The ICT statistical machine translation system for IWSLT 2010
Hao Xiong | Jun Xie | Hui Yu | Kai Liu | Wei Luo | Haitao Mi | Yang Liu | Yajuan Lü | Qun Liu
Proceedings of the 7th International Workshop on Spoken Language Translation: Evaluation Campaign
2002
Medstract: creating large-scale information servers from biomedical texts
James Pustejovsky | José Castaño | Roser Saurí | Jason Zhang | Wei Luo
Proceedings of the ACL-02 Workshop on Natural Language Processing in the Biomedical Domain