Wei Chen


2021

pdf bib
CoMAE: A Multi-factor Hierarchical Framework for Empathetic Response Generation
Chujie Zheng | Yong Liu | Wei Chen | Yongcai Leng | Minlie Huang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

pdf bib
Counterfactual Supporting Facts Extraction for Explainable Medical Record Based Diagnosis with Graph Network
Haoran Wu | Wei Chen | Shuang Xu | Bo Xu
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Providing a reliable explanation for clinical diagnosis based on the Electronic Medical Record (EMR) is fundamental to the application of Artificial Intelligence in the medical field. Current methods mostly treat the EMR as a text sequence and provide explanations based on a precise medical knowledge base, which is disease-specific and difficult for experts to obtain in reality. Therefore, in this paper we propose a counterfactual multi-granularity graph supporting facts extraction (CMGE) method that extracts supporting facts from the irregular EMR itself without external knowledge bases. Specifically, we first structure the sequence of the EMR into a hierarchical graph network and then obtain the causal relationship between multi-granularity features and diagnosis results through counterfactual intervention on the graph. The features with the strongest causal connection to the results provide interpretive support for the diagnosis. Experimental results on real Chinese lymphedema EMRs demonstrate that our method can diagnose four types of EMR correctly and can provide accurate supporting facts for the results. More importantly, the results on different diseases demonstrate the robustness of our approach, which indicates its potential for application in the medical field.
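
As a rough illustration of the counterfactual intervention idea (not the authors' CMGE implementation; the classifier and node features below are toy stand-ins), one can ablate each graph node in turn and score it by the drop in the predicted diagnosis probability:

```python
import numpy as np

def causal_scores(node_features, predict_proba, target_class):
    """Score each node by how much zeroing it out reduces the predicted
    probability of the target diagnosis (its counterfactual effect)."""
    baseline = predict_proba(node_features)[target_class]
    scores = []
    for i in range(len(node_features)):
        intervened = node_features.copy()
        intervened[i] = 0.0                  # counterfactual: remove node i's features
        scores.append(baseline - predict_proba(intervened)[target_class])
    return np.array(scores)

# Toy stand-in for a trained graph classifier over 4 diagnosis classes.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))

def predict(x):
    logits = x.sum(axis=0) @ W               # crude graph readout: sum node features
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

nodes = rng.normal(size=(5, 8))              # 5 graph nodes, 8-dim features each
print(causal_scores(nodes, predict, target_class=2))
```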

pdf bib
EARL: Informative Knowledge-Grounded Conversation Generation with Entity-Agnostic Representation Learning
Hao Zhou | Minlie Huang | Yong Liu | Wei Chen | Xiaoyan Zhu
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Generating informative and appropriate responses is challenging but important for building human-like dialogue systems. Although various knowledge-grounded conversation models have been proposed, these models have limitations in utilizing knowledge that occurs infrequently in the training data, not to mention integrating unseen knowledge into conversation generation. In this paper, we propose an Entity-Agnostic Representation Learning (EARL) method to introduce knowledge graphs into informative conversation generation. Unlike traditional approaches that parameterize a specific representation for each entity, EARL utilizes the context of conversations and the relational structure of knowledge graphs to learn category representations for entities, which generalizes to incorporating unseen entities in knowledge graphs into conversation generation. Automatic and manual evaluations demonstrate that our model can generate more informative, coherent, and natural responses than baseline models.
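
The core idea of entity-agnostic representation, learning one vector per entity category instead of one per entity, might be sketched as follows (a minimal PyTorch illustration with hypothetical names; the paper's actual model also conditions on conversation context and the knowledge graph's relational structure):

```python
import torch
import torch.nn as nn

class EntityAgnosticEmbedder(nn.Module):
    """Look up a shared category embedding instead of a per-entity vector,
    so unseen entities of a known category still receive a representation."""
    def __init__(self, num_categories, dim=128):
        super().__init__()
        self.category_emb = nn.Embedding(num_categories, dim)

    def forward(self, category_ids):
        # category_ids: tensor of category indices, one per mentioned entity
        return self.category_emb(category_ids)

embedder = EntityAgnosticEmbedder(num_categories=10)
vectors = embedder(torch.tensor([3, 3, 7]))   # two entities of category 3, one of 7
print(vectors.shape)                          # -> torch.Size([3, 128])
```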

pdf bib
End-to-End Conversational Search for Online Shopping with Utterance Transfer
Liqiang Xiao | Jun Ma | Xin Luna Dong | Pascual Martínez-Gómez | Nasser Zalmout | Wei Chen | Tong Zhao | Hao He | Yaohui Jin
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Successful conversational search systems can present a natural, adaptive, and interactive shopping experience for online shopping customers. However, building such systems from scratch faces real-world challenges from both imperfect product schema/knowledge and a lack of training dialog data. In this work we first propose ConvSearch, an end-to-end conversational search system that deeply combines the dialog system with search. It leverages the text profile to retrieve products, which is more robust against imperfect product schema/knowledge than using product attributes alone. We then address the lack-of-data challenge by proposing an utterance transfer approach that generates dialogue utterances by using existing dialogs from other domains and leveraging search behavior data from an e-commerce retailer. With utterance transfer, we introduce a new conversational search dataset for online shopping. Experiments show that our utterance transfer method can significantly improve the availability of training dialogue data without crowd-sourcing, and that the conversational search system significantly outperforms the best tested baseline.

2020

pdf bib
Robust Neural Machine Translation with ASR Errors
Haiyang Xue | Yang Feng | Shuhao Gu | Wei Chen
Proceedings of the First Workshop on Automatic Simultaneous Translation

In many practical applications, neural machine translation systems have to deal with input from automatic speech recognition (ASR) systems, which may contain a certain number of errors. This leads to two problems that degrade translation performance: one is the discrepancy between the training and testing data, and the other is that translation errors caused by the input errors may ruin the whole translation. In this paper, we propose a method to handle both problems so as to generate translations robust to ASR errors. First, we simulate ASR errors in the training data so that the data distributions at training and test time are consistent. Second, we focus on ASR errors involving homophones and words with similar pronunciation, and make use of their pronunciation information to help the translation model recover from the input errors. Experiments on two Chinese-English data sets show that our method is more robust to input errors and can significantly outperform the strong Transformer baseline.
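
The error-simulation step could look roughly like the sketch below (illustrative only; the homophone table is a made-up placeholder, and the paper additionally exploits pronunciation information inside the model):

```python
import random

# Hypothetical homophone table; in practice this would be built from a
# pronunciation lexicon (e.g. pinyin for Chinese source sentences).
HOMOPHONES = {"their": ["there", "they're"], "know": ["no"]}

def simulate_asr_errors(tokens, error_rate=0.1, seed=None):
    """Randomly replace words with homophones to mimic ASR confusions,
    so training and test input distributions match more closely."""
    rng = random.Random(seed)
    noisy = []
    for tok in tokens:
        if tok in HOMOPHONES and rng.random() < error_rate:
            noisy.append(rng.choice(HOMOPHONES[tok]))
        else:
            noisy.append(tok)
    return noisy

print(simulate_asr_errors("I know their plan".split(), error_rate=1.0, seed=0))
```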

2018

pdf bib
Unsupervised Neural Machine Translation with Weight Sharing
Zhen Yang | Wei Chen | Feng Wang | Bo Xu
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Unsupervised neural machine translation (NMT) is a recently proposed approach to machine translation that aims to train the model without using any labeled data. Models proposed for unsupervised NMT often use only one shared encoder to map sentence pairs from different languages to a shared latent space, which is weak at preserving the unique and internal characteristics of each language, such as style, terminology, and sentence structure. To address this issue, we introduce an extension that uses two independent encoders which share some of the weights responsible for extracting high-level representations of the input sentences. In addition, two different generative adversarial networks (GANs), namely a local GAN and a global GAN, are proposed to enhance cross-language translation. With this new approach, we achieve significant improvements on English-German, English-French, and Chinese-to-English translation tasks.
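
A minimal PyTorch sketch of the partial weight-sharing idea, assuming GRU encoders for brevity (the paper's actual architecture and the local/global GANs are not reproduced here):

```python
import torch
import torch.nn as nn

class SharedTopEncoders(nn.Module):
    """Two language-specific encoders whose top layer shares parameters, so
    high-level representations of both languages live in a common space."""
    def __init__(self, vocab_src, vocab_tgt, d_model=256):
        super().__init__()
        self.emb_src = nn.Embedding(vocab_src, d_model)    # private to language A
        self.emb_tgt = nn.Embedding(vocab_tgt, d_model)    # private to language B
        self.low_src = nn.GRU(d_model, d_model, batch_first=True)
        self.low_tgt = nn.GRU(d_model, d_model, batch_first=True)
        self.shared = nn.GRU(d_model, d_model, batch_first=True)  # shared top layer

    def encode(self, tokens, lang):
        emb = self.emb_src if lang == "src" else self.emb_tgt
        low = self.low_src if lang == "src" else self.low_tgt
        h, _ = low(emb(tokens))
        h, _ = self.shared(h)                # identical weights for both languages
        return h

enc = SharedTopEncoders(vocab_src=1000, vocab_tgt=1200)
out = enc.encode(torch.randint(0, 1000, (2, 7)), lang="src")
print(out.shape)                             # -> torch.Size([2, 7, 256])
```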

pdf bib
Semi-Supervised Disfluency Detection
Feng Wang | Wei Chen | Zhen Yang | Qianqian Dong | Shuang Xu | Bo Xu
Proceedings of the 27th International Conference on Computational Linguistics

While disfluency detection has achieved notable success in recent years, it still suffers severely from data scarcity. To tackle this problem, we propose a novel semi-supervised approach that can utilize large amounts of unlabelled data. In this work, a lightweight neural net is proposed to extract hidden features based solely on self-attention, without any Recurrent Neural Network (RNN) or Convolutional Neural Network (CNN). In addition, we use an unlabelled corpus to enhance performance, and Generative Adversarial Network (GAN) training is applied to enforce similar distributions between the labelled and unlabelled data. The experimental results show that our approach achieves significant improvements over strong baselines.
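
The self-attention-only feature extractor can be illustrated with a single-head scaled dot-product attention sketch (a toy NumPy version; the paper's full network and GAN training are not shown):

```python
import numpy as np

def self_attention(X):
    """Single-head scaled dot-product self-attention over a token sequence,
    the kind of lightweight feature extractor used instead of RNNs/CNNs."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                       # query = key = value = X
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ X

tokens = np.random.default_rng(0).normal(size=(6, 32))  # 6 tokens, 32-d features
print(self_attention(tokens).shape)                     # -> (6, 32)
```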

pdf bib
Peperomia at SemEval-2018 Task 2: Vector Similarity Based Approach for Emoji Prediction
Jing Chen | Dechuan Yang | Xilian Li | Wei Chen | Tengjiao Wang
Proceedings of The 12th International Workshop on Semantic Evaluation

This paper describes our participation in SemEval 2018 Task 2: Multilingual Emoji Prediction, in which participants are asked to predict a tweet's most associated emoji from 20 emojis. Instead of regarding it as a 20-class classification problem, we regard it as a text similarity problem and propose a vector similarity based approach. First, the distributed representation (tweet vector) for each tweet is generated; then the similarity between this tweet vector and each emoji's embedding is evaluated. The most similar emoji is chosen as the predicted label. Experimental results show that our approach performs comparably with the classification approach and shows its advantage in classifying emojis with similar semantic meanings.
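
A minimal sketch of the vector similarity approach, assuming precomputed tweet and emoji embeddings (toy random vectors here):

```python
import numpy as np

def predict_emoji(tweet_vec, emoji_embeddings):
    """Return the emoji whose embedding is most cosine-similar to the tweet vector."""
    sims = {
        emoji: np.dot(tweet_vec, vec) / (np.linalg.norm(tweet_vec) * np.linalg.norm(vec))
        for emoji, vec in emoji_embeddings.items()
    }
    return max(sims, key=sims.get)

rng = np.random.default_rng(1)
emojis = {e: rng.normal(size=50) for e in ["❤", "😂", "🔥"]}   # toy 50-d embeddings
tweet = rng.normal(size=50)                                    # stand-in tweet vector
print(predict_emoji(tweet, emojis))
```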

pdf bib
Improving Neural Machine Translation with Conditional Sequence Generative Adversarial Nets
Zhen Yang | Wei Chen | Feng Wang | Bo Xu
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

This paper proposes an approach for applying GANs to NMT. We build a conditional sequence generative adversarial net which comprises two adversarial sub-models, a generator and a discriminator. The generator aims to generate sentences that are hard to distinguish from human-translated sentences (i.e., the gold target sentences), while the discriminator tries to distinguish the machine-generated sentences from human-translated ones. The two sub-models play a minimax game and achieve a win-win situation when they reach a Nash equilibrium. Additionally, the static sentence-level BLEU is utilized as the reinforced objective for the generator, which biases the generation towards high BLEU scores. During training, both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feed the evaluations back to guide the learning of the generator. Experimental results show that the proposed model consistently outperforms the traditional RNNSearch and the newly emerged state-of-the-art Transformer on English-German and Chinese-English translation tasks.
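
The mixed training signal, combining the dynamic discriminator score with static sentence-level BLEU, might be sketched as follows (an illustrative reward function, not the authors' exact objective; the mixing weight is a hypothetical hyperparameter):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def generator_reward(hypothesis, reference, discriminator_prob, bleu_weight=0.5):
    """Combine the dynamic discriminator score with static sentence-level BLEU
    to guide the generator towards fluent, high-BLEU translations."""
    bleu = sentence_bleu([reference], hypothesis,
                         smoothing_function=SmoothingFunction().method1)
    return bleu_weight * bleu + (1.0 - bleu_weight) * discriminator_prob

hyp = "the cat sits on the mat".split()
ref = "the cat sat on the mat".split()
print(generator_reward(hyp, ref, discriminator_prob=0.8))
```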

pdf bib
The Sogou-TIIC Speech Translation System for IWSLT 2018
Yuguang Wang | Liangliang Shi | Linyu Wei | Weifeng Zhu | Jinkun Chen | Zhichao Wang | Shixue Wen | Wei Chen | Yanfeng Wang | Jia Jia
Proceedings of the 15th International Conference on Spoken Language Translation

This paper describes our speech translation system for the IWSLT 2018 task of translating lectures and TED talks from English to German. We employ a pipeline approach, which mainly includes an Automatic Speech Recognition (ASR) system, a post-processing module, and a Neural Machine Translation (NMT) system. Our ASR system is an ensemble of Deep-CNN, BLSTM, and TDNN models with an N-gram language model and lattice rescoring. We report average results on tst2013, tst2014, and tst2015; our best combination system has an average WER of 6.73. The machine translation system is based on Google's Transformer architecture. We achieve an improvement of 3.6 BLEU over the baseline system by applying several techniques, such as cleaning the parallel corpus, fine-tuning single models, ensembling models, and re-scoring with additional features. Our final average result on speech translation is 31.02 BLEU.

2017

pdf bib
Towards Compact and Fast Neural Machine Translation Using a Combined Method
Xiaowei Zhang | Wei Chen | Feng Wang | Shuang Xu | Bo Xu
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

Neural Machine Translation (NMT) places a heavy burden on computation and memory. It is a challenge to deploy NMT models on devices with limited computation and memory budgets. This paper presents a four-stage pipeline to compress the model and speed up decoding for NMT. Our method first introduces a compact architecture based on a convolutional encoder and weight-shared embeddings. Then weight pruning is applied to obtain a sparse model. Next, we propose a fast sequence interpolation approach which enables greedy decoding to achieve performance on par with beam search; hence, the time-consuming beam search can be replaced by simple greedy decoding. Finally, vocabulary selection is used to reduce the computation of the softmax layer. Our final model achieves a 10x speedup, a 17x reduction in parameters, a storage size of less than 35MB, and performance comparable to the baseline model.
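
The weight-pruning stage can be illustrated with a simple magnitude-based sketch (the paper's exact pruning criterion and schedule may differ):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.8):
    """Zero out the smallest-magnitude weights to obtain a sparse model."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

W = np.random.default_rng(0).normal(size=(4, 6))
print(magnitude_prune(W, sparsity=0.8))      # roughly 80% of entries become zero
```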

pdf bib
Sogou Neural Machine Translation Systems for WMT17
Yuguang Wang | Shanbo Cheng | Liyang Jiang | Jiajun Yang | Wei Chen | Muze Li | Lin Shi | Yanfeng Wang | Hongtao Yang
Proceedings of the Second Conference on Machine Translation

2016

pdf bib
A Character-Aware Encoder for Neural Machine Translation
Zhen Yang | Wei Chen | Feng Wang | Bo Xu
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

This article proposes a novel character-aware neural machine translation (NMT) model that views the input sequences as sequences of characters rather than words. Using row convolution (Amodei et al., 2015), the encoder of the proposed model automatically composes word-level information from the input character sequences. Since our model does not rely on boundaries between words (such as whitespace boundaries in English), it can also be applied to languages without explicit word segmentation (such as Chinese). Experimental results on Chinese-English translation tasks show that the proposed character-aware NMT model can achieve translation performance comparable to traditional word-based NMT models. Although the target side is still word-based, the proposed model generates far fewer unknown words.
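
Composing word-level features from character embeddings can be sketched as a toy convolution over character windows (a simplified stand-in for the row convolution used in the paper):

```python
import numpy as np

def char_conv_compose(char_embs, kernel):
    """Slide a row-convolution-style kernel over character embeddings to
    compose higher-level (word-like) representations without explicit
    word boundaries."""
    width = kernel.shape[0]
    out = []
    for i in range(len(char_embs) - width + 1):
        window = char_embs[i:i + width]                    # (width, dim)
        out.append(np.sum(window * kernel[:, None], axis=0))
    return np.stack(out)

chars = np.random.default_rng(0).normal(size=(10, 16))     # 10 characters, 16-d each
kernel = np.ones(3) / 3.0                                   # simple averaging kernel
print(char_conv_compose(chars, kernel).shape)               # -> (8, 16)
```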

pdf bib
pkudblab at SemEval-2016 Task 6: A Specific Convolutional Neural Network System for Effective Stance Detection
Wan Wei | Xiao Zhang | Xuqin Liu | Wei Chen | Tengjiao Wang
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)

2015

pdf bib
Semi-supervised Chinese Word Segmentation based on Bilingual Information
Wei Chen | Bo Xu
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

2014

pdf bib
Context-based Natural Language Processing for GIS-based Vague Region Visualization
Wei Chen
Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science

pdf bib
Exploiting Community Emotion for Microblog Event Detection
Gaoyan Ou | Wei Chen | Tengjiao Wang | Zhongyu Wei | Binyang Li | Dongqing Yang | Kam-Fai Wong
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

2013

pdf bib
The CASIA machine translation system for IWSLT 2013
Xingyuan Peng | Xiaoyin Fu | Wei Wei | Zhenbiao Chen | Wei Chen | Bo Xu
Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign

In this paper, we describe the CASIA statistical machine translation (SMT) system for the IWSLT 2013 Evaluation Campaign. We participated in the Chinese-English and English-Chinese translation tasks. For both of these tasks, we used a hierarchical phrase-based (HPB) decoder as our baseline translation system. A number of techniques were applied to deal with these translation tasks, including parallel sentence extraction, pre-processing, translation model (TM) optimization, language model (LM) interpolation, tuning, and post-processing. With these techniques, the translation results were significantly improved compared with those of the baseline system.

pdf bib
Source aware phrase-based decoding for robust conversational spoken language translation
Sankaranarayanan Ananthakrishnan | Wei Chen | Rohit Kumar | Dennis Mehay
Proceedings of the 10th International Workshop on Spoken Language Translation: Papers

Spoken language translation (SLT) systems typically follow a pipeline architecture, in which the best automatic speech recognition (ASR) hypothesis of an input utterance is fed into a statistical machine translation (SMT) system. Conversational speech often generates unrecoverable ASR errors owing to its rich vocabulary (e.g. out-of-vocabulary (OOV) named entities). In this paper, we study the possibility of alleviating the impact of unrecoverable ASR errors on translation performance by minimizing the contextual effects of incorrect source words in target hypotheses. Our approach is driven by locally-derived penalties applied to bilingual phrase pairs as well as target language model (LM) likelihoods in the vicinity of source errors. With oracle word error labels on an OOV word-rich English-to-Iraqi Arabic translation task, we show statistically significant relative improvements of 3.2% BLEU and 2.0% METEOR over an error-agnostic baseline SMT system. We then investigate the impact of imperfect source error labels on error-aware translation performance. Simulation experiments reveal that modest translation improvements are to be gained with this approach even when the source error labels are noisy.
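
The locally-derived penalty idea might look like this toy feature function, which down-weights phrase pairs whose source span overlaps a flagged ASR error (names and values are illustrative only):

```python
def error_penalty(phrase_span, error_positions, penalty=2.0):
    """Penalize phrase pairs whose source span overlaps a flagged ASR error,
    discouraging the decoder from letting the error influence its context."""
    start, end = phrase_span                      # source span covered by the phrase
    overlaps = any(start <= pos < end for pos in error_positions)
    return -penalty if overlaps else 0.0

# Example: the word at source position 3 was flagged as an ASR error.
print(error_penalty((2, 5), error_positions={3}))   # -> -2.0
print(error_penalty((0, 2), error_positions={3}))   # ->  0.0
```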

2012

pdf bib
Active error detection and resolution for speech-to-speech translation
Rohit Prasad | Rohit Kumar | Sankaranarayanan Ananthakrishnan | Wei Chen | Sanjika Hewavitharana | Matthew Roy | Frederick Choi | Aaron Challenner | Enoch Kan | Arvid Neelakantan | Prem Natarajan
Proceedings of the 9th International Workshop on Spoken Language Translation: Papers

We describe a novel two-way speech-to-speech (S2S) translation system that actively detects a wide variety of common error types and resolves them through user-friendly dialog with the user(s). We present algorithms for detecting out-of-vocabulary (OOV) named entities and terms, sense ambiguities, homophones, idioms, ill-formed input, etc. and discuss novel, interactive strategies for recovering from such errors. We also describe our approach for prioritizing different error types and an extensible architecture for implementing these decisions. We demonstrate the efficacy of our system by presenting analysis on live interactions in the English-to-Iraqi Arabic direction that are designed to invoke different error types for spoken language translation. Our analysis shows that the system can successfully resolve 47% of the errors, resulting in a dramatic improvement in the transfer of problematic concepts.

2009

pdf bib
Understanding Mental States in Natural Language
Wei Chen
Proceedings of the Eight International Conference on Computational Semantics

2008

pdf bib
Dimensions of Subjectivity in Natural Language
Wei Chen
Proceedings of ACL-08: HLT, Short Papers