Chen Zhang


2021

Why Machine Reading Comprehension Models Learn Shortcuts?
Yuxuan Lai | Chen Zhang | Yansong Feng | Quzhe Huang | Dongyan Zhao
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Exploiting Position Bias for Robust Aspect Sentiment Classification
Fang Ma | Chen Zhang | Dawei Song
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Extract, Integrate, Compete: Towards Verification Style Reading Comprehension
Chen Zhang | Yuxuan Lai | Yansong Feng | Dongyan Zhao
Findings of the Association for Computational Linguistics: EMNLP 2021

In this paper, we present a new verification-style reading comprehension dataset named VGaokao, built from the Chinese language tests of Gaokao. Unlike existing efforts, the dataset was originally designed to evaluate native speakers, and thus requires more advanced language understanding skills. To address the challenges in VGaokao, we propose a novel Extract-Integrate-Compete approach, which iteratively selects complementary evidence with a novel query-updating mechanism, adaptively distills supportive evidence, and then applies a pairwise competition to push models to learn the subtle differences among similar text pieces. Experiments show that our methods outperform various baselines on VGaokao with retrieved complementary evidence, while offering efficiency and explainability. Our dataset and code are released for further research.
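
As a rough illustration of the iterative selection loop described above, the sketch below picks evidence sentences one at a time and updates the query after each pick. The embeddings, the number of steps, and the residual-style query update (subtracting the selected sentence's embedding) are illustrative assumptions, not the paper's exact mechanism.

```python
# Hedged sketch of iterative complementary-evidence selection with query updating.
# The subtraction-based query update is an assumption for illustration only.
import torch
import torch.nn.functional as F

def select_evidence(query, sentences, steps=2):
    # query: (dim,) embedded query; sentences: (num_sents, dim) embedded candidates
    picked, q = [], query.clone()
    for _ in range(steps):
        sims = F.cosine_similarity(q.unsqueeze(0), sentences, dim=-1)
        for i in picked:          # never re-select a sentence
            sims[i] = -1.0
        best = int(sims.argmax())
        picked.append(best)
        q = q - sentences[best]   # steer the query toward still-uncovered content
    return picked

evidence_ids = select_evidence(torch.randn(128), torch.randn(20, 128))
```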

DynaEval: Unifying Turn and Dialogue Level Evaluation
Chen Zhang | Yiming Chen | Luis Fernando D’Haro | Yan Zhang | Thomas Friedrichs | Grandee Lee | Haizhou Li
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

A dialogue is essentially a multi-turn interaction among interlocutors, and effective evaluation metrics should reflect the dynamics of such interaction. Existing automatic metrics focus heavily on turn-level quality while ignoring these dynamics. To this end, we propose DynaEval, a unified automatic evaluation framework that not only performs turn-level evaluation but also holistically considers the quality of the entire dialogue. In DynaEval, a graph convolutional network (GCN) is adopted to model a dialogue in its totality, where graph nodes denote individual utterances and edges represent dependencies between pairs of utterances. A contrastive loss is then applied to distinguish well-formed dialogues from carefully constructed negative samples. Experiments show that DynaEval significantly outperforms the state-of-the-art dialogue coherence model and correlates strongly with human judgements across multiple dialogue evaluation aspects at both the turn and dialogue level.
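
The architecture described above can be pictured with a minimal sketch: utterances are graph nodes, one GCN layer propagates information along utterance dependencies, and a margin-based contrastive loss separates a well-formed dialogue from a shuffled negative copy. The layer sizes, the adjacency scheme (adjacent utterances connected), and the negative-sampling strategy here are illustrative assumptions, not DynaEval's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DialogueGCN(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.gcn = nn.Linear(dim, dim)   # one graph-convolution layer's weight
        self.scorer = nn.Linear(dim, 1)  # pooled dialogue state -> scalar score

    def forward(self, utt_emb, adj):
        # utt_emb: (num_utts, dim) utterance embeddings from a sentence encoder
        # adj: (num_utts, num_utts) row-normalized adjacency over utterances
        h = F.relu(self.gcn(adj @ utt_emb))  # aggregate neighbors, then transform
        return self.scorer(h.mean(dim=0))    # dialogue-level score

def contrastive_loss(pos_score, neg_score, margin=1.0):
    # Push the well-formed dialogue above the corrupted one by a margin.
    return F.relu(margin - pos_score + neg_score).mean()

# Toy usage: a 4-utterance dialogue vs. a shuffled (negative) copy.
n, dim = 4, 256
model = DialogueGCN(dim)
adj = torch.eye(n) + torch.diag(torch.ones(n - 1), 1) + torch.diag(torch.ones(n - 1), -1)
adj = adj / adj.sum(dim=1, keepdim=True)  # connect adjacent utterances, then normalize
utts = torch.randn(n, dim)
loss = contrastive_loss(model(utts, adj), model(utts[torch.randperm(n)], adj))
```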

Revisiting Self-training for Few-shot Learning of Language Model
Yiming Chen | Yan Zhang | Chen Zhang | Grandee Lee | Ran Cheng | Haizhou Li
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

As unlabeled data carry rich task-relevant information, they have proven useful for few-shot learning of language models. The question is how to make effective use of such data. In this work, we revisit the self-training technique for language model fine-tuning and present a state-of-the-art prompt-based few-shot learner, SFLM. Given two views of a text sample produced by weak and strong augmentation, SFLM generates a pseudo label on the weakly augmented version and is then fine-tuned to predict the same pseudo label on the strongly augmented version. This simple approach outperforms other state-of-the-art supervised and semi-supervised counterparts on six sentence-classification and six sentence-pair-classification benchmark tasks. In addition, SFLM relies on only a small amount of in-domain unlabeled data. We conduct a comprehensive analysis to demonstrate the robustness of our proposed approach under various settings, including augmentation techniques, model scale, and few-shot knowledge transfer across tasks.
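
A minimal sketch of the weak/strong self-training objective described above, in the spirit of FixMatch: a pseudo label is taken from the weakly augmented view, and the model is trained to reproduce it on the strongly augmented view. The confidence threshold below is an assumed detail for illustration, not necessarily SFLM's exact recipe.

```python
import torch
import torch.nn.functional as F

def self_training_loss(logits_weak, logits_strong, threshold=0.9):
    # Pseudo label comes from the weakly augmented view; detach so no
    # gradient flows through the labeling step.
    probs = F.softmax(logits_weak.detach(), dim=-1)
    confidence, pseudo = probs.max(dim=-1)
    mask = (confidence >= threshold).float()  # keep confident examples only (assumed detail)
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (mask * loss).mean()

# Toy usage: a batch of 8 examples over 2 classes.
weak = torch.randn(8, 2)
strong = torch.randn(8, 2, requires_grad=True)
self_training_loss(weak, strong).backward()
```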

2020

Towards Persona-Based Empathetic Conversational Models
Peixiang Zhong | Chen Zhang | Hao Wang | Yong Liu | Chunyan Miao
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Empathetic conversational models have been shown to improve user satisfaction and task outcomes in numerous domains. In psychology, persona has been shown to be highly correlated with personality, which in turn influences empathy. In addition, our empirical analysis suggests that persona plays an important role in empathetic conversations. To this end, we propose a new task towards persona-based empathetic conversations and present the first empirical study on the impact of persona on empathetic responding. Specifically, we first present a novel large-scale multi-domain dataset for persona-based empathetic conversations. We then propose CoBERT, an efficient BERT-based response selection model that achieves state-of-the-art performance on our dataset. Finally, we conduct extensive experiments to investigate the impact of persona on empathetic responding. Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic conversations than on non-empathetic ones, establishing an empirical link between persona and empathy in human conversations.
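
For intuition, response selection can be sketched as ranking candidate responses against the dialogue context. The plain bi-encoder with dot-product scoring below is a deliberate simplification; CoBERT's actual interaction layers are richer than this.

```python
# Hedged sketch of BERT-based response selection: embed context and candidates
# (e.g., with any sentence encoder), then rank candidates by relevance score.
import torch

def rank_responses(context_emb, candidate_embs):
    # context_emb: (dim,); candidate_embs: (num_candidates, dim)
    scores = candidate_embs @ context_emb     # dot-product relevance
    return scores.argsort(descending=True)    # best candidate first

order = rank_responses(torch.randn(256), torch.randn(10, 256))
print(order[0].item())  # index of the top-ranked response
```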

A Multi-task Learning Framework for Opinion Triplet Extraction
Chen Zhang | Qiuchi Li | Dawei Song | Benyou Wang
Findings of the Association for Computational Linguistics: EMNLP 2020

State-of-the-art Aspect-based Sentiment Analysis (ABSA) approaches are mainly based on either detecting aspect terms and their corresponding sentiment polarities, or co-extracting aspect and opinion terms. However, the extraction of aspect-sentiment pairs lacks opinion terms as a reference, while co-extraction of aspect and opinion terms does not yield meaningful pairs without determining their sentiment dependencies. To address this issue, we present a novel view of ABSA as an opinion triplet extraction task and propose a multi-task learning framework that jointly extracts aspect terms and opinion terms while simultaneously parsing the sentiment dependencies between them with a biaffine scorer. At the inference stage, triplet extraction is facilitated by a triplet decoding method based on the above outputs. We evaluate the proposed framework on four SemEval benchmarks for ABSA. The results demonstrate that our approach significantly outperforms a range of strong baselines and state-of-the-art approaches.
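
The biaffine scorer mentioned above can be sketched as follows: every (aspect word, opinion word) pair receives a score for each sentiment label via a label-specific bilinear form. The hidden size and the label inventory below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiaffineScorer(nn.Module):
    # Scores every (aspect word, opinion word) pair for each sentiment label.
    def __init__(self, dim=128, num_labels=4):  # labels assumed: none/pos/neg/neu
        super().__init__()
        self.W = nn.Parameter(torch.randn(num_labels, dim + 1, dim + 1))

    def forward(self, aspect_h, opinion_h):
        # aspect_h, opinion_h: (seq_len, dim) token representations
        ones = torch.ones(aspect_h.size(0), 1)
        a = torch.cat([aspect_h, ones], dim=-1)  # append 1 to fold in bias terms
        o = torch.cat([opinion_h, ones], dim=-1)
        # score[l, i, j] = a_i^T W_l o_j
        return torch.einsum("id,ldk,jk->lij", a, self.W, o)

scores = BiaffineScorer()(torch.randn(10, 128), torch.randn(10, 128))
print(scores.shape)  # torch.Size([4, 10, 10])
```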

LadaBERT: Lightweight Adaptation of BERT through Hybrid Model Compression
Yihuan Mao | Yujing Wang | Chufan Wu | Chen Zhang | Yang Wang | Quanlu Zhang | Yaming Yang | Yunhai Tong | Jing Bai
Proceedings of the 28th International Conference on Computational Linguistics

BERT is a cutting-edge language representation model pre-trained on a large corpus that achieves superior performance on various natural language understanding tasks. However, a major issue blocking the application of BERT to online services is that it is memory-intensive and leads to unsatisfactory request latency, raising the need for model compression. Existing solutions leverage the knowledge distillation framework to learn a smaller model that imitates the behavior of BERT. However, the knowledge distillation training procedure is itself expensive, as it requires sufficient training data to imitate the teacher model. In this paper, we address this issue with a tailored solution named LadaBERT (Lightweight adaptation of BERT through hybrid model compression), which combines the advantages of different model compression methods, including weight pruning, matrix factorization, and knowledge distillation. LadaBERT achieves state-of-the-art accuracy on various public datasets while reducing training overhead by an order of magnitude.
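
Of the three compression techniques LadaBERT combines, matrix factorization is the easiest to sketch: a dense weight matrix is replaced by two thin factors from a truncated SVD. The rank and layer shapes below are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

def factorize_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    # Replace W (out x in) with two thin factors from a truncated SVD,
    # shrinking the weight count from out*in to rank*(out+in).
    W = layer.weight.data                               # (out, in)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    first = nn.Linear(W.size(1), rank, bias=False)
    second = nn.Linear(rank, W.size(0), bias=layer.bias is not None)
    first.weight.data = S[:rank].sqrt().unsqueeze(1) * Vh[:rank]  # (rank, in)
    second.weight.data = U[:, :rank] * S[:rank].sqrt()            # (out, rank)
    if layer.bias is not None:
        second.bias.data = layer.bias.data
    return nn.Sequential(first, second)

dense = nn.Linear(768, 768)
compressed = factorize_linear(dense, rank=64)  # ~6x fewer weight parameters
```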

SimulSpeech: End-to-End Simultaneous Speech to Text Translation
Yi Ren | Jinglin Liu | Xu Tan | Chen Zhang | Tao Qin | Zhou Zhao | Tie-Yan Liu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

In this work, we develop SimulSpeech, an end-to-end simultaneous speech-to-text translation system that translates speech in a source language into text in a target language concurrently. SimulSpeech consists of a speech encoder, a speech segmenter, and a text decoder, where 1) the segmenter builds upon the encoder and leverages a connectionist temporal classification (CTC) loss to split the input streaming speech in real time, and 2) the encoder-decoder attention adopts a wait-k strategy for simultaneous translation. SimulSpeech is more challenging than previous cascaded systems (with simultaneous automatic speech recognition (ASR) and simultaneous neural machine translation (NMT)). We introduce two novel knowledge distillation methods to maintain performance: 1) attention-level knowledge distillation transfers knowledge from the product of the attention matrices of simultaneous NMT and ASR models to aid the training of the attention mechanism in SimulSpeech; 2) data-level knowledge distillation transfers knowledge from a full-sentence NMT model and also reduces the complexity of the data distribution to help optimize SimulSpeech. Experiments on the MuST-C English-Spanish and English-German spoken language translation datasets show that SimulSpeech achieves reasonable BLEU scores with lower delay than full-sentence end-to-end speech-to-text translation (without simultaneous translation), and outperforms a two-stage cascaded simultaneous translation model in terms of both BLEU scores and translation delay.
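
The wait-k strategy mentioned above is easy to illustrate: decoding step t may only attend to the first t + k source segments, so translation starts after k segments rather than after the full utterance. The sketch below builds such an attention mask; the segment granularity and the value of k are assumptions for illustration.

```python
import torch

def wait_k_mask(num_tgt: int, num_src: int, k: int) -> torch.Tensor:
    # Target step t may attend only to the first min(t + k, num_src)
    # source segments, so decoding starts after k segments arrive.
    mask = torch.zeros(num_tgt, num_src, dtype=torch.bool)
    for t in range(num_tgt):
        mask[t, : min(t + k, num_src)] = True
    return mask

print(wait_k_mask(4, 6, k=2).int())
# tensor([[1, 1, 0, 0, 0, 0],
#         [1, 1, 1, 0, 0, 0],
#         [1, 1, 1, 1, 0, 0],
#         [1, 1, 1, 1, 1, 0]])
```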

2019

Aspect-based Sentiment Classification with Aspect-specific Graph Convolutional Networks
Chen Zhang | Qiuchi Li | Dawei Song
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Due to their inherent capability to semantically align aspects with their context words, attention mechanisms and Convolutional Neural Networks (CNNs) are widely applied for aspect-based sentiment classification. However, these models lack a mechanism to account for relevant syntactic constraints and long-range word dependencies, and hence may mistakenly treat syntactically irrelevant context words as clues for judging aspect sentiment. To tackle this problem, we propose building a Graph Convolutional Network (GCN) over the dependency tree of a sentence to exploit syntactic information and word dependencies. On this basis, a novel aspect-specific sentiment classification framework is developed. Experiments on three benchmark collections illustrate that our proposed model is comparable in effectiveness to a range of state-of-the-art models, and further demonstrate that both syntactic information and long-range word dependencies are properly captured by the graph convolution structure.
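
A minimal sketch of the core idea: build an adjacency matrix from the sentence's dependency tree and run a graph convolution over token representations. The undirected edges, self-loops, and row normalization are common GCN conventions assumed here, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def dependency_adjacency(heads):
    # heads[i] = index of token i's syntactic head (-1 for the root).
    n = len(heads)
    adj = torch.eye(n)                    # self-loops
    for i, h in enumerate(heads):
        if h >= 0:
            adj[i, h] = adj[h, i] = 1.0   # undirected dependency edge
    return adj / adj.sum(dim=1, keepdim=True)  # row-normalize

def gcn_layer(x, adj, weight):
    # One graph convolution: aggregate neighbors, then transform.
    return F.relu(adj @ x @ weight)

# Toy sentence: "service was great", with "was" as the root.
heads = [1, -1, 1]
x = torch.randn(3, 64)                    # token representations
h = gcn_layer(x, dependency_adjacency(heads), torch.randn(64, 64))
```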

2016

I2RNTU at SemEval-2016 Task 4: Classifier Fusion for Polarity Classification in Twitter
Zhengchen Zhang | Chen Zhang | Fuxiang Wu | Dong-Yan Huang | Weisi Lin | Minghui Dong
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)

2010

Towards Conversation Entailment: An Empirical Investigation
Chen Zhang | Joyce Chai
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

2009

What do We Know about Conversation Participants: Experiments on Conversation Entailment
Chen Zhang | Joyce Chai
Proceedings of the SIGDIAL 2009 Conference

2006

Towards Conversational QA: Automatic Identification of Problematic Situations and User Intent
Joyce Y. Chai | Chen Zhang | Tyler Baldwin
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions