Zhong Qian
2022

Document-level Event Factuality Identification via Machine Reading Comprehension Frameworks with Transfer Learning
Zhong Qian | Heng Zhang | Peifeng Li | Qiaoming Zhu | Guodong Zhou
Proceedings of the 29th International Conference on Computational Linguistics

Document-level Event Factuality Identification (DEFI) predicts the factuality of a specific event from the document in which the event is described, a fundamental and crucial task in Natural Language Processing (NLP). However, most previous studies considered only the sentence-level task and did not exploit document-level knowledge. Moreover, they modelled DEFI as a typical text classification task that depends heavily on annotated information and is limited to the task-specific corpus, which leads to data scarcity. To tackle these issues, we propose a new framework that formulates DEFI as Machine Reading Comprehension (MRC) tasks in both Span-Extraction (Ext) and Multiple-Choice (Mch) forms. Our model does not employ any additional explicit annotated information, and utilizes Transfer Learning (TL) to extract knowledge from large-scale general-purpose MRC corpora for cross-domain data augmentation. Empirical results on the DLEFM corpus demonstrate that the proposed model outperforms several state-of-the-art baselines.
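
To make the Multiple-Choice MRC framing concrete, the sketch below scores a small set of verbalized factuality values against a (question, document) pair. It is a minimal sketch, not the authors' code: the backbone model, the question template, and the wording of the factuality choices are all illustrative assumptions.

```python
# Minimal sketch of DEFI as a Multiple-Choice MRC task (illustrative, not the authors' code).
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

MODEL_NAME = "bert-base-uncased"  # assumed backbone
FACTUALITY_CHOICES = [            # assumed verbalizations of factuality values
    "certainly positive", "certainly negative",
    "possibly positive", "possibly negative", "underspecified",
]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMultipleChoice.from_pretrained(MODEL_NAME)

def predict_factuality(document: str, event: str) -> str:
    """Score each candidate factuality value against the (question, document) pair."""
    question = f"What is the factuality of the event '{event}'?"  # assumed template
    # Build one (question + choice, document) pair per candidate answer.
    first = [f"{question} {choice}" for choice in FACTUALITY_CHOICES]
    second = [document] * len(FACTUALITY_CHOICES)
    enc = tokenizer(first, second, truncation=True, padding=True, return_tensors="pt")
    # The multiple-choice head expects inputs of shape (batch, num_choices, seq_len).
    enc = {k: v.unsqueeze(0) for k, v in enc.items()}
    with torch.no_grad():
        logits = model(**enc).logits  # shape: (1, num_choices)
    return FACTUALITY_CHOICES[logits.argmax(dim=-1).item()]
```

In the Span-Extraction variant described in the abstract, the same (question, document) encoding would instead feed a start/end span head that extracts a factuality-bearing cue from the document.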

2019

Document-Level Event Factuality Identification via Adversarial Neural Network
Zhong Qian | Peifeng Li | Qiaoming Zhu | Guodong Zhou
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Document-level event factuality identification is an important subtask of event factuality and is crucial for discourse understanding in Natural Language Processing (NLP). Previous studies mainly suffer from the scarcity of suitable corpora and effective methods. To address these two issues, we first construct a corpus annotated with both document- and sentence-level event factuality information for both English and Chinese texts. We then present an LSTM neural network based on adversarial training with both intra- and inter-sequence attention to identify document-level event factuality. Experimental results show that our neural network model outperforms various baselines on the constructed corpus.
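
To illustrate the adversarial-training idea, the sketch below pairs a BiLSTM with a simple intra-sequence attention layer and adds an FGM-style gradient-based perturbation to the word embeddings during training. It is a minimal sketch under assumed hyperparameters, not the released model; the inter-sequence attention and corpus-specific details are omitted.

```python
# Minimal sketch of adversarial training for a BiLSTM classifier with
# intra-sequence attention (illustrative, not the released model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnBiLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128, num_labels=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # intra-sequence attention scores
        self.out = nn.Linear(2 * hidden, num_labels)

    def forward(self, token_ids=None, embeds=None):
        x = self.emb(token_ids) if embeds is None else embeds
        h, _ = self.lstm(x)                          # (batch, seq, 2*hidden)
        alpha = torch.softmax(self.attn(h), dim=1)   # attention over positions
        pooled = (alpha * h).sum(dim=1)              # attention-weighted pooling
        return self.out(pooled)

def adversarial_step(model, optimizer, token_ids, labels, epsilon=1.0):
    """One training step on clean plus adversarially perturbed embeddings."""
    optimizer.zero_grad()
    embeds = model.emb(token_ids)
    embeds.retain_grad()
    clean_loss = F.cross_entropy(model(embeds=embeds), labels)
    clean_loss.backward(retain_graph=True)           # grads for params and embeddings
    # FGM-style perturbation along the embedding gradient direction (epsilon assumed).
    perturb = epsilon * embeds.grad / (embeds.grad.norm() + 1e-8)
    adv_loss = F.cross_entropy(model(embeds=embeds + perturb), labels)
    adv_loss.backward()                              # accumulate adversarial gradients
    optimizer.step()
    return clean_loss.item(), adv_loss.item()
```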

2016

Speculation and Negation Scope Detection via Convolutional Neural Networks
Zhong Qian | Peifeng Li | Qiaoming Zhu | Guodong Zhou | Zhunchen Luo | Wei Luo
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing