Zhao Jun


2021

Topic Knowledge Acquisition and Utilization for Machine Reading Comprehension in Social Media Domain
Tian Zhixing | Zhang Yuanzhe | Liu Kang | Zhao Jun
Proceedings of the 20th Chinese National Conference on Computational Linguistics

“In this paper, we focus on machine reading comprehension in social media. In this domain, one normally posts a message on the assumption that the readers have specific background knowledge. Therefore, those messages are usually short and lacking in background information, which differs from text in other domains, and it is thus difficult for a machine to understand the messages comprehensively. Fortunately, a key nature of social media is clustering: a group of people tend to express their opinions or report news around one topic. Having realized this, we propose a novel method that utilizes the topic knowledge implied by the clustered messages to aid the comprehension of those short messages. Experiments on the TweetQA dataset demonstrate the effectiveness of our method.”
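
As an illustration of the general idea only (a hypothetical sketch, not the authors' implementation), the snippet below acquires "topic knowledge" by clustering a tweet collection with TF-IDF and k-means, then attaches the most similar same-topic tweets to a short message as background context before it would be handed to an off-the-shelf MRC model. The corpus, the cluster count, and the helper `topic_context` are assumptions.

```python
# Hypothetical sketch: cluster tweets into topics, then enrich a short message
# with same-topic tweets before passing it to any MRC model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

tweets = [
    "NASA confirms water on the sunlit surface of the Moon",
    "Moon water discovery could help future lunar missions",
    "The finals tonight were incredible, what a comeback",
    "Best game of the season, that last quarter was unreal",
]

# Step 1: topic knowledge acquisition -- cluster the message collection.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

def topic_context(message: str, top_k: int = 2) -> str:
    """Return the most similar same-topic tweets as background context."""
    vec = vectorizer.transform([message])
    cluster = kmeans.predict(vec)[0]
    same_topic = [t for t, c in zip(tweets, kmeans.labels_) if c == cluster]
    sims = cosine_similarity(vec, vectorizer.transform(same_topic))[0]
    ranked = [t for _, t in sorted(zip(sims, same_topic), reverse=True)]
    return " ".join(ranked[:top_k])

# Step 2: topic knowledge utilization -- prepend the context so the MRC model
# sees richer background information than the short message alone provides.
message = "Scientists found water on the Moon!"
enriched_passage = topic_context(message) + " " + message
print(enriched_passage)
```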

Multi-Strategy Knowledge Distillation Based Teacher-Student Framework for Machine Reading Comprehension
Yu Xiaoyan | Liu Qingbin | He Shizhu | Liu Kang | Liu Shengping | Zhao Jun | Zhou Yongbin
Proceedings of the 20th Chinese National Conference on Computational Linguistics

“Irrelevant information in documents poses a great challenge for machine reading comprehension (MRC). To deal with this challenge, current MRC models generally fall into two separate parts: evidence extraction and answer prediction, where the former extracts the key evidence corresponding to the question and the latter predicts the answer based on those sentences. However, such pipeline paradigms tend to accumulate errors, i.e., extracting incorrect evidence results in predicting the wrong answer. To address this problem, we propose a Multi-Strategy Knowledge Distillation based Teacher-Student framework (MSKDTS) for machine reading comprehension. In our approach, we first take the evidence and the document, respectively, as the input reference information to build a teacher model and a student model. Then the multi-strategy knowledge distillation method transfers knowledge from the teacher model to the student model at both the feature level and the prediction level. Therefore, in the testing phase, the enhanced student model can predict answers similarly to the teacher model without being aware of which sentences in the document constitute the corresponding evidence. Experimental results on the ReCO dataset demonstrate the effectiveness of our approach, and further ablation studies prove the effectiveness of both knowledge distillation strategies.”
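
The following is a minimal PyTorch sketch, under assumed shapes and hyperparameters rather than the paper's released code, of the two distillation strategies the abstract describes: a feature-level loss that pulls the student's hidden states toward the teacher's, and a prediction-level loss that matches temperature-softened answer distributions. The temperature and the weighting `alpha` are placeholders.

```python
# Sketch of feature-level and prediction-level distillation losses for a
# teacher (sees gold evidence) and a student (sees the full document).
import torch
import torch.nn.functional as F

def feature_level_loss(student_hidden: torch.Tensor,
                       teacher_hidden: torch.Tensor) -> torch.Tensor:
    # Mean-squared error between the two encoders' hidden representations.
    return F.mse_loss(student_hidden, teacher_hidden)

def prediction_level_loss(student_logits: torch.Tensor,
                          teacher_logits: torch.Tensor,
                          temperature: float = 2.0) -> torch.Tensor:
    # KL divergence between temperature-softened answer distributions.
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

# Toy tensors standing in for encoder outputs and answer logits.
student_hidden, teacher_hidden = torch.randn(8, 128, 768), torch.randn(8, 128, 768)
student_logits, teacher_logits = torch.randn(8, 4), torch.randn(8, 4)

alpha = 0.5  # assumed weighting between the two strategies
loss = alpha * feature_level_loss(student_hidden, teacher_hidden.detach()) \
     + (1 - alpha) * prediction_level_loss(student_logits, teacher_logits.detach())
print(loss.item())  # only the student would be updated with this objective
```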

A Robustly Optimized BERT Pre-training Approach with Post-training
Liu Zhuang | Lin Wayne | Shi Ya | Zhao Jun
Proceedings of the 20th Chinese National Conference on Computational Linguistics

“In this paper, we present a ‘pre-training’ + ‘post-training’ + ‘fine-tuning’ three-stage paradigm, a supplementary framework for the standard ‘pre-training’ + ‘fine-tuning’ language model approach. Furthermore, based on this three-stage paradigm, we present a language model named PPBERT. Compared with the original BERT architecture, which follows the standard two-stage paradigm, we do not fine-tune the pre-trained model directly but rather post-train it on a domain- or task-related dataset first, which helps to better incorporate task-aware and domain-aware knowledge into the pre-trained model and to reduce bias from the training dataset. Extensive experimental results indicate that the proposed model improves the performance of the baselines on 24 NLP tasks, including eight GLUE benchmarks, eight SuperGLUE benchmarks, and six extractive question answering benchmarks. More remarkably, our proposed model is more flexible and pluggable: the post-training approach can be plugged into other BERT-based PLMs. Extensive ablations further validate its effectiveness and its state-of-the-art (SOTA) performance. The open-source code, pre-trained models, and post-trained models are publicly available.”
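
Below is a minimal sketch of the intermediate post-training stage, assuming the HuggingFace Transformers API and a placeholder domain corpus rather than the released PPBERT code: masked-language-model training of an already pre-trained BERT is continued on domain- or task-related text before the usual fine-tuning step.

```python
# Sketch of "post-training": continue MLM training of a pre-trained BERT on a
# (placeholder) domain corpus; fine-tuning on the target task follows later.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

domain_corpus = [
    "Post-training exposes the model to domain-specific text.",
    "The fine-tuning stage then adapts it to the downstream task.",
]
encodings = tokenizer(domain_corpus, truncation=True, padding=True, return_tensors="pt")

# Randomly mask tokens for the masked-language-model objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
batch = collator([{"input_ids": ids} for ids in encodings["input_ids"]])

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
outputs = model(input_ids=batch["input_ids"],
                attention_mask=encodings["attention_mask"],
                labels=batch["labels"])
outputs.loss.backward()  # one illustrative post-training step
optimizer.step()
# The post-trained checkpoint would then be fine-tuned on the downstream task
# exactly as in the standard two-stage paradigm.
```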