2023
基于预训练语言模型的端到端概念体系构建方法(End to End Taxonomy Construction Method with Pretrained Language Model)
Wang Siyi (思懿 王)
|
He Shizhu (世柱 何)
|
Liu Kang (康 刘)
|
Zhao Jun (军 赵)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics
“A taxonomy describes the hypernym-hyponym (is-a) relations between concepts and organizes them into a hierarchy; it is an important class of knowledge resource. This paper studies automatic taxonomy construction, aiming to organize a given set of concepts (a set of terms) into a tree-structured taxonomy (a concept tree) according to hypernymy relations. Traditional approaches decompose taxonomy construction into two independent subtasks: judging hypernym-hyponym relations between concepts, and building the concept hierarchy. Because the two subtasks lack information feedback, errors easily accumulate. In recent years, more and more work has used pretrained language models to obtain the semantic features of terms and to judge the semantic relations between them. Although this has achieved some success in taxonomy construction, such approaches only model the first subtask and still suffer from error accumulation. To address the error accumulation of pipeline methods and to effectively capture the semantics of terms and their relations, this paper proposes an end-to-end taxonomy construction method based on pretrained language models: on the one hand, it uses a pretrained language model to obtain semantic information about concepts and their hypernymy relations, as well as structural information about the partially built taxonomy; on the other hand, it uses reinforcement learning to model relation judgment and the generation of the complete hierarchy end to end. Experiments on WordNet datasets show that the proposed method achieves good results; under the same conditions, our F1 score is a 7.3% relative improvement over the best existing model.”
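A minimal sketch of the attachment loop behind this end-to-end formulation: terms are inserted one at a time and a scoring policy chooses a parent in the partially built tree. The `score_parent` heuristic below is a hypothetical stand-in for the paper's PLM-based, RL-trained policy.

```python
# Sketch only: greedy end-to-end taxonomy growth with a stand-in scorer.
from collections import defaultdict

def score_parent(candidate: str, term: str) -> float:
    # Hypothetical heuristic: prefer parents that appear as a suffix of the term
    # (e.g. "fruit tree" under "tree"). The paper instead scores the pair with a
    # pretrained language model plus features of the partial tree.
    overlap = len(set(candidate) & set(term)) / len(set(candidate) | set(term))
    return float(term.endswith(candidate)) + 0.1 * overlap

def build_taxonomy(root: str, terms: list[str]) -> dict[str, list[str]]:
    children = defaultdict(list)
    nodes = [root]
    for term in terms:  # end to end: each decision sees the tree built so far
        best = max(nodes, key=lambda parent: score_parent(parent, term))
        children[best].append(term)
        nodes.append(term)
    return dict(children)

if __name__ == "__main__":
    print(build_taxonomy("tree", ["fruit tree", "apple tree", "pine tree"]))
```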
预训练语言模型中的知识分析、萃取与增强(Knowledge Analysis, Extraction and Enhancement in Pre-trained Language Models)
Chen Yubo (玉博 陈)
|
Cao Pengfei (鹏飞 曹)
|
Wang Chenhao (晨皓 王)
|
Li Jiachun (嘉淳 李)
|
Liu Kang (康 刘)
|
Zhao Jun (军 赵)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 4: Tutorial Abstracts)
“In recent years, large-scale pretrained language models have made remarkable progress on knowledge-intensive natural language processing tasks. This seems to indicate that pretrained language models can spontaneously learn large amounts of knowledge from corpora and store it implicitly in their parameters. However, the mechanism behind this phenomenon remains largely unexplained: what knowledge do language models actually hold, how can this knowledge be extracted and used, and how can external knowledge compensate for the models' shortcomings? These questions all call for further exploration. In this tutorial, we focus on recent research progress in knowledge analysis, knowledge extraction, and knowledge enhancement for pretrained language models.”
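For illustration only (not part of the tutorial materials), cloze-style probing is one common way to analyze what factual knowledge a pretrained LM stores; the sketch below assumes the Hugging Face transformers library and the bert-base-uncased checkpoint.

```python
# Sketch only: probe a masked LM with a natural-language template and inspect
# its top fillers for the masked slot.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill("The capital of France is [MASK].", top_k=3):
    print(f"{prediction['token_str']:>10}  {prediction['score']:.3f}")
```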
2022
An Exploration of Prompt-Based Zero-Shot Relation Extraction Method
Zhao Jun
|
Hu Yuan
|
Xu Nuo
|
Gui Tao
|
Zhang Qi
|
Chen Yunwen
|
Gao Xiang
Proceedings of the 21st Chinese National Conference on Computational Linguistics
“Zero-shot relation extraction is an important method for dealing with newly emerging relations in the real world that lack labeled data. However, the mainstream two-tower zero-shot methods usually rely on large-scale and in-domain labeled data of predefined relations. In this work, we view zero-shot relation extraction as a semantic matching task optimized by prompt-tuning, which still maintains superior generalization performance when the labeled data of predefined relations are extremely scarce. To maximize the efficiency of data exploitation, instead of directly fine-tuning, we introduce a prompt-tuning technique to elicit the existing relational knowledge in pre-trained language models (PLMs). In addition, very few relation descriptions are exposed to the model during training, which we argue is the performance bottleneck of two-tower methods. To break through the bottleneck, we model the semantic interaction between relational instances and their descriptions directly during encoding. Experiment results on two academic datasets show that (1) our method outperforms the previous state-of-the-art method by a large margin with different samples of predefined relations; (2) this advantage will be further amplified in the low-resource scenario.”
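A minimal sketch of the matching formulation described here: each instance is paired with every candidate relation description, the pair is scored jointly, and the best-scoring relation is predicted. `match_score` and the prompt template are hypothetical stand-ins for the prompt-tuned PLM scorer.

```python
# Sketch only: zero-shot relation prediction as instance-description matching.
def build_prompt(sentence: str, head: str, tail: str, description: str) -> str:
    # Illustrative template; the paper uses its own prompt design.
    return f"{sentence} The relation between {head} and {tail} is: {description}"

def match_score(prompt: str) -> float:
    # Stand-in scorer: content-word overlap between the instance side and the
    # description side. A PLM cross-encoder would replace this.
    left, _, right = prompt.partition(" is: ")
    return float(len(set(left.lower().split()) & set(right.lower().split())))

def predict_relation(sentence, head, tail, relation_descriptions: dict[str, str]) -> str:
    scores = {rel: match_score(build_prompt(sentence, head, tail, desc))
              for rel, desc in relation_descriptions.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    descriptions = {
        "founded_by": "the person who founded the organization",
        "capital_of": "the city that is the capital of the country",
    }
    print(predict_relation("Alice founded Acme Corp in 1999.", "Acme Corp", "Alice", descriptions))
```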
Abstains from Prediction: Towards Robust Relation Extraction in Real World
Zhao Jun
|
Zhang Yongxin
|
Xu Nuo
|
Gui Tao
|
Zhang Qi
|
Chen Yunwen
|
Gao Xiang
Proceedings of the 21st Chinese National Conference on Computational Linguistics
“Supervised learning is a classic paradigm of relation extraction (RE). However, a well-performing model can still confidently make arbitrarily wrong predictions when exposed to samples of unseen relations. In this work, we propose a relation extraction method with a rejection option to improve robustness to unseen relations. To enable the classifier to reject unseen relations, we introduce contrastive learning techniques and carefully design a set of class-preserving transformations to improve the discriminability between known and unseen relations. Based on the learned representation, inputs of unseen relations are assigned a low confidence score and rejected. Off-the-shelf open relation extraction (OpenRE) methods can be adopted to discover the potential relations in these rejected inputs. In addition, we find that the rejection can be further improved via readily available distantly supervised data. Experiments on two public datasets prove the effectiveness of our method in capturing discriminative representations for unseen relation rejection.”
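A minimal sketch of the abstention step: a classifier over known relations rejects an input whose confidence falls below a threshold, and rejected inputs can be routed to an OpenRE method. The logits and threshold are illustrative; the paper derives confidence from contrastively learned representations.

```python
# Sketch only: confidence-based rejection ("abstains from prediction").
import math

KNOWN_RELATIONS = ["founded_by", "capital_of", "born_in"]
REJECT_THRESHOLD = 0.7  # hypothetical value; tuned on held-out data in practice

def softmax(logits):
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_or_abstain(logits):
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < REJECT_THRESHOLD:
        return None  # abstain: likely an unseen relation, route to OpenRE
    return KNOWN_RELATIONS[best]

print(predict_or_abstain([4.0, 0.5, 0.2]))   # confident -> "founded_by"
print(predict_or_abstain([1.1, 1.0, 0.9]))   # low margin -> None (abstain)
```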
Can We Really Trust Explanations? Evaluating the Stability of Feature Attribution Explanation Methods via Adversarial Attack
Yang Zhao
|
Zhang Yuanzhe
|
Jiang Zhongtao
|
Ju Yiming
|
Zhao Jun
|
Liu Kang
Proceedings of the 21st Chinese National Conference on Computational Linguistics
“Explanations can increase the transparency of neural networks and make them more trustworthy. However, can we really trust explanations generated by the existing explanation methods? If the explanation methods are not stable enough, the credibility of the explanation will be greatly reduced. Previous studies seldom considered such an important issue. To this end, this paper proposes a new evaluation frame to evaluate the stability of current typical feature attribution explanation methods via textual adversarial attack. Our frame could generate adversarial examples with similar textual semantics. Such adversarial examples will make the original models have the same outputs, but make most current explanation methods deduce completely different explanations. Under this frame, we test five classical explanation methods and show their performance on several stability-related metrics. Experimental results show our evaluation is effective and could reveal the stability performance of existing explanation methods.”
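A minimal sketch of the evaluation idea: compare attribution maps for an original input and a semantics-preserving adversarial input that leaves the prediction unchanged; a large change signals instability. The attribution values and the overlap metric below are illustrative, not the paper's exact metrics.

```python
# Sketch only: measure how much the top attributed features change under attack.
def topk_overlap(attr_a: dict[str, float], attr_b: dict[str, float], k: int = 3) -> float:
    top_a = set(sorted(attr_a, key=attr_a.get, reverse=True)[:k])
    top_b = set(sorted(attr_b, key=attr_b.get, reverse=True)[:k])
    return len(top_a & top_b) / k  # 1.0 = identical top-k features, 0.0 = disjoint

attr_original    = {"terrible": 0.9, "movie": 0.2, "plot": 0.4, "acting": 0.1}
attr_adversarial = {"awful":    0.1, "movie": 0.8, "plot": 0.2, "acting": 0.7}
print(f"top-3 overlap: {topk_overlap(attr_original, attr_adversarial):.2f}")
```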
2021
Topic Knowledge Acquisition and Utilization for Machine Reading Comprehension in Social Media Domain
Tian Zhixing
|
Zhang Yuanzhe
|
Liu Kang
|
Zhao Jun
Proceedings of the 20th Chinese National Conference on Computational Linguistics
“In this paper, we focus on machine reading comprehension in social media. In this domain, one normally posts a message on the assumption that the readers have specific background knowledge. Therefore, those messages are usually short and lacking in background information, which is different from text in other domains. Thus it is difficult for a machine to understand the messages comprehensively. Fortunately, a key nature of social media is clustering: a group of people tend to express their opinions or report news around one topic. Having realized this, we propose a novel method that utilizes the topic knowledge implied by the clustered messages to aid in the comprehension of those short messages. Experiments on the TweetQA dataset demonstrate the effectiveness of our method.”
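A minimal sketch of the clustering intuition: other messages on the same topic serve as background context for a short, underspecified passage. The hashtag-based grouping and example data are illustrative, not the paper's topic acquisition method.

```python
# Sketch only: use same-topic messages as background context for a short passage.
from collections import defaultdict

def group_by_hashtag(messages: list[str]) -> dict[str, list[str]]:
    topics = defaultdict(list)
    for msg in messages:
        for token in msg.split():
            if token.startswith("#"):
                topics[token.lower()].append(msg)
    return dict(topics)

def expand_context(passage: str, topics: dict[str, list[str]]) -> str:
    # Append other messages sharing a hashtag with the passage as extra context.
    tags = {t.lower() for t in passage.split() if t.startswith("#")}
    background = [m for tag in tags for m in topics.get(tag, []) if m != passage]
    return passage + " [BACKGROUND] " + " ".join(background)

messages = [
    "Huge crowd at the stadium tonight #finals",
    "Refs missed an obvious call #finals",
    "My cat learned a new trick #pets",
]
print(expand_context("Huge crowd at the stadium tonight #finals", group_by_hashtag(messages)))
```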
Multi-Strategy Knowledge Distillation Based Teacher-Student Framework for Machine Reading Comprehension
Yu Xiaoyan
|
Liu Qingbin
|
He Shizhu
|
Liu Kang
|
Liu Shengping
|
Zhao Jun
|
Zhou Yongbin
Proceedings of the 20th Chinese National Conference on Computational Linguistics
“The irrelevant information in documents poses a great challenge for machine reading comprehension (MRC). To deal with such a challenge, current MRC models generally fall into two separate parts: evidence extraction and answer prediction, where the former extracts the key evidence corresponding to the question and the latter predicts the answer based on those sentences. However, such pipeline paradigms tend to accumulate errors, i.e., extracting incorrect evidence results in predicting the wrong answer. In order to address this problem, we propose a Multi-Strategy Knowledge Distillation based Teacher-Student framework (MSKDTS) for machine reading comprehension. In our approach, we first take the evidence and the document, respectively, as the input reference information to build a teacher model and a student model. Then the multi-strategy knowledge distillation method transfers knowledge from the teacher model to the student model at both the feature level and the prediction level. Therefore, in the testing phase, the enhanced student model can predict answers similarly to the teacher model without being aware of which sentence is the corresponding evidence in the document. Experimental results on the ReCO dataset demonstrate the effectiveness of our approach, and further ablation studies prove the effectiveness of both knowledge distillation strategies.”
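A minimal sketch of the two distillation strategies named here, feature-level matching and prediction-level matching of softened outputs, assuming PyTorch; the tensors and temperature are illustrative, and in the paper this kind of loss couples an evidence-fed teacher with a document-fed student.

```python
# Sketch only: combine feature-level and prediction-level distillation losses.
import torch
import torch.nn.functional as F

def distillation_loss(student_feat, teacher_feat, student_logits, teacher_logits,
                      temperature: float = 2.0, alpha: float = 0.5):
    # Feature level: pull student hidden states toward the teacher's.
    feature_loss = F.mse_loss(student_feat, teacher_feat)
    # Prediction level: match softened output distributions.
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * feature_loss + (1 - alpha) * kd_loss

student_feat, teacher_feat = torch.randn(4, 768), torch.randn(4, 768)
student_logits, teacher_logits = torch.randn(4, 2), torch.randn(4, 2)
print(distillation_loss(student_feat, teacher_feat, student_logits, teacher_logits))
```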
A Robustly Optimized BERT Pre-training Approach with Post-training
Liu Zhuang
|
Lin Wayne
|
Shi Ya
|
Zhao Jun
Proceedings of the 20th Chinese National Conference on Computational Linguistics
“In this paper, we present a ‘pre-training’ + ‘post-training’ + ‘fine-tuning’ three-stage paradigm, a supplementary framework to the standard ‘pre-training’ + ‘fine-tuning’ language model approach. Based on this three-stage paradigm, we present a language model named PPBERT. Compared with the original BERT architecture, which follows the standard two-stage paradigm, we do not fine-tune the pre-trained model directly; rather, we first post-train it on a domain- or task-related dataset, which helps incorporate task-aware and domain-aware knowledge into the pre-trained model and also reduces bias from the training dataset. Extensive experimental results indicate that the proposed model improves over the baselines on 24 NLP tasks, including eight GLUE benchmarks, eight SuperGLUE benchmarks, and six extractive question answering benchmarks. More remarkably, the proposed model is flexible and pluggable: the post-training approach can be plugged into other PLMs that are based on BERT. Extensive ablations further validate its effectiveness and state-of-the-art (SOTA) performance. The open-source code, pre-trained models, and post-trained models are publicly available.”
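A minimal stub-level sketch of the three-stage ordering described here (pre-train on general text, post-train on domain- or task-related text with the same objective, then fine-tune on labeled data); the functions are placeholders, not PPBERT's actual training code.

```python
# Sketch only: stage ordering of the pre-train / post-train / fine-tune paradigm.
def pretrain(model: dict, general_corpus: list[str]) -> dict:
    # Stage 1: self-supervised training on a large general corpus.
    return {**model, "stage": "pre-trained", "seen": len(general_corpus)}

def post_train(model: dict, domain_corpus: list[str]) -> dict:
    # Stage 2: same objective, but on domain/task-related text, injecting
    # domain awareness before any labeled data is used.
    return {**model, "stage": "post-trained", "seen": model["seen"] + len(domain_corpus)}

def fine_tune(model: dict, labeled_data: list[tuple[str, int]]) -> dict:
    # Stage 3: supervised fine-tuning on the downstream task.
    return {**model, "stage": "fine-tuned", "labeled": len(labeled_data)}

model = {"name": "PPBERT-sketch", "seen": 0}
model = pretrain(model, ["generic sentence"] * 3)
model = post_train(model, ["domain sentence"] * 2)
model = fine_tune(model, [("example", 1)])
print(model)
```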