Chinese National Conference on Computational Linguistics (2020)



pdf (full)
bib (full)
Proceedings of the 19th Chinese National Conference on Computational Linguistics

pdf bib
Proceedings of the 19th Chinese National Conference on Computational Linguistics
Maosong Sun (孙茂松) | Sujian Li (李素建) | Yue Zhang (张岳) | Yang Liu (刘洋)

pdf bib
基于规则的双重否定识别——以“不v1不v2”为例(Double Negative Recognition Based on Rules——Taking “不v1不v2” as an Example)
Yu Wang (王昱)

"不v1不v2" is one of the typical double-negative constructions in Chinese. It covers several structures, including "不 + auxiliary verb + 不 + v2" (不得不去), "不 + 是 + 不v2" (不是不好), and predicate-object structures "不v1...不v2" (不认为他不去), making the phenomenon complex. Taking "不v1不v2" as a case study and drawing on the notions of metalinguistic negation, verb factivity, and negation focus, this paper surveys the construction comprehensively and formulates recognition strategies for "不v1不v2" double negatives. Based on these strategies, an automatic recognition program was implemented, and lexicons such as an auxiliary-verb list and a non-factive-verb list were compiled in the process. Applied to 28,033 sentences, the recognizer achieves a precision of 97.87% and a recall of about 93.10%.
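
As a toy illustration of the rule-based strategy described above, the sketch below matches the "不v1不v2" pattern with a small, hypothetical auxiliary-verb list; it is not the paper's actual program or its word lists.

```python
# Illustrative rule-based matcher for the "不V1不V2" double-negative pattern.
# AUX_VERBS is a tiny hypothetical sample, not the paper's auxiliary-verb list.
import re

AUX_VERBS = {"得", "能", "会", "该", "肯", "敢"}  # e.g. 不得不去

def match_double_negative(sentence: str):
    """Return (v1, v2) pairs that match 不V1不V2 with a licensed V1."""
    hits = []
    for m in re.finditer(r"不(.)不([\u4e00-\u9fff]{1,2})", sentence):
        v1, v2 = m.group(1), m.group(2)
        if v1 in AUX_VERBS or v1 == "是":   # 不+aux+不V2 or 不是不V2
            hits.append((v1, v2))
    return hits

print(match_double_negative("他不得不去,这不是不好。"))  # [('得', '去'), ('是', '好')]
```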

pdf bib
基于语料库的武侠与仙侠网络小说文体、词汇及主题对比分析(A Corpus-based Contrastive Analysis of Style, Vocabulary and Theme of Wuxia and Xianxia Internet Novels)
Sanle Zhang (张三乐) | Pengyuan Liu (刘鹏远) | Hu Zhang (张虎)

Internet literature is developing rapidly in China and its volume and influence grow year by year, yet there is no publicly available large-scale corpus of internet literature, and quantitative corpus-based studies of specific genres are rare. This paper builds a preliminary corpus of internet novels covering the wuxia (martial-arts) and xianxia (immortal-heroes) genres, and contrasts the two genres in style, word usage, and theme using text measurement, word-frequency statistics, and topic mining. The comparison shows that the two genres are broadly similar in style, while their vocabulary and themes show both commonalities and genre-specific characteristics. From micro to macro and from surface to content, the study combines quantitative statistics with qualitative analysis to compare wuxia and xianxia internet novels from multiple angles and at multiple levels.

pdf
基于计量的百年中国人名用字性别特征研究(A Quantified Research on Gender Characteristics of Chinese Names in A Century)
Bingjie Du (杜冰洁) | Pengyuan Liu (刘鹏远) | Yongsheng Tian (田永胜)

This paper builds a database of Chinese celebrity names with more than 110,000 entries, each carrying socio-cultural labels such as name, gender, and birthplace, as well as orthographic labels such as pinyin, stroke count, and radical; it is, to our knowledge, the largest Chinese real-name database available for research. Selecting names from 1919 onward, the paper combines qualitative and quantitative methods to examine the characteristics of name characters, their gender differences, and their diachronic change. In name length, male names are longer than female names; in character complexity, characters in female names are more complex than in male names; in lexical richness, name characters become increasingly uniform and concentrated over time, with male names richer than female names. Computing the gender skewness of name characters shows that female names use more gender-exclusive characters. The imagery of name characters differs markedly between genders and changes over time, with the sharpest change around the Reform and Opening-up, and more pronounced change for female names than for male names. In addition, we derive a list of gender-polarized characters, high-frequency character lists for each period, and tables of character-usage trends.
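
The abstract does not publish its exact skewness formula, so the sketch below assumes a common definition, (male count − female count) / total count per character, purely for illustration.

```python
# Minimal sketch of a per-character gender-skewness statistic under the
# assumed definition above; data and formula are illustrative only.
from collections import Counter

def char_gender_skew(names):
    """names: list of (given_name, gender) with gender in {'M', 'F'}."""
    male, female = Counter(), Counter()
    for name, gender in names:
        for ch in name:
            (male if gender == "M" else female)[ch] += 1
    chars = set(male) | set(female)
    return {c: (male[c] - female[c]) / (male[c] + female[c]) for c in chars}

print(char_gender_skew([("建国", "M"), ("秀英", "F"), ("英", "F")]))
```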

pdf
伟大的男人和倔强的女人:基于语料库的形容词性别偏度历时研究(Great Males and Stubborn Females: A Diachronic Study of Corpus-Based Gendered Skewness in Chinese Adjectives)
Shucheng Zhu (朱述承) | Pengyuan Liu (刘鹏远)

Gender bias has attracted attention from both sociolinguists and computational linguists, but most existing studies are based on English; studies of gender bias in Chinese, especially adjective-based ones, are scarce, even though adjectives are a strong probe into social norms for male and female roles. This paper first uses a questionnaire survey to build a dataset of 466 adjectives, defines gender skewness as the degree to which an adjective's meaning matches the male or female group, and computes the skewness of each adjective in the dataset. Then, based on the DCC corpus, it studies the overall diachronic change of adjective gender skewness in the People's Daily and examines the diachronic change of adjectives co-occurring with personal names. The adjectives used in the People's Daily show an overall trend toward neutrality over time, but were strongly masculine during the Cultural Revolution; adjectives co-occurring with male names also trend toward neutrality.

pdf
用计量风格学方法考察《水浒传》的作者争议问题——以罗贯中《平妖传》为参照(Quantitative Stylistics Based Research on the Controversy of the Author of “Tales of the Marshes”: Comparing with “Pingyaozhuan” of Luo Guanzhong)
Li Song (宋丽) | Ying Liu (刘颖)

Whether "Tales of the Marshes" (Shui Hu Zhuan) was written by a single author or co-authored, and the relationship between Shi Nai'an and Luo Guanzhong, have long been disputed. This paper groups the authorship hypotheses into five cases: written by Shi; written by Luo; written by Shi and continued by Luo; written by Luo and continued by others; and written by Shi and revised by Luo. Taking Luo Guanzhong's "Pingyaozhuan" as a reference, it examines the writing style of "Tales of the Marshes" with hypothesis testing, text clustering, text classification, and stylometric fluctuation analysis, combined with analysis of textual content, to provide evidence for authorship attribution. The results suggest that only the "written by Luo, continued by others" case is likely, i.e., the first 70 chapters were written by Luo Guanzhong and the rest continued by someone else; the other four cases are all unlikely.

pdf
多轮对话的篇章级抽象语义表示标注体系研究(Research on Discourse-level Abstract Meaning Representation Annotation Framework in Multi-round Dialogue)
Tong Huang (黄彤) | Bin Li (李斌) | Peiyi Yan (闫培艺) | Tingting Ji (计婷婷) | Weiguang Qu (曲维光)

Dialogue analysis is a fundamental topic for natural-language dialogue applications such as intelligent customer service and chatbots. Dialogue text differs substantially from ordinary written text, with abundant address terms, emotional phrases, ellipsis, inverted word order, and redundancy, which strongly affect syntactic and semantic parsers; the accuracy of automatic dialogue analysis has consistently lagged behind that of written text. A major reason is the lack of a rigorous formal representation for multi-turn dialogue. After surveying dialogue annotation schemes and corpora at home and abroad, this paper proposes a discourse-level multi-turn dialogue annotation scheme based on Abstract Meaning Representation. It discusses discourse-level semantic structure annotation, gives an alignment scheme between words and concept relations, adds semantic relations and concepts for address terms and emotional phrases, adjusts the argument structure of subjective sentiment words, specifies the treatment of several dialogue-specific phenomena, and designs a manual annotation platform, laying the groundwork for large-scale annotation and computational study of multi-turn dialogue corpora.

pdf
发音属性优化建模及其在偏误检测的应用(Speech attributes optimization modeling and application in mispronunciation detection)
Minghao Guo (郭铭昊) | Yanlu Xie (解焱陆)

In recent years, speech attributes have often been used in computer-assisted pronunciation training (CAPT) systems. Addressing several difficulties in using speech attributes, this paper proposes a method for modeling fine-grained speech attributes (FSA) and tests it on cross-language attribute recognition and mispronunciation detection. We obtain a set of attribute detectors with a best average recognition accuracy of about 95%; on two L2 test sets, the FSA-based method improves mispronunciation detection by more than 1% over the baseline. We also set up control experiments based on the cross-language properties of speech attributes, and test and analyze them on the above tasks.

pdf
基于抽象语义表示的汉语疑问句的标注与分析(Chinese Interrogative Sentences Annotation and Analysis Based on the Abstract Meaning Representation)
Peiyi Yan (闫培艺) | Bin Li (李斌) | Tong Huang (黄彤) | Kairui Huo (霍凯蕊) | Jin Chen (陈瑾) | Weiguang Qu (曲维光)

Syntactic and semantic analysis of interrogative sentences is widely applied in search engines, information extraction, and question answering. Computational linguistics usually handles interrogatives by combining question classification with syntactic parsing, but precision and efficiency remain unsatisfactory. Linguistic research on interrogatives is rich, covering structural types, interrogative focus, and non-interrogative uses of interrogative pronouns, but lacks a systematic formal representation. This paper addresses this problem by using Chinese Abstract Meaning Representation (CAMR), a graph-based whole-sentence semantic representation, to annotate the semantic structure of interrogatives, representing the interrogative focus and the whole-sentence semantics in a unified way. We selected 2,071 interrogative sentences from a 20,000-sentence corpus drawn from the CTB8.0 web-media portion, primary-school Chinese textbooks, and the Chinese translation of The Little Prince, and analyzed their main characteristics. The statistics show that all kinds of interrogative pronouns can be represented by combining the interrogative concept amr-unknown with semantic relations, fully capturing the key information, interrogative focus, and semantic structure of interrogatives. Finally, based on the semantic relations associated with interrogative pronouns, we compute the probability distribution of interrogative focus: cause, modifier, and patient rank highest, at 26.53%, 16.73%, and 16.44% respectively. AMR-based annotation and analysis of interrogatives provides basic theory and resources for the study of Chinese interrogative sentences.

pdf
语用视角下复述句生成方式的类型考察(A Pragmatic Study of Generation Method of Paraphrase Sentence)
Tianhuan Ma (马天欢)

This paper compares 160 paraphrase texts produced by native Chinese speakers with their source texts clause by clause, yielding 6,484 paraphrase sentence pairs. By generation method, they fall into two broad classes: word substitution and whole-sentence recasting. Analyzing these pairs with pragmatic principles reveals a difference from previously studied paraphrase phenomena: the sentence pairs often do not share the same logical-semantic truth value, yet convey the same pragmatic meaning in a given context and are pragmatically equivalent. This suggests that recognizing paraphrases in real communication requires not only grammatical and semantic knowledge bases, but also knowledge bases containing pragmatic knowledge and contextual information.

pdf
面向汉语作为第二语言学习的个性化语法纠错(Personalizing Grammatical Error Correction for Chinese as a Second Language)
Shengsheng Zhang (张生盛) | Guina Pang (庞桂娜) | Liner Yang (杨麟儿) | Chencheng Wang (王辰成) | Yongping Du (杜永萍) | Erhong Yang (杨尔弘) | Yaping Huang (黄雅平)

Grammatical error correction (GEC) aims to automatically detect and correct grammatical errors in text, such as word-order and spelling errors. Many GEC methods for Chinese achieve good results but usually ignore learners' individual characteristics, such as proficiency level and native-language background. This paper therefore proposes personalized grammatical error correction for learners of Chinese as a second language, correcting the errors of learners with different characteristics separately, and builds datasets of Chinese learners from different domains for experiments. The results show that adapting the GEC model to each learner domain clearly improves performance.

pdf
中文问句的形式分类和资源建设(Formal classification and resource construction of Chinese questions)
Jiangtao Li (黎江涛) | Gaoqi Rao (饶高琦)

This paper summarizes the role of question form in filtering question corpora, explores the formal features needed for question classification, builds a Chinese question classification corpus through manual annotation, and conducts rule-based and statistical classification experiments on it, iteratively optimizing feature combinations into a feature rule set, thereby providing a formal basis for classification in question answering. In the experiments, a finite-state automaton based on the optimized feature rule set achieves a macro-averaged F1 of 0.94; among statistical machine learning models, random forest performs best, with a macro-averaged F1 of 0.98, showing that form-based question classification is both feasible and accurate.
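
A minimal sketch of the rule-matching idea: surface patterns assign question types in priority order. The patterns below are tiny examples, not the paper's optimized feature rule set or its finite-state automaton.

```python
# Illustrative surface-form question classifier; rules checked in order.
import re

RULES = [
    ("选择问", re.compile("还是")),              # alternative question
    ("特指问", re.compile("[谁什么哪怎为何]")),   # wh-question
    ("是非问", re.compile("吗[?？]?$")),          # yes-no question
]

def classify(question: str) -> str:
    for label, pattern in RULES:
        if pattern.search(question):
            return label
    return "其他"  # fallback class

print(classify("你去还是不去?"), classify("这是什么?"), classify("你去吗?"))
```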

pdf
基于组块分析的汉语块依存语法(Chinese Chunk-Based Dependency Grammar)
Qingqing Qian (钱青青) | Chengwen Wang (王诚文) | Gaoqi Rao (饶高琦) | Endong Xun (荀恩东)

Classic word-based dependency grammar runs into many difficulties caused by the characteristics of Chinese when parsing Chinese sentences. This paper therefore proposes a chunk-based dependency grammar for Chinese, which takes the predicate as the core and chunks as the unit of analysis, finds the chunks governed by each predicate within and across clauses, and builds a syntactic analysis framework at the sentence-group level. This is not merely a matter of raising the linguistic unit of the leaf nodes: the grammar also innovates in analysis methods and rules for the semantic characteristics of Chinese, handles micro-level logical-structure knowledge well, and lays the groundwork for meso-level argument knowledge and macro-level discourse knowledge. The paper introduces the ideas, representation, analysis methods, and characteristics of chunk-based dependency grammar, and briefly describes the construction of a chunk dependency treebank. To date, the treebank contains 1.87 million characters (over 40,000 complex sentences and 100,000 clauses), of which 67% is news text and 32% encyclopedia text.

pdf
新支话题的句法成分和语义角色研究(A Study of Syntactic Constituent and Semantic Role of New Branch Topic)
Dawei Lu (卢达威)

Topic continuation and topic shift are important pragmatic functions in discourse. From the perspective of sentence-initial topic sharing, this paper classifies them into five types: sentence-initial topic continuation, sentence-internal sub-topic continuation, complete topic shift, pivot topic shift, and new-branch topic shift, and then studies the special case of topic shift: the new-branch topic. Based on a 330,000-character corpus annotated with generalized topic structure, the paper analyzes the syntactic constituents and semantic roles of new-branch topics. It finds that the constituents that can become new-branch topics are overwhelmingly nominal phrases denoting concrete entities. Syntactically, subjects of object or complement clauses, small subjects of subject-predicate-predicate sentences, subjects of adverbial-initial sentences, sentence-final objects, non-final objects of serial-verb sentences, pivots of pivotal sentences, prepositional objects, and even adverbials can all serve as new-branch topics and introduce new-branch clauses; sentence-final objects do so most often, while no case of an indirect object as a new-branch topic was found. Semantically, most subject-side arguments (agent, experiencer, etc.) and object-side arguments (patient, relation, result, target, dative), and a few instrumental arguments (instrument, manner, material) and circumstantial arguments (location, goal, path), can become new-branch topics; relation and patient are the most prominent, followed by agent, result, and target. The study reveals a possible way in which syntax and semantics constrain the pragmatic phenomenon of topic shift, helping humans and computers better understand the topic-shift mechanism of Chinese discourse, with the aim of gradually grounding this pragmatic phenomenon in semantic and ultimately syntactic form and finally enabling automatic analysis of topic shift.

pdf
眼动记录与主旨结构标注的关联性分析研究(Research on the correlation between eye movement feature and thematic structure label)
Haocong Shan (单昊聪) | Qiang Zhou (周强)

Given a Chinese sentence group containing a gist-summarizing sentence, the annotation of its internal structure is a linguistic analysis result, while the eye-movement trajectory during reading reflects human cognition; fusing the two sources of information and analyzing their correlation is the main work of this paper. Using a classification model based on an RBF-kernel support vector machine and recursive feature elimination, the paper predicts from the eye-movement metrics of each punctuation-delimited clause segment whether the segment carries gist content, reaching an accuracy of 0.76. By analyzing the distribution of eye-movement data over key segments, it further identifies the eye-movement metrics that best discriminate gist-summarizing information in sentence groups.
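
A sketch of the SVM-plus-RFE pipeline on synthetic data. One assumption to note: scikit-learn's RFE needs linear coefficients for feature ranking, so a linear-kernel SVM ranks the features here and an RBF-kernel SVM does the final classification; the paper's exact setup may differ.

```python
# RFE feature selection over eye-movement metrics, then RBF-SVM classification.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

X = np.random.rand(200, 12)        # 12 eye-movement metrics per clause segment
y = np.random.randint(0, 2, 200)   # 1 = key (gist-bearing) segment

rfe = RFE(SVC(kernel="linear"), n_features_to_select=5).fit(X, y)
clf = SVC(kernel="rbf").fit(X[:, rfe.support_], y)   # final classifier
print("selected metric indices:", np.where(rfe.support_)[0])
```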

pdf
汉语竞争类多人游戏语言中疑问句的形式与功能(The Form and Function of Interrogatives in Multi-party Chinese Competitive Game Conversation)
Wenxian Zhang (张文贤) | Qi Su (苏琪)

Based on a self-built corpus of multi-party competitive game conversations, this paper examines the form and function of Chinese interrogative sentences. Building on previous research, it first divides interrogatives into five types, then examines where the different types occur in dialogue and what functions they serve. The study shows that yes-no questions (including A-not-A questions) and wh-questions are the most common types, while alternative questions are the least frequent. Most interrogatives trigger turn-taking and serve a querying function; negation and pointing out facts are also major functions of interrogatives. The negation function of wh-questions and the fact-pointing function of tag questions are especially prominent.

pdf
融合目标端句法的AMR-to-Text生成(AMR-to-Text Generation with Target Syntax)
Jie Zhu (朱杰) | Junhui Li (李军辉)

AMR-to-Text generation is the task of generating, from a given AMR graph, text expressing the same meaning; it can be treated as machine translation from a source AMR graph to a target sentence. Existing methods explore how to better model the graph structure. However, they face an under-specification problem: many syntactic decisions at generation time are not constrained by the semantic graph, so the latent syntactic information of the sentence is ignored. To address this, the paper proposes a direct and effective method that explicitly incorporates syntactic information into AMR-to-Text generation, with experiments on Transformer and on the current state-of-the-art model for this task. On two standard English datasets, LDC2018E86 and LDC2017T10, the method achieves significant improvements and new state-of-the-art performance.

pdf
基于神经网络的连动句识别(Recognition of serial-verb sentences based on Neural Network)
Chao Sun (孙超) | Weiguang Qu (曲维光) | Tingxin Wei (魏庭新) | Yanhui Gu (顾彦慧) | Bin Li (李斌) | Junsheng Zhou (周俊生)

Serial-verb sentences, which contain serial-verb constructions, are a special syntactic structure of Chinese and are very common and frequently used in modern Chinese. Their grammatical structure and semantic relations are complex, posing many recognition problems. This paper studies serial-verb sentence recognition and proposes a neural-network-based method with two steps: first, simple rules preprocess the corpus; second, treating the task as text classification, sentences are encoded with BERT and features are jointly extracted with multi-layer CNN and BiLSTM models for classification, completing the recognition task. Experiments on a manually annotated corpus achieve 92.71% accuracy and an F1 of 87.41%.

pdf
融合全局和局部信息的汉语宏观篇章结构识别(Combining Global and Local Information to Recognize Chinese Macro Discourse Structure)
Yaxin Fan (范亚鑫) | Feng Jiang (蒋峰) | Xiaomin Chu (褚晓敏) | Peifeng Li (李培峰) | Qiaoming Zhu (朱巧明)

As a basic task in macro discourse analysis, discourse structure recognition aims to identify the structure between adjacent discourse units and build a hierarchical discourse structure tree. Existing work considers either only local structural and semantic information or only global information. This paper proposes a pointer network model that fuses global and local information, considering global semantic information while also measuring how closely adjacent paragraphs are semantically related, thereby effectively improving macro discourse structure recognition. Experiments on the Macro Chinese Discourse Treebank (MCDTB) show that the proposed model outperforms the current best-performing model.

pdf
基于图神经网络的汉语依存分析和语义组合计算联合模型(Joint Learning Chinese Dependency Parsing and Semantic Composition based on Graph Neural Network)
Kai Wang (汪凯) | Mingtong Liu (刘明童) | Yuanmeng Chen (陈圆梦) | Yujie Zhang (张玉洁) | Jinan Xu (徐金安) | Yufeng Chen (陈钰枫)

The principle of compositionality states that the meaning of a sentence is composed from the meanings of its constituents according to rules, so semantic composition based on syntactic structure has long been an important research direction, with tree-structured composition the most representative approach. However, tree-based methods are hard to apply to large-scale data processing, mainly because the order of semantic composition depends on the specific tree structure and cannot be parallelized. This paper proposes a graph-based joint framework for dependency parsing and semantic composition, training the composition model and the parsing model with a paraphrase identification task. On the one hand, the graph model can be parallelized in both training and prediction, greatly reducing computation time; on the other hand, the composition framework joined with parsing needs no external parser, and joint learning of the two tasks lets the semantic representation learn both syntactic structure and semantic context. Evaluated on the public Chinese paraphrase identification dataset LCQMC, the model's accuracy approaches that of tree-structured composition, reaching 79.54%, while prediction is up to 30 times faster.

pdf
基于强负采样的词嵌入优化算法(Word Embedding Optimization Based on Hard Negative Sampling)
Yuchen Wang (王雨晨) | Miaozhe Lin (林淼哲) | Jiefan Zhan (詹杰凡)

word2vec is one of the most important word embedding algorithms in natural language processing. To address the vanishing sample contribution that random negative sampling can cause as a training objective, this paper proposes hard negative sampling methods based on cosine distance, applicable to both the CBOW and Skip-gram frameworks: HNS-CBOW and HNS-SG. The original random negative sampling is split into two steps: first, compute the cosine distance between random negative samples and the target word; then update parameters using only the closer, harder negatives. Using English Wikipedia as the training corpus, the methods are quantitatively evaluated on public semantic-syntactic analogy datasets; experiments show the optimized embeddings are significantly better than the original method. Compared with publicly released pre-trained vectors such as GloVe, higher accuracy can be obtained on a much smaller corpus.
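
A toy sketch of the hard-negative selection step described above: draw random negatives, keep those closest to the target word by cosine similarity, and update only on those. Sizes and vector tables are illustrative, not the paper's setup.

```python
# Select "hard" negatives: the random negatives most similar to the target.
import numpy as np

def hard_negatives(target_vec, out_vectors, n_random=64, n_hard=5):
    cand = np.random.choice(len(out_vectors), n_random, replace=False)
    vecs = out_vectors[cand]
    cos = vecs @ target_vec / (
        np.linalg.norm(vecs, axis=1) * np.linalg.norm(target_vec) + 1e-8)
    return cand[np.argsort(-cos)[:n_hard]]   # most similar = hardest

out_vectors = np.random.randn(1000, 100)     # stand-in output embedding table
print(hard_negatives(np.random.randn(100), out_vectors))
```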

pdf
联合依存分析的汉语语义组合模型(Chinese Semantic Composition Model with Dependency Parsing)
Yuanmeng Chen (陈圆梦) | Yujie Zhang (张玉洁) | Jinan Xu (徐金安) | Yufeng Chen (陈钰枫)

Among semantic composition methods, structured approaches emphasize using structural information to guide how word meanings are composed. Existing structured methods obtain syntactic structure from an external parser, which separates parsing from composition: parsing accuracy severely limits the composition model, and domain mismatch of training data worsens the degradation. This paper proposes a semantic composition model with joint dependency parsing. The parser is fine-tuned while the composition model is trained, adapting it to the domain of the composition training data; and intermediate representations from the parser are fed into the composition component, providing richer structural and semantic information, reducing the composition model's sensitivity to parsing errors and improving robustness. Taking Chinese as the object of study, we apply the model to paraphrase identification and validate it on the CTB5 Chinese dependency data and the LCQMC Chinese paraphrase identification data. The proposed method reaches 76.81% accuracy and 78.03% F1 on paraphrase identification; further experiments verify the effectiveness of joint learning and of using intermediate representations, with comparative analysis against representative related work.

pdf
基于对话约束的回复生成研究(Research on Response Generation via Dialogue Constraints)
Mengyu Guan (管梦雨) | Zhongqing Wang (王中卿) | Shoushan Li (李寿山) | Guodong Zhou (周国栋)

Existing dialogue systems suffer from meaningless safe responses such as "OK" and "I don't know". In daily conversation, speakers usually discuss a specific topic, and each utterance carries clear sentiment and intent. This paper therefore proposes a response generation model with dialogue constraints: on top of a Seq2Seq model, it jointly recognizes the topic, sentiment, and intent of the dialogue, and constrains the topic, sentiment, and intent of the generated response, producing responses that are topically relevant and have reasonable sentiment and intent. Experiments show the method effectively improves the quality of generated responses.

pdf
多模块联合的阅读理解候选句抽取(Evidence sentence extraction for reading comprehension based on multi-module)
Yu Ji (吉宇) | Xiaoyue Wang (王笑月) | Ru Li (李茹) | Shaoru Guo (郭少茹) | Yong Guan (关勇)

Machine reading comprehension, a key task of natural language understanding, has attracted wide attention at home and abroad. For multiple-choice reading comprehension, where the lack of evidence annotation and the need for multi-step reasoning make evidence sentence extraction difficult, this paper proposes a multi-module evidence extraction model: first, a pre-trained model is fine-tuned with partially annotated data; second, candidate sentences for multi-hop questions are extracted recursively with TF-IDF; finally, an unsupervised method further filters the predictions to reduce redundancy. Validated on Chinese college-entrance-exam (gaokao) multiple-choice questions and the RACE dataset, the method improves evidence-extraction F1 by 3.44% over the best baseline, and using the extracted evidence as model input for downstream answering improves accuracy by 3.68% and 3.6% respectively over full-text input, confirming the method's effectiveness.
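
The recursive TF-IDF step can be sketched as query expansion: each retrieved sentence is appended to the query before the next hop. This is a simplification with toy English data, not the paper's full pipeline.

```python
# Recursive TF-IDF evidence retrieval for multi-hop questions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recursive_tfidf(question, sentences, hops=2):
    picked, query, pool = [], question, list(sentences)
    for _ in range(hops):
        vec = TfidfVectorizer().fit(pool + [query])
        sims = cosine_similarity(vec.transform([query]), vec.transform(pool))[0]
        best = pool.pop(int(sims.argmax()))
        picked.append(best)
        query = query + " " + best   # expand query with the found evidence
    return picked

print(recursive_tfidf("who wrote the book about Paris",
                      ["Smith wrote the book",
                       "the book describes Paris",
                       "weather is nice"]))
```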

pdf
基于层次化语义框架的知识库属性映射方法(Property Mapping in Knowledge Base Under the Hierarchical Semantic Framework)
Yu Li (李豫) | Guangyou Zhou (周光有)

Knowledge-base question answering is an important NLP task that aims to give concise, accurate answers to natural-language questions. Lack of datasets and inconsistent features currently make it hard to build domain KBQA with general data and methods. This paper treats "question intent" as a feature that different QA domains may share, and makes the mapping from questions to the relation predicates of a triple knowledge base the core of question answering. To consider multiple levels of semantics and avoid losing important information, the paper fuses deep semantics based on gated convolutions with shallow semantics based on interactive attention, via a gated perception mechanism. Experiments on the NLPCC-ICCPOL 2016 KBQA dataset show clear gains over existing CDSSM- and BDSSM-based methods. In addition, by constructing an astronomy common-sense knowledge base, the question-predicate mapping model is ported to a specific domain and combined with a Bi-LSTM-CRF model to build an astronomy common-sense question answering system.

pdf
面向垂直领域的阅读理解数据增强方法(Method for reading comprehension data enhancement in vertical field)
Zhengwei Lv (吕政伟) | Lei Yang (杨雷) | Zhizhong Shi (石智中) | Xiao Liang (梁霄) | Tao Lei (雷涛) | Duoxing Liu (刘多星)

A reading-comprehension QA system uses semantic understanding and other NLP techniques to analyze unstructured documents and generate an answer to an input question, and has high research and application value. In vertical-domain applications, annotating reading-comprehension QA data is expensive and user questions are phrased in complex, diverse ways, so systems suffer low accuracy and poor robustness. This paper proposes a data augmentation method for vertical-domain reading comprehension QA that constructs training data from real user questions, reducing annotation cost on the one hand and increasing training data diversity on the other, thereby improving model accuracy and robustness. Experiments with automotive-domain data show that the method effectively improves both the accuracy and the robustness of vertical-domain reading comprehension models.

pdf
融入对话上文整体信息的层次匹配回应选择(Learning Overall Dialogue Information for Dialogue Response Selection)
Bowen Si (司博文) | Fang Kong (孔芳)

Dialogue is a sequential interactive process, and response selection, which chooses an appropriate response given the dialogue history, is a research hotspot in NLP. Existing work has had some success but still shows two salient problems: existing encoders do not fully mine the semantics of dialogue text, and only the relation between each dialogue turn and the candidate response is considered, ignoring the overall semantics of the dialogue history. For the first problem, this paper uses multi-head self-attention to effectively capture the semantics of dialogue text; for the second, it integrates the overall semantics of the dialogue history and matches the candidate response at three levels, word, sentence, and whole history, ensuring complete matching information. Comparative experiments on Ubuntu Corpus V1 and Douban Conversation Corpus show the effectiveness of the method.

pdf
一种结合话语伪标签注意力的人机对话意图分类方法(A Human-machine Dialogue Intent Classification Method using Utterance Pseudo Label Attention)
Jiande Ding (丁健德) | Peijie Huang (黄沛杰) | Jiabao Xu (许嘉宝) | Youming Peng (彭佑铭)

In human-machine dialogue, the system must classify user intent before triggering the corresponding business flow. Because multi-turn human-machine dialogue is colloquial, long, and feature-sparse, existing text classification methods still struggle with intent classification. Building on hierarchical attention networks (HAN), this paper proposes PLA-HAN (HAN with utterance pseudo label attention): it selects a pseudo-label set, builds a single-utterance intent recognition model, and designs an utterance pseudo-label attention mechanism that identifies the pseudo intent labels of individual utterances and computes their attention. This attention is then embedded into HAN's hierarchy and fused with HAN's sentence-level attention; the sentence-level attention enriched with single-utterance intent information further improves overall performance. Experiments on the evaluation corpus of the "user intent classification in customer service" shared task organized by the Chinese Information Processing Society of China show that PLA-HAN outperforms HAN and the other comparison methods.

pdf
基于BERTCA的新闻实体与正文语义相关度计算模型(Semantic Relevance Computing Model of News Entity and Text based on BERTCA)
Junyi Xiang (向军毅) | Huijun Hu (胡慧君) | Ruibin Mao (毛瑞彬) | Maofu Liu (刘茂福)

Today's search engines still emphasize form over meaning and cannot deeply understand the semantics of search keywords and documents, so semantic retrieval has become a pressing problem for modern search engines. To improve semantic understanding, this paper proposes a semantic relevance computation method. First, 10,000 items pairing financial news headline entities with news bodies are annotated for semantic relevance; then a BERTCA (Bidirectional Encoder Representation from Transformers Co-Attention) model is built for entity-body relevance. Using a pre-trained BERT model, it jointly considers the semantics of the fine-grained entity and the coarse-grained body and matches them through co-attention. The model can both compute the relevance between a financial news entity and the news body and assign a relevance class by threshold. Experiments show it exceeds 95% accuracy on the 10,000 annotated items, outperforming current mainstream models; concrete search examples further demonstrate its performance.

pdf
基于多任务学习的生成式阅读理解(Generative Reading Comprehension via Multi-task Learning)
Jin Qian (钱锦) | Rongtao Huang (黄荣涛) | Bowei Zou (邹博伟) | Yu Hong (洪宇)

Generative reading comprehension is a novel and very challenging line of machine reading comprehension research. Unlike mainstream extractive reading comprehension, a generative model is not limited to extracting the answer from the passage; it combines the question and the passage to generate a natural, complete statement as the answer. However, existing generative models lack understanding of the answer's boundary in the passage and of the question type. To address this, the paper proposes a multi-task generative reading comprehension model: in training, answer generation is the main task, with answer extraction and question classification as auxiliary tasks, jointly learning and optimizing the encoder parameters; at test time, the trained encoder is loaded to decode answers. Experiments show that the answer extraction and question classification models effectively improve the generative model's performance.

pdf
基于多头注意力和BiLSTM改进DAM模型的中文问答匹配方法(Chinese question answering method based on multi-head attention and BiLSTM improved DAM model)
Hanzhong Qin (秦汉忠) | Chongchong Yu (于重重) | Weijie Jiang (姜伟杰) | Xia Zhao (赵霞)

To address mismatched response details and semantic confusion in the retrieval-based multi-turn dialogue model DAM (Deep Attention Matching Network), this paper proposes a Chinese question-answer matching method that improves DAM with multi-head attention and a bidirectional long short-term memory network (BiLSTM). Multi-head attention enables the model to handle longer multi-turn dialogues and better model the match between the target response and the context. In addition, BiLSTM is used during feature fusion to capture sequential dependencies across turns and further improve the accuracy of selecting the target candidate response. Experiments on the open Douban and E-commerce datasets outperform the DAM baseline, with R10@1 improving by 1.5% when word-vector enhancement is included.
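
A minimal PyTorch sketch of the fusion idea: multi-head attention matches the candidate response against the context, and a BiLSTM then models sequential dependencies over the matched features. Dimensions are illustrative; this is not the authors' full DAM variant.

```python
# Multi-head attention matching + BiLSTM fusion, dimensions illustrative.
import torch
import torch.nn as nn

d, heads = 128, 8
attn = nn.MultiheadAttention(d, heads, batch_first=True)
bilstm = nn.LSTM(d, d // 2, bidirectional=True, batch_first=True)

context = torch.randn(4, 50, d)    # batch of multi-turn context tokens
response = torch.randn(4, 20, d)   # candidate response tokens

matched, _ = attn(response, context, context)  # response attends to context
fused, _ = bilstm(matched)                     # sequence-aware fusion
score_features = fused.mean(dim=1)             # pooled matching features
print(score_features.shape)                    # torch.Size([4, 128])
```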

pdf
基于Graph Transformer的知识库问题生成(Question Generation from Knowledge Base with Graph Transformer)
Yue Hu (胡月) | Guangyou Zhou (周光有)

Knowledge-base question answering infers answers from a knowledge base and needs large numbers of annotated QA pairs, but building large, accurate datasets is expensive and constrained by domain and other factors. To ease the annotation problem, question generation over knowledge bases, which automatically generates questions from KB triples, has attracted researchers' attention. Questions generated from a single triple by existing methods are short and lack diversity. To generate informative and diverse questions, this paper uses two encoding layers, Graph Transformer and BERT, to strengthen the multi-granularity semantic representation of triples and acquire background information. Experimental results on SimpleQuestions demonstrate the effectiveness of the method.

pdf
基于BERT与柱搜索的中文释义生成(Chinese Definition Modeling Based on BERT and Beam Search)
Qinan Fan (范齐楠) | Cunliang Kong (孔存良) | Liner Yang (杨麟儿) | Erhong Yang (杨尔弘)

Definition generation is the task of generating a definition for a target word. Previous work on Chinese definition generation did not consider the target word's context. This paper uses the target word's context for the first time in Chinese definition generation and proposes a generation model based on BERT and beam search. We build a Chinese CWN dataset with contexts for the experiments and, besides BLEU, use semantic similarity as an additional automatic metric. Results show significant improvements on both the Chinese CWN dataset and the English Oxford dataset, and human evaluation agrees with the automatic metrics. Finally, generated examples are analyzed in depth.

pdf
基于深度学习的实体关系抽取研究综述(Review of Entity Relation Extraction based on deep learning)
Zhentao Xia (夏振涛) | Weiguang Qu (曲维光) | Yanhui Gu (顾彦慧) | Junsheng Zhou (周俊生) | Bin Li (李斌)

As a core subtask of information extraction, entity relation extraction matters greatly for knowledge graphs, intelligent question answering, semantic search, and other NLP applications. Relation extraction automatically identifies the semantic relation between entities in unstructured text. This survey focuses on sentence-level relation extraction: it introduces the main datasets and reviews existing techniques, grouped into supervised relation extraction, distantly supervised relation extraction, and joint entity-relation extraction. We compare the models used for the task and analyze their contributions and shortcomings, and finally review the state of research and methods for Chinese entity relation extraction.

pdf
小样本关系分类研究综述(Few-Shot Relation Classification: A Survey)
Han Hu (胡晗) | Pengyuan Liu (刘鹏远)

Relation classification, an important step in building structured knowledge, has received much attention in NLP. In many application domains (e.g., medicine and finance), collecting enough data to train relation classification models is very difficult. In recent years, few-shot learning, which needs only a small number of training samples, has emerged across many fields. This paper systematically surveys recent few-shot relation classification models and methods. By metric method, existing approaches are divided into prototype-based and distribution-based; by use of extra information, models are divided into pre-trained and non-pre-trained. Besides the standard setting, the survey also covers few-shot learning in cross-domain and resource-scarce scenarios, discusses the limitations of current few-shot relation classification methods, analyzes the technical challenges of cross-domain few-shot learning, and looks ahead to future directions of few-shot relation classification.

pdf
基于阅读理解框架的中文事件论元抽取(Chinese Event Argument Extraction using Reading Comprehension Framework)
Min Chen (陈敏) | Fan Wu (吴凡) | Zhongqing Wang (王中卿) | Peifeng Li (李培峰) | Qiaoming Zhu (朱巧明)

Traditional event argument extraction methods treat the task as multi-class classification or sequence labeling over entity mentions in a sentence, where argument role classes serve only as vector representations, ignoring prior information about the roles; yet the semantics of a role is closely related to the argument itself. This paper proposes to treat the task as machine reading comprehension, expressing argument roles as natural-language questions and extracting arguments by answering those questions in context. The method makes better use of the prior information in argument role classes, and experiments on the ACE2005 Chinese corpus demonstrate its effectiveness.

pdf
基于BERT的端到端中文篇章事件抽取(A BERT-based End-to-End Model for Chinese Document-level Event Extraction)
Hongkuan Zhang (张洪宽) | Hui Song (宋晖) | Shuyi Wang (王舒怡) | Bo Xu (徐波)

Document-level event extraction detects events in a whole document, identifies the elements each event contains, and assigns each element a specific role. For domain-specific Chinese documents, this paper proposes a BERT-based end-to-end model that successively feeds the event type output by earlier layers and entity embedding representations into element and role recognition, strengthening the joint representation of events, elements, and roles in the text and improving the recognition of each event's elements within the document. On this basis, title information and embedded representations of event 5-tuples are used to divide main and subordinate events and fuse their elements. Experiments show clear improvements over existing work.

pdf
面向微博文本的融合字词信息的轻量级命名实体识别(Lightweight Named Entity Recognition for Weibo Based on Word and Character)
Chun Chen (陈淳) | Mingyang Li (李明扬) | Fang Kong (孔芳)

Chinese social-media named entity recognition has long drawn wide attention due to its domain particularity. Informal, unstructured Weibo text has two problems: blurred word boundaries and limited corpus size. For the first, this paper fuses characters and words of the same dimension to obtain rich text-sequence representations; for the second, it proposes an NER model based on the Star-Transformer framework, whose star topology better captures dynamic features, and uses highway networks to optimize the information bridging in Star-Transformer, improving robustness. The proposed lightweight NER model achieves the best results to date on the Weibo corpus.

pdf
引入源端信息的机器译文自动评价方法研究(Research on Incorporating the Source Information to Automatic Evaluation of Machine Translation)
Qi Luo (罗琪) | Maoxi Li (李茂西)

Automatic evaluation of machine translation is an important task in machine translation. Current automatic metrics completely ignore the source sentence and measure translation quality only against human references. This paper proposes an automatic evaluation method that incorporates the source sentence: a quality vector describing translation quality is extracted from the pair of machine translation and its source sentence, and fused via a deep neural network with an automatic evaluation method based on contextual word embeddings. Experiments on the WMT'19 metrics task datasets show that the method effectively strengthens the correlation between automatic and human evaluation; deeper analysis further reveals the important role of source-sentence information in automatic evaluation.

pdf
“细粒度英汉机器翻译错误分析语料库”的构建与思考(Construction of Fine-Grained Error Analysis Corpus of English-Chinese Machine Translation and Its Implications)
Bailian Qiu (裘白莲) | Mingwen Wang (王明文) | Maoxi Li (李茂西) | Cong Chen (陈聪) | Fan Xu (徐凡)

Machine translation error analysis aims to find the errors in machine translation output, including error types and error distributions, and plays an important role in MT research and applications. This paper combines human post-editing with error analysis, annotating post-editing operations with error labels, and, using a mix of automatic and manual annotation, builds a fine-grained English-Chinese machine translation error analysis corpus. Each annotated sample includes the source sentence, machine translation, human reference, post-edited translation, word error rate, and error type labels; the error types include added words, missing words, wrong words, word-order errors, untranslated words, and named-entity translation errors. Inter-annotator agreement confirms the validity of the annotation, and statistical analysis of the corpus can effectively guide MT system development and human post-editing.

pdf
层次化结构全局上下文增强的篇章级神经机器翻译(Hierarchical Global Context Augmented Document-level Neural Machine Translation)
Linqing Chen (陈林卿) | Junhui Li (李军辉) | Zhengxian Gong (贡正仙)

How to use document context effectively has long been a major challenge in document-level neural machine translation. This paper proposes using hierarchical global context drawn from the whole document to improve document-level NMT. The model obtains the dependencies between each word of the current sentence and all sentences and words in the document, and combines dependencies at different levels into a global context carrying hierarchical document information, so that each word of the current source sentence gets its own context combining word- and sentence-level dependencies. To exploit the advantage of parallel sentence pairs in training, a two-step strategy is used: the model is first trained on sentence-level data, then further trained on document-level data to acquire the ability to capture global context. Experiments on several benchmark datasets show meaningful translation-quality improvements over several strong baselines, and further show that context combining hierarchical document information outperforms word-level context alone. The paper also tries different ways of combining the global context with the translation model, observes their effect on performance, and makes an initial study of how global context is distributed across a document in document translation.

pdf
基于多语言联合训练的汉-英-缅神经机器翻译方法(Chinese-English-Burmese Neural Machine Translation Method Based on Multilingual Joint Training)
Zhibo Man (满志博) | Cunli Mao (毛存礼) | Zhengtao Yu (余正涛) | Xunyu Li (李训宇) | Shengxiang Gao (高盛祥) | Junguo Zhu (朱俊国)

Multilingual neural machine translation is an effective approach to low-resource NMT. Existing methods usually rely on shared vocabularies to handle multilingual translation among similar languages such as English, French, and German. Burmese is a typical low-resource language, and the structural differences among Chinese, English, and Burmese are large. To ease the limits on shared-vocabulary size caused by these differences, this paper proposes a Chinese-English-Burmese NMT method based on multilingual joint training. Under the Transformer framework, abundant Chinese-English parallel data is jointly trained with Chinese-Burmese and English-Burmese data; during training, Chinese, English, and Burmese are mapped into the same semantic space at both the encoder and the decoder to reduce the impact of structural differences on the shared vocabulary, and parameters trained on shared Chinese-English data compensate for the scarcity of Chinese-Burmese data. Experiments show clear BLEU improvements over the baseline for Chinese-English, English-Burmese, and Chinese-Burmese in one-to-many and many-to-many settings.

pdf
基于跨语言双语预训练及Bi-LSTM的汉-越平行句对抽取方法(Chinese-Vietnamese Parallel Sentence Pair Extraction Method Based on Cross-lingual Bilingual Pre-training and Bi-LSTM)
Chang Liu (刘畅) | Shengxiang Gao (高盛祥) | Zhengtao Yu (余正涛) | Yuxin Huang (黄于欣) | Congcong You (尤丛丛)

Chinese-Vietnamese parallel sentence pair extraction is an important way to ease the scarcity of Chinese-Vietnamese parallel corpora. Parallel sentence extraction can be cast as sentence-similarity classification in a shared semantic space, whose core is bilingual semantic-space alignment. Traditional alignment methods rely on large parallel corpora, which are hard to obtain for a low-resource language like Vietnamese. To address this, the paper proposes a Chinese-Vietnamese parallel sentence extraction method based on cross-lingual bilingual pre-training with a seed dictionary and Bi-LSTM (Bi-directional Long Short-Term Memory). Pre-training needs only large monolingual Chinese and Vietnamese data plus a Chinese-Vietnamese seed dictionary, which maps the two languages into a common semantic space for word alignment. Bi-LSTM and CNN (Convolutional Neural Networks) then extract global and local sentence features respectively to maximize the semantic correlation between Chinese-Vietnamese sentence pairs. Experiments show the model improves F1 by 7.1%, outperforming the baseline model.

pdf
基于子词级别词向量和指针网络的朝鲜语句子排序(Korean Sentence Ordering Based on Subword-Level Word Vectors and Pointer Network)
Xiaodong Yan (闫晓东) | Xiaoqing Xie (解晓庆)

Sentence ordering is an important task in multi-document summarization and machine reading comprehension; ordering quality directly affects the coherence and readability of summaries and answers. This paper adapts deep learning methods widely used for Chinese and English to Korean, whose words show rich morphological variation, and proposes a Korean sentence ordering model based on subword-level word vectors and pointer networks, aiming to overcome the inability of traditional methods to mine deep semantic information. It proposes a morpheme-segmentation-based word vector training method (MorV) and compares it with subword n-gram vectors (SG) to obtain Korean word vectors; two sentence-vector methods, one CNN-based and one LSTM-based, are combined with a pointer network in the experiments. Results show that combining MorV with LSTM sentence vectors better captures the semantic-logical relations between sentences and improves sentence ordering.

pdf
基于统一模型的藏文新闻摘要(Abstractive Summarization of Tibetan News Based on Hybrid Model)
Xiaodong Yan (闫晓东) | Xiaoqing Xie (解晓庆) | Yu Zou (邹煜) | Wei Li (李维)

Seq2seq neural models have achieved good results in Chinese and English text summarization, but research on low-resource languages, especially Tibetan, is still at an exploratory stage, and no large-scale annotated corpus exists for summarization. This paper proposes a unified model for generating summaries of Tibetan news. The TextRank algorithm is used to address the lack of annotated Tibetan training data. A two-layer bidirectional GRU network then extracts the sentences that represent the original news, reducing redundant information. Finally, an attention-based Seq2Seq model generates an abstractive summary, with a pointer network added to handle out-of-vocabulary words. Experiments show the ROUGE-1 score improves by 2% over traditional models.
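
The TextRank step that substitutes for annotated training data can be sketched as PageRank over a sentence-similarity graph. Toy English input is used below; real Tibetan text would need appropriate segmentation.

```python
# TextRank-style extractive ranking: PageRank over TF-IDF cosine similarities.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def textrank(sentences, top_k=2):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    graph = nx.from_numpy_array(cosine_similarity(tfidf))  # weighted graph
    scores = nx.pagerank(graph)
    return sorted(range(len(sentences)), key=scores.get, reverse=True)[:top_k]

docs = ["news event happened today", "details about the event", "the weather"]
print(textrank(docs))   # indices of the highest-ranked sentences
```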

pdf
蒙古文拼写形式多样化现象研究(A Study of Spelling Variety of Mongolian)
Shuangcheng Bai (白双成) | Sile Hu (呼斯勒)

Mongolian text exhibits a special phenomenon rarely seen in other scripts: a word's visible glyphs are correct, but its underlying code sequence is not; in other words, the sequence of presentation forms is correct while the sequence of nominal characters is wrong. We call this the spelling-variety phenomenon of Mongolian. This paper first defines the phenomenon and related concepts, then demonstrates its reality and severity from multiple angles: simple diagrams, exhaustive listing of example word spellings, statistical analysis of news corpora, and whole-article annotation statistics. It analyzes the underlying causes and points out the serious impact of spelling variety on Mongolian information processing and applications, and finally proposes several remedies: popularizing input norms and standards to raise user awareness, using intelligent input methods to avoid mis-entry, using proofreading and correction tools afterward, and statistical learning from raw corpora as a complement. The paper is a useful reference for popularizing the standard Mongolian encoding.

pdf
面向司法领域的高质量开源藏汉平行语料库构建(A High-quality Open Source Tibetan-Chinese Parallel Corpus Construction of Judicial Domain)
Jiu Sha (沙九) | Luqin Zhou (周鹭琴) | Chong Feng (冯冲) | Hongzheng Li (李洪政) | Tianfu Zhang (张天夫) | Hui Hui (慧慧)

Judicial-domain Tibetan-Chinese machine translation faces severe data scarcity. This paper studies the problem from two angles. First, compared with the general domain, judicial Tibetan requires stricter logical expression and more technical terms, yet Tibetan resources lack corresponding judicial corpora, domain terms, and syntactic structures. Second, the special lexical expressions and syntactic structures of Tibetan make general corpus construction methods unsuitable for building Tibetan-Chinese parallel corpora. This paper therefore proposes a lightweight construction method for judicial-domain Tibetan-Chinese parallel corpora. First, we manually annotate a medium-scale judicial Tibetan-Chinese term table as a prior knowledge base, avoiding the logical-expression problems and missing domain terms that arise from crossing domain boundaries. Second, we collect real case data, such as judgment documents, from the official websites of local courts nationwide, prioritizing Tibetan sources over Chinese so that special lexical expressions and sentence structures are not lost when Tibetan sentences are constructed later. Following these principles, we build a high-quality Tibetan-Chinese parallel corpus through web crawling, rule-based section-alignment checking, sentence boundary detection, and automatic corpus cleaning. We finally construct a judicial-domain Tibetan-Chinese corpus of 160,000 pairs, and verify its quality and robustness with several translation models and cross-validation experiments. The corpus will be open-sourced for use by researchers.

pdf
一种基于相似度的藏文词同现网络构建及特征分析(A Research on Construction and Feature Analysis of Similarity-based Tibetan Word Co-occurrence Networks)
Dongzhou Jiayang (加羊东周) | Zhijie Cai (才智杰) | Zhuoma Cairang (才让卓玛) | Maocuo San (三毛措)

Language and writing are the crystallization of human wisdom and civilization, a complex system formed through long evolution. Word co-occurrence networks use complex-network techniques to study linguistic features and reveal the internal structural relations of a language. This paper analyzes the module structure of similarity-based co-occurrence networks and proposes a similarity-based method for constructing Tibetan word co-occurrence networks, with words as nodes and edges connecting similar words. Using this method, co-occurrence networks are built on large, medium, and small documents and their statistical properties are analyzed; the experimental data show that all of the constructed Tibetan word co-occurrence networks exhibit the small-world effect and scale-free properties.

pdf
《动词句法语义信息词典》知识内容说明书(An Introduction to the Syntactic-Semantic Knowledge-Base of Chinese Verbs)
Yulin Yuan (袁毓林) | Hong Cao (曹宏)

This paper first introduces the goals, structure, and content of the Content Word Information Dictionary (《实词信息词典》), focusing on the architecture and theoretical background of its verb component, the Verb Information Dictionary (《动词信息词典》). It then introduces the 8 verb subclasses the dictionary distinguishes and their definitions, the 22 semantic roles defined for verbs and their definitions, the roughly 20 syntactic frames arising from different configurations of these roles together with example sentences, and the 9 major grammatical functions of verbs examined along with their degrees of membership in the verb class. Finally, it shows screenshots of the dictionary's retrieval interface and describes the corresponding print edition.

pdf
面向中文AMR标注体系的兼语语料库构建及识别研究(Research on the Construction and Recognition of Concurrent corpus for Chinese AMR Annotation System)
Wenhui Hou (侯文惠) | Weiguang Qu (曲维光) | Tingxin Wei (魏庭新) | Bin Li (李斌) | Yanhui Gu (顾彦慧) | Junsheng Zhou (周俊生)

The pivotal (jianyu) construction is a common verb construction in Chinese, in which a verb-object phrase and a subject-predicate phrase share the pivot; its structure is complex and complicates syntactic analysis, so corpus construction and recognition of pivotal constructions matter for semantic parsing and downstream tasks. However, existing pivotal corpora are scarce, and no pivotal corpus yet exists for the Chinese AMR annotation scheme. Addressing this, the paper formulates a set of annotation guidelines for pivotal constructions and builds a pivotal corpus of some scale for the Chinese AMR scheme. Based on the constructed corpus, a character-based neural network model is used to recognize pivotal constructions, and the recognition results and directions for future improvement are analyzed and summarized.

pdf
面向人工智能伦理计算的中文道德词典构建方法研究(Construction of a Chinese Moral Dictionary for Artificial Intelligence Ethical Computing)
Hongrui Wang (王弘睿) | Chang Liu (刘畅) | Dong Yu (于东)

Building moral dictionary resources is a research focus of AI ethical computing. Moral behavior is complex and diverse, existing English moral dictionary taxonomies are imperfect, and no comparable Chinese resource yet exists; both the theoretical framework and the construction method remain to be explored. Addressing these problems, this paper proposes the task of building a Chinese moral dictionary for AI ethical computing, designs four label classes and four types, and obtains a Chinese moral dictionary of 25,012 words. Experiments show the resource not only lets machines learn moral knowledge and judge words' moral labels and types, but also provides data support for sentence-level moral text analysis.

pdf
汉语否定焦点识别研究:数据集与基线系统(Research on Chinese Negative Focus Identification: Dataset and Baseline)
Jiaxuan Sheng (盛佳璇) | Bowei Zou (邹博伟) | Longxiang Shen (沈龙骧) | Jing Ye (叶静) | Yu Hong (洪宇)

Natural-language text contains abundant expressions of negation. Negation focus identification, a finer-grained analysis of negation semantics, has recently begun to attract NLP researchers' attention. The task identifies the text span modified and emphasized by a negation word, and matters for downstream tasks such as sentiment analysis and opinion mining. Compared with English, research on Chinese negation focus identification has progressed slowly, mainly because no Chinese dataset exists to provide training and test data. To solve this, we annotated negation focus on the Chinese Negation and Speculation corpus, made an initial exploration of the linguistic phenomena of negation focus in Chinese, and built a dataset of 5,762 samples. We also present a neural-network baseline system as a reference for subsequent research.

pdf
面向医学文本处理的医学实体标注规范(Medical Entity Annotation Standard for Medical Text Processing)
Huan Zhang (张欢) | Yuan Zong (宗源) | Baobao Chang (常宝宝) | Zhifang Sui (穗志方) | Hongying Zan (昝红英) | Kunli Zhang (张坤丽)

With the spread of smart healthcare, demand for NLP-based recognition of medical information keeps growing. At present, shared medical corpora for medical entities remain a blank, which greatly hinders progress on medical text processing tasks. How should different medical entity classes be distinguished? How should the scope of different entities be delimited? These questions have left the field without the kind of large-scale, consistently annotated medical text data available in the general domain. Addressing these problems, and referring to the semantic types defined in UMLS, this paper proposes a medical entity annotation standard for medical text processing, covering nine entity types including diseases, clinical manifestations, medical procedures, and medical equipment, and builds an annotated medical entity corpus based on the standard. The paper reviews the standard's description scheme, classification principles, handling of confusable cases, the corpus annotation process, and baseline experiments on automatic medical entity annotation, hoping to provide a reference annotation standard for building medical entity corpora and corpus support for medical entity recognition.

pdf
汉语块依存语法与树库构建(Chinese Chunk-Based Dependency Grammar and Treebank construction)
Qingqing Qian (钱青青) | Chengwen Wang (王诚文)

This study builds a chunk dependency treebank according to the predicate-centered chunk-based dependency grammar: within and across clauses it finds the chunks governed by each predicate, uses the dependency relations between chunks to fill in elided parts, and makes predicate government explicit. To date, 2,199 texts have been annotated, covering the encyclopedia and news domains, about 1.87 million characters in total. The paper outlines the principles of chunk-based dependency grammar, defines chunks and their dependency relations, and describes the annotation workflow, inter-annotator agreement, and data distribution in detail. Based on the current treebank, the study finds that about 25% of Chinese clauses are not self-sufficient, and about 88% of core predicates govern one to three dependents.

pdf
汉语学习者依存句法树库构建(Construction of a Treebank of Learner Chinese)
Jialu Shi (师佳璐) | Xinyu Luo (罗昕宇) | Liner Yang (杨麟儿) | Dan Xiao (肖丹) | Zhengsheng Hu (胡正声) | Yijun Wang (王一君) | Jiaxin Yuan (袁佳欣) | Yu Jingsi (余婧思) | Erhong Yang (杨尔弘)

A dependency treebank of learner Chinese provides dependency analyses for non-native text; it can support second-language teaching and research, and also matters for syntactic parsing and grammatical error correction targeted at second-language text. However, existing learner Chinese dependency treebanks are few, and their annotation still has problems. This paper therefore improves the dependency annotation guidelines, builds an online annotation platform, and carries out dependency annotation of learner Chinese. The paper focuses on data selection and the annotation workflow, analyzes the quality of the annotation results, and explores how L2 errors affect annotation quality and syntactic parsing.

pdf
CDCPP:跨领域中文标点符号预测(CDCPP: Cross-Domain Chinese Punctuation Prediction)
Pengyuan Liu (刘鹏远) | Weikang Wang (王伟康) | Likun Qiu (邱立坤) | Bingjie Du (杜冰洁)

Punctuation greatly aids text understanding. At present, punctuation in Chinese text, especially in the social-media and question-answering domains, is often wrong or missing, which seriously harms semantic analysis, machine translation, and other NLP tasks. Existing punctuation prediction research mostly targets speech transcripts of English dialogue; there is little work on social-media and QA text, and no public datasets exist for these domains. This paper first proposes the task of cross-domain Chinese punctuation prediction: build a punctuation prediction model from large-scale news text, whose punctuation is basically correct and standard, and then predict punctuation cross-domain in the social-media and QA domains, where punctuation is non-standard. We then construct corresponding datasets for the news, social-media, and QA domains. Finally, we implement a BERT-based punctuation prediction baseline and run experiments and analysis on the datasets. The results show that a model trained on news degrades in both the social-media and QA domains, slightly in QA but by more than 20% on Weibo; cross-domain punctuation prediction is clearly a challenging task.
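
A baseline of this kind can be sketched as BERT token classification, assuming one punctuation label is predicted after each character; the model name and label set below are illustrative, not the paper's exact configuration.

```python
# Punctuation prediction as per-character token classification with BERT.
from transformers import BertTokenizerFast, BertForTokenClassification

labels = ["O", "，", "。", "？"]               # assumed label inventory
tok = BertTokenizerFast.from_pretrained("bert-base-chinese")
model = BertForTokenClassification.from_pretrained(
    "bert-base-chinese", num_labels=len(labels))   # needs fine-tuning

enc = tok("今天天气很好我们出去玩吧", return_tensors="pt")
logits = model(**enc).logits                   # (1, seq_len, num_labels)
pred = logits.argmax(-1)[0].tolist()
print([labels[i] for i in pred[1:-1]])         # skip [CLS]/[SEP]
```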

pdf
多目标情感分类中文数据集构建及分析研究(Construction and Analysis of Chinese Multi-Target Sentiment Classification Dataset)
Pengyuan Liu (刘鹏远) | Yongsheng Tian (田永胜) | Chengyu Du (杜成玉) | Likun Qiu (邱立坤)

Target-level sentiment classification aims to determine the sentiment toward a specific opinion target in a sentence. A review sentence often contains multiple targets, whose sentiments may or may not agree. In existing evaluation datasets for the task, however, (1) most sentences contain a single target, and (2) in the few multi-target sentences, the distribution of target sentiments is very unbalanced, with agreeing sentiments strongly dominating. These dataset flaws limit the room for models to improve on multi-target sentiment classification. This paper therefore builds a Chinese dataset for multi-target sentiment classification, manually annotating 6,339 opinion targets in 2,071 items. The dataset is balanced (1) in the number of targets per sentence, (2) in positive/negative polarity, and (3) in the distribution of multi-target sentiments. Mainstream multi-target sentiment classification models are then compared on the dataset. The results show that existing mainstream models still cannot classify targets well in instances with multiple targets of inconsistent sentiment, especially when a target's sentiment is neutral; multi-target sentiment classification remains difficult and challenging.

pdf
基于Self-Attention的句法感知汉语框架语义角色标注(Syntax-Aware Chinese Frame Semantic Role Labeling Based on Self-Attention)
Xiaohui Wang (王晓晖) | Ru Li (李茹) | Zhiqiang Wang (王智强) | Qinghua Chai (柴清华) | Xiaoqi Han (韩孝奇)

Frame Semantic Role Labeling (FSRL) is a semantic analysis task based on the FrameNet annotation scheme. Semantic role labeling usually depends strongly on syntax. Most current SRL models are based on bidirectional LSTM (Bi-LSTM), which captures long-distance dependencies in a sentence but does not capture its syntactic information well. We therefore introduce a self-attention mechanism to capture the syntactic information of each word in the sentence. Experiments show the model reaches an F1 of 83.77% on the CFN (Chinese FrameNet) dataset, an improvement of nearly 11%.

pdf
基于词语聚类的汉语口语教材自动推送素材研究(Study on Automatic Push Material of Oral Chinese Textbook Based on Word Clustering)
Bingbing Yang (杨冰冰) | Huizhou Zhao (赵慧周) | Zhimin Wang (王治敏)

The spread of COVID-19 has made online mobile teaching an inevitable trend in education. Taking spoken-Chinese materials suitable for automatic textbook delivery as the object of study, this paper quantitatively analyzes the overall lexical characteristics of 10,341 daily-life spoken utterances, then clusters all the words using word-vector models and the KMeans algorithm. With reference to the clustering results and an examination of the topics and scenes in the corpus, it builds a spoken-Chinese topic-scene material base with 15 first-level topics, 102 second-level topics, and 81 communicative scenes, and summarizes the common words of each topic level. The work provides resource support for material bases used in automatic textbook customization.
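
The clustering step can be sketched with KMeans over word vectors; random vectors stand in here for the trained word2vec model, and sizes are illustrative.

```python
# KMeans clustering of word vectors into first-level topic groups.
import numpy as np
from sklearn.cluster import KMeans

word_vecs = np.random.rand(500, 100)   # 500 vocabulary items, 100-dim vectors
km = KMeans(n_clusters=15, n_init=10).fit(word_vecs)  # 15 first-level topics
print(dict(enumerate(np.bincount(km.labels_))))       # words per cluster
```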

pdf
基于半监督学习的中文社交文本事件聚类方法(Semi-supervised Method to Cluster Chinese Events on Social Streams)
Hengrui Guo (郭恒睿) | Zhongqing Wang (王中卿) | Peifeng Li (李培峰) | Qiaoming Zhu (朱巧明)

Event clustering on social media aims to cluster short texts by event features. Current event clustering models are either unsupervised or supervised: unsupervised models cluster poorly, while supervised models depend on large amounts of labeled data. This paper therefore proposes a semi-supervised event clustering model (SemiEC) that, starting from a small amount of labeled data, represents events with an LSTM, computes text similarity with a linear model, performs incremental clustering, retrains the model on the data labeled by incremental clustering, and finally re-clusters the uncertain samples. Experiments show that SemiEC outperforms the other models.
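
The incremental clustering loop can be sketched as follows: assign each event embedding to its most similar centroid, or open a new cluster when similarity falls below a threshold. The LSTM encoder, the linear similarity model, and the retraining step from the abstract are omitted in this simplification.

```python
# Threshold-based incremental clustering over (pre-computed) embeddings.
import numpy as np

def incremental_cluster(embeddings, threshold=0.8):
    centroids, assignments = [], []
    for e in embeddings:
        e = e / np.linalg.norm(e)                    # unit-normalize
        sims = [c @ e for c in centroids]            # cosine to each centroid
        if sims and max(sims) >= threshold:
            assignments.append(int(np.argmax(sims)))
        else:                                        # open a new cluster
            centroids.append(e)
            assignments.append(len(centroids) - 1)
    return assignments

print(incremental_cluster(np.random.randn(10, 32)))
```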

pdf
基于多粒度语义交互理解网络的幽默等级识别(A Multi-Granularity Semantic Interaction Understanding Network for Humor Level Recognition)
Jinhui Zhang (张瑾晖) | Shaowu Zhang (张绍武) | Xiaochao Fan (樊小超) | Liang Yang (杨亮) | Hongfei Lin (林鸿飞)

Humor plays an important role in daily communication. With the rapid development of artificial intelligence, humor level recognition has become a hot research problem in NLP. Existing work on humor level recognition usually treats the humorous text as a whole, ignoring the semantic relations inside it. This paper treats humor level recognition as a natural language inference task, divides the humorous text into a "set-up" and a "punchline", models their semantics and semantic relation, and proposes a multi-granularity semantic interaction understanding network that captures semantic association and interaction at the word and clause granularities. Experiments on the public Reddit humor dataset improve accuracy by 1.3% over the previous best result, showing that introducing the internal semantic relations of humor improves recognition performance, and that the proposed model models such relations well.

pdf
文本情感分析中的重叠现象研究(A Study on Repetition in Text-based Sentiment Analysis)
Tuya Naren (娜仁图雅) | Xiaoyin Xu (徐晓音)

Chinese is rich in reduplication, and text-based sentiment analysis should pay close attention to reduplication and its interactions within the discourse space. This paper discusses theoretically the forms, characteristics, and sentiment-marking functions of reduplication in text, analyzes the forms and affective semantics of morphological and structural reduplication, and, on the basis of this study, discusses several practical applications of reduplication in text sentiment analysis.

pdf
基于BiLSTM-CRF的社会突发事件研判方法(Social Emergency Event Judgement based on BiLSTM-CRF)
Huijun Hu (胡慧君) | Cong Wang (王聪) | Jianhua Dai (代建华) | Maofu Liu (刘茂福)

Classifying social emergencies and judging their severity is an obviously important link in emergency response. However, most current research identifies the evidence for such judgments manually or with rules, which is very limited given the structural complexity of social emergencies and the flexibility of their linguistic description. Borrowing the idea of event extraction, this paper treats the event type and the judgment evidence as event elements and recognizes them at fine granularity with BiLSTM-CRF; the two are then combined, with classification results fed into severity judgment to identify the judgment evidence. Finally, severity is judged by combining the recognized evidence with an attention mechanism, so that precise evidence recognition strengthens judgment accuracy. Experiments show the proposed method is more robust than manual or rule-based evidence identification and performs well on social emergency judgment.

pdf
结合金融领域情感词典和注意力机制的细粒度情感分析(Attention-based Recurrent Network Combined with Financial Lexicon for Aspect-level Sentiment Classification)
Qinglin Zhu (祝清麟) | Bin Liang (梁斌) | Liuyu Han (刘宇瀚) | Yi Chen (陈奕) | Ruifeng Xu (徐睿峰) | Ruibin Mao (毛瑞彬)

Entity-level sentiment analysis in the financial domain often lacks sufficient labeled data, and general sentiment analysis models struggle with financial text. This paper builds a million-scale corpus for financial entity-level sentiment analysis and annotates more than five thousand financial sentiment words as a financial sentiment lexicon. Based on this dataset, it proposes a fine-grained sentiment analysis model for financial text that combines the financial sentiment lexicon with an attention mechanism. The model uses two LSTM networks to extract word-level semantic information and word-class-level information after lexicon-based classification, effectively capturing the features of financial-domain words. In addition, to give financial sentiment words in the text more attention, a lexicon-based attention mechanism acquires the important sentiment information for each entity. Experiments on the constructed financial entity-level corpus achieve better results than the comparison models.

pdf
基于层次注意力机制和门机制的属性级别情感分析(Aspect-level Sentiment Analysis Based on Hierarchical Attention and Gate Networks)
Chao Feng (冯超) | Haihui Li (黎海辉) | Hongya Zhao (赵洪雅) | Yun Xue (薛云) | Jingyao Tang (唐靖尧)

In recent years, aspect-level sentiment analysis, a fine-grained task, has received increasing attention in industry and academia; it aims to identify the sentiment polarity of each of the multiple aspect words in a sentence. Most current work on the problem focuses on designing attention mechanisms to highlight the contributions of different words in the context and in the aspect, while relating the two. This paper proposes handling aspect-level sentiment analysis with hierarchical attention and gate networks: after obtaining the hidden states of the aspect words, attention yields a new aspect representation, which together with attention further yields a new context representation; the hierarchical attention design makes the context and aspect representations more accurate, while the gate mechanism selects the context information useful for the aspect, enriching the context representation. Experiments on SemEval 2014 Task 4 and a Twitter dataset show the effectiveness of the proposed model.

pdf
基于循环交互注意力网络的问答立场分析(A Recurrent Interactive Attention Network for Answer Stance Analysis)
Wangda Luo (骆旺达) | Yuhan Liu (刘宇瀚) | Bin Liang (梁斌) | Ruifeng Xu (徐睿峰)

Existing methods for the answer stance task struggle to extract the dependencies between question and answer texts. This paper proposes an answer stance analysis method based on a Recurrent Interactive Attention (RIA) network. Imitating the way humans think during reading comprehension, it uses an interactive attention mechanism and recurrent iteration to effectively mine stance information from the mutual relations of question and answer. The method also converts questions into declarative form, effectively solving the problem that a question phrased interrogatively cannot clearly express its own stance. Experiments show the method outperforms existing models and fits the question-answer dependencies of the answer stance analysis task well.

pdf
新型冠状病毒肺炎相关的推特主题与情感研究(Exploring COVID-19-related Twitter Topic Dynamics across Countries)
Shuailong Liang (梁帅龙) | Derek F. Wong (黄辉) | Yue Zhang (张岳)

Based on 500,000 tweets posted in different countries and regions from January 22, 2020 to April 30, 2020, collected from Twitter, we study COVID-19-related topics and people's opinions. We find both similarities and differences in the common concerns and views of Twitter users across countries, as well as differing sentiment toward different issues. Most tweets carry strong emotion, with expressions of love and support fairly common. Overall, people's sentiment grows more positive over time.

pdf
融入多尺度特征注意力的胶囊神经网络及其在文本分类中的应用(Incorporating Multi-scale Feature Attention into Capsule Network and its Application in Text Classification)
Chaofan Wang (王超凡) | Shenggen Ju (琚生根) | Jieping Sun (孙界平) | Run Chen (陈润)

In recent years, capsule networks (Capsnets) have been applied to text classification for their strong text-feature learning ability. Most existing work treats the extracted n-gram features as equally important, ignoring the fact that the importance of each n-gram feature of a word should be determined by the specific context, which directly affects the model's semantic understanding of the whole text. This paper proposes the multi-scale feature partially connected capsule network (MulPart-Capsnets), which incorporates multi-scale feature attention into Capsnets: the attention automatically selects n-gram features of different scales and, via weighted summation, precisely captures rich n-gram features for each word. To reduce redundant information transfer between child and parent capsules, the routing algorithm is also improved. The proposed algorithm is validated on seven well-known text classification datasets, with significant improvements over existing work, showing that it captures richer n-gram features in text and has a stronger text-feature learning ability.

pdf
结合深度学习和语言难度特征的句子可读性计算方法(The method of calculating sentence readability combined with deep learning and language difficulty characteristics)
Yuling Tang (唐玉玲) | Dong Yu (于东)

This paper proposes an improved method of readability corpus construction and uses it to build a larger Chinese sentence readability corpus. On the absolute sentence-difficulty evaluation task, the corpus reaches an accuracy of 0.7869, more than 0.15 above previous work, proving the effectiveness of the improved method. Applying deep learning to Chinese readability assessment, we explore how well different deep learning methods automatically capture difficulty features, and further explore how injecting linguistic difficulty features at different levels into the deep-learning features affects overall model performance. Results show that different deep models capture difficulty features with differing ability, and that linguistic difficulty features improve the models' difficulty representation to varying degrees.

pdf
基于预训练语言模型的案件要素识别方法(A Method for Case Factor Recognition Based on Pre-trained Language Models)
Haishun Liu (刘海顺) | Lei Wang (王雷) | Yanguang Chen (陈彦光) | Shuchen Zhang (张书晨) | Yuanyuan Sun (孙媛媛) | Hongfei Lin (林鸿飞)

Case factor recognition automatically extracts the important fact descriptions from a case description and classifies them according to a factor scheme designed by domain experts; it is an important research topic in intelligent justice. Text encoders based on traditional neural networks struggle to extract deep features, and threshold-based multi-label classification struggles to capture label dependencies, so this paper proposes a multi-label text classification model based on pre-trained language models: a language model with layer-attentive feature fusion serves as the encoder, and an LSTM-based sequence generation model serves as the decoder. On the CAIL2019 dataset, the method improves F1 by up to 7.6% over RNN-based algorithms and by about 3.2% over the base language model (BERT) under the same hyperparameter settings.

pdf
基于拼音约束联合学习的汉语语音识别(Chinese Speech Recognition Based on Pinyin Constraint Joint Learning)
Renfeng Liang (梁仁凤) | Zhengtao Yu (余正涛) | Shengxiang Gao (高盛祥) | Yuxin Huang (黄于欣) | Junjun Guo (郭军军) | Shuli Xu (许树理)

Current speech recognition models already perform well on phonographic languages such as English and French. Chinese, however, is a typical ideographic language: characters have no direct correspondence with sounds, but pinyin, the phonetic notation of characters, is interconvertible with them. We therefore use pinyin as a decoding constraint in Chinese speech recognition, introducing an inductive bias closer to speech. Within a multi-task learning framework, we propose a Chinese speech recognition method based on joint pinyin-constrained learning: end-to-end character recognition is the main task and pinyin recognition the auxiliary task; through a shared encoder supervised by both the character and the pinyin recognition results, the encoder's ability to represent Chinese speech is strengthened. Experiments show the proposed method outperforms the baseline model, reducing the word error rate (WER) by 2.24 percentage points.

pdf
基于数据增强和多任务特征学习的中文语法错误检测方法(Chinese Grammar Error Detection based on Data Enhancement and Multi-task Feature Learning)
Haihua Xie (谢海华) | Zhiyou Chen (陈志优) | Jing Cheng (程静) | Xiaoqing Lyu (吕肖庆) | Zhi Tang (汤帜)

Because of the complexity of Chinese grammar, Chinese grammatical error detection (CGED) is difficult, and the lack of training data and related research keeps CGED far from practical use. This paper proposes a CGED model that combines data augmentation, pre-trained language models, and multi-task learning over linguistic features to compensate for scarce training data. Data augmentation effectively enlarges the training set; pre-trained language models carry rich semantic information helpful for grammatical analysis; and fine-tuning the language model with linguistics-feature multi-task learning lets it learn the linguistic features relevant to grammatical error detection. Tested on the NLPTEA CGED dataset, the proposed method outperforms other models.

pdf
基于有向异构图的发票明细税收分类方法(Tax Classification of Invoice Details Based on Directed Heterogeneous Graph)
Peiyao Zhao (赵珮瑶) | Qinghua Zheng (郑庆华) | Bo Dong (董博) | Jianfei Ruan (阮建飞) | Minnan Luo (罗敏楠)

Taxation is the material basis on which a state depends. To modernize taxation and let taxpayers issue VAT invoices conveniently and correctly, the State Taxation Administration requires taxpayers to select the tax category corresponding to each invoice line item in the tax-control system before an invoice can be issued. Improving the accuracy of tax classification is an important basis for building tax-risk indicators and analyzing the behavioral characteristics of taxpayers. This paper therefore proposes a short-text classification model based on a directed heterogeneous graph (Heterogeneous Directed Graph Attention Network, HDGAT), which models the directed information between invoice line items and introduces external knowledge, significantly improving the tax-classification accuracy of invoice line items.

pdf
半监督跨领域语义依存分析技术研究(Semi-supervised Domain Adaptation for Semantic Dependency Parsing)
Dazhan Mao (毛达展) | Huayong Li (李华勇) | Yanqiu Shao (邵艳秋)

In recent years, although deep learning has brought great progress to semantic dependency parsing, annotating semantic dependency data is very expensive, and a parser that performs well in a single domain degrades sharply when transferred to other domains; solving domain adaptation is therefore necessary for practical use. This paper proposes a new adversarial-learning-based domain adaptation model for dependency parsing: a shared dual-encoder structure based on adversarial learning, with domain-private auxiliary tasks and orthogonality constraints. We also investigate the effect and performance of various pre-trained models on cross-domain dependency parsing.

pdf
汉英篇章衔接对齐语料构建研究(Research on the Construction of Chinese-English Discourse Cohesion Alignment Corpus)
Yancui Li (李艳翠) | Jike Feng (冯继克) | Chunxiao Lai (来纯晓) | Hongyu Feng (冯洪玉)

Discourse cohesion analysis is fundamental to understanding discourse, and Chinese and English differ in the main cohesion devices of reference, conjunction, and ellipsis. This paper aims to create a Chinese-English discourse cohesion alignment corpus. It presents an annotation strategy for Chinese-English cohesion alignment covering clauses, connectives, reference, and ellipsis, builds a corpus resource containing the corresponding alignment information, and finally evaluates the annotated corpus and discusses the difficulties encountered in annotation and their solutions. Quality evaluation of the corpus and preliminary experimental results show that the annotation strategy is practical and that the consistency of the annotated resource meets actual needs.

pdf
Cross-Lingual Dependency Parsing via Self-Training
Meishan Zhang | Yue Zhang

Recent advances in multilingual word representations weaken the input divergences across languages, making cross-lingual transfer similar to the monolingual cross-domain and semi-supervised settings. Thus self-training, which is effective in these settings, could possibly benefit the cross-lingual setting as well. This paper presents the first comprehensive study of self-training in cross-lingual dependency parsing. Three instance selection strategies are investigated: two are based on the baseline dependency parsing model, and the third adopts an auxiliary cross-lingual POS tagging model as evidence. We conduct experiments on the universal dependencies of eleven languages. Results show that self-training can boost dependency parsing performance on the target languages, and that POS-tagger-assisted instance selection achieves consistent further improvements. Detailed analysis examines the potential of self-training in depth.

pdf
A Joint Model for Graph-based Chinese Dependency Parsing
Xingchen Li | Mingtong Liu | Yujie Zhang | Jinan Xu | Yufeng Chen

In Chinese dependency parsing, the joint model of word segmentation, POS tagging and dependency parsing has become the mainstream framework because it can eliminate error propagation and share knowledge, where the transition-based model with feature templates maintains the best performance. Recently, the graph-based joint model of word segmentation and dependency parsing (Yan et al., 2019) has achieved better performance, demonstrating the advantages of graph-based models. However, that work cannot provide POS information for downstream tasks, and the POS tagging task was proved helpful to dependency parsing in research on transition-based models. Therefore, we propose a graph-based joint model for Chinese word segmentation, POS tagging and dependency parsing. We design a character-level POS tagging task and train it jointly with the model of Yan et al. (2019). We adopt two methods of joining the POS tagging task, one sharing parameters and the other using a tag attention mechanism, which enables the three tasks to better share intermediate information and improve each other's performance. Experimental results on the Penn Chinese Treebank (CTB5) show that our joint model improves dependency parsing by 0.38% over the model of Yan et al. (2019). Compared with the best transition-based joint model, our model improves word segmentation, POS tagging and dependency parsing by 0.18%, 0.35% and 5.99% respectively.

pdf
Semantic-aware Chinese Zero Pronoun Resolution with Pre-trained Semantic Dependency Parser
Lanqiu Zhang | Zizhuo Shen | Yanqiu Shao

Deep learning-based Chinese zero pronoun resolution models have achieved better performance than traditional machine learning-based models. However, existing work on Chinese zero pronoun resolution has not yet well integrated linguistic information into deep learning-based models. This paper adopts an approach based on pre-trained models and integrates the semantic representations of a pre-trained Chinese semantic dependency graph parser into the Chinese zero pronoun resolution model. Experimental results on the OntoNotes-5.0 dataset show that our model with the pre-trained Chinese semantic dependency parser improves the F-score by 0.4% over our baseline model and obtains better results than other deep learning-based Chinese zero pronoun resolution models. In addition, we integrate BERT representations into our model, improving its performance by 0.7% over the baseline.

pdf
Improving Sentence Classification by Multilingual Data Augmentation and Consensus Learning
Yanfei Wang | Yangdong Chen | Yuejie Zhang

Neural network based models have achieved impressive results on the sentence classification task. However, most previous work focuses on designing more sophisticated networks or effective learning paradigms on monolingual data, which often suffers from insufficient discriminative knowledge for classification. In this paper, we investigate improving sentence classification by multilingual data augmentation and consensus learning. Compared with previous methods, our model can make use of multilingual data generated by machine translation and mine their language-shared and language-specific knowledge for better representation and classification. We evaluate our model using English (source language) and Chinese (target language) data on several sentence classification tasks, and our proposed model achieves very positive classification performance.

pdf
Attention-Based Graph Neural Network with Global Context Awareness for Document Understanding
Yuan Hua | Zheng Huang | Jie Guo | Weidong Qiu

Information extraction from documents such as receipts or invoices is a fundamental and crucial step for office automation. Many approaches focus on extracting entities and relationships from plain text; however, when it comes to document images, the task becomes quite challenging, since visual and layout information are also of great significance in tackling this problem. In this work, we propose an attention-based graph neural network to combine textual and visual information from document images. Moreover, a global node is introduced in our graph construction algorithm, used as a virtual hub to collect information from all the nodes and edges and help improve performance. Extensive experiments on real-world datasets show that our method outperforms baseline methods by significant margins.

pdf
Combining Impression Feature Representation for Multi-turn Conversational Question Answering
Shaoling Jing | Shibo Hong | Dongyan Zhao | Haihua Xie | Zhi Tang

Multi-turn conversational Question Answering (ConvQA) is a practical task that requires understanding of the conversation history, such as previous QA pairs, the passage context, and the current question; it can be applied to a variety of human-machine dialogue scenarios. The major challenge of this task is to make the model consider the relevant conversation history while understanding the passage. Existing methods usually simply prepend the history to the current question or use complicated mechanisms to model the history. This article proposes an impression feature, which uses a word-level inter-attention mechanism to learn multi-oriented information from the conversation history to the input sequence: attention from history tokens to each token of the input sequence, history-turn inter-attention from different history turns to each token of the input sequence, and self-attention within the input sequence, where the input sequence contains the current question and a passage. A feature selection method is then designed to enhance the useful history turns of the conversation and weaken the unnecessary information. Finally, we demonstrate the effectiveness of the proposed method on the QuAC dataset, analyze the impact of different feature selection methods, and verify the validity and reliability of the proposed features through visualization and human evaluation.

pdf
Chinese Long and Short Form Choice Exploiting Neural Network Language Modeling Approaches
Lin Li | Kees van Deemter | Denis Paperno

This paper presents our work on long and short form choice, a significant question of lexical choice that plays an important role in many Natural Language Understanding tasks. Long and short forms, which share at least one identical word meaning but differ in number of syllables, are a highly frequent linguistic phenomenon in Chinese, like 老虎-虎 (laohu-hu, tiger)

pdf
Refining Data for Text Generation
Qianying Liu | Tianyi Li | Wenyu Guan | Sujian Li

Recent work on data-to-text generation has made progress under neural encoder-decoder architectures. However, the data input size is often enormous, not all data records are important for text generation, and inappropriate input may bring noise into the final output. To solve this problem, we propose a two-step approach that first selects and orders the important data records and then generates text from the noise-reduced data. We propose a learning-to-rank model, supervised by a relation extractor, to rank the importance of each record. With the noise-reduced data as input, we implement a text generator that sequentially models the input data records and emits a summary. Experiments on the ROTOWIRE dataset verify the effectiveness of our proposed method in both performance and efficiency.

pdf
Plan-CVAE: A Planning-based Conditional Variational Autoencoder for Story Generation
Lin Wang | Juntao Li | Rui Yan | Dongyan Zhao

Story generation is a challenging task of automatically creating natural language to describe a sequence of events, which requires outputting text with not only a consistent topic but also novel wordings. Although many approaches have been proposed and obvious progress has been made on this task, there is still large room for improvement, especially in thematic consistency and wording diversity. To mitigate the gap between generated stories and those written by human writers, in this paper we propose a planning-based conditional variational autoencoder, namely Plan-CVAE, which first plans a keyword sequence and then generates a story based on it. In our method, the keyword planning strategy improves thematic consistency, while the CVAE module enhances wording diversity. Experimental results on a benchmark dataset confirm that our proposed method can generate stories with both thematic consistency and wording novelty, and that it outperforms state-of-the-art methods on both automatic metrics and human evaluations.

pdf
Towards Causal Explanation Detection with Pyramid Salient-Aware Network
Xinyu Zuo | Yubo Chen | Kang Liu | Jun Zhao

Causal explanation analysis (CEA) can help us understand the reasons behind daily events and has been found very helpful for understanding the coherence of messages. In this paper, we focus on Causal Explanation Detection, an important subtask of causal explanation analysis, which determines whether a causal explanation exists in a message. We design a Pyramid Salient-Aware Network (PSAN) to detect causal explanations in messages. PSAN assists causal explanation detection by capturing the salient semantics of discourses contained in their keywords with a bottom graph-based word-level salient network. Furthermore, PSAN modifies the dominance of discourses via a top attention-based discourse-level salient network to enhance the explanatory semantics of messages. Experiments on the commonly used CEA dataset show that PSAN outperforms the state-of-the-art method by 1.8% F1 on the Causal Explanation Detection task.

pdf
Named Entity Recognition with Context-Aware Dictionary Knowledge
Chuhan Wu | Fangzhao Wu | Tao Qi | Yongfeng Huang

Named entity recognition (NER) is an important task in the natural language processing field. Existing NER methods heavily rely on labeled data for model training, and their performance on rare entities is usually unsatisfactory. Entity dictionaries can cover many entities including both popular ones and rare ones, and are useful for NER. However, many entity names are context-dependent and it is not optimal to directly apply dictionaries without considering the context. In this paper, we propose a neural NER approach which can exploit dictionary knowledge with contextual information. We propose to learn context-aware dictionary knowledge by modeling the interactions between the entities in dictionaries and their contexts via context-dictionary attention. In addition, we propose an auxiliary term classification task to predict the types of the matched entity names, and jointly train it with the NER model to fuse both contexts and dictionary knowledge into NER. Extensive experiments on the CoNLL-2003 benchmark dataset validate the effectiveness of our approach in exploiting entity dictionaries to improve the performance of various NER models.

pdf
Chinese Named Entity Recognition via Adaptive Multi-pass Memory Network with Hierarchical Tagging Mechanism
Pengfei Cao | Yubo Chen | Kang Liu | Jun Zhao

Named entity recognition (NER) aims to identify text spans that mention named entities and classify them into pre-defined categories. For the Chinese NER task, most existing methods are character-based sequence labeling models and achieve great success. However, these methods usually ignore lexical knowledge, which leads to false prediction of entity boundaries, and they have difficulty capturing tag dependencies. In this paper, we propose an Adaptive Multi-pass Memory Network with Hierarchical Tagging Mechanism (AMMNHT) to address all of the above problems. Specifically, to reduce errors in predicting entity boundaries, we propose an adaptive multi-pass memory network to exploit lexical knowledge; in addition, we propose a hierarchical tagging layer to learn tag dependencies. Experimental results on three widely used Chinese NER datasets demonstrate that our proposed model significantly outperforms other state-of-the-art methods.

pdf
A Practice of Tourism Knowledge Graph Construction based on Heterogeneous Information
Dinghe Xiao | Nannan Wang | Jiangang Yu | Chunhong Zhang | Jiaqi Wu

The increasing amount of semi-structured and unstructured data on tourism websites brings a need for information extraction (IE) to construct a Tourism-domain Knowledge Graph (TKG), which helps manage tourism information and develop downstream applications such as tourism search engines, recommendation, and Q&A. However, existing TKGs are deficient, and there are few open methods to promote the construction and widespread application of TKGs. In this paper, we present a systematic framework to build a TKG for Hainan, collecting data from popular tourism websites and structuring it into triples. The data is multi-source and heterogeneous, which raises a great challenge for processing. We therefore develop two processing pipelines, for semi-structured data and unstructured data respectively. We refer to tourism InfoBoxes for semi-structured knowledge extraction and leverage deep learning algorithms to extract entities and relations from unstructured travel notes, which are colloquial and noisy, and then fuse the knowledge extracted from the two sources. Finally, a TKG with 13 entity types and 46 relation types is established, containing 34,079 entities and 441,371 triples in total. The systematic procedure proposed in this paper can construct a TKG from tourism websites, which can be further applied to many scenarios and provides a detailed reference for the construction of other domain-specific knowledge graphs.

pdf
A Novel Joint Framework for Multiple Chinese Events Extraction
Nuo Xu | Haihua Xie | Dongyan Zhao

Event extraction is an essential yet challenging task in information extraction. Previous approaches have paid little attention to the problem of role overlap, which is a common phenomenon in practice. To solve this problem, this paper defines event relation triples to explicitly represent the relations among triggers, arguments and roles, which are incorporated into the model to learn their inter-dependencies; the task of argument extraction is thereby converted to event relation triple extraction. A novel joint framework for multiple Chinese event extraction is proposed, which jointly predicts event triggers and arguments based on shared feature representations from a pre-trained language model. Experimental comparison with state-of-the-art baselines on the ACE 2005 dataset shows the superiority of the proposed method in both trigger classification and argument classification.

pdf
Entity Relative Position Representation based Multi-head Selection for Joint Entity and Relation Extraction
Tianyang Zhao | Zhao Yan | Yunbo Cao | Zhoujun Li

Joint entity and relation extraction has received increasing interest recently, due to its capability of utilizing the interactions between the two steps. Among existing studies, the Multi-Head Selection (MHS) framework is efficient in extracting entities and relations simultaneously, but its performance is limited. In this paper, we propose several effective improvements to address this problem. First, we propose an entity-specific Relative Position Representation (eRPR) to allow the model to fully leverage the distance information between entities and context tokens. Second, we introduce an auxiliary Global Relation Classification (GRC) task to enhance the learning of local contextual features. Moreover, we improve the semantic representation by adopting the pre-trained language model BERT as the feature encoder. Finally, these new components are closely integrated with the multi-head selection framework and optimized jointly. Extensive experiments on two benchmark datasets demonstrate that our approach overwhelmingly outperforms previous works in terms of all evaluation metrics, achieving significant relation F1 improvements of +2.40% on CoNLL04 and +1.90% on ACE05.

pdf
A Mixed Learning Objective for Neural Machine Translation
Wenjie Lu | Leiying Zhou | Gongshen Liu | Quanhai Zhang

Evaluation discrepancy and the overcorrection phenomenon are two common problems in neural machine translation (NMT). NMT models are generally trained with a word-level learning objective but evaluated by sentence-level metrics. Moreover, the cross-entropy loss function discourages the model from generating synonymous predictions and overcorrects them to ground-truth words. To address these two drawbacks, we adopt multi-task learning and propose a mixed learning objective (MLO), which combines the strengths of word-level and sentence-level evaluation without modifying the model structure. At the word level, it calculates the semantic similarity between predicted and ground-truth words; at the sentence level, it computes probabilistic n-gram matching scores of the generated translations. We also combine a loss-sensitive scheduled sampling decoding strategy with MLO to explore its extensibility. Experimental results on the IWSLT 2016 German-English and WMT 2019 English-Chinese datasets demonstrate that our methodology significantly improves translation quality. The ablation study shows that both the word-level and the sentence-level learning objectives improve BLEU scores. Furthermore, MLO is compatible with state-of-the-art scheduled sampling methods and achieves further improvement.

pdf
Multi-Reward based Reinforcement Learning for Neural Machine Translation
Shuo Sun | Hongxu Hou | Nier Wu | Ziyue Guo | Chaowei Zhang

Reinforcement learning (RL) has made remarkable progress in neural machine translation (NMT), but it suffers from uneven sampling distribution, sparse rewards, and high variance in the training phase. We therefore propose a multi-reward reinforcement learning training strategy to decouple action selection from value estimation. Our method also combines language-model rewards to jointly optimize the model parameters, and adds Gumbel noise in sampling to obtain more effective semantic information. To verify the robustness of our method, we conducted experiments not only on large corpora but also on low-resource languages. Experimental results show that our work is superior to the baselines on the WMT14 English-German, LDC2014 Chinese-English and CWMT2018 Mongolian-Chinese tasks, which fully certifies the effectiveness of our method.

pdf
Low-Resource Text Classification via Cross-lingual Language Model Fine-tuning
Xiuhong Li | Zhe Li | Jiabao Sheng | Wushour Slamu

Text classification tends to be difficult when data are inadequate, considering the amount of manually labeled text corpora available. Low-resource agglutinative languages including Uyghur, Kazakh, and Kyrgyz (UKK languages) build words from stems concatenated with several suffixes, with stems representing text content; this allows an unbounded vocabulary of derivatives, leading to high uncertainty of written forms and huge redundant features. The major challenges of low-resource agglutinative text classification are the lack of labeled data in a target domain and the morphological diversity of derivations in language structures. Fine-tuning a pre-trained language model is an effective solution that provides meaningful, easy-to-use feature extractors for downstream text classification tasks. To this end, we propose AgglutiFiT, a low-resource agglutinative language model fine-tuning approach: specifically, we build a low-noise fine-tuning dataset by morphological analysis and stem extraction, then fine-tune a cross-lingual pre-trained model on this dataset. Moreover, we propose an attention-based fine-tuning strategy that better selects relevant semantic and syntactic information from the pre-trained language model and uses those features for downstream text classification. We evaluate our methods on nine Uyghur, Kazakh, and Kyrgyz classification datasets, where they perform significantly better than several strong baselines.

pdf
Constructing Uyghur Name Entity Recognition System using Neural Machine Translation Tag Projection
Anwar Azmat | Li Xiao | Yang Yating | Dong Rui | Osman Turghun

Although named entity recognition has achieved great success with the introduction of neural networks, it is challenging to apply these models to low-resource languages such as Uyghur, because they depend on large amounts of annotated training data, and constructing a well-annotated named entity corpus manually is very time-consuming and labor-intensive. Most existing methods are based on parallel corpora combined with word alignment tools, but word alignment inevitably introduces alignment errors. In this paper, we address this problem with a named entity tag transfer method based on common neural machine translation: we mark the entity boundaries in the Chinese sentence and translate the sentences to Uyghur with a neural machine translation system, expecting the self-attention mechanism of neural machine translation to align the source and target entities. The experimental results show that a Uyghur named entity recognition system trained on the constructed corpus achieves good performance on the test set, with a 73.80% F1 score (a 3.79% improvement over the baseline).
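
The marker-based projection idea can be sketched with explicit boundary tokens: wrap the source entities, translate, and read the markers off the output. The translation step itself is omitted here; mark() and project() are hypothetical helpers, not the paper's code.

```python
# Entity-marker insertion before translation and span recovery afterward.
import re

def mark(sentence, spans):
    """spans: [(start, end, type), ...]; insert markers right-to-left."""
    for s, e, t in sorted(spans, reverse=True):
        sentence = sentence[:s] + f"<{t}>" + sentence[s:e] + f"</{t}>" + sentence[e:]
    return sentence

def project(translated):
    """Recover (entity, type) pairs from markers that survived translation."""
    return [(m.group(2), m.group(1))
            for m in re.finditer(r"<(\w+)>(.+?)</\1>", translated)]

marked = mark("阿里木去了乌鲁木齐", [(0, 3, "PER"), (5, 9, "LOC")])
print(marked)           # <PER>阿里木</PER>去了<LOC>乌鲁木齐</LOC>
print(project(marked))  # ideally the markers survive the NMT step intact
```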

pdf
Recognition Method of Important Words in Korean Text based on Reinforcement Learning
Yang Feiyang | Zhao Yahui | Cui Rongyi

The manual labeling required to construct Korean corpora is time-consuming and laborious, resources for low-resource minority languages are difficult to integrate, and as a result progress in Korean language information processing has been slow. From the perspective of representation learning, we combine reinforcement learning with traditional deep learning methods and, using Korean text classification performance as a benchmark, study how to extract the important Korean words in a sentence. We propose a structured model, Information Distilled of Korean (IDK), which inspects the words in a Korean sentence, retains the important ones, and deletes the unimportant ones, thereby turning sentence reconstruction into a sequential decision problem to which the Policy Gradient method of reinforcement learning can be applied. The results show that the model can identify the important words in Korean for representation learning without manual annotation. Furthermore, compared with traditional text classification methods, the model also improves Korean text classification.
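
The abstract does not specify the policy architecture; a minimal REINFORCE-style sketch of the keep/delete decision process (all names and the reward definition are our assumptions) is:

```python
import torch
import torch.nn as nn

class KeepDeletePolicy(nn.Module):
    """Policy network: for each word, decide keep (1) or delete (0)."""
    def __init__(self, emb_dim: int, hidden: int):
        super().__init__()
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, word_embs):                  # (batch, seq, emb_dim)
        h, _ = self.rnn(word_embs)
        return torch.distributions.Categorical(logits=self.head(h))

def reinforce_loss(policy_dist, actions, reward):
    """REINFORCE: maximize reward-weighted log-likelihood of sampled actions.
    The reward could be, e.g., the classifier's confidence on the distilled
    sentence, as suggested by the classification-as-benchmark setup."""
    log_prob = policy_dist.log_prob(actions).sum(dim=-1)   # (batch,)
    return -(reward * log_prob).mean()
```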

pdf
Mongolian Questions Classification Based on Multi-Head Attention
Guangyi Wang | Feilong Bao | Weihua Wang

Question classification is a crucial subtask in question answering systems. Mongolian is a low-resource language: it lacks public labeled corpora, and the complex morphological structure of Mongolian vocabulary causes data sparsity. This paper proposes a classification model that combines a Bi-LSTM with a Multi-Head Attention mechanism, which extracts relevant information from different dimensions and representation subspaces. Following the characteristics of Mongolian word formation, we introduce a Mongolian morpheme representation in the embedding layer, where the morpheme vector focuses on the semantics of the Mongolian word. Character vectors and morpheme vectors are concatenated into word vectors, which are fed to the Bi-LSTM to obtain a context representation; finally, Multi-Head Attention aggregates global information for classification. The model is evaluated on a Mongolian corpus, and experimental results show that it significantly outperforms the baseline systems.
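
A minimal sketch of this pipeline follows, under the simplifying assumption that each word already has one character-level id and one morpheme id (in the paper, the character vector would itself be composed from the word's characters):

```python
import torch
import torch.nn as nn

class MongolianQuestionClassifier(nn.Module):
    def __init__(self, n_chars, n_morphemes, emb, hidden, n_classes, heads=4):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, emb)
        self.morph_emb = nn.Embedding(n_morphemes, emb)
        self.bilstm = nn.LSTM(2 * emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, char_ids, morph_ids):
        # Concatenate character- and morpheme-level vectors into word vectors
        x = torch.cat([self.char_emb(char_ids), self.morph_emb(morph_ids)], dim=-1)
        h, _ = self.bilstm(x)                  # contextual representation
        a, _ = self.attn(h, h, h)              # global information via self-attention
        return self.fc(a.mean(dim=1))          # mean-pool then classify
```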

pdf
The Annotation Scheme of English-Chinese Clause Alignment Corpus
Shili Ge | Xiaopin Lin | Rou Song

A clause complex consists of clauses connected by component-sharing relations and logic-semantic relations; clause-complex-level structural transformations in translation therefore concern the adjustment of how these two types of relations are expressed. In this paper, a formal scheme for tagging structural transformations in English-Chinese translation is designed. The annotation scheme includes three steps operated on two grammatical levels: parsing an English clause complex into constructs and assembling the construct translations on the clause-complex level, and translating the constructs independently on the clause level. The assembling step involves two operations: performing operation functions and inserting Chinese words. The corpus annotation shows that it is feasible to divide structural transformations in English-Chinese translation into these two levels. The corpus, which formally unfolds the operations of clause-complex-level structural transformations, should help improve end-to-end translation of complicated sentences.

pdf
Categorizing Offensive Language in Social Networks: A Chinese Corpus, Systems and an Explainable Tool
Xiangru Tang | Xianjun Shen

Recently, more and more data filled with offensive language, such as threats, swear words, or straightforward insults, has been generated online. This is disgraceful for a progressive society, and the question arises of how language resources and technologies can cope with the challenge. However, previous work analyzes the problem only as a whole and fails to detect particular types of offensive content in a more fine-grained way, mainly because of the lack of annotated data. In this work, we present a densely annotated data-set COLA

pdf
LiveQA: A Question Answering Dataset over Sports Live
Qianying Liu | Sicong Jiang | Yizhong Wang | Sujian Li

In this paper, we introduce LiveQA, a new question answering dataset constructed from play-by-play live broadcasts. It contains 117k multiple-choice questions written by human commentators for over 1,670 NBA games, collected from the Chinese Hupu website. Owing to the characteristics of sports games, LiveQA can test reasoning ability across timeline-based live broadcasts, which is challenging compared to existing datasets: the questions require understanding the timeline, tracking events, or doing mathematical computations. Our preliminary experiments show that the dataset poses a challenging problem for question answering models; a strong baseline model achieves an accuracy of only 53.1% and cannot beat the dominant-option rule. We release the code and data of this paper for future research.

pdf
Chinese and English Elementary Discourse Units Segmentation based on Bi-LSTM-CRF Model
Li Yancui | Lai Chunxiao | Feng Jike | Feng Hongyu

Elementary Discourse Unit (EDU) recognition is the basic task of discourse analysis, and a Chinese-English discourse-aligned corpus is helpful for studying it. This paper first builds a Chinese-English parallel discourse corpus in which EDUs are annotated and aligned. We then present a Bi-LSTM-CRF EDU recognition model using word embedding, POS, and syntactic features, which combines the advantages of CRF and Bi-LSTM. The results show that its F1 is about 2% higher than that of the traditional method; compared with CRF and Bi-LSTM alone, the Bi-LSTM-CRF model combines their advantages and obtains satisfactory results for both Chinese and English EDU recognition. A feature-contribution experiment shows that using all features together gives the best result, with the syntactic feature outperforming the other features.
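
A minimal sketch of such a tagger, assuming the word-embedding, POS, and syntactic features are pre-concatenated into one vector per token and using the third-party pytorch-crf package for the CRF layer:

```python
import torch
import torch.nn as nn
from torchcrf import CRF   # pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    """Bi-LSTM-CRF EDU boundary tagger over concatenated
    word-embedding / POS / syntactic feature vectors."""
    def __init__(self, feat_dim: int, hidden: int, num_tags: int):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, feats, tags):            # feats: (batch, seq, feat_dim)
        emissions = self.emit(self.bilstm(feats)[0])
        return -self.crf(emissions, tags)   # negative log-likelihood

    def predict(self, feats):
        emissions = self.emit(self.bilstm(feats)[0])
        return self.crf.decode(emissions)   # best tag sequence per sentence
```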

pdf
Better Queries for Aspect-Category Sentiment Classification
Li Yuncong | Yin Cunxiang | Zhong Sheng-hua | Zhong Huiqiang | Luo Jinchang | Xu Siqi | Wu Xiaohui

Aspect-category sentiment classification (ACSC) aims to identify the sentiment polarities expressed towards the aspect categories mentioned in a sentence. Because a sentence often mentions more than one aspect category and expresses different sentiment polarities towards them, finding the aspect category-related information in the sentence is the key challenge in accurately recognizing the sentiment polarity. Most previous models take both the sentence and the aspect category as input and query aspect category-related information based on the aspect category. However, these models represent the aspect category as a context-independent vector called an aspect embedding, which may not be effective enough as a query. In this paper, we propose two contextualized aspect category representations, the Contextualized Aspect Vector (CAV) and the Contextualized Aspect Matrix (CAM). Specifically, we use the coarse aspect category-related information found by the aspect category detection task to generate the CAV or CAM. The CAV or CAM is then used as a query, in place of the aspect embedding, to search for fine-grained aspect category-related information in aspect-category sentiment classification models. In experiments, we integrate the proposed CAV and CAM into several representative aspect embedding-based aspect-category sentiment classification models. Experimental results on the SemEval-2014 Restaurant Review dataset and the Multi-Aspect Multi-Sentiment dataset demonstrate the effectiveness of CAV and CAM.
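
The construction of the CAV is described only at a high level; one plausible sketch (shapes and names hypothetical) reuses the detection task's attention weights to pool the sentence's hidden states into a context-dependent query:

```python
import torch

def contextualized_aspect_vector(hidden, detect_attn):
    """hidden:      (seq, dim)  contextual word representations
    detect_attn:    (seq,)      attention weights from the aspect-category
                                detection task, for one aspect category
    Returns a context-dependent query vector for that aspect category."""
    return torch.einsum("s,sd->d", detect_attn, hidden)

hidden = torch.randn(6, 8)
attn = torch.softmax(torch.randn(6), dim=0)
cav = contextualized_aspect_vector(hidden, attn)   # shape: (8,)
```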

pdf
Multimodal Sentiment Analysis with Multi-perspective Fusion Network Focusing on Sense Attentive Language
Xia Li | Minping Chen

Multimodal sentiment analysis aims to learn a joint representation of multiple modalities. As previous studies have shown, the language modality may contain more semantic information than the others. Based on this observation, we propose a Multi-perspective Fusion Network (MPFN) focusing on sense-attentive language for multimodal sentiment analysis. Different from previous studies, we use the language modality as the main part of the final joint representation, and propose a multi-stage and uni-stage fusion strategy to obtain a fusion representation of the multiple modalities that assists the final language-dominated multimodal representation. In our model, a Sense-Level Attention Network dynamically learns word representations guided by the fusion of the multiple modalities; in turn, the learned language representation helps the multi-stage and uni-stage fusion of the different modalities. In this way, the model jointly learns a well-integrated final representation that focuses on the language modality and on the interactions between the modalities at both the multi-stage and uni-stage levels. Experiments carried out on the CMU-MOSI, CMU-MOSEI, and YouTube public datasets show that our model achieves better or competitive results compared with the baseline models.
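
The Sense-Level Attention Network is not specified in the abstract; purely as an illustration, a simplified reading in which a fused multimodal vector re-weights candidate sense vectors of each word could look like:

```python
import torch
import torch.nn as nn

class SenseLevelAttention(nn.Module):
    """Re-weight candidate sense vectors of each word, guided by a fused
    multimodal vector (a simplified, hypothetical reading of the model)."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, sense_vecs, fused):
        # sense_vecs: (seq, n_senses, dim); fused: (dim,) multimodal guide vector
        scores = torch.einsum("swd,d->sw", self.proj(sense_vecs), fused)
        attn = torch.softmax(scores, dim=-1)                  # (seq, n_senses)
        return torch.einsum("sw,swd->sd", attn, sense_vecs)   # sense-attentive words
```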

pdf
CAN-GRU: a Hierarchical Model for Emotion Recognition in Dialogue
Ting Jiang | Bing Xu | Tiejun Zhao | Sheng Li

Emotion recognition in dialogue systems has gained attention in natural language processing in recent years, because it can be applied to opinion mining over public conversational data on social media. In this paper, we propose a hierarchical model to recognize emotions in dialogue. In the first layer, to extract textual features of utterances, we propose a convolutional self-attention network (CAN): convolution captures n-gram information, and the attention mechanism captures the relevant semantic information among the words in an utterance. In the second layer, a GRU-based network captures contextual information in the conversation; we also discuss the effects of unidirectional versus bidirectional networks. We conduct experiments on the Friends and EmotionPush datasets. The results show that our proposed model (CAN-GRU) and its variants achieve better performance than the baselines.
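
A minimal sketch of the two-layer hierarchy, with hyperparameters and pooling choices as our assumptions:

```python
import torch
import torch.nn as nn

class CANGRU(nn.Module):
    """Utterance encoder (convolution + self-attention) followed by a
    dialogue-level GRU, mirroring the hierarchy described above."""
    def __init__(self, emb: int, channels: int, hidden: int, n_emotions: int):
        super().__init__()
        self.conv = nn.Conv1d(emb, channels, kernel_size=3, padding=1)  # n-grams
        self.attn = nn.MultiheadAttention(channels, num_heads=1, batch_first=True)
        self.gru = nn.GRU(channels, hidden, batch_first=True)           # context
        self.fc = nn.Linear(hidden, n_emotions)

    def forward(self, utterances):
        # utterances: (n_utts, seq, emb) -- the utterances of one dialogue
        x = self.conv(utterances.transpose(1, 2)).transpose(1, 2)
        x, _ = self.attn(x, x, x)                 # intra-utterance attention
        utt_vecs = x.mean(dim=1).unsqueeze(0)     # (1, n_utts, channels)
        ctx, _ = self.gru(utt_vecs)               # conversation-level context
        return self.fc(ctx.squeeze(0))            # emotion logits per utterance
```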

pdf
A Joint Model for Aspect-Category Sentiment Analysis with Shared Sentiment Prediction Layer
Yuncong Li | Zhe Yang | Cunxiang Yin | Xu Pan | Lunan Cui | Qiang Huang | Ting Wei

Aspect-category sentiment analysis (ACSA) aims to predict the aspect categories mentioned in a text and their corresponding sentiment polarities. Several joint models have been proposed for this task: given a text, they detect all the aspect categories mentioned in it and predict the sentiment polarities toward them at once. Although these joint models obtain promising performance, they train separate parameters for each aspect category and therefore suffer from the data deficiency of some aspect categories. To solve this problem, we propose a novel joint model containing a shared sentiment prediction layer, which transfers sentiment knowledge between aspect categories and alleviates the data-deficiency problem. Experiments conducted on the SemEval-2016 datasets demonstrate the effectiveness of our model.
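
The key idea can be sketched as follows (a simplified reading with hypothetical names): each aspect category keeps its own detection head, but a single sentiment head is shared by all categories, so every category's training signal updates the same sentiment parameters:

```python
import torch
import torch.nn as nn

class JointACSA(nn.Module):
    """Per-category detection heads plus one sentiment head whose
    parameters are shared across all aspect categories."""
    def __init__(self, dim: int, n_categories: int, n_polarities: int):
        super().__init__()
        self.detect = nn.ModuleList(nn.Linear(dim, 1) for _ in range(n_categories))
        self.shared_sentiment = nn.Linear(dim, n_polarities)  # shared layer

    def forward(self, category_reps):
        # category_reps: (n_categories, batch, dim) -- one representation per category
        presence = [torch.sigmoid(h(r)) for h, r in zip(self.detect, category_reps)]
        sentiment = [self.shared_sentiment(r) for r in category_reps]
        return torch.stack(presence), torch.stack(sentiment)
```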

pdf
Compress Polyphone Pronunciation Prediction Model with Shared Labels
Pengfei Chen | Lina Wang | Hui Di | Kazushige Ouchi | Lvhong Wang

It is well known that deep learning models have huge numbers of parameters and are computationally expensive, especially on embedded and mobile devices. Polyphone pronunciation selection is a basic function for Chinese Text-to-Speech (TTS) applications, and a recurrent neural network (RNN) is a good sequence-labeling solution for it; however, its many parameters and heavy computation make compression necessary. In contrast to existing quantization with low-precision data formats and projection layers, we propose a novel method based on shared labels, which compresses the fully-connected layer before the Softmax in models with a huge number of labels, as in TTS polyphone selection. The basic idea is to compress the large number of target labels into a few label clusters that share the parameters of the fully-connected layer. We further combine this with other methods to compress the polyphone pronunciation selection model. The experimental results show that for Bi-LSTM (Bidirectional Long Short-Term Memory) based polyphone selection, the shared-labels model reduces the original model size by about 52% and accelerates prediction by 44%, almost without performance loss. The proposed method can also be applied to other tasks to compress the model and accelerate computation.
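
The clustering criterion is not given in the abstract; the sketch below (with a placeholder random mapping) only illustrates how a label-to-cluster table lets many labels share one row of the output layer:

```python
import torch
import torch.nn as nn

class SharedLabelOutput(nn.Module):
    """Output layer in which each of n_labels is mapped to one of n_clusters,
    and all labels in a cluster share the same fully-connected parameters."""
    def __init__(self, hidden: int, n_labels: int, n_clusters: int):
        super().__init__()
        # In practice label_to_cluster would come from an offline clustering
        # of the labels; a random mapping stands in for it here.
        self.register_buffer("label_to_cluster",
                             torch.randint(0, n_clusters, (n_labels,)))
        self.fc = nn.Linear(hidden, n_clusters)   # n_clusters << n_labels

    def forward(self, h):
        cluster_logits = self.fc(h)                         # (batch, n_clusters)
        return cluster_logits[:, self.label_to_cluster]     # (batch, n_labels)
```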

pdf
Multi-task Legal Judgement Prediction Combining a Subtask of Seriousness of Charge
Xu Zhuopeng | Li Xia | Li Yinlin | Wang Zihan | Fanxu Yujie | Lai Xiaoyan

Legal judgement prediction has attracted more and more attention in recent years. One challenge is how to design a model with more interpretable predictions. Previous studies have proposed different interpretable models based on the generation of court views and the extraction of charge keywords. Different from previous work, we propose a multi-task legal judgement prediction model that adds a subtask predicting the seriousness of the charges. By introducing this subtask, our model can capture the attention weights of the different terms of penalty corresponding to the charges and pay more attention to the correct term of penalty in the fact description. Our model also incorporates the position of the defendant, making it capable of attending to the contextual information around the defendant. We carry out several experiments on the public CAIL2018 dataset. Experimental results show that our model achieves better or comparable performance on three subtasks compared with the baseline models, and we analyze the interpretable contributions of our model.
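
A minimal sketch of the multi-task objective, under the assumption (not stated in the abstract) of a shared fact encoder with one linear head per task and a weighted auxiliary loss:

```python
import torch
import torch.nn as nn

class MultiTaskJudgement(nn.Module):
    """Shared fact representation with heads for charge, term of penalty,
    and the auxiliary seriousness-of-charge subtask."""
    def __init__(self, dim, n_charges, n_terms, n_seriousness):
        super().__init__()
        self.charge = nn.Linear(dim, n_charges)
        self.term = nn.Linear(dim, n_terms)
        self.seriousness = nn.Linear(dim, n_seriousness)   # auxiliary subtask
        self.ce = nn.CrossEntropyLoss()

    def loss(self, fact_vec, y_charge, y_term, y_serious, alpha=0.5):
        # Joint objective: main tasks plus the weighted auxiliary subtask
        return (self.ce(self.charge(fact_vec), y_charge)
                + self.ce(self.term(fact_vec), y_term)
                + alpha * self.ce(self.seriousness(fact_vec), y_serious))
```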

pdf
Clickbait Detection with Style-aware Title Modeling and Co-attention
Chuhan Wu | Fangzhao Wu | Tao Qi | Yongfeng Huang

Clickbait is a form of web content designed to attract attention and entice users to click on specific hyperlinks, and detecting it is an important task for online platforms seeking to improve the quality of web content and the satisfaction of users. Clickbait detection is typically formulated as a binary classification task over the title and body of a webpage, and existing methods mainly rely on the content of the title and the relevance between title and body. However, these methods ignore the stylistic patterns of titles, which provide important clues for identifying clickbait, and they do not consider the interactions between the contexts of title and body, which are very important for measuring their relevance. In this paper, we propose a clickbait detection approach with style-aware title modeling and co-attention. Specifically, we use Transformers to learn content representations of the title and body, and compute a content-based clickbait score for each from its representation. In addition, we use a character-level Transformer to learn a style-aware title representation that captures the stylistic patterns of the title, from which we compute a title stylistic score. Furthermore, a co-attention network models the relatedness between the contexts of title and body and enhances their representations by encoding the interaction information; from the interaction-enhanced representations we compute a title-body matching score. The final clickbait score is predicted by a weighted summation of these four scores. Extensive experiments on two benchmark datasets show that our approach effectively improves the performance of clickbait detection and consistently outperforms many baseline methods.
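
The weighting scheme is not detailed in the abstract; a minimal sketch of the final combination step, assuming learned normalized weights and a sigmoid squashing, is:

```python
import torch
import torch.nn as nn

class ClickbaitScorer(nn.Module):
    """Combine the four scores described above (title content, body content,
    title style, title-body matching) with learned weights."""
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.ones(4))

    def forward(self, s_title, s_body, s_style, s_match):
        # each input: (batch,) score from one of the four components
        scores = torch.stack([s_title, s_body, s_style, s_match], dim=-1)
        weights = torch.softmax(self.w, dim=0)             # normalized weights
        return torch.sigmoid((weights * scores).sum(-1))   # final clickbait score
```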

pdf
Knowledge-Enabled Diagnosis Assistant Based on Obstetric EMRs and Knowledge Graph
Kunli Zhang | Xu Zhao | Lei Zhuang | Qi Xie | Hongying Zan

The obstetric Electronic Medical Record (EMR) contains a large amount of medical data and health information and plays a vital role in improving the quality of diagnosis assistant services. In this paper, we treat diagnosis assistance as a multi-label classification task and propose a Knowledge-Enabled Diagnosis Assistant (KEDA) model for obstetrics. We utilize the numerical information in EMRs and external knowledge from the Chinese Obstetric Knowledge Graph (COKG) to enhance the text representation of EMRs. Specifically, a bidirectional maximum matching method and a similarity-based approach are used to obtain the set of entities contained in the EMRs and link them to the COKG. The final knowledge representation is obtained by a weight-based disease prediction algorithm and fused with the text representation through linear weighting. Experimental results show that our approach brings a +3.53 F1 improvement over the strong BERT baseline on the diagnosis assistant task.
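
The exact weighting is not given in the abstract; one simple reading of the fusion step, with a single learned mixing coefficient (our assumption), is:

```python
import torch
import torch.nn as nn

class LinearFusion(nn.Module):
    """Fuse the EMR text representation with the knowledge representation
    by a learned linear weighting, then predict disease labels."""
    def __init__(self, dim: int, n_diseases: int):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))
        self.classifier = nn.Linear(dim, n_diseases)

    def forward(self, text_rep, knowledge_rep):
        fused = self.alpha * text_rep + (1 - self.alpha) * knowledge_rep
        return torch.sigmoid(self.classifier(fused))  # multi-label probabilities
```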

pdf
Reusable Phrase Extraction Based on Syntactic Parsing
Xuemin Duan | Zan Hongying | Xiaojing Bai | Christoph Zähner

Academic Phrasebank is an important resource for academic writers: student writers use its phrases to organize their research articles and improve their writing. Due to its limited size, however, Academic Phrasebank cannot meet all academic writing needs, and a large amount of academic phraseology remains in authentic research articles. In this paper, we propose an academic phraseology extraction model based on constituency parsing and dependency parsing, which can automatically extract, from an unlabelled research article, academic phraseology similar to the phrases of Academic Phrasebank. The proposed model has three main components: an academic phraseology corpus module, a sentence simplification module, and a syntactic parsing module. We created a 2,129-word corpus of academic phraseology to help judge whether a word is neutral and general, and created two datasets under two scenarios to verify the feasibility of the proposed model.
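
The abstract does not spell out the extraction rules; purely as an illustration, a rough sketch in which neutral/general words are kept and content words are abstracted to their part of speech (using spaCy, with a tiny stand-in for the 2,129-word corpus) might look like:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm
GENERAL = {"study", "results", "show", "suggest", "this", "these",
           "that", "the", "of", "in"}   # stand-in for the 2,129-word corpus

def extract_frame(sentence: str) -> str:
    """Keep neutral/general words and abstract the rest to their POS,
    yielding an Academic-Phrasebank-style frame."""
    doc = nlp(sentence)
    out = []
    for tok in doc:
        if tok.lower_ in GENERAL or tok.is_punct:
            out.append(tok.text)
        else:
            out.append(f"<{tok.pos_}>")   # e.g. <NOUN>, <VERB>
    return " ".join(out)

print(extract_frame("The results of this experiment suggest a strong correlation."))
# e.g. "The results of this <NOUN> suggest <DET> <ADJ> <NOUN> ."
```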

pdf
WAE_RN: Integrating Wasserstein Autoencoder and Relational Network for Text Sequence
Xinxin Zhang | Xiaoming Liu | Guan Yang | Fangfang Li

One challenge in Natural Language Processing (NLP) is to learn semantic representations in different contexts. Recent work on pre-trained language models has received great attention and proven an effective technique. In spite of the success of pre-trained language models on many NLP tasks, the learned text representation only contains the correlations among the words in the sentence itself and ignores the implicit relationships between arbitrary tokens in the sequence. To address this problem, we focus on how to make our model effectively learn word representations that contain the relational information between any tokens of a text sequence. In this paper, we propose to integrate a relational network (RN) into a Wasserstein autoencoder (WAE): the WAE and the RN are used to better keep the semantic structure and to capture the relational information, respectively. Extensive experiments demonstrate that our proposed model achieves significant improvements over traditional Seq2Seq baselines.
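
How the RN is wired into the autoencoder is not given in the abstract; a minimal sketch of the pairwise relational module itself, following the general RN formulation of Santoro et al. (2017) over token states, is:

```python
import torch
import torch.nn as nn

class RelationalModule(nn.Module):
    """Relational network over token states: score every token pair with a
    shared MLP and sum the results into one relation vector."""
    def __init__(self, dim: int, out: int):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * dim, out), nn.ReLU())

    def forward(self, h):                        # h: (seq, dim) token states
        n = h.size(0)
        left = h.unsqueeze(1).expand(n, n, -1)   # token i
        right = h.unsqueeze(0).expand(n, n, -1)  # token j
        pairs = torch.cat([left, right], dim=-1) # all (i, j) pairs
        return self.g(pairs).sum(dim=(0, 1))     # aggregated relation vector
```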