Proceedings of the 21st Chinese National Conference on Computational Linguistics

Maosong Sun (孙茂松), Yang Liu (刘洋), Wanxiang Che (车万翔), Yang Feng (冯洋), Xipeng Qiu (邱锡鹏), Gaoqi Rao (饶高琦), Yubo Chen (陈玉博) (Editors)


Anthology ID: 2022.ccl-1
Month: October
Year: 2022
Address: Nanchang, China
Venue: CCL
Publisher: Chinese Information Processing Society of China
URL: https://aclanthology.org/2022.ccl-1
PDF: https://preview.aclanthology.org/auto-file-uploads/2022.ccl-1.pdf

中国语言学研究 70 年:核心期刊的词汇增长(70 Years of Linguistics Research in China: Vocabulary Growth of Core Journals)
Shan Wang (王珊) | Runzhe Zhan (詹润哲) | Shuangyun Yao (姚双云)

“Since the founding of the People's Republic, Chinese linguistics has achieved remarkable progress over 70 years of development. Existing studies mostly recount this process by reviewing major historical events, and quantitative analyses of its diachronic development are still lacking. This paper approaches the topic through vocabulary growth: we create the first large-scale diachronic corpus of abstracts from core Chinese linguistics journals and use the three major vocabulary growth models to predict vocabulary change in the corpus. We select the best-fitting Heaps model for an in-depth, stage-by-stage analysis of changes in the linguistics vocabulary, which reveals the guiding role of national policy and the characteristics of language life in particular eras. In addition, a time-order-independent validation procedure supports the validity of our method. Keywords: Chinese linguistics; vocabulary growth; core journals; abstracts; corpus; diachronic development”
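
The best-fitting model named in the abstract, the Heaps model, predicts vocabulary size V from token count n as V(n) = K·n^β. Below is a minimal sketch of recovering K and β by a log-log least-squares fit; the token/vocabulary counts are invented for illustration, not taken from the paper's corpus:

```python
import numpy as np

# Hypothetical (token_count, vocab_size) measurements from a growing corpus;
# the paper's actual journal-abstract corpus is not reproduced here.
n = np.array([1_000, 5_000, 20_000, 100_000, 500_000])
v = np.array([450, 1_600, 4_800, 16_000, 52_000])

# Heaps' law: V(n) = K * n**beta  =>  log V = log K + beta * log n,
# so a straight-line fit in log-log space recovers both parameters.
beta, log_k = np.polyfit(np.log(n), np.log(v), 1)
k = np.exp(log_k)
print(f"K = {k:.2f}, beta = {beta:.3f}")

# Extrapolate vocabulary growth to a larger corpus size.
print(f"V(1e6) = {k * 1_000_000**beta:,.0f} word types (predicted)")
```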

一个适合汉语的带有范畴转换的组合范畴语法(A Chinese-Suitable Combinatory Categorial Grammar with Categorial Conversions)
Qingjiang Wang (王庆江) | Shuxian Chen (陈淑娴)

“To make the categories of words and phrases in Chinese sentences correspond to the syntactic constituents they fill, we add categorial conversion rules to Combinatory Categorial Grammar (CCG). Categories of content words are classified as typical or atypical by frequency of use, and categories of phrase structures by whether they are derived through combination rules; conversion rules are then established for the content words and phrase structures inside phrase structures. As far as possible, no categorial rules are set up for function words; instead, content words or phrase structures are made to combine with function words through categorial conversion. The treebank shows that 35% of phrase-structure formations require categorial conversion, and that 99.67% of the immediate constituents of phrases using categorial conversion are content words or phrase structures. This set of conversion rules makes CCG suitable for Chinese, a language that lacks inflection.”

双重否定结构自动识别研究(The Research on Automatic Recognition of the Double Negation Structure)
Yu Wang (王昱) | Yulin Yuan (袁毓林)

“The double negation structure is a special construction that ‘expresses an affirmative meaning through two negations’, and it has an important impact on semantic judgment and sentiment classification in natural language processing. Taking ‘¬¬P ⇒ P’ as the criterion, this paper exhaustively examines all ‘negator + negator’ structures in Modern Chinese and classifies double negation structures by pattern into 3 major types and 25 subtypes, covering 132 common double negation structures or constructions. Drawing on theories of verb factivity, negation focus, and semantic versus pragmatic negation, we formulate three conditions under which a double negation structure holds, and accordingly design and implement a rule-based program for automatically recognizing double negation structures. The program achieves 98.85% precision, 98.90% recall, and 98.85% F1. It also extracted, from 96,281 corpus sentences, 8,640 sentences containing double negation structures at roughly 99% precision, providing possible corpus support for subsequent statistics-based deep learning models.”
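
As a toy illustration of the rule-based idea: the abstract's criterion is that a true double negation satisfies ¬¬P ⇒ P. The sketch below only flags candidate ‘negator + negator’ patterns using a hypothetical negator list; the paper's 132 constructions and its factivity and negation-focus conditions are not reproduced:

```python
import re

# Hypothetical negator inventory; the paper enumerates far more forms and
# applies semantic conditions (factivity, negation focus) that we skip here.
NEGATORS = ["不", "没", "没有", "未", "无", "非"]
PATTERN = re.compile(
    "(" + "|".join(NEGATORS) + ")[^。,,!?]{0,6}?(" + "|".join(NEGATORS) + ")"
)

def candidate_double_negation(sentence: str) -> bool:
    """Return True if the sentence contains a close 'negator + negator' pair."""
    return PATTERN.search(sentence) is not None

print(candidate_double_negation("他不得不去。"))  # True: 不...不
print(candidate_double_negation("他没有去。"))    # False: single negator
```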

单项形容词定语分布考察及“的”字隐现研究(Study on Distribution of Single Item Adjective Attributives and Appearance and Disappearance of “de”)
Rui Song (宋锐) | Zhimin Wang (王治敏)

“Taking 77,845 tokens of single-adjective attributives from People's Daily articles of 2019-2021 as the object of study, this paper examines, from a practical perspective, the distributional characteristics of adhesive-type (粘合式) and combinative-type (组合式) attributive tokens, their syllable collocation patterns, and the tendencies governing the appearance or omission of ‘de’ (的). We find that adhesive attributives have markedly fewer distinct tokens than combinative attributives, but are used 4-5 times more frequently. In both structures, adjectives and nouns are reused at high rates, yet the proportion of their co-occurring combinations is low. In real texts, the appearance of ‘de’ is polarized: for the vast majority of tokens there is a strong preference either to take ‘de’ or to omit it. The appearance of ‘de’ helps distinguish word senses and highlight information, while its omission condenses the semantics and further solidifies the sentence pattern, turning some patterns into specific-reference or metaphorical expressions. The paper provides evidence and reference points for lexical-semantic research on adjective attributive structures.”

基于GPT-2和互信息的语言单位信息量对韵律特征的影响(Prosodic Effects of Speech Unit’s Information Based on GPT-2 and Mutual Information)
Yun Hao (郝韵) | Yanlu Xie (解焱陆) | Binghuai Lin (林炳怀) | Jinsong Zhang (张劲松)

“Information-theoretic research on speech production has found that language units carrying more information tend to have their speech signals strengthened. Existing studies mostly measure a unit's information content by self-information, which struggles to model long-distance context. This study introduces information measures based on the pre-trained language model GPT-2 and on text-pinyin mutual information, and examines how the information content of Chinese words, finals, and tones affects prosodic features in speech production. The results show that when Chinese words and finals carry more information, their prosodic features tend to be strengthened, demonstrating the effectiveness of the proposed measures. The information effect is more significant for duration than for pitch and intensity.”
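
The self-information baseline that the paper improves on can be pictured as per-token surprisal under a causal language model, -log2 p(token | context). A minimal sketch with the Hugging Face transformers API; the checkpoint name is an assumed public Chinese GPT-2, not necessarily the one used in the paper, and the text-pinyin mutual-information measure is omitted:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumed public checkpoint; the paper's exact GPT-2 model is unspecified here.
name = "uer/gpt2-chinese-cluecorpussmall"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

text = "今天天气很好"
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits

# Surprisal of token t given its left context: -log2 p(t | context).
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
for pos, tid in enumerate(ids[0, 1:]):
    bits = -log_probs[pos, tid] / torch.log(torch.tensor(2.0))
    print(tok.decode(tid), f"{bits.item():.2f} bits")
```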

人文社科学术论文语言变异的多维度分析(A multi-dimensional analysis of register variations in Chinese academic papers of Humanities and Social Sciences)
Liangjie Yuan (袁亮杰) | Zhimin Wang (王治敏) | Yu Zhu (朱宇)

“Using a self-built corpus of Chinese academic journal papers in the humanities and social sciences (over 9.2 million characters), we apply multi-dimensional analysis, running factor analysis and dimension identification on frequency data for 111 linguistic features. Academic papers in these fields exhibit feature co-occurrence patterns along 7 dimensions: description vs. interpretation, conceptual judgment vs. behavioral reproduction, elaboration and development, realized-event statements, counting and measurement, hedging expressions, and ordering and connection. Statistical tests and cluster analysis of the corpus's quantitative behavior along these dimensions show that register variation between the humanities and the social sciences is significant on all dimensions except ‘counting and measurement’ and ‘ordering and connection’, and that disciplines within each of the two fields vary significantly on 6 dimensions. The study offers insights for academic Chinese writing and for research on Chinese register grammar.”

基于语料的“一+形容词+量词+名词”构式语义考察(A Semantic Study of “One-Adjective-Quantifier-Noun” Based on Corpus)
Ning Wu (吴宁) | Zhimin Wang (王治敏)

“The ‘numeral-adjective-classifier-noun’ construction is heavily used in everyday Chinese. Based on 5,710 examples from the BCC online corpus of Beijing Language and Culture University, this paper investigates the ‘one-adjective-classifier-noun’ structure, seeking the key factors that determine whether the construction is acceptable. We study the semantic properties of adjectives that can enter the construction under semantic restrictions, the constraint that ‘physical abstractness’ places on the noun, and the role of the classifier in forming the construction. The results show that adjectives with semantic features such as high divisibility and measurability enter the construction more easily; over 90% of the adjectives found in the construction can be measured along a single varying physical quantity, and these adjectives harmonize with the classifier at the same level of meaning. The construction is more tolerant of nouns with a low ‘physical abstractness’ value ([+easily quantified, +low organic activity, +easily generalizable shape]). We also find that collective classifiers can lower the construction's overall physical abstractness and thus increase its acceptability.”

基于熵的二语语音习得评价研究—以日本学习者习得汉语声母为例(An Entropy-based Evaluation of L2 Speech Acquisition: The Preliminary Report on Chinese Initials Produced by Japanese Learners)
Xiaoli Feng (冯晓莉) | Yingming Gao (高迎明) | Binghuai Lin (林炳怀) | Jinson Zhang (张劲松)

“This paper introduces entropy to quantify the distribution of learners' L2 phoneme pronunciation errors. Analyzing error rates and error dispersion across phonemes and across learners of different L2 proficiency levels, we find: 1. error rate and error dispersion are highly correlated, and their divergence reflects differences in error distribution; 2. among phonemes with similar error rates, phonemes more similar to an L1 phoneme show smaller error dispersion; 3. compared with beginners, intermediate learners show lower phoneme error rates but higher error dispersion. Entropy can thus reveal, beyond the error rate itself, how the learner's L1 phonology and L2 proficiency shape the dispersion of phoneme pronunciation errors.”
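
The dispersion measure can be illustrated as Shannon entropy over a phoneme's error-confusion distribution: concentrated errors give low entropy, dispersed errors give high entropy. A minimal sketch on invented confusion counts (not the study's data):

```python
import math
from collections import Counter

def error_entropy(confusions: Counter) -> float:
    """Shannon entropy (bits) of one phoneme's error distribution."""
    total = sum(confusions.values())
    return -sum((c / total) * math.log2(c / total) for c in confusions.values())

# Invented example: how often a target initial was mispronounced as others.
zh_errors = Counter({"z": 40, "j": 8, "ch": 2})   # concentrated errors
x_errors = Counter({"s": 18, "sh": 16, "q": 16})  # dispersed errors

print(f"zh: H = {error_entropy(zh_errors):.2f} bits")  # lower entropy
print(f"x:  H = {error_entropy(x_errors):.2f} bits")   # higher entropy
```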

儿童心理词汇输出策略及影响因素研究(A Study of Children’s Mental Vocabulary Output Strategies and The Factors Influencing Them)
Jiaming Gan (甘嘉铭) | Zhimin Wang (王治敏)

“Research on children's mental vocabulary is an important part of children's vocabulary research. Based on the mental word list hypothesis, this paper surveys 827 native Chinese-speaking children aged 7-12, collects the mental vocabulary latent in their minds, and applies a basic-vocabulary ordering model to extract an ordered word list of children's mental vocabulary. Analysis of the list shows that children's vocabulary mainly comprises everyday-life words and learning-centered words. Children's vocabulary output also exhibits chains of thought, produced mainly through scene strategies, category strategies, and word-formation strategies. Exploring the factors that influence output, we find that the number of words produced grows steadily with age, with significant development from the younger to the older age groups; gender makes no significant difference in the number of words produced, but boys and girls show distinct preferences in the categories of words they attend to.”

汉语增强依存句法自动转换研究(Transformation of Enhanced Dependencies in Chinese)
Jingsi Yu (余婧思) | Shi Jialu (师佳璐) | Liner Yang (杨麟儿) | Dan Xiao (肖丹) | Erhong Yang (杨尔弘)

“Automatic syntactic parsing is a core task in natural language processing. Constrained by the rule that each node in a dependency tree may have only one incoming arc, basic dependencies cannot directly mark many relations between content words with arcs and labels; moreover, the dependency relations in existing schemes leave room for refinement and enhancement so that coherent semantic relations can be extracted from them. Against this background, building on the Stanford basic dependencies, we develop a specification for Chinese enhanced dependencies, whose main contributions are: enhancement of prepositions and conjunctions, propagation over coordination, sentence-pattern conversion, and enhancement of special constructions. We also provide a Python-based converter for Chinese enhanced dependencies and a Web-based demo that parses sentences from basic dependency trees into dependency graphs according to our specification. Finally, we explore practical applications of enhanced dependencies, taking collocation extraction and information extraction as examples.”

名动词多能性指数研究及词类标记的组合应用(A study of nominal verb polyfunctionality index and the combined application of POS tag)
Jiaomei Zhou (周姣美) | Lijiao Yang (杨丽姣) | Hang Xiao (肖航)

“Nominal verbs (名动词) are a difficult issue in Chinese part-of-speech research and POS tagging. Over the past five or six decades there have been continuing disputes over their conceptual and taxonomic status, identification criteria, and annotation methods, but studies grounded in corpus resources and supported by the dynamic distribution and quantitative analysis of nominal verbs remain scarce. Taking nominal verbs in Modern Chinese as the main object of study, and reflecting on linguistic theory and method, this paper combines information theory with corpus methods, introducing the Shannon-Wiener index as a quantitative measure to examine nominal verbs from the perspective of a polyfunctionality index. In connection with the revision of the Specification of POS Tags of Modern Chinese for Information Processing, we analyze the difficulty of judging the categorial status of nominal verbs within the existing Indo-European-style part-of-speech framework, and discuss their cross-category properties, combined POS tagging, and the significance of this exploration for POS annotation in corpus construction, lexicography, and other applications.”

基于新闻图式结构的篇章功能语用识别方法(Discourse Functional Pragmatics Recognition Based on News Schemata)
Mengqi Du (杜梦琦) | Feng Jiang (蒋峰) | Xiaomin Chu (褚晓敏) | Peifeng Li (李培峰)

“Discourse analysis is a research hotspot and focus in natural language processing. Discourse functional pragmatics analyzes the function and role of discourse units within a text, helping to understand its theme and content in depth. Current discourse analysis is dominated by formal grammar, while the function and semantics of discourse as a holistic semantic unit have not received enough attention, and existing functional-pragmatic studies are mainly oriented to event extraction rather than the general domain. Given the importance of functional pragmatics and the state of research, this paper proposes a method for recognizing discourse functional pragmatics based on news schemata. The method captures paragraph-interaction information while incorporating the news schema structure of the text, combined with each paragraph's position in the text, thereby effectively improving recognition of discourse functional pragmatics. Experimental results on the Chinese Macro Discourse Treebank show that our method outperforms all baseline systems.”

融合知识的多目标词联合框架语义分析模型(Knowledge-integrated Joint Model For Multi-target Frame Semantic Parsing)
Xudong Chen (陈旭东) | Ce Zheng (郑策) | Baobao Chang (常宝宝)

“Frame semantic parsing is a fundamental task in natural language processing. Most previous work designs models for a single target word and cannot extract frame semantic structures for multiple target words at once. This paper proposes a multi-target frame semantic parsing model that jointly predicts for multiple target words. The model performs interactive modeling of the subtasks of frame semantic parsing, enabling bidirectional interaction between subtasks. In addition, we use a relational graph network to encode frame relation information and inject it into the model as frame-semantic knowledge. Experiments show that, without resorting to additional corpora, our model improves over previous models to varying degrees. Ablation studies demonstrate the effectiveness of our design. We also analyze the model's current limitations and directions for future improvement.”

专业技术文本关键词抽取方法(Keyword Extraction on Professional Technical Text)
Xiangdong Ning (宁祥东) | Bin Gong (龚斌) | Lin Wan (万林) | Yuqing Sun (孙宇清)

“Relevance and specificity are crucial for keyword extraction from professional technical text. Targeting the code retrieval task, this paper proposes a keyword extraction model for professional technical text that integrates semantic information, sequential relations, and syntactic structure. The pre-trained language model BERT extracts abstract semantic information from the text; a semantic association graph is built by jointly analyzing sequential relations and syntactic structure, capturing long-distance semantic dependencies between words; and keyword weights are computed with a random walk algorithm and lexical knowledge, balancing relevance and specificity. Performance comparisons with other models on two datasets show that the keywords extracted by our model have better relevance and specificity.”
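
The keyword-weighting step can be sketched as a personalized random walk (PageRank) over a word graph, with a prior standing in for the paper's lexical knowledge. The graph, weights, and prior below are hypothetical:

```python
import networkx as nx

# Hypothetical semantic association graph over candidate terms; in the paper,
# edges come from sequence and syntactic-structure analysis, mocked up here.
g = nx.Graph()
g.add_weighted_edges_from([
    ("code", "retrieval", 2.0), ("code", "index", 1.0),
    ("retrieval", "query", 1.5), ("query", "index", 1.0),
])

# Lexical-knowledge prior (e.g., domain specificity); assumed values.
prior = {"code": 0.4, "retrieval": 0.3, "query": 0.2, "index": 0.1}

scores = nx.pagerank(g, alpha=0.85, personalization=prior, weight="weight")
for word, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{word}: {score:.3f}")
```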

基于实体信息增强及多粒度融合的多文档摘要(Multi-Document Summarization Based on Entity Information Enhancement and Multi-Granularity Fusion)
Jiarui Tang (唐嘉蕊) | Liu Meiling (刘美玲) | Tiejun Zhao (赵铁军) | Jiyun Zhou (周继云)

“The rapid development of neural network models enables multi-document summarization to produce fluent, human-readable summaries, and pre-training on large-scale data captures richer semantic information from natural language text, benefiting downstream tasks. Much recent multi-document summarization work applies pre-trained models (e.g., BERT) with some success, but these models do not adequately capture factual knowledge from text and ignore the structured entity-relation information of multi-document input. This paper proposes MGNIE, a multi-document summarization model based on entity information enhancement and multi-granularity fusion. It integrates entity-relation information into the pre-trained model ERNIE, enhancing factual knowledge to obtain multi-level semantic information and addressing the factual consistency problem of summary generation. It then models the multi-document hierarchy through fusion at multiple granularities, using word, entity, and sentence information to capture the key points needed for long-text summary generation. On the international benchmark MultiNews dataset, our model achieves considerable gains in effectiveness and competitiveness over strong baselines.”

融合提示学习的故事生成方法(A Story Generation Method Incorporating Prompt Learning)
Xuanfan Ni (倪宣凡) | Piji Li (李丕绩)

“Open-ended automatic story generation takes a story's beginning, outline, or main line as input and produces a consistent, coherent, and logical story. To improve the quality of generated stories, existing methods often require large amounts of training data and models with more parameters. To address this, the paper exploits the advantages of prompt learning in zero-shot and few-shot settings, together with external commonsense reasoning knowledge, and proposes a story generation method with three stages: given the story's beginning, a commonsense reasoning model generates possible events; according to event type, the events are filled into question templates, constructing questions that guide the model toward reasonable answers; and a question-answering model produces answers to those questions, the one with the lowest perplexity being selected as the story's continuation. Repeating this process yields a complete story. Automatic and human evaluation metrics show that, compared with baseline models, the proposed method generates more coherent, specific, and logical stories.”

生成,推理与排序:基于多任务架构的数学文字题生成(Generating, Reasoning & Ranking: Multitask Learning Framework for Math Word Problem Generation)
Tianyang Cao (曹天旸) | Xiaodan Xu (许晓丹) | Baobao Chang (常宝宝)

“A math word problem is a narrative text that reflects the underlying logic of a mathematical equation. Successful math problem generation has broad application prospects in language generation and education. Most previous work requires manually annotated templates or keywords as input and does not consider the characteristics of mathematical expressions themselves. This paper proposes a multi-task jointly trained model for problem text generation. We design three auxiliary tasks: relation extraction between numbers, numerical ranking, and fragment replacement prediction. They are trained jointly with the generation objective to supervise the decoder's learning and to strengthen the model's awareness of operational logic and problem conditions. Experiments show the proposed method effectively improves the quality of generated math word problems.”

基于SoftLexicon和注意力机制的中文因果关系抽取(Chinese Causality Extraction Based on SoftLexicon and Attention Mechanism)
Shilin Cui (崔仕林) | Rong Yan (闫蓉)

“To address the difficulty of identifying causal event boundaries and the insufficiency of text feature representation in existing Chinese causality extraction methods, we propose BiLSTM-TWAM+CRF, a Chinese causality extraction model based on external lexical information and attention. The model is the first to use the SoftLexicon method to introduce external lexical information and build word sets, solving the problem of identifying causal event boundaries. A Two Way Attention Module (TWAM) characterizes text features fully from both local and global perspectives. Experimental results show that our method outperforms current Chinese causality extraction models.”

基于GCN和门机制的汉语框架排歧方法(Chinese Frame Disambiguation Method Based on GCN and Gate Mechanism)
Yanan You (游亚男) | Ru Li (李茹) | Xuefeng Su (苏雪峰) | Zhichao Yan (闫智超) | Minshuai Sun (孙民帅) | Chao Wang (王超)

“Chinese frame disambiguation selects, from candidate frames, the frame matching the semantic scene of a target word in a sentence. Current methods compute hidden vectors independently of the target word and ignore the influence of syntactic structure on frame disambiguation. To address these problems, we use a GCN to model syntactic structure information; introduce a gate mechanism to filter out noise unrelated to the target word from the hidden vectors; and, on this basis, propose a constraint mechanism to constrain the model's learning and improve the vector representations. The model outperforms the current best models on the CFN, FN1.5, and FN1.7 datasets, demonstrating the effectiveness of the method.”

基于中文电子病历知识图谱的实体对齐研究(Research on Entity Alignment Based on Knowledge Graph of Chinese Electronic Medical Record)
Lishuang Li (李丽双) | Jiangyuan Dong (董姜媛)

“Knowledge overlap and complementarity are pervasive across medical knowledge graphs, so fusing them via entity alignment has become an urgent need. To our knowledge, however, there is as yet no complete solution for entity alignment in the medical domain. This paper therefore proposes a standardized entity alignment pipeline for medical knowledge graphs built from Chinese electronic medical records, providing a feasible scheme for the medical domain. Addressing the structural heterogeneity between such knowledge graphs, we design a dual-view parallel graph neural network model (DuPNet) for medical entity alignment, which achieves good results.”

基于平行交互注意力网络的中文电子病历实体及关系联合抽取(Parallel Interactive Attention Network for Joint Entity and Relation Extraction Based on Chinese Electronic Medical Record)
LiShuang Li (李丽双) | Zehao Wang (王泽昊) | Xueyang Qin (秦雪洋) | Yuan Guanghui (袁光辉)

“Building medical knowledge graphs from electronic medical records is important for the development of medical technology, and entity and relation extraction is a key technology for knowledge graph construction. To address the insufficient feature interaction in current joint entity-relation extraction, we propose a Parallel Interactive Attention Network (PIAN) that fully exploits the correlation between entities and relations, achieving the best results on several standard medical and general-domain datasets. As annotated Chinese medical entity and relation datasets are scarce, we build an entity and relation extraction dataset (CEMRIE) from Chinese electronic medical records, formulate annotation guidelines together with medical experts, and report benchmark results obtained with the proposed model.”

基于框架语义映射和类型感知的篇章事件抽取(Document-Level Event Extraction Based on Frame Semantic Mapping and Type Awareness)
Jiang Lu (卢江) | Ru Li (李茹) | Xuefeng Su (苏雪峰) | Zhichao Yan (闫智超) | Jiaxing Chen (陈加兴)

“Document-level event extraction identifies event types and event arguments in a given text. It commonly suffers from data sparsity and the coupling of multi-value arguments. We therefore map the Chinese FrameNet (CFN) to Chinese document-level events, and introduce a sliding-window mechanism and trigger-word paraphrases to alleviate data sparsity in event detection; a multi-event separation strategy based on type-aware labels mitigates the argument-coupling problem. To improve robustness, adversarial training is further introduced. The proposed method significantly outperforms existing methods on the DuEE-Fin and CCKS2021 datasets.”

期货领域知识图谱构建(Construction of Knowledge Graph in Futures Field)
Wenxin Li (李雯昕) | Hongying Zan (昝红英) | Tongfeng Guan (关同峰) | Yingjie Han (韩英杰)

“The futures domain is among the most data-rich. Using research reports on commodity futures as the data source, we construct a futures-domain knowledge graph (Commodity Futures Knowledge Graph, CFKG). Centered on futures products, we establish a concept taxonomy and a relation description system, forming the graph's concept layer; building on the MHS-BIA and GPN models and guided by domain experts, we annotate and proofread 2.42 million characters of report text to form the CFKG data layer, and design a visual query system. The resulting CFKG contains 17,003 agricultural futures relation triples and 13,703 kinds of non-agricultural futures relation triples, providing knowledge support for applications such as text analysis, public opinion monitoring, and reasoning and decision-making in the futures domain.”

近四十年湘方言语音研究的回顾与展望——基于知识图谱绘制和文献计量分析(Review and Prospect of the Phonetic Research of Xiang Dialects in Recent Forty Years:Based on Knowledge Mapping and Bibliometric Analysis)
Yuting Yang (杨玉婷) | Xinzhong Liu (刘新中) | Zhifeng Peng (彭志峰)

“Research on the phonetics of Xiang dialects has yielded rich results. Using CNKI's academic journal database as the data source and adopting bibliometric analysis with tools such as CiteSpace, this paper statistically analyzes the literature and draws visual knowledge maps along dimensions including publication information, cluster analysis, and evolutionary trends, comprehensively surveying the research landscape of the past forty years. We propose that typology and phonological strata will be relatively new areas of Xiang phonetic research awaiting further exploration, providing a theoretical basis for opening new research directions and data support for the in-depth development and use of the resources of the Hunan language preservation project.”

基于知识监督的标签降噪实体对齐(Refined De-noising for Labeled Entity Alignment from Auxiliary Evidence Knowledge)
Fenglong Su (苏丰龙) | Ning Jing (景宁)

“Most existing entity alignment solutions rely on clean labeled data for training and pay little attention to seed noise. To address the noise problem in entity alignment, this paper proposes a label-denoising framework that injects auxiliary knowledge and incidental supervision into entity alignment to correct seed errors during labeling and bootstrapping. In particular, considering the weaknesses of previous neighborhood-embedding methods, we apply a new dual relation attention matching encoder to accelerate structure learning on knowledge graphs, while using auxiliary knowledge to compensate for the inadequacy of structural representations. Weakly supervised label denoising is then performed via adversarial training. To counter error accumulation, an alignment refinement module further improves performance. Experimental results show that the proposed framework copes easily with entity alignment in noisy environments, consistently outperforming other baselines in alignment accuracy and noise discrimination on several real-world datasets.”

基于图文细粒度对齐语义引导的多模态神经机器翻译方法(Based on Semantic Guidance of Fine-grained Alignment of Image-Text for Multi-modal Neural Machine Translation)
Junjie Ye (叶俊杰) | Junjun Guo (郭军军) | Kaiwen Tan (谭凯文) | Yan Xiang (相艳) | Zhengtao Yu (余正涛)

“Multi-modal neural machine translation aims to use visual information to improve text translation quality. Traditional multi-modal machine translation integrates an image's global semantics into the translation model while neglecting the effect of fine-grained image information on translation quality. This paper proposes a multi-modal neural machine translation method guided by fine-grained image-text alignment semantics: it first performs cross-modal interaction between image and text to extract fine-grained alignment semantics, then uses these as a pivot, applying a gating mechanism to align fine-grained multi-modal information to the text and thereby fuse image-text multi-modal features. Experimental results on the Multi30K English-German, English-French, and English-Czech translation tasks show that the proposed method is effective and outperforms most state-of-the-art multi-modal machine translation methods.”

多特征融合的越英端到端语音翻译方法(A Vietnamese-English end-to-end speech translation method based on multi-feature fusion)
Houli Ma (马候丽) | Ling Dong (董凌) | Wenjun Wang (王文君) | Jian Wang (王剑) | Shengxiang Gao (高盛祥) | Zhengtao Yu (余正涛)

“The encoder of a speech translation model must encode both the acoustic and the semantic information in speech, and a single Fbank or Wav2vec2 feature has insufficient representational capacity. By analyzing the differences between hand-crafted Fbank features and self-supervised Wav2vec2 features, we propose an acoustic feature fusion method based on cross-attention, and explore different self-supervised features and fusion schemes to strengthen the model's learning of acoustic and semantic information. Considering the characteristics of Vietnamese speech, we encode mixed Fbank representations with Fbank features as primary and pitch features as auxiliary, building a multi-feature-fusion Vietnamese-English speech translation model. Experiments show that multi-feature speech translation models outperform single-feature ones and are more effective than simple feature concatenation; the proposed multi-feature fusion method improves Vietnamese-English speech translation by 1.97 BLEU.”

融入音素特征的英-泰-老多语言神经机器翻译方法(English-Thai-Lao multilingual neural machine translation fused with phonemic features)
Zheng Shen (沈政) | Cunli Mao (毛存礼) | Zhengtao Yu (余正涛) | Shengxiang Gao (高盛祥) | Linqin Wang (王琳钦) | Yuxin Huang (黄于欣)

“Multilingual neural machine translation is an effective means of improving translation quality for low-resource languages. Because character sets differ greatly across languages, existing methods struggle to obtain a unified word representation. Thai and Lao are low-resource languages with phonemic similarity; since exploiting language similarity can narrow semantic distance, we propose a multilingual word representation learning method that incorporates phonemic features: (1) we design a phoneme feature representation module and a Thai-Lao text representation module, using cross-attention to obtain Thai/Lao text representations fused with phonemic features, narrowing the semantic distance between Thai and Lao; (2) in the fine-tuning stage, parameter differentiation derives language-pair-specific training parameters, alleviating the over-generalization caused by joint training. Experimental results on the ALT dataset show the proposed method improves the Thai-English and Lao-English translation directions by 0.97 and 0.99 BLEU over the baseline model.”

机器音译研究综述(Survey on Machine Transliteration)
Zhuo Li (李卓) | Zhijuan Wang (王志娟) | Xiaobing Zhao (赵小兵)

“Machine transliteration automatically converts text from one language to another based on phonetic similarity; it is a subtask of machine translation that focuses on translating phonetic information. Transliteration reveals how a source word is pronounced in another language, making the language easier to understand for people unfamiliar with the source language and helping remove language and spelling barriers. Machine transliteration plays an important role in natural language applications such as multilingual text processing, corpus alignment, and information extraction. This paper describes the challenges in current machine transliteration, dissects, classifies, and organizes the main transliteration methods, compiles the available transliteration datasets, and lists the commonly used evaluation metrics. Finally, it discusses the field's open problems and offers an outlook on the future of transliteration research, hoping to serve as a quick introduction for newcomers to the field and as a reference for other researchers.”

面向 Transformer 模型的蒙古语语音识别词特征编码方法(Researching of the Mongolian word encoding method based on Transformer Mongolian speech recognition)
Xiaoxu Zhang (张晓旭) | Zhiqiang Ma (马志强) | Zhiqiang Liu (刘志强) | Caijilahu Bao (宝财吉拉呼)

“In Mongolian speech recognition, the Transformer model cannot learn the correspondence between Mongolian words containing control characters and the speech signal, leaving the model poorly adapted to Mongolian. We propose a Mongolian word encoding method for the Transformer that mixes Mongolian letter features with word features; by incorporating letter information, the Transformer can distinguish Mongolian words carrying control characters and learn the correspondence between Mongolian words and speech. On the IMUT-MC dataset, we build a Transformer model and conduct ablation and comparison experiments on the word feature encoding method. The ablation results show the method reduces HWER, WER, and SER by 23.4%, 6.9%, and 2.6% respectively; the comparison results show it leads all methods, reaching 11.8% HWER and 19.8% WER.”

基于注意力的蒙古语说话人特征提取方法(Attention based Mongolian Speaker Feature Extraction)
Fangyuan Zhu (朱方圆) | Zhiqiang Ma (马志强) | Zhiqiang Liu (刘志强) | Caijilahu Bao (宝财吉拉呼) | Hongbin Wang (王洪彬)

“Speaker features produced by existing speaker feature extraction models are poorly discriminative, so Mongolian acoustic models cannot learn discriminative information and fail to adapt to different speakers. We propose an attention-based speaker adaptation method that introduces a neural Turing machine for adaptation: a memory module stores speaker features; an attention mechanism computes a similarity weight matrix between the speaker features in memory and those of the current utterance; and the weight matrix recombines them into a speaker feature, the s-vector, improving the discriminability among speaker features. On the IMUT-MCT dataset, we conduct ablation experiments on the feature extraction method, model adaptation experiments, and case analyses. The results show that, compared with the i-vector and d-vector, the s-vector lowers SER and WER by 4.96% and 1.08% respectively, and that across different Mongolian acoustic models the proposed method improves performance over the baselines.”

融合双重注意力机制的缅甸语图像文本识别方法(Burmese image text recognition method with dual attention mechanism)
Fengxiao Wang (王奉孝) | Cunli Mao (毛存礼) | Zhengtao Yu (余正涛) | Shengxiang Gao (高盛祥) | Huang Yuxin (黄于欣) | Fuhao Liu (刘福浩)

“Because Burmese characters have a unique encoding structure and character combination rules, existing image text recognition methods cannot attend sufficiently to character-edge features in Burmese images, leading to the loss of Burmese superscript and subscript marks. This paper therefore improves on a Transformer-based image text recognition method, proposing a visual attention module that fuses channel and spatial attention to capture pixel-level pairwise relations and channel dependencies, reducing noise interference in Burmese images and yielding semantically more complete feature maps. In decoding, multi-head-attention decoding units are combined into a decoder that converts feature sequences into Burmese text. Experimental results show the method improves recognition accuracy by 0.5% over the Transformer on our self-built Burmese image text recognition dataset, reaching 95.3%.”

基于预训练及控制码法的藏文律诗自动生成方法(Automatic Generation of Tibetan Poems based on Pre-training and Control Code Method)
Jia Secha (色差甲) | Jiacuo Cizhen (慈祯嘉措) | Jia Cairang (才让加) | Cairang Huaguo (华果才让)

“Automatic poetry composition is an important area of natural language generation and is considered one of its most challenging and interesting tasks. This paper proposes a Tibetan metrical poem generation method based on pre-training and a control-code method. Fine-tuning a Tibetan pre-trained language model markedly improves generation quality, while introducing control codes largely ensures topical adherence, i.e., a high average coverage of the given keywords in the generated poems. Moreover, the generated poems show greater lexical richness and clearly improved diversity. Tests show the generation method based on pre-training and control codes significantly outperforms the baseline methods.”

基于词典注入的藏汉机器翻译模型预训练方法(Dictionary Injection Based Pretraining Method for Tibetan-Chinese Machine Translation Model)
Duanzhu Sangjie (桑杰端珠) | Jia Cairang (才让加)

“Pre-training has attracted wide attention in natural language processing in recent years, but in low-resource settings such as Tibetan-Chinese machine translation, bilingual supervision cannot directly participate in pre-training, limiting the gains pre-trained models bring to such tasks. Considering that bilingual dictionaries are a rich and cheap source of prior translation knowledge, and inspired by the observation that people often mix languages in cross-lingual communication to improve efficiency, we propose a dictionary-injection pre-training method for Tibetan-Chinese machine translation models, giving pre-training ample opportunity to learn bilingual knowledge associations. The method improves BLEU by 2.3 and 2.1 over a strong BART baseline on the Tibetan-to-Chinese and Chinese-to-Tibetan test sets respectively, confirming its effectiveness for Tibetan-Chinese machine translation.”

基于特征融合的汉语被动句自动识别研究(Automatic Recognition of Chinese Passive Sentences Based on Feature Fusion)
Kang Hu (胡康) | Weiguang Qu (曲维光) | Tingxin Wei (魏庭新) | Junsheng Zhou (周俊生) | Yanhui Gu (顾彦慧) | Bin Li (李斌)

“Chinese passive sentences divide into marked and unmarked passives according to whether a passive marker is present. Their complex and varied morphological makeup poses great difficulty for natural language understanding, so automatic recognition of Chinese passives matters for downstream natural language processing tasks. We build a corpus of passive sentences and propose PC-BERT-CNN, a model that fuses part-of-speech and verb argument-frame information, to recognize Chinese passives automatically. Experimental results show the model recognizes Chinese passives accurately, reaching an F1 of 98.77% on marked passives and 96.72% on unmarked passives.”

中文糖尿病问题分类体系及标注语料库构建研究(The Construction of Question Taxonomy and An Annotated Chinese Corpus for Diabetes Question Classification)
Xiaobo Qian (钱晓波) | Wenxiu Xie (谢文秀) | Shaopei Long (龙绍沛) | Murong Lan (兰牧融) | Yuanyuan Mu (慕媛媛) | Tianyong Hao (郝天永)

“As a typical chronic disease, diabetes has become one of the world's major public health challenges. With the rapid development of the Internet, the large population of type 2 diabetes patients and high-risk individuals has a growing need for professional diabetes information, and automatic diabetes question answering plays an increasingly important role in their daily health services; yet it suffers from problems such as the lack of fine-grained classification. This paper designs a new diabetes question taxonomy that captures user intent, with 6 coarse classes and 23 fine classes. Based on it, we crawl two professional medical QA websites and build DaCorp, a Chinese diabetes question-answering corpus of 122,732 QA pairs, and manually annotate 8,000 of the diabetes questions to form a fine-grained annotated diabetes dataset. To assess the quality of the annotated dataset, we implement 8 mainstream baseline classifiers; the best classification model reaches 88.7% accuracy, validating the annotated dataset and the proposed taxonomy. DaCorp, the annotated dataset, and the annotation guidelines have been released online and are free for academic research.”

古汉语嵌套命名实体识别数据集的构建和应用研究(Construction and application of classical Chinese nested named entity recognition data set)
Zhiqiang Xie (谢志强) | Jinzhu Liu (刘金柱) | Genhui Liu (刘根辉)

“This paper focuses on the little-studied task of nested named entity recognition for classical Chinese, using the Shiji (Records of the Grand Historian) as the raw corpus. To address the entity classification ambiguity caused by the rich meanings of classical texts, we build two classical Chinese nested NER datasets annotated under two standards, the literal sense of characters and words versus the contextual sense, and discuss the datasets' entity classification principles and annotation format. Comparison experiments with a RoBERTa-classical-chinese + GlobalPointer model give an F1 of 80.42% on the standard-1 dataset and 77.43% on standard-2, on which basis the dataset's annotation standard was fixed. We then compare six pre-trained models combined with GlobalPointer on classical Chinese nested NER; RoBERTa-classical-chinese performs best, with an F1 of 84.71%.”

CoreValue:面向价值观计算的中文核心价值-行为体系及知识库(CoreValue: Chinese Core Value-Behavior Frame and Knowledge Base for Value Computing)
Pengyuan Liu (刘鹏远) | Sanle Zhang (张三乐) | Dong Yu (于东) | Lin Bo (薄琳)

“Inferring an agent's values from its behavior is one prerequisite for artificial intelligence to understand and possess human values. In NLP-related fields, research has concentrated on right-or-wrong judgments of values or morality in text; work inferring values from the agent's behavior is rare, and corresponding data resources are lacking. This paper first builds a Chinese core value-behavior framework. Grounded in the Core Socialist Values, it has two parts: 1) a category system of 8 classes of core values, subdivided into 19 bidirectional value subclasses corresponding to 38 behavior classes; 2) an element system of 7 element types, core and non-core. We then extract sentences containing agent behavior from corpora and annotate them under the framework, building a fine-grained Chinese value-behavior knowledge base of 6,994 behavior sentences with their fine-grained values and directions, plus 34,965 elements. Finally, we propose value category classification, direction classification, and joint classification tasks and run experiments. The results show that methods based on pre-trained language models excel at value direction classification, while fine-grained value category classification and multi-label value classification leave considerable room for improvement.”

基于《同义词词林》的中文语体分类资源构建(Construction of Chinese register classification resources based on “Tongyici Cilin”)
Guojing Huang (黄国敬) | Liwei Zhou (周立炜) | Gaoqi Rao (饶高琦) | Jiaojiao Zang (臧娇娇)

“Register words are words used exclusively in a particular register; they are linguistic elements and formal markers of that register. Register-word resources can serve NLP applications closely tied to real-world scenarios, but such resources are currently scarce. Based on Dacilin (《大词林》), we complete three tasks, register word annotation, register (word) chain annotation, and parallel construction annotation, establishing a register classification resource grounded in register words. The resource contains 55,710 words, 5,017 register chains, and 433 groups of parallel constructions. On this basis, we analyze the overall distribution of Chinese register words, their morphological differences, and the distribution of their senses and parts of speech.”

《二十四史》古代汉语语义依存图库构建(Construction of Semantic Dependency Graph Bank of Ancient Chinese in twenty four histories)
Tian Huang (黄恬) | Yanqiu Shao (邵艳秋) | Wei Li (李炜)

“The semantic dependency graph is a deep semantic analysis method in NLP that analyzes the semantics between words in a sentence. Addressing the characteristics of classical Chinese, and after formulating annotation guidelines for classical Chinese semantic dependency graphs, we complete a semantic dependency graph bank of 3,000 sentences sourced from the Twenty-Four Histories, with an inter-annotator kappa of 78.83%. Comparing against a modern Chinese semantic dependency graph bank, we report basic statistics of the bank and analyze the semantic characteristics and regularities of classical Chinese. The statistics show that the semantic distribution of classical Chinese macroscopically conforms to Zipf's law, and that its event descriptions show strongly historical-narrative and formal-register features: biographical accounts of persons at the center, detailed description of peripheral roles such as time and place, calm and objective narration, and a scarcity of modifiers expressing modality, mood, degree, or temporal state.”

中文专利关键信息语料库的构建研究(Research on the construction of Chinese patent key information corpus)
Wenting Zhang (张文婷) | Meihan Zhao (赵美含) | Yixuan Ma (马翊轩) | Wenrui Wang (王文瑞) | Yuzhe Liu (刘宇哲) | Muyun Yang (杨沐昀)

“Patent documents are an important kind of technical literature and a key element in building an intellectual-property powerhouse. Existing patent corpora mostly target information retrieval, machine translation, and text classification, and lack finer-grained annotation sufficient to support new AI applications such as question answering and reading comprehension. Oriented to the needs of intelligent patent analysis, this paper proposes annotating invention patents from three angles, the problem solved, the technical means, and the effect, and builds a Chinese patent key-information corpus of 313 patents. Using named entity recognition techniques to identify and verify the corpus's key information shows that recognizing patent key information is a larger-granularity information extraction problem distinct from domain named entity recognition.”

句式结构树库的自动构建研究(Automatic Construction of Sentence Pattern Structure Treebank)
Chenhui Xie (谢晨晖) | Zhengsheng Hu (胡正升) | Liner Yang (杨麟儿) | Tianxin Liao (廖田昕) | Erhong Yang (杨尔弘)

“The sentence-pattern structure treebank is a syntactic resource built on sentence-based grammar (句本位), and it matters for Chinese language teaching and for automatic parsing of sentence-pattern structure. Existing treebank data mainly come from the textbook domain, and annotated data for other domains are scarce, so how to expand a high-quality treebank efficiently is worth studying. Manual treebank annotation is laborious and its quality hard to guarantee; this paper therefore attempts a rule-based conversion of the Penn Chinese Treebank (CTB) into a sentence-pattern structure treebank, enlarging the existing resource. Experimental results show that the proposed treebank-conversion rules are effective.”

面向情感分析的汉语构式语料库构建与应用研究—对汉语构式情感分析问题的思考(A Study of Chinese Construction Corpus Compilation and Application for Sentiment Analysis: A Discussion of Sentiment)
Yinqing Wu (吴尹清) | Dejun Li (李德俊)

“Text sentiment analysis, also known as opinion mining, studies the orientation of evaluating subjects on the basis of large-scale web data. Owing to its special significance in application areas such as public opinion monitoring, marketing, and finance, it has attracted increasingly wide attention in recent years. This paper addresses the problem of semantic concealment faced by sentiment analysis: we build a Chinese construction corpus, quantify the Chinese constructions in it, and discuss the relationship between Chinese constructions and sentiment analysis. Constructions and words expressing magnitude and attitude are annotated, and quantitative analyses are conducted by construction type, semantic category, and the number of constant and variable slots; the statistics for magnitude and attitude constructions are then compared with those for magnitude and attitude words. Analyzing the proportions of meaning carried by constructions and by words, we find that words carry most of the attitude and magnitude semantics in the corpus, while constructions carry less; although constructions are not the primary meaning-bearing units, the attitudinal information they carry still accounts for a certain share. The paper provides empirical data for applying construction grammar to Chinese sentiment analysis, offers a method for subsequent studies of this kind, and supplies data based on authentic Chinese texts for research on Chinese constructions. It also discusses the difficulties currently facing the application of construction grammar to Chinese sentiment analysis and to natural language processing more broadly, and offers an outlook on future research.”

基于关系图注意力网络和宽度学习的负面情绪识别方法(Negative Emotion Recognition Method Based on Rational Graph Attention Network and Broad Learning)
Sancheng Peng (彭三城) | Guanghao Chen (陈广豪) | Lihong Cao (曹丽红) | Rong Zeng (曾嵘) | Yongmei Zhou (周咏梅) | Xinguang Li (李心广)

“Negative emotion recognition in dialogue text identifies the negative emotion of each utterance in a dialogue and has become a research hotspot in recent years. It is a challenging task for machines because emotional expression in dialogue usually depends on context. To address this, we propose RGAT-BL, a negative emotion recognition method for dialogue text based on a relational graph attention network (RGAT) and broad learning (BL). The method uses the pre-trained model RoBERTa to generate initial vectors of the dialogue text; a Bi-LSTM extracts local and contextual semantic features of the text vectors, yielding utterance-level features; RGAT extracts long-distance dependencies among speakers, yielding speaker-level features; and BL processes the concatenation of the two kinds of features to classify and output negative emotions. Comparative experiments against baseline models on three datasets show the proposed method outperforms the baselines on weighted-F1 and macro-F1 on all three datasets.”

基于知识迁移的情感-原因对抽取(Emotion-Cause Pair Extraction Based on Knowledge-Transfer)
Fengyuan Zhao (赵凤园) | Dexi Liu (刘德喜) | Qizhi Wan (万齐智) | Changxuan Wan (万常选) | Xiping Liu (刘喜平) | Guoqiong Liao (廖国琼)

“Existing emotion-cause pair extraction models have not used external knowledge to improve extraction. We propose ECPE-KT, an emotion-cause pair extraction model based on knowledge transfer. It obtains explicit knowledge encodings of the text from a knowledge base; then introduces an external emotion classification corpus and transfers from it implicit knowledge encodings of the clauses; finally, it concatenates the two knowledge encodings, adds the emotion (cause) clause prediction probabilities and relative positions, fuses context with a Transformer mechanism, and uses a window mechanism to ease the computational load, accomplishing emotion-cause pair extraction. Experimental results on the ECPE dataset show the proposed method surpasses the current state-of-the-art model ECPE-2D.”

中文自然语言处理多任务中的职业性别偏见测量(Measurement of Occupational Gender Bias in Chinese Natural Language Processing Tasks)
Mengqing Guo (郭梦清) | Jiali Li (李加厉) | Jishun Zhao (赵继舜) | Shucheng Zhu (朱述承) | Ying Liu (刘颖) | Pengyuan Liu (刘鹏远)

“Pessimists hold that gender equality in the workplace can never exist, but as attitudes change, more and more people believe that the choice of occupation should match only a person's ability rather than be determined by gender. Occupational gender bias has been found across natural language processing tasks, but those studies usually target specific English tasks and lack comprehensive, multi-task measurements of occupational gender bias for Chinese. Based on the Holland occupational model, this paper measures occupational gender bias in three common Chinese NLP tasks: word embeddings, coreference resolution, and text generation. We find that the occupational gender biases in different tasks share certain commonalities while also showing distinctive differences. Overall, the biases in the different tasks reflect real-life stereotypes about the occupations chosen by different genders. Moreover, when designing bias measures for different tasks, linguistic factors such as register and word order also need to be considered.”

基于异构用户知识融合的隐式情感分析研究(Research on Implicit Sentiment Analysis based on Heterogeneous User Knowledge Fusion)
Jian Liao (廖健) | Kai Zhang (张楷) | Suge Wang (王素格) | Jia Lei (雷佳) | Yiyang Zhang (张益阳)

“Implicit sentiment analysis, which lacks explicit sentiment cues, is one of the important difficulties in sentiment analysis. Traditional methods usually model only the information in the implicit sentiment text itself, without considering the subjective variability of implicit sentiment. This paper proposes HELENE, an implicit sentiment analysis model based on heterogeneous user knowledge fusion. It first mines heterogeneous user knowledge from user data: content knowledge, social attribute knowledge, and social relation knowledge. The fusion learning framework profiles users along both internal and external information dimensions, combining graph neural networks with dynamic pre-trained models; on this basis it fuses the semantics of the implicit sentiment text, enabling subjectively differentiated modeling of implicit sentiment. In addition, we build a user-personalized general sentiment analysis corpus covering fairly complete text content, user social attributes, and relation information, which can serve research on user-personalized implicit or explicit sentiment analysis alike. Experimental results on the constructed dataset show the method improves significantly over baseline models on user-personalized implicit sentiment analysis.”

基于主题提示学习的零样本立场检测方法(A Topic-based Prompt Learning Method for Zero-Shot Stance Detection)
Zixiao Chen (陈子潇) | Bin Liang (梁斌) | Ruifeng Xu (徐睿峰)

“Zero-shot stance detection predicts stance polarity for unseen target data. In general, the stance expressed in a text is closely tied to the topic or target under discussion. For unseen-target stance detection, we divide stance expressions into two types: target-independent expressions, where the speaker expresses the same stance across different topics and targets, and target-dependent expressions, where the stance is expressed only toward a specific topic or target. Distinguishing the two, learning the target-independent expressions effectively while ignoring the target-dependent ones, promises to strengthen the model's transferability and suit it better to zero-shot stance detection. We therefore propose a topic-based prompt learning method for zero-shot stance detection. Specifically, inspired by self-supervised learning, we set up a proxy task framework: the proxy task generates auxiliary samples by masking the topic words in context, predicts the stance expressions of the original and auxiliary samples via prompt learning, and then judges whether the two stance expressions agree, yielding, without manual annotation, a proxy label for whether a sample's stance expression depends on the target. This proxy label is then provided to the stance detection model, which correspondingly learns transferable stance detection features. Extensive experiments on two benchmark datasets show the proposed method achieves better performance than baseline models on zero-shot stance detection.”

标签先验知识增强的方面类别情感分析方法研究(Aspect-Category based Sentiment Analysis Enhanced by Label Prior Knowledge)
Renwei Wu (吴任伟) | Lin Li (李琳) | Zheng He (何铮) | Jingling Yuan (袁景凌)

“Current research on aspect-category based sentiment analysis aims to perform aspect category detection and category-oriented sentiment classification jointly. However, existing work fails to attend to the noisy labels present in sentiment datasets, which degrades the quality of sentiment analysis. We propose AP-LPK, an aspect-category sentiment analysis method enhanced by label prior knowledge. First, we construct an autoregressive prompt training scheme for category-oriented sentiment classification, which can tap the potential of pre-trained language models; by generating label words autoregressively, it aims at better semantic consistency than non-autoregressive generation. Second, each category's label distribution is introduced as label prior knowledge and further refined via a Bernoulli distribution to mitigate the interference of noisy labels. AP-LPK then fuses the sentiment category distributions obtained in the two steps to produce the final sentiment prediction probabilities. Finally, the proposed method is evaluated on five datasets, including four benchmarks from SemEval 2015 and 2016 and the large-scale restaurant-domain dataset of AI Challenger 2018. Experimental results show that it outperforms existing methods on the F1 metric.”

面向话题的讽刺识别:新任务、新数据和新方法(Topic-Oriented Sarcasm Detection: New Task, New Dataset and New Method)
Bin Liang (梁斌) | Zijie Lin (林子杰) | Bing Qin (秦兵) | Ruifeng Xu (徐睿峰)

“Existing research on text sarcasm detection usually stops at sentence-level classification of sarcastic expressions, without considering the influence of the sarcasm target. To address this, we propose a new topic-oriented sarcasm detection task. By introducing topics as the targets of sarcasm, the task helps better understand and model sarcastic expressions. Correspondingly, we build a new topic-oriented sarcasm detection dataset containing 707 topics and 4,871 topic-comment pairs. On this basis, leveraging prompt learning and large-scale pre-trained language models, we propose a topic-oriented prompt learning model for sarcastic expressions. Experimental results on the constructed dataset show that, compared with baseline models, the proposed model achieves better performance, and the analysis also shows the proposed topic-oriented task is more challenging than traditional sentence-level sarcasm detection.”

基于相似度进行句子选择的机器阅读理解数据增强(Machine reading comprehension data Augmentation for sentence selection based on similarity)
Shuang Nie (聂双) | Zheng Ye (叶正) | Jun Qin (覃俊) | Jing Liu (刘晶)

“Common data augmentation methods for machine reading comprehension, such as back-translation, augment the passage or the question in isolation and do not consider the links within the passage-question-option triple. This paper explores a data augmentation method that uses these triple links to select passage sentences: by comparing the similarity of the passage with the question and with the options, we select the sentences closely tied to both. To enlarge the differences among the triples of different options, we adopt a regularized dropout strategy. Experimental results show that accuracy on the RACE dataset improves by 3.8%.”

一种非结构化数据表征增强的术后风险预测模型(An Unstructured Data Representation Enhanced Model for Postoperative Risk Prediction)
Yaqiang Wang (王亚强) | Xiao Yang (杨潇) | Xuechao Hao (郝学超) | Hongping Shu (舒红平) | Guo Chen (陈果) | Tao Zhu (朱涛)

“Accurate postoperative risk prediction helps clinical resource planning and contingency preparation and reduces patients' postoperative risk and mortality. Current postoperative risk prediction relies mainly on structured pre- and intra-operative data, such as basic patient information, laboratory tests, and vital signs, while the value of unstructured preoperative diagnoses, which carry rich semantic information, remains to be verified. To address this, we propose a postoperative risk prediction model enhanced with unstructured data representations, which uses self-attention to weight and fuse structured data with preoperative diagnosis data in a fine-grained way. On clinical data, compared with the statistical machine learning models commonly used for postoperative risk prediction and with recent deep neural networks, our method not only improves prediction performance but also gives the prediction model good interpretability.”

融合外部语言知识的流式越南语语音识别(Streaming Vietnamese Speech Recognition Based on Fusing External Vietnamese Language Knowledge)
Junqiang Wang (王俊强) | Zhengtao Yu (余正涛) | Ling Dong (董凌) | Shengxiang Gao (高盛祥) | Wenjun Wang (王文君)

“Vietnamese is a low-resource language whose training corpora are hard to obtain, and streaming end-to-end models struggle to learn the linguistic knowledge contained in large external texts during training; both problems limit the performance of streaming Vietnamese speech recognition. We therefore take the Vietnamese syllable as the modeling unit for both the language model and the streaming Vietnamese speech recognition model, and propose fusing a pre-trained Vietnamese language model into the streaming recognition model during the training stage: a new loss function, L_AED-LM, computed by minimizing the difference between the outputs of the pre-trained language model and of the decoder, helps the streaming recognition model learn Vietnamese linguistic knowledge and optimize its parameters. At decoding time, Shallow Fusion or character-FST techniques fuse the pre-trained language model again to further improve recognition. Experimental results on the VIVOS dataset show that, compared with the baseline, fusing the language model in training improves the streaming model's word error rate by 2.45%; fusing the language model again at decoding with Shallow Fusion or character-FST improves the word error rate by a further 1.35% and 4.75% respectively.”

针对古代经典文献的引用查找问题的数据构建与匹配方法(Data Construction and Matching Method for the Task of Ancient Classics Reference Detection)
Wei Li (李炜) | Yanqiu Shao (邵艳秋) | Mengxi Bi (毕梦曦)

“The intellectual constructions of ancient Chinese thinkers were often built on creative interpretations of earlier classics, and finding the quotations contained in these interpretations matters greatly for the study of intellectual history. For voluminous works, however, fully manual quotation labeling costs enormous time and labor, so an automated method to assist experts in finding quotations is important. Advances in natural language processing, represented by pre-trained language models, have improved computers' capacity for text processing and semantic understanding. Accordingly, we propose several unsupervised baseline methods, using expert knowledge or the semantic understanding of deep learning, to automatically find quotations of early classics in the works of ancient thinkers. To validate the proposed methods and to promote NLP applications in the digital humanities, we take as a case study the quotations of early Confucian classics by the highly influential Song-dynasty Neo-Confucians, the two Chengs (Cheng Hao and Cheng Yi), and build and release a corresponding quotation detection dataset. Experimental results show that our composite method, based on pre-trained language models and a contrastive learning objective, can judge quotation relations fairly accurately: ROC-AUC reaches 87.83 for clause-level quotation detection and 91.02 for paragraph-level detection. Further analysis shows the method not only finds quotation relations automatically but also effectively helps experts judge quotations more efficiently. It has broad application prospects in annotation compilation, text source tracing, duplicate-passage detection, quotation statistics, and the construction of indexed collections.”
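
The matching step can be pictured as scoring candidate passage pairs by cosine similarity of sentence embeddings and evaluating with ROC-AUC. The sketch below uses a generic mean-pooled Chinese BERT as a stand-in encoder; the paper's contrastive fine-tuning and its released dataset are not reproduced, and the pairs and labels are invented:

```python
import torch
from sklearn.metrics import roc_auc_score
from transformers import AutoTokenizer, AutoModel

# Generic Chinese encoder as a stand-in; the paper additionally fine-tunes
# with a contrastive objective, which is omitted in this sketch.
name = "bert-base-chinese"
tok = AutoTokenizer.from_pretrained(name)
enc = AutoModel.from_pretrained(name).eval()

def embed(texts):
    batch = tok(texts, padding=True, return_tensors="pt")
    with torch.no_grad():
        out = enc(**batch).last_hidden_state
    mask = batch.attention_mask.unsqueeze(-1)
    vec = (out * mask).sum(1) / mask.sum(1)          # mean pooling
    return torch.nn.functional.normalize(vec, dim=-1)

pairs = [("学而时习之", "传不习乎"), ("学而时习之", "今日天气甚佳")]
labels = [1, 0]  # invented: 1 = quotation relation, 0 = none
a = embed([p[0] for p in pairs]); b = embed([p[1] for p in pairs])
scores = (a * b).sum(-1).tolist()                    # cosine similarity
print("ROC-AUC:", roc_auc_score(labels, scores))
```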

基于批数据过采样的中医临床记录四诊描述抽取方法(Four Diagnostic Description Extraction in Clinical Records of Traditional Chinese Medicine with Batch Data Oversampling)
Yaqiang Wang (王亚强) | Kailun Li (李凯伦) | Yongguang Jiang (蒋永光) | Hongping Shu (舒红平)

“Extracting four-diagnostic descriptions from clinical records of traditional Chinese medicine (TCM) has important application value for improving the quality and efficiency of TCM syndrome differentiation and treatment, yet the extraction task remains largely unexplored, and imbalanced class distribution is one of its key challenges. This paper studies the task: we build a corpus for four-diagnostic description extraction from TCM clinical records; fine-tune a general pre-trained language model on unlabeled TCM clinical records for domain adaptation; and, using small-scale labeled data together with a batch data oversampling algorithm, train the extraction model. Experimental results show our method outperforms the comparison methods overall; relative to their best results, it raises the extraction F1 of rare categories by 2.13% on average.”

篇章级小句复合体结构自动分析(Chinese Clause Complex Structure Automatic Analysis on Passage)
Zhiyong Luo (罗智勇) | Ruifang Han (韩瑞昉) | Mingming Zhang (张明明) | Yujiao Han (韩玉蛟) | Zhilin Zhao (赵志琳)

“The naming-telling sharing relation is an important grammatical means by which clauses combine into clause complexes, and an important basis for Chinese discourse-level syntactic-semantic analysis. By introducing a sliding-window mechanism, we convert discourse text and its constituent-sharing relations into the problem of predicting constituent sharing within text fragments; for merging and selecting the predictions, we propose several candidate elimination strategies based on the grammatical restrictions of the naming-telling sharing relation. Experimental results show that, even without clause-complex boundary information, our method remains comparable to the traditional NTC-based method, and in particular improves recall by about 0.4 percentage points at positions where shared constituents are genuinely missing.”

基于话头话体共享结构信息的机器阅读理解研究(Research on Machine reading comprehension based on shared structure information between Naming and Telling)
Yujiao Han (韩玉蛟) | Zhiyong Luo (罗智勇) | Mingming Zhang (张明明) | Zhilin Zhao (赵志琳) | Qing Zhang (张青)

“Machine reading comprehension (MRC) tests a machine's ability to understand natural language by having it answer questions about a given context. Neural MRC models based on large-scale pre-trained language models have made important progress, but when answer elements, clue elements, and question elements span punctuation-delimited sentences or are related over long distances, the accuracy of answer extraction still needs improvement. Through naming-telling structure analysis within a text, we establish long-distance relations between punctuation-delimited sentences and complete the missing shared constituents, assisting answer extraction for machine reading comprehension; we design and implement an MRC model that fuses naming-telling structure information. Experimental results on the public dataset CMRC2018 show that the model improves F1 by 2.4% and EM by 6% over the baseline model.”

基于神经网络的半监督CRF中文分词(Semi-supervised CRF Chinese Word Segmentation based on Neural Network)
Zhiyong Luo (罗智勇) | Mingming Zhang (张明明) | Yujiao Han (韩玉蛟) | Zhilin Zhao (赵志琳)

“Word segmentation is one of the fundamental tasks of Chinese information processing. Fully supervised Chinese word segmentation is now relatively mature and performs well in the general domain, but it depends on large-scale annotated corpora and transfers poorly across domains, with especially weak recognition of cross-domain out-of-vocabulary words. To alleviate these problems, we propose a semi-supervised Chinese word segmentation framework that fully exploits relatively easy-to-obtain unlabeled target-domain text to achieve cross-domain transfer, and design and implement a semi-supervised CRF segmentation model based on a word memory network and sequence conditional entropy. Experimental results show the model gains up to 2.35% in F-score and 12.12% in out-of-vocabulary recall on several domain datasets, setting new best results on several of them.”

数字人文视角下的《史记》《汉书》比较研究(A Comparative Study of Shiji and Hanshu from the Perspective of Digital Humanities)
Zekun Deng (邓泽琨) | Hao Yang (杨浩) | Jun Wang (王军)

“The Shiji (Records of the Grand Historian) and the Hanshu (Book of Han) hold enduring research value. Although studies of their similarities and differences are already abundant, their comprehensiveness, completeness, scientific rigor, and objectivity remain insufficient. From a digital humanities perspective, this paper uses computational linguistic methods to compare the two works through multi-granularity, multi-angle analyses of characters, words, named entities, and paragraphs. First, we compare the distributions and characteristics of characters, words, and named entities in the two works, distilling by exhaustive enumeration their main commonalities and differences in content, and revealing the important political, cultural, and intellectual transformations and continuities between the period before Emperor Wu of Han and the period from Emperor Wu to the fall of the Western Han. Second, using a text similarity algorithm that incorporates named entities as external features, we automatically discover parallel passages between the Shiji and the Hanshu, successfully identifying borrowed passages that earlier scholars' manual methods had missed, giving a more complete and three-dimensional picture of the Hanshu's inheritance from the Shiji. Third, by computing the longest common subsequence between parallel passages, we automatically derive the differences between them, demonstrating macro-statistically the difference in literary style between the Hanshu and the Shiji and further interpreting their linguistic characteristics at the micro level, providing new angles and insights for understanding the two works' parallel passages. Standing within the digital humanities, this study re-examines and re-discovers Chinese classics transmitted across millennia using advanced computational methods, and its approach offers reference value for present-day research on ancient texts.”

生成模型在层次结构极限多标签文本分类中的应用(Generation Model for Hierarchical Extreme Multi-label Text Classification)
Linqing Chen (陈林卿) | Dawang He (何大望) | Yansi Xiao (肖燕思) | Yilin Liu (刘依林) | Jianping Lu (陆剑平) | Weilei Wang (王为磊)

“Hierarchical extreme multi-label text classification is an important and challenging topic in natural language processing research. The task's label set is enormous and self-contained, and labels carry both inter-level dependencies and same-level correlations, which further increase the difficulty. This paper casts hierarchical extreme multi-label text classification as a sequence transduction problem, treating the output labels as a sequence so that text-relevant category labels can be generated directly from among hundreds of thousands of labels. A soft-constraint mechanism and compound vocabulary mapping exploit the hierarchical structure and correlation information among labels during decoding. Experimental results show the proposed method achieves meaningful performance gains over baseline models. Further analysis shows it not only captures and exploits the hypernym-hyponym relations between labels at different levels, but also tolerates, to some degree, the noise carried by the extreme multi-label taxonomy itself.”

基于多源知识融合的领域情感词典表示学习研究(Domain Sentiment Lexicon Representation Learning Based on Multi-source Knowledge Fusion)
Ruihua Qi (祁瑞华) | Jia Wei (魏佳) | Zhen Shao (邵震) | Xu Guo (郭旭) | Heng Chen (陈恒)

“This paper aims to address the relative scarcity of annotated data and the insufficiency of sentiment semantic representation in domain sentiment lexicon construction. We compute joint weights from the domain differences of multi-source data, fuse prior sentiment knowledge with Fasttext word-vector representation learning, and map sentiment semantic knowledge into a new word-vector space, automatically building, from unlabeled data, domain sentiment lexicons suited to big-data, multi-domain, and multilingual environments. Comparative experiments on public Chinese and English multi-domain datasets show that, compared with sentiment lexicon methods and pre-trained word vector methods, our multi-source knowledge fusion approach clearly improves classification accuracy on the experimental datasets, with good robustness across algorithms, languages, domains, and datasets. Ablation experiments further verify each module's contribution to improving sentiment classification.”

俄语网络仇恨言论语料库研究与构建(A Russian Internet Corpus for Hate Speech Detection)
Xin Wen (温昕) | Minjiao Zheng (郑敏娇)

“In recent years, the rapid development of Internet technology has brought great convenience to society as a whole while also intensifying the spread of hate speech. Hate speech can constitute cyber-violence and induce hate crimes, posing a grave threat to public civility and to order in cyberspace, so proactive monitoring and restraint of online hate speech is highly significant. Academic research on Russian online hate speech is currently insufficient, and Russian online hate speech corpora in particular are lacking, which greatly limits the development of related technologies and applications. Since the outbreak of the Russo-Ukrainian conflict in 2022, research on and construction of a Russian online hate speech corpus has become even more urgent. In this paper, the authors propose a fine-grained construction and annotation scheme for a Russian online hate speech corpus and, based on this scheme, create the first targeted, topically unified Russian hate speech corpus, containing 20,476 text items.”

基于强化学习的古今汉语句子对齐研究(Research on Sentence Alignment of Ancient and Modern Chinese based on Reinforcement Learning)
Kuai Yu (喻快) | Yanqiu Shao (邵艳秋) | Wei Li (李炜)

“Supervised machine translation based on deep learning achieves good results, but training requires large amounts of high-quality aligned corpora. For ancient-to-modern Chinese translation, high-quality parallel corpora are scarce, while roughly aligned document- and paragraph-level corpora are relatively easy to obtain, so corpus alignment is well worth studying. In traditional sentence alignment for bilingual parallel corpora, methods build a composite criterion from grammatical cues in the bilingual text, such as length, vocabulary, and co-occurring characters, to measure the similarity between sentence pairs. Although such methods work well for one-to-one alignment, their capacity for matching sentence semantics is limited, and they perform poorly on some many-to-many alignment patterns. In this paper, we propose using the rapidly developing pre-trained language models, with their strong semantic representation capacity, to capture bilingual semantic information; since a pre-trained language model alone considers only relatively local information, we further propose a reinforcement-learning training objective based on a dynamic programming algorithm to integrate global paragraph information, trained in an unsupervised manner. Experimental results show that the model trained with our method outperforms the previously best-performing baseline, with especially large gains on the many-to-many alignment patterns that traditional models handle poorly.”

基于情感增强非参数模型的社交媒体观点聚类(A Sentiment Enhanced Nonparametric Model for Social Media Opinion Clustering)
Kan Liu (刘勘) | Yu Chen (陈昱) | Jiarui He (何佳瑞)

“This paper uses text clustering to group social media texts by the opinions their authors advocate, presenting directly the different stances held by online communities. Given the complex patterns and rich emotions of social media text, we propose enhancing an existing nonparametric short-text clustering algorithm with sentiment distributions, modeling text sentiment with Gaussian distributions; the method captures sentiment features while automatically determining the number of clusters, achieving opinion clustering. Experiments on public datasets show that the method surpasses existing models on several clustering metrics, with a more pronounced advantage on strongly subjective datasets.”

Discourse Markers as the Classificatory Factors of Speech Acts
Da Qi | Chenliang Zhou | Haitao Liu

“Since the debut of the speech act theory, the classification standards of speech acts have been in dispute. Traditional abstract taxonomies seem insufficient to meet the needs of artificial intelligence for identifying and even understanding speech acts. To facilitate the automatic identification of the communicative intentions in human dialogs, scholars have tried some data-driven methods based on speech-act annotated corpora. However, few studies have objectively evaluated those classification schemes. In this regard, the current study applied the frequencies of the eleven discourse markers (oh, well, and, but, or, so, because, now, then, I mean, and you know) proposed by Schiffrin (1987) to investigate whether they can be effective indicators of speech act variations. The results showed that the five speech acts of Agreement can be well classified in terms of their functions by the frequencies of discourse markers. Moreover, it was found that the discourse markers well and oh are rather efficacious in differentiating distinct speech acts. This paper indicates that quantitative indexes can reflect the characteristics of human speech acts, and more objective and data-based classification schemes might be achieved based on these metrics.”
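
The measurement behind such a study reduces to per-utterance frequency counts of the eleven markers. A minimal sketch on toy utterances (the actual work uses a speech-act annotated corpus):

```python
import re
from collections import Counter

# Schiffrin's (1987) eleven discourse markers, as listed in the abstract.
MARKERS = ["oh", "well", "and", "but", "or", "so",
           "because", "now", "then", "i mean", "you know"]

def marker_counts(utterance: str) -> Counter:
    """Count occurrences of each discourse marker in one utterance."""
    text = utterance.lower()
    counts = Counter()
    for m in MARKERS:
        counts[m] = len(re.findall(rf"\b{re.escape(m)}\b", text))
    return counts

# Toy utterances standing in for speech-act annotated dialog turns.
print(marker_counts("Well, I mean, you know it works."))
print(marker_counts("Oh! But then it failed."))
```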

DIFM:An effective deep interaction and fusion model for sentence matching
Jiang Kexin | Zhao Yahui | Cui Rongyi

“Natural language sentence matching is the task of comparing two sentences and identifying the relationship between them. It has a wide range of applications in natural language processing tasks such as reading comprehension and question answering systems. The main approach is to compute the interaction between the text representations of sentence pairs through an attention mechanism, which can extract the semantic information between sentence pairs well. However, such methods fail to capture deep semantic information or to effectively fuse the semantic information of the sentences. To solve this problem, we propose a sentence matching method based on deep interaction and fusion. We first use pre-trained GloVe word vectors and character-level word vectors to obtain word embedding representations of the two sentences. In the encoding layer, we use a bidirectional LSTM to encode the sentence pairs. In the interaction layer, we initially fuse the information of the sentence pairs to obtain low-level semantic information; at the same time, we use the bi-directional attention from machine reading comprehension models and self-attention to obtain high-level semantic information. We use a heuristic fusion function to fuse the low-level and high-level semantic information into the final semantic information, and finally we use a convolutional neural network to predict the answer. We evaluate our model on two tasks: textual entailment recognition and paraphrase recognition. We conduct experiments on the SNLI dataset for the textual entailment task and the Quora dataset for the paraphrase recognition task. The experimental results show that the proposed algorithm can effectively fuse different kinds of semantic information, verifying its effectiveness on sentence matching tasks.”
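
One common form of "heuristic fusion" in matching models combines two representations with their difference and elementwise product, then gates between them. The abstract does not spell out DIFM's exact function, so the PyTorch sketch below is an assumed illustration of the general technique, not the paper's definition:

```python
import torch
import torch.nn as nn

class HeuristicFusion(nn.Module):
    """Gated fusion of low-level (x) and high-level (y) semantic vectors.

    A common formulation in the matching literature; DIFM's exact
    fusion function may differ.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(4 * dim, dim)   # candidate fused representation
        self.gate = nn.Linear(4 * dim, dim)   # how much fusion to let through

    def forward(self, x, y):
        feats = torch.cat([x, y, x - y, x * y], dim=-1)
        m = torch.tanh(self.proj(feats))
        g = torch.sigmoid(self.gate(feats))
        return g * m + (1 - g) * x            # fall back to x when gate closes

fuse = HeuristicFusion(dim=8)
out = fuse(torch.randn(2, 8), torch.randn(2, 8))
print(out.shape)  # torch.Size([2, 8])
```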

ConIsI: A Contrastive Framework with Inter-sentence Interaction for Self-supervised Sentence Representation
Sun Meng | Huang Degen

“Learning sentence representation is a fundamental task in natural language processing and has been studied extensively. Recently, many works have obtained high-quality sentence representations based on contrastive learning from pre-trained models. However, these works suffer from the inconsistency of input forms between the pre-training and fine-tuning stages. Also, they typically encode a sentence independently and lack feature interaction between sentences. To conquer these issues, we propose a novel Contrastive framework with Inter-sentence Interaction (ConIsI), which introduces a sentence-level objective to improve sentence representation based on contrastive learning by fine-grained interaction between sentences. The sentence-level objective guides the model to focus on fine-grained semantic information by feature interaction between sentences, and we design three different sentence construction strategies to explore its effect. We conduct experiments on seven Semantic Textual Similarity (STS) tasks. The experimental results show that our ConIsI models based on BERT-base and RoBERTa-base achieve state-of-the-art performance, substantially outperforming the previous best models SimCSE-BERT-base and SimCSE-RoBERTa-base by 2.05% and 0.77% respectively.”
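
The contrastive backbone that such frameworks build on is an in-batch InfoNCE objective: two encoded views of the same sentence are pulled together and pushed away from the other batch members. A self-contained sketch on random vectors; ConIsI's inter-sentence interaction objective and its sentence construction strategies are not reproduced:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.05):
    """In-batch contrastive loss: z1[i] and z2[i] are views of sentence i."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.T / temperature          # (batch, batch) similarity matrix
    labels = torch.arange(z1.size(0))      # the diagonal holds the positives
    return F.cross_entropy(sim, labels)

# Random stand-ins for two encoded views of a batch of 4 sentences.
z1, z2 = torch.randn(4, 16), torch.randn(4, 16)
print(info_nce(z1, z2).item())
```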

Data Synthesis and Iterative Refinement for Neural Semantic Parsing without Annotated Logical Forms
Wu Shan | Chen Bo | Han Xianpei | Sun Le

“Semantic parsing aims to convert natural language utterances to logical forms. A critical challenge for constructing semantic parsers is the lack of labeled data. In this paper, we propose a data synthesis and iterative refinement framework for neural semantic parsing, which can build semantic parsers without annotated logical forms. We first generate a naive corpus by sampling logical forms from knowledge bases and synthesizing their canonical utterances. Then, we propose a bootstrapping algorithm to iteratively refine the data and the model, via a denoising language model and knowledge-constrained decoding. Experimental results show that our approach achieves competitive performance on the GEO, ATIS and OVERNIGHT datasets in both unsupervised and semi-supervised data settings.”

EventBERT: Incorporating Event-based Semantics for Natural Language Understanding
Zou Anni | Zhang Zhuosheng | Zhao Hai

“Natural language understanding tasks require a comprehensive understanding of natural language and further reasoning about it, on the basis of holistic information at different levels, to gain comprehensive knowledge. In recent years, pre-trained language models (PrLMs) have shown impressive performance in natural language understanding. However, they rely mainly on extracting context-sensitive statistical patterns without explicitly modeling linguistic information, such as the semantic relationships entailed in natural language. In this work, we propose EventBERT, an event-based semantic representation model that takes BERT as the backbone and refines it with event-based structural semantics via graph convolutional networks. EventBERT benefits simultaneously from the rich event-based structures embodied in the graph and the contextual semantics learned in the pre-trained model BERT. Experimental results on the GLUE benchmark show that the proposed model consistently outperforms the baseline model.”

An Exploration of Prompt-Based Zero-Shot Relation Extraction Method
Zhao Jun | Hu Yuan | Xu Nuo | Gui Tao | Zhang Qi | Chen Yunwen | Gao Xiang

“Zero-shot relation extraction is an important method for dealing with the newly emerging relations in the real world that lack labeled data. However, the mainstream two-tower zero-shot methods usually rely on large-scale, in-domain labeled data of predefined relations. In this work, we view zero-shot relation extraction as a semantic matching task optimized by prompt-tuning, which still maintains superior generalization performance when the labeled data of predefined relations are extremely scarce. To maximize the efficiency of data exploitation, instead of directly fine-tuning, we introduce a prompt-tuning technique to elicit the existing relational knowledge in pre-trained language models (PLMs). In addition, very few relation descriptions are exposed to the model during training, which we argue is the performance bottleneck of two-tower methods. To break through this bottleneck, we model the semantic interaction between relational instances and their descriptions directly during encoding. Experimental results on two academic datasets show that (1) our method outperforms the previous state-of-the-art method by a large margin with different samples of predefined relations; (2) this advantage is further amplified in the low-resource scenario.”

Abstains from Prediction: Towards Robust Relation Extraction in Real World
Zhao Jun | Zhang Yongxin | Xu Nuo | Gui Tao | Zhang Qi | Chen Yunwen | Gao Xiang

“Supervised learning is a classic paradigm of relation extraction (RE). However, a well-performing model can still confidently make arbitrarily wrong predictions when exposed to samples of unseen relations. In this work, we propose a relation extraction method with a rejection option to improve robustness to unseen relations. To enable the classifier to reject unseen relations, we introduce contrastive learning techniques and carefully design a set of class-preserving transformations to improve the discriminability between known and unseen relations. Based on the learned representations, inputs of unseen relations are assigned a low confidence score and rejected. Off-the-shelf open relation extraction (OpenRE) methods can be adopted to discover the potential relations in these rejected inputs. In addition, we find that the rejection can be further improved via readily available distantly supervised data. Experiments on two public datasets prove the effectiveness of our method in capturing discriminative representations for unseen relation rejection.”

Using Extracted Emotion Cause to Improve Content-Relevance for Empathetic Conversation Generation
Zou Minghui | Pan Rui | Zhang Sai | Zhang Xiaowang

“Empathetic conversation generation intends to endow the open-domain conversation model with the capability of understanding, interpreting, and expressing emotion. During a conversation, humans express not only their emotional state but also the stimulus that caused the emotion, i.e., the emotion cause. Most existing approaches focus on emotion modeling, emotion recognition and prediction, and emotion fusion generation, ignoring the critical aspect of the emotion cause, which results in generating responses with irrelevant content. The emotion cause can help the model understand the user's emotion and make the generated responses more content-relevant. However, using the emotion cause to enhance empathetic conversation generation is challenging. First, the model needs to accurately identify the emotion cause without large-scale labeled data. Second, the model needs to effectively integrate the emotion cause into the generation process. To this end, we present an emotion cause extractor using a semi-supervised training method and an empathetic conversation generator using a biased self-attention mechanism to overcome these two issues. Experimental results indicate that our proposed emotion cause extractor improves recall scores markedly compared to the baselines, and the proposed empathetic conversation generator has superior performance and improves the content-relevance of the generated responses.”

To Adapt or to Fine-tune: A Case Study on Abstractive Summarization
Zheng Zhao | Pinzhen Chen

“Recent advances in the field of abstractive summarization leverage pre-trained language models rather than train a model from scratch. However, such models are sluggish to train and accompanied by a massive overhead. Researchers have proposed a few lightweight alternatives such as smaller adapters to mitigate the drawbacks. Nonetheless, it remains uncertain whether using adapters benefits the task of summarization, in terms of improved efficiency without an unpleasant sacrifice in performance. In this work, we carry out multifaceted investigations on fine-tuning and adapters for summarization tasks with varying complexity: language, domain, and task transfer. In our experiments, fine-tuning a pre-trained language model generally attains a better performance than using adapters; the performance gap positively correlates with the amount of training data used. Notably, adapters exceed fine-tuning under extremely low-resource conditions. We further provide insights on multilinguality, model convergence, and robustness, hoping to shed light on the pragmatic choice of fine-tuning or adapters in abstractive summarization.”

MRC-based Medical NER with Multi-task Learning and Multi-strategies
Xiaojing Du | Jia Yuxiang | Zan Hongying

“Medical named entity recognition (NER), a fundamental task of medical information extraction, is crucial for medical knowledge graph construction, medical question answering, automatic medical record analysis, etc. Compared with named entities (NEs) in the general domain, medical named entities are usually more complex and prone to be nested. To cope with both flat NEs and nested NEs, we propose an MRC-based approach with multi-task learning and multi-strategies. NER can be treated as a sequence labeling (SL) task or a span boundary detection (SBD) task. We integrate an MRC-CRF model for SL and an MRC-Biaffine model for SBD into a multi-task learning architecture, and select the more efficient MRC-CRF as the final decoder. To further improve the model, we employ multi-strategies, including adaptive pre-training, adversarial training, and model stacking with cross-validation. Experiments on both the nested NER corpus CMeEE and the flat NER corpus CCKS2019 show the effectiveness of the MRC-based model with multi-task learning and multi-strategies.”

A Multi-Gate Encoder for Joint Entity and Relation Extraction
Xiong Xiong | Liu Yunfei | Liu Anqi | Gong Shuai | Li Shengyang

“Named entity recognition and relation extraction are core sub-tasks of relational triple extraction. Recent studies have used parameter sharing or joint decoding to create interaction between these two tasks. However, it is difficult to ensure the specificity of task-specific features while the two tasks interact properly. In this paper, we propose a multi-gate encoder that models bidirectional task interaction while keeping sufficient feature specificity, based on a gating mechanism. Specifically, we design two types of independent gates: task gates to generate task-specific features and interaction gates to generate instructive features to guide the opposite task. Our experiments show that our method increases the state-of-the-art (SOTA) relation F1 scores on the ACE04, ACE05 and SciERC datasets to 63.8% (+1.3%), 68.2% (+1.4%), and 39.4% (+1.0%), respectively, with higher inference speed than the previous SOTA model.”

Improving Event Temporal Relation Classification via Auxiliary Label-Aware Contrastive Learning
Sun Tiesen | Li Lishuang

“Event Temporal Relation Classification (ETRC) is crucial to natural language understanding. In recent years, mainstream ETRC methods have failed to exploit the rich semantic information contained in gold temporal relation labels, which is lost in discrete one-hot labels. To alleviate this loss of semantic information, we propose learning the temporal semantic information of the gold labels by Auxiliary Contrastive Learning (TempACL). Different from traditional contrastive learning methods, which further train the Pre-Trained Language Model (PTLM) in unsupervised settings before fine-tuning on target tasks, we design a supervised contrastive learning framework and make three improvements. First, we design a new data augmentation method that generates augmentation data by matching templates we establish with gold labels. Second, we propose patient contrastive learning and design three patient strategies. Third, we design a label-aware contrastive learning loss function. Extensive experimental results show that TempACL effectively adapts contrastive learning to supervised learning tasks, which remains a challenge in practice. TempACL achieves new state-of-the-art results on TB-Dense and MATRES, outperforming the baseline model by up to 5.37% F1 on TB-Dense and 1.81% F1 on MATRES.”

Towards Making the Most of Pre-trained Translation Model for Quality Estimation
Li Chunyou | Di Hui | Huang Hui | Ouchi Kazushige | Chen Yufeng | Liu Jian | Xu Jinan

“Machine translation quality estimation (QE) aims to evaluate the quality of machine translation automatically without relying on any reference. One common practice is applying the translation model as a feature extractor. However, there exist several discrepancies between the translation model and the QE model. The translation model is trained in an autoregressive manner, while the QE model is performed in a non-autoregressive manner. Besides, the translation model only learns to model human-crafted parallel data, while the QE model needs to model machine-translated noisy data. In order to bridge these discrepancies, we propose two strategies to post-train the translation model, namely Conditional Masked Language Modeling (CMLM) and Denoising Restoration (DR). Specifically, CMLM learns to predict masked tokens at the target side conditioned on the source sentence. DR firstly introduces noise to the target side of parallel data, and the model is trained to detect and recover the introduced noise. Both strategies can adapt the pre-trained translation model to the QE-style prediction task. Experimental results show that our model achieves impressive results, significantly outperforming the baseline model, verifying the effectiveness of our proposed methods.”

Supervised Contrastive Learning for Cross-lingual Transfer Learning
Wang Shuaibo | Di Hui | Huang Hui | Lai Siyu | Ouchi Kazushige | Chen Yufeng | Xu Jinan

“Multilingual pre-trained representations are not well-aligned by nature, which harms their performance on cross-lingual tasks. Previous methods propose to post-align the multilingual pre-trained representations by multi-view alignment or contrastive learning. However, we argue that both methods are not suitable for the cross-lingual classification objective, and in this paper we propose a simple yet effective method to better align the pre-trained representations. On the basis of cross-lingual data augmentations, we make a minor modification to the canonical contrastive loss, to remove false-negative examples that should not be contrasted. Augmentations with the same class are brought close to the anchor sample, and augmentations with a different class are pushed apart. Experimental results on three cross-lingual tasks from the XTREME benchmark show our method can improve the transfer performance by a large margin with no additional resources needed. We also provide a detailed analysis and comparison between different post-alignment strategies.”

Interactive Mongolian Question Answer Matching Model Based on Attention Mechanism in the Law Domain
Peng Yutao | Wang Weihua | Bao Feilong

“Mongolian question answer matching is challenging, since Mongolian is a low-resource language and its complex morphological structures lead to data sparsity. In this work, we propose an Interactive Mongolian Question Answer Matching Model (IMQAMM) based on an attention mechanism for Mongolian question answering systems. The key parts of the model are interactive information enhancement and max-mean pooling matching. Interactive information enhancement contains sequence enhancement and multi-cast attention. Sequence enhancement aims to provide a subsequent encoder with an enhanced sequence representation, and multi-cast attention is designed to generate scalar features through multiple attention mechanisms. Max-mean pooling matching obtains the matching vectors for aggregation. Moreover, we introduce a Mongolian morpheme representation to better learn the semantic features. The model is evaluated on a Mongolian corpus containing question-answer pairs of various categories in the law domain. Experimental results demonstrate that our proposed Mongolian question answer matching model significantly outperforms the baseline models.”

TCM-SD: A Benchmark for Probing Syndrome Differentiation via Natural Language Processing
Ren Mucheng | Huang Heyan | Zhou Yuxiang | Cao Qianwen | Bu Yuan | Gao Yang

“Traditional Chinese Medicine (TCM) is a natural, safe, and effective therapy that has spread and been applied worldwide. The unique TCM diagnosis and treatment system requires a comprehensive analysis of a patient's symptoms hidden in the clinical record written in free text. Prior studies have shown that this system can be informationized and made intelligent with the aid of artificial intelligence (AI) technology, such as natural language processing (NLP). However, existing datasets are of insufficient quality and quantity to support the further development of data-driven AI technology in TCM. Therefore, in this paper, we focus on the core task of the TCM diagnosis and treatment system, syndrome differentiation (SD), and we introduce the first public large-scale benchmark for SD, called TCM-SD. Our benchmark contains 54,152 real-world clinical records covering 148 syndromes. Furthermore, we collect a large-scale unlabelled textual corpus in the field of TCM and propose a domain-specific pre-trained language model, called ZYBERT. We conducted experiments using deep neural networks to establish a strong performance baseline, reveal various challenges in SD, and prove the potential of domain-specific pre-trained language models. Our study and analysis reveal opportunities for incorporating computer science and linguistics knowledge to explore the empirical validity of TCM theories.”

COMPILING: A Benchmark Dataset for Chinese Complexity Controllable Definition Generation
Yuan Jiaxin | Kong Cunliang | Xie Chenhui | Yang Liner | Yang Erhong

“The definition generation task aims to generate a word's definition within a specific context automatically. However, owing to the lack of datasets covering different complexities, the definitions produced by models tend to keep the same complexity level. This paper proposes a novel task of generating definitions for a word at controllable complexity levels. Correspondingly, we introduce COMPILING, a dataset providing detailed information about Chinese definitions, in which each definition is labeled with its complexity level. The COMPILING dataset includes 74,303 words and 106,882 definitions. To the best of our knowledge, it is the largest dataset for the Chinese definition generation task. We select various representative generation methods as baselines for this task and conduct evaluations, which illustrates that our dataset plays an outstanding role in assisting models in generating definitions at different complexity levels. We believe that the COMPILING dataset will benefit further research in complexity-controllable definition generation.”

Can We Really Trust Explanations? Evaluating the Stability of Feature Attribution Explanation Methods via Adversarial Attack
Yang Zhao | Zhang Yuanzhe | Jiang Zhongtao | Ju Yiming | Zhao Jun | Liu Kang

“Explanations can increase the transparency of neural networks and make them more trustworthy. However, can we really trust explanations generated by the existing explanation methods? If the explanation methods are not stable enough, the credibility of the explanations will be greatly reduced. Previous studies have seldom considered this important issue. To this end, this paper proposes a new evaluation framework to evaluate the stability of current typical feature attribution explanation methods via textual adversarial attack. Our framework can generate adversarial examples with similar textual semantics. Such adversarial examples will make the original models produce the same outputs, but make most current explanation methods deduce completely different explanations. Under this framework, we test five classical explanation methods and show their performance on several stability-related metrics. Experimental results show our evaluation is effective and can reveal the stability performance of existing explanation methods.”

Dynamic Negative Example Construction for Grammatical Error Correction using Contrastive Learning
He Junyi | Zhuang Junbin | Li Xia

“Grammatical error correction (GEC) aims at correcting texts with different types of grammatical errors into natural and correct forms. Due to differences in error type distribution and error density, current grammatical error correction systems may over-correct texts and produce low precision. To address this issue, in this paper we propose a dynamic negative example construction method for grammatical error correction using contrastive learning. The proposed method can construct sufficient negative examples with diverse grammatical errors, and can be used dynamically during model training. The constructed negative examples are beneficial for the GEC model to correct sentences precisely and suppress over-correction. Experimental results show that our proposed method enhances model precision, proving its effectiveness.”

SPACL: Shared-Private Architecture based on Contrastive Learning for Multi-domain Text Classification
Xiong Guoding | Zhou Yongmei | Wang Deheng | Ouyang Zhouhao

“With the development of deep learning in recent years, text classification research has achieved remarkable results. However, text classification often requires a large amount of annotated data, and data from different domains often force the model to learn different knowledge. It is often difficult for models to distinguish data labeled in different domains, and sometimes data from different domains can even damage the model's classification ability and reduce its overall performance. To address these issues, we propose a shared-private architecture based on contrastive learning for multi-domain text classification, which can improve both the accuracy and robustness of classifiers. Extensive experiments are conducted on two public datasets. The results show that our approach achieves state-of-the-art performance in multi-domain text classification.”

Low-Resource Named Entity Recognition Based on Multi-hop Dependency Trigger
Wu Jiangxu | Yan Peiqi

“This paper introduces DepTrigger, a simple and effective model for low-resource named entity recognition (NER) based on multi-hop dependency triggers. Dependency triggers refer to salient nodes relative to an entity in the dependency graph of a context sentence. Our main observation is that triggers generally play an important role in recognizing the location and the type of an entity in a sentence. Instead of exploiting manual labeling of triggers, we use a syntactic parser to annotate triggers automatically. We train DepTrigger using two independent model architectures: a Match Network encoder and an Entity Recognition Network encoder. Compared with the previous model TriggerNER, DepTrigger performs better on long sentences while still maintaining good performance on short sentences. Our framework is also significantly more cost-effective in real-world business settings.”

Fundamental Analysis based Neural Network for Stock Movement Prediction
Zheng Yangjia | Li Xia | Ma Junteng | Chen Yuan

“Stock movements are influenced not only by historical prices, but also by information outside the market, such as social media and news about the stock or related stocks. In practice, the news or price of a stock on a given day is normally influenced by different past days with different weights, and news and prices can also influence each other. To address this, in this paper we propose a fundamental-analysis-based neural network for stock movement prediction. First, we propose three new technical indicators based on raw prices, grounded in finance theory, as the basic encoding of each day's prices. Then, we introduce a co-attention mechanism to capture sufficient context information between text and prices across every day within a time window. Based on the mutual promotion and influence of text and price at different times, we obtain a richer stock representation. We perform extensive experiments on the real-world StockNet dataset, and the experimental results demonstrate the effectiveness of our method.”
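
The indicator step can be illustrated with simple price-derived features. The paper's three indicators are not specified in the abstract, so the ones below (daily return, moving-average ratio, rolling volatility) are generic stand-ins, not the authors' definitions:

```python
import numpy as np

def indicators(close: np.ndarray, window: int = 5):
    """Per-day features derived from raw closing prices (generic stand-ins)."""
    ret = np.diff(close) / close[:-1]                    # daily return
    ma = np.convolve(close, np.ones(window) / window, "valid")
    ma_ratio = close[window - 1:] / ma                   # price vs. moving average
    vol = np.array([ret[max(0, i - window):i].std()      # rolling volatility
                    for i in range(1, len(ret) + 1)])
    return ret, ma_ratio, vol

close = np.array([10.0, 10.2, 10.1, 10.5, 10.4, 10.8, 11.0])
ret, ma_ratio, vol = indicators(close)
print(ret.round(3), ma_ratio.round(3), vol.round(3), sep="\n")
```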