2024
面向中文多方对话的机器阅读理解研究(Research on Machine Reading Comprehension for Chinese Multi-party Dialogues)
Yuru Jiang (蒋玉茹)
|
Yu Li (李宇)
|
Tingting Na (那婷婷)
|
Yangsen Zhang (张仰森)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
In the field of machine reading comprehension (MRC), processing and analyzing multi-party dialogues has long been a challenging task. Given the scarcity of relevant data resources for Chinese, this study constructs the DialogueMRC dataset to advance research in this area. DialogueMRC, the first MRC dataset for Chinese multi-party dialogue, contains 705 multi-party dialogue instances, covering 24,451 utterances and 8,305 question-answer pairs. Unlike previous MRC datasets, DialogueMRC emphasizes deep understanding of the dynamic dialogue process, placing higher demands on a model's ability to handle the complexity of multi-party dialogue and to parse discourse structure. To address these challenges, this study proposes the Discourse-Structure-aware QA Model for Chinese Multi-party Dialogue (DSQA-CMD), which combines the question answering and discourse parsing tasks to improve understanding of the dialogue context. Experimental results show clear advantages over typical fine-tuned pre-trained language models: compared with a Longformer-based method, DSQA-CMD improves F1 and EM on the MRC task by 5.4% and 10.0%, respectively, and it also outperforms current mainstream large language models, demonstrating the effectiveness of the proposed approach.
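The abstract above describes combining the question answering objective with an auxiliary discourse parsing objective. A minimal sketch of such a multi-task loss, assuming a simple weighted sum; the function name, signature, and weighting are illustrative, not the paper's exact formulation:

```python
def multitask_loss(qa_loss: float, discourse_loss: float, lam: float = 0.5) -> float:
    """Weighted sum of the main QA objective and an auxiliary
    discourse-parsing objective, as in many multi-task setups.
    `lam` trades off the auxiliary task; its value here is illustrative.
    """
    return qa_loss + lam * discourse_loss
```

With `lam = 0`, training reduces to plain QA fine-tuning, which makes the contribution of the discourse signal easy to ablate.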
DLM: A Decoupled Learning Model for Long-tailed Polyphone Disambiguation in Mandarin
Beibei Gao
|
Yangsen Zhang
|
Ga Xiang
|
Yushan Jiang
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Grapheme-to-phoneme conversion (G2P) is a critical component of the text-to-speech system (TTS), where polyphone disambiguation is the most crucial task. However, polyphone disambiguation datasets often suffer from the long-tail problem, and context learning for polyphonic characters commonly stems from a single dimension. In this paper, we propose a novel model DLM: a Decoupled Learning Model for long-tailed polyphone disambiguation in Mandarin. Firstly, DLM decouples representation and classification learnings. It can apply different data samplers for each stage to obtain an optimal training data distribution. This can mitigate the long-tail problem. Secondly, two improved attention mechanisms and a gradual conversion strategy are integrated into the DLM, which achieve transition learning of context from local to global. Finally, to evaluate the effectiveness of DLM, we construct a balanced polyphone disambiguation corpus via in-context learning. Experiments on the benchmark CPP dataset demonstrate that DLM achieves a boosted accuracy of 99.07%. Moreover, DLM improves the disambiguation performance of long-tailed polyphonic characters. For many long-tailed characters, DLM even achieves an accuracy of 100%.
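The decoupling described above lets each training stage use a different data sampler. A minimal sketch of the class-balanced sampler typically used in the second (classifier) stage, assuming `(features, label)` pairs; the function name and signature are illustrative, not the paper's API:

```python
import random
from collections import defaultdict

def class_balanced_sample(examples, n_samples, seed=0):
    """Resample a long-tailed dataset so labels are drawn uniformly.

    In a decoupled setup, representation learning (stage 1) keeps the
    natural, instance-balanced distribution, while classifier learning
    (stage 2) uses a class-balanced sampler like this one.
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for features, label in examples:
        by_label[label].append((features, label))
    labels = sorted(by_label)
    # Pick a label uniformly at random, then an instance within it,
    # so tail classes are seen as often as head classes.
    return [rng.choice(by_label[rng.choice(labels)]) for _ in range(n_samples)]
```

Because the label is drawn before the instance, a pronunciation seen once in the corpus is sampled as often as one seen thousands of times, which is what mitigates the long tail.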
2023
基于RoBERTa的中文仇恨言论侦测方法研究(Chinese Hate Speech detection method Based on RoBERTa-WWM)
Xiaojun Rao (饶晓俊)
|
Yangsen Zhang (张仰森)
|
Shuang Peng (彭爽)
|
Qilong Jia (贾启龙)
|
Xueyang Liu (刘雪阳)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics
With the spread of the Internet, social media has provided a platform for exchanging views, but its virtuality and anonymity have also accelerated the spread of hate speech, so automatic hate speech detection is essential to the healthy development of social media platforms. To address this problem, we construct a Chinese hate speech dataset, CHSD, and propose a Chinese hate speech detection model, RoBERTa-CHHSD. The model first uses the RoBERTa pre-trained language model to encode Chinese text and extract textual features; these are then fed into a TextCNN model and a Bi-GRU model, which extract multi-level local semantic features and global inter-sentence dependency information, respectively. The two outputs are fused to capture deeper hate-speech features and classify the text, thereby detecting Chinese hate speech. Experimental results show that the model achieves an F1 score of 89.12% on the CHSD dataset, a 1.76% improvement over the current best mainstream model, RoBERTa-WWM.
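The abstract above fuses local (TextCNN-style) and global (Bi-GRU-style) features before classification. A minimal sketch of late fusion by concatenation followed by a linear scoring head, assuming plain feature vectors; the names and the exact fusion layer are illustrative assumptions, not the paper's architecture:

```python
def fuse_and_score(local_feats, global_feats, weights, bias=0.0):
    """Concatenate local (TextCNN-style) and global (Bi-GRU-style)
    feature vectors, then apply a linear scoring head over the fused
    vector. Sketch only; the paper's fusion and classifier may differ.
    """
    fused = list(local_feats) + list(global_feats)
    assert len(weights) == len(fused), "one weight per fused feature"
    return sum(w * x for w, x in zip(weights, fused)) + bias
```

Concatenation keeps both feature streams intact and lets the classifier weight local n-gram evidence and global dependency evidence independently.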
CCL23-Eval 任务7系统报告:基于序列标注和指针生成网络的语法纠错方法(System Report for CCL23-Eval Task 7:A Syntactic Error Correction Approach Based on Sequence Labeling and Pointer Generation Networks)
Youren Yu (于右任)
|
Yangsen Zhang (张仰森)
|
Guanguang Chang (畅冠光)
|
Beibei Gao (高贝贝)
|
Yushan Jiang (姜雨杉)
|
Tuo Xiao (肖拓)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)
To address the inaccurate error-boundary identification and over-correction common in current Chinese grammatical error correction models, we propose a model based on sequence labeling and a pointer-generator network. First, on the data side, we used the officially provided Lang8 dataset and the CGED datasets from previous years, applying traditional-to-simplified conversion, data cleaning, and other preprocessing. Second, on the model side, we adopted an ERNIE + Global Pointer sequence labeling model, an ERNIE + CRF sequence labeling model, a BART + pointer-generator correction model, and a CECToR-based correction model. Finally, for model ensembling, we combined voting with perplexity scores computed by an ERNIE model to produce the final predictions. On the test set, our F-score reached 48.68, ranking second.
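The ensembling step above combines voting with a perplexity score. A minimal sketch, assuming one candidate correction per system and a `perplexity` callable (standing in for the ERNIE-based scoring; both the name and signature are illustrative assumptions):

```python
from collections import Counter

def ensemble_correct(candidates, perplexity):
    """Majority vote over per-system corrections; ties are broken by a
    language-model perplexity score, lower being better. Sketch only;
    the paper's exact voting and scoring scheme may differ.
    """
    votes = Counter(candidates)
    top = max(votes.values())
    tied = [sent for sent, n in votes.items() if n == top]
    return min(tied, key=perplexity)
```

Voting first and scoring only the tied candidates keeps the expensive language-model pass off the clear-majority cases.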
2010
A domain adaption Word Segmenter For Sighan Backoff 2010
Jiang Guo
|
Wenjie Su
|
Yangsen Zhang
CIPS-SIGHAN Joint Conference on Chinese Language Processing