Yuqing Sun
2023
End-to-end Adversarial Sample Generation for Data Augmentation
Tianyuan Liu
|
Yuqing Sun
Findings of the Association for Computational Linguistics: EMNLP 2023
Adversarial samples pose a significant challenge to neural inference models. In this paper, we propose A3, a novel approach for enhancing the robustness of neural NLP models that combines adversarial training and data augmentation. We propose an adversarial sample generator that consists of a conditioned paraphrasing model and a condition generator; the latter generates conditions that guide the paraphrasing model to produce adversarial samples. A pretrained discriminator is introduced to help the adversarial sample generator adapt to the data characteristics of different tasks. We adopt a weighted loss to incorporate the generated adversarial samples with the original samples for augmented training. Compared to existing methods, our approach is much more efficient since the generation process is independent of the target model and the generated samples are reusable across different models. Experimental results on several tasks show that our approach improves the overall performance of the trained model. In particular, the enhanced model is robust against various attacking techniques.
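The weighted loss mentioned in the abstract could look roughly like the following minimal sketch, assuming a HuggingFace-style PyTorch sequence classifier; the batch layout and the `adv_weight` hyperparameter are hypothetical illustrations, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def augmented_loss(model, orig_batch, adv_batch, adv_weight=0.5):
    """Weighted sum of losses on original and generated adversarial samples.

    orig_batch / adv_batch: dicts with "input_ids", "attention_mask", "labels";
    adv_weight balances the adversarial term (hypothetical hyperparameter).
    """
    orig_logits = model(orig_batch["input_ids"],
                        attention_mask=orig_batch["attention_mask"]).logits
    adv_logits = model(adv_batch["input_ids"],
                       attention_mask=adv_batch["attention_mask"]).logits

    loss_orig = F.cross_entropy(orig_logits, orig_batch["labels"])
    loss_adv = F.cross_entropy(adv_logits, adv_batch["labels"])

    # Original samples keep full weight; adversarial samples are down-weighted.
    return loss_orig + adv_weight * loss_adv
```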
2022
专业技术文本关键词抽取方法(Keyword Extraction on Professional Technical Text)
Xiangdong Ning (宁祥东)
|
Bin Gong (龚斌)
|
Lin Wan (万林)
|
Yuqing Sun (孙宇清)
Proceedings of the 21st Chinese National Conference on Computational Linguistics
Relevance and specificity are crucial for keyword extraction from professional technical text. Targeting the code retrieval task, this paper proposes a keyword extraction model for professional technical text that integrates semantic information, sequential relations, and syntactic structure. The pretrained language model BERT is used to extract abstract semantic information from the text; a semantic association graph is built by jointly analyzing sequential relations and syntactic structure to capture long-distance semantic dependencies between words; and keyword weights are computed with a random walk algorithm and lexical knowledge, so as to balance the relevance and specificity of the keywords. Performance comparisons with other models on two datasets show that the keywords extracted by our model have better relevance and specificity.
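The random-walk weighting described above could be realized roughly as in the following sketch, a TextRank-style personalized PageRank over a candidate-term graph using networkx; the graph construction and the lexical-knowledge scores here are hypothetical stand-ins for the paper's actual method.

```python
import networkx as nx

def rank_keywords(candidates, edges, specificity, alpha=0.85, top_k=10):
    """Random walk over a semantic association graph (TextRank-style sketch).

    candidates  : list of candidate terms
    edges       : {(term_a, term_b): semantic association weight}
    specificity : {term: lexical-knowledge score} used as a personalization
                  vector so the walk favors domain-specific terms
    (All inputs are illustrative, not the paper's exact graph construction.)
    """
    graph = nx.Graph()
    graph.add_nodes_from(candidates)
    for (a, b), w in edges.items():
        graph.add_edge(a, b, weight=w)

    # Relevance comes from the graph structure, specificity from the prior.
    scores = nx.pagerank(graph, alpha=alpha, weight="weight",
                         personalization=specificity)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```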
2021
融合自编码器和对抗训练的中文新词发现方法(Finding Chinese New Word By Combining Self-encoder and Adversarial Training)
Wei Pan (潘韦)
|
Tianyuan Liu (刘天元)
|
Yuqing Sun (孙宇清)
|
Bin Gong (龚斌)
|
Yongman Zhang (张永满)
|
Ping Yang (杨萍)
Proceedings of the 20th Chinese National Conference on Computational Linguistics
The continual emergence of new words is a natural law of language; in specialized domains, new concepts and entity names represent abstract generalizations of shared feature sets and often act as keywords that play particular roles in sentences. New word discovery directly affects the quality of Chinese word segmentation and the performance of downstream semantic understanding tasks, making it an important task in natural language processing. This paper proposes a Chinese new word discovery model that combines an autoencoder with adversarial training. The model is pretrained with a character-level autoencoder in an unsupervised, self-learning manner, which extracts semantic information effectively, is unaffected by word segmentation results, and is applicable to texts from different domains. To incorporate general linguistic knowledge, prior syntactic parsing results are added, and a domain-shared encoder fuses semantic and syntactic information to improve the accuracy of resolving ambiguous word boundaries. An adversarial training mechanism is adopted to extract domain-independent features and reduce the dependence on manually annotated corpora. Experiments on six datasets from different specialized domains show that our model outperforms existing methods, and ablation experiments verify the effectiveness of each module in detail. Comparative experiments with different types of source-domain data and different amounts of target-domain data further demonstrate the robustness of the model. Finally, a visual comparison of the encodings produced by the autoencoder and the shared encoder on data from different domains shows that the adversarial training method effectively captures the correlations and differences between them.
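As a rough illustration of the adversarial training mechanism described above, the sketch below places a gradient reversal layer between a shared encoder and a domain classifier, a common way to learn domain-invariant features; the GRU encoder, layer sizes, and class names are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class SharedEncoderWithDomainAdversary(nn.Module):
    """Minimal sketch: the shared encoder is trained so that a domain
    classifier cannot distinguish source-domain from target-domain text.
    (Hypothetical design, not the paper's exact model.)"""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, n_domains=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.domain_clf = nn.Linear(hid_dim, n_domains)

    def forward(self, char_ids, lambd=1.0):
        _, h = self.encoder(self.embed(char_ids))   # h: (1, batch, hid_dim)
        feats = h.squeeze(0)
        # Reversed gradients push the encoder toward domain-invariant features.
        domain_logits = self.domain_clf(GradReverse.apply(feats, lambd))
        return feats, domain_logits
```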
Co-authors
- Tianyuan Liu 2
- Bin Gong (龚斌) 2
- Wei Pan 1
- Yongman Zhang (张永满) 1
- Ping Yang (杨萍) 1