Chenyang Li


2023

CCL23-Eval 任务6系统报告:基于深度学习的电信网络诈骗案件分类(System Report for CCL23-Eval Task 6: Classification of Telecom Internet Fraud Cases Based on Deep Learning)
Chenyang Li (李晨阳) | Long Zhang (张龙) | Zhongjie Zhao (赵中杰) | Hui Guo (郭辉)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)

“Text classification, a fundamental task in natural language processing, plays a crucial role in case classification for the telecom internet fraud domain and is of great significance for intelligent case analysis. The goal of this task is to classify a given case description text, where each text is an overall description of the case after desensitization. We first fine-tune the Ernie pretrained model on the case content to predict each case's category, and then apply pseudo-labeling and model fusion to further improve the F1 score. We ultimately achieved second place in the CCL23-Eval Task 6 evaluation on telecom internet fraud case classification, with an F1 score of 0.8628, a fairly strong detection result.”
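The pseudo-labeling and model-fusion steps mentioned in the abstract can be sketched as follows. This is an illustrative sketch, not the authors' code: it assumes each fine-tuned model yields a per-class probability list for every case description, fuses them by soft voting, and keeps only confidently predicted unlabeled examples as pseudo-labels. The function names and the confidence threshold are hypothetical.

```python
# Illustrative sketch (not the paper's implementation) of model fusion by
# soft voting and confidence-based pseudo-label selection.

def fuse_probabilities(model_probs):
    """Average class probabilities across models (soft voting)."""
    n_models = len(model_probs)
    n_classes = len(model_probs[0])
    return [sum(p[c] for p in model_probs) / n_models for c in range(n_classes)]

def select_pseudo_labels(unlabeled_probs, threshold=0.95):
    """Keep unlabeled examples whose fused prediction is confident enough;
    each is returned as (example index, predicted class) to be added to
    the training set as a pseudo-labeled example."""
    pseudo = []
    for idx, probs in enumerate(unlabeled_probs):
        best = max(range(len(probs)), key=probs.__getitem__)
        if probs[best] >= threshold:
            pseudo.append((idx, best))
    return pseudo

# Example: two models scoring one unlabeled case over three fraud categories.
fused = fuse_probabilities([[0.90, 0.05, 0.05], [0.96, 0.02, 0.02]])
picked = select_pseudo_labels([fused], threshold=0.9)
```

In practice the fused model would be retrained on the union of gold and pseudo-labeled data, with the threshold tuned to trade label coverage against noise.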

2021

Pretrain-Finetune Based Training of Task-Oriented Dialogue Systems in a Real-World Setting
Manisha Srivastava | Yichao Lu | Riley Peschon | Chenyang Li
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers

One main challenge in building task-oriented dialogue systems is the limited amount of supervised training data available. In this work, we present a method for training retrieval-based dialogue systems using a small amount of high-quality, annotated data and a larger, unlabeled dataset. We show that pretraining on unlabeled data improves model performance, yielding a 31% boost in Recall@1 compared with no pretraining. The proposed finetuning technique, based on a small amount of high-quality, annotated data, resulted in a 26% offline and 33% online improvement in Recall@1 over the pretrained model. The model is deployed in an agent-support application and evaluated on live customer service contacts, providing additional insight into real-world implications compared with most other publications in the domain, which often use asynchronous transcripts (e.g., Reddit data). The high performance of 74% Recall@1 in this customer service setting demonstrates the effectiveness of the pretrain-finetune approach in dealing with the limited supervised data challenge.
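The Recall@1 figures reported above can be made concrete with a small sketch. This is an illustrative implementation of the standard Recall@k metric, not code from the paper: for each query, the retrieval system returns a ranked candidate list, and the metric is the fraction of queries whose gold response appears among the top k candidates. The variable names and example data are hypothetical.

```python
# Illustrative Recall@k for a retrieval-based dialogue system: each query
# has exactly one gold response somewhere among the ranked candidates.

def recall_at_k(ranked_candidate_ids, gold_ids, k=1):
    """Fraction of queries whose gold response is in the top-k candidates."""
    hits = sum(1 for ranked, gold in zip(ranked_candidate_ids, gold_ids)
               if gold in ranked[:k])
    return hits / len(gold_ids)

# Example: 3 queries; the gold response is ranked first for two of them.
ranked = [["r3", "r1"], ["r7", "r2"], ["r5", "r9"]]
gold = ["r3", "r2", "r5"]
score = recall_at_k(ranked, gold, k=1)  # 2 of 3 queries hit at rank 1
```

Recall@1 is the strictest setting: the system gets credit only when its single top-ranked response is the gold one, which is why it is the headline metric for agent-support retrieval.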