Chen Kang
2025
A Multi-Model Collaborative Framework for Risk Management and Value Guidance in Children's Internet News (基于多模型协同的儿童互联网新闻风险管理与价值观引导框架)
梁宇蓝 | 王悦 | Dong Yu | Pengyuan Liu | Chen Kang
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
With the widespread adoption of the Internet among children, "residual toxicity" in news content and the absence of value guidance have become pressing safety challenges. This paper proposes CRV-LLM, a multi-model collaborative framework for rewriting children's news, designed to perform deep risk identification and precise rewriting of raw news text along four dimensions: vocabulary, events, headlines, and values. CRV-LLM integrates four lightweight risk-detection models with an R1-Distill-Qwen-32B rewriting model; through collaboration and feedback among the models, it effectively removes potentially harmful information and embeds positive value guidance while preserving readability for children. Experimental results show that CRV-LLM outperforms mainstream models on core metrics such as safety and educational value, with a 62% improvement in inference efficiency, offering an efficient and scalable technical solution for managing the safety of children's Internet content.
An Analysis of Traditional-Value Idioms in Contemporary Contexts: A Quantitative Study Based on the BCC Corpus (传统价值观成语当代语境表现分析——基于BCC语料库的计量研究)
孙浩 | 刘洋洋 | Huidong Du | Pengyuan Liu | Dong Yu | Chen Kang
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
"中华优秀传统文化是提升我国新时代文化软实力的重要源泉,将传统价值观和成语相结合,有助于继承和弘扬我们的优秀文明。本文提出了传统价值观成语当代语境表现的研究框架,基于BCC语料库对传统价值观成语语料数量分布和成语传统价值观偏好分布特征、在当代语境中的情感倾向及高频词分布特点、社会话题及道德特征进行计量研究,并提出了传统价值观成语的当代社会话题及道德适应性指数,以系统研究传统价值观成语的当代语境表现。本文为传统文化的当代计量研究提供了新的视角,也为数字人文领域的相关研究提供了参考依据,旨在增强中华优秀传统文化在当今新时代的影响力,为中华文明的传承与创新作出贡献。"
2024
Bridging the Gap between Authentic and Answer-Guided Images for Chinese Vision-Language Understanding Enhancement
Feiyu Wang | Wenyu Guo | Dong Yu | Chen Kang | Pengyuan Liu
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)
The objective of the Chinese Vision-Language Understanding Evaluation (CVLUE) is to comprehensively assess the performance of Chinese vision-language multimodal pre-trained models in multimodal modeling and understanding across four tasks: Image-Text Retrieval, Visual Question Answering, Visual Grounding, and Visual Dialog. To enhance model performance across these multimodal tasks, this paper proposes a multimodal information understanding enhancement method based on answer-guided images. First, we propose task-specific methods for answer-guided image generation. Second, the authentic and answer-guided images are each fed into the model for multimodal fine-tuning. Finally, training objectives are set for the different tasks to minimize the gap between the answer-guided and authentic images, so that the answer-guided images supervise the results produced from the authentic images. Experimental results demonstrate the effectiveness of the proposed method.