2025
LLMSR@XLLM25: A Language Model-Based Pipeline for Structured Reasoning Data Construction
Hongrui Xing | Xinzhang Liu | Zhuo Jiang | Zhihao Yang | Yitong Yao | Zihan Wang | Wenmin Deng | Chao Wang | Shuangyong Song | Wang Yang | Zhongjiang He | Yongxiang Li
Proceedings of the 1st Joint Workshop on Large Language Models and Structure Modeling (XLLM 2025)
In this paper, we present a novel pipeline for the XLLM Shared Task-III: Large Language Model for Structural Reasoning (LLM-SR). Our pipeline addresses key challenges in automatically constructing process-reward training data, including high manual annotation costs, the limited accuracy of large models on structured data, and reliance on auxiliary information for validation. To overcome these limitations, we first decompose the construction process into extraction and validation phases. Leveraging model-generated annotations, we produce pseudo-labeled data and iteratively refine model performance. Second, by analyzing structured data patterns, we encode structural constraints into a rule-based module and fine-tune the model with Group Relative Policy Optimization (GRPO), significantly improving structured data extraction success rates. Finally, we train the model to generate critical responses that assess evidence-conclusion relationships, thereby enhancing validation reliability. Experimental results demonstrate that our pipeline outperforms models with an order of magnitude more parameters and achieves first place on the task.
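The abstract's rule-based module that encodes structural constraints can be pictured as a reward function scored during GRPO fine-tuning. The following is a minimal sketch under assumed details: the required field names and the scoring weights are illustrative and not taken from the paper.

```python
import json

# Hypothetical required fields of a structured-reasoning record (assumption).
REQUIRED_KEYS = {"statement", "evidence", "verification", "conclusion"}

def structure_reward(completion: str) -> float:
    """Score a model completion by how well it satisfies structural rules.

    Returns a value in [0, 1] usable as a rule-based reward signal.
    """
    try:
        record = json.loads(completion)
    except json.JSONDecodeError:
        return 0.0  # unparsable output earns no reward
    if not isinstance(record, dict):
        return 0.0
    present = REQUIRED_KEYS & record.keys()
    # partial credit per required field; full credit only when all appear
    score = len(present) / len(REQUIRED_KEYS)
    # penalize fields that are present but empty
    if any(not record.get(k) for k in present):
        score *= 0.5
    return score
```

In a GRPO loop, a group of sampled completions would each be scored this way and their relative rewards used to update the policy; here only the rule-based scoring step is shown.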
2023
CCL23-Eval任务6系统报告:基于原型监督对比学习和模型融合的电信网络诈骗案件分类(System Report for CCL23-Eval Task 6: Classification of Telecom Network Fraud Cases Based on Prototypical Supervised Contrastive Learning and Model Fusion)
Site Xiong (熊思诗) | Jili Zhang (张吉力) | Yu Zhao (赵宇) | Xinzhang Liu (刘欣璋) | Yongshuang Song (宋双永)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)
This paper proposes a method for classifying telecom network fraud cases based on prototypical supervised contrastive learning and model fusion. To strengthen the model's ability to distinguish easily confused categories, we adopt a dual-branch neural network training framework in which feature learning and classifier learning proceed in parallel, and we further optimize classification performance through domain pre-training, model fusion, and post-hoc classification strategies. Our method achieved a Macro-F1 of 0.8601 on the CCL2023-FCC evaluation task.
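The prototype-based supervised contrastive idea in the abstract can be sketched as a loss that pulls each sample's embedding toward its class prototype (the mean of its class's normalized embeddings). This is an illustrative NumPy sketch; the embedding dimension, temperature, and exact loss form are assumptions, not the paper's implementation.

```python
import numpy as np

def prototype_contrastive_loss(embeddings, labels, temperature=0.1):
    """Cross-entropy over cosine similarities between samples and class prototypes."""
    classes = np.unique(labels)
    # L2-normalize embeddings so dot products are cosine similarities
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    # class prototype = mean of that class's normalized embeddings, renormalized
    prototypes = np.stack([normed[labels == c].mean(axis=0) for c in classes])
    prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = normed @ prototypes.T / temperature          # (N, C) similarities
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    target_idx = np.searchsorted(classes, labels)         # column of true class
    return -log_prob[np.arange(len(labels)), target_idx].mean()
```

Minimizing this loss tightens each class around its prototype, which is one way to make easily confused categories more separable, matching the motivation stated in the abstract.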
CCL23-Eval 任务7赛道一系统报告:基于序列到序列模型的自动化文本纠错系统(System Report for CCL23-Eval Task 7 Track 1: Automated text error correction pipeline based on sequence-to-sequence models)
Shixuan Liu (刘世萱) | Xinzhang Liu (刘欣璋) | Yuyao Huang (黄钰瑶) | Chao Wang (王超) | Yongshuang Song (宋双永)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)
This paper describes the system our team submitted to Track 1 of the CCL-2023 Chinese Learner Text Correction shared task. In recent years, large-scale Chinese pre-trained models have performed strongly across a wide range of tasks, and different pre-trained models offer different advantages on specific tasks. However, the learner text correction task is characterized by complex grammatical errors and scarce correction corpora, making pre-trained text correction models a natural starting point. Our team adopted a sequence-to-sequence correction model with a two-stage training strategy and designed a sequence-to-sequence correction pipeline around it. First, we cleaned the training data; in the first training stage, we applied data augmentation on the training set; in the second stage, we fine-tuned on the validation set; finally, post-processing was performed by a voting ensemble of multiple models. In the official evaluation, our submission exceeded the baseline by 17.01 points (40.59 → 57.6) on the closed-track leaderboard.
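The voting-ensemble post-processing step described above can be sketched as a majority vote over the corrected sentences proposed by several models. This is a minimal sketch under assumptions: the tie-breaking rule (fall back to the original sentence) is an illustrative choice, not necessarily the team's.

```python
from collections import Counter

def vote_ensemble(source: str, candidates: list) -> str:
    """Pick the correction proposed by the most models.

    `candidates` holds one corrected sentence per model. On a tie between
    different candidates, conservatively keep the original sentence.
    """
    counts = Counter(candidates)
    ranked = counts.most_common()
    best, best_count = ranked[0]
    if sum(1 for _, c in ranked if c == best_count) > 1:
        return source  # no strict winner: keep the unedited input
    return best
```

A conservative fallback like this reflects a common design choice in text correction, where a missed edit is usually cheaper than a wrong edit.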