Yongxiang Li
2025
LLMSR@XLLM25: A Language Model-Based Pipeline for Structured Reasoning Data Construction
Hongrui Xing | Xinzhang Liu | Zhuo Jiang | Zhihao Yang | Yitong Yao | Zihan Wang | Wenmin Deng | Chao Wang | Shuangyong Song | Wang Yang | Zhongjiang He | Yongxiang Li
Proceedings of the 1st Joint Workshop on Large Language Models and Structure Modeling (XLLM 2025)
In this paper, we present a novel pipeline for the XLLM Shared Task-III: Large Language Model for Structural Reasoning (LLM-SR). Our pipeline addresses key challenges in the automatic construction of process-reward training data: high manual annotation costs, the limited accuracy of large models on structured data, and the dependency of validation on auxiliary information. To overcome these limitations, we first decompose the construction process into extraction and validation phases. Leveraging model-generated annotations, we produce pseudo-labeled data and iteratively refine model performance. Second, by analyzing structured data patterns, we encode structural constraints into a rule-based module and fine-tune the model with Group Relative Policy Optimization (GRPO), significantly improving the success rate of structured data extraction. Finally, we train the model to generate critical responses that assess evidence-conclusion relationships, enhancing validation reliability. Experimental results demonstrate that our pipeline outperforms models with an order of magnitude more parameters and ranks first on the task.
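To make the rule-based structural-constraint idea concrete, below is a minimal sketch of how such a module could score model outputs for use as a reward signal during GRPO-style fine-tuning. The JSON schema, the field names (statement, evidence, verification), and the partial-credit scheme are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of a rule-based structural-constraint check used as a
# reward signal. Field names and the reward scheme are hypothetical.
import json

REQUIRED_FIELDS = ("statement", "evidence", "verification")

def structure_reward(model_output: str) -> float:
    """Score a model response by how well it satisfies structural rules."""
    try:
        data = json.loads(model_output)
    except json.JSONDecodeError:
        return 0.0  # not parseable as JSON: no reward
    if not isinstance(data, list) or not data:
        return 0.0  # expect a non-empty list of reasoning steps
    ok = 0
    for step in data:
        # Each step must be a dict carrying all required, non-empty fields.
        if isinstance(step, dict) and all(
            isinstance(step.get(f), str) and step[f].strip()
            for f in REQUIRED_FIELDS
        ):
            ok += 1
    # Partial credit: fraction of steps satisfying the constraints.
    return ok / len(data)

if __name__ == "__main__":
    good = json.dumps(
        [{"statement": "A", "evidence": "B", "verification": "valid"}]
    )
    bad = "free-form text without structure"
    print(structure_reward(good))  # 1.0
    print(structure_reward(bad))   # 0.0
```

A deterministic check like this gives a dense, cheap-to-compute reward that directly penalizes malformed structure, which is one plausible way the extraction success rate described in the abstract could be improved.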
2024
Sentence Segmentation and Punctuation for Ancient Books Based on Supervised In-context Training
Shiquan Wang | Weiwei Fu | Mengxiang Li | Zhongjiang He | Yongxiang Li | Ruiyu Fang | Li Guan | Shuangyong Song
Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024
This paper describes the participation of team “TeleAI” in the third International Ancient Chinese Language Information Processing Evaluation (EvalHan24). The competition comprises a joint task of sentence segmentation and punctuation, divided into open and closed tracks according to the models and data permitted. In the final evaluation, our system achieved significantly better results than the baseline: in the closed-track sentence segmentation task we obtained an F1 score of 0.8885, and in the sentence punctuation task an F1 score of 0.7129.
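For reference, segmentation F1 scores like those reported above are typically computed over predicted versus gold boundary positions. The sketch below shows this standard boundary-level formulation; the official EvalHan24 scorer may differ in details such as how punctuation labels are matched.

```python
# Minimal sketch of boundary-level F1 for sentence segmentation.
# Boundaries are represented as character offsets; this is the common
# formulation, not necessarily the official evaluation script.
def f1_score(pred: set, gold: set) -> float:
    """F1 over predicted vs. gold boundary positions."""
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)          # correctly predicted boundaries
    precision = tp / len(pred)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    gold = {5, 12, 20}   # gold sentence-boundary offsets
    pred = {5, 12, 25}   # two correct, one spurious
    print(round(f1_score(pred, gold), 4))  # 0.6667
```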