Zhuo Jiang
2025
LLMSR@XLLM25: A Language Model-Based Pipeline for Structured Reasoning Data Construction
Hongrui Xing | Xinzhang Liu | Zhuo Jiang | Zhihao Yang | Yitong Yao | Zihan Wang | Wenmin Deng | Chao Wang | Shuangyong Song | Wang Yang | Zhongjiang He | Yongxiang Li
Proceedings of the 1st Joint Workshop on Large Language Models and Structure Modeling (XLLM 2025)
In this paper, we present a novel pipeline for the XLLM Shared Task-III: Large Language Model for Structural Reasoning (LLM-SR). Our pipeline addresses key challenges in automatic process-reward training data construction, such as high manual annotation costs, limited accuracy of large models in structured data processing, and dependency on auxiliary information for validation. To overcome these limitations, we first decompose the construction process into extraction and validation phases. Leveraging model-generated annotations, we produce pseudo-labeled data and iteratively refine model performance. Second, by analyzing structured data patterns, we encode structural constraints into a rule-based module and fine-tune the model with Group Relative Policy Optimization (GRPO), significantly improving structured data extraction success rates. Finally, we train the model to generate critical responses that assess evidence-conclusion relationships, thus enhancing validation reliability. Experimental results demonstrate that our pipeline outperforms models with an order of magnitude more parameters and achieves the first position on the task.
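The abstract describes encoding structural constraints into a rule-based module that validates extracted reasoning data. A minimal sketch of such a constraint checker is below; the field names (`statement`, `evidence`, `verification`) and the specific rules are illustrative assumptions, not the authors' actual schema.

```python
# Illustrative rule-based structural-constraint check for extracted reasoning
# records. Field names and rules are assumptions for illustration only; they
# are not taken from the paper's implementation.
REQUIRED_KEYS = {"statement", "evidence", "verification"}

def check_structure(record: dict) -> list:
    """Return a list of constraint violations; an empty list means the record passes."""
    errors = []
    missing = REQUIRED_KEYS - record.keys()
    if missing:
        errors.append("missing fields: %s" % sorted(missing))
    if not record.get("evidence"):
        errors.append("evidence must be a non-empty list")
    elif not all(isinstance(e, str) and e.strip() for e in record["evidence"]):
        errors.append("each evidence item must be a non-empty string")
    return errors
```

A validator of this shape can gate pseudo-labeled data before it is used for iterative refinement: records that fail any rule are dropped or routed back for re-extraction.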
2023
CAME: Confidence-guided Adaptive Memory Efficient Optimization
Yang Luo | Xiaozhe Ren | Zangwei Zheng | Zhuo Jiang | Xin Jiang | Yang You
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Adaptive gradient methods, such as Adam and LAMB, have demonstrated excellent performance in the training of large language models. Nevertheless, the need for adaptivity requires maintaining second-moment estimates of the per-parameter gradients, which entails a high cost of extra memory overhead. To solve this problem, several memory-efficient optimizers (e.g., Adafactor) have been proposed to obtain a drastic reduction in auxiliary memory usage, but with a performance penalty. In this paper, we first study a confidence-guided strategy to reduce the instability of existing memory-efficient optimizers. Based on this strategy, we propose CAME to simultaneously achieve two goals: fast convergence as in traditional adaptive methods, and low memory usage as in memory-efficient methods. Extensive experiments demonstrate the training stability and superior performance of CAME across various NLP tasks such as BERT and GPT-2 training. Notably, for BERT pre-training on the large batch size of 32,768, our proposed optimizer attains faster convergence and higher accuracy compared with the Adam optimizer. The implementation of CAME is publicly available.
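The confidence-guided idea in the abstract can be sketched as damping update directions whose recent behavior is unstable. The non-factorized toy step below is a sketch under stated assumptions: it takes the "confidence" term to be an EMA of the squared residual between the raw adaptive update and its momentum estimate. It is not the official CAME algorithm (which additionally uses factorized second-moment statistics for memory efficiency); all names and hyperparameters are illustrative.

```python
import numpy as np

def came_like_step(param, grad, state, lr=1e-3, beta1=0.9, beta2=0.999,
                   beta3=0.999, eps=1e-8):
    """One confidence-guided update step (illustrative sketch, not the official CAME)."""
    # Standard EMAs of the gradient and its square, as in Adam.
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2
    u = state["m"] / (np.sqrt(state["v"]) + eps)      # Adam-style raw update
    resid = (u - state["m"]) ** 2                     # instability of the update
    state["r"] = beta3 * state["r"] + (1 - beta3) * resid  # confidence accumulator
    update = u / (np.sqrt(state["r"]) + eps)          # damp unstable directions
    return param - lr * update, state
```

The design intent sketched here is that directions where the raw update keeps disagreeing with its own momentum accumulate a large residual and are scaled down, while consistently stable directions keep near-full step size.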