Empowering Math Problem Generation and Reasoning for Large Language Model via Synthetic Data based Continual Learning Framework
Qian Wan | Wangzi Shi | Jintian Feng | Shengyingjie Liu | Luona Wei | Zhicheng Dai | Jianwen Sun
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large language model (LLM) learning frameworks for math problem generation (MPG) mostly perform homogeneous training over multiple epochs on small-scale, manually annotated data. This pattern struggles to supply the large-scale, high-quality new data needed for continual improvement, and fails to stimulate the mutually reinforcing interaction between problem generation and math reasoning, leaving generated problems without reliable solution processes. This paper proposes a synthetic-data-based continual learning framework to improve LLM ability in MPG and math reasoning. The framework cycles through three stages, "supervised fine-tuning, data synthesis, direct preference optimization", to improve performance continually and steadily. We propose a synthetic data method with a dual mechanism of model self-play and multi-agent cooperation, which ensures the consistency and validity of synthetic data through sample filtering and rewriting strategies, and overcomes continual learning's dependence on manually annotated data. A data replay strategy that assesses sample importance via loss differentials is designed to mitigate catastrophic forgetting. Experimental analysis on several authoritative math datasets demonstrates the superiority and effectiveness of our framework.
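The abstract's replay strategy scores samples by loss differentials. A minimal sketch of one plausible reading, assuming (hypothetically) that a sample's importance is the increase in its loss after a training stage, so the samples most at risk of being forgotten are replayed next; the function name, inputs, and top-k selection here are illustrative assumptions, not the paper's actual implementation:

```python
def select_replay_samples(losses_before, losses_after, samples, k):
    """Rank samples by loss differential (loss after a training stage
    minus loss before it) and return the k samples whose loss grew the
    most -- a hypothetical proxy for 'most in danger of being forgotten'."""
    deltas = [after - before for before, after in zip(losses_before, losses_after)]
    ranked = sorted(zip(deltas, samples), key=lambda pair: pair[0], reverse=True)
    return [sample for _, sample in ranked[:k]]

# Toy example with made-up per-sample losses before/after a stage.
samples = ["q1", "q2", "q3", "q4"]
before = [0.9, 0.4, 0.7, 0.2]
after = [0.5, 0.8, 0.7, 0.9]
print(select_replay_samples(before, after, samples, k=2))  # → ['q4', 'q2']
```

In a real pipeline the loss lists would come from evaluating the model on held-out replay candidates before and after each "supervised fine-tuning, data synthesis, direct preference optimization" cycle, and the selected samples would be mixed into the next stage's training data.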