2025
PLAN-TUNING: Post-Training Language Models to Learn Step-by-Step Planning for Complex Problem Solving
Mihir Parmar | Palash Goyal | Xin Liu | Yiwen Song | Mingyang Ling | Chitta Baral | Hamid Palangi | Tomas Pfister
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Recently, decomposing complex problems into simple subtasks before solving them, a crucial part of human-like natural planning, has significantly boosted the performance of large language models (LLMs). However, leveraging such planning structures during post-training to boost the performance of smaller open-source LLMs remains underexplored. Motivated by this, we introduce PLAN-TUNING, a unified post-training framework that (i) distills synthetic task decompositions (termed “planning trajectories”) from large-scale LLMs and (ii) fine-tunes smaller models via supervised and reinforcement-learning objectives designed to mimic these planning processes and improve complex reasoning. On the GSM8k and MATH benchmarks, plan-tuned models outperform strong baselines by an average of ~7%. Furthermore, plan-tuned models show better generalization on out-of-domain datasets, with average performance improvements of ~10% and ~12% on OlympiadBench and AIME 2024, respectively. Our detailed analysis demonstrates how planning trajectories improve complex reasoning capabilities, showing that PLAN-TUNING is an effective strategy for improving the task-specific performance of smaller LLMs.
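The abstract outlines a two-stage pipeline: distill planning trajectories from a large model, then fine-tune a smaller model to reproduce them. As a rough illustration only (not the authors' released code), the sketch below shows one plausible way such trajectories could be serialized into supervised fine-tuning targets, where the model learns to emit the plan before the solution; the dataclass fields and prompt template are assumptions made for this example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PlanningTrajectory:
    problem: str           # original question, e.g. from GSM8k / MATH
    plan_steps: List[str]  # subtask decomposition distilled from a large LLM
    solution: str          # final worked solution

def to_sft_example(traj: PlanningTrajectory) -> dict:
    """Serialize a trajectory into a (prompt, target) pair in which the
    smaller model is trained to write the plan first, then the solution."""
    plan_block = "\n".join(
        f"Step {i + 1}: {step}" for i, step in enumerate(traj.plan_steps)
    )
    return {
        "prompt": (
            f"Problem: {traj.problem}\n"
            "First write a plan, then solve the problem.\n"
        ),
        "target": f"Plan:\n{plan_block}\n\nSolution:\n{traj.solution}",
    }

if __name__ == "__main__":
    traj = PlanningTrajectory(
        problem="A pen costs $2 and a notebook costs $5. "
                "What do 3 pens and 2 notebooks cost?",
        plan_steps=[
            "Compute the cost of 3 pens.",
            "Compute the cost of 2 notebooks.",
            "Add the two subtotals.",
        ],
        solution="3 * 2 = 6; 2 * 5 = 10; 6 + 10 = 16. The total cost is $16.",
    )
    print(to_sft_example(traj)["target"])
```

In this formulation, the concatenated plan-plus-solution string would serve as the supervised target, while a reinforcement-learning objective (as the abstract describes) would further reward outputs whose plans lead to correct final answers.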