Superficial Self-Improved Reasoners Benefit from Model Merging
Xiangchi Yuan | Chunhui Zhang | Zheyuan Liu | Dachuan Shi | Leyan Pan | Soroush Vosoughi | Wenke Lee
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large Language Models (LLMs) rely heavily on large-scale reasoning data, but as such data becomes increasingly scarce, model self-improvement offers a promising alternative. However, this process can lead to model collapse, as the model’s output becomes overly deterministic with reduced diversity. In this work, we identify a new risk beyond model collapse, which we term the Superficial Self-Improved Reasoners phenomenon. This phenomenon indicates that while self-improvement enhances in-domain (ID) reasoning accuracy, it degrades the model’s generalized reasoning capability on out-of-domain (OOD) datasets, as the model tends to memorize the training data. Our analyses of layer importance and parameter changes reveal that reasoning-critical layers receive fewer updates than less relevant layers during self-improvement. To address this, we propose Iterative Model Merging (IMM), which balances reasoning improvements and generalization by merging the weights of the original and self-improved models. IMM effectively mitigates model collapse and improves generalized reasoning capability. Code is available at https://github.com/xiangchi-yuan/merge_syn.
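The abstract describes IMM as merging the weights of the original and self-improved models. Below is a minimal sketch of one way such weight merging is commonly implemented, assuming PyTorch state dicts and a hypothetical mixing coefficient `alpha`; the function name, the coefficient value, and the iteration comment are illustrative assumptions, not the paper's exact procedure (see the linked repository for the authors' implementation).

```python
import torch

def merge_state_dicts(original_sd, improved_sd, alpha=0.5):
    """Linearly interpolate two model state dicts (hypothetical helper).

    alpha = 0 keeps the original weights, alpha = 1 keeps the
    self-improved weights; intermediate values blend the two.
    The coefficient and schedule used by IMM may differ.
    """
    merged = {}
    for name, w_orig in original_sd.items():
        w_new = improved_sd[name]
        if torch.is_floating_point(w_orig):
            # Convex combination of the original and self-improved weights.
            merged[name] = (1.0 - alpha) * w_orig + alpha * w_new
        else:
            # Non-float entries (e.g., integer buffers) are copied unchanged.
            merged[name] = w_new.clone()
    return merged

# Illustrative usage: after a self-improvement round, merge back into the
# base model before the next round of training on self-generated data.
# merged_sd = merge_state_dicts(base_model.state_dict(),
#                               improved_model.state_dict(), alpha=0.5)
# base_model.load_state_dict(merged_sd)
```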