Xiaobo Wang
2025
ReflectEvo: Improving Meta Introspection of Small LLMs by Learning Self-Reflection
Jiaqi Li | Xinyi Dong | Yang Liu | Zhizhuo Yang | Quansen Wang | Xiaobo Wang | Song-Chun Zhu | Zixia Jia | Zilong Zheng
Findings of the Association for Computational Linguistics: ACL 2025
We present ReflectEvo, a novel pipeline demonstrating that small language models (SLMs) can enhance meta introspection through reflection learning. The pipeline iteratively generates self-reflections for self-training, fostering a continuous, self-evolving process. Leveraging this pipeline, we construct ReflectEvo-460k, a large-scale, comprehensive, self-generated reflection dataset spanning broadened instructions and diverse multi-domain tasks. Building on this dataset, we demonstrate the effectiveness of reflection learning for improving SLMs' reasoning abilities using SFT and DPO, substantially boosting Llama-3 from 52.4% to 71.2% and Mistral from 44.4% to 71.1%. These results show that ReflectEvo can rival or even surpass the reasoning capability of three prominent open-source models on BIG-bench without distillation from superior models or fine-grained human annotation. We further analyze the quality of the self-generated reflections and their impact on error localization and correction. Our work highlights the potential of continuously enhancing SLMs' reasoning performance through iterative reflection learning in the long run.
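The sketch below illustrates the general shape of one reflection-learning round described in the abstract: the small model attempts a task, reflects on its own failures, and the reflections that lead to corrected answers are collected for self-training. It is a minimal sketch, not the authors' released pipeline; all parameter names and callables (`slm`, `verifier`, `finetune`) are hypothetical placeholders.

```python
def reflection_learning_round(slm, tasks, verifier, finetune):
    """One illustrative round of reflection learning (hypothetical sketch).

    slm      : object exposing generate_answer(prompt, reflection=None)
               and generate_reflection(prompt, failed_answer)
    tasks    : iterable of objects with a .prompt attribute
    verifier : callable(task, answer) -> bool, checks correctness
    finetune : callable(slm, data) -> slm, e.g. SFT or DPO on the
               self-generated reflections
    """
    reflection_data = []
    for task in tasks:
        first_try = slm.generate_answer(task.prompt)      # initial attempt
        if verifier(task, first_try):
            continue                                       # keep only failures
        # Self-generated reflection on what went wrong.
        reflection = slm.generate_reflection(task.prompt, first_try)
        second_try = slm.generate_answer(task.prompt, reflection=reflection)
        if verifier(task, second_try):
            # Keep reflections that actually fixed the error.
            reflection_data.append((task.prompt, first_try, reflection, second_try))
    # Self-training on the model's own reflections; repeating this loop
    # gives the continuous, self-evolving process described above.
    return finetune(slm, reflection_data)
```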
Adaptive Preference Optimization with Uncertainty-aware Utility Anchor
Xiaobo Wang | Zixia Jia | Jiaqi Li | Qi Liu | Zilong Zheng
Findings of the Association for Computational Linguistics: EMNLP 2025
Offline preference optimization methods are efficient for aligning large language models (LLMs). Direct Preference Optimization (DPO)-like learning, one of the most popular approaches, stands out for its efficiency in reward modeling. However, these methods conventionally rely on Bradley-Terry (BT) reward modeling, which rests on several restrictive assumptions, including the requirement for pairwise training data, shifts in the model distribution, and the assumption of human rationality. To address these limitations, we propose Adaptive Preference Optimization with Utility Anchor (UAPO), a general framework for offline preference optimization that introduces an anchoring function to estimate the uncertainty introduced by preference-data annotation. Our method enables training even when the data is unpaired, significantly improving data-utilization efficiency. Moreover, the anchor design makes UAPO more robust during training. Experimental results demonstrate that UAPO achieves competitive outcomes without strict dependence on data pairing, paving the way for more flexible and effective preference optimization methods.
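To make the contrast with pairwise BT/DPO training concrete, the sketch below scores each response on its own against an anchor value instead of against a paired alternative, which is what allows unpaired data to be used. It is an illustrative assumption, not the actual UAPO objective (which is defined in the paper); the function name, the constant anchor, and the label convention are all hypothetical.

```python
import torch
import torch.nn.functional as F

def anchored_preference_loss(policy_logps, ref_logps, labels, anchor, beta=0.1):
    """Illustrative anchor-based preference loss (NOT the exact UAPO objective).

    policy_logps, ref_logps : (N,) summed log-probs of each response under the
                              policy and the reference model
    labels                  : (N,) +1 for preferred, -1 for dispreferred
    anchor                  : scalar utility anchor; in UAPO this role is played
                              by an anchoring function modeling annotation
                              uncertainty
    """
    rewards = beta * (policy_logps - ref_logps)   # implicit reward, as in DPO
    margins = labels * (rewards - anchor)         # compare to the anchor, not to a paired sample
    return -F.logsigmoid(margins).mean()

# Toy usage with unpaired data: 3 preferred and 2 dispreferred responses.
policy_logps = torch.tensor([-12.0, -15.0, -9.5, -20.0, -18.0])
ref_logps    = torch.tensor([-13.0, -14.0, -10.0, -17.0, -19.0])
labels       = torch.tensor([1.0, 1.0, 1.0, -1.0, -1.0])
loss = anchored_preference_loss(policy_logps, ref_logps, labels, anchor=0.0)
```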