Structured Preference Optimization for Vision-Language Long-Horizon Task Planning
Xiwen Liang | Min Lin | Weiqi Ruan | Rongtao Xu | Yuecheng Liu | Jiaqi Chen | Bingqian Lin | Yuzheng Zhuang | Xiaodan Liang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, 2025
Existing vision-language planning methods perform well on short-horizon tasks but struggle with long-horizon reasoning in dynamic environments due to the difficulty of training models to generate high-quality reasoning processes. To address this, we propose Structured Preference Optimization (SPO), a framework that enhances reasoning and action selection for long-horizon task planning through structured evaluation and optimized training. SPO introduces: 1) Structured Preference Evaluation and Optimization, which evaluates reasoning chains across task relevance, historical consistency (as part of textual coherence), and image awareness (alignment with visual observations) to construct high-quality preference pairs; and 2) Curriculum-Guided Progressive Learning, enabling the model to adapt from simple to complex tasks, thereby improving generalization and robustness. To advance research in vision-language long-horizon task planning, we introduce ExtendaBench, a comprehensive benchmark covering 1,509 tasks across VirtualHome and Habitat 2.0, categorized into ultra-short, short, medium, and long tasks. Experimental results demonstrate that SPO significantly improves reasoning quality and final decision accuracy, outperforming prior methods on long-horizon tasks and underscoring the effectiveness of preference-driven optimization in vision-language task planning. Specifically, SPO achieves a +5.98% GCR and +4.68% SR improvement in VirtualHome and a +3.30% GCR and +2.11% SR improvement in Habitat over the best-performing baselines.
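To make the structured preference evaluation described above concrete, the sketch below scores candidate reasoning chains along the three dimensions named in the abstract (task relevance, historical consistency, image awareness) and pairs the highest- and lowest-scoring candidates as a preference pair. This is a minimal illustrative sketch: the class and function names, the weighted-sum scoring, and the pairing rule are assumptions for exposition, not the paper's actual implementation.

```python
# Illustrative sketch of structured preference-pair construction.
# The dimension weights and selection rule are assumptions, not SPO's exact method.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ReasoningChain:
    text: str                   # generated reasoning for the next action
    task_relevance: float       # in [0, 1]: does it address the task instruction?
    history_consistency: float  # in [0, 1]: is it coherent with previous steps?
    image_awareness: float      # in [0, 1]: is it grounded in the current observation?

def score(chain: ReasoningChain,
          weights: Tuple[float, float, float] = (0.4, 0.3, 0.3)) -> float:
    """Weighted sum over the three structured-evaluation dimensions (weights assumed)."""
    w_rel, w_hist, w_img = weights
    return (w_rel * chain.task_relevance
            + w_hist * chain.history_consistency
            + w_img * chain.image_awareness)

def build_preference_pair(candidates: List[ReasoningChain]) -> Tuple[ReasoningChain, ReasoningChain]:
    """Return (chosen, rejected): the best- and worst-scoring candidate chains."""
    ranked = sorted(candidates, key=score, reverse=True)
    return ranked[0], ranked[-1]

if __name__ == "__main__":
    candidates = [
        ReasoningChain("Walk to the kitchen, then open the fridge.", 0.9, 0.8, 0.7),
        ReasoningChain("Pick up the remote on the sofa.", 0.2, 0.5, 0.6),
    ]
    chosen, rejected = build_preference_pair(candidates)
    print("chosen:", chosen.text)
    print("rejected:", rejected.text)
```

Such chosen/rejected pairs could then feed a preference-optimization objective, with training examples ordered from ultra-short to long horizons in the curriculum-guided stage; the ordering and objective details here are likewise only assumed for illustration.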