Self-Improvement Towards Pareto Optimality: Mitigating Preference Conflicts in Multi-Objective Alignment
Moxin Li | Yuantao Zhang | Wenjie Wang | Wentao Shi | Zhuo Liu | Fuli Feng | Tat-Seng Chua
Findings of the Association for Computational Linguistics: ACL 2025
Multi-Objective Alignment (MOA) aims to align LLMs' responses with multiple human preference objectives, with Direct Preference Optimization (DPO) emerging as a prominent approach. However, we find that DPO-based MOA approaches suffer from widespread preference conflicts in the data, where different objectives favor different responses. This results in conflicting optimization directions, hindering optimization toward the Pareto Front. To address this, we propose constructing Pareto-optimal responses to resolve preference conflicts. To efficiently obtain and utilize such responses, we propose a self-improving DPO framework that enables LLMs to self-generate and select Pareto-optimal responses for self-supervised preference alignment. Extensive experiments on two datasets demonstrate the superior Pareto Front achieved by our framework compared to various baselines.
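The abstract's two central notions, a preference conflict and a Pareto-optimal (non-dominated) response, can be made concrete with a short sketch. The code below is an illustration, not the paper's implementation: `scores`, `has_preference_conflict`, and `is_pareto_optimal` are hypothetical names, and the score matrix stands in for per-objective reward ratings of candidate responses.

```python
# Minimal sketch (assumed setup, not the paper's code): given candidate
# responses scored on several alignment objectives, flag preference
# conflicts and find the Pareto-optimal (non-dominated) candidates.
import numpy as np

def has_preference_conflict(scores: np.ndarray) -> bool:
    """A conflict exists when different objectives rank different candidates first."""
    # argmax over candidates, per objective; >1 distinct winner means a conflict
    return len(set(np.argmax(scores, axis=0))) > 1

def is_pareto_optimal(scores: np.ndarray) -> np.ndarray:
    """Boolean mask of candidates not dominated by any other candidate."""
    n = scores.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # j dominates i if it is >= on every objective and > on at least one
            if i != j and np.all(scores[j] >= scores[i]) and np.any(scores[j] > scores[i]):
                mask[i] = False
                break
    return mask

# Example: two candidates scored on (helpfulness, harmlessness).
scores = np.array([[0.9, 0.2],
                   [0.3, 0.8]])
print(has_preference_conflict(scores))  # True: each objective prefers a different response
print(is_pareto_optimal(scores))        # [ True  True ]: both are non-dominated
```

In this toy example neither candidate Pareto-dominates the other, which is exactly the conflicting-optimization-direction situation the abstract describes; the framework's goal is to obtain a response that dominates both.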