COPR: Continual Human Preference Learning via Optimal Policy Regularization
Han Zhang | Lin Gui | Yu Lei | Yuanzhao Zhai | Yehong Zhang | Zhuo Zhang | Yulan He | Hui Wang | Yue Yu | Kam-Fai Wong | Bin Liang | Ruifeng Xu
Findings of the Association for Computational Linguistics: ACL 2025
Reinforcement Learning from Human Feedback (RLHF) is effective for aligning Large Language Models (LLMs) with human preferences. However, RLHF's complex process limits its ability to continually learn from human feedback, making it impractical for real-world applications where the deployed model continuously receives feedback from users. Non-RL-based methods, such as Direct Preference Optimization (DPO), are not inherently well-suited to Continual Learning (CL). We observe that when combined with Experience Replay (ER) for CL, DPO tends to significantly widen the gap between the probabilities of human-preferred and dispreferred responses. Consequently, this diminishes the diversity of model generations, potentially leading to model collapse. To overcome these challenges, we propose Continual Optimal Policy Regularization (COPR), a novel non-RL offline method that converts historical optimal policies into optimization constraints when continually learning new preferences. We first derive a moderate reward function from the pairwise ranking loss and then use this reward to calculate a new sampling distribution, from which we construct novel learning objectives and constraints. We also provide a formal proof of the learnability of COPR. The experimental results show that COPR outperforms strong CL baselines on our proposed benchmark in terms of reward-based and GPT-4 evaluations as well as human assessment.
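The abstract describes two steps: deriving a moderate (bounded) reward from pairwise preference data and using it to build a target sampling distribution that the policy is fit to, while staying regularized toward previously learned optimal policies. The sketch below illustrates that general idea only; it is an assumption-laden reading of the abstract, not the authors' implementation, and all names (moderate_reward, target_distribution, copr_style_loss, beta, lam) are hypothetical.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of a COPR-style objective, assuming per-candidate
# sequence log-probabilities of shape (batch, K) over K candidate
# responses per prompt. Not the paper's actual code.

def moderate_reward(logp_policy: torch.Tensor, logp_ref: torch.Tensor) -> torch.Tensor:
    # A DPO-style implicit reward (log-ratio of policy to reference),
    # squashed through a sigmoid so its magnitude stays bounded -- one way
    # to keep the preferred/dispreferred gap "moderate".
    return torch.sigmoid(logp_policy - logp_ref)

def target_distribution(rewards: torch.Tensor, beta: float = 0.1) -> torch.Tensor:
    # Sampling distribution over candidate responses, proportional to
    # exp(reward / beta), in the spirit of the RLHF optimal policy form.
    return F.softmax(rewards / beta, dim=-1)

def copr_style_loss(logp_new: torch.Tensor,
                    logp_ref: torch.Tensor,
                    logp_old_optimal: torch.Tensor,
                    beta: float = 0.1,
                    lam: float = 1.0) -> torch.Tensor:
    # Fit the current policy to the target distribution built from new
    # preference data, and regularize it toward the stored optimal policy
    # of earlier tasks (the continual-learning constraint).
    rewards = moderate_reward(logp_new.detach(), logp_ref)
    target = target_distribution(rewards, beta)
    policy_logdist = F.log_softmax(logp_new, dim=-1)
    fit_term = F.kl_div(policy_logdist, target, reduction="batchmean")
    retain_term = F.kl_div(policy_logdist,
                           F.softmax(logp_old_optimal, dim=-1),
                           reduction="batchmean")
    return fit_term + lam * retain_term

# Example with dummy log-probabilities for 2 prompts and 4 candidates each.
if __name__ == "__main__":
    logp_new = torch.randn(2, 4, requires_grad=True)
    logp_ref = torch.randn(2, 4)
    logp_old = torch.randn(2, 4)
    loss = copr_style_loss(logp_new, logp_ref, logp_old)
    loss.backward()
    print(float(loss))
```

The retain term here stands in for the paper's "optimization constraints" derived from historical optimal policies; how those constraints are actually formed and weighted is specified in the paper itself, not in this sketch.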