Sequence-level Large Language Model Training with Contrastive Preference Optimization

Zhili Feng, Dhananjay Ram, Cole Hawkins, Aditya Rawal, Jinman Zhao, Sheng Zha


Abstract
The next-token prediction loss is the dominant self-supervised training objective for large language models and has achieved promising results in a variety of downstream tasks. However, upon closer investigation of this objective, we find that it lacks an understanding of sequence-level signals, leading to a mismatch between training and inference. To bridge this gap, we introduce a contrastive preference optimization (CPO) procedure that can inject sequence-level information into the language model at any training stage without expensive human-labeled data. Our experiments show that the proposed objective surpasses next-token prediction in terms of win rate on instruction-following and text generation tasks.
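To make the idea of a sequence-level contrastive preference objective concrete, the sketch below shows a generic DPO-style pairwise loss over whole-sequence log-probabilities of a preferred and a dispreferred completion. This is only an illustrative assumption, not the paper's exact CPO formulation; the function name, the use of a frozen reference model, and the `beta` temperature are all hypothetical choices for the example.

```python
# Minimal sketch (assumed, not the paper's exact objective): a DPO-style
# sequence-level contrastive preference loss. Inputs are summed token
# log-probabilities of full sequences under the policy and a frozen reference.
import torch
import torch.nn.functional as F

def contrastive_preference_loss(policy_logp_chosen, policy_logp_rejected,
                                ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # Sequence-level log-ratio of policy vs. reference for each completion.
    chosen_ratio = policy_logp_chosen - ref_logp_chosen
    rejected_ratio = policy_logp_rejected - ref_logp_rejected
    # Widen the margin between preferred and dispreferred sequences.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Example with dummy sequence log-probabilities (batch of 2).
loss = contrastive_preference_loss(
    torch.tensor([-12.3, -8.1]), torch.tensor([-15.0, -9.4]),
    torch.tensor([-13.0, -8.5]), torch.tensor([-14.2, -9.0]),
)
print(loss)
```

Because the loss acts on whole-sequence scores rather than per-token targets, preference pairs of this kind can in principle be derived from model samples rather than human annotation, which is the training/inference gap the abstract highlights.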
Anthology ID:
2025.findings-naacl.233
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4158–4164
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.233/
Cite (ACL):
Zhili Feng, Dhananjay Ram, Cole Hawkins, Aditya Rawal, Jinman Zhao, and Sheng Zha. 2025. Sequence-level Large Language Model Training with Contrastive Preference Optimization. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 4158–4164, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Sequence-level Large Language Model Training with Contrastive Preference Optimization (Feng et al., Findings 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.233.pdf