Zehao Li
2025
Context-DPO: Aligning Language Models for Context-Faithfulness
Baolong Bi | Shaohan Huang | Yiwei Wang | Tianchi Yang | Zihan Zhang | Haizhen Huang | Lingrui Mei | Junfeng Fang | Zehao Li | Furu Wei | Weiwei Deng | Feng Sun | Qi Zhang | Shenghua Liu
Findings of the Association for Computational Linguistics: ACL 2025
Reliable responses from large language models (LLMs) require adherence to user instructions and retrieved information. While alignment techniques help LLMs align with human intentions and values, improving context-faithfulness through alignment remains underexplored. To address this, we propose Context-DPO, the first alignment method specifically designed to enhance LLMs’ context-faithfulness. We introduce ConFiQA, a benchmark that simulates Retrieval-Augmented Generation (RAG) scenarios with knowledge conflicts to evaluate context-faithfulness. Leveraging faithful and stubborn responses to context-grounded questions from ConFiQA, Context-DPO aligns LLMs through direct preference optimization. Extensive experiments demonstrate that Context-DPO significantly improves context-faithfulness, achieving 35% to 280% improvements on popular open-source models. Further analysis shows that Context-DPO preserves LLMs’ generative capabilities while providing interpretable insights into context utilization.
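The core idea, treating the context-faithful answer as the preferred ("chosen") response and the memory-driven "stubborn" answer as the rejected one, can be sketched with the standard DPO objective. The snippet below is a minimal illustration in PyTorch, assuming summed per-response token log-probabilities are already available; the function name, field names, and example record are hypothetical and are not the authors' released implementation or the ConFiQA schema.

```python
import torch
import torch.nn.functional as F

def context_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                     ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss over (faithful, stubborn) response pairs.

    Each argument is a tensor of summed token log-probabilities for a batch
    of responses: the context-faithful answer is treated as "chosen" and the
    parametric-memory ("stubborn") answer as "rejected".
    """
    # Log-ratio of policy vs. frozen reference model for each response type.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps

    # DPO objective: push the policy to prefer faithful over stubborn answers
    # relative to the reference model, scaled by temperature beta.
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()

# Illustrative preference pair in the spirit of ConFiQA (hypothetical fields):
example = {
    "prompt": "Context: ... retrieved passage containing a counterfactual fact ...\n"
              "Question: ...",
    "chosen": "Answer grounded in the provided context (faithful).",
    "rejected": "Answer drawn from the model's parametric memory (stubborn).",
}
```

In practice such pairs would be fed to any preference-optimization training loop; the loss above is the piece that encodes the faithful-over-stubborn preference.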