Diverse AI Feedback For Large Language Model Alignment
Tianshu Yu, Ting-En Lin, Yuchuan Wu, Min Yang, Fei Huang, Yongbin Li
Abstract
Recent advances in large language models (LLMs) focus on aligning models with human values to minimize harmful content. However, existing methods often rely on a single type of feedback, such as preferences, annotated labels, or critiques, which can lead to overfitting and suboptimal performance. In this paper, we propose Diverse AI Feedback (DAIF), a novel approach that integrates three types of feedback—critique, refinement, and preference—tailored to tasks of varying uncertainty levels. Through an analysis of information gain, we show that critique feedback is most effective for low-uncertainty tasks, refinement feedback for medium-uncertainty tasks, and preference feedback for high-uncertainty tasks. Training with this diversified feedback reduces overfitting and improves alignment. Experimental results across three tasks (question answering, dialog generation, and text summarization) demonstrate that DAIF outperforms traditional methods relying on a single feedback type.
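To make the uncertainty-to-feedback mapping concrete, here is a minimal sketch of the routing rule the abstract describes. The `FeedbackType` enum, the `select_feedback_type` function, and the numeric thresholds are hypothetical names and values introduced for illustration only; the paper motivates the mapping through an information-gain analysis rather than fixed cut-offs like these.

```python
from enum import Enum


class FeedbackType(Enum):
    CRITIQUE = "critique"      # natural-language critique of a model response
    REFINEMENT = "refinement"  # a revised / improved version of the response
    PREFERENCE = "preference"  # a ranking between candidate responses

# Hypothetical thresholds on a normalized 0-1 uncertainty score (assumption,
# not taken from the paper).
LOW_UNCERTAINTY = 0.3
HIGH_UNCERTAINTY = 0.7


def select_feedback_type(task_uncertainty: float) -> FeedbackType:
    """Map a task-uncertainty score to the feedback type DAIF pairs with it."""
    if task_uncertainty < LOW_UNCERTAINTY:
        return FeedbackType.CRITIQUE      # low uncertainty -> critique feedback
    elif task_uncertainty < HIGH_UNCERTAINTY:
        return FeedbackType.REFINEMENT    # medium uncertainty -> refinement feedback
    else:
        return FeedbackType.PREFERENCE    # high uncertainty -> preference feedback


if __name__ == "__main__":
    for u in (0.1, 0.5, 0.9):
        print(f"uncertainty={u:.1f} -> {select_feedback_type(u).value}")
```

In practice the uncertainty score itself would come from the model (e.g., a predictive-entropy style estimate); the sketch only illustrates how each level is routed to a different feedback type before alignment training.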
- Anthology ID: 2025.tacl-1.19
- Volume: Transactions of the Association for Computational Linguistics, Volume 13
- Year: 2025
- Address: Cambridge, MA
- Venue: TACL
- Publisher: MIT Press
- Pages: 392–407
- URL: https://preview.aclanthology.org/corrections-2025-07/2025.tacl-1.19/
- DOI: 10.1162/tacl_a_00746
- Cite (ACL): Tianshu Yu, Ting-En Lin, Yuchuan Wu, Min Yang, Fei Huang, and Yongbin Li. 2025. Diverse AI Feedback For Large Language Model Alignment. Transactions of the Association for Computational Linguistics, 13:392–407.
- Cite (Informal): Diverse AI Feedback For Large Language Model Alignment (Yu et al., TACL 2025)
- PDF: https://preview.aclanthology.org/corrections-2025-07/2025.tacl-1.19.pdf