Improving Reward Models with Synthetic Critiques

Zihuiwen Ye, Fraser David Greenlee, Max Bartolo, Phil Blunsom, Jon Ander Campos, Matthias Gallé


Abstract
Reward models (RMs) play a critical role in aligning language models through the process of reinforcement learning from human feedback. RMs are trained to predict a score reflecting human preference, which requires significant time and cost for human annotation. Additionally, RMs tend to overfit quickly on superficial features in the training set, hindering their generalization performance on unseen distributions. We propose a novel approach using synthetic natural language critiques generated by large language models to provide additional feedback, evaluating aspects such as instruction following, correctness, and style. These critiques offer richer signals and more robust features for RMs to use when assessing and scoring responses. We demonstrate that high-quality critiques improve the performance and data efficiency of RMs initialized from different pretrained models, reducing the reliance on costly human annotations. Furthermore, incorporating critiques improves both the interpretability and robustness of RM training.
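As a minimal sketch of the idea described in the abstract (not the paper's actual implementation), a synthetic critique can be appended to the prompt/response pair before it is scored by the reward model, so the RM conditions on the extra natural-language feedback. All names below (`format_rm_input`, the field labels) are hypothetical.

```python
from typing import Optional

def format_rm_input(prompt: str, response: str,
                    critique: Optional[str] = None) -> str:
    """Build the text a preference RM would score.

    When a critique is supplied, it is appended as an additional
    feedback section, giving the RM richer features than the
    prompt/response pair alone.
    """
    parts = [f"Prompt: {prompt}", f"Response: {response}"]
    if critique is not None:
        parts.append(f"Critique: {critique}")
    return "\n".join(parts)

# Without a critique, the RM sees only the prompt and response.
plain = format_rm_input("What is 2+2?", "The answer is 5.")

# With a critique, the RM also sees feedback on correctness/style.
augmented = format_rm_input(
    "What is 2+2?",
    "The answer is 5.",
    critique="The response is fluent but factually incorrect: 2+2 = 4.",
)
```

In practice the paper's pipeline would feed such augmented inputs to the RM during training; the exact formatting template is an assumption here.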
Anthology ID: 2025.findings-naacl.254
Volume: Findings of the Association for Computational Linguistics: NAACL 2025
Month: April
Year: 2025
Address: Albuquerque, New Mexico
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 4506–4520
URL: https://preview.aclanthology.org/Author-page-Marten-During-lu/2025.findings-naacl.254/
Cite (ACL): Zihuiwen Ye, Fraser David Greenlee, Max Bartolo, Phil Blunsom, Jon Ander Campos, and Matthias Gallé. 2025. Improving Reward Models with Synthetic Critiques. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 4506–4520, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal): Improving Reward Models with Synthetic Critiques (Ye et al., Findings 2025)
PDF: https://preview.aclanthology.org/Author-page-Marten-During-lu/2025.findings-naacl.254.pdf