Aligning Black-box Language Models with Human Judgments

Gerrit J.J. van den Burg, Gen Suzuki, Wei Liu, Murat Sensoy


Abstract
Large language models (LLMs) are increasingly used as automated judges to evaluate recommendation systems, search engines, and other subjective tasks, where relying on human evaluators can be costly, time-consuming, and hard to scale. LLMs offer an efficient solution for continuous, automated evaluation. However, since the systems that are built and improved with these judgments are ultimately designed for human use, it is crucial that LLM judgments align closely with human evaluators to ensure such systems remain human-centered. At the same time, aligning LLM judgments with human evaluators is challenging due to individual variability and biases in human judgments. We propose a simple yet effective framework to align LLM judgments with individual human evaluators or their aggregated judgments, without retraining or fine-tuning the LLM. Our approach learns a linear mapping between the LLM's outputs and human judgments, achieving over 142% average improvement in agreement across 29 tasks with only a small number of calibration examples used for training. Notably, our method works in zero-shot and few-shot settings, exceeds inter-human agreement on four out of six tasks, and enables smaller LLMs to achieve performance comparable to that of larger models.
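The core idea of the abstract, fitting a linear mapping from LLM judge scores to human ratings on a handful of calibration examples, can be sketched as follows. This is a minimal illustration by ordinary least squares, not the paper's exact procedure; the function names and toy data are assumptions for illustration.

```python
import numpy as np

def fit_linear_map(llm_scores, human_scores):
    """Fit y ~ a*x + b by least squares on calibration pairs.

    llm_scores: scores produced by the LLM judge (hypothetical example data).
    human_scores: corresponding human ratings for the same items.
    """
    x = np.asarray(llm_scores, dtype=float)
    y = np.asarray(human_scores, dtype=float)
    # Design matrix with an intercept column.
    X = np.stack([x, np.ones_like(x)], axis=1)
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, b

def apply_linear_map(llm_scores, a, b):
    """Map new LLM judge scores onto the human rating scale."""
    return a * np.asarray(llm_scores, dtype=float) + b

# Toy calibration set: five (LLM score, human rating) pairs.
llm = [1.0, 2.0, 3.0, 4.0, 5.0]
human = [2.0, 2.5, 3.0, 3.5, 4.0]
a, b = fit_linear_map(llm, human)
print(round(a, 2), round(b, 2))  # -> 0.5 1.5 on this toy data
```

Once `a` and `b` are learned from the calibration examples, `apply_linear_map` rescales any further LLM judgments onto the human scale without touching the model itself, which is what makes such a scheme applicable to black-box LLMs.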
Anthology ID:
2025.findings-naacl.376
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6737–6749
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.376/
Cite (ACL):
Gerrit J.J. van den Burg, Gen Suzuki, Wei Liu, and Murat Sensoy. 2025. Aligning Black-box Language Models with Human Judgments. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 6737–6749, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Aligning Black-box Language Models with Human Judgments (van den Burg et al., Findings 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.376.pdf