DHP Benchmark: Are LLMs Good NLG Evaluators?
Yicheng Wang, Jiayi Yuan, Yu-Neng Chuang, Zhuoer Wang, Yingchi Liu, Mark Cusick, Param Kulkarni, Zhengping Ji, Yasser Ibrahim, Xia Hu
Findings of the Association for Computational Linguistics: NAACL 2025
Large Language Models (LLMs) are increasingly serving as evaluators in Natural Language Generation (NLG) tasks, a setup often referred to as the “LLM-as-a-judge” paradigm. However, the capabilities of LLMs in evaluating NLG quality remain underexplored. Current studies depend on human assessments and simple metrics that fail to capture the discernment of LLMs across diverse NLG tasks. To address this gap, we propose the Discernment of Hierarchical Perturbation (DHP) benchmarking framework, which provides quantitative discernment scores for LLMs. This framework leverages hierarchically perturbed text data and statistical tests to systematically measure the NLG evaluation capabilities of LLMs. We re-established six evaluation datasets for this benchmark, covering four NLG tasks: Summarization, Story Completion, Question Answering, and Translation. Our comprehensive benchmarking of five major LLM families provides critical insights into their strengths and limitations as NLG evaluators. Our dataset is available at https://huggingface.co/datasets/YCWANGVINCE/DHP_Benchmark.
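To make the abstract's core idea concrete, the sketch below illustrates one way a discernment signal could be computed from hierarchically perturbed text: score the original and perturbed outputs with an LLM judge, then test whether scores drop as perturbation severity increases. This is a minimal illustration, not the paper's actual method; the `judge` scoring function, the use of a one-sided Wilcoxon signed-rank test, and the per-level p-value output are all assumptions for the sake of the example.

```python
# Illustrative sketch only (not the DHP implementation).
# Assumptions: an LLM judge returns a quality score per text, perturbation
# levels are ordered from mild to severe and paired with the originals, and
# a one-sided Wilcoxon signed-rank test checks whether judge scores fall
# when perturbation severity increases.
from typing import Callable, Sequence
from scipy.stats import wilcoxon


def discernment_pvalues(
    judge: Callable[[str], float],              # hypothetical LLM-as-a-judge scorer
    originals: Sequence[str],                   # unperturbed outputs
    perturbed_levels: Sequence[Sequence[str]],  # one list of texts per perturbation level
) -> list[float]:
    """Return one p-value per perturbation level: does the judge rate the
    perturbed texts significantly lower than the paired originals?"""
    base_scores = [judge(text) for text in originals]
    pvalues = []
    for level_texts in perturbed_levels:
        level_scores = [judge(text) for text in level_texts]
        # Paired, one-sided test: original scores are expected to exceed
        # the scores of the perturbed versions.
        _, p_value = wilcoxon(base_scores, level_scores, alternative="greater")
        pvalues.append(p_value)
    return pvalues
```

A lower p-value at a given level indicates that the judge reliably distinguishes the perturbed texts from the originals; comparing these values across levels gives a rough picture of how fine-grained the model's discernment is.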