Debatable Intelligence: Benchmarking LLM Judges via Debate Speech Evaluation

Noy Sternlicht, Ariel Gera, Roy Bar-Haim, Tom Hope, Noam Slonim


Abstract
We introduce Debate Speech Evaluation as a novel and challenging benchmark for assessing LLM judges. Evaluating debate speeches requires a deep understanding of the speech at multiple levels, including the strength and relevance of its arguments, its coherence and organization, and the appropriateness of its style and tone. This task draws on a unique set of cognitive abilities that have previously received limited attention in systematic LLM benchmarking. To explore these skills, we leverage a dataset of over 600 meticulously annotated debate speeches and present the first in-depth analysis of how state-of-the-art LLMs compare to human judges on this task. Our findings reveal a nuanced picture: while larger models can approximate individual human judgments in some respects, they differ substantially in their overall judgment behavior. We also investigate the ability of frontier LLMs to generate persuasive, opinionated speeches, showing that models may perform at a human level on this task.
Anthology ID: 2025.emnlp-main.953
Volume: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 18861–18880
URL: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.953/
Cite (ACL): Noy Sternlicht, Ariel Gera, Roy Bar-Haim, Tom Hope, and Noam Slonim. 2025. Debatable Intelligence: Benchmarking LLM Judges via Debate Speech Evaluation. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 18861–18880, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Debatable Intelligence: Benchmarking LLM Judges via Debate Speech Evaluation (Sternlicht et al., EMNLP 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.953.pdf
Checklist: 2025.emnlp-main.953.checklist.pdf