2025
JuStRank: Benchmarking LLM Judges for System Ranking
Ariel Gera | Odellia Boni | Yotam Perlitz | Roy Bar-Haim | Lilach Eden | Asaf Yehudai
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Given the rapid progress of generative AI, there is a pressing need to systematically compare and choose between the numerous models and configurations available. The scale and versatility of such evaluations make LLM-based judges a compelling solution for this challenge. Crucially, this approach requires first validating the quality of the LLM judge itself. Previous work has focused on instance-based assessment of LLM judges, where a judge is evaluated over a set of responses, or response pairs, while remaining agnostic to their source systems. We argue that this setting overlooks critical factors affecting system-level ranking, such as a judge’s positive or negative bias towards certain systems. To address this gap, we conduct the first large-scale study of LLM judges as system rankers. System scores are generated by aggregating judgment scores over multiple system outputs, and the judge’s quality is assessed by comparing the resulting system ranking to a human-based ranking. Beyond overall judge assessment, our analysis provides a fine-grained characterization of judge behavior, including their decisiveness and bias.
InspectorRAGet: An Introspection Platform for RAG Evaluation
Kshitij P Fadnis | Siva Sankalp Patel | Odellia Boni | Yannis Katsis | Sara Rosenthal | Benjamin Sznajder | Marina Danilevsky
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (System Demonstrations)
Large Language Models (LLMs) have become a popular approach for implementing Retrieval Augmented Generation (RAG) systems, and a significant amount of effort has been spent on building good models and metrics. In spite of increased recognition of the need for rigorous evaluation of RAG systems, few tools exist that go beyond the creation of model output and automatic calculation. We present InspectorRAGet, an introspection platform for performing a comprehensive analysis of the quality of RAG system output. InspectorRAGet allows the user to analyze aggregate and instance-level performance of RAG systems, using both human and algorithmic metrics, as well as annotator quality. InspectorRAGet is suitable for multiple use cases and is available publicly to the community. A live instance of the platform is available at https://ibm.biz/InspectorRAGet
2019
A Summarization System for Scientific Documents
Shai Erera | Michal Shmueli-Scheuer | Guy Feigenblat | Ora Peled Nakash | Odellia Boni | Haggai Roitman | Doron Cohen | Bar Weiner | Yosi Mass | Or Rivlin | Guy Lev | Achiya Jerbi | Jonathan Herzig | Yufang Hou | Charles Jochim | Martin Gleize | Francesca Bonin | David Konopnicki
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations
We present a novel system providing summaries for Computer Science publications. Through a qualitative user study, we identified the most valuable scenarios for discovery, exploration and understanding of scientific documents. Based on these findings, we built a system that retrieves and summarizes scientific documents for a given information need, expressed either as a free-text query or by choosing categorized values such as scientific tasks, datasets and more. Our system ingested 270,000 papers, and its summarization module aims to generate concise yet detailed summaries. We validated our approach with human experts.