Fernando Martínez-Plumed
2025
PredictaBoard: Benchmarking LLM Score Predictability
Lorenzo Pacchiardi | Konstantinos Voudouris | Ben Slater | Fernando Martínez-Plumed | Jose Hernandez-Orallo | Lexin Zhou | Wout Schellaert
Findings of the Association for Computational Linguistics: ACL 2025
Despite possessing impressive skills, Large Language Models (LLMs) often fail unpredictably, demonstrating inconsistent success in even basic common sense reasoning tasks. This unpredictability poses a significant challenge to ensuring their safe deployment, as identifying and operating within a reliable “safe zone” is essential for mitigating risks. To address this, we present PredictaBoard, a novel collaborative benchmarking framework designed to evaluate the ability of score predictors (referred to as assessors) to anticipate LLM errors on specific task instances (i.e., prompts) from existing datasets. PredictaBoard evaluates pairs of LLMs and assessors by considering the rejection rate at different error tolerances. As such, PredictaBoard stimulates research into developing better assessors and into making LLMs more predictable, not only better performing on average. We conduct illustrative experiments using baseline assessors and state-of-the-art LLMs. PredictaBoard highlights the critical need to evaluate predictability alongside performance, paving the way for safer AI systems where errors are not only minimised but also anticipated and effectively mitigated. Code for our benchmark can be found at https://github.com/Kinds-of-Intelligence-CFI/PredictaBoard
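As a rough illustration of the rejection-rate metric the abstract describes, the sketch below (not the official PredictaBoard code; function and variable names are hypothetical) rejects the instances the assessor trusts least until the error rate on the accepted instances falls within a given tolerance, and reports the fraction rejected.

```python
# Hypothetical sketch of scoring an (LLM, assessor) pair by its rejection rate
# at a given error tolerance; not taken from the PredictaBoard repository.
import numpy as np

def rejection_rate_at_tolerance(correct, p_success, tolerance):
    """Smallest fraction of instances that must be rejected (the ones the
    assessor trusts least) so the error rate on the accepted rest is <= tolerance."""
    correct = np.asarray(correct, dtype=float)
    order = np.argsort(p_success)            # least-trusted instances first
    sorted_correct = correct[order]
    n = len(sorted_correct)
    for k in range(n + 1):                   # try rejecting the k least-trusted
        accepted = sorted_correct[k:]
        if accepted.size == 0 or 1.0 - accepted.mean() <= tolerance:
            return k / n
    return 1.0

# Toy usage with simulated data: assessor confidences and LLM correctness.
rng = np.random.default_rng(0)
p_success = rng.uniform(size=1000)               # assessor's predicted P(correct)
correct = rng.uniform(size=1000) < p_success     # simulated LLM outcomes
print(rejection_rate_at_tolerance(correct, p_success, tolerance=0.05))
```

A more predictable LLM (or a better assessor) needs to reject fewer instances to stay within the same tolerance, which is why the benchmark evaluates the pair jointly.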
2024
Confounders in Instance Variation for the Analysis of Data Contamination
Behzad Mehrbakhsh | Dario Garigliotti | Fernando Martínez-Plumed | Jose Hernandez-Orallo
Proceedings of the 1st Workshop on Data Contamination (CONDA)
Test contamination is a serious problem for the evaluation of large language models (LLMs) because it leads to the overestimation of their performance and a quick saturation of benchmarks, even before the actual capability is achieved. One strategy to address this issue is the (adversarial) generation of variations, by including different exemplars and different rephrasings of the questions. However, these two interventions can produce instances that are more difficult (adding to the performance loss expected from partly removing the contamination) but also instances that are less difficult (cancelling the expected performance loss), which would make contamination undetectable. Understanding these two phenomena in terms of instance difficulty is critical for determining and measuring contamination. In this paper we conduct a comprehensive analysis of these two interventions on an addition task with fine-tuned LLAMA-2 models.
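To make the two interventions concrete, here is a small hypothetical sketch (not taken from the paper; templates and helper names are illustrative) that builds rephrased and re-exemplified variants of one addition item. Comparing a model's accuracy on originals versus such variants is the kind of contamination signal the paper analyses, with the caveat that variants can also be intrinsically harder or easier.

```python
# Illustrative variant generation for an addition item; assumes the query to the
# fine-tuned model is handled elsewhere.
import random

REPHRASINGS = [
    "What is {a} + {b}?",
    "Compute the sum of {a} and {b}.",
    "If you add {a} to {b}, what do you get?",
]

def make_variants(a, b, n_exemplars=2, seed=0):
    """Variants of one addition item that change both the phrasing and the
    few-shot exemplars, mirroring the two interventions described above."""
    rng = random.Random(seed)
    variants = []
    for template in REPHRASINGS:
        shots = [
            f"Q: What is {x} + {y}? A: {x + y}"
            for x, y in ((rng.randint(10, 99), rng.randint(10, 99))
                         for _ in range(n_exemplars))
        ]
        variants.append("\n".join(shots) + "\nQ: " + template.format(a=a, b=b) + " A:")
    return variants

def accuracy(model_answers, target):
    """Fraction of answers containing the correct sum (simple string-match heuristic)."""
    return sum(str(target) in ans for ans in model_answers) / len(model_answers)

# Contamination probe: accuracy on originals minus accuracy on variants, keeping
# in mind that a gap can also reflect a genuine difficulty shift.
print(make_variants(23, 58)[0])
```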
A Proposal for Scaling the Scaling Laws
Wout Schellaert | Ronan Hamon | Fernando Martínez-Plumed | Jose Hernandez-Orallo
Proceedings of the First edition of the Workshop on the Scaling Behavior of Large Language Models (SCALE-LLM 2024)
Scaling laws are predictable relations between the performance of AI systems and various scalable design choices such as model or dataset size. In order to keep predictions interpretable, scaling analysis has traditionally relied on heavy summarisation of both the system design and its performance. We argue this summarisation and aggregation is a major source of predictive inaccuracy and lack of generalisation. With a synthetic example we show how scaling analysis needs to be _instance-based_ to accurately model realistic benchmark behaviour, highlighting the need for richer evaluation datasets and more complex inferential tools, for which we outline an actionable proposal.
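As a toy illustration of why instance-level modelling matters, the sketch below (an assumption-laden example, not the paper's method) simulates per-instance success probabilities that follow a sigmoid in log model size with an instance-specific difficulty: the aggregate accuracy curve looks smooth, while individual instances behave very differently.

```python
# Synthetic example: aggregate vs. instance-based views of a scaling trend.
# P(correct | scale, instance i) = sigmoid(a * log(scale) - d_i), with d_i a
# latent per-instance difficulty (all numbers here are made up).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
scales = np.array([1e8, 3e8, 1e9, 3e9, 1e10, 3e10])     # model sizes (parameters)
difficulty = rng.normal(loc=20.0, scale=3.0, size=200)  # latent instance difficulties
slope = 1.0

p = sigmoid(slope * np.log(scales)[:, None] - difficulty[None, :])
correct = rng.uniform(size=p.shape) < p                  # simulated outcomes

aggregate_accuracy = correct.mean(axis=1)   # the usual scaling-law target: smooth
per_instance_rate = correct.mean(axis=0)    # heterogeneous behaviour it averages away
print(aggregate_accuracy)
print(per_instance_rate[:10])
```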