Benchmark Transparency: Measuring the Impact of Data on Evaluation

Venelin Kovatchev, Matthew Lease


Abstract
In this paper we present exploratory research on quantifying the impact that data distribution has on the performance and evaluation of NLP models. We propose an automated framework that measures the data point distribution across 6 different dimensions: ambiguity, difficulty, discriminability, length, noise, and perplexity. We use disproportional stratified sampling to measure how much the data distribution affects absolute (Acc/F1) and relative (Rank) model performance. We experiment on 2 different datasets (SQUAD and MNLI) and test a total of 135 different models (125 on SQUAD and 10 on MNLI). We demonstrate that without explicit control of the data distribution, standard evaluation frameworks are inconsistent and unreliable. We find that the impact of the data is statistically significant and is often larger than the impact of changing the metric. In a second set of experiments, we demonstrate that the impact of data on evaluation is not just observable, but also predictable. We propose to use benchmark transparency as a method for comparing datasets and quantifying the similarity between them. We find that the “dataset similarity vector” can be used to predict how well a model generalizes out of distribution.
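The abstract's central claim is that re-weighting evaluation data along a dimension such as difficulty can change both absolute scores and model rankings. The following is a minimal illustrative sketch, not the authors' framework: the difficulty scores, model correctness flags, and stratum weights are all simulated placeholders, used only to show how disproportional stratified sampling over one dimension can flip a comparison between two hypothetical models.

```python
# Minimal sketch (assumed/simulated data, not the paper's implementation) of
# disproportional stratified sampling over a single "difficulty" dimension.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulated per-example difficulty scores and correctness flags for two models.
difficulty = rng.uniform(0, 1, n)
correct_a = rng.random(n) > difficulty * 0.6   # model A degrades as difficulty rises
correct_b = rng.random(n) > 0.35               # model B is difficulty-insensitive

# Split the data into three strata: 0 = easy, 1 = medium, 2 = hard.
strata = np.digitize(difficulty, bins=[1 / 3, 2 / 3])

def stratified_accuracy(correct, weights):
    """Accuracy under a disproportional stratified sample: each stratum's
    accuracy is re-weighted by the chosen sampling proportion."""
    return sum(w * correct[strata == s].mean() for s, w in enumerate(weights))

for name, weights in [("uniform", (1 / 3, 1 / 3, 1 / 3)),
                      ("easy-heavy", (0.6, 0.3, 0.1)),
                      ("hard-heavy", (0.1, 0.3, 0.6))]:
    acc_a = stratified_accuracy(correct_a, weights)
    acc_b = stratified_accuracy(correct_b, weights)
    rank = "A > B" if acc_a > acc_b else "B > A"
    print(f"{name:10s}  acc(A)={acc_a:.3f}  acc(B)={acc_b:.3f}  rank: {rank}")
```

Under the easy-heavy weighting the simulated model A wins comfortably, while under the hard-heavy weighting the ranking reverses, mirroring the abstract's point that evaluation without explicit control of the data distribution can be inconsistent.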
Anthology ID:
2024.naacl-long.86
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1536–1551
URL:
https://aclanthology.org/2024.naacl-long.86
Cite (ACL):
Venelin Kovatchev and Matthew Lease. 2024. Benchmark Transparency: Measuring the Impact of Data on Evaluation. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1536–1551, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Benchmark Transparency: Measuring the Impact of Data on Evaluation (Kovatchev & Lease, NAACL 2024)
PDF:
https://preview.aclanthology.org/naacl24-info/2024.naacl-long.86.pdf