PaCoST: Paired Confidence Significance Testing for Benchmark Contamination Detection in Large Language Models

Huixuan Zhang, Yun Lin, Xiaojun Wan


Abstract
Large language models (LLMs) are known to be trained on vast amounts of data, which may unintentionally or intentionally include data from commonly used benchmarks. This inclusion can lead to deceptively high scores on model leaderboards, yet disappointing performance in real-world applications. To address this benchmark contamination problem, we first propose a set of requirements that practical contamination detection methods should satisfy. Following these requirements, we introduce PaCoST, a Paired Confidence Significance Testing method that effectively detects benchmark contamination in LLMs. Our method constructs a counterpart for each piece of data with the same distribution, and performs statistical analysis of the corresponding confidence scores to test whether the model is significantly more confident on the original benchmark. We validate the effectiveness of PaCoST and apply it to popular open-source models and benchmarks. We find that almost all models and benchmarks we tested show signs of contamination to varying degrees. We conclude with a call for new LLM evaluation methods.
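The abstract describes a paired significance test on per-item confidence scores: each benchmark item is paired with a same-distribution counterpart, and the test asks whether the model is significantly more confident on the originals. As an illustration only (the paper's exact test statistic and confidence estimation are not specified here), the sketch below uses a one-sided paired sign-flip permutation test on hypothetical confidence values; all data and names are assumptions, not the authors' implementation:

```python
import random
from statistics import mean

def paired_permutation_test(conf_orig, conf_counterpart, n_perm=10000, seed=0):
    """One-sided paired test: is the model significantly more confident
    on original benchmark items than on their rephrased counterparts?

    Under the null hypothesis (no contamination), each paired difference
    is symmetric around zero, so randomly flipping the sign of each
    difference should produce means as large as the observed one.
    Returns an estimated p-value for mean(orig - counterpart) > 0.
    """
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(conf_orig, conf_counterpart)]
    observed = mean(diffs)
    count = 0
    for _ in range(n_perm):
        # Randomly flip the sign of each paired difference.
        flipped = [d if rng.random() < 0.5 else -d for d in diffs]
        if mean(flipped) >= observed:
            count += 1
    # Add-one smoothing keeps the estimate strictly positive.
    return (count + 1) / (n_perm + 1)

# Hypothetical confidence scores for 8 items (illustrative values only).
conf_orig = [0.90, 0.85, 0.92, 0.88, 0.95, 0.90, 0.87, 0.93]
conf_para = [0.60, 0.55, 0.65, 0.58, 0.70, 0.62, 0.59, 0.66]
p = paired_permutation_test(conf_orig, conf_para)
print(f"p-value: {p:.4f}")  # small p suggests contamination on this toy data
```

A small p-value on real confidence scores would flag the benchmark as suspect, while near-identical confidence on originals and counterparts is consistent with no contamination. A parametric alternative (e.g. a paired t-test via `scipy.stats.ttest_rel`) would serve the same role.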
Anthology ID:
2024.findings-emnlp.97
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1794–1809
URL:
https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-emnlp.97/
DOI:
10.18653/v1/2024.findings-emnlp.97
Cite (ACL):
Huixuan Zhang, Yun Lin, and Xiaojun Wan. 2024. PaCoST: Paired Confidence Significance Testing for Benchmark Contamination Detection in Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 1794–1809, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
PaCoST: Paired Confidence Significance Testing for Benchmark Contamination Detection in Large Language Models (Zhang et al., Findings 2024)
PDF:
https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-emnlp.97.pdf