Youzhi Wang
2024
VarBench: Robust Language Model Benchmarking Through Dynamic Variable Perturbation
Kun Qian | Shunji Wan | Claudia Tang | Youzhi Wang | Xuanming Zhang | Maximillian Chen | Zhou Yu
Findings of the Association for Computational Linguistics: EMNLP 2024
As large language models achieve impressive scores on traditional benchmarks, an increasing number of researchers are becoming concerned about benchmark data leakage during pre-training, commonly known as the data contamination problem. To ensure fair evaluation, recent benchmarks release only the training and validation sets, keeping the test set labels closed-source. Anyone wishing to evaluate their language model must submit the model's predictions for centralized processing, and the results are then published on the benchmark's leaderboard. However, this submission process is inefficient and prevents effective error analysis. To address this issue, we propose to variabilize benchmarks and evaluate language models dynamically. Specifically, we extract variables from each test case and define a value range for each variable. For each evaluation, we sample new values from these value ranges to create unique test cases, thus ensuring a fresh evaluation each time. We applied this variable perturbation method to four datasets: GSM8K, ARC, CommonsenseQA, and TruthfulQA, which cover mathematical generation and multiple-choice tasks. Our experimental results demonstrate that this approach provides a more accurate assessment of the true capabilities of language models, effectively mitigating the contamination problem.
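The sketch below illustrates the variabilization idea described in the abstract; it is not the authors' released implementation. The template text, variable names, value ranges, and answer function are illustrative assumptions for a single GSM8K-style test case.

```python
import random

# A minimal sketch of dynamic variable perturbation: a math word problem
# is turned into a template whose variables are re-sampled at every
# evaluation, and the gold answer is recomputed from the sampled values.

# Hypothetical variabilized test case (template, ranges, and answer
# function are assumptions, not taken from the VarBench release).
TEMPLATE = (
    "Natalia sold {a} clips in April, and then she sold half as many "
    "clips in May. How many clips did Natalia sell altogether?"
)
VALUE_RANGES = {"a": range(2, 101, 2)}   # keep {a} even so the answer stays an integer


def answer_fn(a):
    """Recompute the gold answer from the sampled variable values."""
    return a + a // 2


def sample_test_case(seed=None):
    """Draw fresh variable values and return (question, gold_answer)."""
    rng = random.Random(seed)
    values = {name: rng.choice(rng_vals) for name, rng_vals in VALUE_RANGES.items()}
    return TEMPLATE.format(**values), answer_fn(**values)


if __name__ == "__main__":
    question, gold = sample_test_case(seed=42)
    print(question)
    print("gold answer:", gold)
```

Because each evaluation run samples new values, a model cannot benefit from having memorized the original test instance during pre-training, which is the contamination-mitigation argument made in the abstract.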