Liu Ka Wai


2025

C2LEVA: Toward Comprehensive and Contamination-Free Language Model Evaluation
Yanyang Li | Wong Tin Long | Cheung To Hung | Jianqiao Zhao | Duo Zheng | Liu Ka Wai | Michael R. Lyu | Liwei Wang
Findings of the Association for Computational Linguistics: ACL 2025

Recent advances in large language models (LLMs) have shown significant promise, yet their evaluation raises concerns, particularly regarding data contamination due to the lack of access to proprietary training data. To address this issue, we present C2LEVA, a comprehensive bilingual benchmark featuring systematic contamination prevention. C2LEVA offers, first, a holistic evaluation spanning 22 tasks, each targeting a specific application or ability of LLMs, and, second, a trustworthy assessment built on contamination-free tasks, ensured by a systematic contamination prevention strategy that fully automates test data renewal and enforces data protection during benchmark data release. Our large-scale evaluation of 15 open-source and proprietary models demonstrates the effectiveness of C2LEVA.