2025
BIG-Bench Extra Hard
Mehran Kazemi, Bahare Fatemi, Hritik Bansal, John Palowitch, Chrysovalantis Anastasiou, Sanket Vaibhav Mehta, Lalit K Jain, Virginia Aglietti, Disha Jindal, Peter Chen, Nishanth Dikkala, Gladys Tyen, Xin Liu, Uri Shalit, Silvia Chiappa, Kate Olszewska, Yi Tay, Vinh Q. Tran, Quoc V Le, Orhan Firat
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Current benchmarks for large language model (LLM) reasoning predominantly focus on mathematical and coding abilities, leaving a gap in evaluating broader reasoning proficiencies. One notable exception is the BIG-Bench dataset, which has served as a crucial benchmark for evaluating the general reasoning capabilities of LLMs thanks to its diverse set of challenging tasks, which allow for a comprehensive assessment of general reasoning across various skills within a unified framework. However, recent advances in LLMs have led to saturation on BIG-Bench and on its harder version, BIG-Bench Hard (BBH): state-of-the-art models achieve near-perfect scores on many BBH tasks, diminishing its utility. To address this limitation, we introduce BIG-Bench Extra Hard (BBEH), a new benchmark designed to push the boundaries of LLM reasoning evaluation. BBEH replaces each task in BBH with a novel task that probes a similar reasoning capability but exhibits significantly increased difficulty. We evaluate various general-purpose and reasoning-specialized models on BBEH and observe an accuracy of 23.9% for the best general-purpose model and 54.2% for the best reasoning-specialized model, indicating substantial room for improvement and highlighting the ongoing challenge of achieving robust general reasoning in LLMs. We release BBEH publicly at: https://github.com/google-deepmind/bbeh.
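The reported numbers are per-model accuracies aggregated over the benchmark's tasks. Below is a minimal scoring sketch for such an evaluation, assuming a hypothetical local checkout of the released repository with one directory per task and a task.json file holding {"input", "target"} examples; the file layout, field names, exact-match metric, and macro-averaging are illustrative assumptions, not the paper's official evaluation protocol.

```python
import json
from pathlib import Path


def load_task(task_dir: Path) -> list[dict]:
    """Load the examples for one task (assumed JSON layout, see note above)."""
    with open(task_dir / "task.json") as f:
        return json.load(f)["examples"]


def exact_match(prediction: str, target: str) -> bool:
    """Case- and whitespace-insensitive exact match (illustrative metric only)."""
    return prediction.strip().lower() == target.strip().lower()


def evaluate(bbeh_root: str, predict) -> dict[str, float]:
    """Return per-task accuracy for a callable `predict(input_text) -> str`."""
    scores = {}
    for task_dir in sorted(Path(bbeh_root).iterdir()):
        if not task_dir.is_dir():
            continue
        examples = load_task(task_dir)
        correct = sum(
            exact_match(predict(ex["input"]), ex["target"]) for ex in examples
        )
        scores[task_dir.name] = correct / len(examples)
    return scores


if __name__ == "__main__":
    # `predict` would wrap an actual model call; here a trivial stub is used.
    per_task = evaluate("bbeh", predict=lambda text: "")
    macro = sum(per_task.values()) / len(per_task)
    print(f"macro-average accuracy: {macro:.1%}")
```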