2025
BIG-Bench Extra Hard
Mehran Kazemi | Bahare Fatemi | Hritik Bansal | John Palowitch | Chrysovalantis Anastasiou | Sanket Vaibhav Mehta | Lalit K Jain | Virginia Aglietti | Disha Jindal | Peter Chen | Nishanth Dikkala | Gladys Tyen | Xin Liu | Uri Shalit | Silvia Chiappa | Kate Olszewska | Yi Tay | Vinh Q. Tran | Quoc V Le | Orhan Firat
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Current benchmarks for large language model (LLM) reasoning predominantly focus on mathematical and coding abilities, leaving a gap in evaluating broader reasoning proficiencies. One notable exception is the BIG-Bench dataset, which has served as a crucial benchmark for evaluating the general reasoning capabilities of LLMs, thanks to a diverse set of challenging tasks that allow for a comprehensive assessment of general reasoning across various skills within a unified framework. However, recent advances in LLMs have led to saturation on BIG-Bench and on its harder variant, BIG-Bench Hard (BBH): state-of-the-art models achieve near-perfect scores on many BBH tasks, diminishing its utility. To address this limitation, we introduce BIG-Bench Extra Hard (BBEH), a new benchmark designed to push the boundaries of LLM reasoning evaluation. BBEH replaces each task in BBH with a novel task that probes a similar reasoning capability but exhibits significantly increased difficulty. We evaluate various general-purpose and reasoning-specialized models on BBEH and observe an accuracy of 23.9% for the best general-purpose model and 54.2% for the best reasoning-specialized model, indicating substantial room for improvement and highlighting the ongoing challenge of achieving robust general reasoning in LLMs. We release BBEH publicly at: https://github.com/google-deepmind/bbeh.
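As a rough illustration of what evaluating a model on BBEH involves, the sketch below scores per-task exact-match accuracy over a directory of task files. It is a minimal sketch only: the assumption that each task ships as a JSON file with an "examples" list of input/target pairs, and the placeholder `query_model` function, are illustrative and not taken from the paper; the actual file layout and official scoring live in the linked repository.

```python
# Minimal sketch of a BBEH-style evaluation loop (assumed file layout,
# not the official harness from the repository).
import json
from pathlib import Path

def query_model(prompt: str) -> str:
    """Placeholder for whatever LLM API is being evaluated."""
    raise NotImplementedError

def evaluate_task(task_file: Path) -> float:
    examples = json.loads(task_file.read_text())["examples"]  # assumed schema
    correct = sum(
        query_model(ex["input"]).strip().lower() == str(ex["target"]).strip().lower()
        for ex in examples
    )
    return correct / len(examples)

def evaluate_benchmark(task_dir: Path) -> dict[str, float]:
    # One exact-match accuracy per task; aggregate across tasks as the paper prescribes.
    return {f.stem: evaluate_task(f) for f in sorted(task_dir.glob("*.json"))}
```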
2021
Proceedings of the First Workshop on Causal Inference and NLP
Amir Feder | Katherine Keith | Emaad Manzoor | Reid Pryzant | Dhanya Sridhar | Zach Wood-Doughty | Jacob Eisenstein | Justin Grimmer | Roi Reichart | Molly Roberts | Uri Shalit | Brandon Stewart | Victor Veitch | Diyi Yang
Proceedings of the First Workshop on Causal Inference and NLP
CausaLM: Causal Model Explanation Through Counterfactual Language Models
Amir Feder | Nadav Oved | Uri Shalit | Roi Reichart
Computational Linguistics, Volume 47, Issue 2 - June 2021
Understanding predictions made by deep neural networks is notoriously difficult, but also crucial to their dissemination. Like all machine learning–based methods, they are only as good as their training data and can capture unwanted biases. While there are tools that can help determine whether such biases exist, they do not distinguish between correlation and causation, and may be ill-suited for text-based models and for reasoning about high-level language concepts. A key problem in estimating the causal effect of a concept of interest on a given model is that this estimation requires the generation of counterfactual examples, which is challenging with existing generation technology. To bridge that gap, we propose CausaLM, a framework for producing causal model explanations using counterfactual language representation models. Our approach is based on fine-tuning deep contextualized embedding models with auxiliary adversarial tasks derived from the causal graph of the problem. Concretely, we show that by carefully choosing auxiliary adversarial pre-training tasks, language representation models such as BERT can effectively learn a counterfactual representation for a given concept of interest and be used to estimate its true causal effect on model performance. A byproduct of our method is a language representation model that is unaffected by the tested concept, which can be useful in mitigating unwanted bias ingrained in the data.
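To make the "auxiliary adversarial task" idea concrete, here is a minimal sketch of one common way to realize it: a gradient-reversal layer that pushes a BERT encoder to discard information about a chosen concept while an auxiliary head tries to predict it. The class names, the single concept head, and the gradient-reversal mechanism are illustrative assumptions, not the paper's actual CausaLM training setup, which derives its auxiliary tasks from the causal graph of the problem.

```python
# Illustrative sketch only: an adversarial auxiliary head over BERT via
# gradient reversal, not the exact CausaLM recipe from the paper.
import torch
from torch import nn
from transformers import BertModel

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip (and scale) gradients flowing back into the encoder.
        return -ctx.lam * grad_output, None

class CounterfactualEncoder(nn.Module):
    def __init__(self, num_concept_labels: int, lam: float = 1.0):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        # Adversarial head: trained to predict the concept, while the reversed
        # gradients push the encoder to remove that information.
        self.concept_head = nn.Linear(hidden, num_concept_labels)
        self.lam = lam

    def forward(self, input_ids, attention_mask):
        pooled = self.bert(input_ids, attention_mask=attention_mask).pooler_output
        adv_logits = self.concept_head(GradReverse.apply(pooled, self.lam))
        # Train adv_logits with cross-entropy on concept labels alongside the main task.
        return pooled, adv_logits
```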