Yuanzhu Peter Chen
Also published as: Peter Chen
2025
BIG-Bench Extra Hard
Mehran Kazemi | Bahare Fatemi | Hritik Bansal | John Palowitch | Chrysovalantis Anastasiou | Sanket Vaibhav Mehta | Lalit K Jain | Virginia Aglietti | Disha Jindal | Peter Chen | Nishanth Dikkala | Gladys Tyen | Xin Liu | Uri Shalit | Silvia Chiappa | Kate Olszewska | Yi Tay | Vinh Q. Tran | Quoc V Le | Orhan Firat
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Current benchmarks for large language model (LLM) reasoning predominantly focus on mathematical and coding abilities, leaving a gap in evaluating broader reasoning proficiencies. A notable exception is the BIG-Bench dataset, which has served as a crucial benchmark for evaluating the general reasoning capabilities of LLMs, thanks to its diverse set of challenging tasks that allow for a comprehensive assessment of general reasoning across various skills within a unified framework. However, recent advances in LLMs have led to saturation on BIG-Bench and on its harder version, BIG-Bench Hard (BBH). State-of-the-art models achieve near-perfect scores on many tasks in BBH, thus diminishing its utility. To address this limitation, we introduce BIG-Bench Extra Hard (BBEH), a new benchmark designed to push the boundaries of LLM reasoning evaluation. BBEH replaces each task in BBH with a novel task that probes a similar reasoning capability but exhibits significantly increased difficulty. We evaluate various general-purpose and reasoning-specialized models on BBEH and observe an accuracy of 23.9% for the best general-purpose model and 54.2% for the best reasoning-specialized model, indicating substantial room for improvement and highlighting the ongoing challenge of achieving robust general reasoning in LLMs. We release BBEH publicly at: https://github.com/google-deepmind/bbeh.
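The reported accuracies are aggregates over the benchmark's tasks. As a rough illustration of how a BBEH-style task suite might be scored, here is a minimal Python sketch; the JSON layout (an "examples" list of records with "input" and "target" fields) and the `model_fn` interface are assumptions made for illustration, not the official harness from the repository linked above.

```python
import json
from pathlib import Path

def evaluate_bbeh_style(task_dir: str, model_fn) -> dict:
    """Score a model on a directory of BBEH-style task files.

    Assumes each task is a JSON file whose "examples" list holds
    {"input": ..., "target": ...} records (a hypothetical layout,
    not necessarily the official repository's). `model_fn` maps a
    prompt string to the model's answer string.
    """
    scores = {}
    for task_file in sorted(Path(task_dir).glob("*.json")):
        examples = json.loads(task_file.read_text())["examples"]
        correct = sum(
            model_fn(ex["input"]).strip() == str(ex["target"]).strip()
            for ex in examples
        )
        scores[task_file.stem] = correct / len(examples)
    # Macro-average: every task contributes equally, regardless of size.
    scores["overall"] = sum(scores.values()) / len(scores)
    return scores
```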
2024
LLMs cannot find reasoning errors, but can correct them given the error location
Gladys Tyen | Hassan Mansoor | Victor Carbune | Peter Chen | Tony Mak
Findings of the Association for Computational Linguistics: ACL 2024
While self-correction has shown promise in improving LLM outputs in terms of style and quality (e.g., Chen et al., 2023b; Madaan et al., 2023), recent attempts to self-correct logical or reasoning errors often cause correct answers to become incorrect, resulting in worse performance overall (Huang et al., 2023). In this paper, we show that poor self-correction performance stems from LLMs' inability to find logical mistakes, rather than an inability to correct a known mistake. Firstly, we benchmark several state-of-the-art LLMs on their mistake-finding ability and demonstrate that they generally struggle with the task, even in highly objective, unambiguous cases. Secondly, we test the correction abilities of LLMs, separately from mistake finding, using a backtracking setup that feeds ground-truth mistake location information to the model. We show that this boosts downstream task performance across our 5 reasoning tasks, indicating that LLMs' correction abilities are robust. Finally, we show that it is possible to obtain mistake location information without ground-truth labels or in-domain training data. We train a small classifier with out-of-domain data, which exhibits stronger mistake-finding performance than prompting a large model. We release our dataset of LLM-generated logical mistakes, BIG-Bench Mistake, to enable further research into locating LLM reasoning mistakes.
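To make the backtracking setup concrete, here is a minimal sketch: keep the chain-of-thought steps before a known mistake location (from gold labels or a classifier), discard the rest, and re-generate from there. The `model.continue_chain` call is a hypothetical interface standing in for whatever generation API is actually used; it is not from the paper.

```python
def backtrack_correct(model, question: str, steps: list[str],
                      mistake_idx: int, max_attempts: int = 3) -> list[str]:
    """Re-generate a reasoning chain from a known mistake location.

    Hypothetical sketch of a backtracking-style setup: `mistake_idx`
    points at the first bad step, and `model.continue_chain` is an
    assumed interface that continues a chain given the question and
    a trusted prefix of steps.
    """
    prefix = steps[:mistake_idx]
    bad_step = steps[mistake_idx]
    for _ in range(max_attempts):
        new_steps = model.continue_chain(question, prefix)
        # Reject continuations that simply reproduce the bad step.
        if new_steps and new_steps[0].strip() != bad_step.strip():
            return prefix + new_steps
    return steps  # fall back to the original chain
```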
2018
A Cross-lingual Messenger with Keyword Searchable Phrases for the Travel Domain
Shehroze Khan | Jihyun Kim | Tarik Zulfikarpasic | Peter Chen | Nizar Habash
Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations
We present Qutr (Query Translator), a smart cross-lingual communication application for the travel domain. Qutr is a real-time messaging app that automatically translates conversations while supporting keyword-to-sentence matching. Qutr relies on querying a database that holds commonly used pre-translated travel-domain phrases and phrase templates in different languages with the use of keywords. The query matching supports paraphrases, incomplete keywords and some input spelling errors. The application addresses common cross-lingual communication issues such as translation accuracy, speed, privacy, and personalization.
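To illustrate the keyword-to-sentence matching the abstract describes, here is a minimal sketch using character-level fuzzy matching, which tolerates incomplete keywords and small spelling errors. The phrase table, keywords, and threshold are invented for illustration and do not reflect Qutr's actual database or scoring.

```python
from difflib import SequenceMatcher

# Hypothetical phrase table: keyword tuple -> pre-translated phrase.
PHRASES = {
    ("bus", "airport"): "Which bus goes to the airport?",
    ("room", "reserve"): "I would like to reserve a room.",
    ("bill", "please"): "Could I have the bill, please?",
}

def fuzzy(a: str, b: str) -> float:
    """Character-level similarity in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def match_phrase(query: str, threshold: float = 0.7) -> str | None:
    """Return the best-matching phrase for a keyword query.

    Each query token is scored against every keyword of a phrase,
    so partial keywords ("airpo") and typos ("resreve") still match.
    """
    tokens = query.lower().split()
    best_phrase, best_score = None, 0.0
    for keywords, phrase in PHRASES.items():
        score = sum(max(fuzzy(t, k) for k in keywords) for t in tokens)
        score /= len(tokens)
        if score > best_score and score >= threshold:
            best_phrase, best_score = phrase, score
    return best_phrase

# Example: match_phrase("bus airpo") -> "Which bus goes to the airport?"
```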
2010
Co-authors
- Gladys Tyen 2
- Virginia Aglietti 1
- Chrysovalantis Anastasiou 1
- Hritik Bansal 1
- Silvia Chiappa 1
- Victor Cărbune 1
- Nishanth Dikkala 1
- Bahare Fatemi 1
- Orhan Firat 1
- Nizar Habash 1
- Lalit K Jain 1
- Disha Jindal 1
- Mehran Kazemi 1
- Shehroze Khan 1
- Jihyun Kim 1
- Quoc Le 1
- Qing Li 1
- Zhangxi Lin 1
- Xin Liu 1
- Tony Mak 1
- Hassan Mansoor 1
- Sanket Vaibhav Mehta 1
- Kate Olszewska 1
- John Palowitch 1
- Uri Shalit 1
- Yi Tay 1
- Vinh Q. Tran 1
- Jia Wang 1
- Tarik Zulfikarpasic 1