TheoremQA: A Theorem-driven Question Answering Dataset
Wenhu Chen | Ming Yin | Max Ku | Pan Lu | Yixin Wan | Xueguang Ma | Jianyu Xu | Xinyi Wang | Tony Xia
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Recent LLMs like GPT-4 and PaLM-2 have made tremendous progress in solving fundamental math problems like GSM8K, achieving over 90% accuracy. However, their capability to solve more challenging math problems that require domain-specific knowledge (i.e., theorems) has yet to be investigated. In this paper, we introduce TheoremQA, the first theorem-driven question-answering dataset designed to evaluate AI models’ capability to apply theorems to solve challenging science problems. TheoremQA is curated by domain experts and contains 800 high-quality questions covering 350 theorems from Math, Physics, EE&CS, and Finance. We evaluate a wide spectrum of 16 large language and code models with different prompting strategies like Chain-of-Thoughts and Program-of-Thoughts. We found that GPT-4’s capability to solve these problems is unparalleled, achieving an accuracy of 51% with Program-of-Thoughts prompting. All existing open-sourced models score below 15%, barely surpassing the random-guess baseline. Given the diversity and broad coverage of TheoremQA, we believe it can be used as a better benchmark to evaluate LLMs’ capabilities to solve challenging science problems.
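The Program-of-Thoughts strategy mentioned in the abstract prompts the model to emit an executable program rather than free-form reasoning, and the final answer is obtained by running that program. A minimal sketch of the idea, assuming a hypothetical prompt template and `execute_program` helper (the wording, variable names, and example question are illustrative, not the paper's exact setup):

```python
# Program-of-Thoughts (PoT) sketch: the LLM is asked to write Python
# that computes the answer; we then execute the generated code locally
# instead of trusting the model's own arithmetic.

POT_PROMPT = """Read the question, then write a Python program whose
last line assigns the numeric answer to a variable named `ans`.

Question: {question}
# Python program:
"""

def execute_program(program: str) -> float:
    """Run a model-generated program and return its `ans` variable."""
    namespace: dict = {}
    exec(program, namespace)  # in a real system, sandbox this call
    return namespace["ans"]

# Illustrative model output for a compound-interest question:
generated = (
    "principal = 1000\n"
    "rate = 0.05\n"
    "years = 3\n"
    "ans = principal * (1 + rate) ** years"
)
print(execute_program(generated))
```

Delegating the arithmetic to an interpreter is what distinguishes Program-of-Thoughts from Chain-of-Thoughts, where the model must carry out every calculation step in natural language.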