SATBench: Benchmarking LLMs’ Logical Reasoning via Automated Puzzle Generation from SAT Formulas
Anjiang Wei | Yuheng Wu | Yingjia Wan | Tarun Suresh | Huanmi Tan | Zhanke Zhou | Sanmi Koyejo | Ke Wang | Alex Aiken
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
We introduce SATBench, a benchmark for evaluating the logical reasoning capabilities of large language models (LLMs) through logical puzzles derived from Boolean satisfiability (SAT) problems. Unlike prior work that focuses on inference rule-based reasoning, which often involves deducing conclusions from a set of premises, our approach leverages the search-based nature of SAT problems, where the objective is to find a solution that satisfies a specified set of logical constraints. Each instance in SATBench is generated from a SAT formula and then translated into a puzzle using LLMs. The generation process is fully automated and allows for adjustable difficulty by varying the number of clauses. All 2100 puzzles are validated through both LLM-based and solver-based consistency checks, with human validation on a subset. Experimental results show that even the strongest model, o4-mini, achieves only 65.0% accuracy on hard UNSAT problems, close to the random baseline of 50%. Our error analysis reveals systematic failures such as satisfiability bias, context inconsistency, and condition omission, highlighting limitations of current LLMs in search-based logical reasoning. Our code and data are publicly available at https://github.com/Anjiang-Wei/SATBench
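As a rough illustration of the search-based setup the abstract describes, the sketch below generates a random 3-CNF formula with an adjustable number of clauses and runs a solver-based satisfiability check. This is a minimal example assuming the `python-sat` (pysat) library; the function names and formula parameters are illustrative, not the authors' released pipeline.

```python
# Illustrative sketch (not the SATBench pipeline): build a random 3-CNF
# formula with an adjustable number of clauses, then check satisfiability
# with a SAT solver. Requires python-sat: pip install python-sat
import random
from pysat.solvers import Glucose3

def random_cnf(num_vars: int, num_clauses: int, clause_len: int = 3, seed: int = 0):
    """Return a list of clauses; each clause is a list of non-zero ints
    (positive = variable, negative = negated variable), DIMACS-style."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(num_clauses):
        chosen = rng.sample(range(1, num_vars + 1), clause_len)
        clauses.append([v if rng.random() < 0.5 else -v for v in chosen])
    return clauses

def check_sat(clauses):
    """Solver-based consistency check: returns (is_sat, model or None)."""
    with Glucose3() as solver:
        for clause in clauses:
            solver.add_clause(clause)
        sat = solver.solve()
        return sat, (solver.get_model() if sat else None)

if __name__ == "__main__":
    # Difficulty can be scaled by increasing num_clauses relative to num_vars;
    # denser formulas are more likely to be UNSAT.
    formula = random_cnf(num_vars=10, num_clauses=45)
    is_sat, model = check_sat(formula)
    print("SAT" if is_sat else "UNSAT", model)
```

In SATBench, each such formula would additionally be translated into a natural-language puzzle by an LLM before the satisfiability label is used as ground truth.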