DivLogicEval: A Framework for Benchmarking Logical Reasoning Evaluation in Large Language Models

Tsz Ting Chung, Lemao Liu, Mo Yu, Dit-Yan Yeung


Abstract
Logical reasoning in natural language has been recognized as an important measure of intelligence for Large Language Models (LLMs). Popular benchmarks may entangle multiple reasoning skills and thus provide unfaithful evaluations of logical reasoning in isolation. Meanwhile, existing logical reasoning benchmarks are limited in linguistic diversity, and their distributions deviate from that of an ideal logical reasoning benchmark, which may lead to biased evaluation results. This paper therefore proposes DivLogicEval, a new classical-logic benchmark consisting of natural sentences that combine diverse statements in counterintuitive ways. To ensure a more reliable evaluation, we also introduce a new evaluation metric that mitigates the influence of bias and randomness inherent in LLMs. Through experiments, we demonstrate the extent to which logical reasoning is required to answer the questions in DivLogicEval and compare how well different popular LLMs perform logical reasoning.
Anthology ID:
2025.findings-emnlp.47
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
901–915
URL:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.findings-emnlp.47/
DOI:
10.18653/v1/2025.findings-emnlp.47
Cite (ACL):
Tsz Ting Chung, Lemao Liu, Mo Yu, and Dit-Yan Yeung. 2025. DivLogicEval: A Framework for Benchmarking Logical Reasoning Evaluation in Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 901–915, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
DivLogicEval: A Framework for Benchmarking Logical Reasoning Evaluation in Large Language Models (Chung et al., Findings 2025)
PDF:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.findings-emnlp.47.pdf
Checklist:
2025.findings-emnlp.47.checklist.pdf