Liar, Liar, Logical Mire: A Benchmark for Suppositional Reasoning in Large Language Models

Philipp Mondorf, Barbara Plank


Abstract
Knights and knaves problems represent a classic genre of logical puzzles where characters either tell the truth or lie. The objective is to logically deduce each character's identity based on their statements. The challenge arises from the truth-telling or lying behavior, which influences the logical implications of each statement. Solving these puzzles requires not only direct deductions from individual statements, but also the ability to assess the truthfulness of statements by reasoning through various hypothetical scenarios. As such, knights and knaves puzzles serve as compelling examples of suppositional reasoning. In this paper, we introduce TruthQuest, a benchmark for suppositional reasoning based on the principles of knights and knaves puzzles. Our benchmark presents problems of varying complexity, considering both the number of characters and the types of logical statements involved. Evaluations on TruthQuest show that large language models like Llama 3 and Mixtral-8x7B exhibit significant difficulties solving these tasks. A detailed error analysis of the models' outputs reveals that lower-performing models exhibit a diverse range of reasoning errors, frequently failing to grasp the concept of truth and lies. In comparison, more proficient models primarily struggle with accurately inferring the logical implications of potentially false statements.
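To make the puzzle mechanics described above concrete: a knights-and-knaves instance can be solved mechanically by enumerating every hypothetical assignment of identities and keeping those consistent with the statements. The following sketch is purely illustrative (the `solve` helper and the two-character example puzzle are not from the paper or its benchmark); a knight's statement must evaluate to true and a knave's to false.

```python
from itertools import product

def solve(characters, statements):
    """Brute-force a knights-and-knaves puzzle.

    characters: list of character names.
    statements: dict mapping each speaker to a predicate over a "world"
      (a dict mapping name -> True for knight, False for knave) that
      returns the truth value of the speaker's statement in that world.
    A world is consistent iff every knight's statement is true and
    every knave's statement is false.
    """
    solutions = []
    for values in product([True, False], repeat=len(characters)):
        world = dict(zip(characters, values))
        # each statement's truth value must match the speaker's identity
        if all(world[s] == pred(world) for s, pred in statements.items()):
            solutions.append(world)
    return solutions

# Example puzzle: A says "B is a knave"; B says "A and I are the same kind."
puzzle = {
    "A": lambda w: not w["B"],
    "B": lambda w: w["A"] == w["B"],
}
print(solve(["A", "B"], puzzle))  # -> [{'A': True, 'B': False}]
```

The single consistent world (A is a knight, B is a knave) illustrates the suppositional step the benchmark targets: each candidate scenario must be entertained and checked, rather than read off directly from any one statement.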
Anthology ID:
2024.emnlp-main.404
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
7114–7137
URL:
https://preview.aclanthology.org/manual-author-scripts/2024.emnlp-main.404/
DOI:
10.18653/v1/2024.emnlp-main.404
Cite (ACL):
Philipp Mondorf and Barbara Plank. 2024. Liar, Liar, Logical Mire: A Benchmark for Suppositional Reasoning in Large Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 7114–7137, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Liar, Liar, Logical Mire: A Benchmark for Suppositional Reasoning in Large Language Models (Mondorf & Plank, EMNLP 2024)
PDF:
https://preview.aclanthology.org/manual-author-scripts/2024.emnlp-main.404.pdf
Software:
2024.emnlp-main.404.software.zip
Data:
2024.emnlp-main.404.data.zip