Evaluating Large Language Models with NeuBAROCO: Syllogistic Reasoning Ability and Human-like Biases
Risako Ando, Takanobu Morishita, Hirohiko Abe, Koji Mineshima, Mitsuhiro Okada
Abstract
This paper investigates whether current large language models exhibit biases in logical reasoning similar to those observed in humans. Specifically, we focus on syllogistic reasoning, a well-studied form of inference in the cognitive science of human deduction. To facilitate our analysis, we introduce a dataset called NeuBAROCO, originally designed for psychological experiments that assess human logical abilities in syllogistic reasoning. The dataset consists of syllogistic inferences in both English and Japanese. We examine three types of biases observed in human syllogistic reasoning: belief biases, conversion errors, and atmosphere effects. Our findings demonstrate that current large language models struggle more with problems involving these three types of biases.
- Anthology ID: 2023.naloma-1.1
- Volume: Proceedings of the 4th Natural Logic Meets Machine Learning Workshop
- Month: June
- Year: 2023
- Address: Nancy, France
- Editors: Stergios Chatzikyriakidis, Valeria de Paiva
- Venues: NALOMA | WS
- SIG: SIGSEM
- Publisher: Association for Computational Linguistics
- Pages: 1–11
- URL: https://preview.aclanthology.org/build-pipeline-with-new-library/2023.naloma-1.1/
- Cite (ACL): Risako Ando, Takanobu Morishita, Hirohiko Abe, Koji Mineshima, and Mitsuhiro Okada. 2023. Evaluating Large Language Models with NeuBAROCO: Syllogistic Reasoning Ability and Human-like Biases. In Proceedings of the 4th Natural Logic Meets Machine Learning Workshop, pages 1–11, Nancy, France. Association for Computational Linguistics.
- Cite (Informal): Evaluating Large Language Models with NeuBAROCO: Syllogistic Reasoning Ability and Human-like Biases (Ando et al., NALOMA 2023)
- PDF: https://preview.aclanthology.org/build-pipeline-with-new-library/2023.naloma-1.1.pdf
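To make the abstract's setup concrete, here is a minimal Python sketch of what an NLI-style syllogism item and the classic atmosphere-effect heuristic might look like. The item schema, the label set, and the example sentences are illustrative assumptions, not the actual NeuBAROCO format; consult the paper and dataset for the real design.

```python
from dataclasses import dataclass

@dataclass
class SyllogismItem:
    # Hypothetical schema; the real NeuBAROCO fields may differ.
    premise1: str
    premise2: str
    conclusion: str
    label: str  # assumed label set: "entailment" / "contradiction" / "neutral"

def mood(sentence: str) -> str:
    """Crude surface detector for the four classical moods:
    A (universal affirmative), E (universal negative),
    I (particular affirmative), O (particular negative)."""
    s = sentence.lower()
    particular = s.startswith("some")
    negative = " not " in s or s.startswith("no ")
    if particular:
        return "O" if negative else "I"
    return "E" if negative else "A"

def atmosphere_consistent(item: SyllogismItem) -> bool:
    """Atmosphere heuristic (Woodworth & Sells): a particular premise pulls
    toward a particular conclusion, and a negative premise pulls toward a
    negative conclusion, regardless of logical validity."""
    premise_moods = [mood(item.premise1), mood(item.premise2)]
    particular = any(m in ("I", "O") for m in premise_moods)
    negative = any(m in ("E", "O") for m in premise_moods)
    c = mood(item.conclusion)
    return ((c in ("I", "O")) == particular) and ((c in ("E", "O")) == negative)

# Illustrative items (hypothetical, not drawn from the dataset).
valid = SyllogismItem(
    "All birds are animals.", "All sparrows are birds.",
    "All sparrows are animals.", "entailment")
trap = SyllogismItem(
    "Some animals are pets.", "Some pets are dogs.",
    "Some animals are dogs.", "neutral")  # atmosphere-consistent yet invalid

for item in (valid, trap):
    print(item.label, "| atmosphere-consistent:", atmosphere_consistent(item))
```

The second item illustrates the kind of trap the paper probes: its conclusion matches the "atmosphere" of the premises (both particular, both affirmative) even though it does not logically follow, so a reasoner relying on the heuristic would wrongly accept it.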