Is Large Language Model Performance on Reasoning Tasks Impacted by Different Ways Questions Are Asked?
Seok Hwan Song, Mohna Chakraborty, Qi Li, Wallapak Tavanapong
Abstract
Large Language Models (LLMs) have been evaluated using diverse question types, e.g., multiple-choice, true/false, and short/long answers. This study answers an unexplored question about the impact of different question types on LLM accuracy on reasoning tasks. We investigate the performance of five LLMs on three different types of questions using quantitative and deductive reasoning tasks. The performance metrics include accuracy in the reasoning steps and in choosing the final answer. Key Findings: (1) Significant differences exist in LLM performance across different question types. (2) Reasoning accuracy does not necessarily correlate with the final selection accuracy. (3) The number of options and the choice of words influence LLM performance.
- Anthology ID: 2025.findings-acl.1138
- Volume: Findings of the Association for Computational Linguistics: ACL 2025
- Month: July
- Year: 2025
- Address: Vienna, Austria
- Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 22066–22081
- URL: https://preview.aclanthology.org/landing_page/2025.findings-acl.1138/
- Cite (ACL): Seok Hwan Song, Mohna Chakraborty, Qi Li, and Wallapak Tavanapong. 2025. Is Large Language Model Performance on Reasoning Tasks Impacted by Different Ways Questions Are Asked?. In Findings of the Association for Computational Linguistics: ACL 2025, pages 22066–22081, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal): Is Large Language Model Performance on Reasoning Tasks Impacted by Different Ways Questions Are Asked? (Song et al., Findings 2025)
- PDF: https://preview.aclanthology.org/landing_page/2025.findings-acl.1138.pdf