Beyond Lines and Circles: Unveiling the Geometric Reasoning Gap in Large Language Models

Spyridon Mouselinos, Henryk Michalewski, Mateusz Malinowski


Abstract
Large Language Models (LLMs) demonstrate ever-increasing abilities in mathematical and algorithmic tasks, yet their geometric reasoning skills remain underexplored. We investigate LLMs' abilities in constructive geometric problem-solving, one of the most fundamental steps in developing human mathematical reasoning, and reveal notable challenges in this domain. LLMs exhibit biases toward variable names, struggle with 2D spatial relationships and planning, and hallucinate object placements. To address these shortcomings, we introduce a framework that enhances LLMs' reasoning potential through a multi-agent system conducting internal dialogue. This work underscores LLMs' limitations in geometric reasoning and improves their capabilities through self-correction, collaboration, and diverse role specializations.
Anthology ID:
2024.findings-emnlp.360
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6192–6222
URL:
https://preview.aclanthology.org/fix-sig-urls/2024.findings-emnlp.360/
DOI:
10.18653/v1/2024.findings-emnlp.360
Cite (ACL):
Spyridon Mouselinos, Henryk Michalewski, and Mateusz Malinowski. 2024. Beyond Lines and Circles: Unveiling the Geometric Reasoning Gap in Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 6192–6222, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Beyond Lines and Circles: Unveiling the Geometric Reasoning Gap in Large Language Models (Mouselinos et al., Findings 2024)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2024.findings-emnlp.360.pdf