LTRAG: Enhancing Autoformalization and Self-refinement for Logical Reasoning with Thought-Guided RAG

Ruikang Hu, Shaoyu Lin, Yeliang Xiu, Yongmei Liu


Abstract
Logical reasoning is fundamental to intelligent systems. Large language models (LLMs) have demonstrated promise in natural language (NL) reasoning, especially with techniques like chain-of-thought (CoT) prompting. Neuro-symbolic methods like Logic-LM and LINC further enhance performance on the challenging FOLIO and AR-LSAT datasets by integrating LLM-based formalization with symbolic solvers, optionally followed by LLM-based refinement. However, these methods still struggle with the accurate formalization of complex NL problems. In this paper, we introduce LTRAG, a framework to enhance autoformalization and self-refinement for logical reasoning with Retrieval-Augmented Generation (RAG), by building knowledge bases of thought-guided examples (https://github.com/sysulic/LTRAG). Experimental results on FOLIO and AR-LSAT show that LTRAG consistently outperforms Logic-LM and LINC across different models. On GPT-4 and AR-LSAT, it achieves an accuracy gain of 13% over Logic-LM.
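The abstract describes the approach only at a high level; the actual implementation is in the linked repository. As a rough illustration of the general retrieval-augmented prompting pattern it alludes to (not the authors' code), the sketch below retrieves the most similar thought-guided example from a small knowledge base using a naive bag-of-words similarity and prepends it to a formalization prompt. All function names, the example entries, and the similarity measure are hypothetical stand-ins.

```python
# Illustrative sketch only: retrieval-augmented prompting with thought-guided
# examples, in the spirit of the abstract. Names and data are hypothetical;
# see https://github.com/sysulic/LTRAG for the real LTRAG implementation.
from collections import Counter
import math

# Toy "knowledge base" of (NL problem, thought, first-order formalization) entries.
KNOWLEDGE_BASE = [
    {
        "problem": "All birds can fly. Tweety is a bird.",
        "thought": "Identify the universal rule and the instance, then instantiate it.",
        "formalization": "forall x (Bird(x) -> CanFly(x)); Bird(tweety)",
    },
    {
        "problem": "If it rains, the ground is wet. It rains.",
        "thought": "Recognize the implication and apply modus ponens.",
        "formalization": "Rain -> WetGround; Rain",
    },
]

def bow(text: str) -> Counter:
    """Bag-of-words vector for a naive cosine similarity."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1):
    """Return the k knowledge-base entries most similar to the query problem."""
    qv = bow(query)
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda e: cosine(qv, bow(e["problem"])),
                    reverse=True)
    return scored[:k]

def build_prompt(problem: str) -> str:
    """Prepend retrieved thought-guided examples to the formalization request."""
    parts = []
    for ex in retrieve(problem):
        parts.append(f"Problem: {ex['problem']}\n"
                     f"Thought: {ex['thought']}\n"
                     f"Formalization: {ex['formalization']}")
    parts.append(f"Problem: {problem}\nThought:")
    return "\n\n".join(parts)

if __name__ == "__main__":
    print(build_prompt("Every student who studies passes. Alice studies."))
```

In practice, a dense retriever (e.g., an embedding model) would replace the bag-of-words similarity, and the assembled prompt would be sent to an LLM for formalization and later self-refinement; this sketch only shows the retrieve-and-prompt skeleton.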
Anthology ID:
2025.findings-acl.126
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2483–2493
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.126/
Cite (ACL):
Ruikang Hu, Shaoyu Lin, Yeliang Xiu, and Yongmei Liu. 2025. LTRAG: Enhancing Autoformalization and Self-refinement for Logical Reasoning with Thought-Guided RAG. In Findings of the Association for Computational Linguistics: ACL 2025, pages 2483–2493, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
LTRAG: Enhancing Autoformalization and Self-refinement for Logical Reasoning with Thought-Guided RAG (Hu et al., Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.126.pdf