Tasha Kim


2025

Autoformalizing Natural Language to First-Order Logic: A Case Study in Logical Fallacy Detection
Abhinav Lalwani | Tasha Kim | Lovish Chopra | Christopher Hahn | Zhijing Jin | Mrinmaya Sachan
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics

Translating natural language into a formal language such as First-Order Logic (FOL) is a foundational challenge in NLP, with wide-ranging applications in automated reasoning, misinformation tracking, and knowledge validation. In this paper, we introduce Natural Language to First-Order Logic (NL2FOL), a framework that autoformalizes natural language to FOL step by step using Large Language Models (LLMs). Our approach addresses key challenges in this translation process, including the integration of implicit background knowledge. By leveraging the structured representations generated by NL2FOL, we use Satisfiability Modulo Theories (SMT) solvers to reason about the logical validity of natural language statements. We present logical fallacy detection as a case study to evaluate the efficacy of NL2FOL. Being neurosymbolic, our approach also provides interpretable insights into the reasoning process and demonstrates robustness without requiring model fine-tuning or labeled training data. Our framework performs well on multiple datasets: on the Logic dataset, NL2FOL achieves an F1-score of 78%, and it generalizes effectively to the LogicClimate dataset with an F1-score of 80%.
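To make the validity-checking step concrete, here is a minimal sketch of how an SMT solver can decide whether a FOL formula produced by an NL-to-FOL stage follows from its premises. This is not the authors' code: the choice of Z3, and the predicates Human and Mortal, are illustrative assumptions; the paper only states that SMT solvers are used on the structured representations. The key idea is that an inference is valid iff the premises conjoined with the negated conclusion are unsatisfiable.

```python
# Minimal sketch (assumes the z3-solver package): check whether a
# conclusion follows from FOL premises, as an NL2FOL-style pipeline might.
from z3 import (BoolSort, Const, DeclareSort, ForAll, Function, Implies,
                Not, Solver, unsat)

Entity = DeclareSort("Entity")                 # domain of discourse
Human = Function("Human", Entity, BoolSort())  # hypothetical predicates
Mortal = Function("Mortal", Entity, BoolSort())

x = Const("x", Entity)
socrates = Const("socrates", Entity)

premises = [
    ForAll([x], Implies(Human(x), Mortal(x))),  # all humans are mortal
    Human(socrates),                            # Socrates is human
]
conclusion = Mortal(socrates)

# premises entail conclusion iff premises AND NOT(conclusion) is unsat
s = Solver()
s.add(premises)
s.add(Not(conclusion))
print("valid inference" if s.check() == unsat else "invalid (potential fallacy)")
```

Under this reduction, a fallacious argument surfaces as a satisfiable query: the solver finds a model in which the premises hold but the conclusion fails.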