Abhinav Lalwani
2025
Autoformalizing Natural Language to First-Order Logic: A Case Study in Logical Fallacy Detection
Abhinav Lalwani | Tasha Kim | Lovish Chopra | Christopher Hahn | Zhijing Jin | Mrinmaya Sachan
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
Translating natural language into a formal language such as First-Order Logic (FOL) is a foundational challenge in NLP, with wide-ranging applications in automated reasoning, misinformation tracking, and knowledge validation. In this paper, we introduce Natural Language to First-Order Logic (NL2FOL), a framework that autoformalizes natural language to FOL step by step using Large Language Models (LLMs). Our approach addresses key challenges in this translation process, including the integration of implicit background knowledge. By leveraging the structured representations generated by NL2FOL, we use Satisfiability Modulo Theories (SMT) solvers to reason about the logical validity of natural language statements. We present logical fallacy detection as a case study to evaluate the efficacy of NL2FOL. Being neurosymbolic, our approach also provides interpretable insights into the reasoning process and demonstrates robustness without requiring model fine-tuning or labeled training data. Our framework achieves strong performance on multiple datasets: on the Logic dataset, NL2FOL achieves an F1-score of 78%, and it generalizes effectively to the LogicClimate dataset with an F1-score of 80%.
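The solver-side step of a pipeline like this can be illustrated with a short sketch. The following is a minimal example of checking argument validity with an SMT solver, assuming the LLM stage has already produced formulas; it uses the z3-solver Python bindings and a propositional example (affirming the consequent) for brevity, so the solver choice, symbol names, and example argument are illustrative assumptions, not the paper's actual pipeline.

```python
# Hedged sketch: validity checking with Z3, assuming formulas already
# produced by an upstream NL-to-FOL stage. Solver choice and example
# are illustrative, not taken from the paper.
from z3 import Bool, Implies, Not, Solver, unsat

def is_valid(premises, conclusion):
    """An argument is valid iff (premises AND NOT conclusion) is unsatisfiable."""
    s = Solver()
    for p in premises:
        s.add(p)
    s.add(Not(conclusion))          # look for a countermodel
    return s.check() == unsat       # no countermodel -> valid

# "If it rains, the ground is wet. The ground is wet. Therefore, it rained."
rain, wet = Bool("rain"), Bool("wet")
premises = [Implies(rain, wet), wet]
conclusion = rain

print(is_valid(premises, conclusion))  # False: affirming the consequent
```

When the check returns a countermodel, the satisfying assignment itself serves as an interpretable explanation of why the argument is fallacious.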
2022
Logical Fallacy Detection
Zhijing Jin | Abhinav Lalwani | Tejas Vaidhya | Xiaoyu Shen | Yiwen Ding | Zhiheng Lyu | Mrinmaya Sachan | Rada Mihalcea | Bernhard Schölkopf
Findings of the Association for Computational Linguistics: EMNLP 2022
Reasoning is central to human intelligence. However, fallacious arguments are common, and some exacerbate problems such as the spread of misinformation about climate change. In this paper, we propose the task of logical fallacy detection and provide a new dataset (Logic) of logical fallacies commonly found in text, together with an additional challenge set for detecting logical fallacies in climate change claims (LogicClimate). Detecting logical fallacies is a hard problem, as the model must understand the underlying logical structure of the argument. We find that existing pretrained large language models perform poorly on this task. In contrast, we show that a simple structure-aware classifier outperforms the best language model by 5.46% F1 score on Logic and 4.51% on LogicClimate. We encourage future work to explore this task, since (a) it can serve as a new reasoning challenge for language models, and (b) it has potential applications in tackling the spread of misinformation. Our dataset and code are available at https://github.com/causalNLP/logical-fallacy
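For a feel of the classification task itself, here is a hedged baseline sketch; this is not the paper's structure-aware classifier, but a generic zero-shot approach of the kind the paper compares against, using the Hugging Face transformers pipeline. The model name is a standard public NLI checkpoint, and the label set is a hypothetical subset of fallacy types; see the repository above for the dataset's actual taxonomy.

```python
# Hedged baseline sketch: zero-shot fallacy classification with an
# off-the-shelf NLI model. NOT the paper's structure-aware classifier;
# the label subset below is illustrative.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

labels = ["faulty generalization", "ad hominem", "false causality",
          "circular reasoning", "appeal to emotion"]  # illustrative subset

argument = ("Everyone I know likes this movie, "
            "so it must be the best film of the year.")
result = classifier(argument, candidate_labels=labels)
print(result["labels"][0])  # highest-scoring fallacy class
```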