Abhinav Lalwani


2022

Logical Fallacy Detection
Zhijing Jin | Abhinav Lalwani | Tejas Vaidhya | Xiaoyu Shen | Yiwen Ding | Zhiheng Lyu | Mrinmaya Sachan | Rada Mihalcea | Bernhard Schoelkopf
Findings of the Association for Computational Linguistics: EMNLP 2022

Reasoning is central to human intelligence. However, fallacious arguments are common and can exacerbate problems such as the spread of misinformation about climate change. In this paper, we propose the task of logical fallacy detection and provide a new dataset (Logic) of logical fallacies commonly found in text, together with an additional challenge set for detecting logical fallacies in climate change claims (LogicClimate). Detecting logical fallacies is a hard problem, as the model must understand the underlying logical structure of the argument. We find that existing pretrained large language models perform poorly on this task. In contrast, we show that a simple structure-aware classifier outperforms the best language model by 5.46% F1 on Logic and 4.51% on LogicClimate. We encourage future work to explore this task since (a) it can serve as a new reasoning challenge for language models, and (b) it has potential applications in tackling the spread of misinformation. Our dataset and code are available at https://github.com/causalNLP/logical-fallacy.
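
The task framed above is multi-class text classification over fallacy types. As a rough illustration only, the sketch below runs a generic pretrained sequence-classification model over an argument and returns the most likely fallacy label; the model name, label subset, and example sentence are placeholders and do not reflect the paper's structure-aware classifier or its exact label inventory.

```python
# Minimal sketch of a fallacy-classification baseline (not the paper's
# structure-aware model). Model name and label list are illustrative only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["faulty generalization", "ad hominem", "false causality",
          "circular reasoning", "appeal to emotion"]  # placeholder subset

MODEL_NAME = "bert-base-uncased"  # assumed encoder, fine-tuned on Logic in practice
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS))

def classify_fallacy(argument: str) -> str:
    """Return the most likely fallacy label for a single argument."""
    inputs = tokenizer(argument, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(classify_fallacy(
    "Everyone I know likes this policy, so it must be good."))
```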