Identifying and Answering Questions with False Assumptions: An Interpretable Approach

Zijie Wang, Eduardo Blanco


Abstract
People often ask questions with false assumptions, a type of question that has no regular answer. Answering such questions requires first identifying the false assumptions. Large Language Models (LLMs) often generate misleading answers to these questions because of hallucinations. In this paper, we focus on identifying and answering questions with false assumptions across several domains. We first investigate whether the problem can be reduced to fact verification. Then, we present an approach that leverages external evidence to mitigate hallucinations. Experiments with five LLMs demonstrate that (1) incorporating retrieved evidence is beneficial and (2) generating and validating atomic assumptions yields further improvements and provides an interpretable answer by pinpointing the false assumptions.
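The pipeline described in the abstract (decompose a question into atomic assumptions, validate each assumption against retrieved evidence, then either answer normally or report the false assumptions) could be organized roughly as in the sketch below. This is a minimal illustrative sketch, not the authors' implementation: all function names, types, and the toy example are hypothetical stand-ins for the paper's prompts, retriever, and LLMs.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Assumption:
    text: str        # an atomic assumption extracted from the question
    is_true: bool    # verdict after checking against retrieved evidence
    evidence: str    # evidence snippet used to reach the verdict


def answer_question(
    question: str,
    decompose: Callable[[str], List[str]],  # e.g. an LLM call: question -> atomic assumptions
    retrieve: Callable[[str], str],         # e.g. a retriever: assumption -> external evidence
    verify: Callable[[str, str], bool],     # e.g. an LLM call: (assumption, evidence) -> supported?
    answer_with: Callable[[str], str],      # e.g. an LLM call for questions with no false assumptions
) -> str:
    """Answer a question, pinpointing any false assumptions it contains."""
    checked: List[Assumption] = []
    for a in decompose(question):
        ev = retrieve(a)
        checked.append(Assumption(text=a, is_true=verify(a, ev), evidence=ev))

    false_ones = [c for c in checked if not c.is_true]
    if false_ones:
        # Interpretable output: report which assumptions are false instead of answering directly.
        details = "; ".join(f"'{c.text}' (evidence: {c.evidence})" for c in false_ones)
        return f"The question rests on false assumption(s): {details}"
    return answer_with(question)


# Toy usage with stand-in components (real ones would be LLM prompts and a retriever).
if __name__ == "__main__":
    print(answer_question(
        "Why did Marie Curie win three Nobel Prizes?",
        decompose=lambda q: ["Marie Curie won three Nobel Prizes"],
        retrieve=lambda a: "Marie Curie won two Nobel Prizes (Physics 1903, Chemistry 1911).",
        verify=lambda a, ev: False,  # the toy verifier rejects the (false) assumption
        answer_with=lambda q: "(regular answer)",
    ))
```

Under this framing, the "atomic assumption" route is what makes the answer interpretable: the output names the specific assumption that fails and the evidence contradicting it, rather than producing a fluent but misleading direct answer.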
Anthology ID:
2025.emnlp-main.1228
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
24080–24098
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1228/
Cite (ACL):
Zijie Wang and Eduardo Blanco. 2025. Identifying and Answering Questions with False Assumptions: An Interpretable Approach. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 24080–24098, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Identifying and Answering Questions with False Assumptions: An Interpretable Approach (Wang & Blanco, EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1228.pdf
Checklist:
2025.emnlp-main.1228.checklist.pdf