SQUAB: Evaluating LLM robustness to Ambiguous and Unanswerable Questions in Semantic Parsing

Simone Papicchio, Luca Cagliero, Paolo Papotti


Abstract
Large Language Models (LLMs) have demonstrated robust performance in Semantic Parsing (SP) for well-defined queries with unambiguous intent and answerable responses. However, practical user questions frequently deviate from these ideal conditions, challenging the applicability of existing benchmarks. To address this issue, we introduce SQUAB, an automatic generator of datasets of Ambiguous and Unanswerable questions. SQUAB generates complex, annotated SP tests using a blend of SQL and LLM capabilities. Results show that SQUAB reduces test generation costs by up to 99% compared to human-based solutions while aligning with real-world question patterns. Furthermore, these tests challenge LLM performance while revealing disparities between public and proprietary datasets. This highlights the need for a dynamic, automatic dataset generator such as SQUAB. The code is designed for user extension to accommodate new ambiguous and unanswerable patterns and is available at https://anonymous.4open.science/r/squab-8716/.
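To make the "unanswerable" pattern from the abstract concrete, below is a minimal, hypothetical Python sketch of the underlying idea: an unanswerable test pairs a natural-language question with a database schema that verifiably lacks the information needed to answer it. All names here (make_unanswerable_test, table_columns, the employees schema) are illustrative assumptions, not SQUAB's actual API; see the repository linked above for its real extension points.

```python
import sqlite3

def table_columns(conn: sqlite3.Connection, table: str) -> list[str]:
    """Return the column names of `table` via SQLite's PRAGMA interface."""
    return [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

def make_unanswerable_test(conn, table, missing_attribute, question_template):
    """Build one annotated test case (hypothetical structure, not SQUAB's):
    the question references an attribute that is provably absent from the
    schema, so no SQL query over this database can answer it."""
    cols = table_columns(conn, table)
    assert missing_attribute not in cols, "attribute must be absent from the schema"
    return {
        "question": question_template.format(attr=missing_attribute, table=table),
        "label": "unanswerable",  # gold annotation for the benchmark
        "reason": f"column '{missing_attribute}' not in {table}({', '.join(cols)})",
    }

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employees (id INTEGER, name TEXT, dept TEXT)")
    test = make_unanswerable_test(
        conn, "employees", "salary",
        "What is the average {attr} in the {table} table?",
    )
    print(test)
```

Running the sketch prints one annotated test: a question about a salary column that the schema does not contain, labeled "unanswerable" with a schema-grounded justification. An LLM-generated paraphrase step, as the abstract's "blend of SQL and LLM capabilities" suggests, would then make the question sound natural.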
Anthology ID:
2025.emnlp-main.906
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
17937–17957
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.906/
Cite (ACL):
Simone Papicchio, Luca Cagliero, and Paolo Papotti. 2025. SQUAB: Evaluating LLM robustness to Ambiguous and Unanswerable Questions in Semantic Parsing. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 17937–17957, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
SQUAB: Evaluating LLM robustness to Ambiguous and Unanswerable Questions in Semantic Parsing (Papicchio et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.906.pdf
Checklist:
2025.emnlp-main.906.checklist.pdf