@inproceedings{papicchio-etal-2025-squab,
    title = "{SQUAB}: Evaluating {LLM} robustness to Ambiguous and Unanswerable Questions in Semantic Parsing",
    author = "Papicchio, Simone  and
      Cagliero, Luca  and
      Papotti, Paolo",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.906/",
    pages = "17937--17957",
    ISBN = "979-8-89176-332-6",
    abstract = "Large Language Models (LLMs) have demonstrated robust performance in Semantic Parsing (SP) for well-defined queries with unambiguous intent and answerable responses. However, practical user questions frequently deviate from these ideal conditions, challenging the applicability of existing benchmarks. To address this issue, we introduce SQUAB, an automatic dataset generator of Ambiguous and Unanswerable questions. SQUAB generates complex, annotated SP tests using a blend of SQL and LLM capabilities. Results show that SQUAB reduces test generation costs by up to 99{\%} compared to human-based solutions while aligning with real-world question patterns. Furthermore, these tests challenge LLM performance while revealing disparities between public and proprietary datasets. This highlights the need for a dynamic, automatic dataset generator as SQUAB. The code is designed for user extension to accommodate new ambiguous and unanswerable patterns and is available at https://anonymous.4open.science/r/squab-8716/."
}

Markdown (Informal)
[SQUAB: Evaluating LLM robustness to Ambiguous and Unanswerable Questions in Semantic Parsing](https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.906/) (Papicchio et al., EMNLP 2025)