Semantic Evaluation for Text-to-SQL with Distilled Test Suites

Ruiqi Zhong, Tao Yu, Dan Klein


Abstract
We propose test suite accuracy to approximate semantic accuracy for Text-to-SQL models. Our method distills a small test suite of databases that achieves high code coverage for the gold query from a large number of randomly generated databases. At evaluation time, it computes the denotation accuracy of the predicted queries on the distilled test suite, thereby efficiently computing a tight upper bound on semantic accuracy. We use our proposed method to evaluate 21 models submitted to the Spider leaderboard and manually verify that our method is always correct on 100 examples. In contrast, the current Spider metric leads to a 2.5% false negative rate on average and 8.1% in the worst case, indicating that test suite accuracy is needed. Our implementation, along with distilled test suites for eleven Text-to-SQL datasets, is publicly available.
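The sketch below illustrates how test suite accuracy can be computed once a distilled suite of databases is available: a predicted query is counted as correct only if its denotation matches the gold query's on every database in the suite. This is a minimal illustration, not the authors' released TestSuiteEval implementation; it assumes the suite consists of SQLite database files, and the inputs `examples` and `db_paths` are hypothetical. It also glosses over details such as order-sensitive comparison for ORDER BY queries.

```python
import sqlite3
from collections import Counter
from contextlib import closing

def denotation(db_path, sql):
    """Execute `sql` on the SQLite database at `db_path`; return its result as a multiset."""
    with closing(sqlite3.connect(db_path)) as conn:
        try:
            rows = conn.execute(sql).fetchall()
        except sqlite3.Error:
            return None  # treat an execution error as its own distinct denotation
    # Multiset (order-insensitive) comparison for simplicity; queries with
    # ORDER BY would need an order-sensitive check in a faithful implementation.
    return Counter(rows)

def test_suite_accuracy(examples, db_paths):
    """examples: list of (gold_sql, pred_sql) pairs; db_paths: the distilled test suite."""
    correct = 0
    for gold_sql, pred_sql in examples:
        # A prediction counts as semantically correct only if its denotation
        # matches the gold query's denotation on every database in the suite.
        if all(denotation(db, pred_sql) == denotation(db, gold_sql) for db in db_paths):
            correct += 1
    return correct / len(examples)
```

Because disagreement on any single database suffices to mark a prediction wrong, the suite only needs enough databases to distinguish the gold query from plausible incorrect predictions, which is what keeps evaluation fast.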
Anthology ID:
2020.emnlp-main.29
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
396–411
URL:
https://aclanthology.org/2020.emnlp-main.29
DOI:
10.18653/v1/2020.emnlp-main.29
Cite (ACL):
Ruiqi Zhong, Tao Yu, and Dan Klein. 2020. Semantic Evaluation for Text-to-SQL with Distilled Test Suites. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 396–411, Online. Association for Computational Linguistics.
Cite (Informal):
Semantic Evaluation for Text-to-SQL with Distilled Test Suites (Zhong et al., EMNLP 2020)
PDF:
https://preview.aclanthology.org/landing_page/2020.emnlp-main.29.pdf
Video:
https://slideslive.com/38939361
Code
ruiqi-zhong/TestSuiteEval (+ additional community code)
Data
SParC, WikiSQL