Zheng Ning
2023
Interactive Text-to-SQL Generation via Editable Step-by-Step Explanations
Yuan Tian | Zheng Zhang | Zheng Ning | Toby Li | Jonathan K. Kummerfeld | Tianyi Zhang
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Relational databases play an important role in business, science, and more. However, many users cannot fully unleash the analytical power of relational databases, because they are not familiar with database languages such as SQL. Many techniques have been proposed to automatically generate SQL from natural language, but they suffer from two issues: (1) they still make many mistakes, particularly for complex queries, and (2) they do not provide a flexible way for non-expert users to validate and refine incorrect queries. To address these issues, we introduce a new interaction mechanism that allows users to directly edit a step-by-step explanation of a query to fix errors. Our experiments on multiple datasets, as well as a user study with 24 participants, demonstrate that our approach can achieve better performance than multiple state-of-the-art approaches. Our code and datasets are available at https://github.com/magic-YuanTian/STEPS.
Exploring Contrast Consistency of Open-Domain Question Answering Systems on Minimally Edited Questions
Zhihan Zhang | Wenhao Yu | Zheng Ning | Mingxuan Ju | Meng Jiang
Transactions of the Association for Computational Linguistics, Volume 11
Contrast consistency, the ability of a model to make consistently correct predictions in the presence of perturbations, is an essential aspect in NLP. While studied in tasks such as sentiment analysis and reading comprehension, it remains unexplored in open-domain question answering (OpenQA) due to the difficulty of collecting perturbed questions that satisfy factuality requirements. In this work, we collect minimally edited questions as challenging contrast sets to evaluate OpenQA models. Our collection approach combines both human annotation and large language model generation. We find that the widely used dense passage retriever (DPR) performs poorly on our contrast sets, despite fitting the training set well and performing competitively on standard test sets. To address this issue, we introduce a simple and effective query-side contrastive loss with the aid of data augmentation to improve DPR training. Our experiments on the contrast sets demonstrate that DPR’s contrast consistency is improved without sacrificing its accuracy on the standard test sets.