Hao Yan
2023
Learning to Simulate Natural Language Feedback for Interactive Semantic Parsing
Hao Yan | Saurabh Srivastava | Yintao Tai | Sida I. Wang | Wen-tau Yih | Ziyu Yao
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Interactive semantic parsing based on natural language (NL) feedback, where users provide feedback to correct parser mistakes, has emerged as a more practical scenario than traditional one-shot semantic parsing. However, prior work has relied heavily on human-annotated feedback data to train the interactive semantic parser, which is prohibitively expensive and not scalable. In this work, we propose the new task of simulating NL feedback for interactive semantic parsing. We accompany the task with a novel feedback evaluator, specifically designed to assess the quality of the simulated feedback, which we use to select the best feedback simulator from our proposed variants. On a text-to-SQL dataset, we show that our feedback simulator can generate high-quality NL feedback that boosts the error correction ability of a specific parser. In low-data settings, our feedback simulator helps achieve error correction performance comparable to that obtained with the costly, full set of human annotations.
2000
Coordination and context-dependence in the generation of embodied conversation
Justine Cassell | Matthew Stone | Hao Yan
INLG’2000 Proceedings of the First International Conference on Natural Language Generation