Yujin Huang
2023
On Robustness of Prompt-based Semantic Parsing with Large Pre-trained Language Model: An Empirical Study on Codex
Terry Yue Zhuo | Zhuang Li | Yujin Huang | Fatemeh Shiri | Weiqing Wang | Gholamreza Haffari | Yuan-Fang Li
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Semantic parsing is a technique aimed at constructing a structured representation of the meaning of a natural-language question. Recent advances in language models trained on code have shown superior performance in generating these representations compared to language models trained solely on natural language text. Existing fine-tuned neural semantic parsers, however, are vulnerable to adversarial attacks on natural-language inputs. While it has been established that the robustness of smaller semantic parsers can be enhanced through adversarial training, this approach is not feasible for large language models in real-world scenarios, as it requires both substantial computational resources and expensive human annotation of in-domain semantic parsing data. This paper presents the first empirical study on the adversarial robustness of a prompt-based semantic parser based on Codex, a state-of-the-art (SOTA) language model trained on code. Our results demonstrate that the large language model of code is vulnerable to carefully crafted adversarial examples. To overcome this challenge, we propose methods for enhancing robustness without requiring substantial amounts of labelled data or intensive computational resources.
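To make the setting concrete, below is a minimal, hypothetical sketch (not the paper's code) of how a prompt-based semantic parser might frame text-to-SQL for a code language model such as Codex, and how a small character-level perturbation of the input question produces an adversarial variant with unchanged meaning. The template, the `employees` schema, and the helpers `build_prompt` and `perturb` are illustrative assumptions only.

```python
# Hypothetical sketch: a few-shot text-to-SQL prompt for a code LM,
# plus a toy character-level adversarial perturbation of the question.

FEW_SHOT_PROMPT = """\
-- Translate the question into SQL over the table employees(name, salary, dept).
-- Q: Who earns more than 50000?
SELECT name FROM employees WHERE salary > 50000;
-- Q: {question}
"""

def build_prompt(question: str) -> str:
    """Fill the few-shot template with a new natural-language question."""
    return FEW_SHOT_PROMPT.format(question=question)

def perturb(question: str) -> str:
    """Swap two adjacent characters in the longest word: a minimal
    character-level attack that preserves the question's meaning."""
    word = max(question.split(), key=len)
    i = len(word) // 2
    swapped = word[:i - 1] + word[i] + word[i - 1] + word[i + 1:]
    return question.replace(word, swapped, 1)

clean = "Which employees work in the sales dept?"
print(build_prompt(clean))           # prompt the model would complete with SQL
print(build_prompt(perturb(clean)))  # slightly corrupted input, same intent
```

The paper's finding, in these terms, is that completions for the perturbed prompt can diverge sharply from those for the clean one, even though a human reads both questions identically.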