On Robustness of Neural Semantic Parsers

Shuo Huang, Zhuang Li, Lizhen Qu, Lei Pan


Abstract
Semantic parsing maps natural language (NL) utterances into logical forms (LFs), which underpins many advanced NLP problems. Semantic parsers gain performance boosts with deep neural networks, but inherit vulnerabilities against adversarial examples. In this paper, we provide the first empirical study on the robustness of semantic parsers in the presence of adversarial attacks. Formally, adversarial examples for semantic parsing are perturbed utterance-LF pairs whose utterances have exactly the same meanings as the original ones. A scalable methodology is proposed to construct robustness test sets based on existing benchmark corpora. Our results answer five research questions by measuring state-of-the-art parsers' performance on the robustness test sets and evaluating the effect of data augmentation.
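The definition above (a meaning-preserving perturbation of the utterance, paired with the unchanged logical form) can be illustrated with a minimal sketch. The synonym table and helper names below are hypothetical and only approximate the idea; the paper's actual construction methodology is described in the full text.

# Illustrative sketch only: derive meaning-preserving perturbations of an
# utterance while keeping its logical form fixed. A robust parser should
# still predict the original LF for every perturbed utterance.
from typing import Dict, List, Tuple

# Hypothetical synonym table; in practice a curated, meaning-preserving resource.
SYNONYMS: Dict[str, List[str]] = {
    "show": ["list", "display"],
    "me": ["us"],
}

def perturb_utterance(utterance: str, synonyms: Dict[str, List[str]]) -> List[str]:
    """Return utterances that differ by one word swap but keep the same meaning."""
    tokens = utterance.split()
    variants = []
    for i, tok in enumerate(tokens):
        for alt in synonyms.get(tok.lower(), []):
            variants.append(" ".join(tokens[:i] + [alt] + tokens[i + 1:]))
    return variants

def build_robustness_set(pairs: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
    """Pair each perturbed utterance with the ORIGINAL logical form."""
    test_set = []
    for utterance, lf in pairs:
        for variant in perturb_utterance(utterance, SYNONYMS):
            test_set.append((variant, lf))
    return test_set

if __name__ == "__main__":
    benchmark = [("show me flights from boston to denver",
                  "(lambda $0 (and (flight $0) (from $0 boston) (to $0 denver)))")]
    for utt, lf in build_robustness_set(benchmark):
        print(utt, "=>", lf)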
Anthology ID:
2021.eacl-main.292
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
3333–3342
URL:
https://aclanthology.org/2021.eacl-main.292
DOI:
10.18653/v1/2021.eacl-main.292
Cite (ACL):
Shuo Huang, Zhuang Li, Lizhen Qu, and Lei Pan. 2021. On Robustness of Neural Semantic Parsers. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3333–3342, Online. Association for Computational Linguistics.
Cite (Informal):
On Robustness of Neural Semantic Parsers (Huang et al., EACL 2021)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2021.eacl-main.292.pdf