Measuring Alignment Bias in Neural Seq2seq Semantic Parsers

Davide Locatelli, Ariadna Quattoni


Abstract
Prior to deep learning, the semantic parsing community was interested in understanding and modeling the range of possible word alignments between natural language sentences and their corresponding meaning representations. Sequence-to-sequence models changed the research landscape, suggesting that we no longer need to worry about alignments since they can be learned automatically by means of an attention mechanism. More recently, researchers have started to question this premise. In this work we investigate whether seq2seq models can handle both simple and complex alignments. To answer this question we augment the popular Geo semantic parsing dataset with alignment annotations and create Geo-Aligned. We then study the performance of standard seq2seq models on the examples that can be aligned monotonically versus examples that require more complex alignments. Our empirical study shows that performance is significantly better on monotonic alignments.
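The abstract's distinction between monotonic and more complex alignments can be illustrated with a small check. The following is a minimal sketch, not code from the paper: it assumes an alignment is given as a list of (source index, target index) links, and the helper name is_monotonic is hypothetical. An alignment is monotonic when, reading the source left to right, the linked target positions never decrease (no crossing links).

```python
from typing import List, Tuple

def is_monotonic(alignment: List[Tuple[int, int]]) -> bool:
    """Return True if the alignment links never cross, i.e. ordering links
    by source position yields non-decreasing target positions."""
    # Sort links by source index (ties broken by target index).
    links = sorted(alignment)
    targets = [tgt for _, tgt in links]
    return all(a <= b for a, b in zip(targets, targets[1:]))

# Hypothetical examples: indices into a sentence and its meaning representation.
monotonic_example = [(0, 0), (1, 1), (2, 2), (3, 3)]  # links in order, no crossing
crossing_example = [(0, 2), (1, 1), (2, 0)]           # links cross, non-monotonic
print(is_monotonic(monotonic_example))  # True
print(is_monotonic(crossing_example))   # False
```

A split of this kind is what would allow reporting parser accuracy separately on the monotonic and non-monotonic subsets, as the paper does with Geo-Aligned.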
Anthology ID:
2022.starsem-1.17
Volume:
Proceedings of the 11th Joint Conference on Lexical and Computational Semantics
Month:
July
Year:
2022
Address:
Seattle, Washington
Venue:
*SEM
SIG:
SIGSEM
Publisher:
Association for Computational Linguistics
Pages:
200–207
URL:
https://aclanthology.org/2022.starsem-1.17
DOI:
10.18653/v1/2022.starsem-1.17
Cite (ACL):
Davide Locatelli and Ariadna Quattoni. 2022. Measuring Alignment Bias in Neural Seq2seq Semantic Parsers. In Proceedings of the 11th Joint Conference on Lexical and Computational Semantics, pages 200–207, Seattle, Washington. Association for Computational Linguistics.
Cite (Informal):
Measuring Alignment Bias in Neural Seq2seq Semantic Parsers (Locatelli & Quattoni, *SEM 2022)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2022.starsem-1.17.pdf
Code
 interact-erc/geo-aligned