Bidirectional Masked Self-attention and N-gram Span Attention for Constituency Parsing

Soohyeong Kim, Whanhee Cho, Minji Kim, Yong Choi


Abstract
Attention mechanisms have become a crucial component of deep learning, particularly in natural language processing (NLP) tasks. However, in tasks such as constituency parsing, attention mechanisms can lack the directional information needed to form sentence spans. To address this issue, we propose a Bidirectional masked and N-gram span Attention (BNA) model, which modifies the attention mechanism to capture explicit dependencies between words and to enhance the representation of the output span vectors. The proposed model achieves state-of-the-art performance on the Penn Treebank and Chinese Penn Treebank datasets, with F1 scores of 96.47 and 94.15, respectively. Ablation studies and analysis show that the proposed BNA model effectively captures sentence structure by contextualizing each word through bidirectional dependencies and by enhancing span representations.
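
The following is a minimal, hypothetical sketch of the bidirectional masked self-attention idea described in the abstract: one scaled dot-product attention pass restricted to left context (forward mask) and one restricted to right context (backward mask), concatenated per token. The function names, mask construction, and the concatenation used to combine the two directions are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch: bidirectional masked self-attention via
    # forward (left-context) and backward (right-context) masks.
    import torch
    import torch.nn.functional as F

    def masked_attention(q, k, v, mask):
        # q, k, v: (batch, seq, dim); mask: (seq, seq), True = attention allowed
        d = q.size(-1)
        scores = q @ k.transpose(-2, -1) / d ** 0.5
        scores = scores.masked_fill(~mask, float("-inf"))
        return F.softmax(scores, dim=-1) @ v

    def bidirectional_masked_attention(x, w_q, w_k, w_v):
        # x: (batch, seq, dim); w_q, w_k, w_v: (dim, dim) projection matrices
        seq = x.size(1)
        q, k, v = x @ w_q, x @ w_k, x @ w_v
        fwd_mask = torch.tril(torch.ones(seq, seq, dtype=torch.bool))  # token i attends to j <= i
        bwd_mask = torch.triu(torch.ones(seq, seq, dtype=torch.bool))  # token i attends to j >= i
        fwd = masked_attention(q, k, v, fwd_mask)
        bwd = masked_attention(q, k, v, bwd_mask)
        # Concatenating the two directions per token is one possible way to
        # expose explicit left/right dependencies to a downstream span scorer.
        return torch.cat([fwd, bwd], dim=-1)  # (batch, seq, 2 * dim)

    # Usage: x = torch.randn(2, 5, 16); weights of shape (16, 16) each;
    # the output has shape (2, 5, 32).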
Anthology ID:
2023.findings-emnlp.25
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
326–338
URL:
https://aclanthology.org/2023.findings-emnlp.25
DOI:
10.18653/v1/2023.findings-emnlp.25
Cite (ACL):
Soohyeong Kim, Whanhee Cho, Minji Kim, and Yong Choi. 2023. Bidirectional Masked Self-attention and N-gram Span Attention for Constituency Parsing. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 326–338, Singapore. Association for Computational Linguistics.
Cite (Informal):
Bidirectional Masked Self-attention and N-gram Span Attention for Constituency Parsing (Kim et al., Findings 2023)
PDF:
https://preview.aclanthology.org/ingest-acl-2023-videos/2023.findings-emnlp.25.pdf