Supervised Attention for Sequence-to-Sequence Constituency Parsing

Hidetaka Kamigaito, Katsuhiko Hayashi, Tsutomu Hirao, Hiroya Takamura, Manabu Okumura, Masaaki Nagata


Abstract
The sequence-to-sequence (Seq2Seq) model has been successfully applied to machine translation (MT). Recently, MT performance has been improved by incorporating supervised attention into the model. In this paper, we introduce supervised attention to constituency parsing, which can be regarded as another translation task. Evaluation results on the PTB corpus show that supervised attention improves the bracketing F-measure.
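To make the idea in the abstract concrete, the sketch below adds an attention-supervision term to the standard Seq2Seq negative log-likelihood, following the general supervised-attention recipe. This is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the function name supervised_attention_loss, the one-hot reference-alignment supervision, and the interpolation weight lam are all assumptions for the sake of the example.

import torch
import torch.nn.functional as F

def supervised_attention_loss(logits, targets, attn, ref_align, lam=1.0):
    """Seq2Seq NLL plus a supervision term on the attention weights.

    logits:    (batch, tgt_len, vocab)    decoder scores over output symbols
    targets:   (batch, tgt_len)           gold output symbols
    attn:      (batch, tgt_len, src_len)  soft attention distributions
    ref_align: (batch, tgt_len)           gold source position per output symbol
    lam:       weight of the attention term (hypothetical hyperparameter)
    """
    # Standard symbol-prediction loss over the output vocabulary.
    nll = F.cross_entropy(logits.transpose(1, 2), targets)
    # Cross-entropy between each soft attention distribution and the
    # one-hot reference alignment, i.e. supervising where the model attends.
    attn_nll = F.nll_loss(torch.log(attn + 1e-9).transpose(1, 2), ref_align)
    return nll + lam * attn_nll

# Toy check with random tensors: batch=2, tgt_len=5, src_len=7, vocab=50.
logits = torch.randn(2, 5, 50, requires_grad=True)
targets = torch.randint(0, 50, (2, 5))
attn = torch.softmax(torch.randn(2, 5, 7), dim=-1)
ref_align = torch.randint(0, 7, (2, 5))
supervised_attention_loss(logits, targets, attn, ref_align, lam=0.5).backward()

In this formulation lam trades off output accuracy against alignment fidelity; its value, like the rest of the sketch, is illustrative rather than taken from the paper.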
Anthology ID: I17-2002
Volume: Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Month: November
Year: 2017
Address: Taipei, Taiwan
Editors: Greg Kondrak, Taro Watanabe
Venue: IJCNLP
Publisher: Asian Federation of Natural Language Processing
Pages: 7–12
URL: https://aclanthology.org/I17-2002
Cite (ACL): Hidetaka Kamigaito, Katsuhiko Hayashi, Tsutomu Hirao, Hiroya Takamura, Manabu Okumura, and Masaaki Nagata. 2017. Supervised Attention for Sequence-to-Sequence Constituency Parsing. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 7–12, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Cite (Informal): Supervised Attention for Sequence-to-Sequence Constituency Parsing (Kamigaito et al., IJCNLP 2017)
PDF: https://preview.aclanthology.org/ingest-bitext-workshop/I17-2002.pdf