A neural parser as a direct classifier for head-final languages

Hiroshi Kanayama, Masayasu Muraoka, Ryosuke Kohita


Abstract
This paper demonstrates a neural parser implementation suitable for consistently head-final languages such as Japanese. Unlike the transition-based and graph-based algorithms used in most state-of-the-art parsers, our parser directly selects the head word of each dependent from a limited number of candidates. This drastically simplifies the model, so the output of the neural model is easy to interpret. Moreover, by exploiting grammatical knowledge to restrict the possible modification types, we can control the parser's output and reduce specific errors without adding annotated corpora. The neural parser performed well both on conventional Japanese corpora and on the Japanese Universal Dependencies corpus, and a comparison with a conventional non-neural model showed the advantages of distributed representations.
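The head-selection scheme described in the abstract can be sketched as follows: because every head in a consistently head-final language appears to the right of its dependent, parsing reduces to a per-token classification over a small set of right-side candidate heads. The sketch below only illustrates that idea under assumed names (parse_head_final, score, max_candidates); it does not reproduce the paper's neural scorer or its grammatical restrictions on modification types.

    from typing import Callable, List

    def parse_head_final(
        tokens: List[str],
        score: Callable[[List[str], int, int], float],
        max_candidates: int = 8,   # hypothetical cap on the number of candidate heads
    ) -> List[int]:
        """Return, for each token, the index of its predicted head.

        The last token is treated as the root (head index -1), as is usual
        in a consistently head-final language such as Japanese.
        """
        heads = [-1] * len(tokens)
        for i in range(len(tokens) - 1):
            # Candidate heads are tokens to the right of the dependent,
            # optionally limited to a small window so that head selection
            # stays a classification over only a few candidates.
            candidates = range(i + 1, min(len(tokens), i + 1 + max_candidates))
            heads[i] = max(candidates, key=lambda j: score(tokens, i, j))
        return heads

    if __name__ == "__main__":
        def toy_score(toks, dep, head):
            # Toy scorer preferring the nearest candidate (a right-branching
            # baseline); the paper's scorer is instead a neural network over
            # distributed representations of the dependent and candidate.
            return -(head - dep)

        print(parse_head_final(["watashi-wa", "hon-o", "yonda"], toy_score))
        # -> [1, 2, -1]

With a trained scorer in place of the toy one, both arguments in the example sentence would attach to the final verb rather than to the nearest token.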
Anthology ID:
W18-2906
Volume:
Proceedings of the Workshop on the Relevance of Linguistic Structure in Neural Architectures for NLP
Month:
July
Year:
2018
Address:
Melbourne, Australia
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
38–46
URL:
https://aclanthology.org/W18-2906
DOI:
10.18653/v1/W18-2906
Cite (ACL):
Hiroshi Kanayama, Masayasu Muraoka, and Ryosuke Kohita. 2018. A neural parser as a direct classifier for head-final languages. In Proceedings of the Workshop on the Relevance of Linguistic Structure in Neural Architectures for NLP, pages 38–46, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
A neural parser as a direct classifier for head-final languages (Kanayama et al., ACL 2018)
PDF:
https://aclanthology.org/W18-2906.pdf