Neural Transition-based Syntactic Linearization

Linfeng Song, Yue Zhang, Daniel Gildea

Abstract
The task of linearization is to find a grammatical order for a given set of words. Traditional models rely on statistical methods. Syntactic linearization systems, which generate a sentence along with its syntactic tree, have shown state-of-the-art performance. Recent work shows that a multilayer LSTM language model outperforms competitive statistical syntactic linearization systems without using syntax. In this paper, we study neural syntactic linearization, building a transition-based syntactic linearizer that leverages a feed-forward neural network and observing significantly better results than LSTM language models on this task.
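
Since the abstract compresses the method, the following is a minimal sketch of the core idea: a transition system orders the input words one at a time, and a feed-forward network scores the candidate actions at each step. All names, features, and weights below are illustrative assumptions (random and untrained), and the decoding is greedy; the paper's actual system also predicts arc actions that build a dependency tree jointly with the word order, and it searches with a beam rather than an argmax.

import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and embeddings (hypothetical; a real model learns these).
VOCAB = ["<s>", "the", "dog", "barked", "loudly"]
W2I = {w: i for i, w in enumerate(VOCAB)}
EMB = rng.normal(size=(len(VOCAB), 8))

# One-hidden-layer feed-forward scorer: features -> tanh hidden -> score.
W1 = rng.normal(size=(16, 12))
b1 = np.zeros(12)
W2 = rng.normal(size=12)

def features(prev_word, cand_word):
    # Concatenate embeddings of the last placed word and the candidate.
    return np.concatenate([EMB[W2I[prev_word]], EMB[W2I[cand_word]]])

def score(prev_word, cand_word):
    h = np.tanh(features(prev_word, cand_word) @ W1 + b1)
    return float(h @ W2)

def linearize(words):
    # Greedy decoding: repeatedly "shift" the highest-scoring remaining word.
    # The paper interleaves such shift actions with arc actions that attach
    # each shifted word into a dependency tree; that part is omitted here.
    output, remaining = [], set(words)
    prev = "<s>"
    while remaining:
        best = max(remaining, key=lambda w: score(prev, w))
        output.append(best)
        remaining.remove(best)
        prev = best
    return output

print(linearize({"the", "dog", "barked", "loudly"}))

With trained parameters and richer state features (for example, the partial tree built so far), the same loop yields a transition-based linearizer of the kind the abstract describes, with a beam over partial states replacing the greedy argmax in practice.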
Anthology ID:
W18-6553
Volume:
Proceedings of the 11th International Conference on Natural Language Generation
Month:
November
Year:
2018
Address:
Tilburg University, The Netherlands
Editors:
Emiel Krahmer, Albert Gatt, Martijn Goudbeek
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
431–440
URL:
https://aclanthology.org/W18-6553
DOI:
10.18653/v1/W18-6553
Cite (ACL):
Linfeng Song, Yue Zhang, and Daniel Gildea. 2018. Neural Transition-based Syntactic Linearization. In Proceedings of the 11th International Conference on Natural Language Generation, pages 431–440, Tilburg University, The Netherlands. Association for Computational Linguistics.
Cite (Informal):
Neural Transition-based Syntactic Linearization (Song et al., INLG 2018)
PDF:
https://aclanthology.org/W18-6553.pdf