The Importance of Being Recurrent for Modeling Hierarchical Structure

Ke Tran, Arianna Bisazza, Christof Monz


Abstract
Recent work has shown that recurrent neural networks (RNNs) can implicitly capture and exploit hierarchical information when trained to solve common natural language processing tasks (Blevins et al., 2018) such as language modeling (Linzen et al., 2016; Gulordava et al., 2018) and neural machine translation (Shi et al., 2016). In contrast, the ability to model structured data with non-recurrent neural networks has received little attention despite their success in many NLP tasks (Gehring et al., 2017; Vaswani et al., 2017). In this work, we compare the two architectures, recurrent versus non-recurrent, with respect to their ability to model hierarchical structure and find that recurrency is indeed important for this purpose. The code and data used in our experiments are available at https://github.com/ketranm/fan_vs_rnn
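To illustrate the kind of comparison the abstract describes, the sketch below contrasts a recurrent (LSTM) encoder with a non-recurrent, fully attentional encoder on a toy sequence-classification setup. This is a minimal sketch, not the authors' implementation (their code is in the linked repository); the class names, hyperparameters, and pooling choices here are placeholder assumptions.

```python
# Illustrative only: two encoders with the same input/output interface, so the
# factor under study is how information propagates across positions.
import torch
import torch.nn as nn

class RecurrentEncoder(nn.Module):
    """LSTM encoder: processes tokens sequentially, carrying a hidden state."""
    def __init__(self, vocab_size, d_model=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.LSTM(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, num_classes)

    def forward(self, tokens):               # tokens: (batch, seq_len)
        _, (h, _) = self.rnn(self.embed(tokens))
        return self.out(h[-1])                # classify from the final hidden state

class AttentionEncoder(nn.Module):
    """Self-attention encoder: all positions interact in parallel, no recurrence."""
    def __init__(self, vocab_size, d_model=128, num_classes=2, max_len=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)    # position must be injected explicitly
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(d_model, num_classes)

    def forward(self, tokens):
        positions = torch.arange(tokens.size(1), device=tokens.device)
        x = self.embed(tokens) + self.pos(positions)
        return self.out(self.encoder(x).mean(dim=1))  # mean-pool, then classify

# Both models map the same input to the same label space, so they can be trained
# and evaluated identically on a structure-sensitive task.
tokens = torch.randint(0, 1000, (8, 20))
print(RecurrentEncoder(1000)(tokens).shape, AttentionEncoder(1000)(tokens).shape)
```

Because the two encoders differ only in the mechanism connecting positions (sequential hidden-state updates versus parallel self-attention), any performance gap on hierarchy-sensitive tasks can be attributed to that architectural choice.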
Anthology ID:
D18-1503
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4731–4736
URL:
https://aclanthology.org/D18-1503
DOI:
10.18653/v1/D18-1503
Cite (ACL):
Ke Tran, Arianna Bisazza, and Christof Monz. 2018. The Importance of Being Recurrent for Modeling Hierarchical Structure. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4731–4736, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
The Importance of Being Recurrent for Modeling Hierarchical Structure (Tran et al., EMNLP 2018)
PDF:
https://preview.aclanthology.org/fix-dup-bibkey/D18-1503.pdf
Video:
https://preview.aclanthology.org/fix-dup-bibkey/D18-1503.mp4
Code:
ketranm/fan_vs_rnn
Data:
SNLI