Closing Brackets with Recurrent Neural Networks

Natalia Skachkova, Thomas Trost, Dietrich Klakow


Abstract
Many natural and formal languages contain words or symbols that require a matching counterpart to make an expression well-formed. The combination of opening and closing brackets is a typical example of such a construction. Because such constructions are so common, the ability to follow these rules is important for language modeling. Currently, recurrent neural networks (RNNs) are extensively used for this task. We investigate whether they are capable of learning the rules of opening and closing brackets by applying them to synthetic Dyck languages that consist of different types of brackets. We provide an analysis of the statistical properties of these languages as a baseline and show the strengths and limits of Elman RNNs, GRUs, and LSTMs in experiments on random samples of these languages. In terms of perplexity and prediction accuracy, the RNNs get close to the theoretical baseline in most cases.
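The abstract does not spell out how the Dyck-language samples are drawn, so as a rough illustration only, here is a minimal Python sketch of one plausible sampling scheme: a random walk that opens a bracket with some probability and otherwise closes the most recent one. The bracket inventory, the p_open parameter, and the sample_dyck helper are assumptions made for this sketch, not the authors' actual setup.

    import random

    # Assumed bracket inventory; the paper's exact bracket types are not given here.
    BRACKETS = [("(", ")"), ("[", "]"), ("{", "}")]

    def sample_dyck(max_len=40, p_open=0.5, rng=random):
        """Sample one well-formed string from a multi-bracket Dyck language."""
        stack, out = [], []
        while len(out) + len(stack) < max_len:
            # Open a new bracket with probability p_open,
            # or whenever no bracket is currently open.
            if not stack or rng.random() < p_open:
                opening, closing = rng.choice(BRACKETS)
                stack.append(closing)
                out.append(opening)
            else:
                # Close the most recently opened bracket.
                out.append(stack.pop())
        # Close any brackets that are still open, innermost first.
        out.extend(reversed(stack))
        return "".join(out)

    if __name__ == "__main__":
        random.seed(0)
        for _ in range(3):
            print(sample_dyck())

Strings produced this way are well-formed by construction, so a model's next-token accuracy on the closing brackets (which are fully determined by the stack of open brackets) directly probes whether it has learned the matching rule the abstract describes.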
Anthology ID: W18-5425
Volume: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Month: November
Year: 2018
Address: Brussels, Belgium
Venues: EMNLP | WS
Publisher: Association for Computational Linguistics
Pages: 232–239
URL: https://aclanthology.org/W18-5425
DOI: 10.18653/v1/W18-5425
Cite (ACL): Natalia Skachkova, Thomas Trost, and Dietrich Klakow. 2018. Closing Brackets with Recurrent Neural Networks. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 232–239, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): Closing Brackets with Recurrent Neural Networks (Skachkova et al., 2018)
PDF: https://preview.aclanthology.org/update-css-js/W18-5425.pdf