Abstract
In this paper, we introduce the novel concept of densely connected layers into recurrent neural networks. We evaluate our proposed architecture on the Penn Treebank language modeling task. We show that we can obtain similar perplexity scores with six times fewer parameters compared to a standard stacked 2-layer LSTM model trained with dropout (Zaremba et al., 2014). In contrast with the current usage of skip connections, we show that densely connecting only a few stacked layers with skip connections already yields significant perplexity reductions.
- Anthology ID: W17-2622
- Volume: Proceedings of the 2nd Workshop on Representation Learning for NLP
- Month: August
- Year: 2017
- Address: Vancouver, Canada
- Venue: RepL4NLP
- SIG: SIGREP
- Publisher: Association for Computational Linguistics
- Pages: 186–190
- URL: https://aclanthology.org/W17-2622
- DOI: 10.18653/v1/W17-2622
- Cite (ACL): Fréderic Godin, Joni Dambre, and Wesley De Neve. 2017. Improving Language Modeling using Densely Connected Recurrent Neural Networks. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 186–190, Vancouver, Canada. Association for Computational Linguistics.
- Cite (Informal): Improving Language Modeling using Densely Connected Recurrent Neural Networks (Godin et al., RepL4NLP 2017)
- PDF: https://aclanthology.org/W17-2622.pdf
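The abstract describes the architecture only at a high level. As a rough illustration of the idea, here is a minimal sketch of a densely connected stacked LSTM language model in PyTorch, assuming DenseNet-style connectivity: each layer receives the concatenation of the word embedding and all earlier layers' output sequences, and the softmax decoder sees the concatenation of every layer's output. This is not the authors' code; the class name `DenseLSTMLM`, the dimensions, and the choice of feeding the embedding to the decoder are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DenseLSTMLM(nn.Module):
    """Stacked LSTM language model with dense (concatenative) skip connections.

    Hypothetical sketch: each layer's input is the concatenation of the
    embedding and all previous layers' hidden sequences.
    """

    def __init__(self, vocab_size, emb_dim=200, hidden_dim=200, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.layers = nn.ModuleList()
        in_dim = emb_dim
        for _ in range(num_layers):
            # Layer k's input: embedding concatenated with outputs of layers 1..k-1.
            self.layers.append(nn.LSTM(in_dim, hidden_dim, batch_first=True))
            in_dim += hidden_dim
        # The decoder sees the embedding plus every layer's hidden sequence.
        self.decoder = nn.Linear(in_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer token ids
        features = [self.embed(tokens)]
        for lstm in self.layers:
            out, _ = lstm(torch.cat(features, dim=-1))
            features.append(out)  # dense connection: expose output to all later layers
        return self.decoder(torch.cat(features, dim=-1))  # (batch, seq_len, vocab)

# Example: next-token logits for a batch of 8 sequences of length 35.
model = DenseLSTMLM(vocab_size=10000)
logits = model(torch.randint(0, 10000, (8, 35)))
```

Because later layers reuse earlier features directly through concatenation, each layer can be kept narrow, which is consistent with the abstract's claim of matching a standard stacked 2-layer LSTM's perplexity with roughly six times fewer parameters.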