Abstract
We reformulate the problem of encoding a multi-scale representation of a sequence in a language model by casting it in a continuous learning framework. We propose a hierarchical multi-scale language model in which short time-scale dependencies are encoded in the hidden state of a lower-level recurrent neural network, while longer time-scale dependencies are encoded in the dynamics of the lower-level network by having a meta-learner update its weights in an online meta-learning fashion. We use elastic weight consolidation as a higher-level mechanism to prevent catastrophic forgetting in our continuous learning framework.
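The abstract packs three mechanisms into one architecture, so a rough sketch may help make the division of labor concrete. The PyTorch code below is not the authors' implementation: the class names, the coordinate-wise meta-learner, the running Fisher estimate, and every hyper-parameter value are illustrative assumptions, and training of the meta-learner itself (backpropagation through the unrolled weight updates) is omitted.

```python
# Minimal sketch (NOT the authors' code) of the idea in the abstract:
# a lower-level GRU language model keeps short time-scale context in its
# hidden state, a meta-learner rewrites the GRU's weights online (longer
# time-scale memory), and an elastic weight consolidation (EWC) penalty
# resists overwriting weights that mattered for earlier text.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowerLevelLM(nn.Module):
    """Lower-level RNN: short time-scale dependencies live in h."""
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.rnn = nn.GRUCell(dim, dim)
        self.out = nn.Linear(dim, vocab)

    def forward(self, token, h):
        h = self.rnn(self.embed(token), h)
        return self.out(h), h

class MetaLearner(nn.Module):
    """Coordinate-wise map from a weight's gradient to its online update,
    in the spirit of learned optimizers; its own training is omitted."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, grad):
        return self.net(grad.reshape(-1, 1)).reshape(grad.shape)

vocab, dim, lam = 50, 64, 0.1          # lam = assumed EWC strength
lm, meta = LowerLevelLM(vocab, dim), MetaLearner()
fisher = {n: torch.zeros_like(p) for n, p in lm.named_parameters()}
anchor = {n: p.detach().clone() for n, p in lm.named_parameters()}

tokens = torch.randint(vocab, (20,))   # stand-in for a WikiText-2 stream
h = torch.zeros(1, dim)
for t in range(len(tokens) - 1):
    logits, h = lm(tokens[t:t + 1], h)
    loss = F.cross_entropy(logits, tokens[t + 1:t + 2])
    grads = torch.autograd.grad(loss, list(lm.parameters()))
    with torch.no_grad():
        for (name, p), g in zip(lm.named_parameters(), grads):
            step = meta(g)             # meta-learner proposes the update
            # EWC pulls weights the Fisher deems important back toward
            # their anchored values (anchors would normally be refreshed
            # periodically; they are kept fixed in this sketch).
            step = step + lam * fisher[name] * (p - anchor[name])
            p -= step
            fisher[name] += g ** 2     # crude running Fisher estimate
    h = h.detach()                     # truncate BPTT between steps
```

The sketch only illustrates the three memory time-scales suggested by the abstract: hidden-state memory (shortest), weight-based memory written by the meta-learner (longer), and the EWC regularizer guarding consolidated weights (longest).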
- Anthology ID:
- P18-2001
- Volume:
- Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
- Month:
- July
- Year:
- 2018
- Address:
- Melbourne, Australia
- Venue:
- ACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 1–7
- URL:
- https://aclanthology.org/P18-2001
- DOI:
- 10.18653/v1/P18-2001
- Cite (ACL):
- Thomas Wolf, Julien Chaumond, and Clement Delangue. 2018. Continuous Learning in a Hierarchical Multiscale Neural Network. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1–7, Melbourne, Australia. Association for Computational Linguistics.
- Cite (Informal):
- Continuous Learning in a Hierarchical Multiscale Neural Network (Wolf et al., ACL 2018)
- PDF:
- https://preview.aclanthology.org/auto-file-uploads/P18-2001.pdf
- Data
- WikiText-2