Abstract
Advances in variational inference enable the parameterisation of probabilistic models by deep neural networks. This combines the statistical transparency of the probabilistic modelling framework with the representational power of deep learning. Yet, due to a problem known as posterior collapse, it is difficult to effectively estimate such models in the context of language modelling. We concentrate on one such model, the variational auto-encoder, which we argue is an important building block in hierarchical probabilistic models of language. This paper contributes a sober view of the problem, a survey of techniques to address it, novel techniques, and extensions to the model. To establish a ranking of techniques, we perform a systematic comparison using Bayesian optimisation and find that many techniques perform reasonably similarly, given enough resources. Still, a favourite can be named based on convenience. We also make several empirical observations and recommendations of best practices that should help researchers interested in this exciting field.
- Anthology ID:
- 2020.acl-main.646
- Volume:
- Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
- Month:
- July
- Year:
- 2020
- Address:
- Online
- Editors:
- Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
- Venue:
- ACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 7220–7236
- URL:
- https://aclanthology.org/2020.acl-main.646
- DOI:
- 10.18653/v1/2020.acl-main.646
- Cite (ACL):
- Tom Pelsmaeker and Wilker Aziz. 2020. Effective Estimation of Deep Generative Language Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7220–7236, Online. Association for Computational Linguistics.
- Cite (Informal):
- Effective Estimation of Deep Generative Language Models (Pelsmaeker & Aziz, ACL 2020)
- PDF:
- https://preview.aclanthology.org/nschneid-patch-2/2020.acl-main.646.pdf
- Code
- tom-pelsmaeker/deep-generative-lm
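As a pointer to what the surveyed techniques look like in practice, here is a minimal, hypothetical sketch (not taken from the authors' repository) of one well-known remedy for posterior collapse: the "free bits" heuristic, which clamps each latent dimension's KL contribution from below so the optimiser gains nothing by collapsing the approximate posterior onto the prior. Function and parameter names are illustrative assumptions.

```python
# Illustrative sketch of the "free bits" heuristic against posterior collapse.
# Not the paper's implementation; names (gaussian_kl, free_bits_kl, lam) are
# hypothetical. Assumes a diagonal-Gaussian posterior and a standard-normal prior.
import numpy as np

def gaussian_kl(mu, logvar):
    """Per-dimension KL( N(mu, diag(exp(logvar))) || N(0, I) )."""
    return 0.5 * (np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def free_bits_kl(mu, logvar, lam=0.5):
    """Clamp each dimension's KL at `lam` nats before summing.

    A fully collapsed posterior (mu = 0, logvar = 0) has zero true KL, but
    under free bits it still pays `lam` per dimension, removing the incentive
    to ignore the latent variable.
    """
    kl = gaussian_kl(mu, logvar)
    return np.maximum(kl, lam).sum()

# A collapsed 2-dimensional posterior contributes lam per dimension:
print(free_bits_kl(np.zeros(2), np.zeros(2), lam=0.5))  # -> 1.0
```

In a full VAE language model this clamped KL would replace the raw KL term in the negative ELBO; the reconstruction term is unchanged.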