Deep Latent Variable Models of Natural Language
Abstract
The proposed tutorial will cover deep latent variable models both in the case where exact inference over the latent variables is tractable and in the case where it is not. The former case includes neural extensions of unsupervised tagging and parsing models. Our discussion of the latter case, where inference cannot be performed tractably, will restrict itself to continuous latent variables. In particular, we will discuss recent developments both in neural variational inference (e.g., relating to Variational Auto-encoders) and in implicit density modeling (e.g., relating to Generative Adversarial Networks). We will highlight the challenges of applying these families of methods to NLP problems, and discuss recent successes and best practices.
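As a concrete illustration of the neural variational inference setting mentioned in the abstract, the following is a minimal sketch of a variational autoencoder for text with a continuous Gaussian latent variable, written in PyTorch. The bag-of-words decoder, layer sizes, and all names are illustrative assumptions rather than the tutorial's implementation; the sketch only shows the reparameterized ELBO objective (reconstruction term minus KL to a standard normal prior) that such models optimize.

```python
# Minimal, illustrative VAE for text (bag-of-words likelihood, Gaussian latent).
# All sizes and names are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BagOfWordsVAE(nn.Module):
    def __init__(self, vocab_size=1000, hidden_size=256, latent_size=32):
        super().__init__()
        # Inference network q(z | x): maps word counts to Gaussian parameters.
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden_size), nn.ReLU())
        self.to_mu = nn.Linear(hidden_size, latent_size)
        self.to_logvar = nn.Linear(hidden_size, latent_size)
        # Generative network p(x | z): maps z to logits over the vocabulary.
        self.decoder = nn.Linear(latent_size, vocab_size)

    def forward(self, bow):
        # bow: (batch, vocab_size) word-count vectors.
        h = self.encoder(bow)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        logits = self.decoder(z)
        # Reconstruction term: log-likelihood of the observed word counts under p(x | z).
        recon = (bow * F.log_softmax(logits, dim=-1)).sum(-1)
        # Analytic KL(q(z | x) || N(0, I)) for a diagonal Gaussian posterior.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        # Negative ELBO, averaged over the batch; minimizing it maximizes the ELBO.
        return (kl - recon).mean()

# Usage sketch: one gradient step on random word counts.
model = BagOfWordsVAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
optimizer.zero_grad()
loss = model(torch.randint(0, 5, (8, 1000)).float())
loss.backward()
optimizer.step()
```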
- Anthology ID: D18-3004
- Volume: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts
- Month: October-November
- Year: 2018
- Address: Brussels, Belgium
- Editors: Mausam, Lu Wang
- Venue: EMNLP
- SIG: SIGDAT
- Publisher: Association for Computational Linguistics
- URL: https://aclanthology.org/D18-3004
- Cite (ACL): Alexander Rush, Yoon Kim, and Sam Wiseman. 2018. Deep Latent Variable Models of Natural Language. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts, Brussels, Belgium. Association for Computational Linguistics.
- Cite (Informal): Deep Latent Variable Models of Natural Language (Rush et al., EMNLP 2018)