Better Exploiting Latent Variables in Text Modeling

Canasai Kruengkrai


Abstract
We show that sampling latent variables multiple times at each gradient step helps to improve a variational autoencoder, and we propose a simple and effective method to better exploit these latent variables through hidden state averaging. Consistent performance gains on two different datasets, Penn Treebank and Yahoo, indicate the generalizability of our method.
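The abstract names two ideas: drawing the latent variable several times per gradient step and combining the samples by hidden state averaging. Below is a minimal PyTorch sketch of that idea, not the paper's actual implementation: the LSTM encoder/decoder, the layer sizes, the tanh projection from z to the decoder's initial state, and the exact point at which states are averaged are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class MultiSampleVAE(nn.Module):
    """Sketch of a text VAE that samples z several times per step and
    averages the decoder hidden states derived from those samples."""

    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256,
                 latent_dim=32, num_samples=4):
        super().__init__()
        self.num_samples = num_samples
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.z_to_h = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x):
        emb = self.embed(x)                           # (batch, seq, embed)
        _, (h_enc, _) = self.encoder(emb)             # h_enc: (1, batch, hidden)
        mu = self.to_mu(h_enc[-1])                    # (batch, latent)
        logvar = self.to_logvar(h_enc[-1])

        # Reparameterization trick, drawn num_samples times per step.
        std = (0.5 * logvar).exp()
        eps = torch.randn(self.num_samples, *mu.shape, device=mu.device)
        z = mu.unsqueeze(0) + eps * std.unsqueeze(0)  # (K, batch, latent)

        # Hidden state averaging (assumed form): project each sample to an
        # initial decoder state, then average across the K samples.
        h0 = torch.tanh(self.z_to_h(z)).mean(dim=0).unsqueeze(0)
        c0 = torch.zeros_like(h0)

        # Teacher forcing with unshifted inputs, for brevity.
        dec_out, _ = self.decoder(emb, (h0, c0))
        logits = self.out(dec_out)                    # (batch, seq, vocab)

        # KL term of the ELBO for a diagonal Gaussian posterior.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return logits, kl

# A quick smoke test on random token ids.
model = MultiSampleVAE()
x = torch.randint(0, 10000, (8, 20))  # batch of 8 sequences, 20 tokens each
logits, kl = model(x)
print(logits.shape, kl.item())
```

Intuitively, drawing several samples of z per gradient step gives a lower-variance Monte Carlo estimate of the objective, and averaging the hidden states lets the decoder consume all K samples at the cost of a single decoding pass.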
Anthology ID: P19-1553
Volume: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2019
Address: Florence, Italy
Editors: Anna Korhonen, David Traum, Lluís Màrquez
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 5527–5532
URL: https://aclanthology.org/P19-1553
DOI: 10.18653/v1/P19-1553
Cite (ACL): Canasai Kruengkrai. 2019. Better Exploiting Latent Variables in Text Modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5527–5532, Florence, Italy. Association for Computational Linguistics.
Cite (Informal): Better Exploiting Latent Variables in Text Modeling (Kruengkrai, ACL 2019)
PDF: https://preview.aclanthology.org/nschneid-patch-1/P19-1553.pdf
Data: Penn Treebank