A Hybrid Convolutional Variational Autoencoder for Text Generation

Stanislau Semeniuta, Aliaksei Severyn, Erhardt Barth


Abstract
In this paper we explore the effect of architectural choices on learning a variational autoencoder (VAE) for text generation. In contrast to the previously introduced VAE model for text, where both the encoder and decoder are RNNs, we propose a novel hybrid architecture that blends fully feed-forward convolutional and deconvolutional components with a recurrent language model. Our architecture exhibits several attractive properties, such as faster run time and convergence and the ability to better handle long sequences; more importantly, it helps avoid the issue of the VAE collapsing to a deterministic model.
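The "collapse" the abstract refers to is the KL term of the VAE objective going to zero: the approximate posterior matches the prior, the latent code carries no information, and the model degenerates into a plain language model. As a minimal illustration (not the paper's implementation), the following NumPy sketch computes the diagonal-Gaussian KL regularizer and the reparameterized sample used when training any such VAE; the function names are illustrative, not from the paper's code:

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian posterior;
    # this is the regularizer whose collapse to 0 the abstract mentions.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def reparameterize(mu, logvar, rng):
    # z = mu + sigma * eps, so gradients flow through mu and logvar.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

rng = np.random.default_rng(0)
mu, logvar = np.array([0.5, -0.3]), np.array([-0.2, 0.1])
z = reparameterize(mu, logvar, rng)          # latent sample fed to the decoder
print(kl_diag_gaussian(mu, logvar))          # positive: the code carries information
print(kl_diag_gaussian(np.zeros(2), np.zeros(2)))  # 0.0: posterior equals prior, i.e. collapse
```

When a powerful autoregressive RNN decoder can model the text on its own, the optimizer drives this KL term toward the collapsed zero state; the paper's feed-forward convolutional decoder component is one way to counteract that tendency.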
Anthology ID:
D17-1066
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
627–637
URL:
https://aclanthology.org/D17-1066
DOI:
10.18653/v1/D17-1066
Cite (ACL):
Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. A Hybrid Convolutional Variational Autoencoder for Text Generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 627–637, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
A Hybrid Convolutional Variational Autoencoder for Text Generation (Semeniuta et al., EMNLP 2017)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/D17-1066.pdf
Attachment:
 D17-1066.Attachment.zip
Video:
 https://preview.aclanthology.org/nschneid-patch-2/D17-1066.mp4
Code:
 stas-semeniuta/textvae (+ additional community code)