VOLTA: Improving Generative Diversity by Variational Mutual Information Maximizing Autoencoder

Yueen Ma, DaFeng Chi, Jingjing Li, Kai Song, Yuzheng Zhuang, Irwin King


Abstract
The natural language generation (NLG) domain has witnessed great success thanks to Transformer models. Although they have achieved state-of-the-art generative quality, they often neglect generative diversity. Prior attempts to tackle this issue suffer from either low model capacity or over-complicated architectures. Some recent methods employ the variational autoencoder (VAE) framework to enhance diversity, but their latent variables fully depend on the input context, restricting exploration of the latent space. In this paper, we introduce VOLTA, a framework that elevates generative diversity by bridging the Transformer with the VAE via a more effective cross-attention-based connection, departing from conventional embedding concatenation or summation. Additionally, we propose integrating InfoGAN-style latent codes to enable input-independent variability, further diversifying the generated output. Moreover, our framework accommodates discrete inputs alongside its existing support for continuous inputs. We perform comprehensive experiments with two types of Transformers on six datasets from three different NLG tasks, showing that our approach significantly improves generative diversity while maintaining generative quality.
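To make the architectural idea in the abstract concrete, below is a minimal PyTorch sketch, not the authors' released code, of how a VAE latent variable z and an input-independent, InfoGAN-style latent code c can be exposed to a Transformer decoder through cross-attention rather than through embedding concatenation or summation. All class names, dimensions, and hyperparameters here are illustrative assumptions; see the PDF linked below for the actual VOLTA architecture and its mutual-information objective.

    # Illustrative sketch only; module names and sizes are assumptions,
    # not the VOLTA implementation.
    import torch
    import torch.nn as nn

    class LatentCrossAttentionBridge(nn.Module):
        def __init__(self, d_model=768, d_latent=32, n_codes=10):
            super().__init__()
            self.to_mu = nn.Linear(d_model, d_latent)        # posterior mean
            self.to_logvar = nn.Linear(d_model, d_latent)    # posterior log-variance
            self.code_emb = nn.Embedding(n_codes, d_latent)  # InfoGAN-style discrete code
            self.to_memory = nn.Linear(2 * d_latent, d_model) # project [z; c] to model dim
            self.decoder_layer = nn.TransformerDecoderLayer(
                d_model=d_model, nhead=12, batch_first=True
            )

        def forward(self, pooled, code_ids, tgt_embeds):
            # Reparameterization trick: z = mu + sigma * eps
            mu = self.to_mu(pooled)
            logvar = self.to_logvar(pooled)
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            # The code c is sampled independently of the input context.
            c = self.code_emb(code_ids)
            # One "memory" token per sequence; the decoder reads the latent
            # signal via cross-attention instead of having it summed or
            # concatenated into the token embeddings.
            memory = self.to_memory(torch.cat([z, c], dim=-1)).unsqueeze(1)
            out = self.decoder_layer(tgt_embeds, memory)
            # Standard diagonal-Gaussian KL term for the VAE objective.
            kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
            return out, kl.mean()

    bridge = LatentCrossAttentionBridge()
    pooled = torch.randn(2, 768)           # e.g. an encoder's pooled [CLS] state
    code_ids = torch.randint(0, 10, (2,))  # drawn independently of the input
    tgt = torch.randn(2, 16, 768)          # decoder token embeddings
    out, kl = bridge(pooled, code_ids, tgt)
    print(out.shape, kl.item())

Under this reading, resampling the code c at inference time varies the output independently of the input context, which is the "input-independent variability" the abstract describes; the paper additionally maximizes mutual information between c and the generated text, a term omitted from this sketch.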
Anthology ID:
2024.findings-naacl.26
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
364–378
URL:
https://aclanthology.org/2024.findings-naacl.26
Cite (ACL):
Yueen Ma, DaFeng Chi, Jingjing Li, Kai Song, Yuzheng Zhuang, and Irwin King. 2024. VOLTA: Improving Generative Diversity by Variational Mutual Information Maximizing Autoencoder. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 364–378, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
VOLTA: Improving Generative Diversity by Variational Mutual Information Maximizing Autoencoder (Ma et al., Findings 2024)
PDF:
https://preview.aclanthology.org/naacl24-info/2024.findings-naacl.26.pdf
Copyright:
2024.findings-naacl.26.copyright.pdf