Fixing Gaussian Mixture VAEs for Interpretable Text Generation

06/16/2019
by Wenxian Shi, et al.

Variational auto-encoders (VAEs) with Gaussian priors are effective for text generation. To improve controllability and interpretability, we propose using a Gaussian mixture distribution as the VAE prior (GMVAE), since it adds a discrete latent variable alongside the continuous one. Unfortunately, training GMVAE with the standard variational approximation often leads to mode collapse. We theoretically analyze the root cause: maximizing the evidence lower bound of GMVAE implicitly aggregates the means of the mixture's Gaussian components. We propose Dispersed-GMVAE (DGMVAE), an improved model for text generation that introduces two extra terms to alleviate mode collapse and induce a better-structured latent space. Experimental results show that DGMVAE outperforms strong baselines on several language modeling and text generation benchmarks.
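To make the setup concrete, the sketch below (a minimal PyTorch illustration, not the authors' released code) shows a GMVAE-style loss: a single-sample Monte Carlo estimate of the KL term against a Gaussian-mixture prior, plus a hypothetical dispersion penalty that rewards spreading the component means apart. The component count K, latent size D, and the dispersion_weight knob are illustrative assumptions; the two regularizers actually used in DGMVAE are defined in the paper.

```python
import math
import torch
import torch.nn.functional as F

K, D = 10, 32                                      # mixture components, latent dim (assumed)
mu_p = torch.nn.Parameter(torch.randn(K, D))       # prior component means
log_var_p = torch.nn.Parameter(torch.zeros(K, D))  # prior component log-variances
logit_pi = torch.nn.Parameter(torch.zeros(K))      # mixture weights, as logits

def log_normal(z, mu, log_var):
    """Log-density of a diagonal Gaussian, summed over the latent dimension."""
    return -0.5 * (log_var + (z - mu) ** 2 / log_var.exp()
                   + math.log(2 * math.pi)).sum(-1)

def gmvae_loss(z, mu_q, log_var_q, recon_log_prob, dispersion_weight=1.0):
    # Single-sample Monte Carlo estimate of KL(q(z|x) || p(z)),
    # where p(z) = sum_k pi_k N(z; mu_k, sigma_k^2 I).
    log_q = log_normal(z, mu_q, log_var_q)                     # (batch,)
    log_pi = F.log_softmax(logit_pi, dim=0)                    # (K,)
    log_p = torch.logsumexp(
        log_pi + log_normal(z.unsqueeze(-2), mu_p, log_var_p), dim=-1)
    kl = log_q - log_p
    # Hypothetical dispersion penalty (an assumption for illustration, not
    # DGMVAE's actual terms): reward large mean pairwise squared distance
    # between prior means to counteract the mean-aggregation effect.
    diffs = mu_p.unsqueeze(0) - mu_p.unsqueeze(1)              # (K, K, D)
    dispersion = (diffs ** 2).sum(-1).sum() / (K * (K - 1))
    # Negative ELBO (reconstruction + KL) minus the dispersion reward.
    return (kl - recon_log_prob).mean() - dispersion_weight * dispersion
```

Note that log p(z) is a logsumexp over all K components, so the KL term couples every component mean to every posterior sample; this coupling is where the mean-aggregation behaviour the abstract analyzes can arise, and it is what a dispersion-style term is meant to counteract.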

