Implicit Deep Latent Variable Models for Text Generation

08/30/2019
by   Le Fang, et al.

Deep latent variable models (LVMs) such as the variational auto-encoder (VAE) have recently played an important role in text generation. One key factor is the exploitation of smooth latent structures to guide the generation. However, the representation power of VAEs is limited for two reasons: (1) a Gaussian assumption is often made on the variational posteriors, and (2) a notorious "posterior collapse" issue occurs. In this paper, we advocate sample-based representations of variational distributions for natural language, leading to implicit latent features, which offer more flexible representation power than Gaussian-based posteriors. We further develop an LVM that directly matches the aggregated posterior to the prior. It can be viewed as a natural extension of VAEs with a regularizer that maximizes mutual information, mitigating the "posterior collapse" issue. We demonstrate the effectiveness and versatility of our models in various text generation scenarios, including language modeling, unaligned style transfer, and dialog response generation. The source code to reproduce our experimental results is available on GitHub.
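The two ideas in the abstract, an implicit (sample-based) posterior and a divergence that matches the aggregated posterior to the prior, can be illustrated with a small sketch. The snippet below is not the paper's implementation: it uses a fixed random projection in place of a learned inference network, and a biased RBF-kernel MMD estimate as one concrete sample-based divergence between the aggregated posterior samples and prior samples. All function names (`rbf_kernel`, `mmd`, `implicit_posterior_sample`) are illustrative.

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    # Pairwise RBF kernel values between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd(x, y, bandwidth=1.0):
    # Biased MMD^2 estimate between two sample sets; always >= 0.
    # Serves as a sample-based divergence: no density of q(z) is needed.
    return (rbf_kernel(x, x, bandwidth).mean()
            + rbf_kernel(y, y, bandwidth).mean()
            - 2.0 * rbf_kernel(x, y, bandwidth).mean())

def implicit_posterior_sample(hidden, rng, noise_dim=8, latent_dim=2):
    # Hypothetical implicit "encoder": inject Gaussian noise alongside the
    # encoder hidden state and map through a transformation (here a fixed
    # random projection; in the paper's setting this would be a learned
    # neural network). The output distribution has no closed form, only
    # samples -- that is what makes the posterior "implicit".
    eps = rng.normal(size=(hidden.shape[0], noise_dim))
    w = rng.normal(size=(hidden.shape[1] + noise_dim, latent_dim)) / 4.0
    return np.concatenate([hidden, eps], axis=1) @ w

rng = np.random.default_rng(0)
hidden = rng.normal(size=(256, 16))           # stand-in for encoder states
z_q = implicit_posterior_sample(hidden, rng)  # aggregated posterior samples
z_p = rng.normal(size=(256, 2))               # samples from the N(0, I) prior
penalty = mmd(z_q, z_p)                       # regularizer to minimize in training
```

Because the divergence is computed between *batches* of samples, it regularizes the aggregated posterior rather than each per-example posterior, which is what allows this family of models to avoid driving every posterior to the prior (the mechanism behind posterior collapse).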


