Implicit Deep Latent Variable Models for Text Generation

08/30/2019
by Le Fang, et al.

Deep latent variable models (LVMs) such as the variational auto-encoder (VAE) have recently played an important role in text generation. One key factor is their exploitation of smooth latent structures to guide generation. However, the representation power of VAEs is limited for two reasons: (1) a Gaussian assumption is typically imposed on the variational posteriors, and (2) the notorious "posterior collapse" issue arises during training. In this paper, we advocate sample-based representations of variational distributions for natural language, leading to implicit latent features, which offer more flexible representation power than Gaussian-based posteriors. We further develop an LVM that directly matches the aggregated posterior to the prior. It can be viewed as a natural extension of VAEs with a regularizer that maximizes mutual information, mitigating the "posterior collapse" issue. We demonstrate the effectiveness and versatility of our models in various text generation scenarios, including language modeling, unaligned style transfer, and dialog response generation. The source code to reproduce our experimental results is available on GitHub.
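To make the two ingredients above concrete, here is a minimal PyTorch sketch (not the authors' released code) of a sample-based implicit posterior and an auxiliary network used to estimate the otherwise intractable KL term via a sample-based dual form. All names and dimensions (ImplicitEncoder, DualCritic, latent_dim, noise_dim, the MLP stand-in for a text encoder) are illustrative assumptions, not taken from the paper or its repository.

```python
# Minimal sketch of an implicit (sample-based) posterior and a dual-form
# KL estimator, assuming toy dimensions and an MLP stand-in for a text
# encoder. Illustrative only; names are hypothetical.
import torch
import torch.nn as nn

latent_dim, hidden_dim, noise_dim = 32, 128, 32

class ImplicitEncoder(nn.Module):
    """Sample-based posterior q(z|x): inject random noise eps into the
    encoder instead of predicting Gaussian (mu, sigma) parameters."""
    def __init__(self, input_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim + noise_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim))

    def forward(self, x):
        eps = torch.randn(x.size(0), noise_dim, device=x.device)
        return self.net(torch.cat([x, eps], dim=-1))  # one sample z ~ q(z|x)

class DualCritic(nn.Module):
    """Auxiliary network nu(x, z) for estimating the KL between the
    implicit posterior and the prior from samples alone."""
    def __init__(self, input_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim + latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1))

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=-1)).squeeze(-1)

def kl_dual_estimate(critic, x, z_post, z_prior):
    # One Fenchel-dual form of KL(q || p): sup_nu E_q[nu] - E_p[exp(nu)] + 1.
    # The critic is trained to maximize this bound; the encoder is trained
    # against the E_q[nu] term it induces.
    return critic(x, z_post).mean() - torch.exp(critic(x, z_prior)).mean() + 1.0
```

In training, the critic would be updated to tighten kl_dual_estimate while the encoder and decoder minimize reconstruction loss plus the induced KL term; dropping x from the critic input, so that the aggregated posterior q(z) rather than each q(z|x) is matched to the prior, roughly corresponds to the mutual-information-regularized variant the abstract describes.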


Related Research

Fixing Gaussian Mixture VAEs for Interpretable Text Generation (06/16/2019)
Variational auto-encoder (VAE) with Gaussian priors is effective in text...

SentenceMIM: A Latent Variable Language Model (02/18/2020)
We introduce sentenceMIM, a probabilistic auto-encoder for language mode...

Riemannian Normalizing Flow on Variational Wasserstein Autoencoder for Text Modeling (04/04/2019)
Recurrent Variational Autoencoder has been widely used for language mode...

APo-VAE: Text Generation in Hyperbolic Space (04/30/2020)
Natural language often exhibits inherent hierarchical structure ingraine...

Deep Latent-Variable Models for Text Generation (03/03/2022)
Text generation aims to produce human-like natural language output for d...

On the Necessity and Effectiveness of Learning the Prior of Variational Auto-Encoder (05/31/2019)
Using powerful posterior distributions is a popular approach to achievin...

Diverse Text Generation via Variational Encoder-Decoder Models with Gaussian Process Priors (04/04/2022)
Generating high quality texts with high diversity is important for many ...

Code Repositories

Implicit-LVM

This repository contains the PyTorch implementation of the paper “Implicit Deep Latent Variable Models for Text Generation” (EMNLP 2019).

