Riemannian Normalizing Flow on Variational Wasserstein Autoencoder for Text Modeling

04/04/2019
by   Prince Zizhuang Wang, et al.

Recurrent Variational Autoencoders have been widely used for language modeling and text generation. These models often face a difficult optimization problem known as Kullback-Leibler (KL) term vanishing, in which the posterior collapses to the prior and the model ignores the latent code during generation. To address this problem, we introduce an improved Wasserstein Variational Autoencoder (WAE) with Riemannian Normalizing Flow (RNF) for text modeling. The RNF transforms the latent variable into a space that respects the geometric characteristics of the input space, which makes it impossible for the posterior to collapse to the non-informative prior. The Wasserstein objective minimizes the distance between the marginal distribution and the prior directly, and therefore does not force the posterior to match the prior. Empirical experiments show that our model avoids KL vanishing across a range of datasets and performs better on tasks such as language modeling, likelihood approximation, and text generation. Through a series of experiments and analyses of the latent space, we show that our model learns latent distributions that respect the geometry of the latent space and generates more diverse sentences.
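
To make the mechanics concrete, below is a minimal PyTorch sketch of this style of objective: an RNN encoder/decoder, a latent code pushed through a normalizing flow, and an MMD penalty between the flowed latent samples and the prior in place of a per-example KL term. This is an illustrative sketch rather than the paper's implementation: the planar flow stands in for the Riemannian Normalizing Flow, the RBF-kernel MMD is one common choice of Wasserstein-style penalty, and all module names and hyper-parameters (FlowWAE, emb_dim, mmd_weight, and so on) are assumptions made for the example.

```python
# Illustrative sketch only: planar flow in place of the Riemannian flow,
# RBF-kernel MMD in place of whatever divergence the paper uses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlanarFlow(nn.Module):
    """One planar-flow step: z' = z + u * tanh(w^T z + b)."""
    def __init__(self, dim):
        super().__init__()
        self.u = nn.Parameter(torch.randn(dim) * 0.01)
        self.w = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):
        return z + self.u * torch.tanh(z @ self.w + self.b).unsqueeze(-1)

def rbf_mmd(x, y, sigma=1.0):
    """RBF-kernel MMD^2 between two sample sets (illustrative choice)."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

class FlowWAE(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, z_dim=32, n_flows=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.to_mu = nn.Linear(hid_dim, z_dim)
        self.to_logvar = nn.Linear(hid_dim, z_dim)
        self.flows = nn.ModuleList([PlanarFlow(z_dim) for _ in range(n_flows)])
        self.decoder = nn.GRU(emb_dim + z_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens):
        x = self.emb(tokens)                           # (B, T, E)
        _, h = self.encoder(x)                         # h: (1, B, H)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        for flow in self.flows:                        # push z through the flow
            z = flow(z)
        z_rep = z.unsqueeze(1).expand(-1, x.size(1), -1)
        dec_out, _ = self.decoder(torch.cat([x, z_rep], dim=-1))
        return self.out(dec_out), z

def loss_fn(logits, targets, z, mmd_weight=10.0):
    # Reconstruction term plus an MMD penalty that pulls the aggregated
    # latent samples toward the prior, instead of a per-example KL term.
    rec = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    prior = torch.randn_like(z)
    return rec + mmd_weight * rbf_mmd(z, prior)
```

Because this kind of penalty matches the aggregated latent distribution to the prior rather than forcing each per-example posterior onto it, the individual posteriors remain free to stay informative, which is the behavior the abstract describes as avoiding KL vanishing.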
