Stacked Wasserstein Autoencoder

10/04/2019
by Wenju Xu, et al.

Approximating distributions over complicated manifolds, such as natural images, is conceptually attractive. The deep latent variable model, trained with variational autoencoders and generative adversarial networks, is now a key technique for representation learning. However, it is difficult to unify these two models for exact latent-variable inference and to parallelize both reconstruction and sampling, partly due to the regularization imposed on the latent variables to match a simple explicit prior distribution. Such approaches are prone to oversimplification and can characterize only a few modes of the true distribution. Building on the recently proposed Wasserstein autoencoder (WAE), which casts regularization as an optimal-transport problem, this paper proposes a stacked Wasserstein autoencoder (SWAE) to learn a deep latent variable model. SWAE is a hierarchical model that relaxes the optimal-transport constraints in two stages. In the first stage, SWAE flexibly learns a representation distribution, i.e., the encoded prior; in the second stage, the encoded representation distribution is approximated with a latent variable model under a regularization that encourages the latent distribution to match the explicit prior. This model allows us to generate natural image outputs as well as perform manipulations in the latent space that induce changes in the output space. Both quantitative and qualitative results demonstrate the superior performance of SWAE over state-of-the-art approaches in terms of faithful reconstruction and generation quality.
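The two-stage construction described in the abstract can be made concrete with a small sketch. The following is a minimal, illustrative PyTorch implementation assembled from the abstract alone, not the authors' code: the layer sizes, the squared-error transport cost, the MMD estimator as the latent divergence, the detaching of stage-1 codes (i.e., stage-wise rather than joint training), and all identifiers (StackedWAE, mmd_penalty, lam, and so on) are assumptions made for illustration.

```python
# Minimal sketch of a two-stage (stacked) Wasserstein autoencoder.
# Hypothetical: architecture, costs, and training scheme are assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn


def mmd_penalty(z_q, z_p, sigma=1.0):
    """RBF-kernel maximum mean discrepancy between encoded codes z_q and
    prior samples z_p (one common choice of the WAE latent divergence)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    n = z_q.size(0)
    # Drop the diagonal (self-similarity) terms of the within-sample kernels.
    k_qq = (k(z_q, z_q).sum() - n) / (n * (n - 1))
    k_pp = (k(z_p, z_p).sum() - n) / (n * (n - 1))
    return k_qq + k_pp - 2 * k(z_q, z_p).mean()


class StackedWAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=128, z_dim=8, lam=10.0):
        super().__init__()
        # Stage 1: a flexible autoencoder trained for reconstruction only;
        # the distribution of its codes h is the "encoded prior".
        self.enc1 = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, h_dim))
        self.dec1 = nn.Sequential(nn.Linear(h_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))
        # Stage 2: a latent variable model fit to the stage-1 codes, with its
        # latent distribution regularized toward the explicit prior N(0, I).
        self.enc2 = nn.Sequential(nn.Linear(h_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
        self.dec2 = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, h_dim))
        self.lam = lam

    def loss(self, x):
        h = self.enc1(x)
        rec_x = (self.dec1(h) - x).pow(2).mean()        # stage-1 transport cost
        h_fixed = h.detach()                            # assumed stage-wise training
        z = self.enc2(h_fixed)
        rec_h = (self.dec2(z) - h_fixed).pow(2).mean()  # stage-2 transport cost
        prior = torch.randn_like(z)                     # explicit prior samples
        return rec_x + rec_h + self.lam * mmd_penalty(z, prior)

    @torch.no_grad()
    def sample(self, n, device="cpu"):
        # Generation composes the explicit prior with both decoders:
        # z ~ N(0, I) -> stage-2 decoder -> stage-1 decoder -> output space.
        z = torch.randn(n, self.enc2[-1].out_features, device=device)
        return self.dec1(self.dec2(z))
```

Note that in this reading only the second stage carries the prior-matching penalty; the first stage is free to shape the encoded prior, which is the relaxation the abstract describes.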

Related research

10/07/2020 · Learning Deep-Latent Hierarchies by Stacking Wasserstein Autoencoders
Probabilistic models with hierarchical-latent-variable structures provid...

07/07/2020 · Benefiting Deep Latent Variable Models via Learning the Prior and Removing Latent Regularization
There exist many forms of deep latent variable models, such as the varia...

02/12/2019 · Density Estimation and Incremental Learning of Latent Vector for Generative Autoencoders
In this paper, we treat the image generation task using the autoencoder,...

06/12/2018 · Gaussian mixture models with Wasserstein distance
Generative models with both discrete and continuous latent variables are...

06/24/2021 · Symmetric Wasserstein Autoencoders
Leveraging the framework of Optimal Transport, we introduce a new family...

12/12/2017 · GibbsNet: Iterative Adversarial Inference for Deep Graphical Models
Directed latent variable models that formulate the joint distribution as...

02/07/2020 · Learning Autoencoders with Relational Regularization
A new algorithmic framework is proposed for learning autoencoders of dat...
