Wasserstein Auto-Encoders

11/05/2017
by Ilya Tolstikhin, et al.

We propose the Wasserstein Auto-Encoder (WAE), a new algorithm for building a generative model of the data distribution. WAE minimizes a penalized form of the Wasserstein distance between the model distribution and the target distribution, which leads to a different regularizer than the one used by the Variational Auto-Encoder (VAE). This regularizer encourages the encoded training distribution to match the prior. We compare our algorithm with several other techniques and show that it is a generalization of adversarial auto-encoders (AAE). Our experiments show that WAE shares many of the properties of VAEs (stable training, encoder-decoder architecture, nice latent manifold structure) while generating samples of better quality, as measured by the FID score.
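For intuition, the WAE objective combines a reconstruction cost c(X, G(Z)) with a penalty λ·D_Z(Q_Z, P_Z) that pushes the aggregate encoded distribution Q_Z toward the prior P_Z. Below is a minimal PyTorch sketch of the MMD variant of that penalty (WAE-MMD); the inverse multiquadratic kernel follows the paper's suggestion, but the architecture, kernel scale c, and penalty weight lam here are illustrative assumptions rather than the paper's exact settings.

```python
# Minimal sketch of a WAE-MMD training step. Architecture and
# hyperparameters are illustrative, not the paper's exact settings.
import torch
import torch.nn as nn

def imq_kernel(x, y, c=1.0):
    """Inverse multiquadratic kernel k(x, y) = c / (c + ||x - y||^2),
    the kernel family suggested in the WAE paper for the MMD penalty.
    The scale c=1.0 is an illustrative choice."""
    d2 = torch.cdist(x, y, p=2).pow(2)  # pairwise squared distances
    return c / (c + d2)

def mmd_penalty(q_z, p_z, c=1.0):
    """Estimate of MMD^2 between encoded codes q_z and prior samples
    p_z, both of shape [n, latent_dim]."""
    n = q_z.size(0)
    k_qq = imq_kernel(q_z, q_z, c)
    k_pp = imq_kernel(p_z, p_z, c)
    k_qp = imq_kernel(q_z, p_z, c)
    # Within-sample averages, excluding diagonal (self-similarity) terms.
    within = (k_qq.sum() - k_qq.diagonal().sum() +
              k_pp.sum() - k_pp.diagonal().sum()) / (n * (n - 1))
    return within - 2.0 * k_qp.mean()

latent_dim, lam = 8, 10.0  # hypothetical latent size and penalty weight
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.rand(64, 784)        # stand-in for a batch of training images
z = encoder(x)                 # deterministic encoder Q(Z|X), allowed in WAE
x_rec = decoder(z)
prior_z = torch.randn_like(z)  # samples from the prior P_Z = N(0, I)

# WAE objective: reconstruction cost plus lambda times a divergence
# pushing the aggregate encoded distribution Q_Z toward the prior P_Z.
loss = (x - x_rec).pow(2).sum(dim=1).mean() + lam * mmd_penalty(z, prior_z)
opt.zero_grad()
loss.backward()
opt.step()
```

Because the penalty is computed between batches of samples rather than per-point posteriors (as in the VAE's KL term), the encoder is free to be deterministic; this is one way the WAE regularizer differs from the VAE's.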

Related research

Tessellated Wasserstein Auto-Encoders (05/20/2020)
Non-adversarial generative models such as variational auto-encoder (VAE)...

Wasserstein-Wasserstein Auto-Encoders (02/25/2019)
To address the challenges in learning deep generative models (e.g., the b...

Gaussian Auto-Encoder (11/12/2018)
Evaluating distance between sample distribution and the wanted one, usua...

Variational Diffusion Auto-encoder: Deep Latent Variable Model with Unconditional Diffusion Prior (04/24/2023)
Variational auto-encoders (VAEs) are one of the most popular approaches ...

Variational auto-encoders with Student's t-prior (04/06/2020)
We propose a new structure for the variational auto-encoders (VAEs) prio...

Vector Quantized Wasserstein Auto-Encoder (02/12/2023)
Learning deep discrete latent representations offers a promise of better s...

Modeling the Biological Pathology Continuum with HSIC-regularized Wasserstein Auto-encoders (01/20/2019)
A crucial challenge in image-based modeling of biomedical data is to ide...
