Optimal Transport Based Generative Autoencoders

by Oliver Zhang, et al.

The field of deep generative modeling is dominated by generative adversarial networks (GANs). However, GAN training is often unstable, fails to converge, and suffers from mode collapse. Solving these problems requires an assortment of tricks that can be difficult to grasp for practitioners who simply want to apply generative modeling. We instead propose two novel generative autoencoders, AE-OTtrans and AE-OTgen, which rely on optimal transport rather than adversarial training. Unlike VAE and WAE, AE-OTtrans and AE-OTgen preserve the manifold of the data; they do not force the latent distribution to match a normal distribution, which results in higher-quality images. AE-OTtrans and AE-OTgen also produce more diverse images than their predecessor, AE-OT. We show that AE-OTtrans and AE-OTgen surpass GANs on the MNIST and FashionMNIST datasets. Furthermore, we show that AE-OTtrans and AE-OTgen achieve state-of-the-art results on the MNIST, FashionMNIST, and CelebA image sets compared to other non-adversarial generative models.
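The core idea of replacing an adversarial critic with optimal transport can be illustrated on a small scale. The sketch below (an assumption for illustration only, not the paper's exact algorithm; the function name `transport_cost` is hypothetical) computes an exact discrete optimal transport matching between two equal-size batches of latent vectors, using squared Euclidean cost solved by the Hungarian algorithm. In an OT-based autoencoder, a loss of this form can replace the adversarial game: samples are pushed toward the encoded data distribution by minimizing the matched transport cost.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def transport_cost(x, y):
    """Exact discrete optimal transport cost between two equal-size
    point clouds under squared Euclidean ground cost.

    Solves the assignment problem (a Monge matching) with the
    Hungarian algorithm and returns the mean matched cost plus the
    matching itself.
    """
    # Pairwise squared distances: cost[i, j] = ||x_i - y_j||^2
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean(), cols


rng = np.random.default_rng(0)
codes = rng.normal(size=(64, 8))    # stand-in for encoder outputs z = E(x)
samples = rng.normal(size=(64, 8))  # stand-in for generated latent samples

loss, matching = transport_cost(samples, codes)
```

Because the matching is exact rather than learned, there is no discriminator to destabilize training; the trade-off is the cubic cost of the assignment solver, which limits this formulation to per-batch matchings.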




