Biadversarial Variational Autoencoder

02/09/2019
by Arnaud Fickinger, et al.

In the original version of the Variational Autoencoder, Kingma et al. assume Gaussian distributions for the approximate posterior during inference and for the output distribution of the generative process. These assumptions are convenient for computational reasons: the reparametrization trick allows the parameters of a neural network to be optimized with standard gradient methods, and the KL divergence between two Gaussians can be computed in closed form. However, they result in blurry images, since a Gaussian has difficulty representing multimodal distributions. We show that, using two adversarial networks, we can optimize the parameters without any Gaussian assumptions.
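The two conveniences the abstract names, the reparametrization trick and the closed-form Gaussian KL, are easy to make concrete. Below is a minimal PyTorch sketch of both; the function names and toy sizes are mine for illustration, not from the paper.

```python
import torch

def reparameterize(mu, logvar):
    # z = mu + sigma * eps with eps ~ N(0, I); the sample stays
    # differentiable with respect to the encoder outputs mu and logvar.
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

def gaussian_kl(mu, logvar):
    # Closed-form KL(N(mu, diag(sigma^2)) || N(0, I)), summed over latent dims.
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)

# Toy usage: batch of 4, latent dimension 2.
mu, logvar = torch.zeros(4, 2), torch.zeros(4, 2)
z = reparameterize(mu, logvar)
print(gaussian_kl(mu, logvar))  # zeros: q(z|x) already equals the N(0, I) prior
```

How the paper's two adversarial networks remove these assumptions is not spelled out in this abstract. One known way to drop the Gaussian posterior is the density-ratio trick of adversarial variational Bayes, sketched below purely as an illustration; the discriminator T, its architecture, and all sizes are my assumptions, not necessarily the paper's construction.

```python
import torch
import torch.nn as nn

x_dim, z_dim = 784, 2  # hypothetical sizes, not from the paper

# Discriminator T(x, z), trained with a logistic loss to separate pairs
# (x, z ~ q(z|x)) from pairs (x, z ~ p(z)). At its optimum, T(x, z)
# approaches log q(z|x) - log p(z), so its output can stand in for the
# closed-form Gaussian KL term in the ELBO.
T = nn.Sequential(nn.Linear(x_dim + z_dim, 128), nn.ReLU(), nn.Linear(128, 1))

def kl_term(x, z):
    # Estimated per-example log-density ratio; no Gaussian posterior needed.
    return T(torch.cat([x, z], dim=1)).squeeze(1)
```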

Related research

09/14/2018: Variational Autoencoder with Implicit Optimal Priors
The variational autoencoder (VAE) is a powerful generative model that ca...

07/11/2017: Least Square Variational Bayesian Autoencoder with Regularization
In recent years Variational Autoencoders have become one of the most popul...

08/29/2022: Tackling Multimodal Device Distributions in Inverse Photonic Design using Invertible Neural Networks
Inverse design, the process of matching a device or process parameters t...

02/18/2022: Unsupervised Multiple-Object Tracking with a Dynamical Variational Autoencoder
In this paper, we present an unsupervised probabilistic model and associ...

07/20/2017: Learning to Draw Samples with Amortized Stein Variational Gradient Descent
We propose a simple algorithm to train stochastic neural networks to dra...

07/19/2018: Doubly Stochastic Adversarial Autoencoder
Any autoencoder network can be turned into a generative model by imposin...

06/05/2018: Explaining Away Syntactic Structure in Semantic Document Representations
Most generative document models act on bag-of-words input in an attempt ...
