Variational Inference: A Unified Framework of Generative Models and Some Revelations

07/16/2018
by   Jianlin Su, et al.

We reinterpret variational inference from a new perspective. In this way, we can easily show that the EM algorithm, VAE, GAN, AAE, and ALI (BiGAN) are all special cases of variational inference. The proof also reveals that the loss of the standard GAN is incomplete, which explains why GAN training must be handled with care. From this, we derive a regularization term that improves the stability of GAN training.
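To make the variational-inference objective concrete, the following is a minimal sketch of the standard evidence lower bound (ELBO) that a VAE maximizes, with a diagonal-Gaussian posterior q(z|x) and a standard normal prior p(z). This illustrates the classic objective that the paper's framework generalizes; it is not the paper's unified derivation, and the function names are our own.

```python
# Minimal ELBO sketch (assumed standard VAE setup, not the paper's derivation).
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def elbo(x, x_recon, mu, log_var):
    """ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z)).
    The reconstruction term is a unit-variance Gaussian log-likelihood
    (up to an additive constant), i.e. negative squared error."""
    recon_log_lik = -0.5 * np.sum((x - x_recon) ** 2)
    return recon_log_lik - gaussian_kl(mu, log_var)

# Sanity check: a perfect reconstruction with q(z|x) = p(z) gives ELBO = 0.
x = np.array([0.3, -1.2, 0.7])
print(elbo(x, x, np.zeros(2), np.zeros(2)))  # prints 0.0
```

Maximizing this bound over the encoder parameters (mu, log_var) and the decoder (which produces x_recon) is the variational-inference step that the abstract claims EM, VAE, GAN, AAE, and ALI all instantiate in different ways.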

Related research

- Wasserstein Variational Inference (05/29/2018): This paper introduces Wasserstein variational inference, a new form of a...
- Variational Inference using Implicit Distributions (02/27/2017): Generative adversarial networks (GANs) have given us a great tool to fit...
- A Unified f-divergence Framework Generalizing VAE and GAN (05/11/2022): Developing deep generative models that flexibly incorporate diverse meas...
- On the Relationship Between Active Inference and Control as Inference (06/23/2020): Active Inference (AIF) is an emerging framework in the brain sciences wh...
- Stabilizing Training of Generative Adversarial Nets via Langevin Stein Variational Gradient Descent (04/22/2020): Generative adversarial networks (GANs), famous for the capability of lea...
- Variational Inference MPC using Tsallis Divergence (04/01/2021): In this paper, we provide a generalized framework for Variational Infere...
- Distribution Matching in Variational Inference (02/19/2018): The difficulties in matching the latent posterior to the prior, balancin...
