GibbsNet: Iterative Adversarial Inference for Deep Graphical Models

12/12/2017
by Alex Lamb, et al.

Directed latent variable models that factorize the joint distribution as p(x, z) = p(z) p(x | z) have the advantage of fast and exact sampling. However, they require specifying p(z), often as a simple fixed prior that limits the expressiveness of the model. Undirected latent variable models drop the requirement that p(z) be specified as a prior, yet sampling from them generally requires an iterative procedure, such as blocked Gibbs sampling, that may need many steps to draw samples from the joint distribution p(x, z). We propose GibbsNet, a novel approach to learning the joint distribution of the data and a latent code that uses an adversarially learned iterative procedure to gradually refine p(x, z) so that it better matches the data distribution at each step. GibbsNet offers the best of both worlds, in theory and in practice. It achieves the speed and simplicity of a directed latent variable model: assuming the adversarial game reaches the global minimum of its virtual training criterion, it is guaranteed to produce samples from p(x, z) after only a few sampling iterations. It also achieves the expressiveness and flexibility of an undirected latent variable model: GibbsNet does away with the need for an explicit p(z) and can perform attribute prediction, class-conditional generation, and joint image-attribute modeling in a single model that is not trained for any of these specific tasks. We show empirically that GibbsNet learns a more complex p(z), and that this leads to improved inpainting, iterative refinement of p(x, z) for dozens of steps, and stable generation without collapse for thousands of steps, despite being trained on only a few steps.
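The mechanics the abstract describes (a free-running chain that alternates between decoding p(x | z) and encoding q(z | x), trained adversarially against a single inference step on real data) can be sketched in a few lines. The following is a minimal, illustrative PyTorch sketch, not the authors' implementation: the MLPs are deterministic stand-ins for the paper's stochastic networks, the dimensions are arbitrary, and names such as `unclamped_chain` and `clamped_pair` are ours.

```python
import torch
import torch.nn as nn

X_DIM, Z_DIM, N_STEPS = 784, 64, 3  # illustrative sizes; the paper trains on only a few steps

# Deterministic MLP stand-ins for the paper's stochastic decoder p(x|z)
# and encoder q(z|x).
decoder = nn.Sequential(nn.Linear(Z_DIM, 256), nn.ReLU(), nn.Linear(256, X_DIM))
encoder = nn.Sequential(nn.Linear(X_DIM, 256), nn.ReLU(), nn.Linear(256, Z_DIM))

# Joint discriminator over (x, z) pairs, in the spirit of ALI-style training.
discriminator = nn.Sequential(
    nn.Linear(X_DIM + Z_DIM, 256), nn.ReLU(), nn.Linear(256, 1)
)

def unclamped_chain(batch_size, n_steps=N_STEPS):
    """Free-running chain: start from a simple z, then alternate the two
    blocks of a Gibbs-like sweep. After training, the final (x, z) pair
    serves as a sample from the model's joint distribution."""
    z = torch.randn(batch_size, Z_DIM)
    x = decoder(z)
    for _ in range(n_steps - 1):
        z = encoder(x)  # block 1: infer the latent code given x
        x = decoder(z)  # block 2: generate x given the latent code
    return x, encoder(x)

def clamped_pair(x_real):
    """Clamped chain: a single inference step on real data."""
    return x_real, encoder(x_real)

# Adversarial game: the discriminator separates clamped pairs (real x with
# its inferred z) from unclamped-chain pairs; gradients flow back through
# the whole iterative procedure, pushing the two joints to match.
bce = nn.BCEWithLogitsLoss()
x_real = torch.rand(32, X_DIM)                 # stand-in data batch
x_c, z_c = clamped_pair(x_real)
x_u, z_u = unclamped_chain(32)
d_real = discriminator(torch.cat([x_c, z_c], dim=1))
d_fake = discriminator(torch.cat([x_u, z_u], dim=1))
d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
g_loss = bce(d_fake, torch.ones_like(d_fake))  # encoder/decoder objective
```

Because training pushes the unclamped chain's (x, z) pairs to match the clamped pairs on real data, the same loop can, per the abstract's claims, be run for many more steps at test time than the few used during training.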

Related research

03/08/2016 · Online but Accurate Inference for Latent Variable Models with Local Gibbs Sampling
We study parameter inference in large-scale latent variable models. We f...

03/06/2023 · Guiding Energy-based Models via Contrastive Latent Variables
An energy-based model (EBM) is a popular generative framework that offer...

07/07/2020 · Benefiting Deep Latent Variable Models via Learning the Prior and Removing Latent Regularization
There exist many forms of deep latent variable models, such as the varia...

10/04/2019 · Stacked Wasserstein Autoencoder
Approximating distributions over complicated manifolds, such as natural ...

12/18/2019 · Sampling Good Latent Variables via CPP-VAEs: VAEs with Condition Posterior as Prior
In practice, conditional variational autoencoders (CVAEs) perform condit...

11/30/2017 · An interpretable latent variable model for attribute applicability in the Amazon catalogue
Learning attribute applicability of products in the Amazon catalog (e.g....

01/20/2023 · Opaque prior distributions in Bayesian latent variable models
We review common situations in Bayesian latent variable models where the...
