Characterizing and Avoiding Problematic Global Optima of Variational Autoencoders

03/17/2020
by Yaniv Yacoby, et al.

Variational Auto-encoders (VAEs) are deep generative latent variable models consisting of two components: a generative model that captures a data distribution p(x) by transforming a distribution p(z) over latent space, and an inference model that infers likely latent codes for each data point (Kingma and Welling, 2013). Recent work shows that traditional training methods tend to yield solutions that violate modeling desiderata: (1) the learned generative model captures the observed data distribution but does so while ignoring the latent codes, resulting in codes that do not represent the data (e.g. van den Oord et al. (2017); Kim et al. (2018)); (2) the aggregate of the learned latent codes does not match the prior p(z). This mismatch means that the learned generative model will be unable to generate realistic data from samples of p(z) (e.g. Makhzani et al. (2015); Tomczak and Welling (2017)). In this paper, we demonstrate that both issues stem from the fact that the global optima of the VAE training objective often correspond to undesirable solutions. Our analysis builds on two observations: (1) the generative model is unidentifiable: there exist many generative models that explain the data equally well, each with different (and potentially unwanted) properties; and (2) the VAE objective is biased: it may prefer generative models that explain the data poorly but have posteriors that are easy to approximate. We present a novel inference method, LiBI, which mitigates the problems identified in our analysis. On synthetic datasets, we show that LiBI can learn generative models that capture the data distribution and inference models that better satisfy modeling assumptions when traditional methods struggle to do so.
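For context on the training objective discussed above: VAEs maximize the evidence lower bound (ELBO), which decomposes into an expected reconstruction log-likelihood under the approximate posterior q(z|x) minus the KL divergence between q(z|x) and the prior p(z). Below is a minimal NumPy sketch of a single-sample ELBO estimate with a Gaussian encoder and a Bernoulli decoder; the function names and the toy decoder are illustrative assumptions, not taken from the paper or its LiBI method:

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), one value per data point."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def elbo_estimate(x, mu, log_var, decode, rng):
    """Single-sample Monte Carlo estimate of the ELBO.

    x:       binary data, shape (n, d)
    mu, log_var: encoder outputs defining q(z|x), shape (n, k)
    decode:  maps latent codes (n, k) to Bernoulli means (n, d)
    """
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps
    p = decode(z)
    # Bernoulli reconstruction log-likelihood log p(x|z)
    log_px_given_z = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p), axis=-1)
    return log_px_given_z - gaussian_kl(mu, log_var)
```

The KL term is what pushes the aggregate posterior toward p(z); the mismatch described in the abstract arises when maximizing this bound still leaves the aggregated codes far from the prior, or drives the KL to zero so the codes carry no information about x.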


