Bounding Evidence and Estimating Log-Likelihood in VAE

06/19/2022
by   Łukasz Struski, et al.

Many crucial problems in deep learning and statistics are caused by the variational gap, i.e., the difference between the evidence and the evidence lower bound (ELBO). As a consequence, in the classical VAE model we obtain only a lower bound on the log-likelihood, since the ELBO is used as the cost function, and therefore we cannot compare log-likelihoods between models. In this paper, we present a general and effective upper bound on the variational gap, which allows us to efficiently estimate the true evidence. We provide an extensive theoretical study of the proposed approach. Moreover, we show that by applying our estimation, we can easily obtain lower and upper bounds on the log-likelihood of VAE models.
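The bracketing of the evidence described above can be made concrete on a toy model where the evidence is known in closed form. The sketch below is illustrative only, not the paper's construction: the linear-Gaussian model, the mismatched encoder parameters, and the use of a chi-squared-style (Rényi, α=2) upper bound are all assumptions chosen so that both bounds can be checked against the exact log-evidence.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(x, mean, var):
    """Log-density of N(mean, var) evaluated at x."""
    return -0.5 * (np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

def logmeanexp(a):
    """Numerically stable log of the mean of exp(a)."""
    m = a.max()
    return m + np.log(np.mean(np.exp(a - m)))

# Toy linear-Gaussian "VAE" (hypothetical): prior p(z) = N(0, 1),
# decoder p(x|z) = N(z, 1). The evidence is then exact: p(x) = N(x; 0, 2).
x = 1.5
log_evidence = log_normal(x, 0.0, 2.0)

# A deliberately mismatched encoder q(z|x) = N(0.9, 0.4);
# the exact posterior would be N(x/2, 1/2) = N(0.75, 0.5),
# so the variational gap KL(q || p(z|x)) is strictly positive.
mu_q, var_q = 0.9, 0.4
z = rng.normal(mu_q, np.sqrt(var_q), size=200_000)

# Log importance weights: log w = log p(x, z) - log q(z|x).
log_w = (log_normal(z, 0.0, 1.0)        # log p(z)
         + log_normal(x, z, 1.0)        # log p(x|z)
         - log_normal(z, mu_q, var_q))  # log q(z|x)

elbo = log_w.mean()                    # lower bound: E_q[log w] <= log p(x)
cubo = 0.5 * logmeanexp(2.0 * log_w)   # upper bound: (1/2) log E_q[w^2] >= log p(x)

print(f"ELBO {elbo:.4f} <= log p(x) {log_evidence:.4f} <= upper {cubo:.4f}")
```

Both inequalities follow from Jensen's inequality applied to the importance weights w = p(x, z)/q(z|x), whose mean under q equals p(x); the width of the resulting interval reflects how far q is from the true posterior.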

