Reinterpreting Importance-Weighted Autoencoders

04/10/2017
by Chris Cremer, et al.

The standard interpretation of importance-weighted autoencoders is that they maximize a tighter lower bound on the marginal likelihood than the standard evidence lower bound. We give an alternate interpretation of this procedure: that it optimizes the standard variational lower bound, but using a more complex distribution. We formally derive this result, present a tighter lower bound, and visualize the implicit importance-weighted distribution.
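
To make the object of study concrete, the K-sample importance-weighted bound of Burda et al. (2015) that the paper reinterprets can be estimated as in the minimal sketch below. This is an illustrative sketch, not the paper's code; the `encoder` and `decoder` callables (returning torch Distributions for q(z|x) and p(x|z)) and the standard normal prior are assumptions made for the example.

import math
import torch

def iwae_bound(x, encoder, decoder, K=5):
    # Monte Carlo estimate of the K-sample IWAE bound L_K (Burda et al., 2015).
    # `encoder` and `decoder` are hypothetical callables assumed for this sketch:
    # encoder(x) -> torch Distribution q(z|x); decoder(z) -> torch Distribution p(x|z).
    qz = encoder(x)                       # variational posterior q(z|x)
    z = qz.rsample((K,))                  # K reparameterized samples per datapoint
    prior = torch.distributions.Normal(torch.zeros_like(z), torch.ones_like(z))
    # Log importance weights: log w_k = log p(x, z_k) - log q(z_k|x)
    log_w = (decoder(z).log_prob(x).sum(-1)
             + prior.log_prob(z).sum(-1)
             - qz.log_prob(z).sum(-1))
    # L_K = E[log (1/K) sum_k w_k]; K = 1 recovers the standard ELBO,
    # and larger K gives the tighter bound referred to in the abstract.
    return (torch.logsumexp(log_w, dim=0) - math.log(K)).mean()

Averaging the weights inside the log is what tightens the bound relative to the ELBO; in the paper's alternate reading, the same quantity is the standard ELBO evaluated under the implicit importance-weighted distribution.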

Related research

05/13/2019 · Hierarchical Importance Weighted Autoencoders
Importance weighted variational inference (Burda et al., 2015) uses mult...

05/18/2018 · DVAE#: Discrete Variational Autoencoders with Relaxed Boltzmann Priors
Boltzmann machines are powerful distributions that have been shown to be...

05/17/2023 · Variational Classification
We present a novel extension of the traditional neural network approach ...

03/27/2023 · A simplified lower bound for implicational logic
We present a streamlined and simplified exponential lower bound on the l...

02/04/2019 · Stabilization Time in Weighted Minority Processes
A minority process in a weighted graph is a dynamically changing colorin...

05/09/2018 · Dispersion Bound for the Wyner-Ahlswede-Körner Network via Reverse Hypercontractivity on Types
Using the functional-entropic duality and the reverse hypercontractivity...

04/26/2020 · Notes on Icebreaker
Icebreaker [1] is new research from MSR that is able to achieve state of...
