The Evidence Lower Bound of Variational Autoencoders Converges to a Sum of Three Entropies

10/28/2020
by Jörg Lücke et al.

The central objective function of a variational autoencoder (VAE) is its variational lower bound. Here we show that for standard VAEs the variational bound is, at convergence, equal to the sum of three entropies: the (negative) entropy of the latent distribution, the expected (negative) entropy of the observable distribution, and the average entropy of the variational distributions. The derived analytical results are exact and apply to small as well as complex neural networks for decoder and encoder. Furthermore, they hold for finitely as well as infinitely many data points and at any stationary point (including local and global maxima). As a consequence, we show that the variance parameters of encoder and decoder play the key role in determining the value of the variational bound at convergence. Moreover, the obtained results can yield closed-form analytical expressions at convergence, which may be unexpected given that neither the variational bound nor the log-likelihood of a VAE is available in closed form during learning. As our main contribution, we provide the proofs that standard VAEs converge to sums of entropies. In addition, we numerically verify our analytical results and discuss some potential applications. The obtained equality to entropy sums provides novel information about the points in parameter space to which variational learning converges. As such, we believe these results can contribute significantly to our understanding of established as well as novel VAE approaches.
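To make the statement concrete, the following is a sketch of the claimed identity, reconstructed from the abstract's description rather than quoted from the paper; the notation (data points x^(1), ..., x^(N), encoder q_Φ(z|x), decoder p_Θ(x|z), prior p_Θ(z), and bound F(Φ,Θ)) is introduced here for illustration only. At a stationary point of learning, the three-entropy statement takes roughly the form

\[
\mathcal{F}(\Phi,\Theta)
\;=\; \frac{1}{N}\sum_{n=1}^{N}\mathcal{H}\big[q_{\Phi}(z\mid x^{(n)})\big]
\;-\; \mathcal{H}\big[p_{\Theta}(z)\big]
\;-\; \frac{1}{N}\sum_{n=1}^{N}\mathbb{E}_{q_{\Phi}(z\mid x^{(n)})}\Big[\mathcal{H}\big[p_{\Theta}(x\mid z)\big]\Big],
\]

where \(\mathcal{H}[\cdot]\) denotes (differential) entropy. For a standard Gaussian VAE with a standard normal prior over \(H\) latent dimensions and a decoder with scalar variance \(\sigma^{2}\) over \(D\) observables, the prior entropy is \(\tfrac{H}{2}\log(2\pi e)\) and the decoder entropy is \(\tfrac{D}{2}\log(2\pi e\,\sigma^{2})\), which illustrates why the variance parameters of encoder and decoder largely determine the value of the bound at convergence; the exact assumptions and proof are given in the paper.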

Related research

09/07/2022 - On the Convergence of the ELBO to Entropy Sums
12/06/2022 - Three Variations on Variational Autoencoders
07/19/2023 - Symmetric Equilibrium Learning of VAEs
12/21/2019 - Closed Form Variances for Variational Auto-Encoders
11/02/2018 - Closed Form Variational Objectives For Bayesian Neural Networks with a Single Hidden Layer
12/24/2020 - Soft-IntroVAE: Analyzing and Improving the Introspective Variational Autoencoder
05/13/2019 - Hierarchical Importance Weighted Autoencoders
