Tutorial: Deriving the Standard Variational Autoencoder (VAE) Loss Function

by Stephen Odaibo, et al.

In Bayesian machine learning, the posterior distribution is typically computationally intractable, hence variational inference is often required. In this approach, an evidence lower bound on the log likelihood of data is maximized during training. Variational Autoencoders (VAE) are one important example where variational inference is utilized. In this tutorial, we derive the variational lower bound loss function of the standard variational autoencoder. We do so in the instance of a gaussian latent prior and gaussian approximate posterior, under which assumptions the Kullback-Leibler term in the variational lower bound has a closed form solution. We derive essentially everything we use along the way; everything from Bayes' theorem to the Kullback-Leibler divergence.




Bayes Theorem

Bayes theorem is a way to update one's belief as new evidence comes into view. The probability of a hypothesis H, given some new data D, is denoted P(H|D), and is given by

P(H|D) = P(D|H)P(H) / P(D)

where P(D) is the probability of the data D, P(D|H) is the probability of the data given the hypothesis H, and P(H) is the probability of that hypothesis H. While Bayes theorem by itself can appear non-intuitive or at least difficult to intuit, the key to understanding it is to derive it. It arises directly out of the conditional probability axiom, which itself arises out of the definition of the joint probability. The probability of an event X and an event Y occurring jointly is,

P(X ∧ Y) = P(X|Y)P(Y)
And since the 'AND' is commutative, we have,

P(X|Y)P(Y) = P(Y|X)P(X)    (4)
Dividing both sides of Equation (4) by P(Y) yields Bayes theorem,

P(X|Y) = P(Y|X)P(X) / P(Y)
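To make the update concrete, here is a quick numeric sketch of Bayes theorem using entirely made-up numbers for a diagnostic-test scenario (the prevalence, sensitivity, and false-positive rate below are hypothetical, chosen only for illustration):

```python
# Hypothetical numbers: a condition with 1% prevalence, a test with 95%
# sensitivity P(D|H), and a 5% false-positive rate P(D|not H).
p_h = 0.01                 # prior P(H)
p_d_given_h = 0.95         # likelihood P(D|H)
p_d_given_not_h = 0.05     # P(D|not H)

# Evidence P(D) by the law of total probability.
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)

# Bayes theorem: P(H|D) = P(D|H) P(H) / P(D).
p_h_given_d = p_d_given_h * p_h / p_d

print(round(p_h_given_d, 4))  # prints 0.161
```

Even with a sensitive test, the posterior P(H|D) stays modest because the prior P(H) is small, which is exactly the belief-updating behavior the theorem encodes.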

Symbol    Name
z         Latent variable
x         Evidence or Data
p(x)      Evidence probability
p(z)      Prior probability
p(z|x)    Posterior probability
p(x|z)    Likelihood probability

Table 1: Bayesian Statistics Glossary

Kullback-Leibler Divergence

When comparing two distributions, as we often do in density estimation, the central task of generative models, we need a measure of similarity between the two distributions. The Kullback-Leibler divergence is commonly used for this purpose. It is the expectation of the information difference between the two distributions. But first, what is information?

To understand what information is and to see its definition, consider the following: the higher the probability of an event, the lower its information content. This makes intuitive sense in that if someone tells us something 'obvious', i.e. highly probable, i.e. something we and almost everyone else already knew, then that informant has not increased the amount of information we have. Hence the information content of a highly probable event is low. Another way to say this is that information is inversely related to the probability of an event. And since log x is directly related to x, it follows that log(1/p(x)) is inversely related to p(x), and this is how we model information:

I(x) = log( 1/p(x) ) = −log p(x)
The difference of information between p(x) and q(x) is therefore:

ΔI = log( 1/p(x) ) − log( 1/q(x) ) = log( q(x)/p(x) )

And the Kullback-Leibler divergence is the expectation of the above difference, and is given by,

D_KL( q ‖ p ) = E_q[ log( q(x)/p(x) ) ] = ∫ q(x) log( q(x)/p(x) ) dx


Note that the Kullback-Leibler (KL) divergence is not symmetric, i.e.,

D_KL( q ‖ p ) ≠ D_KL( p ‖ q )

In D_KL( q ‖ p ), we are taking the expectation of the information difference with respect to the q distribution, while in D_KL( p ‖ q ), we are taking the expectation with respect to the p distribution.

Hence the Kullback-Leibler is called a 'divergence' and not a 'metric', as metrics must be symmetric. A number of symmetrization devices have recently been proposed for the KL divergence and have been shown to improve generative fidelity [Pu et al. (2017)] [Chen et al. (2017)] [Arjovsky et al. (2017)].
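The asymmetry is easy to see numerically. The sketch below evaluates both directions of the divergence for two small discrete distributions (the probability values are hypothetical, chosen only to make the two directions differ visibly):

```python
import math

# Two discrete distributions over three outcomes (hypothetical numbers).
q = [0.2, 0.5, 0.3]
p = [0.7, 0.2, 0.1]

def kl(a, b):
    # D_KL(a || b): expectation of log(a/b) taken under a
    return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b))

# The two directions of the divergence differ, so KL is not a metric.
print(kl(q, p))  # expectation taken under q
print(kl(p, q))  # expectation taken under p
```

The two printed values do not match, confirming that swapping the distribution under which the expectation is taken changes the divergence.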

Note the KL divergence is always non-negative, i.e.,

D_KL( q ‖ p ) ≥ 0

To see this, note that as depicted in Figure (1), log x ≤ x − 1 for all x > 0. Hence,

−D_KL( q ‖ p ) = ∫ q(x) log( p(x)/q(x) ) dx ≤ ∫ q(x) ( p(x)/q(x) − 1 ) dx = ∫ p(x) dx − ∫ q(x) dx = 1 − 1 = 0

We have just shown,

−D_KL( q ‖ p ) ≤ 0

which implies,

D_KL( q ‖ p ) ≥ 0

Figure 1: The curves of log x and x − 1, illustrating the inequality log x ≤ x − 1.
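The non-negativity can also be checked empirically. The sketch below (a spot check, not a proof) draws many random pairs of discrete distributions and confirms the divergence never dips below zero:

```python
import math
import random

random.seed(0)

def kl(a, b):
    # D_KL(a || b) = sum_i a_i log(a_i / b_i) for discrete distributions
    return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b))

def random_dist(n):
    # A random probability vector of length n
    w = [random.random() + 1e-9 for _ in range(n)]
    total = sum(w)
    return [x / total for x in w]

# The divergence between many random pairs of distributions is never
# negative (up to floating-point error).
divergences = [kl(random_dist(5), random_dist(5)) for _ in range(1000)]
print(min(divergences) >= -1e-12)  # prints True
```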

VAE Objective

Consider variational autoencoders [Kingma et al. (2013)]. They have many applications, including finer characterization of disease [Odaibo (2019)]. The encoder portion of a VAE yields an approximate posterior distribution q(z|x), and is parametrized on a neural network by weights collectively denoted φ. Hence we more properly write the encoder as q_φ(z|x). Similarly, the decoder portion of the VAE yields a likelihood distribution p(x|z), and is parametrized on a neural network by weights collectively denoted θ. Hence we more properly denote the decoder portion of the VAE as p_θ(x|z). The outputs of the encoder are the parameters of the latent distribution, which is sampled to yield the input into the decoder. A VAE schematic is shown in Figure (2).

Figure 2: VAE
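The encoder-sample-decoder flow can be sketched in a few lines. The linear maps below are hypothetical stand-ins for the neural networks with weights φ and θ, and the dimensions are made up; the point is only the data flow: the encoder emits (μ, log σ²), the latent is sampled as z = μ + σ·ε, and z feeds the decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): 4-dimensional data, 2-dimensional latent.
x_dim, z_dim = 4, 2
x = rng.normal(size=x_dim)

# Encoder: stand-in linear maps in place of a neural network with weights phi.
# Its outputs are the parameters (mu, log sigma^2) of the latent distribution.
W_mu = rng.normal(size=(z_dim, x_dim))
W_logvar = rng.normal(size=(z_dim, x_dim))
mu, logvar = W_mu @ x, W_logvar @ x

# Sample the latent as z = mu + sigma * eps (the re-parametrization trick),
# then feed z into the decoder, here another stand-in linear map (weights theta).
eps = rng.normal(size=z_dim)
z = mu + np.exp(0.5 * logvar) * eps
W_dec = rng.normal(size=(x_dim, z_dim))
x_recon = W_dec @ z

print(x_recon.shape)  # prints (4,)
```

Sampling via z = μ + σ·ε rather than drawing z directly is what keeps the path from x to the reconstruction differentiable in μ and σ.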

The KL divergence between the approximate and the real posterior distributions is given by,

D_KL( q_φ(z|x) ‖ p(z|x) ) = ∫ q_φ(z|x) log( q_φ(z|x) / p(z|x) ) dz
Applying Bayes' theorem, p(z|x) = p(x|z)p(z)/p(x), to the above equation yields,

D_KL( q_φ(z|x) ‖ p(z|x) ) = ∫ q_φ(z|x) log( q_φ(z|x) p(x) / ( p(x|z) p(z) ) ) dz
This can be broken down using laws of logarithms, yielding,

D_KL( q_φ(z|x) ‖ p(z|x) ) = ∫ q_φ(z|x) [ log( q_φ(z|x) / ( p(x|z) p(z) ) ) + log p(x) ] dz
Distributing the integrand then yields,

D_KL( q_φ(z|x) ‖ p(z|x) ) = ∫ q_φ(z|x) log( q_φ(z|x) / ( p(x|z) p(z) ) ) dz + ∫ q_φ(z|x) log p(x) dz
In the above, we note that log p(x) is a constant and can therefore be pulled out of the second integral above, yielding,

D_KL( q_φ(z|x) ‖ p(z|x) ) = ∫ q_φ(z|x) log( q_φ(z|x) / ( p(x|z) p(z) ) ) dz + log p(x) ∫ q_φ(z|x) dz
And since q_φ(z|x) is a probability distribution, it integrates to 1 in the above equation, yielding,

D_KL( q_φ(z|x) ‖ p(z|x) ) = ∫ q_φ(z|x) log( q_φ(z|x) / ( p(x|z) p(z) ) ) dz + log p(x)

And because the left-hand side, being a KL divergence, is non-negative, we have the inequality,

0 ≤ ∫ q_φ(z|x) log( q_φ(z|x) / ( p(x|z) p(z) ) ) dz + log p(x)
Then carrying the integral over to the other side of the inequality, which flips its sign and hence inverts the argument of the logarithm, we get,

log p(x) ≥ ∫ q_φ(z|x) log( ( p(x|z) p(z) ) / q_φ(z|x) ) dz
Applying rules of logarithms, we get,

log p(x) ≥ ∫ q_φ(z|x) log p(x|z) dz − ∫ q_φ(z|x) log( q_φ(z|x) / p(z) ) dz
Recognizing the right hand side of the above inequality as an Expectation, with the second term being a KL divergence, we write,

log p(x) ≥ E_{q_φ(z|x)}[ log p(x|z) ] − D_KL( q_φ(z|x) ‖ p(z) )    (23)
From the derivation leading to Equation (23) it also follows that:

log p(x) − D_KL( q_φ(z|x) ‖ p(z|x) ) = E_{q_φ(z|x)}[ log p(x|z) ] − D_KL( q_φ(z|x) ‖ p(z) )
The right hand side of the above equation is the Evidence Lower Bound (ELBO), also known as the variational lower bound. It is so termed because it bounds the log likelihood of the data, which is the term we seek to maximize. Therefore maximizing the ELBO maximizes the log probability of our data by proxy. This is the core idea of variational inference, since maximizing the log probability directly is typically computationally intractable. The Kullback-Leibler term in the ELBO is a regularizer because it is a constraint on the form of the approximate posterior. The second term is called a reconstruction term because it is a measure of the likelihood of the reconstructed data output at the decoder.
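The bound can be verified exactly in a model small enough to enumerate. The sketch below uses a hypothetical two-state latent with made-up probabilities: it computes the exact evidence log p(x) by summing over z, computes the ELBO for an arbitrary q(z|x), and confirms the ELBO never exceeds log p(x):

```python
import math

# A tiny discrete model (hypothetical numbers): latent z in {0, 1}.
p_z = [0.6, 0.4]            # prior p(z)
p_x_given_z = [0.2, 0.9]    # likelihood p(x|z) for one observed x
q_z = [0.3, 0.7]            # an arbitrary approximate posterior q(z|x)

# Exact evidence: p(x) = sum_z p(x|z) p(z)
log_px = math.log(sum(l * pz for l, pz in zip(p_x_given_z, p_z)))

# ELBO = E_q[log p(x|z)] - D_KL(q(z|x) || p(z))
recon = sum(qz * math.log(l) for qz, l in zip(q_z, p_x_given_z))
kl = sum(qz * math.log(qz / pz) for qz, pz in zip(q_z, p_z))
elbo = recon - kl

print(elbo <= log_px)  # prints True: the lower bound holds
```

The gap between log p(x) and the ELBO is exactly D_KL(q(z|x) ‖ p(z|x)), so the bound tightens as q approaches the true posterior.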

Notably, we have some liberty to choose the structure of our latent variables. We can obtain a closed form for the loss function if we choose a gaussian representation for the latent prior p(z) and the approximate posterior q_φ(z|x). In addition to yielding a closed form loss function, the gaussian model enforces a form of regularization in which the approximate posterior has variation or spread (like a gaussian).

Closed form VAE Loss: Gaussian Latents

Say we choose:

p(z) = N(0, 1)

q_φ(z|x) = N(μ, σ²)

then the KL or regularization term in the ELBO becomes:

D_KL( q_φ(z|x) ‖ p(z) ) = ∫ q(z) log( [ (1/(σ√(2π))) exp( −(z−μ)²/(2σ²) ) ] / [ (1/√(2π)) exp( −z²/2 ) ] ) dz

where q(z) denotes the density of N(μ, σ²).


Evaluating the term in the logarithm simplifies the above into,

D_KL( q_φ(z|x) ‖ p(z) ) = ∫ q(z) [ log(1/σ) − (z−μ)²/(2σ²) + z²/2 ] dz
This further simplifies into,

D_KL( q_φ(z|x) ‖ p(z) ) = ∫ q(z) log(1/σ) dz − ∫ q(z) (z−μ)²/(2σ²) dz + ∫ q(z) z²/2 dz
which further simplifies into,

D_KL( q_φ(z|x) ‖ p(z) ) = −log σ − (1/(2σ²)) ∫ q(z) (z−μ)² dz + (1/2) ∫ q(z) z² dz
Expressing the above as an Expectation we get,

D_KL( q_φ(z|x) ‖ p(z) ) = −log σ − (1/(2σ²)) E[ (z−μ)² ] + (1/2) E[ z² ]
And since the variance σ² is the expectation of the squared distance from the mean, i.e.,

σ² = E[ (z−μ)² ]

it follows that,

D_KL( q_φ(z|x) ‖ p(z) ) = −log σ − 1/2 + (1/2) E[ z² ]
Recall that,

Var(z) = E[ z² ] − ( E[z] )²

so that,

E[ z² ] = Var(z) + ( E[z] )²
And when we take E[z] = μ and Var(z) = σ², so that E[z²] = σ² + μ², we get,

D_KL( q_φ(z|x) ‖ p(z) ) = −log σ − 1/2 + (σ² + μ²)/2 = −(1/2)( 1 + log σ² − μ² − σ² )
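The closed form can be checked against the definition of the divergence by direct numerical integration. The sketch below uses hypothetical values of μ and σ and a simple grid quadrature (choices of grid width and step are arbitrary):

```python
import math

# Hypothetical parameters for q = N(mu, sigma^2); the prior is p = N(0, 1).
mu, sigma = 0.5, 0.8

# Closed form: D_KL(N(mu, sigma^2) || N(0, 1)) = -(1/2)(1 + log sigma^2 - mu^2 - sigma^2)
closed = -0.5 * (1.0 + math.log(sigma**2) - mu**2 - sigma**2)

def normal_pdf(z, m, s):
    # Density of N(m, s^2) at z
    return math.exp(-(z - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

# Numerical check: integrate q(z) log(q(z)/p(z)) over a wide grid.
dz = 1e-4
numeric = sum(
    normal_pdf(z, mu, sigma) * math.log(normal_pdf(z, mu, sigma) / normal_pdf(z, 0.0, 1.0)) * dz
    for z in (i * dz - 10.0 for i in range(int(20.0 / dz)))
)

print(abs(closed - numeric) < 1e-4)  # prints True
```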
Recall the ELBO, Equation (28),

log p(x) ≥ E_{q_φ(z|x)}[ log p_θ(x|z) ] − D_KL( q_φ(z|x) ‖ p(z) )
From which it follows that the contribution from a given datum x^(i) and a single stochastic draw z^(i,l) towards the objective to be maximized is,

(1/2) Σ_j ( 1 + log σ_j² − μ_j² − σ_j² ) + log p_θ( x^(i) | z^(i,l) )
where μ_j and σ_j are parameters of the approximate distribution q_φ(z|x), and j is an index into the latent vector z. For a batch, the objective function is therefore given by,

𝓛(θ, φ) = Σ_i [ (1/2) Σ_{j=1}^{J} ( 1 + log σ_j² − μ_j² − σ_j² ) + (1/L) Σ_{l=1}^{L} log p_θ( x^(i) | z^(i,l) ) ]    (42)

where J is the dimension of the latent vector z, L is the number of samples stochastically drawn according to the re-parametrization trick, and the sum over i runs over the data in the batch.

Because the objective function we obtain in Equation (42) is to be maximized during training, we can think of it as a 'gain' function as opposed to a loss function. To obtain the loss function, we simply take the negative of 𝓛(θ, φ):

Loss(θ, φ) = −𝓛(θ, φ)
Therefore to train the VAE is to seek the optimal network parameters (θ*, φ*) that minimize Loss(θ, φ):

(θ*, φ*) = argmin_{θ, φ} Loss(θ, φ)
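As a concrete check, the sketch below implements the per-datum loss as the negative of the gain: the closed-form KL term plus the reconstruction term averaged over L stochastic draws. The function name, argument layout, and the numbers in the example are all hypothetical; in practice the reconstruction log-likelihoods come from the decoder network:

```python
import math

def vae_loss(mu, logvar, recon_logliks):
    """Negative ELBO for one datum: closed-form KL gain plus the
    reconstruction term averaged over L stochastic draws of z.

    mu, logvar    : per-dimension parameters of q(z|x) from the encoder
    recon_logliks : log p_theta(x | z^(l)) for each of the L draws
    """
    # Closed-form KL gain: (1/2) sum_j (1 + log sigma_j^2 - mu_j^2 - sigma_j^2)
    kl_gain = 0.5 * sum(1.0 + lv - m * m - math.exp(lv) for m, lv in zip(mu, logvar))
    recon = sum(recon_logliks) / len(recon_logliks)
    return -(kl_gain + recon)  # loss = negative of the gain

# Example with hypothetical numbers: when q matches the N(0,1) prior
# (mu = 0, log sigma^2 = 0) the KL term vanishes and only the
# reconstruction term contributes.
loss = vae_loss(mu=[0.0, 0.0], logvar=[0.0, 0.0], recon_logliks=[-1.0])
print(loss)  # prints 1.0
```

Gradient descent on this quantity with respect to both θ and φ is what the argmin above denotes.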

We have done a step-by-step derivation of the VAE loss function. We illustrated the essence of variational inference along the way, and derived the closed form loss in the special case of gaussian latents.


The author thanks Larry Carin for helpful discussion on consequences of Kullback-Leibler divergence asymmetry, and on KL symmetrization approach.