Bayes' theorem is a way to update one's belief as new evidence comes into view. The probability of a hypothesis $H$, given some new data $D$, is denoted $p(H|D)$, and is given by

$$p(H|D) = \frac{p(D|H)\,p(H)}{p(D)}$$
where $p(D)$ is the probability of the data $D$, $p(D|H)$ is the probability of the data given the hypothesis $H$, and $p(H)$ is the probability of that hypothesis $H$. While Bayes' theorem by itself can appear non-intuitive or at least difficult to intuit, the key to understanding it is to derive it. It arises directly out of the conditional probability axiom, which itself arises out of the definition of the joint probability. The probability of an event $X$ and an event $Y$ occurring jointly is,

$$p(X, Y) = p(X|Y)\,p(Y)$$
And since the 'AND' is commutative, we have,

$$p(X, Y) = p(Y, X) = p(Y|X)\,p(X)$$

so that

$$p(X|Y)\,p(Y) = p(Y|X)\,p(X)$$
Dividing both sides of the above equation by $p(Y)$ yields Bayes' theorem,

$$p(X|Y) = \frac{p(Y|X)\,p(X)}{p(Y)}$$
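As a quick numerical illustration of the update rule (the prevalence and test-accuracy numbers below are made up for the example):

```python
# Made-up example: a disease-screening test, updated via Bayes' theorem.
p_H = 0.01            # prior p(H): 1% disease prevalence
p_D_given_H = 0.95    # likelihood p(D|H): test sensitivity
p_D_given_notH = 0.05 # false-positive rate on healthy patients

# Evidence p(D): total probability of a positive test
p_D = p_D_given_H * p_H + p_D_given_notH * (1 - p_H)

# Bayes' theorem: p(H|D) = p(D|H) p(H) / p(D)
p_H_given_D = p_D_given_H * p_H / p_D
print(round(p_H_given_D, 3))  # ~0.161: the positive test raises a 1% prior to ~16%
```

Even a fairly accurate test moves the posterior only modestly when the prior is small, which is exactly the belief-updating behavior the theorem encodes.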
When comparing two distributions, as we often do in density estimation, the central task of generative models, we need a measure of the difference between them. The Kullback-Leibler divergence is commonly used for this purpose. It is the expectation of the information difference between the two distributions. But first, what is information?
To understand what information is and to see its definition, consider the following: the higher the probability of an event, the lower its information content. This makes intuitive sense in that if someone tells us something 'obvious', i.e. highly probable, i.e. something we and almost everyone else already knew, then that informant has not increased the amount of information we have. Hence the information content of a highly probable event is low. Another way to say this is that information is inversely related to the probability of an event. And since $\log(x)$ is directly related to $x$, it follows that $-\log p(x)$ is inversely related to $p(x)$, and this is how we model information:

$$I(x) = \log\frac{1}{p(x)} = -\log p(x)$$
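As a small illustration of this inverse relationship (using log base 2, so information is measured in bits):

```python
import math

# Model information as I(x) = -log p(x); base 2 gives information in bits.
def information(p):
    return -math.log2(p)

print(information(0.99))      # a near-certain event carries ~0.014 bits
print(information(1 / 1024))  # a rare event carries 10 bits
```

The rarer the event, the more we learn when told it occurred.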
The difference in information between $q(x)$ and $p(x)$ is therefore:

$$\Delta I = \log\frac{1}{p(x)} - \log\frac{1}{q(x)} = \log\frac{q(x)}{p(x)}$$
And the Kullback-Leibler divergence is the expectation of the above difference, taken with respect to $q$, and is given by,

$$D_{KL}(q\,\|\,p) = \mathbb{E}_{q}\left[\log\frac{q(x)}{p(x)}\right] = \int q(x)\log\frac{q(x)}{p(x)}\,dx$$
Note that the Kullback-Leibler (KL) divergence is not symmetric, i.e.,

$$D_{KL}(q\,\|\,p) \neq D_{KL}(p\,\|\,q)$$
In $D_{KL}(q\,\|\,p)$, we are taking the expectation of the information difference with respect to the $q$ distribution, while in $D_{KL}(p\,\|\,q)$, we are taking the expectation with respect to the $p$ distribution.
Hence the Kullback-Leibler is called a 'divergence' and not a 'metric', as metrics must be symmetric. A number of symmetrization devices have recently been proposed for the KL divergence and have been shown to improve generative fidelity [Pu et al. (2017)] [Chen et al. (2017)] [Arjovsky et al. (2017)].
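A small sketch makes the asymmetry concrete; the two discrete distributions below are arbitrary examples:

```python
import math

def kl_divergence(q, p):
    """D_KL(q || p) for discrete distributions given as probability lists."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

# Two arbitrary discrete distributions over three outcomes.
q = [0.5, 0.3, 0.2]
p = [0.1, 0.2, 0.7]

print(kl_divergence(q, p))  # expectation of the information difference under q
print(kl_divergence(p, q))  # ...and under p: a different number
```

Swapping the arguments changes which distribution weights the expectation, so the two directions generally disagree.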
Note that the KL divergence is always non-negative, i.e.,

$$D_{KL}(q\,\|\,p) \geq 0$$
To see this, note that, as depicted in Figure (1), $\log u \leq u - 1$ for all $u > 0$. Applying this with $u = p(x)/q(x)$ gives,

$$-D_{KL}(q\,\|\,p) = \int q(x)\log\frac{p(x)}{q(x)}\,dx \leq \int q(x)\left(\frac{p(x)}{q(x)} - 1\right)dx = \int p(x)\,dx - \int q(x)\,dx = 0$$

We have just shown,

$$D_{KL}(q\,\|\,p) \geq 0$$
Consider variational autoencoders (VAEs) [Kingma et al. (2013)]. They have many applications, including the finer characterization of disease [Odaibo (2019)]. The encoder portion of a VAE yields an approximate posterior distribution $q(z|x)$, and is parametrized on a neural network by weights collectively denoted $\phi$. Hence we more properly write the encoder as $q_\phi(z|x)$. Similarly, the decoder portion of the VAE yields a likelihood distribution $p(x|z)$, and is parametrized on a neural network by weights collectively denoted $\theta$. Hence we more properly denote the decoder portion of the VAE as $p_\theta(x|z)$. The outputs of the encoder are the parameters of the latent distribution, which is sampled to yield the input into the decoder. A VAE schematic is shown in Figure (2).
The KL divergence between the approximate and the true posterior distributions is given by,

$$D_{KL}\big(q_\phi(z|x)\,\|\,p(z|x)\big) = \int q_\phi(z|x)\log\frac{q_\phi(z|x)}{p(z|x)}\,dz$$
Applying Bayes' theorem to the above equation, $p(z|x) = \frac{p(x|z)\,p(z)}{p(x)}$, yields,

$$D_{KL}\big(q_\phi(z|x)\,\|\,p(z|x)\big) = \int q_\phi(z|x)\log\frac{q_\phi(z|x)\,p(x)}{p(x|z)\,p(z)}\,dz$$
This can be broken down using laws of logarithms, yielding,

$$D_{KL}\big(q_\phi(z|x)\,\|\,p(z|x)\big) = \int q_\phi(z|x)\left[\log\frac{q_\phi(z|x)}{p(z)} + \log p(x) - \log p(x|z)\right]dz$$
Distributing the integrand then yields,

$$D_{KL}\big(q_\phi(z|x)\,\|\,p(z|x)\big) = \int q_\phi(z|x)\log\frac{q_\phi(z|x)}{p(z)}\,dz + \int q_\phi(z|x)\log p(x)\,dz - \int q_\phi(z|x)\log p(x|z)\,dz$$
In the above, we note that $\log p(x)$ is a constant with respect to $z$ and can therefore be pulled out of the second integral, while the first integral is itself a KL divergence, yielding,

$$D_{KL}\big(q_\phi(z|x)\,\|\,p(z|x)\big) = D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big) + \log p(x)\int q_\phi(z|x)\,dz - \int q_\phi(z|x)\log p(x|z)\,dz$$
Since $q_\phi(z|x)$ is a probability distribution, it integrates to 1 in the above equation, yielding,

$$D_{KL}\big(q_\phi(z|x)\,\|\,p(z|x)\big) = D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big) + \log p(x) - \int q_\phi(z|x)\log p(x|z)\,dz$$

And since the left-hand side is a KL divergence and hence non-negative, so is the right-hand side,

$$D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big) + \log p(x) - \int q_\phi(z|x)\log p(x|z)\,dz \geq 0$$
Then carrying the integral over to the other side of the inequality, we get,

$$\log p(x) + D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big) \geq \int q_\phi(z|x)\log p(x|z)\,dz$$
Applying rules of logarithms, we get,

$$\log p(x) \geq \int q_\phi(z|x)\log p(x|z)\,dz - \int q_\phi(z|x)\log\frac{q_\phi(z|x)}{p(z)}\,dz = \int q_\phi(z|x)\log\frac{p(x|z)\,p(z)}{q_\phi(z|x)}\,dz$$
Recognizing the right-hand side of the above inequality as an expectation, we write,

$$\log p(x) \geq \mathbb{E}_{q_\phi(z|x)}\left[\log\frac{p(x|z)\,p(z)}{q_\phi(z|x)}\right]$$
From the above it also follows that:

$$\log p(x) \geq \mathbb{E}_{q_\phi(z|x)}\big[\log p(x|z)\big] - D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big)$$
The right-hand side of the above inequality is the Evidence Lower Bound (ELBO), also known as the variational lower bound. It is so termed because it bounds the likelihood of the data, which is the quantity we seek to maximize. Therefore, maximizing the ELBO maximizes the log probability of our data by proxy. This is the core idea of variational inference, since maximizing the log probability directly is typically computationally intractable. The Kullback-Leibler term in the ELBO acts as a regularizer because it constrains the form of the approximate posterior. The second term is called the reconstruction term because it measures the likelihood of the reconstructed data output at the decoder.
Notably, we have some liberty in choosing the structure of our latent variables. We can obtain a closed form for the loss function if we choose a Gaussian representation for the latent prior $p(z)$ and the approximate posterior $q_\phi(z|x)$. In addition to yielding a closed-form loss function, the Gaussian model enforces a form of regularization in which the approximate posterior must have variation or spread (like a Gaussian).
Closed form VAE Loss: Gaussian Latents
Say we choose:

$$p(z) = \mathcal{N}(0, 1), \qquad q_\phi(z|x) = \mathcal{N}(\mu, \sigma^2)$$
then the KL or regularization term in the ELBO becomes:

$$D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big) = \int q_\phi(z|x)\log\frac{\mathcal{N}(\mu, \sigma^2)}{\mathcal{N}(0, 1)}\,dz$$
Evaluating the term in the logarithm simplifies the above into,

$$= \int q_\phi(z|x)\log\left(\frac{\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{(z-\mu)^2}{2\sigma^2}}}{\frac{1}{\sqrt{2\pi}}\,e^{-\frac{z^2}{2}}}\right)dz$$
This further simplifies into,

$$= \int q_\phi(z|x)\left[\log\frac{1}{\sqrt{\sigma^2}} + \frac{z^2}{2} - \frac{(z-\mu)^2}{2\sigma^2}\right]dz$$
which, since $q_\phi(z|x)$ integrates to 1, further simplifies into,

$$= -\frac{1}{2}\log\sigma^2 + \frac{1}{2}\int q_\phi(z|x)\left[z^2 - \frac{(z-\mu)^2}{\sigma^2}\right]dz$$
Expressing the above as an expectation, we get,

$$= -\frac{1}{2}\log\sigma^2 + \frac{1}{2}\,\mathbb{E}_{q_\phi(z|x)}\left[z^2 - \frac{(z-\mu)^2}{\sigma^2}\right]$$
And since the variance $\sigma^2$ is the expectation of the squared distance from the mean, i.e.,

$$\sigma^2 = \mathbb{E}\big[(z-\mu)^2\big] = \mathbb{E}\big[z^2\big] - \mu^2$$
it follows that $\mathbb{E}[z^2] = \sigma^2 + \mu^2$ and $\mathbb{E}[(z-\mu)^2]/\sigma^2 = 1$, so that,

$$D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big) = \frac{1}{2}\left(\sigma^2 + \mu^2 - \log\sigma^2 - 1\right)$$
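The closed form can be sanity-checked numerically; the sketch below (with arbitrarily chosen $\mu$ and $\sigma$) compares it against a Monte Carlo estimate of $\mathbb{E}_q[\log q(z) - \log p(z)]$:

```python
import math
import random

# Closed form: D_KL(N(mu, sigma^2) || N(0, 1)) = (sigma^2 + mu^2 - log sigma^2 - 1) / 2
mu, sigma = 0.7, 1.3  # arbitrary example values
closed_form = 0.5 * (sigma**2 + mu**2 - math.log(sigma**2) - 1)

def log_normal_pdf(z, m, s):
    """Log-density of N(m, s^2) at z."""
    return -0.5 * math.log(2 * math.pi * s * s) - (z - m) ** 2 / (2 * s * s)

# Monte Carlo estimate of E_q[log q(z) - log p(z)] with z ~ q.
random.seed(0)
n = 200_000
total = 0.0
for _ in range(n):
    z = random.gauss(mu, sigma)
    total += log_normal_pdf(z, mu, sigma) - log_normal_pdf(z, 0.0, 1.0)
estimate = total / n

print(closed_form)  # analytic value, ~0.328
print(estimate)     # Monte Carlo estimate; agrees to about two decimal places
```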
And when we take the $J$-dimensional latents $p(z) = \mathcal{N}(0, I)$ and $q_\phi(z|x) = \mathcal{N}(\mu, \sigma^2 I)$, the terms add across the independent dimensions and we get,

$$D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big) = \frac{1}{2}\sum_{j=1}^{J}\left(\sigma_j^2 + \mu_j^2 - \log\sigma_j^2 - 1\right)$$
Recall the ELBO derived above,

$$\mathrm{ELBO} = \mathbb{E}_{q_\phi(z|x)}\big[\log p_\theta(x|z)\big] - D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big)$$
From which it follows that the contribution from a given datum $x^{(i)}$ and a single stochastic draw $z^{(i,l)}$ towards the objective to be maximized is,

$$\log p_\theta\big(x^{(i)}|z^{(i,l)}\big) + \frac{1}{2}\sum_{j=1}^{J}\left(1 + \log\sigma_j^2 - \mu_j^2 - \sigma_j^2\right)$$
where $\mu_j$ and $\sigma_j$ are parameters of the approximate distribution, $q_\phi(z|x)$, and $j$ is an index into the latent vector $z$. For a batch, the objective function is therefore given by,

$$\mathcal{L}\big(\theta, \phi; x^{(i)}\big) \simeq \frac{1}{L}\sum_{l=1}^{L}\log p_\theta\big(x^{(i)}|z^{(i,l)}\big) + \frac{1}{2}\sum_{j=1}^{J}\left(1 + \log\sigma_j^2 - \mu_j^2 - \sigma_j^2\right)$$
where $J$ is the dimension of the latent vector $z$, and $L$ is the number of samples stochastically drawn according to the reparametrization trick.
Because the objective function we obtained above is to be maximized during training, we can think of it as a 'gain' function as opposed to a loss function. To obtain the loss function, we simply take the negative of $\mathcal{L}$:

$$\mathcal{L}_{\mathrm{VAE}} = -\mathcal{L}$$
Therefore, to train the VAE is to seek the optimal network parameters $(\theta^*, \phi^*)$ that minimize $\mathcal{L}_{\mathrm{VAE}}$:

$$(\theta^*, \phi^*) = \underset{\theta,\,\phi}{\arg\min}\ \mathcal{L}_{\mathrm{VAE}}$$
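Putting the pieces together, the loss can be sketched in code. This is a minimal per-datum sketch, not a full model: a Bernoulli decoder likelihood is assumed for the reconstruction term, and `x_recon`, `mu`, and `log_var` stand in for decoder and encoder outputs:

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """Negative ELBO for one datum, assuming a Bernoulli decoder likelihood
    and a diagonal-Gaussian approximate posterior q(z|x) = N(mu, exp(log_var))."""
    eps = 1e-8  # numerical guard inside the logs

    # Reconstruction term: Bernoulli log-likelihood of x under the decoder output.
    recon = np.sum(x * np.log(x_recon + eps) + (1 - x) * np.log(1 - x_recon + eps))

    # Regularization term: the closed-form KL(q(z|x) || N(0, I)) derived above.
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - log_var - 1)

    # Loss = -ELBO = -(reconstruction - KL)
    return kl - recon
```

When `mu` and `log_var` are all zero, the KL term vanishes, since $q_\phi(z|x)$ already matches the prior; any mismatch in mean or spread increases the loss.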
We have done a step-by-step derivation of the VAE loss function, illustrating the essence of variational inference along the way, and have derived the closed-form loss in the special case of Gaussian latents.
The author thanks Larry Carin for helpful discussion on the consequences of Kullback-Leibler divergence asymmetry, and on KL symmetrization approaches.
- Odaibo (2019) Odaibo SG. retina-VAE: Variationally Decoding the Spectrum of Macular Disease. arXiv:1907.05195. 2019 Jul 11.
- Kingma et al. (2013) Kingma DP, Welling M. Auto-Encoding Variational Bayes. arXiv preprint arXiv:1312.6114. 2013 Dec 20.
- Pu et al. (2017) Pu Y, Wang W, Henao R, Chen L, Gan Z, Li C, Carin L. Adversarial Symmetric Variational Autoencoder. In Advances in Neural Information Processing Systems. 2017 (pp. 4330-4339).
- Chen et al. (2017) Chen L, Dai S, Pu Y, Li C, Su Q, Carin L. Symmetric Variational Autoencoder and Connections to Adversarial Learning. arXiv preprint arXiv:1709.01846. 2017 Sep 6.
- Arjovsky et al. (2017) Arjovsky M, Bottou L. Towards Principled Methods for Training Generative Adversarial Networks. arXiv preprint arXiv:1701.04862. 2017 Jan 17.