Tutorial: Deriving the Standard Variational Autoencoder (VAE) Loss Function

July 21, 2019 · Stephen Odaibo

In Bayesian machine learning, the posterior distribution is typically computationally intractable, hence variational inference is often required. In this approach, an evidence lower bound on the log likelihood of the data is maximized during training. Variational Autoencoders (VAEs) are one important example where variational inference is utilized. In this tutorial, we derive the variational lower bound loss function of the standard variational autoencoder. We do so in the instance of a gaussian latent prior and gaussian approximate posterior, under which assumptions the Kullback-Leibler term in the variational lower bound has a closed-form solution. We derive essentially everything we use along the way, from Bayes' theorem to the Kullback-Leibler divergence.


Bayes' Theorem

Bayes' theorem is a way to update one's belief as new evidence comes into view. The probability of a hypothesis H, given some new data D, is denoted P(H|D) and is given by

P(H|D) = \frac{P(D|H)\, P(H)}{P(D)}    (1)

where P(D) is the probability of the data D, P(D|H) is the probability of the data given the hypothesis H, and P(H) is the probability of that hypothesis H. While Bayes' theorem by itself can appear non-intuitive or at least difficult to intuit, the key to understanding it is to derive it. It arises directly out of the conditional probability axiom, which itself arises out of the definition of the joint probability. The probability of an event X and an event Y occurring jointly is,

P(X, Y) = P(X|Y)\, P(Y)    (2)

And since the ‘AND’ is commutative, we have,

P(X, Y) = P(Y, X)    (3)
P(X|Y)\, P(Y) = P(Y|X)\, P(X)    (4)

Dividing both sides of Equation (4) by P(Y) yields Bayes' theorem,

P(X|Y) = \frac{P(Y|X)\, P(X)}{P(Y)}    (5)
Symbol    Name
z         Latent variable
x         Evidence or Data
p(x)      Evidence probability
p(z)      Prior probability
p(z|x)    Posterior probability
p(x|z)    Likelihood probability

Table 1: Bayesian Statistics Glossary
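
As a quick numeric illustration of Equation (5), the short Python sketch below computes a posterior from a prior and likelihoods. The prevalence and test accuracies used are made-up values for illustration only.

```python
# Illustrative numbers (assumptions, not from the paper): a diagnostic test with
# 99% sensitivity and 95% specificity for a condition with 1% prevalence.
p_h = 0.01                # P(H): prior probability of the hypothesis (has the condition)
p_d_given_h = 0.99        # P(D|H): probability of a positive test given the condition
p_d_given_not_h = 0.05    # P(D|not H): probability of a positive test without it

# P(D) by the law of total probability
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)

# Bayes' theorem, Equation (5): P(H|D) = P(D|H) P(H) / P(D)
p_h_given_d = p_d_given_h * p_h / p_d
print(f"P(H|D) = {p_h_given_d:.3f}")   # approximately 0.167
```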

Kullback-Leibler Divergence

When comparing two distributions, as we often do in density estimation, the central task of generative models, we need a measure of similarity between them. The Kullback-Leibler (KL) divergence is commonly used for this purpose. It is the expectation of the information difference between the two distributions. But first, what is information?

To understand information and arrive at its definition, consider the following: the higher the probability of an event, the lower its information content. This makes intuitive sense in that if someone tells us something ‘obvious’, i.e. highly probable, i.e. something we and almost everyone else already knew, then that informant has not increased the amount of information we have. Hence the information content of a highly probable event is low. Another way to say this is that information is inversely related to the probability of an event. And since \log p(x) is directly related to p(x), it follows that -\log p(x) is inversely related to p(x), and this is how we model information:

I_p(x) = \log \frac{1}{p(x)} = -\log p(x)    (6)
I_q(x) = \log \frac{1}{q(x)} = -\log q(x)    (7)

The difference in information between p(x) and q(x) is therefore:

\Delta I = I_q(x) - I_p(x) = \log p(x) - \log q(x) = \log \frac{p(x)}{q(x)}    (8)

And the Kullback-Leibler divergence is the expectation of the above difference with respect to p(x), and is given by,

D_{KL}(p(x) \| q(x)) = \int p(x) \log \frac{p(x)}{q(x)} \, dx    (9)

Similarly,

D_{KL}(q(x) \| p(x)) = \int q(x) \log \frac{q(x)}{p(x)} \, dx    (10)

Note that the Kullback-Leibler (KL) divergence is not symmetric, i.e.,

D_{KL}(p(x) \| q(x)) \neq D_{KL}(q(x) \| p(x))    (11)

In D_{KL}(p(x) \| q(x)), we take the expectation of the information difference with respect to the p(x) distribution, while in D_{KL}(q(x) \| p(x)) we take the expectation with respect to the q(x) distribution.

Hence the Kullback-Leibler is called a ‘divergence’ and not a ‘metric’, since metrics must be symmetric. A number of symmetrization devices have recently been proposed for the KL divergence and have been shown to improve generative fidelity [Pu et al. (2017)] [Chen et al. (2017)] [Arjovsky et al. (2017)].
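
As a small numeric illustration of Equations (9)-(11), the sketch below computes both directions of the KL divergence for two arbitrary discrete distributions; the probability values are made up for illustration.

```python
import numpy as np

# Two discrete distributions over three outcomes (illustrative values).
p = np.array([0.80, 0.15, 0.05])
q = np.array([0.40, 0.40, 0.20])

def kl(a, b):
    """D_KL(a || b) = sum_x a(x) log(a(x) / b(x)), the discrete analogue of Equation (9)."""
    return float(np.sum(a * np.log(a / b)))

print(kl(p, q))   # ~0.34
print(kl(q, p))   # ~0.39 -- the two directions differ, illustrating Equation (11)
```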

Note that the KL divergence is always non-negative, i.e.,

D_{KL}(p(x) \| q(x)) \geq 0    (12)

To see this, note that, as depicted in Figure (1),

\log x \leq x - 1    (13)

Therefore,

\int p(x) \log \frac{q(x)}{p(x)} \, dx \leq \int p(x) \left( \frac{q(x)}{p(x)} - 1 \right) dx = \int q(x) \, dx - \int p(x) \, dx = 0    (14)

We have just shown, noting that the left-hand side of (14) is -D_{KL}(p(x) \| q(x)),

-D_{KL}(p(x) \| q(x)) \leq 0    (15)

which implies,

D_{KL}(p(x) \| q(x)) \geq 0    (16)
Figure 1: The inequality \log x \leq x - 1.
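
A quick numerical sanity check of Equation (16) on randomly generated discrete distributions (a sketch, not part of the original derivation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw random distributions on a 10-point support and confirm D_KL(p || q) >= 0, Equation (16).
for _ in range(1000):
    p = rng.random(10)
    p /= p.sum()
    q = rng.random(10)
    q /= q.sum()
    assert np.sum(p * np.log(p / q)) >= 0.0
print("D_KL(p || q) was non-negative in every trial")
```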

VAE Objective

Consider variational autoencoders [Kingma et al. (2013)]. They have many applications, including finer characterization of disease [Odaibo (2019)]. The encoder portion of a VAE yields an approximate posterior distribution q(z|x), and is parametrized on a neural network by weights collectively denoted \phi. Hence we more properly write the encoder as q_\phi(z|x). Similarly, the decoder portion of the VAE yields a likelihood distribution p(x|z), and is parametrized on a neural network by weights collectively denoted \theta. Hence we more properly denote the decoder portion of the VAE as p_\theta(x|z). The outputs of the encoder are the parameters of the latent distribution, which is sampled to yield the input into the decoder. A VAE schematic is shown in Figure (2).

Figure 2: VAE
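
The sketch below shows one minimal way such an encoder and decoder might look in PyTorch. The layer sizes, the ReLU/sigmoid choices, and the Bernoulli (sigmoid-output) decoder are illustrative assumptions, not prescribed by the paper.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """q_phi(z|x): maps x to the parameters (mu, log sigma^2) of the latent Gaussian."""
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.hidden = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    """p_theta(x|z): maps a latent sample z to parameters of the data distribution."""
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.hidden = nn.Linear(z_dim, h_dim)
        self.out = nn.Linear(h_dim, x_dim)

    def forward(self, z):
        h = torch.relu(self.hidden(z))
        return torch.sigmoid(self.out(h))   # Bernoulli parameters, e.g. for binarized images

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps with eps ~ N(0, I)."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps
```

A forward pass samples z = reparameterize(mu, logvar) from the encoder outputs and feeds it to the decoder.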

The KL divergence between the approximate and the real posterior distributions is given by,

D_{KL}(q_\phi(z|x) \| p(z|x)) = \int q_\phi(z|x) \log \frac{q_\phi(z|x)}{p(z|x)} \, dz    (17)

Applying Bayes' theorem, p(z|x) = \frac{p(x|z)\, p(z)}{p(x)}, to the above equation yields,

= \int q_\phi(z|x) \log \frac{q_\phi(z|x)\, p(x)}{p(x|z)\, p(z)} \, dz    (18)

This can be broken down using the laws of logarithms, yielding,

= \int q_\phi(z|x) \left( \log \frac{q_\phi(z|x)}{p(x|z)\, p(z)} + \log p(x) \right) dz    (19)

Distributing the integrand then yields,

= \int q_\phi(z|x) \log \frac{q_\phi(z|x)}{p(x|z)\, p(z)} \, dz + \int q_\phi(z|x) \log p(x) \, dz    (20)

In the above, we note that \log p(x) does not depend on z and can therefore be pulled out of the second integral, yielding,

= \int q_\phi(z|x) \log \frac{q_\phi(z|x)}{p(x|z)\, p(z)} \, dz + \log p(x) \int q_\phi(z|x) \, dz    (21)

And since q_\phi(z|x) is a probability distribution, it integrates to 1 in the above equation, yielding,

D_{KL}(q_\phi(z|x) \| p(z|x)) = \int q_\phi(z|x) \log \frac{q_\phi(z|x)}{p(x|z)\, p(z)} \, dz + \log p(x)    (22)

Then, noting from Equation (12) that the Kullback-Leibler divergence on the left is non-negative, and carrying the integral over to the other side of the inequality, we get,

\log p(x) \geq -\int q_\phi(z|x) \log \frac{q_\phi(z|x)}{p(x|z)\, p(z)} \, dz    (23)

Applying the rules of logarithms, we get,

\log p(x) \geq \int q_\phi(z|x) \log \frac{p(x|z)\, p(z)}{q_\phi(z|x)} \, dz    (24)

Recognizing the right-hand side of the above inequality as an expectation, we write,

\log p(x) \geq \mathbb{E}_{q_\phi(z|x)}\left[ \log \frac{p(x|z)\, p(z)}{q_\phi(z|x)} \right]    (25)
\log p(x) \geq \mathbb{E}_{q_\phi(z|x)}\left[ \log p(x|z) \right] + \mathbb{E}_{q_\phi(z|x)}\left[ \log \frac{p(z)}{q_\phi(z|x)} \right]    (26)

From Equation (23) it also follows that:

\log p(x) \geq \int q_\phi(z|x) \log \frac{p(z)}{q_\phi(z|x)} \, dz + \int q_\phi(z|x) \log p(x|z) \, dz    (27)
\log p(x) \geq -D_{KL}(q_\phi(z|x) \| p(z)) + \mathbb{E}_{q_\phi(z|x)}\left[ \log p(x|z) \right]    (28)

The right-hand side of the above inequality is the Evidence Lower Bound (ELBO), also known as the variational lower bound. It is so termed because it is a lower bound on the log likelihood of the data, the quantity we seek to maximize. Therefore, maximizing the ELBO maximizes the log probability of our data by proxy. This is the core idea of variational inference, since maximizing the log probability directly is typically computationally intractable. The Kullback-Leibler term in the ELBO acts as a regularizer because it is a constraint on the form of the approximate posterior. The second term is called the reconstruction term because it measures the likelihood of the reconstructed data output at the decoder.
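
To make the bound concrete, here is a small numeric sketch on a toy conjugate-Gaussian model. The model and its numbers are assumptions chosen only because \log p(x) is exactly computable there; it is not the VAE model itself. It checks that the ELBO of Equation (28) never exceeds \log p(x), and that the bound becomes tight when the approximate posterior equals the true posterior.

```python
import numpy as np

# Toy model: p(z) = N(0, 1), p(x|z) = N(z, 1)  =>  p(x) = N(0, 2) and p(z|x) = N(x/2, 1/2).
x = 1.3

def elbo(m, s2):
    """ELBO of Equation (28) for q(z|x) = N(m, s2) under the toy model above."""
    # Reconstruction term E_q[log p(x|z)] for a unit-variance Gaussian likelihood
    recon = -0.5 * np.log(2 * np.pi) - 0.5 * ((x - m) ** 2 + s2)
    # Regularization term D_KL(N(m, s2) || N(0, 1)) in closed form
    kl = 0.5 * (m ** 2 + s2 - np.log(s2) - 1.0)
    return recon - kl

log_px = -0.5 * np.log(2 * np.pi * 2.0) - x ** 2 / 4.0     # exact log N(x; 0, 2)

print(elbo(0.0, 1.0) <= log_px)              # True: any q gives a lower bound
print(np.isclose(elbo(x / 2, 0.5), log_px))  # True: bound is tight at the true posterior
```

The closed-form gaussian KL term used inside elbo is exactly the expression derived in the next section.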

Notably, we have some liberty to choose the structure of our latent variables. We can obtain a closed form for the loss function if we choose a gaussian representation for the latent prior p(z) and the approximate posterior q_\phi(z|x). In addition to yielding a closed-form loss function, the gaussian model enforces a form of regularization in which the approximate posterior has variation or spread (like a gaussian).

Closed form VAE Loss: Gaussian Latents

Say we choose:

p(z) = \mathcal{N}(0, 1)    (29)

and

q_\phi(z|x) = \mathcal{N}(\mu, \sigma^2),    (30)

then the KL or regularization term in the ELBO becomes:

-D_{KL}(q_\phi(z|x) \| p(z)) = \int q_\phi(z|x) \log \frac{p(z)}{q_\phi(z|x)} \, dz = \int \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(z-\mu)^2}{2\sigma^2}} \log \frac{\frac{1}{\sqrt{2\pi}} e^{-\frac{z^2}{2}}}{\frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(z-\mu)^2}{2\sigma^2}}} \, dz    (31)

Evaluating the term inside the logarithm simplifies the above into,

= \int q_\phi(z|x) \left( \log \sigma - \frac{z^2}{2} + \frac{(z-\mu)^2}{2\sigma^2} \right) dz    (32)

This further simplifies into,

= \int q_\phi(z|x) \log \sigma \, dz - \int q_\phi(z|x) \frac{z^2}{2} \, dz + \int q_\phi(z|x) \frac{(z-\mu)^2}{2\sigma^2} \, dz    (33)

which further simplifies into,

= \log \sigma - \frac{1}{2} \int q_\phi(z|x) \, z^2 \, dz + \frac{1}{2\sigma^2} \int q_\phi(z|x) (z-\mu)^2 \, dz    (34)

Expressing the above as expectations with respect to q_\phi(z|x), we get,

= \log \sigma - \frac{1}{2} \mathbb{E}\left[ z^2 \right] + \frac{1}{2\sigma^2} \mathbb{E}\left[ (z-\mu)^2 \right]    (35)

And since the variance \sigma^2 is the expectation of the squared distance from the mean, i.e.,

\sigma^2 = \mathbb{E}\left[ (z-\mu)^2 \right]    (36)

it follows that,

-D_{KL}(q_\phi(z|x) \| p(z)) = \log \sigma - \frac{1}{2} \mathbb{E}\left[ z^2 \right] + \frac{1}{2}    (37)

Recall that,

\mathbb{E}\left[ z^2 \right] = \sigma^2 + \left( \mathbb{E}\left[ z \right] \right)^2 = \sigma^2 + \mu^2    (38)

therefore,

-D_{KL}(q_\phi(z|x) \| p(z)) = \log \sigma - \frac{\sigma^2 + \mu^2}{2} + \frac{1}{2}    (39)

And when we write \log \sigma as \frac{1}{2} \log \sigma^2 and factor out \frac{1}{2}, we get,

-D_{KL}(q_\phi(z|x) \| p(z)) = \frac{1}{2} \left( 1 + \log \sigma^2 - \sigma^2 - \mu^2 \right)    (40)
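
A quick Monte Carlo sanity check of Equation (40), with illustrative values of \mu and \sigma (a sketch, not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.7, 1.4   # illustrative encoder outputs (assumed values)

# Closed form of Equation (40): -D_KL(q_phi(z|x) || p(z))
closed_form = 0.5 * (1.0 + np.log(sigma ** 2) - sigma ** 2 - mu ** 2)

# Monte Carlo estimate of E_q[log p(z) - log q(z|x)] using samples from q(z|x) = N(mu, sigma^2)
z = rng.normal(mu, sigma, size=1_000_000)
log_p = -0.5 * np.log(2 * np.pi) - 0.5 * z ** 2
log_q = -0.5 * np.log(2 * np.pi * sigma ** 2) - 0.5 * ((z - mu) / sigma) ** 2
monte_carlo = np.mean(log_p - log_q)

print(closed_form, monte_carlo)   # the two values agree to a few decimal places
```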

Recall the ELBO, Equation (28):

\log p(x) \geq -D_{KL}(q_\phi(z|x) \| p(z)) + \mathbb{E}_{q_\phi(z|x)}\left[ \log p_\theta(x|z) \right]    (28)

From this it follows that the contribution from a given datum x and a single stochastic draw z towards the objective to be maximized is,

\mathcal{L}(\theta, \phi; x, z) = \frac{1}{2} \sum_{j=1}^{J} \left( 1 + \log \sigma_j^2 - \mu_j^2 - \sigma_j^2 \right) + \log p_\theta(x|z)    (41)

where \mu_j and \sigma_j are parameters of the approximate posterior q_\phi(z|x), and j is an index into the latent vector z. For a batch of N data points, the objective function is therefore given by,

\mathcal{L}(\theta, \phi; X) = \sum_{i=1}^{N} \left[ \frac{1}{2} \sum_{j=1}^{J} \left( 1 + \log \sigma_{i,j}^2 - \mu_{i,j}^2 - \sigma_{i,j}^2 \right) + \frac{1}{L} \sum_{l=1}^{L} \log p_\theta(x_i | z_{i,l}) \right]    (42)

where J is the dimension of the latent vector z, and L is the number of samples stochastically drawn according to the re-parametrization trick.

Because the objective function we obtain in Equation (42) is to be maximized during training, we can think of it as a ‘gain’ function as opposed to a loss function. To obtain the loss function, we simply take the negative of \mathcal{L}:

\mathrm{Loss}(\theta, \phi; X) = -\mathcal{L}(\theta, \phi; X)    (43)

Therefore, to train the VAE is to seek the optimal network parameters (\theta^*, \phi^*) that minimize this loss:

(\theta^*, \phi^*) = \arg\min_{\theta, \phi} \mathrm{Loss}(\theta, \phi; X)    (44)
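
A minimal PyTorch sketch of this loss for one mini-batch, under the assumptions that a single stochastic draw is used per datum (L = 1) and that the decoder is Bernoulli, so that binary cross-entropy plays the role of -\log p_\theta(x|z); these are common conventions, not requirements of the derivation.

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar):
    """Negative ELBO for one mini-batch, i.e. the loss of Equation (43) with L = 1.

    x          : inputs in [0, 1], shape (N, D)
    x_recon    : decoder outputs in (0, 1), shape (N, D)
    mu, logvar : encoder outputs, shape (N, J), with logvar = log sigma^2
    """
    # Reconstruction term: -sum_i log p_theta(x_i | z_i) under a Bernoulli decoder
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")

    # Regularization term: sum_i D_KL(q_phi(z|x_i) || p(z))
    # = -1/2 * sum_{i,j} (1 + log sigma_{i,j}^2 - mu_{i,j}^2 - sigma_{i,j}^2), cf. Equation (42)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

    return recon + kl
```

Feeding the encoder outputs and the decoder reconstruction into this function and calling .backward() on the result gives the gradients for one step of minimizing Equation (44) by stochastic gradient descent.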

Conclusion

We have presented a step-by-step derivation of the VAE loss function. We illustrated the essence of variational inference along the way, and derived the closed-form loss in the special case of gaussian latents.

Acknowledgement

The author thanks Larry Carin for helpful discussions on the consequences of Kullback-Leibler divergence asymmetry, and on KL symmetrization approaches.

References