Reinterpreting Importance-Weighted Autoencoders

Chris Cremer, et al. ∙ University of Toronto ∙ April 10, 2017

The standard interpretation of importance-weighted autoencoders is that they maximize a tighter lower bound on the marginal likelihood than the standard evidence lower bound. We give an alternate interpretation of this procedure: that it optimizes the standard variational lower bound, but using a more complex distribution. We formally derive this result, present a tighter lower bound, and visualize the implicit importance-weighted distribution.


1 Background

The importance-weighted autoencoder (IWAE; Burda et al. (2016)) is a variational inference strategy capable of producing arbitrarily tight evidence lower bounds. IWAE maximizes the following multi-sample evidence lower bound (ELBO):

\mathcal{L}_{IWAE}[q] = \mathbb{E}_{z_{1:k} \sim q(z|x)}\left[\log \frac{1}{k}\sum_{i=1}^{k} \frac{p(x, z_i)}{q(z_i|x)}\right]    (IWAE ELBO)

which is a tighter lower bound than the ELBO maximized by the variational autoencoder (VAE; Kingma & Welling (2014)):

\mathcal{L}_{VAE}[q] = \mathbb{E}_{z \sim q(z|x)}\left[\log \frac{p(x, z)}{q(z|x)}\right]    (VAE ELBO)
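As a numeric sanity check of these two bounds, here is a minimal Monte Carlo sketch. The model, the observation x0, and the deliberately poor proposal are our own illustrative choices, not from the paper; the point is only that a conjugate Gaussian model has a closed-form log p(x), so we can verify L_VAE ≤ L_IWAE ≤ log p(x) directly.

```python
import numpy as np

# Illustrative toy model (our own assumption): p(z) = N(0, 1), p(x|z) = N(z, 1),
# so p(x) = N(x; 0, 2) and p(z|x) = N(x/2, 1/2) in closed form.
rng = np.random.default_rng(0)
x0 = 1.0

def log_p_xz(z):
    # log p(x0, z) = log N(z; 0, 1) + log N(x0; z, 1)
    return -np.log(2 * np.pi) - 0.5 * z**2 - 0.5 * (x0 - z)**2

mu_q, s_q = 1.5, 1.0  # deliberately poor proposal q(z|x) = N(1.5, 1)

def log_q(z):
    return -0.5 * np.log(2 * np.pi * s_q**2) - 0.5 * ((z - mu_q) / s_q)**2

log_px = -0.5 * np.log(2 * np.pi * 2.0) - x0**2 / 4.0  # exact log p(x0)

n, k = 100_000, 10
z = mu_q + s_q * rng.standard_normal((n, k))
log_w = log_p_xz(z) - log_q(z)  # log importance weights

l_vae = log_w.mean()  # single-sample ELBO, averaged over all n*k draws
m = log_w.max(axis=1, keepdims=True)  # stable log-mean-exp per batch of k
l_iwae = (m[:, 0] + np.log(np.exp(log_w - m).mean(axis=1))).mean()

print(l_vae, l_iwae, log_px)  # expect l_vae <= l_iwae <= log_px
```

With a poor proposal the gap between the single-sample and multi-sample bounds is large, which is exactly the regime where the importance weighting matters.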

2 Defining the implicit distribution

In this section, we derive the implicit distribution that arises from importance sampling from p(x, z) using q(z|x) as a proposal distribution. Given a batch of samples z_{2:k} from q(z|x), the following is the unnormalized importance-weighted distribution:

\tilde{q}_{IW}(z|x, z_{2:k}) = \frac{p(x, z)}{\frac{1}{k}\left(\frac{p(x, z)}{q(z|x)} + \sum_{i=2}^{k} \frac{p(x, z_i)}{q(z_i|x)}\right)}    (1)

Here are some properties of the approximate IWAE posterior \tilde{q}_{IW}:

  • When k = 1, \tilde{q}_{IW} equals q(z|x).

  • When k > 1, the form of \tilde{q}_{IW} depends on the true posterior p(z|x).

  • As k → ∞, \tilde{q}_{IW} approaches the true posterior p(z|x) pointwise.

See the appendix for details. Importantly, \tilde{q}_{IW} depends on the batch of samples z_{2:k}. See Fig. 3 in the appendix for a visualization of \tilde{q}_{IW} with different batches z_{2:k}.
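The properties above can be checked numerically by evaluating the density in equation (1) directly. The sketch below uses a toy setup of our own (p(z) = N(0,1), p(x|z) = N(z,1), x0 = 1, proposal q = N(1.5, 1), so the true posterior is N(0.5, 0.5); none of these choices come from the paper): with k = 1 the density collapses to q(z|x) exactly, and for large k it approaches the true posterior.

```python
import numpy as np

rng = np.random.default_rng(1)
x0, mu_q, s_q = 1.0, 1.5, 1.0  # illustrative toy model and proposal (our assumption)

def log_p_xz(z):
    # joint log density for p(z) = N(0, 1), p(x|z) = N(z, 1), evaluated at x0
    return -np.log(2 * np.pi) - 0.5 * z**2 - 0.5 * (x0 - z)**2

def log_q(z):
    return -0.5 * np.log(2 * np.pi * s_q**2) - 0.5 * ((z - mu_q) / s_q)**2

def q_tilde_iw(z, z_rest):
    """Unnormalized importance-weighted density of Eq. (1); z_rest plays z_{2:k}."""
    w = lambda u: np.exp(log_p_xz(u) - log_q(u))  # importance weight p(x,z)/q(z|x)
    k = 1 + z_rest.size
    w_hat = (w(z) + w(z_rest).sum()) / k          # (1/k) * sum of the k weights
    return np.exp(log_p_xz(z)) / w_hat

# Property 1: with k = 1 (empty z_{2:k}), q_tilde_IW collapses to q(z|x) exactly
print(q_tilde_iw(0.5, np.array([])), np.exp(log_q(0.5)))

# Property 3: for large k, q_tilde_IW approaches the true posterior N(0.5, 0.5)
z_rest = mu_q + s_q * rng.standard_normal(500_000)
post_at_half = 1.0 / np.sqrt(2 * np.pi * 0.5)  # posterior density at its mean 0.5
print(q_tilde_iw(0.5, z_rest), post_at_half)
```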

2.1 Recovering the IWAE bound from the VAE bound

Here we show that, in expectation, the IWAE ELBO is equivalent to the VAE ELBO with a more flexible, unnormalized distribution implicitly defined by importance reweighting. If we replace q(z|x) with \tilde{q}_{IW}(z|x, z_{2:k}) in the VAE ELBO and take an expectation over z_{2:k} ∼ q(z|x), then we recover the IWAE ELBO:

\mathbb{E}_{z_{2:k} \sim q(z|x)}\big[\mathcal{L}_{VAE}[\tilde{q}_{IW}]\big] = \mathcal{L}_{IWAE}[q]

For a more detailed derivation, see the appendix. Note that we are abusing the VAE lower bound notation, since it implies an expectation over an unnormalized distribution; in the derivation we therefore replace the expectation with the equivalent integral.
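This equivalence is easy to check by Monte Carlo. Writing w_i = p(x, z_i)/q(z_i|x) and ŵ = (1/k)∑ w_i, the left side reduces to E[(w_1/ŵ) log ŵ] and the right side to E[log ŵ] (see section 5.1). The sketch below estimates both from the same weight draws, on the same illustrative Gaussian toy model used above (our own construction, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(2)
x0, mu_q, s_q = 1.0, 1.5, 1.0  # toy model p(z)=N(0,1), p(x|z)=N(z,1) (illustrative)

def log_w(z):
    # log importance weight: log p(x0, z) - log q(z|x)
    log_p = -np.log(2 * np.pi) - 0.5 * z**2 - 0.5 * (x0 - z)**2
    log_q = -0.5 * np.log(2 * np.pi * s_q**2) - 0.5 * ((z - mu_q) / s_q)**2
    return log_p - log_q

n, k = 400_000, 5
z = mu_q + s_q * rng.standard_normal((n, k))
w = np.exp(log_w(z))
w_hat = w.mean(axis=1)  # (1/k) sum_i w_i, one value per batch

# Left side: E_{z_{2:k}}[ L_VAE[q_tilde_IW] ] = E_{z_{1:k}}[ (w_1 / w_hat) log w_hat ]
lhs = (w[:, 0] / w_hat * np.log(w_hat)).mean()
# Right side: L_IWAE[q] = E_{z_{1:k}}[ log w_hat ]
rhs = np.log(w_hat).mean()
print(lhs, rhs)  # agree up to Monte Carlo error
```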

2.2 Expected importance weighted distribution

We can achieve a tighter lower bound than \mathcal{L}_{IWAE}[q] by taking the expectation of \tilde{q}_{IW} over z_{2:k}. The expected importance-weighted distribution q_{IW} is given by:

q_{IW}(z|x) = \mathbb{E}_{z_{2:k} \sim q(z|x)}\big[\tilde{q}_{IW}(z|x, z_{2:k})\big]    (2)

See section 5.2 for a proof that q_{IW} is a normalized distribution. Using q_{IW} in the VAE ELBO, \mathcal{L}_{VAE}[q_{IW}], results in an upper bound of \mathcal{L}_{IWAE}[q]. See section 5.3 for the proof, which is a special case of the proof in Naesseth et al. (2017). The procedure for sampling from q_{IW} is shown in Algorithm 1; it is equivalent to sampling-importance-resampling (SIR).
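A sampling-importance-resampling draw from q_IW can be sketched as follows, again on our illustrative Gaussian toy model (p(z) = N(0,1), p(x|z) = N(z,1), x0 = 1; the true posterior is N(0.5, 0.5) while the proposal is centered at 1.5, so resampling should pull the samples toward 0.5). The model and all parameter values are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
x0, mu_q, s_q = 1.0, 1.5, 1.0  # toy model; deliberately mismatched proposal

def log_w(z):
    # log importance weight: log p(x0, z) - log q(z|x)
    log_p = -np.log(2 * np.pi) - 0.5 * z**2 - 0.5 * (x0 - z)**2
    log_q = -0.5 * np.log(2 * np.pi * s_q**2) - 0.5 * ((z - mu_q) / s_q)**2
    return log_p - log_q

def sample_q_iw(n, k=50):
    """SIR: draw k proposals per sample, resample one by its normalized weight."""
    z = mu_q + s_q * rng.standard_normal((n, k))
    lw = log_w(z)
    w = np.exp(lw - lw.max(axis=1, keepdims=True))  # stable unnormalized weights
    probs = w / w.sum(axis=1, keepdims=True)        # normalized weights w_tilde
    # Inverse-CDF draw of j ~ Categorical(w_tilde), vectorized over the n batches
    j = (probs.cumsum(axis=1) > rng.random((n, 1))).argmax(axis=1)
    return z[np.arange(n), j]

samples = sample_q_iw(50_000)
print(samples.mean(), samples.std())  # near the posterior's 0.5 and sqrt(0.5)
```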

2.3 Visualizing the nonparametric approximate posterior

The IWAE approximating distribution is nonparametric in the sense that, as the true posterior grows more complex, so do the shapes of \tilde{q}_{IW} and q_{IW}. This makes plotting these distributions challenging. A kernel-density-estimation approach could be used, but it requires many samples. Thankfully, equations (1) and (2) give us a simple and fast way to approximately plot \tilde{q}_{IW} and q_{IW} without introducing artifacts due to kernel density smoothing.

Figure 1 visualizes q_{IW} on a 2D distribution approximation problem using Algorithm 2. The base distribution q is a Gaussian. As we increase the number of samples k while keeping the base distribution fixed, the approximation approaches the true distribution. See section 5.6 for 1D visualizations of \tilde{q}_{IW} and q_{IW}.

Algorithm 1 Sampling from q_IW
1: for i in 1…k do
2:     z_i ∼ q(z|x)
3:     w_i = p(x, z_i) / q(z_i|x)
4: Each w̃_i = w_i / ∑_{j=1}^{k} w_j
5: j ∼ Categorical(w̃_1, …, w̃_k)
6: Return z_j

Algorithm 2 Plotting q_IW
1: Select grid points z^(1), …, z^(M)
2: Initialize q̂_m = 0 for m in 1…M
3: for s in 1…S do
4:     Sample z_{2:k} ∼ q(z|x)
5:     for m in 1…M do
6:         q̂_m = q̂_m + (1/S) · \tilde{q}_{IW}(z^(m) | x, z_{2:k})    (Eq. 1)
7: Return q̂_1, …, q̂_M
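As a concrete numeric instance of Algorithm 2, the sketch below evaluates q_IW on a 1D grid by averaging equation (1) over S sampled batches z_{2:k}, then checks that the result integrates to roughly 1 and lies closer to the true posterior than the base q. The Gaussian toy model (p(z) = N(0,1), p(x|z) = N(z,1), x0 = 1, q = N(1.5, 1)) and all constants are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
x0, mu_q, s_q = 1.0, 1.5, 1.0  # illustrative toy model (not from the paper)

def log_p_xz(z):
    return -np.log(2 * np.pi) - 0.5 * z**2 - 0.5 * (x0 - z)**2

def log_q(z):
    return -0.5 * np.log(2 * np.pi * s_q**2) - 0.5 * ((z - mu_q) / s_q)**2

# Algorithm 2: average q_tilde_IW(z | x, z_{2:k}) over S batches, per grid point
zs = np.linspace(-4.0, 5.0, 901)
S, k = 2000, 10
w_grid = np.exp(log_p_xz(zs) - log_q(zs))  # importance weight at each grid point
q_iw_hat = np.zeros_like(zs)
for _ in range(S):
    z_rest = mu_q + s_q * rng.standard_normal(k - 1)          # z_{2:k} ~ q
    w_rest = np.exp(log_p_xz(z_rest) - log_q(z_rest)).sum()
    w_hat = (w_grid + w_rest) / k                             # denominator of Eq. (1)
    q_iw_hat += np.exp(log_p_xz(zs)) / w_hat
q_iw_hat /= S

dz = zs[1] - zs[0]
print((q_iw_hat * dz).sum())  # close to 1: q_IW is normalized (section 5.2)
```

No kernel smoothing is involved: each grid point is an unbiased Monte Carlo estimate of equation (2), which is what makes the plots artifact-free.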

3 Resampling for prediction

During training, we sample from q(z|x) and the IWAE ELBO implicitly weights the samples. After training, we must explicitly reweight samples from q(z|x).


Figure 2: Reconstructions of MNIST samples from q(z|x) and q_IW(z|x). The model was trained by maximizing the IWAE ELBO with k = 50 and 2 latent dimensions. The reconstructions from q(z|x) are greatly improved by the sampling-resampling step of q_IW.

In Figure 2, we demonstrate the need to sample from q_IW rather than q(z|x) when reconstructing MNIST digits. We trained the model to maximize the IWAE ELBO with k = 50 and 2 latent dimensions, similar to Appendix C in Burda et al. (2016). When we sample from q(z|x) and reconstruct the samples, we see a number of anomalies. However, if we perform the sampling-resampling step (Alg. 1), then the reconstructions are much more accurate. The intuition is that the model was trained with q_IW at k = 50 but sampled from q(z|x) (equivalent to q_IW with k = 1), which are very different distributions, as seen in Fig. 1.

4 Discussion

Bachman & Precup (2015) also showed that the IWAE objective is equivalent to stochastic variational inference with a proposal distribution corrected towards the true posterior via normalized importance sampling. We build on this idea by further examining q_IW and by providing visualizations to help better grasp the interpretation. To summarize our observations, the following is the ordering of lower bounds given the specific proposal distributions:

\log p(x) \geq \mathcal{L}_{VAE}[q_{IW}] \geq \mathcal{L}_{IWAE}[q] \geq \mathcal{L}_{VAE}[q]

In light of this, IWAE can be seen as increasing the complexity of the approximate distribution q, similar to other methods that increase the complexity of q, such as Normalizing Flows (Jimenez Rezende & Mohamed, 2015), Variational Boosting (Miller et al., 2016), or Hamiltonian variational inference (Salimans et al., 2015). With this interpretation in mind, we can possibly generalize q_IW to be applicable to other divergence measures. An interesting avenue of future work is the comparison of IW-based variational families with alpha-divergences or operator variational objectives.

Acknowledgments

We’d like to thank an anonymous ICLR reviewer for providing insightful future directions for this work. We’d like to thank Yuri Burda, Christian Naesseth, and Scott Linderman for bringing our attention to oversights in the paper. We’d also like to thank Christian Naesseth for the derivation in section 5.3 and for providing many helpful comments.

References

  • Bachman & Precup (2015) Philip Bachman and Doina Precup. Training Deep Generative Models: Variations on a Theme. NIPS Approximate Inference Workshop, 2015.
  • Burda et al. (2016) Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. In ICLR, 2016.
  • Jimenez Rezende & Mohamed (2015) Danilo Jimenez Rezende and Shakir Mohamed. Variational Inference with Normalizing Flows. In ICML, 2015.
  • Kingma & Welling (2014) Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In ICLR, 2014.
  • Miller et al. (2016) Andrew C. Miller, Nicholas Foti, and Ryan P. Adams. Variational Boosting: Iteratively Refining Posterior Approximations. Advances in Approximate Bayesian Inference, NIPS Workshop, 2016.
  • Naesseth et al. (2017) C. A. Naesseth, S. W. Linderman, R. Ranganath, and D. M. Blei. Variational Sequential Monte Carlo. arXiv preprint, 2017.
  • Salimans et al. (2015) Tim Salimans, Diederik P. Kingma, and Max Welling. Markov chain monte carlo and variational inference: Bridging the gap. In ICML, 2015.

5 Appendix

5.1 Detailed derivation of the equivalence of VAE and IWAE bound

Here we show that the expectation over z_{2:k} of the VAE lower bound with the unnormalized importance-weighted distribution \tilde{q}_{IW}, namely \mathbb{E}_{z_{2:k} \sim q}[\mathcal{L}_{VAE}[\tilde{q}_{IW}]], is equivalent to the IWAE bound with the original distribution, \mathcal{L}_{IWAE}[q]. Let w_i = p(x, z_i)/q(z_i|x) and \hat{w} = \frac{1}{k}\sum_{i=1}^{k} w_i, with z_1 ≡ z, so that \tilde{q}_{IW}(z|x, z_{2:k}) = p(x, z)/\hat{w} = q(z|x)\, w_1/\hat{w}. Then:

\mathbb{E}_{z_{2:k} \sim q}\big[\mathcal{L}_{VAE}[\tilde{q}_{IW}]\big]
= \mathbb{E}_{z_{2:k} \sim q}\left[\int \tilde{q}_{IW}(z|x, z_{2:k}) \log \frac{p(x, z)}{\tilde{q}_{IW}(z|x, z_{2:k})} \, dz\right]
= \mathbb{E}_{z_{2:k} \sim q}\left[\int q(z|x)\, \frac{w_1}{\hat{w}} \log \hat{w} \, dz\right]
= \mathbb{E}_{z_{1:k} \sim q}\left[\frac{w_1}{\hat{w}} \log \hat{w}\right]
= \mathbb{E}_{z_{1:k} \sim q}\left[\frac{1}{k}\sum_{j=1}^{k} \frac{w_j}{\hat{w}} \log \hat{w}\right]
= \mathbb{E}_{z_{1:k} \sim q}\left[\log \hat{w}\right]
= \mathbb{E}_{z_{1:k} \sim q}\left[\log \frac{1}{k}\sum_{i=1}^{k} \frac{p(x, z_i)}{q(z_i|x)}\right]
= \mathcal{L}_{IWAE}[q]

The third equality is a change of notation, z → z_1. The fourth holds because each term (w_j/\hat{w}) \log \hat{w} has the same expectation, so (w_1/\hat{w}) \log \hat{w} can be replaced with the average of the k terms; the fifth follows since \frac{1}{k}\sum_j w_j = \hat{w}.

5.2 Proof that q_IW is a normalized distribution

Let w_i = p(x, z_i)/q(z_i|x) and \hat{w} = \frac{1}{k}\sum_{i=1}^{k} w_i, with z_1 ≡ z, so that \tilde{q}_{IW}(z|x, z_{2:k}) = q(z|x)\, w_1/\hat{w}. Then:

\int q_{IW}(z|x) \, dz
= \int \mathbb{E}_{z_{2:k} \sim q}\big[\tilde{q}_{IW}(z|x, z_{2:k})\big] \, dz
= \mathbb{E}_{z_{2:k} \sim q}\left[\int q(z|x)\, \frac{w_1}{\hat{w}} \, dz\right]
= \mathbb{E}_{z_{1:k} \sim q}\left[\frac{w_1}{\hat{w}}\right]
= \mathbb{E}_{z_{1:k} \sim q}\left[\frac{1}{k}\sum_{j=1}^{k} \frac{w_j}{\hat{w}}\right]
= \mathbb{E}_{z_{1:k} \sim q}\left[\frac{\hat{w}}{\hat{w}}\right]
= 1

The third equality is a change of notation, z → z_1. The fourth holds because each w_j/\hat{w} has the same expectation and by linearity of expectation; the fifth follows since \frac{1}{k}\sum_j w_j = \hat{w}.

5.3 Proof that \mathcal{L}_{VAE}[q_{IW}] is an upper bound of \mathcal{L}_{IWAE}[q]

Proof provided by Christian Naesseth.

For each z, the map t \mapsto t \log \frac{p(x,z)}{t} is concave for t > 0, so the ELBO functional q \mapsto \int q(z) \log \frac{p(x,z)}{q(z)} \, dz is concave in q. Since q_{IW}(z|x) = \mathbb{E}_{z_{2:k} \sim q}[\tilde{q}_{IW}(z|x, z_{2:k})], Jensen's inequality gives:

\mathcal{L}_{VAE}[q_{IW}]
= \int \mathbb{E}_{z_{2:k} \sim q}\big[\tilde{q}_{IW}(z|x, z_{2:k})\big] \log \frac{p(x,z)}{\mathbb{E}_{z_{2:k} \sim q}\big[\tilde{q}_{IW}(z|x, z_{2:k})\big]} \, dz
\geq \mathbb{E}_{z_{2:k} \sim q}\left[\int \tilde{q}_{IW}(z|x, z_{2:k}) \log \frac{p(x,z)}{\tilde{q}_{IW}(z|x, z_{2:k})} \, dz\right]
= \mathbb{E}_{z_{2:k} \sim q}\big[\mathcal{L}_{VAE}[\tilde{q}_{IW}]\big]
= \mathcal{L}_{IWAE}[q]

where the final equality is the result of section 5.1.

5.4 Proof that q_IW is closer to the true posterior than q(z|x)

The previous section showed that \mathcal{L}_{VAE}[q_{IW}] \geq \mathcal{L}_{IWAE}[q]; that is, the IWAE ELBO with the base q is a lower bound of the VAE ELBO with the importance-weighted q_IW. Due to Jensen's inequality, and as shown in Burda et al. (2016), the IWAE ELBO is an upper bound of the VAE ELBO: \mathcal{L}_{IWAE}[q] \geq \mathcal{L}_{VAE}[q]. Furthermore, the log marginal likelihood factorizes as \log p(x) = \mathcal{L}_{VAE}[q] + KL(q(z|x) \,\|\, p(z|x)), which rearranges to KL(q(z|x) \,\|\, p(z|x)) = \log p(x) - \mathcal{L}_{VAE}[q].

Following the observations above and substituting q_IW for q:

KL(q_{IW} \,\|\, p(z|x)) = \log p(x) - \mathcal{L}_{VAE}[q_{IW}]
\leq \log p(x) - \mathcal{L}_{IWAE}[q]
\leq \log p(x) - \mathcal{L}_{VAE}[q]
= KL(q(z|x) \,\|\, p(z|x))

Thus KL(q_{IW} \,\|\, p(z|x)) \leq KL(q(z|x) \,\|\, p(z|x)), meaning q_IW is closer to the true posterior than q(z|x) in terms of KL divergence.
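This KL ordering can be checked numerically on a tractable toy model (our own illustration: p(z) = N(0,1), p(x|z) = N(z,1), x0 = 1, q = N(1.5, 1), so p(z|x0) = N(0.5, 0.5) and KL(q || p(z|x)) has a closed form). The sketch estimates q_IW on a grid via equation (2) and compares both divergences by numerical integration.

```python
import numpy as np

rng = np.random.default_rng(5)
x0, mu_q, s_q = 1.0, 1.5, 1.0  # illustrative toy model (not from the paper)

def log_p_xz(z):
    return -np.log(2 * np.pi) - 0.5 * z**2 - 0.5 * (x0 - z)**2

def log_q(z):
    return -0.5 * np.log(2 * np.pi * s_q**2) - 0.5 * ((z - mu_q) / s_q)**2

zs = np.linspace(-4.0, 5.0, 901)
dz = zs[1] - zs[0]
post = np.exp(-0.5 * (zs - 0.5)**2 / 0.5) / np.sqrt(2 * np.pi * 0.5)  # p(z|x0)
q_dens = np.exp(log_q(zs))

# Estimate q_IW on the grid by averaging Eq. (1) over batches z_{2:k} (Eq. 2)
S, k = 2000, 10
w_grid = np.exp(log_p_xz(zs) - log_q(zs))
q_iw = np.zeros_like(zs)
for _ in range(S):
    z_rest = mu_q + s_q * rng.standard_normal(k - 1)
    w_rest = np.exp(log_p_xz(z_rest) - log_q(z_rest)).sum()
    q_iw += np.exp(log_p_xz(zs)) / ((w_grid + w_rest) / k)
q_iw /= S

def kl(a, b):
    # KL(a || b) by numerical integration; clip to avoid log(0) in far tails
    a, b = np.clip(a, 1e-300, None), np.clip(b, 1e-300, None)
    return (a * np.log(a / b) * dz).sum()

print(kl(q_iw, post), kl(q_dens, post))  # expect KL(q_IW||p) < KL(q||p)
```

For Gaussians, KL(q || p(z|x)) = log(σ_p/σ_q) + (σ_q² + (μ_q − μ_p)²)/(2σ_p²) − 1/2 ≈ 1.153 here, which the grid integral should reproduce.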

5.5 In the limit of the number of samples

Another perspective considers the limit of the number of samples k. Recall that the marginal likelihood can be approximated by importance sampling:

p(x) \approx \frac{1}{k} \sum_{i=1}^{k} \frac{p(x, z_i)}{q(z_i|x)}

where the z_i are sampled from q(z|x). We see that the denominator of \tilde{q}_{IW} in equation (1) is approximating p(x). If p(x, z)/q(z|x) is bounded, then it follows from the strong law of large numbers that, as k approaches infinity, \tilde{q}_{IW} converges to the true posterior almost surely. This interpretation becomes clearer when we factor the true posterior out of \tilde{q}_{IW}:

\tilde{q}_{IW}(z|x, z_{2:k}) = p(z|x) \cdot \frac{p(x)}{\frac{1}{k}\left(\frac{p(x, z)}{q(z|x)} + \sum_{i=2}^{k} \frac{p(x, z_i)}{q(z_i|x)}\right)}

We see that the closer the denominator becomes to p(x), the closer \tilde{q}_{IW} is to the true posterior.
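The sketch below illustrates this convergence on the same toy Gaussian model used in the earlier sketches (our own illustration, not from the paper): the Monte Carlo average of the importance weights, i.e. the denominator of \tilde{q}_IW, approaches the exact p(x0) = N(1; 0, 2) as k grows.

```python
import numpy as np

rng = np.random.default_rng(6)
x0, mu_q, s_q = 1.0, 1.5, 1.0  # toy model p(z)=N(0,1), p(x|z)=N(z,1) (illustrative)

def weights(z):
    # importance weights p(x0, z) / q(z|x)
    log_p = -np.log(2 * np.pi) - 0.5 * z**2 - 0.5 * (x0 - z)**2
    log_q = -0.5 * np.log(2 * np.pi * s_q**2) - 0.5 * ((z - mu_q) / s_q)**2
    return np.exp(log_p - log_q)

p_x = np.exp(-x0**2 / 4.0) / np.sqrt(2 * np.pi * 2.0)  # exact p(x0) = N(1; 0, 2)

for k in [10, 1_000, 100_000]:
    z = mu_q + s_q * rng.standard_normal(k)
    w_hat = weights(z).mean()  # denominator of q_tilde_IW
    print(k, w_hat, abs(w_hat - p_x) / p_x)  # error shrinks roughly as 1/sqrt(k)
```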

5.6 Visualizing \tilde{q}_{IW} and q_{IW} in 1D

Figure 3: Visualization of 1D \tilde{q}_{IW} and q_{IW} distributions. The q(z|x) and p(z|x) distributions (blue and green) are both normalized. The three instances of \tilde{q}_{IW} have different batches of samples z_{2:k} from q(z|x), and we can see that they are unnormalized. q_{IW} is normalized and is the expectation over 30 \tilde{q}_{IW} distributions. The distributions were plotted using Algorithm 2.