1 Background
The importance-weighted autoencoder (IWAE; Burda et al. (2016)) is a variational inference strategy capable of producing arbitrarily tight evidence lower bounds. IWAE maximizes the following multi-sample evidence lower bound (ELBO):

$\mathcal{L}_{IWAE}[q] = \mathbb{E}_{z_{1:k} \sim q(z|x)}\left[\log\left(\frac{1}{k}\sum_{i=1}^{k}\frac{p(x, z_i)}{q(z_i|x)}\right)\right]$ (IWAE ELBO)

which is a tighter lower bound than the ELBO maximized by the variational autoencoder (VAE; Kingma & Welling (2014)):

$\mathcal{L}_{VAE}[q] = \mathbb{E}_{z \sim q(z|x)}\left[\log\frac{p(x, z)}{q(z|x)}\right]$ (VAE ELBO)
2 Defining the implicit distribution
In this section, we derive the implicit distribution that arises from importance sampling using $q(z|x)$ as a proposal distribution. Given a batch of samples $z_1, \dots, z_k$ from $q(z|x)$, the following is the unnormalized importance-weighted distribution:

$\tilde{q}_{IW}(z|x, z_{2:k}) = \frac{\tilde{w}(x, z)}{\frac{1}{k}\sum_{j=1}^{k}\tilde{w}(x, z_j)}\, q(z|x), \quad \text{where } \tilde{w}(x, z) = \frac{p(x, z)}{q(z|x)} \text{ and } z_1 = z \qquad (1)$
Here are some properties of the approximate IWAE posterior:

When $k = 1$, $\tilde{q}_{IW}(z|x, z_{2:k})$ equals $q(z|x)$.

When $k > 1$, the form of $\tilde{q}_{IW}(z|x, z_{2:k})$ depends on the true posterior $p(z|x)$.

As $k \to \infty$, $\tilde{q}_{IW}(z|x, z_{2:k})$ approaches the true posterior pointwise.

See the appendix for details. Importantly, $\tilde{q}_{IW}$ depends on the batch of samples $z_{2:k}$. See Fig. 3 in the appendix for a visualization of $\tilde{q}_{IW}$ with different batches of $z_{2:k}$.
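To make Eq. (1) and the $k=1$ property concrete, the sketch below evaluates $\tilde{q}_{IW}$ on a toy conjugate-Gaussian model. The model (prior $p(z)=\mathcal{N}(0,1)$, likelihood $p(x|z)=\mathcal{N}(z,1)$, proposal $q(z|x)=\mathcal{N}(0,1)$) and all constants are illustrative assumptions, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = 1.5  # observed data point (toy value)

def log_p_xz(z):
    # log p(x0, z) for prior p(z)=N(0,1) and likelihood p(x|z)=N(z,1)
    return -0.5 * (z**2 + (x0 - z)**2) - np.log(2 * np.pi)

def log_q(z):
    # proposal q(z|x) = N(0,1)
    return -0.5 * z**2 - 0.5 * np.log(2 * np.pi)

def q_iw_tilde(z, z_rest):
    """Unnormalized importance-weighted density of Eq. (1), with z_1 = z."""
    zs = np.concatenate(([z], z_rest))
    w = np.exp(log_p_xz(zs) - log_q(zs))  # importance weights w_j
    return w[0] / w.mean() * np.exp(log_q(z))

# Property 1: with k = 1 (no other samples), q_iw_tilde reduces to q
z = 0.3
print(q_iw_tilde(z, np.array([])), np.exp(log_q(z)))  # identical

# With k > 1, the density is reweighted and depends on the sampled batch z_{2:k}
print(q_iw_tilde(z, rng.standard_normal(9)))
```

With different draws of `z_rest`, the second call returns different values, illustrating that $\tilde{q}_{IW}$ is a function of the batch $z_{2:k}$.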
2.1 Recovering the IWAE bound from the VAE bound
Here we show that the IWAE ELBO is equivalent, in expectation, to the VAE ELBO with a more flexible, unnormalized distribution implicitly defined by importance reweighting. If we replace $q(z|x)$ with $\tilde{q}_{IW}(z|x, z_{2:k})$ in the VAE ELBO and take an expectation over $z_{2:k}$, then we recover the IWAE ELBO:

$\mathbb{E}_{z_{2:k} \sim q(z|x)}\Big[\mathcal{L}_{VAE}[\tilde{q}_{IW}]\Big] = \mathcal{L}_{IWAE}[q]$

For a more detailed derivation, see the appendix. Note that we are abusing the VAE lower bound notation, since $\mathcal{L}_{VAE}[\tilde{q}_{IW}]$ implies an expectation over an unnormalized distribution. Consequently, we replace the expectation with an equivalent integral.
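This equivalence, and the bound ordering it implies, can be checked numerically. The sketch below estimates both ELBOs by Monte Carlo on a toy conjugate-Gaussian model where $\log p(x)$ is available in closed form; the model (prior $\mathcal{N}(0,1)$, likelihood $\mathcal{N}(z,1)$, proposal $\mathcal{N}(0,1)$) and all constants are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
x0, k, n_batches = 1.5, 10, 20000  # toy data point, samples per batch, batches

def log_w(z):
    # log importance weight log p(x0, z) - log q(z|x) for the toy model:
    # prior p(z)=N(0,1), likelihood p(x|z)=N(z,1), proposal q(z|x)=N(0,1)
    return -0.5 * (x0 - z)**2 - 0.5 * np.log(2 * np.pi)

lw = log_w(rng.standard_normal((n_batches, k)))

vae_elbo = lw.mean()                                # E_q[log w]
iwae_elbo = np.log(np.exp(lw).mean(axis=1)).mean()  # E[log (1/k) sum_i w_i]
log_px = -0.25 * x0**2 - 0.5 * np.log(4 * np.pi)    # exact log N(x0; 0, 2)

print(vae_elbo, iwae_elbo, log_px)  # ordering: VAE <= IWAE <= log p(x)
```

On this model the proposal is deliberately mismatched from the posterior, so the gap between the two bounds is visible at $k = 10$.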
2.2 Expected importance weighted distribution
We can achieve a tighter lower bound than $\mathcal{L}_{IWAE}[q]$ by taking the expectation of $\tilde{q}_{IW}$ over $z_{2:k}$. The expected importance-weighted distribution $q_{IW}(z|x)$ is given by:

$q_{IW}(z|x) = \mathbb{E}_{z_{2:k} \sim q(z|x)}\big[\tilde{q}_{IW}(z|x, z_{2:k})\big] \qquad (2)$

See section 5.2 for a proof that $q_{IW}(z|x)$ is a normalized distribution. Using $q_{IW}$ in the VAE ELBO, $\mathcal{L}_{VAE}[q_{IW}]$, results in an upper bound of $\mathcal{L}_{IWAE}[q]$. See section 5.3 for the proof, which is a special case of the proof in Naesseth et al. (2017). The procedure to sample from $q_{IW}(z|x)$ is shown in Algorithm 1; it is equivalent to sampling-importance-resampling (SIR).
2.3 Visualizing the nonparametric approximate posterior
The IWAE approximating distribution is nonparametric in the sense that, as the true posterior grows more complex, so do the shapes of $\tilde{q}_{IW}$ and $q_{IW}$. This makes plotting these distributions challenging. A kernel-density-estimation approach could be used, but it requires many samples. Thankfully, equations (1) and (2) give us a simple and fast way to approximately plot $\tilde{q}_{IW}$ and $q_{IW}$ without introducing artifacts due to kernel density smoothing. Figure 1 visualizes $q_{IW}$ on a 2D distribution approximation problem using Algorithm 2. The base distribution $q$ is a Gaussian. As we increase the number of samples $k$ and keep the base distribution fixed, the approximation approaches the true distribution. See section 5.6 for 1D visualizations of $\tilde{q}_{IW}$ and $q_{IW}$.
Algorithm 2: Estimating $q_{IW}(z|x)$ (Eq. 2)
1: Input: data point $x$, evaluation points $Z$
2: Input: number of sample batches $S$, batch size $k$
3: Initialize $\hat{q}_{IW}(z) \leftarrow 0$ for each $z \in Z$
4: Define $\tilde{w}(x, z) = p(x, z)/q(z|x)$
5: for s in 1…S do
6: Sample $z^{(s)}_{2:k} \sim q(z|x)$
7: Compute $\tilde{w}(x, z^{(s)}_j)$ for $j = 2, \dots, k$
8: for $z$ in $Z$ do
9: $\hat{q}_{IW}(z) \leftarrow \hat{q}_{IW}(z) + \frac{1}{S}\, \tilde{q}_{IW}(z|x, z^{(s)}_{2:k})$
10: Return $\hat{q}_{IW}$
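This grid-based Monte Carlo estimator can be sketched as follows, on a toy conjugate-Gaussian model (prior $\mathcal{N}(0,1)$, likelihood $\mathcal{N}(z,1)$, proposal $\mathcal{N}(0,1)$; all constants are illustrative assumptions, not the paper's models). The estimate should also integrate to roughly one, consistent with section 5.2:

```python
import numpy as np

rng = np.random.default_rng(2)
x0, k, S = 1.5, 20, 4000            # toy data point, batch size, # batches
grid = np.linspace(-4.0, 4.0, 401)  # evaluation points Z
dz = grid[1] - grid[0]

def log_w(z):
    # log importance weight: log p(x0, z) - log q(z) for p(z)=N(0,1),
    # p(x|z)=N(z,1), proposal q=N(0,1)
    return -0.5 * (x0 - z)**2 - 0.5 * np.log(2 * np.pi)

q = np.exp(-0.5 * grid**2) / np.sqrt(2 * np.pi)  # proposal density on grid

q_hat = np.zeros_like(grid)
for _ in range(S):
    w_rest = np.exp(log_w(rng.standard_normal(k - 1))).sum()  # weights of z_{2:k}
    w_grid = np.exp(log_w(grid))                              # weight of z_1 = z
    q_hat += w_grid / ((w_grid + w_rest) / k) * q / S         # Eq. (1), averaged

print(q_hat.sum() * dz)  # integrates to ~1 (see Sec. 5.2)
```

Note the whole grid is updated with vectorized operations in each batch, which is what makes this approach fast compared to kernel density estimation from resampled particles.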

3 Resampling for prediction
During training, we sample $z_1, \dots, z_k$ from $q(z|x)$ and implicitly weight them with the IWAE ELBO. After training, we need to explicitly reweight samples from $q(z|x)$.
[Figure 2: Real data compared with reconstructions of samples from $q$ and from $q_{IW}$.]
In figure 2, we demonstrate the need to sample from $q_{IW}$ rather than $q$ for reconstructing MNIST digits. We trained the model to maximize the IWAE ELBO with $k = 50$ and 2 latent dimensions, similar to Appendix C in Burda et al. (2016). When we sample from $q(z|x)$ and reconstruct the samples, we see a number of anomalies. However, if we perform the sampling-resampling step (Alg. 1), then the reconstructions are much more accurate. The intuition here is that we trained the model with $q_{IW}$ with $k = 50$ but then sampled from $q$ ($q_{IW}$ with $k = 1$), which are very different distributions, as seen in Fig. 1.
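The resampling step of Algorithm 1 can be sketched as follows, on an illustrative conjugate-Gaussian model rather than the paper's MNIST model (all constants are assumptions). Because the true posterior is $\mathcal{N}(x_0/2, 1/2)$ in this toy setting, we can check that resampled particles concentrate near its mean while raw proposal samples do not:

```python
import numpy as np

rng = np.random.default_rng(3)
x0, k = 1.5, 50  # toy data point; k = 50 particles per resampling step

def log_w(z):
    # log p(x0, z) - log q(z) for p(z)=N(0,1), p(x|z)=N(z,1), q=N(0,1)
    return -0.5 * (x0 - z)**2 - 0.5 * np.log(2 * np.pi)

def sample_q_iw():
    """One draw from q_IW via sampling-importance-resampling (Alg. 1)."""
    z = rng.standard_normal(k)              # z_1..z_k ~ q(z|x)
    w = np.exp(log_w(z))                    # importance weights
    return z[rng.choice(k, p=w / w.sum())]  # resample one particle

samples = np.array([sample_q_iw() for _ in range(5000)])
# true posterior is N(x0/2, 1/2): resampled draws concentrate near x0/2,
# while raw draws from q = N(0,1) have mean 0
print(samples.mean(), x0 / 2)
```

The same structure applies after training an IWAE: draw $k$ latents from the encoder, weight them, and resample one before decoding.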
4 Discussion
Bachman & Precup (2015) also showed that the IWAE objective is equivalent to stochastic variational inference with a proposal distribution corrected towards the true posterior via normalized importance sampling. We build on this idea by further examining $q_{IW}$ and by providing visualizations to help better grasp the interpretation. To summarize our observations, the following is the ordering of lower bounds given specific proposal distributions:

$\mathcal{L}_{VAE}[q] \leq \mathcal{L}_{IWAE}[q] \leq \mathcal{L}_{VAE}[q_{IW}] \leq \log p(x)$
In light of this, IWAE can be seen as increasing the complexity of the approximate distribution $q$, similar to other methods that increase the complexity of $q$, such as Normalizing Flows (Jimenez Rezende & Mohamed, 2015), Variational Boosting (Miller et al., 2016) or Hamiltonian variational inference (Salimans et al., 2015). With this interpretation in mind, we can possibly generalize $q_{IW}$ to be applicable to other divergence measures. An interesting avenue of future work is the comparison of IW-based variational families with alpha-divergences or operator variational objectives.
Acknowledgments
We’d like to thank an anonymous ICLR reviewer for providing insightful future directions for this work. We thank Yuri Burda, Christian Naesseth, and Scott Linderman for bringing our attention to oversights in the paper, and Christian Naesseth in particular for the derivation in section 5.3 and for providing many helpful comments.
References
 Bachman & Precup (2015) Philip Bachman and Doina Precup. Training Deep Generative Models: Variations on a Theme. NIPS Approximate Inference Workshop, 2015.
 Burda et al. (2016) Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. In ICLR, 2016.
 Jimenez Rezende & Mohamed (2015) Danilo Jimenez Rezende and Shakir Mohamed. Variational Inference with Normalizing Flows. In ICML, 2015.
 Kingma & Welling (2014) Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In ICLR, 2014.

 Miller et al. (2016) Andrew C. Miller, Nicholas Foti, and Ryan P. Adams. Variational Boosting: Iteratively Refining Posterior Approximations. Advances in Approximate Bayesian Inference, NIPS Workshop, 2016.
 Naesseth et al. (2017) C. A. Naesseth, S. W. Linderman, R. Ranganath, and D. M. Blei. Variational Sequential Monte Carlo. ArXiv preprint, 2017.
 Salimans et al. (2015) Tim Salimans, Diederik P. Kingma, and Max Welling. Markov Chain Monte Carlo and Variational Inference: Bridging the Gap. In ICML, 2015.
5 Appendix
5.1 Detailed derivation of the equivalence of VAE and IWAE bound
Here we show that the expectation over $z_{2:k}$ of the VAE lower bound with the unnormalized importance-weighted distribution $\tilde{q}_{IW}$, i.e. $\mathbb{E}_{z_{2:k} \sim q(z|x)}\big[\mathcal{L}_{VAE}[\tilde{q}_{IW}]\big]$, is equivalent to the IWAE bound with the original distribution, $\mathcal{L}_{IWAE}[q]$.
$\mathbb{E}_{z_{2:k} \sim q(z|x)}\Big[\mathcal{L}_{VAE}[\tilde{q}_{IW}]\Big] = \mathbb{E}_{z_{2:k} \sim q(z|x)}\left[\int \tilde{q}_{IW}(z|x, z_{2:k}) \log\frac{p(x, z)}{\tilde{q}_{IW}(z|x, z_{2:k})}\, dz\right]$

Writing $z_1 = z$ and $\tilde{w}_j = p(x, z_j)/q(z_j|x)$, the definition of $\tilde{q}_{IW}$ (Eq. 1) gives

$\frac{p(x, z)}{\tilde{q}_{IW}(z|x, z_{2:k})} = \frac{p(x, z)}{q(z|x)} \cdot \frac{\frac{1}{k}\sum_{j=1}^{k}\tilde{w}_j}{\tilde{w}_1} = \frac{1}{k}\sum_{j=1}^{k}\tilde{w}_j$

so the expectation becomes

$\mathbb{E}_{z_{2:k} \sim q(z|x)}\left[\int \frac{\tilde{w}_1}{\frac{1}{k}\sum_{j}\tilde{w}_j}\, q(z_1|x) \log\left(\frac{1}{k}\sum_{j}\tilde{w}_j\right) dz_1\right] = \mathbb{E}_{z_{1:k} \sim q(z|x)}\left[\frac{\tilde{w}_1}{\frac{1}{k}\sum_{j}\tilde{w}_j} \log\left(\frac{1}{k}\sum_{j}\tilde{w}_j\right)\right]$

By symmetry of the $z_i$, the index 1 can be replaced by an average over all indices:

$= \mathbb{E}_{z_{1:k} \sim q(z|x)}\left[\frac{\frac{1}{k}\sum_{i}\tilde{w}_i}{\frac{1}{k}\sum_{j}\tilde{w}_j} \log\left(\frac{1}{k}\sum_{j}\tilde{w}_j\right)\right] = \mathbb{E}_{z_{1:k} \sim q(z|x)}\left[\log\left(\frac{1}{k}\sum_{j=1}^{k}\tilde{w}_j\right)\right] = \mathcal{L}_{IWAE}[q]$
5.2 Proof that $q_{IW}$ is a normalized distribution
With $\tilde{w}_j = p(x, z_j)/q(z_j|x)$ and $z_1 = z$:

$\int q_{IW}(z|x)\, dz = \int \mathbb{E}_{z_{2:k} \sim q(z|x)}\big[\tilde{q}_{IW}(z|x, z_{2:k})\big]\, dz = \mathbb{E}_{z_{1:k} \sim q(z|x)}\left[\frac{\tilde{w}_1}{\frac{1}{k}\sum_{j=1}^{k}\tilde{w}_j}\right]$

By symmetry of the $z_i$:

$= \frac{1}{k}\sum_{i=1}^{k}\mathbb{E}_{z_{1:k} \sim q(z|x)}\left[\frac{\tilde{w}_i}{\frac{1}{k}\sum_{j=1}^{k}\tilde{w}_j}\right] = \mathbb{E}_{z_{1:k} \sim q(z|x)}\left[\frac{\frac{1}{k}\sum_{i=1}^{k}\tilde{w}_i}{\frac{1}{k}\sum_{j=1}^{k}\tilde{w}_j}\right] = 1$
5.3 Proof that $\mathcal{L}_{VAE}[q_{IW}]$ is an upper bound of $\mathcal{L}_{IWAE}[q]$
Proof provided by Christian Naesseth.
Let $\tilde{w}(x, z) = p(x, z)/q(z|x)$. By the result of section 5.1,

$\mathcal{L}_{IWAE}[q] = \mathbb{E}_{z_{2:k} \sim q(z|x)}\left[\int \tilde{q}_{IW}(z|x, z_{2:k}) \log\frac{p(x, z)}{\tilde{q}_{IW}(z|x, z_{2:k})}\, dz\right]$

The function $a \mapsto a \log(c/a)$ is concave for $c > 0$, so by Jensen's inequality, for each fixed $z$,

$\mathbb{E}_{z_{2:k}}\left[\tilde{q}_{IW} \log\frac{p(x, z)}{\tilde{q}_{IW}}\right] \leq \mathbb{E}_{z_{2:k}}\big[\tilde{q}_{IW}\big] \log\frac{p(x, z)}{\mathbb{E}_{z_{2:k}}\big[\tilde{q}_{IW}\big]}$

Integrating over $z$ and using $q_{IW}(z|x) = \mathbb{E}_{z_{2:k}}\big[\tilde{q}_{IW}(z|x, z_{2:k})\big]$ (Eq. 2):

$\mathcal{L}_{IWAE}[q] \leq \int q_{IW}(z|x) \log\frac{p(x, z)}{q_{IW}(z|x)}\, dz = \mathcal{L}_{VAE}[q_{IW}]$
5.4 Proof that $q_{IW}$ is closer to the true posterior than $q$
The previous section showed that $\mathcal{L}_{IWAE}[q] \leq \mathcal{L}_{VAE}[q_{IW}]$. That is, the IWAE ELBO with the base $q$ is a lower bound of the VAE ELBO with the importance-weighted $q_{IW}$. Due to Jensen's inequality, and as shown in Burda et al. (2016), we know that the IWAE ELBO is an upper bound of the VAE ELBO: $\mathcal{L}_{IWAE}[q] \geq \mathcal{L}_{VAE}[q]$. Furthermore, the log marginal likelihood can be factorized as $\log p(x) = \mathcal{L}_{VAE}[q] + KL\big(q(z|x)\,\|\,p(z|x)\big)$ and rearranged to $KL\big(q(z|x)\,\|\,p(z|x)\big) = \log p(x) - \mathcal{L}_{VAE}[q]$.
Following the observations above and substituting $q_{IW}$ for $q$:

$KL\big(q_{IW}(z|x)\,\|\,p(z|x)\big) = \log p(x) - \mathcal{L}_{VAE}[q_{IW}]$
$\leq \log p(x) - \mathcal{L}_{IWAE}[q]$
$\leq \log p(x) - \mathcal{L}_{VAE}[q] = KL\big(q(z|x)\,\|\,p(z|x)\big)$

Thus, $KL\big(q_{IW}(z|x)\,\|\,p(z|x)\big) \leq KL\big(q(z|x)\,\|\,p(z|x)\big)$, meaning $q_{IW}$ is closer to the true posterior than $q$ in terms of KL divergence.
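This KL ordering can also be verified numerically. The sketch below estimates $q_{IW}$ on a grid (via Eq. 2) for a toy conjugate-Gaussian model (prior $\mathcal{N}(0,1)$, likelihood $\mathcal{N}(z,1)$, proposal $\mathcal{N}(0,1)$; all constants are illustrative assumptions) and compares both KL divergences against the closed-form posterior:

```python
import numpy as np

rng = np.random.default_rng(5)
x0, k, S = 1.5, 10, 4000
grid = np.linspace(-5.0, 5.0, 1001)
dz = grid[1] - grid[0]

def log_w(z):
    # log p(x0, z) - log q(z) for p(z)=N(0,1), p(x|z)=N(z,1), q=N(0,1)
    return -0.5 * (x0 - z)**2 - 0.5 * np.log(2 * np.pi)

q = np.exp(-0.5 * grid**2) / np.sqrt(2 * np.pi)      # proposal q = N(0,1)
post = np.exp(-(grid - x0 / 2)**2) / np.sqrt(np.pi)  # posterior N(x0/2, 1/2)

# Monte Carlo estimate of q_IW on the grid (Eq. 2)
q_iw = np.zeros_like(grid)
for _ in range(S):
    w_rest = np.exp(log_w(rng.standard_normal(k - 1))).sum()
    w_grid = np.exp(log_w(grid))
    q_iw += w_grid / ((w_grid + w_rest) / k) * q / S
q_iw /= q_iw.sum() * dz  # renormalize away small Monte Carlo error

kl = lambda a, b: np.sum(a * np.log(a / b)) * dz  # grid-based KL(a || b)
print(kl(q_iw, post), kl(q, post))  # first should be smaller
```

Even with a modest $k = 10$, the importance-weighted distribution lands substantially closer to the posterior than the base proposal.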
5.5 In the limit of the number of samples
Another perspective considers the limit $k \to \infty$. Recall that the marginal likelihood can be approximated by importance sampling:

$p(x) \approx \frac{1}{k}\sum_{j=1}^{k}\frac{p(x, z_j)}{q(z_j|x)} = \frac{1}{k}\sum_{j=1}^{k}\tilde{w}(x, z_j)$

where each $z_j$ is sampled from $q(z|x)$. We see that the denominator of $\tilde{q}_{IW}$ is approximating $p(x)$. If $\tilde{w}(x, z)$ is bounded, then it follows from the strong law of large numbers that, as $k$ approaches infinity, $\tilde{q}_{IW}$ converges to the true posterior almost surely. This interpretation becomes clearer when we factor the true posterior out of $\tilde{q}_{IW}$:

$\tilde{q}_{IW}(z|x, z_{2:k}) = \frac{\tilde{w}(x, z)}{\frac{1}{k}\sum_{j=1}^{k}\tilde{w}(x, z_j)}\, q(z|x) = p(z|x)\, \frac{p(x)}{\frac{1}{k}\sum_{j=1}^{k}\tilde{w}(x, z_j)}$

We see that the closer the denominator becomes to $p(x)$, the closer $\tilde{q}_{IW}$ is to the true posterior.
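The convergence of the normalizer to $p(x)$ is easy to observe numerically in a toy conjugate-Gaussian model where $p(x)$ is known in closed form (the model and all constants below are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
x0 = 1.5
# exact marginal: p(x0) = N(x0; 0, 2) for p(z)=N(0,1), p(x|z)=N(z,1)
p_x = np.exp(-0.25 * x0**2) / np.sqrt(4 * np.pi)

def log_w(z):
    # log p(x0, z) - log q(z) with proposal q = N(0,1)
    return -0.5 * (x0 - z)**2 - 0.5 * np.log(2 * np.pi)

for k in [10, 1000, 100_000]:
    est = np.exp(log_w(rng.standard_normal(k))).mean()  # (1/k) sum_j w_j
    print(k, est, p_x)  # estimate approaches p_x as k grows
```

Because $\tilde{w}$ is bounded in this model, the strong law of large numbers applies directly and the Monte Carlo average settles onto $p(x)$.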