Recent advances in variational inference (Kingma & Welling, 2014; Rezende et al., 2014) make it efficient to model complex distributions using a latent variable model with an intractable marginal likelihood $p(x) = \int p(x|z)\,p(z)\,dz$, where $z$ is an unobserved vector distributed according to a prior distribution, e.g. $p(z) = \mathcal{N}(0, I)$. The use of an inference network, or encoder, allows for amortization of inference by directly conditioning on the data $x$ to approximate the true posterior $p(z|x)$. This is known as the Variational Autoencoder (VAE).
Normally, learning is achieved by maximizing a lower bound on the marginal likelihood, since the latter is not tractable in general. Naturally, one would be interested in reducing the gap between the bound and the desired objective. Burda et al. (2015) devise a new family of lower bounds with progressively smaller gaps using multiple i.i.d. samples from the variational distribution $q(z)$, which they call the Importance Weighted Autoencoder (IWAE). Cremer et al. (2017) and Bachman & Precup (2015) notice that IWAE can be interpreted as using a corrected variational distribution in the standard variational lower bound. The proposal is corrected towards the true posterior by the importance weighting, and approaches the latter as the number of samples increases.
Intuitively, when only one sample is drawn to estimate the variational lower bound, the loss function heavily penalizes the drawn sample, and thus the encoder. The decoder will be adjusted accordingly to maximize the likelihood in a biased manner, as it treats the sample as if it were the real, observed data. In the IWAE setup, the inference model is allowed to make mistakes, as a sample corresponding to a high loss is penalized less owing to the importance weight.
Drawing multiple samples also allows us to represent the distribution at a higher resolution. This motivates us to construct a joint sampler such that the empirical distribution drawn from it can better represent the posterior distribution. More specifically, we consider a hierarchical sampler, whose latent variable acts at a meta-level to decide the salient points of the true posterior; we call the resulting model the Hierarchical Importance Weighted Autoencoder (H-IWAE). Doing so allows for (1) summarizing the distribution using the latent variable (Edwards & Storkey, 2017), and (2) counteracting the bias induced by each proposal. To analyze the latter effect, we look at the variance of the Monte Carlo estimate of the lower bound, and show that maximizing the lower bound implicitly reduces the variance. Our main contributions are as follows:
We propose a hierarchical model to induce dependency among the samples for approximate inference, and derive a hierarchical importance weighted lower bound.
We analyze the convergence of the variance of the Monte Carlo estimate of the lower bound, and draw a connection to the convergence of the bound itself.
Empirically, we explore different weighting heuristics, validate the hypothesis that the joint samples tend to be negatively correlated, and perform experiments on a suite of standard density estimation tasks.
In a typical latent variable model setup, we assume a joint density function that factorizes as $p(x, z) = p(x|z)\,p(z)$, where $x$ and $z$ are the observed and unobserved random variables, respectively. Learning is achieved by maximizing the marginal log-likelihood of the data, $\log p(x) = \log \int p(x|z)\,p(z)\,dz$, which is in general intractable due to the integration, since a neural network is usually used to parameterize the likelihood function.
2.1 Variational Inference
One way to estimate the marginal likelihood is via the variational lower bound. Concretely, we introduce a variational distribution $q(z)$ (for notational convenience we omit the conditioning on $x$ for amortized inference), and maximize the bound on the RHS:
$$\log p(x) \geq \mathbb{E}_{q(z)}\left[\log \frac{p(x, z)}{q(z)}\right],$$
which is known as the evidence lower bound (ELBO). The tightness of the ELBO can be described by the reverse Kullback-Leibler (KL) divergence $D_{\mathrm{KL}}(q(z)\,\|\,p(z|x))$, which is equal to zero if and only if $q(z) = p(z|x)$.
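To make the bound concrete, here is a minimal numeric sketch on a conjugate Gaussian toy model (our choice for illustration, not from the text), where $\log p(x)$ is available in closed form and the ELBO is tight exactly when $q$ is the true posterior:

```python
import numpy as np

rng = np.random.default_rng(0)
x = 1.5  # a single observed scalar

def log_normal(v, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (v - mean) ** 2 / var)

# Toy model: z ~ N(0, 1), x | z ~ N(z, 1)  =>  p(x) = N(0, 2), p(z|x) = N(x/2, 1/2).
log_px = log_normal(x, 0.0, 2.0)

def elbo(mu, var, n=200_000):
    """Monte Carlo ELBO with q(z) = N(mu, var)."""
    z = rng.normal(mu, np.sqrt(var), size=n)
    log_joint = log_normal(x, z, 1.0) + log_normal(z, 0.0, 1.0)
    return np.mean(log_joint - log_normal(z, mu, var))

print(elbo(x / 2, 0.5), log_px)  # tight: q equals the true posterior
print(elbo(0.0, 1.0), log_px)    # looser proposal -> strictly smaller ELBO
```

With $q$ set to the exact posterior, the log ratio is constant in $z$, so the Monte Carlo estimate equals $\log p(x)$ with zero variance; any other $q$ pays a gap of $D_{\mathrm{KL}}(q\,\|\,p(z|x))$.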
2.2 Importance Weighted Auto-Encoder
Burda et al. (2015) noticed that the likelihood ratio in the ELBO, $w = p(x, z)/q(z)$, resembles the importance weight in importance sampling, and introduced a family of lower bounds called the Importance Weighted Lower Bound (IWLB):
$$\mathcal{L}_K = \mathbb{E}_{z_1, \dots, z_K \sim q}\left[\log \frac{1}{K}\sum_{j=1}^{K} \frac{p(x, z_j)}{q(z_j)}\right],$$
which recovers the ELBO when $K = 1$. In practice, the expectation over the product measure is estimated by sampling $z_j \sim q$ for $j = 1, \dots, K$, and setting
$$\hat{\mathcal{L}}_K = \log \frac{1}{K}\sum_{j=1}^{K} \frac{p(x, z_j)}{q(z_j)}.$$
One property of this family of bounds is monotonicity, i.e. $\mathcal{L}_{K+1} \geq \mathcal{L}_K$ for all $K$. Strong consistency of $\hat{\mathcal{L}}_K$ can also be proved since the $z_j$'s are independently and identically distributed (each following the law of $q$), in which case $\hat{\mathcal{L}}_K \to \log p(x)$ almost surely.
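Both properties can be checked numerically; the sketch below reuses a conjugate Gaussian toy model (an assumption for illustration, not from the text) with the prior as the proposal:

```python
import numpy as np

rng = np.random.default_rng(1)
x = 1.5
# Toy model z ~ N(0,1), x|z ~ N(z,1)  =>  p(x) = N(0, 2)
log_px = -0.5 * (np.log(2 * np.pi * 2.0) + x ** 2 / 2.0)

def log_w(z):
    # log importance weight with the prior as proposal: log p(x,z) - log p(z) = log p(x|z)
    return -0.5 * (np.log(2 * np.pi) + (x - z) ** 2)

def iwlb_hat(K, repeats=2000):
    """Average of the K-sample estimator over many repeats (log-sum-exp for stability)."""
    lw = log_w(rng.normal(size=(repeats, K)))
    m = lw.max(axis=1, keepdims=True)
    return np.mean(m[:, 0] + np.log(np.mean(np.exp(lw - m), axis=1)))

bounds = [iwlb_hat(K) for K in (1, 10, 100, 1000)]
print(bounds, log_px)  # increasing in K, approaching log p(x) from below
```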
This asymptotic property motivates the use of a large number of samples in practice, but there are two practical concerns. First, even though evaluation of the likelihood ratios under the generative model can be done in parallel, memory scales as $\mathcal{O}(K)$; an online algorithm can be adopted to reduce the memory cost, but at the price of sequential evaluation (Huang & Courville, 2017). Second, for any finite $K$ and any choice of proposal $q$ that is not proportional to $p(x, \cdot)$ (a reasonable assumption given the finite approximating capacity of the chosen family of $q$ and the finiteness of the recognition model for amortization), $\mathcal{L}_K$ is strictly a lower bound on the marginal likelihood. This motivates research into improving surrogate loss functions (such as $\mathcal{L}_K$) for the intractable $\log p(x)$.
Nowozin (2018), for instance, interpreted $\hat{\mathcal{L}}_K$ as a biased estimator of $\log p(x)$, and introduced a family of estimators with reduced bias: the bias of the order-$m$ corrected estimator can be shown to be $\mathcal{O}(K^{-(m+1)})$, compared to $\mathcal{O}(K^{-1})$ for $\hat{\mathcal{L}}_K$. However, the variance of the new estimator can be high, and it is no longer a lower bound.
Alternatively, one can look at the dispersion (by dispersion, we mean how "spread out" or "stretched" the distribution of the random variable is; common statistical dispersion indices include variance, mean absolute difference, entropy, etc.) of the likelihood ratio $w = p(x, z)/q(z)$, which is itself a random variable. The gap between the bound and $\log p(x)$ can be explained by Jensen's inequality and the fact that $\log$ is a strictly concave function: $\mathbb{E}[\log Y] \leq \log \mathbb{E}[Y]$, where $Y$ is a positive random variable. The equality holds if and only if $Y$ is almost surely a constant. This can be approximately true when $Y$ is taken to be the likelihood ratio and $q$ is close to the true posterior. Instead, if we take $Y = \frac{1}{K}\sum_{j=1}^{K} w_j$, we see a direct reduction in variance if the $w_j$'s are all uncorrelated and identically distributed: $\mathrm{Var}(Y) = \mathrm{Var}(w)/K$. Intuitively, the shape of the distribution of $Y$ becomes sharper with larger $K$, resulting in a smaller gap when Jensen's inequality is applied. This idea was also noticed by Domke & Sheldon (2018) and explored by Klys et al. (2018).
However, a clear connection between how well $\log p(x)$ is approximated and the minimization of the variance of the likelihood ratio and/or the variance of the estimator is lacking. We defer this discussion to Section 4, where we also provide a theoretical analysis.
2.3 Variance Reduction via Negative Correlation
Let $\bar{Y} = \sum_{j=1}^{K} c_j Y_j$, where the $Y_j$'s are random variables and the $c_j$'s sum to one. The variance of $\bar{Y}$ can be written as:
$$\mathrm{Var}(\bar{Y}) = \sum_{j=1}^{K} c_j^2\,\mathrm{Var}(Y_j) + 2\sum_{i < j} c_i c_j\,\mathrm{Cov}(Y_i, Y_j).$$
This suggests that the variance of $\bar{Y}$ is smaller if the $Y_j$'s are negatively correlated with each other. The intuition is as follows: if $Y_i$ deviates below its mean, the error it makes can be canceled out by $Y_j$ if the latter is above its mean.
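A quick numeric check of this decomposition, with two hypothetical negatively correlated variables:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000

# Y2 loads negatively on Y1, so Cov(Y1, Y2) < 0 while both stay zero-mean, unit-variance.
y1 = rng.normal(size=n)
y2 = -0.8 * y1 + rng.normal(scale=0.6, size=n)

c1 = c2 = 0.5  # weights summing to one
ybar = c1 * y1 + c2 * y2

lhs = ybar.var()
rhs = c1**2 * y1.var() + c2**2 * y2.var() + 2 * c1 * c2 * np.cov(y1, y2)[0, 1]
print(lhs, rhs)             # the two sides agree
print(lhs, 0.5 * y1.var())  # far below the variance of averaging two independent copies
```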
For example, the method of antithetic variates (Owen, 2013) is a classic technique that relies on negative correlation. Assume we want to estimate $\mu = \mathbb{E}[f(Z)]$, where $Z$ has a symmetric density. Given a sample $Z_1$, we can augment our estimate with a $Z_2$ that is opposite to $Z_1$, obtained by reflecting $Z_1$ through the center point, and set $\hat{\mu} = \frac{1}{2}(f(Z_1) + f(Z_2))$. $\hat{\mu}$ is still an unbiased estimator of $\mu$, but has variance $\frac{\sigma^2}{2}(1 + \rho)$, where $\sigma^2 = \mathrm{Var}(f(Z))$ and $\rho$ is the correlation between $f(Z_1)$ and $f(Z_2)$. If $f$ is monotonic, then the correlation is negative, so we achieve variance reduction at a rate faster than averaging two independent samples. Wu et al. (2018) thus propose to train an antithetic sampler. In general, however, there is no guarantee that the random function is monotonic, so we propose to train a joint sampler via a latent variable.
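A minimal sketch of antithetic variates, using $f = \exp$ as the integrand (chosen for illustration because it is monotonic):

```python
import numpy as np

rng = np.random.default_rng(3)
f = np.exp  # monotonic, so f(Z) and f(-Z) are negatively correlated

def plain(n):
    z = rng.normal(size=n)
    return f(z).mean()

def antithetic(n):
    z = rng.normal(size=n // 2)          # half the draws ...
    return 0.5 * (f(z) + f(-z)).mean()   # ... reflected through the center 0

# Same number of f evaluations per estimate; compare sampling variances over repeats.
plain_est = np.array([plain(1000) for _ in range(500)])
anti_est = np.array([antithetic(1000) for _ in range(500)])
print(plain_est.var(), anti_est.var())  # antithetic variance is markedly smaller
```

Both estimators target $\mathbb{E}[e^Z] = e^{1/2}$; the antithetic pairs are negatively correlated, so the variance drops below what averaging independent draws achieves at the same cost.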
3 Hierarchical Importance Weighted Auto-Encoder
In order to achieve the anticorrelation just described, we consider a joint sampling scheme with a hierarchical structure that admits fast sampling and allows us to approximate the marginal densities required to form a valid lower bound.
3.1 Joint Importance Sampling
We first consider a joint density of the $z_j$'s, denoted by $q(z_1, \dots, z_K)$. Let $q_j$ be the marginal of $z_j$. Let $\omega_j$ be a weighting factor such that $\sum_{j=1}^{K} \omega_j = 1$ for all $z$. Then Jensen's inequality gives
$$\log p(x) \geq \mathbb{E}_{q(z_1, \dots, z_K)}\left[\log \sum_{j=1}^{K} \omega_j \frac{p(x, z_j)}{q_j(z_j)}\right],$$
which we call the Joint Importance Weighted Lower Bound (J-IWLB) (see Appendix A for a detailed derivation). This allows us to generalize the IWLB in two ways:
The flexibility to have different marginals $q_j$
The dependency among the $z_j$'s
Point 1⃝, with non-identical marginals, allows us to relax the necessary condition for optimality, namely that each $q_j$ equal the true posterior $p(z|x)$. For example, let
$$\omega_j(z) = \frac{q_j(z)}{\sum_{i=1}^{K} q_i(z)}$$
be the "posterior probability" of the random index $j$. Then the J-IWLB is equivalent to using the mixture proposal $\frac{1}{K}\sum_{j=1}^{K} q_j$. In order for the proposals to be optimal, it is sufficient that $p(z|x)$ can be decomposed as such a mixture density (to wit, the $q_j$'s do not all need to be equal to $p(z|x)$).
Point 2⃝ allows us to leverage the correlation among the $z_j$'s to further reduce the variance. When one likelihood ratio makes a positive deviation from its mean, one would hope another has a higher chance of making a negative deviation to cancel the error. This can be thought of as a soft version of antithetic variates.
One difficulty in optimizing the J-IWLB lies in estimating the marginal densities $q_j$. Particular choices of the joint, such as a multivariate normal distribution, allow for exact evaluation, since each marginal is still normally distributed; this was explored by Klys et al. (2018). Another option is to define a set of invertible transformations and map a shared base variable through each of them; the density of each $z_j$ can then be evaluated via the change-of-variable formula. This is known as normalizing flows (Rezende & Mohamed, 2015). More general forms of the joint, however, do not have tractable marginal densities, and thus require further approximation.
3.2 Hierarchical Importance Sampling
In order to induce correlation among the $z_j$'s, we consider a hierarchical proposal:
$$q(z_1, \dots, z_K) = \int q(z_0) \prod_{j=1}^{K} q(z_j | z_0)\, dz_0,$$
with the conditional independence assumption $z_i \perp z_j \mid z_0$ for $i \neq j$ (see Figure 1). First, notice that the marginals are not identical, since each $z_j$ is sampled from a different conditional $q(z_j | z_0)$. More specifically, each marginal is a latent variable model, which can be thought of as an infinite mixture (Ranganath et al., 2016): $q_j(z_j) = \int q(z_j | z_0)\, q(z_0)\, dz_0$. Second, the $z_j$'s are entangled through sharing the same common random number $z_0$. The smaller the conditional entropy of $z_j$ given $z_0$ is, the more mutually dependent the $z_j$'s are. We analyze the effect of this common random number in Section 6.1.
While there are different ways to model the joint proposal, we emphasize the following properties of the hierarchical proposal that make it an appealing choice:
Sampling can be parallelized.
Empirically, we found that optimization behaves better than with a sequential model (we also tested a Markov joint proposal, i.e. each sample depends only on the previous one, and found the learned proposals do not compensate for each other; see the top row of Figure 2).
The shared latent variable can be interpreted as a summary (Edwards & Storkey, 2017) of the empirical distribution.
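The shared-noise mechanism can be illustrated with a small sketch (hypothetical Gaussian conditionals with hand-picked loadings, not the paper's learned parameterization): drawing every sample from conditionals that share one $z_0$ couples the samples, and opposite-sign loadings make them negatively correlated.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Hypothetical conditionals q(z_j | z_0): Gaussians whose means depend on a shared z_0.
z0 = rng.normal(size=n)                    # common random number
z1 = +1.0 * z0 + 0.3 * rng.normal(size=n)  # sample from q(z_1 | z_0)
z2 = -1.0 * z0 + 0.3 * rng.normal(size=n)  # sample from q(z_2 | z_0)
print(np.corrcoef(z1, z2)[0, 1])           # strongly negative

# Drawing a fresh z_0 per sample keeps the same marginals but decouples the samples.
z1_ind = rng.normal(size=n) + 0.3 * rng.normal(size=n)
z2_ind = -rng.normal(size=n) + 0.3 * rng.normal(size=n)
print(np.corrcoef(z1_ind, z2_ind)[0, 1])   # near zero
```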
In the spirit of the variational formalism, we need to define an objective to optimize. However, the marginal densities $q_j$ are no longer tractable due to the integration over the prior measure $q(z_0)$. We therefore introduce an auxiliary distribution $r(z_0 | z_j)$ (Agakov & Barber, 2004) to approximate the posterior of $z_0$ given $z_j$, i.e. $q(z_0 | z_j)$, and maximize the following objective:
$$\mathcal{L}_{\mathrm{H}} = \mathbb{E}_{q(z_0, z_1, \dots, z_K)}\left[\log \sum_{j=1}^{K} \omega_j \frac{p(x, z_j)\, r(z_0 | z_j)}{q(z_j, z_0)}\right],$$
where $q(z_j, z_0) = q(z_j | z_0)\, q(z_0)$ is the joint of $z_j$ and $z_0$, and $\{\omega_j\}$ is a partition of unity for all $z$. First, if we choose $r(z_0 | z_j)$ to be the exact posterior $q(z_0 | z_j)$, $\mathcal{L}_{\mathrm{H}}$ boils down to the J-IWLB. Second, with $K = 1$, it is simply variational inference with a hierarchical model (Ranganath et al., 2016). Lastly, $\mathcal{L}_{\mathrm{H}}$ is a lower bound on $\log p(x)$ (we relegate the proof to Appendix B). We call it the Hierarchical Importance Weighted Lower Bound (H-IWLB).
This training criterion is also chosen because the inference model and generative model can then share a unified objective. It is also possible to train the inference model with different objectives. For instance, direct minimization of the variance of the importance ratio amounts to minimization of the $\chi^2$-divergence (see Dieng et al. (2017); Müller et al. (2018)). We have also tried to minimize the variance of the estimator (the average of the importance ratios), using an estimate of the first moment, but we found that even the reparameterized gradient suffers from a noisy signal and usually converges to a suboptimal solution. We discuss the connection between vanishing variance and convergence of the estimator in Section 4.
A family of weighting heuristics commonly used in practice is the power heuristic:
$$\omega_j(z) = \frac{q_j(z)^\beta}{\sum_{i=1}^{K} q_i(z)^\beta}.$$
With larger $\beta$, the heuristic places more emphasis on the proposal under which $z$ has higher likelihood. When $\beta = 0$ and $\beta = 1$, it boils down to the uniform probability (arithmetic average of the likelihood ratios) and the posterior probability of the index, respectively. We compare different choices of weighting heuristics qualitatively on a toy example in Section 6.2 and quantitatively on a suite of standard density modeling tasks with latent variable models in Section 6.3.
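A sketch of the power heuristic over two proposals (the functional form $\omega_j(z) \propto q_j(z)^\beta$ is our reading of the text; the proposal means and scales are arbitrary illustration values):

```python
import numpy as np

def normal_pdf(z, mean, std):
    return np.exp(-0.5 * ((z - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

def power_heuristic(z, densities, beta):
    """Weights w_j(z) proportional to q_j(z)^beta, normalized to sum to one."""
    q = np.array([d(z) for d in densities])
    w = q ** beta  # beta = 0 gives uniform weights, beta = 1 the balance heuristic
    return w / w.sum()

qs = [lambda z: normal_pdf(z, -1.0, 1.0), lambda z: normal_pdf(z, 2.0, 0.5)]
print(power_heuristic(0.0, qs, 0.0))  # uniform: [0.5, 0.5]
print(power_heuristic(0.0, qs, 1.0))  # posterior probability of the index
print(power_heuristic(0.0, qs, 4.0))  # sharper: concentrates on the likelier proposal
```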
In our experiments, all conditionals $q(z_j | z_0)$ and the prior $q(z_0)$ are conditional Gaussian distributions (also conditioned on $x$ in the amortized inference setting). This allows us to use the reparameterized gradient (Kingma & Welling, 2014; Rezende et al., 2014). However, as noted by Rainforth et al. (2018), the signal-to-noise ratio of the gradient estimator for IWAE's encoder decreases as $K$ increases, so a large $K$ would render learning of the inference model inefficient. Thus, we use the recently proposed doubly reparameterized gradient of Tucker et al. (2018). Empirically, we find the algorithm converges more stably.
Table 1: Variational Autoencoders trained on the statically binarized MNIST dataset (Larochelle & Murray, 2011). Each hyperparameter setting is run 5 times to compute the mean and standard deviation (uncertainties are reported in Appendix E for readability). $\mathcal{L}$ stands for the lower bound on the log-likelihood of the dataset (training, validation and test). NLL is the negative log-likelihood estimated with 2000 samples.
4 Connecting Variance and Lower Bound
Ideally, using a joint proposal allows for modelling the dependency among the $z_j$'s and thus the negative correlation among the likelihood ratios, but we did not choose to minimize the variance directly; instead, we maximize the lower bound. We are interested in how the two are connected, i.e. whether convergence of one implies the other. The following is the main takeaway of this section:
Convergence in one mode implies convergence in the other "if the values of the relevant random variables cannot be extreme" (we will assume they are bounded in some sense). However, this is in general not true in the case of importance sampling, where the likelihood ratio is sensitive to the choice of proposal distribution. This suggests the density needs to be controlled in some way (e.g. smoothing the standard deviation of a Gaussian to avoid over-confidence).
We work in a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. We write $Y \in L^p$ if $Y$ is a random variable and $\mathbb{E}[|Y|^p] < \infty$. A random sequence $Y_n$ converges in $L^p$ to $Y$ if $\mathbb{E}[|Y_n - Y|^p] \to 0$ as $n \to \infty$. Convergence in probability and in $L^p$ are denoted by $\xrightarrow{P}$ and $\xrightarrow{L^p}$, respectively.
Let $(Y_n)$ be a sequence of random variables such that $\mathbb{E}[Y_n] = 1$, and thus $\log \mathbb{E}[Y_n] = 0$. $Y_n$ can be thought of as the normalized likelihood ratio, whose expectation is exactly one (the index $n$ can be thought of as indexing a sequence of parameterizations of the joint proposal). We would like to know whether convergence of the lower bound (e.g. via maximization of the H-IWLB) implies vanishing variance under some assumption, which would justify the use of a joint/hierarchical proposal to reduce variance at a potentially faster rate. However, it is in general untrue that a smaller variance, a smaller gap, and a larger lower bound imply one another (Klys et al. (2018) also attempt to derive a bound, but their result does not hold without further assumptions; Maddison et al. (2017) also require uniform integrability to establish consistency). Instead, we discuss their limiting behavior, and conclude that the condition that the variance converges to zero sits somewhere between $L^1$-convergence and $L^2$-convergence of $Y_n$. To do so, we require a bit more control over the sequences $Y_n$ and $\log Y_n$, such as boundedness. The first implication is that if the variance of $Y_n$ vanishes and the sequences are bounded in some sense, the expected lower bound also converges (see below). The second implication is that if $Y_n$ converges "more uniformly" ($L^2$-convergence), then the variance of $Y_n$ also converges to zero. We will discuss why boundedness control is required at the end of this section from the f-divergence perspective.
Now we turn to the analysis of the variance of $Y_n$. Doing so allows us to interpret $Y_n$ as an estimator of a constant, and we would like to answer the following question: if the variance of $Y_n$ converges to zero, does $\mathbb{E}[\log Y_n]$ converge to $0$? The answer is positive given some boundedness conditions, and the result suggests that if the variance of the estimator is infinitesimally small, so is the bias. We now state the result: assume $\mathrm{Var}(Y_n) \to 0$ with $\mathbb{E}[Y_n] = 1$ for all $n$. If $(\log Y_n)$ is uniformly integrable and the sequence is suitably bounded in $L^p$, then $\mathbb{E}[\log Y_n] \to 0$.
The proposition tells us that if we view $Y_n$ as an estimator of some constant (e.g. $1$), then the vanishing variance of $Y_n$ implies consistency.
With the same integrability condition on $\log Y_n$, we can conclude that $\mathbb{E}[\log Y_n]$ converges to $0$.
With the same condition as Proposition 4, if we further assume is uniformly integrable,
In particular, as .
Finally, the following result shows that $L^2$-convergence is sufficient for the variance of $Y_n$ to converge to zero. With the same conditions as Corollary 1:
Now, we turn back to the case of variational inference, where $Y = \frac{p(z|x)}{q(z)}$ (assume $p$ is normalized for ease of exposition), and discuss the difficulty in analyzing the relationship between the variance of $Y$ and the lower bound $\mathbb{E}[\log Y]$. The reason is that the variance is more sensitive to large likelihood ratios, whereas low-density regions under $q$ do not take a heavy toll on the lower bound. More concretely, since $\mathbb{E}_q[Y] = 1$, we have $\mathrm{Var}_q(Y) = \mathbb{E}_q[Y^2] - 1 = \chi^2(p\,\|\,q)$, where $\chi^2$ is the $\chi^2$-divergence. Our insight is that the $\chi^2$-divergence is an upper bound on the forward KL divergence (see appendix for a proof). This means that when the variance of $Y$ decreases, the $\chi^2$-divergence and the forward KL divergence also decrease. However, a decrease in the forward KL does not imply a decrease in the reverse KL, $D_{\mathrm{KL}}(q\,\|\,p)$, whose minimization is what maximization of the lower bound
is equivalent to. This can be explained by the characteristic function used in the f-divergence family: $D_f(p\,\|\,q) = \mathbb{E}_q[f(p/q)]$, where $f$ is a convex function such that $f(1) = 0$. For the forward KL and reverse KL, the corresponding convex functions are $f(t) = t\log t$ and $f(t) = -\log t$, respectively. When $t \to 0$, $t\log t \to 0$ while $-\log t \to \infty$. When $t \to \infty$, $t\log t \to \infty$ while $-\log t \to -\infty$. This means the forward KL will not reflect it when $p \ll q$, but will be high when $p \gg q$. The reverse KL, on the other hand, will explode when $p \ll q$ and can tolerate $p \gg q$. These are known as the inclusive and exclusive properties of the two KLs (Minka et al., 2005). The $\chi^2$-divergence is defined by the convex function $f(t) = (t-1)^2$, which asymptotically dominates $t\log t$ as $t \to \infty$. This means it incurs an even higher cost than the forward KL when $p \gg q$. See Figure 6 in the appendix for an illustration. The boundedness assumptions made in this section ensure it is less likely that the ratio $p/q$ will be arbitrarily large or close to zero.
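The inclusive/exclusive behavior can be seen directly by evaluating the three convex generators at extreme density ratios (a small numeric illustration):

```python
import numpy as np

# Convex generators of the f-divergences discussed above, with t = p/q:
def f_forward_kl(t):   # D_KL(p || q)
    return t * np.log(t)

def f_reverse_kl(t):   # D_KL(q || p)
    return -np.log(t)

def f_chi2(t):         # chi^2(p || q)
    return (t - 1.0) ** 2

small, large = 1e-6, 1e6
# t -> 0 (q has mass where p does not): forward KL barely reacts, reverse KL explodes.
print(f_forward_kl(small), f_reverse_kl(small))
# t -> inf (p has mass where q does not): forward KL explodes; chi^2 grows even faster.
print(f_forward_kl(large), f_chi2(large))
```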
5 Related Work
Much work has been done on improving the variational approximation, e.g. by using a more complex form of $q$ (Rezende & Mohamed, 2015; Kingma et al., 2016; Huang et al., 2018; Ranganath et al., 2016; Maaløe et al., 2016; Miller et al., 2016) in place of the conventional normal distribution. Alongside the work of Burda et al. (2015), Mnih & Rezende (2016) and Bornschein & Bengio (2014); Le et al. (2018) also apply importance sampling, to learning discrete latent variables and to modifying the wake-sleep algorithm, respectively. Cremer et al. (2017); Bachman & Precup (2015) interpret IWAE as using a corrected proposal, whereas Nowozin (2018) interprets the IWLB as a biased estimator of the marginal log-likelihood and proposes to reduce the bias using the Jackknife method. However, Rainforth et al. (2018) show that the signal-to-noise ratio of the gradient of the inference model vanishes as $K$ increases, as the magnitude of the expected gradient decays faster than its variance. Tucker et al. (2018) propose a doubly reparameterized gradient to mitigate this problem.
Closest to our work is that of Klys et al. (2018), who propose to use a multivariate normal distribution as the joint proposal. Domke & Sheldon (2018) integrate defensive sampling into variational inference, and Wu et al. (2018) propose a differentiable antithetic sampler. Yin & Zhou (2018); Molchanov et al. (2018); Sobolev & Vetrov (2018) propose to use multiple samples to better estimate the marginal density of a hierarchical proposal.
6 Experiments
We first demonstrate the effect of sharing a common random number on the dependency among the multiple proposals, and then apply the amortized version of hierarchical importance sampling (that is, H-IWAE) to learning deep latent Gaussian models. The details of how the inference model is parameterized can be found in Appendix D.
6.1 Effect of Common Random Number
In this section, we analyze the effect of training with a common random number $z_0$. As mentioned in Section 3, the correlation among the $z_j$'s is induced by tying $z_0$ as a common factor. If for each index $j$ we draw $z_0$ independently from its prior, the $z_j$'s are rendered independent. Doing so allows us to test whether inference models trained with a common random number tend to output correlated $z_j$'s. Let the target in Figure 2 be the (unnormalized) posterior distribution. We consider maximizing the H-IWLB in two settings: with and without a common $z_0$. We repeat the experiment 25 times with different random seeds, and record the corresponding H-IWLB, the variance of the estimator, and the variance and standard deviation of the likelihood ratios with common $z_0$, as plotted in Figure 3.
Qualitatively, we visualize the correlation matrix of the samples in Figure 4. This shows that the $z_j$'s are indeed negatively correlated with each other when trained with a common $z_0$. When trained with an independent $z_0$ for each $z_j$, the hierarchical sampler tends to find similar solutions and is prone to incurring positively correlated biases (deviations from the mean).
6.2 Effect of Weighting Heuristics
We also analyze the effect of using different weighting heuristics. We repeat the experiment of the last subsection and apply the power heuristic with two settings of the exponent, both with a common $z_0$. We find that the weighting function has a major effect on the shape of the learned proposals (see Figure 2): with one of the exponents, the proposals behave more like a mixture, and the negative correlation is stronger as the proposals "specialize" in different regions. We also explore training the weighting function in Appendix E.
6.3 Variational Autoencoder
Our final experiment was to apply hierarchical proposals to learning variational autoencoders on a set of standard datasets, including binarized MNIST (Larochelle & Murray, 2011), binarized OMNIGLOT (Lake et al., 2015) and Caltech101 Silhouettes (Marlin et al., 2010), using the same architecture as described in Huang et al. (2018).
Results on the binarized MNIST and OMNIGLOT datasets are given in Tables 1 and 2, respectively. We compare with a Gaussian IWAE as a baseline. The hyperparameters are fixed as follows: minibatch size 64, a linear annealing schedule over 50,000 iterations for the log-density terms other than the likelihood, and Polyak averaging with an exponential averaging coefficient of 0.998. For the MNIST dataset, arithmetic averaging does not provide a monotonic improvement in terms of negative log-likelihood (NLL) as $K$ increases, whereas with the other weighting heuristics the performance is better with larger $K$. For the OMNIGLOT dataset, the performance is consistently better with larger $K$.
Table 3 summarizes the experiment on the Caltech101 dataset. We compare with the Gaussian IWAE and an IWAE with one single hierarchical proposal, dubbed IWAE (h), and perform a grid search over a set of hyperparameters (Appendix D), each repeated three times to compute the mean and standard deviation. We report the performance of the models by selecting the hyperparameters that correspond to the lowest average NLL on the validation set. We see that with larger $K$, H-IWAE has improved performance, but is outperformed by the IWAE with a single hierarchical proposal. We speculate this is due to the increased difficulty in optimizing the inference model with the extra parameters for each proposal, resulting in a larger amortization bias in inference (Cremer et al., 2018). To validate the hypothesis, we repeat the experiment of H-IWAE with the balance heuristic, but update the encoder twice on the same minibatch before updating the decoder once. This gives us a significant improvement over the learned model (see the last two rows).
7 Conclusion
In order to approximate the posterior distribution in variational inference with a more representative empirical distribution, we propose to use a hierarchical meta-sampler, and we derive a variational lower bound as a training objective. Theoretically, we provide sufficient boundedness conditions connecting the convergence of the variance of the Monte Carlo estimate of the lower bound with the convergence of the bound itself. Empirically, we show that maximizing the lower bound implicitly reduces the variance of the estimate. Our analysis shows that learning the dependency among the joint samples can induce negative correlation and improve the performance of inference.
- Agakov & Barber (2004) Agakov, F. V. and Barber, D. An auxiliary variational method. In Neural Information Processing, 2004.
- Bachman & Precup (2015) Bachman, P. and Precup, D. Training deep generative models: Variations on a theme. In NIPS Approximate Inference Workshop, 2015.
- Bornschein & Bengio (2014) Bornschein, J. and Bengio, Y. Reweighted wake-sleep. arXiv preprint arXiv:1406.2751, 2014.
- Burda et al. (2015) Burda, Y., Grosse, R., and Salakhutdinov, R. Importance weighted autoencoders. In International Conference on Learning Representations, 2015.
- Clevert et al. (2016) Clevert, D.-A., Unterthiner, T., and Hochreiter, S. Fast and accurate deep network learning by exponential linear units (ELUs). International Conference on Learning Representations, 2016.
- Cremer et al. (2017) Cremer, C., Morris, Q., and Duvenaud, D. Reinterpreting importance-weighted autoencoders. arXiv preprint arXiv:1704.02916, 2017.
- Cremer et al. (2018) Cremer, C., Li, X., and Duvenaud, D. Inference suboptimality in variational autoencoders. In International Conference on Machine Learning, 2018.
- Dieng et al. (2017) Dieng, A. B., Tran, D., Ranganath, R., Paisley, J., and Blei, D. Variational inference via upper bound minimization. In Advances in Neural Information Processing Systems, 2017.
- Domke & Sheldon (2018) Domke, J. and Sheldon, D. R. Importance weighting and variational inference. In Advances in Neural Information Processing Systems, pp. 4475–4484, 2018.
- Edwards & Storkey (2017) Edwards, H. and Storkey, A. Towards a neural statistician. In International Conference on Learning Representations, 2017.
- Huang & Courville (2017) Huang, C.-W. and Courville, A. Sequentialized sampling importance resampling and scalable IWAE. NIPS Bayesian Deep Learning Workshop, 2017.
- Huang et al. (2018) Huang, C.-W., Krueger, D., Lacoste, A., and Courville, A. Neural autoregressive flows. In International Conference on Machine Learning, 2018.
- Kingma & Welling (2014) Kingma, D. P. and Welling, M. Auto-encoding variational bayes. In International Conference on Learning Representations, 2014.
- Kingma et al. (2016) Kingma, D. P., Salimans, T., Jozefowicz, R., Chen, X., Sutskever, I., and Welling, M. Improved variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems, 2016.
- Klys et al. (2018) Klys, J., Bettencourt, J., and Duvenaud, D. Joint importance sampling for variational inference. 2018.
- Lake et al. (2015) Lake, B. M., Salakhutdinov, R., and Tenenbaum, J. B. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
- Larochelle & Murray (2011) Larochelle, H. and Murray, I. The neural autoregressive distribution estimator. In International Conference on Artificial Intelligence and Statistics, 2011.
- Le et al. (2018) Le, T. A., Kosiorek, A. R., Siddharth, N., Teh, Y. W., and Wood, F. Revisiting reweighted wake-sleep. arXiv preprint arXiv:1805.10469, 2018.
- Maaløe et al. (2016) Maaløe, L., Sønderby, C. K., Sønderby, S. K., and Winther, O. Auxiliary deep generative models. In International Conference on Machine Learning, 2016.
- Maddison et al. (2017) Maddison, C. J., Lawson, J., Tucker, G., Heess, N., Norouzi, M., Mnih, A., Doucet, A., and Teh, Y. Filtering variational objectives. In Advances in Neural Information Processing Systems, pp. 6573–6583, 2017.
- Marlin et al. (2010) Marlin, B., Swersky, K., Chen, B., and Freitas, N. Inductive principles for restricted Boltzmann machine learning. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 509–516, 2010.
- Miller et al. (2016) Miller, A. C., Foti, N., and Adams, R. P. Variational boosting: Iteratively refining posterior approximations. arXiv preprint arXiv:1611.06585, 2016.
- Minka et al. (2005) Minka, T. et al. Divergence measures and message passing. Technical report, Technical report, Microsoft Research, 2005.
- Mnih & Rezende (2016) Mnih, A. and Rezende, D. J. Variational inference for monte carlo objectives. arXiv preprint arXiv:1602.06725, 2016.
- Molchanov et al. (2018) Molchanov, D., Kharitonov, V., Sobolev, A., and Vetrov, D. Doubly semi-implicit variational inference. arXiv preprint arXiv:1810.02789, 2018.
- Müller et al. (2018) Müller, T., McWilliams, B., Rousselle, F., Gross, M., and Novák, J. Neural importance sampling. arXiv preprint arXiv:1808.03856, 2018.
- Nowozin (2018) Nowozin, S. Debiasing evidence approximations: On importance-weighted autoencoders and jackknife variational inference. In International Conference on Learning Representations, 2018.
- Owen (2013) Owen, A. B. Monte Carlo theory, methods and examples. 2013.
- Rainforth et al. (2018) Rainforth, T., Kosiorek, A. R., Le, T. A., Maddison, C. J., Igl, M., Wood, F., and Teh, Y. W. Tighter variational bounds are not necessarily better. In International Conference on Machine Learning, 2018.
- Ranganath et al. (2016) Ranganath, R., Tran, D., and Blei, D. Hierarchical variational models. In International Conference on Machine Learning, 2016.
- Rezende & Mohamed (2015) Rezende, D. J. and Mohamed, S. Variational inference with normalizing flows. In International Conference on Machine Learning, 2015.
- Rezende et al. (2014) Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, 2014.
- Sobolev & Vetrov (2018) Sobolev, A. and Vetrov, D. Importance weighted hierarchical variational inference. 2018.
- Tucker et al. (2018) Tucker, G., Lawson, D., Gu, S., and Maddison, C. J. Doubly reparameterized gradient estimators for monte carlo objectives. arXiv preprint arXiv:1810.04152, 2018.
- Wu et al. (2018) Wu, M., Goodman, N., and Ermon, S. Differentiable antithetic sampling for variance reduction in stochastic variational inference. arXiv preprint arXiv:1810.02555, 2018.
- Yin & Zhou (2018) Yin, M. and Zhou, M. Semi-implicit variational inference. arXiv preprint arXiv:1805.11183, 2018.
Appendix A Detailed J-IWLB Derivation
We provide an alternative to the derivation in Section 3.1. Note that if we can show that $\mathbb{E}_{q(z_1, \dots, z_K)}\left[\sum_{j=1}^{K} \omega_j \frac{p(x, z_j)}{q_j(z_j)}\right]$ is equal to $p(x)$, then an application of Jensen's inequality to the log of this quantity gives the desired bound.
First, since linearity of expectation holds for any collection of variables, whether or not they are independent, the summation can be taken outside; and since the resulting expectations involve only one $z_j$ at a time, they can be taken with respect to the marginals $q_j$. That is,
Each of these expectations has a simple form,
where it is safe to drop the index previously appended to the variables, because each integral ranges over all values each variable could possibly take (not any individual sample of their values; see Figure 5). Since the weights sum to one pointwise, we can then swap summation and integration (assuming dominated-convergence-style regularity) and find
Appendix B Detailed H-IWLB Derivation
This derivation follows the derivation from the last section, and differs in the last step where we also marginalize out the auxiliary variable .
The derivation proceeds in three steps: (i) linearity of expectation, (ii) marginalization of $z_0$, and (iii) marginalization of the remaining variables.
Appendix C Proofs of Section 4
We work in a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. We write $Y \in L^p$ if $Y$ is a random variable and $\mathbb{E}[|Y|^p] < \infty$. Convergence in probability and in $L^p$ are denoted by $\xrightarrow{P}$ and $\xrightarrow{L^p}$, respectively.
We start with a classic result. Assume $\mathrm{Var}(Y_n) \to 0$ as $n \to \infty$; then by Chebyshev's inequality, $Y_n \to 1$ in probability, and $\log Y_n \to 0$ in probability by the continuity of $\log$. To get $L^1$-convergence, the missing piece that is both necessary and sufficient is the uniform integrability of $\log Y_n$, which is a form of boundedness condition (see also Proposition 1 of Maddison et al. (2017)). It is clear by now that to say something about the expected value we need to bound the random variable in some way. With the $L^1$-convergence, we conclude that the expected lower bound (in our case, some form of ELBO) converges to the marginal log-likelihood:
A family of random variables $(Y_\alpha)$ is bounded in probability if for any $\epsilon > 0$, there exists $M > 0$ such that $\sup_\alpha \mathbb{P}(|Y_\alpha| > M) < \epsilon$.
Note that if $(Y_\alpha)$ is bounded in $L^p$ (i.e. $\sup_\alpha \mathbb{E}[|Y_\alpha|^p] < \infty$), it is also bounded in probability, since by Markov's inequality,
$$\mathbb{P}(|Y_\alpha| > M) \leq \frac{\mathbb{E}[|Y_\alpha|^p]}{M^p}.$$
Taking $M$ sufficiently large gives us the uniform bound.
Below, we extend the well-known Continuous Mapping Theorem to the difference of random sequences.
(Extended Continuous Mapping Theorem) Let $(X_n)$ and $(Y_n)$ be random sequences bounded in probability with $X_n - Y_n \xrightarrow{P} 0$. If $g$ is a continuous function, then $g(X_n) - g(Y_n) \xrightarrow{P} 0$.
Fix . Choose . There exists some positive value such that and . Within the interval , is uniformly continuous, so there exists such that
Now by subadditivity of measure,
The second term goes to as . Taking to yields the result. ∎
Note that the continuity assumption on $g$ can be weakened to assuming that the set of discontinuity points of $g$ has measure zero.
Let and . By Chebyshev’s inequality, for any ,
which means . Due to -boundedness, is also bounded in probability, and . By the Extended Continuous Mapping Theorem, . Also, is bounded, implying is uniformly integrable, so , and