# Hierarchical Importance Weighted Autoencoders

Importance weighted variational inference (Burda et al., 2015) uses multiple i.i.d. samples to obtain a tighter variational lower bound. We believe a joint proposal has the potential to reduce the number of redundant samples, and introduce a hierarchical structure to induce correlation. The hope is that the proposals coordinate to make up for one another's errors, reducing the variance of the importance estimator. Theoretically, we analyze the conditions under which convergence of the estimator variance can be connected to convergence of the lower bound. Empirically, we confirm that maximization of the lower bound does implicitly minimize variance. Further analysis shows that this is a result of the negative correlation induced by the proposed hierarchical meta sampling scheme, and that the performance of inference also improves when the number of samples increases.


## 1 Introduction

Recent advances in variational inference (Kingma & Welling, 2014; Rezende et al., 2014) make it efficient to model complex distributions using a latent variable model with an intractable marginal likelihood $p(x) = \int p(x|z)\,p(z)\,dz$, where $z$ is an unobserved vector distributed according to a prior distribution, e.g. $\mathcal{N}(0, I)$. The use of an inference network, or encoder, allows for amortization of inference by directly conditioning on the data $x$ to approximate the true posterior $p(z|x)$. This is known as the Variational Autoencoder (VAE).

Normally, learning is achieved by maximizing a lower bound on the marginal likelihood, since the latter is not tractable in general. Naturally, one would be interested in reducing the gap between the bound and the desired objective. Burda et al. (2015) devise a new family of lower bounds with progressively smaller gaps using multiple i.i.d. samples from the variational distribution $q$, which they call the Importance Weighted Autoencoder (IWAE). Cremer et al. (2017); Bachman & Precup (2015) notice that IWAE can be interpreted as using a corrected variational distribution in the normal variational lower bound. The proposal is corrected towards the true posterior by the importance weighting, and approaches the latter with an increasing number of samples.

Intuitively, when only one sample is drawn to estimate the variational lower bound, the loss function highly penalizes the drawn sample, and thus the encoder. The decoder will be adjusted accordingly to maximize the likelihood in a biased manner, as it treats the sample as the real, observed data. In the IWAE setup, the inference model is allowed to make mistakes, as a sample corresponding to a high loss is penalized less owing to the importance weight.

Drawing multiple samples also allows us to represent the distribution at a higher resolution. This motivates us to construct a joint sampler such that the empirical distribution drawn from the joint sampler can better represent the posterior distribution. More specifically, we consider a hierarchical sampler, whose latent variable acts at a meta-level to decide the salient points of the true posterior, which we call the Hierarchical Importance Weighted Autoencoder (H-IWAE). Doing so allows for (1) summarizing the distribution using the latent variable (Edwards & Storkey, 2017), and (2) counteracting bias induced by each proposal. To analyze the latter effect, we look at the variance of the Monte Carlo estimate of the lower bound, and show that maximizing the lower bound implicitly reduces the variance. Our main contributions are as follows:

• We propose a hierarchical model to induce dependency among the samples for approximate inference, and derive a hierarchical importance weighted lower bound.

• We analyze the convergence of the variance of the Monte Carlo estimate of the lower bound, and draw a connection to the convergence of the bound itself.

• Empirically, we explore different weighting heuristics, validate the hypothesis that the joint samples tend to be negatively correlated, and perform experiments on a suite of standard density estimation tasks.

## 2 Background

In a typical setup of a latent variable model, we assume a joint density function that factorizes as $p(x, z) = p(x|z)\,p(z)$, where $x$ and $z$ are the observed and unobserved random variables, respectively. Learning is achieved by maximizing the marginal log-likelihood of the data, $\log p(x) = \log \int p(x, z)\,dz$, which is in general intractable due to the integration, since a neural network is usually used to parameterize the likelihood function $p(x|z)$.

### 2.1 Variational Inference

One way to estimate the marginal likelihood is via the variational lower bound. Concretely, we introduce a variational distribution $q(z)$ (for notational convenience we omit the conditioning on $x$ for amortized inference), and maximize the bound on the RHS:

$$\log p(x) = \log \mathbb{E}_{q(z)}\left[\frac{p(x,z)}{q(z)}\right] \ge \mathbb{E}_{q(z)}\left[\log \frac{p(x,z)}{q(z)}\right] := \mathcal{L}(q)$$

which is known as the evidence lower bound (ELBO). The tightness of the ELBO can be described by the reverse Kullback-Leibler (KL) divergence $D_{\mathrm{KL}}(q(z)\,\|\,p(z|x)) = \log p(x) - \mathcal{L}(q)$, which is equal to zero if and only if $q(z) = p(z|x)$.
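For concreteness, the ELBO can be estimated by Monte Carlo with samples from $q$. The following sketch is our own toy illustration (not an experiment from the paper): a conjugate Gaussian model $z \sim \mathcal{N}(0,1)$, $x|z \sim \mathcal{N}(z,1)$, for which $\log p(x)$ is available in closed form; all function names are hypothetical.

```python
import numpy as np

def log_normal(v, mean, var):
    # log density of N(mean, var) evaluated at v
    return -0.5 * (np.log(2 * np.pi * var) + (v - mean) ** 2 / var)

def elbo_hat(x, q_mean, q_var, n_samples=100_000, seed=0):
    # Monte Carlo ELBO: E_q[log p(x, z) - log q(z)]
    rng = np.random.default_rng(seed)
    z = q_mean + np.sqrt(q_var) * rng.standard_normal(n_samples)
    log_joint = log_normal(z, 0.0, 1.0) + log_normal(x, z, 1.0)  # log p(z) + log p(x|z)
    return np.mean(log_joint - log_normal(z, q_mean, q_var))

x = 1.5
log_px = log_normal(x, 0.0, 2.0)   # analytic marginal: x ~ N(0, 2)
loose = elbo_hat(x, 0.0, 1.0)      # q = prior: a strict lower bound on log p(x)
tight = elbo_hat(x, x / 2, 0.5)    # q = exact posterior N(x/2, 1/2): the gap is zero
```

With the exact posterior as proposal, the log ratio is constant across samples, so the estimate equals $\log p(x)$ with no Monte Carlo error, matching the equality condition above.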

### 2.2 Importance Weighted Auto-Encoder

Burda et al. (2015) noticed that the likelihood ratio in the ELBO, $w = p(x,z)/q(z)$, resembles the importance weight in importance sampling, and introduced a family of lower bounds called the Importance Weighted Lower Bound (IWLB):

$$\log p(x) = \log \mathbb{E}_{z_j \sim q}\left[\frac{1}{K}\sum_{j=1}^K \frac{p(x, z_j)}{q(z_j)}\right] \ge \mathbb{E}_{z_j \sim q}\left[\log \frac{1}{K}\sum_{j=1}^K \frac{p(x, z_j)}{q(z_j)}\right] := \mathcal{L}_K(q)$$

with $\mathcal{L}_K(q) \to \log p(x)$ when $K \to \infty$. In practice, the expectation over the product measure is estimated by sampling $z_j \sim q$ for $j = 1, \dots, K$, and setting

$$\tilde{\mathcal{L}}_K(q) = \log \frac{1}{K}\sum_{j=1}^K \frac{p(x, z_j)}{q(z_j)}$$

One property of this family of bounds is monotonicity, i.e. $\mathcal{L}_{K+1}(q) \ge \mathcal{L}_K(q)$ for all $K$. Strong consistency of $\tilde{\mathcal{L}}_K$ can also be proved since the $z_j$'s are independently and identically distributed, in which case (by the strong law of large numbers) $\tilde{\mathcal{L}}_K \to \log p(x)$ almost surely.
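The estimator $\tilde{\mathcal{L}}_K$ is a log-mean-exp of log weights. Below is a minimal sketch of ours (not the paper's code) on a toy Gaussian model with the prior as the proposal, showing that the bound tightens on average as $K$ grows; all names are our own.

```python
import numpy as np

def iwlb_hat(x, K, n_rep=20_000, seed=1):
    # Average of n_rep independent IWLB estimates log((1/K) sum_j w_j).
    # Toy model: z ~ N(0,1), x|z ~ N(z,1), proposal q = prior,
    # so the log weight reduces to log p(x|z).
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_rep, K))
    log_w = -0.5 * (np.log(2 * np.pi) + (x - z) ** 2)
    m = log_w.max(axis=1, keepdims=True)                   # stabilized log-mean-exp
    log_avg = m[:, 0] + np.log(np.exp(log_w - m).mean(axis=1))
    return log_avg.mean()

x = 1.5
log_px = -0.5 * (np.log(2 * np.pi * 2.0) + x ** 2 / 2.0)   # analytic: x ~ N(0, 2)
L1, L16 = iwlb_hat(x, 1), iwlb_hat(x, 16)
# on average L1 <= L16 <= log p(x); the K = 16 gap is already small
```

The max-subtraction trick keeps the exponentials from overflowing, which matters when the log weights are far from zero.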

This asymptotic property motivates the use of a large number of samples in practice, but there are two practical concerns. First, even though evaluation of the $p(x, z_j)$'s under the generative model can be done in parallel (at a sublinear rate in general), memory scales in $\mathcal{O}(K)$. An online algorithm can be adopted to reduce memory to $\mathcal{O}(1)$, but at the cost of sequential evaluation (Huang & Courville, 2017). Second, for any finite $K$ and any choice of proposal $q$ that is not proportional to $p(x, \cdot)$ (a reasonable assumption given the finite approximating capacity of the chosen family of $q$ and the finiteness of the recognition model for amortization), $\mathcal{L}_K(q)$ is strictly a lower bound on the marginal log-likelihood. This motivates the research in improving the surrogate loss functions (such as $\mathcal{L}_K$) for the intractable $\log p(x)$.

Nowozin (2018), for instance, interpreted $\tilde{\mathcal{L}}_K$ as a biased estimator of $\log p(x)$, and introduced a family of estimators with reduced bias: with a jackknife correction of order $m$, the bias can be shown to be reduced to $\mathcal{O}(K^{-(m+1)})$ (compared to $\mathcal{O}(K^{-1})$ for $\tilde{\mathcal{L}}_K$). However, the variance of the new estimator can be high, and it is no longer a lower bound.

Alternatively, one can look at the dispersion of the likelihood ratio $w$, which itself is a random variable. (By dispersion, we mean how "spread out" or "stretched" the distribution of the random variable is; common statistical dispersion indices include variance, mean absolute difference, entropy, etc.) The gap between $\mathcal{L}(q)$ and $\log p(x)$ can be explained by Jensen's inequality and the fact that $\log$ is a strictly concave function: $\log \mathbb{E}[w] \ge \mathbb{E}[\log w]$, where $w$ is a positive random variable. The equality holds if and only if $w$ is almost surely a constant. This can be approximately true when $w$ is taken to be the likelihood ratio and $q \approx p(\cdot|x)$. Instead, if we take the average $\bar{w} = \frac{1}{K}\sum_{j=1}^K w_j$, we see a direct reduction in variance if the $w_j$'s are all uncorrelated and identically distributed: $\mathrm{Var}(\bar{w}) = \mathrm{Var}(w_1)/K$. Intuitively, the shape of the distribution of $\bar{w}$ becomes sharper with larger $K$, resulting in a smaller gap when Jensen's inequality is applied. This idea was also noticed by Domke & Sheldon (2018) and explored by Klys et al. (2018).

However, a clear connection between how well $\log p(x)$ is approximated and the minimization of the variance of $w$ and/or of $\log w$ is lacking. We defer the discussion to Section 4, wherein we also provide a theoretical analysis.

### 2.3 Variance Reduction via Negative Correlation

Let $\bar{w} = \sum_{i=1}^K \pi_i w_i$, where the $w_i$'s are random variables and the $\pi_i$'s sum to one. The variance of $\bar{w}$ can be written as:

$$\mathrm{Var}(\bar{w}) = \sum_{i=1}^K \pi_i^2\,\mathrm{Var}(w_i) + 2\sum_{i<j} \pi_i \pi_j\,\mathrm{Cov}(w_i, w_j)$$

This suggests that the variance of $\bar{w}$ is smaller if the $w_i$'s are negatively correlated with each other. The intuition is as follows: if $w_i$ deviates from its mean from below, the error it makes can be canceled out by $w_j$ if the latter is above its mean.
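The identity above is just the quadratic form $\pi^\top \Sigma\, \pi$, which a small sketch (our own illustration; the covariance values are arbitrary) can check both analytically and by sampling:

```python
import numpy as np

def weighted_avg_var(cov, pi):
    # Var(sum_i pi_i w_i) = sum_i pi_i^2 Var(w_i) + 2 sum_{i<j} pi_i pi_j Cov(w_i, w_j),
    # i.e. the quadratic form pi^T cov pi
    return pi @ cov @ pi

pi = np.array([0.5, 0.5])
cov_neg = np.array([[1.0, -0.8], [-0.8, 1.0]])  # negatively correlated pair
cov_ind = np.eye(2)                             # uncorrelated pair

var_neg = weighted_avg_var(cov_neg, pi)         # 0.25 + 0.25 - 0.4 = 0.1
var_ind = weighted_avg_var(cov_ind, pi)         # 0.5

# empirical check: sample the correlated pair and measure the variance directly
rng = np.random.default_rng(0)
w = rng.multivariate_normal(np.zeros(2), cov_neg, size=200_000)
emp_var = np.var(w @ pi)
```

With correlation $-0.8$, the weighted average has a fifth of the variance of the uncorrelated case, which is the cancellation effect described above.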

For example, the antithetic variate (Owen, 2013) is a classic technique that relies on negative correlation. Assume we want to estimate $\mu = \mathbb{E}[f(x)]$ for $x \sim p$, where $p$ is a symmetric density centered at $c$. Given a draw $x_1 \sim p$, we can augment our estimate with an $x_2$ that is opposite to $x_1$ by reflecting through the center, $x_2 = 2c - x_1$, and set $\hat{\mu} = \frac{1}{2}(f(x_1) + f(x_2))$. $\hat{\mu}$ is still an unbiased estimator of $\mu$, but has variance $\frac{\sigma^2}{2}(1 + \rho)$, where $\sigma^2 = \mathrm{Var}(f(x_1))$ and $\rho$ is the correlation between $f(x_1)$ and $f(x_2)$. If $f$ is monotonic, then the correlation is negative, so we achieve variance reduction at a rate faster than averaging two independent samples. Thus, Wu et al. (2018) propose to train an antithetic sampler. In general, however, there is no guarantee that the random function is monotonic, so we propose to train a joint sampler via a latent variable.
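The antithetic construction can be sketched numerically (our own toy example, with a deliberately monotone integrand; names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.exp            # monotone integrand; E[exp(X)] = exp(1/2) for X ~ N(0, 1)
n = 200_000

x = rng.standard_normal(n)
pairs_iid = 0.5 * (f(x[: n // 2]) + f(x[n // 2:]))  # average two independent draws
pairs_anti = 0.5 * (f(x) + f(-x))                   # antithetic pair: reflect through c = 0

mu = np.exp(0.5)
# both estimators are unbiased, but the antithetic terms are negatively
# correlated because f is monotone, so the per-pair variance is lower
```

Both sample means land near $e^{1/2}$, while the antithetic pairs have visibly smaller empirical variance than the independent pairs.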

## 3 Hierarchical Importance Weighted Auto-Encoder

In order to achieve the anticorrelation just described, we consider a joint sampling scheme with a hierarchical structure that admits fast sampling and allows us to approximate the marginal densities required to form a valid lower bound.

### 3.1 Joint Importance Sampling

We first consider a joint density of the $z_j$'s, denoted by $Q(z_1, \dots, z_K)$. Let $q_j$ be the marginal of $z_j$. Let $\pi_j$ be a weighting factor such that $\sum_{j=1}^K \pi_j(z) = 1$ for all $z$. Then Jensen's inequality gives

$$\log p(x) = \log \int_{z_1, \dots, z_K} \sum_{j=1}^K \pi_j(z_j)\,\frac{p(x, z_j)}{q_j(z_j)}\,dQ(z_1, \dots, z_K) \ge \mathbb{E}_Q\left[\log \sum_{j=1}^K \pi_j(z_j)\,\frac{p(x, z_j)}{q_j(z_j)}\right] := \mathcal{L}_K(Q)$$

which we call the Joint Importance Weighted Lower Bound (J-IWLB) (see Appendix A for a detailed derivation). This allows us to generalize in two ways:

1. the flexibility to have different marginals $q_j$;

2. the dependency among the $z_j$'s.

Point 1, with different marginals $q_j$, allows us to relax the necessary condition for optimality in the $K = 1$ case, i.e. $q = p(\cdot|x)$. For example, let

$$\pi_j(z) = \frac{q_j(z)}{\sum_{i=1}^K q_i(z)}$$

be the "posterior probability" of the random index $j$ given $z$. Then $\mathcal{L}_K(Q)$ with independent $z_j$'s is equivalent to using a mixture proposal $\frac{1}{K}\sum_{j=1}^K q_j$. In order for the proposals to be optimal, it is sufficient if $p(z|x)$ can be decomposed as such a mixture density (to wit, the $q_j$'s do not all need to be equal to $p(z|x)$).

Point 2 allows us to leverage the correlation among the $z_j$'s to further reduce the variance. With $w_j$ making a positive deviation from the mean $p(x)$, one would hope $w_i$ has a higher chance of making a negative deviation to cancel the error. This can be thought of as a soft version of antithetic variates.

One difficulty in optimizing $\mathcal{L}_K(Q)$ lies in estimating the marginal densities $q_j$. Particular choices of $Q$, such as the multivariate normal distribution, allow for exact evaluation, since each marginal is still normally distributed; this was explored by Klys et al. (2018). Another option is to define a set of invertible transformations $f_j$, for $j = 1, \dots, K$, and then apply the mapping $z_j = f_j(\epsilon_j)$, where $\epsilon_j$ is drawn from a base distribution. The density of $z_j$ under $f_j$ can then be evaluated via the change-of-variable formula; this is known as normalizing flows (Rezende & Mohamed, 2015). Other, more general forms of the joint $Q$, however, do not have tractable marginal densities, and thus require further approximation.

### 3.2 Hierarchical Importance Sampling

In order to induce correlation among the $z_j$'s, we consider a hierarchical proposal:

$$Q(z_1, \dots, z_K) = \int_{z_0} Q(z_1, \dots, z_K \mid z_0)\,dq_0(z_0) = \int_{z_0} \prod_{j=1}^K q_j(z_j|z_0)\,dq_0(z_0)$$

with the conditional independence assumption $z_i \perp z_j \mid z_0$ for $i \ne j$ (see Figure 1). First, notice that the marginals are not identical, since each $z_j$ is sampled from a different conditional $q_j(z_j|z_0)$. More specifically, each marginal is a latent variable model, which can be thought of as an infinite mixture (Ranganath et al., 2016): $q_j(z_j) = \int_{z_0} q_j(z_j|z_0)\,dq_0(z_0)$. Second, the $z_j$'s are entangled through sharing the same common random number $z_0$. The smaller the conditional entropy of $z_j$ given $z_0$ is, the more mutually dependent the $z_j$'s are. We analyze the effect of this common random number in Section 6.1.
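The dependence induced by a shared $z_0$ can be seen in a linear-Gaussian sketch (our own illustration, with a correlation that is analytic; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 100_000, 4

# hierarchical proposal: z0 ~ N(0,1), then z_j | z0 ~ N(z0, 1) for j = 1..K,
# conditionally independent. Marginally z_j ~ N(0, 2); jointly the shared z0
# induces Corr(z_i, z_j) = Var(z0) / (Var(z0) + 1) = 0.5 for i != j.
z0 = rng.standard_normal((n, 1))
z = z0 + rng.standard_normal((n, K))

corr = np.corrcoef(z, rowvar=False)
# redrawing z0 independently for each j would drive the off-diagonals to ~0
```

The off-diagonal correlations concentrate around $0.5$, exactly the ratio of the shared variance to the total marginal variance.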

While there are different ways to model the joint proposal, we emphasize the following properties of the hierarchical proposals that make it an appealing choice:

1. Sampling can be parallelized.

2. Empirically, we found that optimization behaves better than with a sequential model (we also tested a Markov joint proposal, i.e. $z_j$ depends only on $z_{j-1}$, and found the learned proposals do not compensate for each other; see the top row of Figure 2).

3. $z_0$ can be interpreted as a summary (Edwards & Storkey, 2017) of the empirical distribution.

#### Optimization Objective

In the spirit of the variational formalism, we need to define an objective to optimize $Q$. However, the marginal densities $q_j(z_j)$ are no longer tractable due to the integration over the prior measure $q_0$. We introduce an auxiliary random variable (Agakov & Barber, 2004) with density $r(z_0|z)$ to approximate the posterior of $z_0$ given $z_j$, i.e. $Q(z_0|z_j)$, and maximize the following objective:

$$\mathcal{L}_K(Q_0) := \mathbb{E}_{Q_0}\left[\log \sum_{j=1}^K \pi_j(z_j, z_0)\,\frac{p(x, z_j)\,r(z_0|z_j)}{q_j(z_j|z_0)\,q_0(z_0)}\right]$$

where $Q_0$ is the joint of $(z_0, z_1, \dots, z_K)$, and $\pi_j$ is a partition of unity, i.e. $\sum_{j=1}^K \pi_j(z, z_0) = 1$ for all $z$ and $z_0$. First, if the conditionals do not depend on $z_0$ and we choose $r(z_0|z) = q_0(z_0)$, $\mathcal{L}_K(Q_0)$ boils down to the J-IWLB. Second, with $K = 1$, it is simply variational inference with a hierarchical model (Ranganath et al., 2016). Lastly, $\mathcal{L}_K(Q_0)$ is a lower bound on $\log p(x)$ (we relegate the proof to Appendix B). We call it the Hierarchical Importance Weighted Lower Bound (H-IWLB).

This training criterion is also chosen because the inference model and the generative model can then have a unified objective. It is also possible to train the inference model with a different objective. For instance, direct minimization of the variance of the importance ratio amounts to minimization of the $\chi^2$-divergence (see Dieng et al. (2017); Müller et al. (2018)). (We have also tried to minimize the variance of the estimator, i.e. the average of the importance ratios, using an estimate of the first moment, but found that even the reparameterized gradient suffers from a noisy signal and usually converges to a suboptimal solution.) We discuss the connection between vanishing variance and convergence of the estimator in Section 4.

#### Weighting Heuristics

A family of weighting heuristics commonly used in practice is the power heuristics:

$$\pi_j^\alpha(z, z_0) = \frac{q_j(z|z_0)^\alpha}{\sum_{i=1}^K q_i(z|z_0)^\alpha}$$

With larger $\alpha$, the heuristic places more emphasis on the proposal under which $z$ has a larger likelihood. When $\alpha = 0$ and $\alpha = 1$, $\pi_j^\alpha$ boils down to the uniform probability (arithmetic average of the likelihood ratios) and the posterior probability of the index $j$, respectively. We compare different choices of weighting heuristics qualitatively on a toy example in Section 6.2 and quantitatively on a suite of standard density modeling tasks with latent variable models in Section 6.3.
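The power heuristic is a softmax over scaled log densities, which a short sketch makes concrete (our own illustration; the three density values are arbitrary):

```python
import numpy as np

def power_heuristic(log_q, alpha):
    # pi_j = q_j(z)^alpha / sum_i q_i(z)^alpha, computed from per-proposal
    # log densities in a numerically stable way
    s = alpha * np.asarray(log_q, dtype=float)
    s -= s.max()                    # stabilize the exponentials
    w = np.exp(s)
    return w / w.sum()

log_q = np.log([0.2, 0.5, 0.3])          # densities of z under three proposals
uniform = power_heuristic(log_q, 0.0)    # alpha = 0: [1/3, 1/3, 1/3]
balance = power_heuristic(log_q, 1.0)    # alpha = 1: posterior of the index
sharp = power_heuristic(log_q, 8.0)      # large alpha: concentrates on the best proposal
```

The two limiting cases named in the text fall out directly: $\alpha = 0$ gives equal weights and $\alpha = 1$ renormalizes the densities themselves.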

#### Training Procedure

In our experiments, all $q_j(z_j|z_0)$ and $q_0(z_0)$ are conditional Gaussian distributions (also conditioned on $x$ in the amortized inference setting). This allows us to use the reparameterized gradient (Kingma & Welling, 2014; Rezende et al., 2014). However, as noted by Rainforth et al. (2018), the signal-to-noise ratio of the gradient estimator for IWAE's encoder decreases as $K$ increases, so a large $K$ would render the learning of the inference model inefficient. Thus, we use the recently proposed doubly reparameterized gradient of Tucker et al. (2018). Empirically, we find that the algorithm converges more stably.

## 4 Connecting Variance and Lower Bound

Ideally, using a joint proposal allows for modelling the dependency among the $z_j$'s and thus the negative correlation among the likelihood ratios, but we did not choose to optimize (minimize) the variance directly. Instead, we maximize the lower bound. We are interested in how the two are connected, i.e. whether convergence of one implies the other. The following is the main takeaway of this section:

Convergence in one mode implies convergence in the other mode if the values of $w$ and $\log w$ cannot be extreme (we will assume these two are bounded in some sense). However, this is in general not true in the case of importance sampling, where the likelihood ratio is sensitive to the choice of proposal distribution. This suggests the density $q$ needs to be controlled in some way (e.g. smoothing the standard deviation of a Gaussian to avoid over-confidence).

We work in a probability space $(\Omega, \mathcal{F}, P)$. We write $X \in L^p$ if $X$ is a random variable and $\mathbb{E}[|X|^p] < \infty$. A random sequence $X_n$ converges in $L^p$ to $X$ if $\mathbb{E}[|X_n - X|^p] \to 0$ as $n \to \infty$. Convergence in probability and in $L^p$ are denoted by $\xrightarrow{P}$ and $\xrightarrow{L^p}$, respectively.

Let $(w_n)_{n \ge 1}$ be a sequence of positive random variables such that $\mathbb{E}[w_n] = c$ for all $n$, and thus $\log \mathbb{E}[w_n] = \log c$. Here $w_n$ can be thought of as the likelihood ratio, whose expectation is exactly $p(x)$; the index $n$ can be thought of as indicating a sequence of parameterizations of the joint $Q$. We would like to know whether convergence of the lower bound $\mathbb{E}[\log w_n]$ to $\log c$ (e.g. via maximization of the H-IWLB) implies vanishing variance under some assumption, which would justify the use of a joint/hierarchical proposal to reduce variance at a potentially faster rate. However, it is in general untrue that smaller $\mathrm{Var}(w_n)$, smaller $\mathrm{Var}(\log w_n)$ and larger $\mathbb{E}[\log w_n]$ imply one another. (Klys et al. (2018) also attempt to derive a bound, but their result does not hold without further assumptions; Maddison et al. (2017) also require uniform integrability to establish consistency.) Instead, we discuss their limiting behavior, and conclude that the condition that $\mathrm{Var}(\log w_n)$ converges to zero sits somewhere between $L^1$-convergence and $L^2$-convergence of $\log w_n$. To do so, we require a bit more control over the sequences $(w_n)$ and $(\log w_n)$, such as boundedness. The first implication of this conclusion is that if the variance of $\log w_n$ vanishes and the sequences are bounded in some sense, $\mathbb{E}[\log w_n]$ also converges to $\log c$ (see below). The second implication is that if $\log w_n$ converges to $\log c$ "more uniformly" ($L^2$-convergence), then the variance of $\log w_n$ also converges to zero. We discuss why this boundedness control is required at the end of this section, from the f-divergence perspective.

Now we turn to the analysis of the variance of $\log w_n$. Doing so allows us to interpret $\log w_n$ as an estimator for $\log c$, and we would like to answer the following question: if the variance of $\log w_n$ converges to zero, does $\mathbb{E}[\log w_n]$ converge to $\log c$? The answer is positive given some boundedness conditions, and the result suggests that if the variance of the estimator is infinitesimally small, so is the bias. We now state the result:

###### Proposition 4.

Assume $\mathbb{E}[w_n] = c$ with $c \in (0, \infty)$, and $w_n \ge \delta$ for some $\delta > 0$, for all $n$. If $(w_n)$ is uniformly integrable and $(\log w_n)$ is bounded in $L^2$, then

$$\lim_{n \to \infty} \mathrm{Var}(\log w_n) = 0 \;\Rightarrow\; \log w_n \xrightarrow{P} \log c$$

The proposition tells us that if we look at $w_n$ as an estimator for some constant $c$ (e.g. $p(x)$), then as long as $\mathbb{E}[w_n] = c$, vanishing variance of $\log w_n$ implies consistency.

With the same integrability condition on $(\log w_n)$, we can conclude that $\mathbb{E}[\log w_n]$ converges to $\log c$.

###### Corollary 1.

With the same conditions as Proposition 4, if we further assume $(\log w_n)$ is uniformly integrable, then

$$\lim_{n \to \infty} \mathrm{Var}(\log w_n) = 0 \;\Rightarrow\; \log w_n \xrightarrow{L^1} \log c$$

In particular, $\mathbb{E}[\log w_n] \to \log c$ as $n \to \infty$.

Finally, the following result shows that $L^2$-convergence is sufficient for the convergence of the variance of $\log w_n$ to zero. With the same conditions as Corollary 1:

$$\log w_n \xrightarrow{L^2} \log c \;\Rightarrow\; \lim_{n \to \infty} \mathrm{Var}(\log w_n) = 0$$

Now, we turn back to the case of variational inference, where $w = p(z|x)/q(z)$ (assume the target is normalized for the ease of exposition), and discuss the difficulty in analyzing the relationship between the variance of $w$ and the lower bound $\mathbb{E}[\log w]$. The difficulty is that the variance is more sensitive to large likelihood ratios, whereas a low-density region under $q$ does not take a heavy toll on the lower bound. More concretely, since $\mathbb{E}_q[w] = 1$, we have $\mathrm{Var}_q(w) = \mathbb{E}_q[w^2] - 1 = \chi^2(p\,\|\,q)$, where $\chi^2$ is the chi-square divergence. Our insight is that the $\chi^2$-divergence is an upper bound on the forward KL divergence $D_{\mathrm{KL}}(p\,\|\,q)$ (see the appendix for a proof). This means that when the variance of $w$ decreases, the $\chi^2$-divergence and the forward KL divergence also decrease. However, a decrease in the forward KL does not imply a decrease in the reverse KL, $D_{\mathrm{KL}}(q\,\|\,p)$, whose minimization is what maximization of the lower bound is equivalent to. This can be explained by the characteristic convex function used in the f-divergence family: $D_f(p\,\|\,q) = \mathbb{E}_q[f(p/q)]$, where $f$ is a convex function such that $f(1) = 0$. For the forward KL and the reverse KL, the corresponding convex functions are $f(t) = t\log t$ and $f(t) = -\log t$, respectively. When $t = p/q \to 0$, $t\log t \to 0$ and $-\log t \to \infty$. When $t \to \infty$, $t\log t \to \infty$ and $-\log t \to -\infty$. This means the forward KL will not reflect it when $q \gg p$, but will be high when $q \ll p$. The reverse KL, on the other hand, will explode when $q \gg p$ and can tolerate $q \ll p$. These are known as the inclusive and exclusive properties of the two KLs (Minka et al., 2005). The $\chi^2$-divergence is defined by the convex function $f(t) = t^2 - 1$, which asymptotically dominates $t\log t$ as $t \to \infty$. This means it will incur an even higher cost than the forward KL when $q \ll p$. See Figure 6 in the appendix for an illustration. The boundedness assumptions on $w_n$ and $\log w_n$ made in this section make sure it is less likely that $p/q$ will be arbitrarily large or close to zero.
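The ordering $\chi^2(p\,\|\,q) \ge D_{\mathrm{KL}}(p\,\|\,q)$ can be checked numerically by grid quadrature for a pair of Gaussians (our own sketch; the particular means and scales are arbitrary):

```python
import numpy as np

# grid quadrature for p = N(0, 1), q = N(1, 1.5^2):
# chi2(p||q) = E_q[(p/q)^2] - 1 upper-bounds KL(p||q) = E_p[log(p/q)]
t = np.linspace(-12.0, 12.0, 200_001)
dt = t[1] - t[0]

def normal_pdf(v, mu, sig):
    return np.exp(-0.5 * ((v - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))

p = normal_pdf(t, 0.0, 1.0)
q = normal_pdf(t, 1.0, 1.5)
kl_fwd = np.sum(p * np.log(p / q)) * dt   # forward KL, ~0.3499 analytically
chi2 = np.sum(p ** 2 / q) * dt - 1.0      # chi-square divergence, ~0.6004 analytically
```

Both quantities have closed forms for Gaussians, so the quadrature values can be checked against them; the chi-square divergence is indeed the larger of the two.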

## 5 Related Work

Much work has been done on improving the variational approximation, e.g. by using a more complex form of $q$ (Rezende & Mohamed, 2015; Kingma et al., 2016; Huang et al., 2018; Ranganath et al., 2016; Maaløe et al., 2016; Miller et al., 2016) in place of the conventional normal distribution. Alongside the work of Burda et al. (2015), Mnih & Rezende (2016) and Bornschein & Bengio (2014); Le et al. (2018) also apply importance sampling, to learning discrete latent variables and to modifying the wake-sleep algorithm, respectively. Cremer et al. (2017); Bachman & Precup (2015) interpret IWAE as using a corrected proposal, whereas Nowozin (2018) interprets the IWLB as a biased estimator of the marginal log-likelihood and proposes to reduce the bias using the jackknife method. However, Rainforth et al. (2018) show that the signal-to-noise ratio of the gradient of the inference model vanishes as $K$ increases, as the magnitude of the expected gradient decays faster than its variance. Tucker et al. (2018) propose a doubly reparameterized gradient to mitigate this problem.

Closest to our work is that of Klys et al. (2018), who explore the multivariate normal distribution as the joint proposal. Domke & Sheldon (2018) integrate defensive sampling into variational inference, and Wu et al. (2018) propose a differentiable antithetic sampler. Yin & Zhou (2018); Molchanov et al. (2018); Sobolev & Vetrov (2018) propose to use multiple samples to better estimate the marginal distribution of a hierarchical proposal.

## 6 Experiments

We first demonstrate the effect of sharing a common random number on the dependency among the multiple proposals, and then apply the amortized version of hierarchical importance sampling (that is, H-IWAE) to learning deep latent Gaussian models. The details of how the inference model is parameterized can be found in Appendix D.

### 6.1 Effect of Common Random Number

In this section, we analyze the effect of training with a common random number $z_0$. As mentioned in Section 3, the correlation among the $z_j$'s is induced by tying $z_0$ as a common factor. If, for each index $j$, we draw a $z_0$ independently from $q_0$, the $z_j$'s will be rendered independent. Doing so allows us to test whether inference models trained with a common random number tend to output correlated $z_j$'s. Let the target in Figure 2 be the (unnormalized) posterior distribution. We consider maximizing the H-IWLB in two settings: with and without a common $z_0$. We repeat the experiment 25 times with different random seeds, and record the corresponding H-IWLB and the variance and standard deviation of the estimator trained with a common $z_0$, as plotted in Figure 3.

Qualitatively, we visualize the correlation matrix of the importance weights in Figure 4. This shows that the weights are indeed negatively correlated with each other when trained with a common $z_0$. When trained with an independent $z_0$ for each $j$, the hierarchical sampler tends to find similar solutions and is prone to incurring positively correlated biases (deviations from the mean).

### 6.2 Effect of Weighting Heuristics

We also analyze the effect of using different weighting heuristics. We repeat the experiment of the last subsection and apply the power heuristic with two different values of $\alpha$ as the weighting scheme, both with a common $z_0$. We find that the weighting function has a major effect on the shape of the learned proposals (see Figure 2). With larger $\alpha$, the proposals behave more like a mixture, and the negative correlation is stronger as the proposals "specialize" in different regions. We also explore training the weighting function in Appendix E.

### 6.3 Variational Autoencoder

Our final experiment was to apply hierarchical proposals to learning variational autoencoders on a set of standard datasets, including binarized MNIST (Larochelle & Murray, 2011), binarized OMNIGLOT (Lake et al., 2015) and Caltech101 Silhouettes (Marlin et al., 2010), using the same architecture as described in Huang et al. (2018).

Results on the binarized MNIST and OMNIGLOT datasets are in Tables 1 and 2, respectively. We compare with the Gaussian IWAE as a baseline. The hyperparameters are fixed as follows: minibatch size 64, a fixed learning rate, a linear annealing schedule of 50,000 iterations for the log density terms other than the likelihood (i.e. the KL between $q$ and the prior in the VAE case), and Polyak averaging with an exponential averaging coefficient of 0.998. For the MNIST dataset, arithmetic averaging ($\alpha = 0$) does not provide a monotonic improvement in terms of negative log-likelihood (NLL) as $K$ increases, whereas with larger $\alpha$ the performance is better with larger $K$. For the OMNIGLOT dataset, the performance is consistently better with larger $K$.

Table 3 summarizes the experiment on the Caltech101 dataset. We compare with the Gaussian IWAE and an IWAE with one single hierarchical proposal, dubbed IWAE (h), and perform a grid search on a set of hyperparameters (Appendix D), each repeated three times to calculate the mean and standard deviation. We report the performance of the models by selecting the hyperparameters that correspond to the lowest average NLL on the validation set. We see that with larger $K$, H-IWAE has improved performance, but is outperformed by the IWAE with a single hierarchical proposal. We speculate this is due to the increased difficulty of optimizing the inference model with the extra parameters for each proposal, resulting in a larger amortization bias in inference (Cremer et al., 2018). To validate this hypothesis, we repeat the experiment of H-IWAE with the balance heuristic, but update the encoder twice on the same minibatch before updating the decoder once. This gives us a significant improvement over the learned model (see the last two rows).

## 7 Conclusion

In order to approximate the posterior distribution in variational inference with a more representative empirical distribution, we propose to use a hierarchical meta-sampler. We derive a variational lower bound as a training objective. Theoretically, we provide sufficient boundedness conditions to connect the convergence of the variance of the Monte Carlo estimate of the lower bound with the convergence of the bound itself. Empirically, we show that maximizing the lower bound implicitly reduces the variance of the estimate. Our analysis shows that learning dependency among the joint samples can induce negative correlation and improve the performance of inference.

## Appendix A Detailed J-IWLB Derivation

We provide an alternative to the derivation in Section 3.1. Note that if we can show that

$$\mathbb{E}_Q\left[\sum_{j=1}^K \pi_j(z_j)\,\frac{p(x, z_j)}{q_j(z_j)}\right]$$

is equal to $p(x)$, then an application of Jensen's inequality to the log of this quantity gives the desired bound.

First, since linearity of expectation holds for any collection of random variables, whether or not they are independent, the summation can be taken outside, and since each resulting expectation involves only one $z_j$ at a time, the expectations can be taken with respect to the marginals $q_j$. That is,

$$\mathbb{E}_Q\left[\sum_{j=1}^K \pi_j(z_j)\,\frac{p(x, z_j)}{q_j(z_j)}\right] = \sum_{j=1}^K \mathbb{E}_{q_j}\left[\pi_j(z_j)\,\frac{p(x, z_j)}{q_j(z_j)}\right].$$

Each of these expectations has a simple form:

$$\mathbb{E}_{q_j}\left[\pi_j(z_j)\,\frac{p(x, z_j)}{q_j(z_j)}\right] = \int \pi_j(z)\,p(x, z)\,dz,$$

where it is okay to drop the index previously appended to the $z$'s, because each integral ranges over all values each $z_j$ could possibly take on (not any individual sample of their values; see Figure 5). Since $\sum_{j=1}^K \pi_j(z) = 1$ pointwise for all $z$, we can then swap the (finite) summation and the integration and find

$$\sum_{j=1}^K \int \pi_j(z)\,p(x, z)\,dz = \int \sum_{j=1}^K \pi_j(z)\,p(x, z)\,dz = \int p(x, z)\,dz = p(x).$$

## Appendix B Detailed H-IWLB Derivation

This derivation follows the derivation in the last section, and differs in the last step, where we also marginalize out the auxiliary variable $z_0$.

$$\begin{aligned}
\mathcal{L}_K(Q_0) &\le \log \mathbb{E}_{Q_0}\left[\sum_{j=1}^K \pi_j(z_j, z_0)\,\frac{p(x, z_j)\,r(z_0|z_j)}{q_j(z_j|z_0)\,q_0(z_0)}\right] && \text{(Jensen's inequality)} \\
&= \log \sum_{j=1}^K \mathbb{E}_{Q_0}\left[\pi_j(z_j, z_0)\,\frac{p(x, z_j)\,r(z_0|z_j)}{q_j(z_j|z_0)\,q_0(z_0)}\right] && \text{(linearity of expectation)} \\
&= \log \sum_{j=1}^K \int_{z_0, z_j} \pi_j(z_j, z_0)\,q_j(z_j|z_0)\,q_0(z_0)\,\frac{p(x, z_j)\,r(z_0|z_j)}{q_j(z_j|z_0)\,q_0(z_0)}\,dz_0\,dz_j && \text{(marginalization of } z_{\neg(j \wedge 0)}\text{)} \\
&= \log \int_{z_0, z} \Big(\sum_{j=1}^K \pi_j(z, z_0)\Big)\,p(x, z)\,r(z_0|z)\,dz_0\,dz && \text{(identity)} \\
&= \log p(x) && \text{(marginalization of } \pi_j,\ z_0 \text{ and } z\text{)}
\end{aligned}$$

## Appendix C Proofs of Section 4

#### Setup.

We work in a probability space $(\Omega, \mathcal{F}, P)$. We write $X \in L^p$ if $X$ is a random variable and $\mathbb{E}[|X|^p] < \infty$. Convergence in probability and in $L^p$ are denoted by $\xrightarrow{P}$ and $\xrightarrow{L^p}$, respectively.

We start with a classic result. Assume $\mathrm{Var}(X_n) \to 0$ as $n \to \infty$ and $\mathbb{E}[X_n] = \mu$ for all $n$; then by Chebyshev's inequality, $X_n \to \mu$ in probability, and $g(X_n) \to g(\mu)$ in probability by the continuity of $g$. To get $L^1$-convergence, i.e. $\mathbb{E}[|X_n - \mu|] \to 0$, the missing piece that is both necessary and sufficient is the uniform integrability of $(X_n)$, which is a form of boundedness condition (see also Proposition 1 of Maddison et al. (2017)). It is clear from this that to say something about the expected value we need to bound the random variable in some way. With the $L^1$-convergence, we conclude that the expected lower bound (in our case, some form of ELBO) converges to the marginal log-likelihood:

$$|\mathbb{E}[\log w_n] - \log p(x)| \le \mathbb{E}\big[|\log w_n - \log p(x)|\big] \xrightarrow{n \to \infty} 0$$
###### Definition 1.

A family of random variables $(X_n)_{n \ge 1}$ is bounded in probability if for any $\epsilon > 0$, there exists $M > 0$ such that

$$\sup_{n \ge 1} P(|X_n| > M) < \epsilon$$

Note that if $(X_n)$ is bounded in $L^p$ (i.e. $\sup_{n \ge 1} \mathbb{E}[|X_n|^p] \le B < \infty$), then $(X_n)$ is also bounded in probability, since by Markov's inequality, $P(|X_n| > M) \le \mathbb{E}[|X_n|^p]/M^p \le B/M^p$. Setting $M = (B/\epsilon)^{1/p}$ gives us the uniform bound.

Below, we extend the well-known Continuous Mapping Theorem to the difference of random sequences $X_n - Y_n$.

###### Lemma 1.

(Extended Continuous Mapping Theorem) Let $(X_n)$ and $(Y_n)$ be sequences of random variables bounded in probability. If $g$ is a continuous function, then

$$X_n - Y_n \xrightarrow{P} 0 \;\Rightarrow\; g(X_n) - g(Y_n) \xrightarrow{P} 0$$
###### Proof.

Fix $\epsilon > 0$ and choose $r_\epsilon > 0$. There exists some positive value $T_{r,\epsilon}$ such that $\sup_n P(|X_n| > T_{r,\epsilon}) \le r_\epsilon/2$ and $\sup_n P(|Y_n| > T_{r,\epsilon}) \le r_\epsilon/2$. Within the interval $[-T_{r,\epsilon}, T_{r,\epsilon}]$, $g$ is uniformly continuous, so there exists $\delta_{\epsilon, T_{r,\epsilon}} > 0$ such that

$$|x - y| \le \delta_{\epsilon, T_{r,\epsilon}} \;\Rightarrow\; |g(x) - g(y)| \le \epsilon$$

Then

$$P\big(|g(X_n) - g(Y_n)| > \epsilon\big) \le P(|X_n| > T_{r,\epsilon}) + P(|Y_n| > T_{r,\epsilon}) + P\big(\{|g(X_n) - g(Y_n)| > \epsilon\} \cap \{|X_n| \le T_{r,\epsilon}\} \cap \{|Y_n| \le T_{r,\epsilon}\}\big) \le r_\epsilon + P\big(|X_n - Y_n| > \delta_{\epsilon, T_{r,\epsilon}}\big)$$

The second term goes to $0$ as $n \to \infty$. Taking $r_\epsilon$ to $0$ yields the result. ∎

Note that the continuity assumption on $g$ can be weakened to assuming that the set of discontinuity points of $g$ has measure zero.

We now give the proof of Proposition 4.

###### Proof.

Let $C_n = \mathbb{E}[\log w_n]$. By Chebyshev's inequality, for any $\epsilon > 0$,

$$\lim_{n \to \infty} P\big(|\log w_n - C_n| > \epsilon\big) = 0$$

which means $\log w_n - C_n \xrightarrow{P} 0$. Due to the $L^2$-boundedness, $(\log w_n)$ is also bounded in probability, and so is $(C_n)$. By the Extended Continuous Mapping Theorem, $w_n - \exp(C_n) \xrightarrow{P} 0$. Also, $(w_n)$ is uniformly integrable and $(\exp(C_n))$ is bounded, implying $(w_n - \exp(C_n))$ is uniformly integrable, so $w_n - \exp(C_n) \xrightarrow{L^1} 0$, and

$$\lim_{n \to \infty} \big|\mathbb{E}[w_n] - \exp(C_n)\big| \le \lim_{n \to \infty} \mathbb{E}\big[|w_n - \exp(C_n)|\big] = 0