Tighter Variational Bounds are Not Necessarily Better

02/13/2018 · by Tom Rainforth, et al.

We provide theoretical and empirical evidence that using tighter evidence lower bounds (ELBOs) can be detrimental to the process of learning an inference network by reducing the signal-to-noise ratio of the gradient estimator. Our results call into question common implicit assumptions that tighter ELBOs are better variational objectives for simultaneous model learning and inference amortization schemes. Based on our insights, we introduce three new algorithms: the partially importance weighted auto-encoder (PIWAE), the multiply importance weighted auto-encoder (MIWAE), and the combination importance weighted auto-encoder (CIWAE), each of which includes the standard importance weighted auto-encoder (IWAE) as a special case. We show that each can deliver improvements over IWAE, even when performance is measured by the IWAE target itself. Moreover, PIWAE can simultaneously deliver improvements in both the quality of the inference network and generative network, relative to IWAE.


1 Introduction

Variational bounds provide tractable and state-of-the-art objectives for training deep generative models (Kingma and Welling, 2014; Rezende et al., 2014). Typically taking the form of a lower bound on the intractable model evidence, they provide surrogate targets that are more amenable to optimization. In general, this optimization requires the generation of approximate posterior samples during the model training and so a number of methods look to simultaneously learn an inference network alongside the target generative network. This assists in the training process and provides an amortized inference artifact, the inference network, which can be used at test time (Kingma and Welling, 2014; Rezende et al., 2014).

The performance of variational bounds depends upon the choice of evidence lower bound (ELBO) and the formulation of the inference network, with the two often intricately linked to one another; if the inference network formulation is not sufficiently expressive, this can have a knock-on effect on the generative network (Burda et al., 2016). In choosing the ELBO, it is often implicitly assumed in the literature that using a tighter ELBO is universally beneficial.

In this work we question this implicit assumption by demonstrating that, although using a tighter ELBO is typically beneficial to gradient updates of the generative network, it can be detrimental to updates of the inference network. Specifically, we present theoretical and empirical evidence that increasing the number of importance sampling particles, $K$, to tighten the bound in the IWAE (Burda et al., 2016) degrades the signal-to-noise ratio (SNR) of the gradient estimates for the inference network, inevitably degrading the overall learning process.

Our results suggest that it may be best to use distinct objectives for learning the generative and inference networks, or that, when using the same target, it should take into account the needs of both networks; namely, tighter bounds may be better for training the generative network, while looser bounds are often preferable for training the inference network. Based on these insights, we introduce three new algorithms: PIWAE, MIWAE, and CIWAE. Each of these includes IWAE as a special case and uses the same importance weights, but employs those weights in different ways to ensure a higher SNR for the inference network gradient estimates.

We demonstrate that the optimal setting for each of these algorithms improves in a variety of ways over IWAE. In particular, our new algorithms produce inference networks more closely matched to the true posterior. Remarkably, when treating the IWAE objective as the measure of performance, all our algorithms outperform IWAE. Finally, in the settings we considered, PIWAE improves uniformly over IWAE; it achieves improved final marginal likelihood scores, produces better inference networks, and trains faster.

2 Background and Notation

Let $x$ be an $\mathcal{X}$-valued random variable defined via a process involving an unobserved $\mathcal{Z}$-valued random variable $z$, with joint density $p_\theta(x, z)$. Direct maximum likelihood estimation of $\theta$ is generally intractable if $p_\theta(x, z)$ is a deep generative model due to the marginalization of $z$. A common strategy is to optimize instead a variational lower bound on $\log p_\theta(x)$, defined via an auxiliary inference model $q_\phi(z|x)$:

$$\mathrm{ELBO}_{\mathrm{VAE}}(\theta,\phi,x) := \int q_\phi(z|x)\,\log\frac{p_\theta(x,z)}{q_\phi(z|x)}\,\mathrm{d}z \tag{1}$$
$$= \log p_\theta(x) - \mathrm{KL}\!\left(q_\phi(z|x)\,\|\,p_\theta(z|x)\right). \tag{2}$$

Typically, $q_\phi(z|x)$ is parameterized by a neural network, in which case the approach is known as the variational auto-encoder (VAE) (Kingma and Welling, 2014; Rezende et al., 2014). Optimization is performed with stochastic gradient ascent (SGA) using unbiased estimates of $\nabla_{\theta,\phi}\,\mathrm{ELBO}_{\mathrm{VAE}}(\theta,\phi,x)$. If $q_\phi(z|x)$ is reparameterizable (Kingma and Welling, 2014; Rezende et al., 2014), such that $z = z_\phi(\epsilon, x)$ with $\epsilon \sim p(\epsilon)$, then given a reparameterized sample the gradients $\nabla_{\theta,\phi}\log\left(p_\theta(x, z_\phi(\epsilon, x))/q_\phi(z_\phi(\epsilon, x)|x)\right)$ can be used for the optimization.
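To make this concrete, below is a minimal sketch (not the authors' code) of a single-sample reparameterized estimate of the VAE ELBO in PyTorch. The `encoder` and `log_joint` interfaces are illustrative assumptions: `encoder(x)` is taken to return the mean and log standard deviation of a Gaussian $q_\phi(z|x)$, and `log_joint(x, z)` to evaluate $\log p_\theta(x, z)$.

```python
import torch

def vae_elbo_single_sample(x, encoder, log_joint):
    """One reparameterized-sample estimate of ELBO_VAE(theta, phi, x).

    encoder(x) -> (mu, log_std) for q_phi(z|x) = N(mu, diag(exp(log_std)^2))
    log_joint(x, z) -> log p_theta(x, z)
    (both interfaces are assumptions for illustration)
    """
    mu, log_std = encoder(x)
    std = log_std.exp()
    eps = torch.randn_like(mu)      # eps ~ p(eps) = N(0, I)
    z = mu + std * eps              # reparameterized sample z = z_phi(eps, x)
    log_q = torch.distributions.Normal(mu, std).log_prob(z).sum(-1)
    return log_joint(x, z) - log_q  # log(p_theta(x, z) / q_phi(z|x))
```

Calling `.backward()` on the negated returned value yields unbiased stochastic gradients with respect to both $\theta$ and $\phi$.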

The VAE objective places a harsh penalty on mismatch between $q_\phi(z|x)$ and $p_\theta(z|x)$; optimizing jointly in $(\theta, \phi)$ can confound improvements in $\log p_\theta(x)$ with reductions in the KL (Turner and Sahani, 2011). Thus, research has looked to develop bounds that separate the tightness of the bound from the expressiveness of the class of $q_\phi(z|x)$. For example, the IWAE objectives (Burda et al., 2016), which we denote as $\mathrm{ELBO}_{\mathrm{IS}}(\theta,\phi,x)$, are a family of bounds defined by

$$\mathrm{ELBO}_{\mathrm{IS}}(\theta,\phi,x) := \int q_\phi(z_{1:K}|x)\,\log\!\left(\frac{1}{K}\sum_{k=1}^{K} w_k\right)\mathrm{d}z_{1:K}, \tag{3}$$
$$\text{where}\quad q_\phi(z_{1:K}|x) := \prod_{k=1}^{K} q_\phi(z_k|x) \tag{4}$$
$$\text{and}\quad w_k := \frac{p_\theta(x, z_k)}{q_\phi(z_k|x)}. \tag{5}$$

The IWAE objectives generalize the VAE objective ($K = 1$ corresponds to the VAE) and the bounds become strictly tighter as $K$ increases (Burda et al., 2016). When the family of $q_\phi(z|x)$ contains the true posteriors, the globally optimal parameters are independent of $K$; see, for example, (Le et al., 2017). Still, except for the most trivial models, it is not usually the case that this family contains the true posteriors, and Burda et al. (2016) provide strong empirical evidence that setting $K > 1$ leads to significant gains over the VAE in terms of learning the generative model.

Optimizing tighter bounds is usually empirically associated with better models in terms of marginal likelihood on held-out data. Other related approaches extend this to sequential Monte Carlo (Maddison et al., 2017; Le et al., 2017; Naesseth et al., 2017) or change the lower bound that is optimized to reduce the bias (Li and Turner, 2016; Bamler et al., 2017). A second, unrelated, approach is to tighten the bound by improving the expressiveness of $q_\phi(z|x)$ (Salimans et al., 2015; Tran et al., 2015; Rezende and Mohamed, 2015; Kingma et al., 2016; Maaløe et al., 2016; Ranganath et al., 2016). In this work we focus on the former, algorithmic, approaches to tighter bounds.

3 Assessing the Signal-to-Noise Ratio of the Gradient Estimators

Because it is not feasible to analytically optimize any ELBO in complex models, the effectiveness of any particular choice of ELBO is linked to our ability to numerically solve the resulting optimization problem. This motivates us to examine the effect $K$ has on the variance and magnitude of the gradient estimates of IWAE for the two networks. More generally, we study IWAE gradient estimators constructed as the average of $M$ estimates, each built from $K$ independent particles. We present a result characterizing the asymptotic signal-to-noise ratio in $M$ and $K$. For the standard case of $M = 1$, our result shows that the SNR of the reparameterization gradients of the inference network for the IWAE decreases at a rate $O(1/\sqrt{K})$.

Specifically, as estimating $\mathrm{ELBO}_{\mathrm{IS}}$ requires a Monte Carlo estimate of an expectation over $z$, we have two sample sizes to tune for the estimate: the number of samples $M$ used for the Monte Carlo estimation of the ELBO and the number of importance samples $K$ used in the bound construction. Here $M$ does not change the true value of $\nabla_{\theta,\phi}\,\mathrm{ELBO}_{\mathrm{IS}}$, only our variance in estimating it, while changing $K$ changes the ELBO itself, with larger $K$ leading to tighter bounds (Burda et al., 2016). Presuming that reparameterization is possible, we can express our gradient estimate in the general form

$$\Delta_{M,K} := \frac{1}{M}\sum_{m=1}^{M} \nabla_{\theta,\phi}\,\log\!\left(\frac{1}{K}\sum_{k=1}^{K} w_{m,k}\right), \tag{6}$$
$$\text{where}\quad w_{m,k} := \frac{p_\theta\!\left(x, z_{m,k}\right)}{q_\phi\!\left(z_{m,k}|x\right)}, \qquad z_{m,k} = z_\phi\!\left(\epsilon_{m,k}, x\right), \tag{7}$$

and each $\epsilon_{m,k} \sim p(\epsilon)$ is drawn independently. Thus, for a fixed budget of $T = MK$ particle weights, we have a family of estimators, with the case $K = 1$ corresponding to the VAE objective and the case $M = 1$ corresponding to IWAE. We will use $\Delta_{M,K}^{\theta}$ to refer to gradient estimates with respect to $\theta$ and $\Delta_{M,K}^{\phi}$ for the same with respect to $\phi$.
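A minimal sketch of this general estimator, reusing the hypothetical `encoder`/`log_joint` interfaces from the earlier sketch: the $MK$ weights are arranged as an $M \times K$ grid, a logsumexp is taken over the $K$ axis, and the result is averaged over the $M$ axis. Setting $K = 1$ recovers the VAE estimator and $M = 1$ the IWAE one.

```python
import math
import torch

def elbo_mk(x, encoder, log_joint, M, K):
    """Objective whose gradient is Delta_{M,K}: the average over M groups of a
    K-particle IWAE bound. encoder/log_joint are assumed interfaces as before."""
    mu, log_std = encoder(x)
    std = log_std.exp()
    eps = torch.randn(M, K, *mu.shape)
    z = mu + std * eps                                                  # (M, K, D)
    log_q = torch.distributions.Normal(mu, std).log_prob(z).sum(-1)     # (M, K)
    log_p = torch.stack([torch.stack([log_joint(x, z[m, k]) for k in range(K)])
                         for m in range(M)])                            # (M, K)
    log_w = log_p - log_q                                               # log w_{m,k}
    per_group = torch.logsumexp(log_w, dim=1) - math.log(K)             # log (1/K) sum_k w_{m,k}
    return per_group.mean()                                             # average over the M groups
```

Calling `.backward()` on the negative of this value gives a realization of the gradient estimate in (6); MIWAE (Section 5) is exactly this estimator with both $M > 1$ and $K > 1$.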

Variance is not by itself a sufficient barometer for the effectiveness of a gradient estimation scheme; estimators with small expected values need proportionally smaller variances to be estimated accurately. In the case of IWAE, where changes in $K$ simultaneously affect both the variance and the expected value, the quality of the estimator for learning can worsen even as the variance decreases. To see why, consider the marginal likelihood estimates $\hat{Z}_{K} := \frac{1}{K}\sum_{k=1}^{K} w_k$. Because these become exact (and thus independent of the proposal) as $K \to \infty$, it must be the case that $\nabla_\phi \log \hat{Z}_{K} \to 0$ as $K \to \infty$. Thus, as $K$ becomes large, the expected value of the inference network gradient must decrease along with its variance, such that the variance relative to the problem scaling need not actually improve.

Specifically, we introduce the signal-to-noise ratio (SNR), defined as the absolute value of the expected gradient divided by its standard deviation:

$$\mathrm{SNR}_{M,K}(\theta) := \left|\frac{\mathbb{E}\!\left[\Delta_{M,K}^{\theta}\right]}{\sigma\!\left[\Delta_{M,K}^{\theta}\right]}\right|, \tag{8}$$

where $\sigma[\cdot]$ denotes the standard deviation of a random variable. The SNR is defined analogously for $\phi$, and is defined separately for each dimension of the parameter vector. The SNR provides a measure of the relative accuracy of the gradient estimates. Though a high SNR does not always indicate a good SGA scheme (as the target objective itself might be poorly chosen), a low SNR is always problematic because it indicates that the gradient estimates are dominated by noise: if $\mathrm{SNR} \to 0$, then the estimates become completely random. We are now ready to state our main theoretical result, which at a high level is that $\mathrm{SNR}_{M,K}(\theta) = O(\sqrt{MK})$ and $\mathrm{SNR}_{M,K}(\phi) = O(\sqrt{M/K})$.

Theorem 1.

Assume that when $M = K = 1$, the expected gradients, the variances of the gradients, and the first four moments of $w_{1,1}$, $\nabla_\theta w_{1,1}$, and $\nabla_\phi w_{1,1}$ are all finite, and that the variances are also non-zero. Then the signal-to-noise ratios of the gradient estimates converge at the following rates:

$$\mathrm{SNR}_{M,K}(\theta) = \sqrt{MK}\,\frac{\left|\nabla_\theta Z\right|}{\sigma\!\left[\nabla_\theta w_{1,1}\right]} + O\!\left(\sqrt{\tfrac{M}{K}}\right) = O\!\left(\sqrt{MK}\right), \tag{9}$$
$$\mathrm{SNR}_{M,K}(\phi) = \sqrt{\frac{M}{K}}\,\frac{Z\left|\nabla_\phi \mathrm{Var}\!\left[w_{1,1}/Z\right]\right|}{2\,\sigma\!\left[\nabla_\phi w_{1,1}\right]} + O\!\left(\sqrt{\tfrac{M}{K^{3}}}\right) = O\!\left(\sqrt{\tfrac{M}{K}}\right), \tag{10}$$

where $Z := p_\theta(x)$ is the true marginal likelihood.

Proof.

We give only an intuitive demonstration of the high-level result here and provide a formal proof in Appendix A. The effect of $M$ on the SNR follows from applying the law of large numbers to the random variable $\nabla_{\theta,\phi}\log\frac{1}{K}\sum_{k=1}^{K} w_{m,k}$: the overall expectation is independent of $M$ and the variance reduces at a rate $O(1/M)$. The effect of $K$ is more complicated but is perhaps most easily seen by noting that (as shown by Burda et al. (2016))
$$\nabla_{\theta,\phi}\,\log\frac{1}{K}\sum_{k=1}^{K} w_k = \sum_{k=1}^{K}\frac{w_k}{\sum_{j=1}^{K} w_j}\,\nabla_{\theta,\phi}\log w_k,$$
such that the gradient estimate can be interpreted as a self-normalized importance sampling estimate. We can, therefore, invoke the known result (see e.g. (Hesterberg, 1988)) that the bias of a self-normalized importance sampler converges at a rate $O(1/K)$ and the standard deviation at a rate $O(1/\sqrt{K})$. We thus see that the SNR converges at a rate $O(\sqrt{K})$ if the asymptotic gradient is non-zero and $O(1/\sqrt{K})$ otherwise, giving the convergence rates in the $\theta$ and $\phi$ cases respectively. ∎
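As a quick numerical sanity check of the self-normalized importance sampling rates invoked above (this example is illustrative and not from the paper), the following snippet estimates $\mathbb{E}_p[z] = 1$ for a Gaussian target $p = \mathcal{N}(1, 1)$ using a mismatched proposal $q = \mathcal{N}(0, 2^2)$ and prints how the empirical bias and standard deviation shrink with $K$; roughly $O(1/K)$ and $O(1/\sqrt{K})$ behavior should be visible.

```python
import numpy as np

rng = np.random.default_rng(0)

def snis_estimate(K):
    """Self-normalized importance sampling estimate of E_p[z] with p = N(1, 1), q = N(0, 4)."""
    z = rng.normal(0.0, 2.0, size=K)
    log_w = (-0.5 * (z - 1.0) ** 2) - (-0.5 * (z / 2.0) ** 2 - np.log(2.0))
    w = np.exp(log_w - log_w.max())          # normalization constants cancel in the ratio
    return np.sum(w * z) / np.sum(w)

for K in [10, 100, 1000, 10000]:
    est = np.array([snis_estimate(K) for _ in range(2000)])
    print(f"K={K:6d}  bias={est.mean() - 1.0:+.4f}  std={est.std():.4f}")
```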

The implication of these convergence rates is that increasing $M$ is monotonically beneficial to the SNR for both $\theta$ and $\phi$, but that increasing $K$ is beneficial to the former and detrimental to the latter.

An important point of note is the direction in which the expected inference network gradient points for large $K$. Namely, because we have as an intermediary result from deriving the SNR that
$$\mathbb{E}\!\left[\Delta_{M,K}^{\phi}\right] = -\frac{1}{2K}\,\nabla_\phi\,\mathrm{Var}\!\left[\frac{w_{1,1}}{Z}\right] + O\!\left(\frac{1}{K^2}\right), \tag{11}$$
we see that the expected gradient points in the direction of $-\nabla_\phi\,\mathrm{Var}\!\left[w_{1,1}/Z\right]$ as $K \to \infty$. This direction is rather interesting: it implies that as $K \to \infty$, the optimal $\phi$ is that which minimizes the variance of the weights. This is well known to be the optimal importance sampling distribution in terms of approximating the posterior (Owen, 2013). Though it is not also necessarily the optimal proposal in terms of estimating the ELBO (the optimal importance sampling proposal for calculating the expectation of a particular function is distinct from that which best approximates the posterior (Owen, 2013; Ruiz et al., 2016)), this is nonetheless an interesting equivalence that complements the results of (Cremer et al., 2017). It suggests that increasing $K$ may provide a preferable target in terms of the direction of the true inference network gradients, creating a trade-off with the fact that it also diminishes the SNR, reducing the estimates to pure noise if $K$ is set too high. In the absence of other factors, there may thus be a "sweet spot" for setting $K$.

(a) IWAE inference network gradient estimates
(b) IWAE generative network gradient estimates
Figure 1: Histograms of gradient estimates for the generative network and the inference network using the IWAE ($M = 1$) objective with different values of $K$.

Typically when training deep generative models, one does not optimize a single ELBO but instead its average over multiple data points, i.e.

$$\frac{1}{N}\sum_{n=1}^{N}\mathrm{ELBO}_{\mathrm{IS}}\!\left(\theta,\phi,x^{(n)}\right). \tag{12}$$

Our results extend to this setting because the $z_{m,k}$ (equivalently the $\epsilon_{m,k}$) are drawn independently for each $x^{(n)}$, so

$$\Delta_{N,M,K} := \frac{1}{N}\sum_{n=1}^{N}\Delta_{M,K}\!\left(x^{(n)}\right), \tag{13}$$
$$\mathrm{Var}\!\left[\Delta_{N,M,K}\right] = \frac{1}{N^2}\sum_{n=1}^{N}\mathrm{Var}\!\left[\Delta_{M,K}\!\left(x^{(n)}\right)\right]. \tag{14}$$

We thus also see that if we are using mini-batches, such that $N$ is a chosen parameter and the $x^{(n)}$ are drawn from the empirical data distribution, then the SNR of $\Delta_{N,M,K}$ scales as $\sqrt{N}$, i.e. $\mathrm{SNR}_{N,M,K}(\theta) = O(\sqrt{NMK})$ and $\mathrm{SNR}_{N,M,K}(\phi) = O(\sqrt{NM/K})$. Therefore increasing $N$ has the same ubiquitous benefit as increasing $M$. In the rest of the paper, we will implicitly be considering the SNR for $\Delta_{N,M,K}$, but will omit the dependency on $N$ to simplify the notation.

4 Empirical Confirmation

Our convergence results hold exactly in relation to $M$ (and $N$) but are only asymptotic in $K$ due to the higher-order terms. Therefore their applicability should be viewed with a healthy degree of skepticism in the small-$K$ regime. With this in mind, we now present empirical support for our theoretical results and test how well they hold in the small-$K$ regime using a simple Gaussian model, for which we can analytically calculate the ground truth.

Consider a family of generative models with $\mathbb{R}^D$-valued latent variable $z$ and observed variable $x$:
$$z \sim \mathcal{N}(z;\, \mu,\, I), \qquad x \mid z \sim \mathcal{N}(x;\, z,\, I), \tag{15}$$
which is parameterized by $\theta := \mu$. Let the inference network be parameterized by $\phi := (A, b)$, where $q_\phi(z|x) = \mathcal{N}(z;\, Ax + b,\, \Sigma_q)$ for a fixed diagonal covariance $\Sigma_q$. Given a dataset $\left(x^{(n)}\right)_{n=1}^{N}$, we can analytically calculate the optimum of our target as explained in Appendix B, giving $\theta^* = \mu^* := \frac{1}{N}\sum_{n=1}^{N} x^{(n)}$ and $\phi^* = (A^*, b^*)$, where $A^* = I/2$ and $b^* = \mu^*/2$. For this particular problem, the optimal proposal is independent of $K$. This will not be the case in general unless the family of possible $q_\phi(z|x)$ contains the true posteriors $p_\theta(z|x)$. Further, even for this problem, the expected gradients for the inference network still change with $K$.
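A minimal sketch of this toy setup (the value of the fixed proposal scale below is an illustrative assumption, not taken from the paper), computing the log importance weights $\log w = \log p_\theta(x, z) - \log q_\phi(z|x)$ and the analytic optimum described above:

```python
import torch

def log_weight(x, z, mu, A, b, q_scale=1.0):
    """log w = log p_theta(x, z) - log q_phi(z|x) for the toy Gaussian model.
    q_scale is the (assumed) fixed standard deviation of the proposal."""
    log_prior = torch.distributions.Normal(mu, 1.0).log_prob(z).sum(-1)   # z ~ N(mu, I)
    log_lik = torch.distributions.Normal(z, 1.0).log_prob(x).sum(-1)      # x | z ~ N(z, I)
    q_mean = x @ A.T + b
    log_q = torch.distributions.Normal(q_mean, q_scale).log_prob(z).sum(-1)
    return log_prior + log_lik - log_q

def analytic_optimum(data):
    """theta* = empirical mean; A* = I/2 and b* = mu*/2, since p(z|x) = N((x + mu)/2, I/2)."""
    mu_star = data.mean(dim=0)
    D = data.shape[-1]
    return mu_star, 0.5 * torch.eye(D), 0.5 * mu_star
```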

(a) Convergence of SNR for inference network
(b) Convergence of SNR for generative network
Figure 2: Convergence of the signal-to-noise ratios of the gradient estimates with total budget $T = MK$. Different lines correspond to different dimensions of the parameter vectors. Shown in blue is IWAE, where we keep $M = 1$ fixed and increase $K$. Shown in red is VAE, where $K = 1$ is fixed and we increase $M$. The black and green dashed lines show the expected convergence rates from our theoretical results, corresponding to $O(\sqrt{T})$ and $O(1/\sqrt{T})$ respectively.

To conduct our investigation, we randomly generated a synthetic dataset from the model, with a true model parameter value that was itself randomly generated from a unit Gaussian, i.e. $\mu \sim \mathcal{N}(0, I)$. We then considered the gradient at a random point in the parameter space close to the optimum (we consider the behavior for points far away from the optimum in Appendix C.3); namely, each dimension of each parameter was randomly offset from its optimum value using a zero-mean Gaussian with a small standard deviation. We then calculated empirical estimates of the ELBO gradients for IWAE, where $M = 1$ is held fixed and we consider increasing $K$, and for VAE, where $K = 1$ is held fixed and we consider increasing $M$. In all cases we calculated a large number of such estimates and used these samples to provide empirical estimates for, amongst other things, the mean and standard deviation of the estimator, and thereby an empirical estimate for the SNR. For the inference network, we predominantly focused on investigating the gradients with respect to $b$.
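A short sketch of this empirical procedure, assuming a hypothetical `grad_sampler` that returns one independent realization of the chosen gradient estimator as a flat tensor:

```python
import torch

def estimate_snr(grad_sampler, num_repeats=1000):
    """Empirical SNR: |mean| / std of repeated gradient draws, per dimension."""
    samples = torch.stack([grad_sampler() for _ in range(num_repeats)])  # (R, D)
    mean = samples.mean(dim=0)
    std = samples.std(dim=0)
    return mean.abs() / std
```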

We start by examining the qualitative behavior of the different gradient estimators as $K$ increases, as shown in Figure 1, which gives histograms of the gradient estimators for a single parameter of the inference network (left) and the generative network (right). We first see in Figure 1a that, as $K$ increases, both the magnitude and the standard deviation of the estimator decrease for the inference network, with the former decreasing faster. This matches the qualitative behavior of our theoretical result, with the SNR diminishing as $K$ increases. In particular, the probability of the gradient being positive or negative becomes roughly even for the larger values of $K$, meaning the optimizer is equally likely to increase as to decrease the inference network parameters at the next iteration. By contrast, for the generative network, IWAE converges towards a non-zero gradient, such that, even though the SNR initially decreases with $K$, it then rises again, with a very clear gradient signal for the largest values of $K$.

To provide a more rigorous analysis, we next directly examine the convergence of the SNR. Figure 2 shows the convergence of the estimators with increasing $M$ and $K$. The observed rates for the inference network (Figure 2a) correspond to our theoretical results, with the suggested rates observed all the way back to $K = 1$. As expected, we see that as $M$ increases, so does $\mathrm{SNR}_{M,K}(\phi)$, but as $K$ increases, $\mathrm{SNR}_{M,K}(\phi)$ reduces.

In Figure 2b, we see that the theoretical convergence for $\mathrm{SNR}_{M,K}(\theta)$ is again observed exactly for variations in $M$, but a more unusual behavior is seen for variations in $K$, where the SNR initially decreases before starting to increase again for large enough $K$, eventually exhibiting behavior consistent with the theoretical result. The driving factor for this appears to be that, at least for this model, the limiting gradient $\nabla_\theta \log Z$ typically has a smaller magnitude than (and often the opposite sign to) the expected gradient at small $K$ (see Figure 1b). If we think of the estimators for all values of $K$ as biased estimates for $\nabla_\theta \log Z$, we see from our theoretical results that this bias decreases faster than the standard deviation. Consequently, if reducing this bias causes the magnitude of the expected gradient to diminish, this can mean that increasing $K$ initially causes the SNR to reduce for the generative network.

Note that this does not mean that the estimates are getting worse for the generative network. As we increase $K$, our bound is getting tighter and our estimates get closer to the true gradient of the target that we actually want to optimize, i.e. $\nabla_\theta \log Z$. See Appendix C.2 for more details.

(a) Convergence of dsnr for inference network
(b) Convergence of dsnr for generative network
Figure 3: Convergence of the directional signal-to-noise ratio (DSNR) of gradient estimates with total budget $T = MK$. The solid lines show the estimated DSNR and the shaded regions show the interquartile range of the individual ratios. Also shown for reference is the DSNR for a randomly generated vector where each component is drawn from a unit Gaussian.

It is also the case that increasing $K$ could be beneficial for the inference network, even if it reduces the SNR, by improving the direction of the expected gradient. However, we will return to consider a metric that examines this direction in Figure 4, where we will see that the SNR seems to be the dominant effect for the inference network.

(a) Convergence of dsnr for inference network
(b) Convergence of dsnr for generative network
Figure 4: Convergence of the directional signal-to-noise ratio of gradient estimates where the true gradient is taken to be that of the limiting ($K \to \infty$) target. Figure conventions as per Figure 3.

As a reassurance that our chosen definition of the SNR is appropriate for the problem at hand, and to examine the effect of multiple dimensions explicitly, we now also consider an alternative definition of the SNR that is similar to (though distinct from) that used in (Roberts and Tedrake, 2009). We refer to this as the "directional" SNR (DSNR). At a high level, we define the DSNR by splitting each gradient estimate into two component vectors, one parallel to the true gradient and one perpendicular, and then taking the expectation of the ratio of their magnitudes. More precisely, we define $u := \mathbb{E}\!\left[\Delta_{M,K}\right] / \left\|\mathbb{E}\!\left[\Delta_{M,K}\right]\right\|_2$ as the true normalized gradient direction and then the DSNR as

$$\mathrm{DSNR}_{M,K} := \mathbb{E}\!\left[\frac{\left\|\Delta_{\parallel}\right\|_2}{\left\|\Delta_{\perp}\right\|_2}\right], \quad\text{where}\quad \Delta_{\parallel} := \left(\Delta_{M,K}^{\top} u\right) u \quad\text{and}\quad \Delta_{\perp} := \Delta_{M,K} - \Delta_{\parallel}. \tag{16}$$

The DSNR thus provides a measure of the expected proportion of the gradient that points in the true direction. For perfect estimates of the gradients, $\mathrm{DSNR} \to \infty$, but, unlike the SNR, arbitrarily bad estimates do not have $\mathrm{DSNR} = 0$, because even random vectors will have a component of their gradient in the true direction.
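A small sketch of estimating the DSNR in (16) from repeated gradient draws, again assuming a hypothetical `grad_sampler`; here the true direction $u$ is itself estimated from the empirical mean gradient:

```python
import torch

def estimate_dsnr(grad_sampler, num_repeats=1000, eps=1e-12):
    """Empirical DSNR: E[ ||parallel component|| / ||perpendicular component|| ]."""
    samples = torch.stack([grad_sampler() for _ in range(num_repeats)])  # (R, D)
    u = samples.mean(dim=0)
    u = u / (u.norm() + eps)                      # estimated true gradient direction
    parallel = (samples @ u).unsqueeze(-1) * u    # components parallel to u
    perp = samples - parallel                     # components perpendicular to u
    return (parallel.norm(dim=-1) / (perp.norm(dim=-1) + eps)).mean()
```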

The convergence of the DSNR is shown in Figure 3, for which the true normalized gradient direction $u$ has been estimated empirically, noting that this direction varies with $K$. We see a similar qualitative behavior to the SNR, with the gradients of IWAE for the inference network degrading to having the same directional accuracy as drawing a random vector. Interestingly, the DSNR seems to follow the same asymptotic convergence behavior as the SNR for the generative network and for the inference network (as shown by the dashed lines), even though we have no theoretical result to suggest this should occur.

As our theoretical results suggest that the direction of the true gradients corresponds to targeting an improved objective as $K$ increases, we now examine whether this, or the change in the SNR, is the dominant effect. To this end, we repeat our calculations for the DSNR but take $u$ to be the true gradient direction in the limit $K \to \infty$. This provides a measure of how varying $M$ and $K$ affects the quality of the gradient directions as biased estimators for this limiting target. As shown in Figure 4, increasing $K$ is still detrimental for the inference network by this metric, even though it brings the expected gradient estimate closer to the true gradient. By contrast, increasing $K$ is now monotonically beneficial for the generative network. Increasing $M$ leads to initial improvements for the inference network before plateauing due to the bias of the estimator. For the generative network, increasing $M$ has little impact, with the bias being the dominant factor throughout. Though this metric is not an absolute measure of performance of the SGA scheme, e.g. because high bias may be more detrimental than high variance, it is nonetheless a powerful result in suggesting that increasing $K$ can be detrimental to learning the inference network.

(a) IWAE-64
(b) log p̂(x)
(c) KL(Q‖P)
Figure 5: Convergence of the evaluation metrics IWAE-64, log p̂(x), and KL(Q‖P) during training. All lines show the mean ± standard deviation over 4 runs.
(a) Comparing MIWAE and IWAE
(b) Comparing CIWAE and IWAE
(c) Comparing PIWAE and IWAE
Figure 6: Performance of MIWAE, CIWAE, and PIWAE relative to IWAE in terms of the metrics IWAE-64 (top row), log p̂(x) (middle row), and KL(Q‖P) (final row). All dots show the difference in the metric between a model trained under the experimental condition and the IWAE baseline (trained on IWAE-64); the dotted line marks the IWAE baseline. Note that, in all cases, the far left of each plot corresponds to settings equivalent to IWAE.

5 New Estimators

Based on our previous findings, we now introduce three new algorithms that address the issue of diminishing SNR for the inference network. Our first, MIWAE, is exactly equivalent to the general formulation given in (6), the distinction coming from the fact that we use both $M > 1$ and $K > 1$, which has not previously been considered in the literature. The motivation for this is that, because the inference network SNR increases as $\sqrt{M}$, we should be able to mitigate the issues increasing $K$ has on the SNR by also increasing $M$. For fairness, we will keep the overall budget $T = MK$ fixed, but we will show that, given this budget, the optimal value for $K$ is often not the full budget $K = T$.

Our second algorithm, CIWAE, is simply a linear combination of the IWAE and VAE bounds, namely
$$\mathrm{ELBO}_{\mathrm{CIWAE}}(\theta,\phi,x) := \beta\,\mathrm{ELBO}_{\mathrm{VAE}}(\theta,\phi,x) + (1-\beta)\,\mathrm{ELBO}_{\mathrm{IS}}(\theta,\phi,x), \tag{17}$$
where $\beta \in [0, 1]$ is a combination parameter. It is trivial to see that $\mathrm{ELBO}_{\mathrm{CIWAE}}$ is a lower bound on the log marginal likelihood that is tighter than the VAE bound but looser than the IWAE bound. We then employ the estimator
$$\Delta_{K}^{\mathrm{CIWAE}} := \nabla_{\theta,\phi}\left(\beta\,\frac{1}{K}\sum_{k=1}^{K}\log w_k + (1-\beta)\,\log\frac{1}{K}\sum_{k=1}^{K} w_k\right), \tag{18}$$
where we use the same $w_k$ for both terms. The motivation for CIWAE is that, if we set $\beta$ to a relatively small value, the objective will behave mostly like IWAE, except when the expected IWAE gradient becomes very small. When this happens, the VAE component should "take over" and alleviate the SNR issues: the asymptotic SNR of the CIWAE inference network gradients does not collapse to zero, because the VAE component has a non-zero expectation in the limit $K \to \infty$.
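A minimal sketch of the CIWAE estimator in (18), reusing the same hypothetical `encoder`/`log_joint` interfaces as the earlier sketches and sharing one set of $K$ particles between the two terms:

```python
import math
import torch

def ciwae_elbo(x, encoder, log_joint, K, beta):
    """beta * ELBO_VAE + (1 - beta) * ELBO_IWAE, estimated with shared weights."""
    mu, log_std = encoder(x)
    std = log_std.exp()
    eps = torch.randn(K, *mu.shape)
    z = mu + std * eps                                                   # shared z_k
    log_q = torch.distributions.Normal(mu, std).log_prob(z).sum(-1)      # (K,)
    log_w = torch.stack([log_joint(x, z[k]) for k in range(K)]) - log_q  # log w_k
    vae_term = log_w.mean()                                   # (1/K) sum_k log w_k
    iwae_term = torch.logsumexp(log_w, dim=0) - math.log(K)   # log (1/K) sum_k w_k
    return beta * vae_term + (1.0 - beta) * iwae_term
```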

Our results suggest that what is good for the generative network, in terms of setting $K$, is often detrimental for the inference network. It is therefore natural to question whether it is sensible to always use the same target for both the inference and generative networks. Motivated by this, our third method, PIWAE, uses the IWAE target when training the generative network, but the MIWAE target for training the inference network. We thus have
$$\Delta^{\theta}_{\mathrm{PIWAE}} := \Delta^{\theta}_{1,L}, \tag{19a}$$
$$\Delta^{\phi}_{\mathrm{PIWAE}} := \Delta^{\phi}_{M,K}, \tag{19b}$$
where we will generally set $L = MK$ so that the same weights can be used for both gradients.
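A minimal sketch of a PIWAE update step under the same assumptions as the earlier sketches: one $M \times K$ grid of log-weights is used to build an IWAE-style target over all $MK$ particles for the generative (decoder) parameters and the MIWAE target for the inference (encoder) parameters, with the two gradients kept separate via `torch.autograd.grad`. The `encoder`, `decoder`, and `log_joint` objects are placeholders, and `log_joint(x, z)` is assumed to evaluate $\log p_\theta(x, z)$ using the decoder's parameters.

```python
import math
import torch

def piwae_step(x, encoder, decoder, log_joint, M, K, enc_opt, dec_opt):
    """One PIWAE update: IWAE over all M*K particles for theta, MIWAE for phi."""
    mu, log_std = encoder(x)
    std = log_std.exp()
    eps = torch.randn(M, K, *mu.shape)
    z = mu + std * eps
    log_q = torch.distributions.Normal(mu, std).log_prob(z).sum(-1)            # (M, K)
    log_p = torch.stack([torch.stack([log_joint(x, z[m, k]) for k in range(K)])
                         for m in range(M)])                                   # (M, K)
    log_w = log_p - log_q                                                      # log w_{m,k}

    miwae = (torch.logsumexp(log_w, dim=1) - math.log(K)).mean()        # target for phi
    iwae = torch.logsumexp(log_w.reshape(-1), dim=0) - math.log(M * K)  # target for theta

    enc_params = list(encoder.parameters())
    dec_params = list(decoder.parameters())
    enc_grads = torch.autograd.grad(-miwae, enc_params, retain_graph=True)
    dec_grads = torch.autograd.grad(-iwae, dec_params)
    for p, g in zip(enc_params, enc_grads):
        p.grad = g          # inference network follows the MIWAE gradient
    for p, g in zip(dec_params, dec_grads):
        p.grad = g          # generative network follows the IWAE gradient
    enc_opt.step()
    dec_opt.step()
```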

6 Experiments

We now consider using our new estimators to train deep variational auto-encoders for the MNIST digits dataset (LeCun et al., 1998). For this, we duplicated the architecture and training schedule outlined in Burda et al. (2016). In particular, all networks were trained and evaluated using the stochastic binarization of Burda et al. (2016). For all methods we set a budget of $T = 64$ weights in the target estimate for each datapoint in the minibatch.

To assess different aspects of the training performance, we consider three different metrics: $\mathrm{ELBO}_{\mathrm{IS}}$ with $K = 64$, $\mathrm{ELBO}_{\mathrm{IS}}$ with $K = 5000$, and the latter of these minus the former. All reported metrics are evaluated on the test data.

The motivation for the $\mathrm{ELBO}_{\mathrm{IS}}$ with $K = 64$ metric, denoted IWAE-64, is that this is the target used for training the IWAE, and so if another method does better on this metric than the IWAE, this is a clear indicator that the SNR issues of the IWAE estimator have degraded its performance. In fact, this would demonstrate that, from a practical perspective, using the IWAE estimator is sub-optimal even if our explicit aim is to optimize the IWAE bound.

Figure 7: Violin plots of ESS estimates for $q_\phi(z|x)$ for each image of MNIST, normalized by the number of samples drawn. A violin plot is similar to a box plot with a kernel density plot on each side: thicker means more MNIST images whose $q_\phi(z|x)$ achieves that ESS.

The second metric, $\mathrm{ELBO}_{\mathrm{IS}}$ with $K = 5000$, denoted log p̂(x), is used as a surrogate for estimating the log marginal likelihood and thus provides an indicator of the fidelity of the learned generative model.

The third metric is an estimator for the divergence implicitly targeted by the IWAE. Namely, as shown by Le et al. (2017), $\mathrm{ELBO}_{\mathrm{IS}}(\theta,\phi,x)$ can be interpreted as

$$\mathrm{ELBO}_{\mathrm{IS}}(\theta,\phi,x) = \log p_\theta(x) - \mathrm{KL}\!\left(Q_{\mathrm{IS}}\,\|\,P_{\mathrm{IS}}\right), \tag{20}$$
$$\text{where}\quad Q_{\mathrm{IS}}(z_{1:K}) := \prod_{k=1}^{K} q_\phi(z_k|x) \tag{21}$$
$$\text{and}\quad P_{\mathrm{IS}}(z_{1:K}) := \frac{1}{K}\sum_{k=1}^{K} p_\theta(z_k|x)\prod_{j \neq k} q_\phi(z_j|x). \tag{22}$$

Thus we can estimate $\mathrm{KL}\!\left(Q_{\mathrm{IS}}\,\|\,P_{\mathrm{IS}}\right)$ using log p̂(x) minus IWAE-64, providing a metric for the divergence between the inference network and the target it implicitly approximates. We use this instead of the standard $\mathrm{KL}\!\left(q_\phi(z|x)\,\|\,p_\theta(z|x)\right)$ simply because the latter can be a deceptive metric for inference network fidelity (Cremer et al., 2017). For example, it tends to prefer a $q_\phi(z|x)$ that covers one of the posterior modes, rather than encompassing all of them.
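A small sketch of how the three evaluation metrics can be computed for a single test point, assuming a hypothetical `sample_log_weights(x, n)` that returns `n` independent $\log w$ values; the use of 5000 samples for log p̂(x) follows the convention described above.

```python
import math
import torch

def evaluation_metrics(sample_log_weights, x):
    """Returns (IWAE-64, log p_hat(x), KL estimate) for one data point."""
    log_w_64 = sample_log_weights(x, 64)
    log_w_5k = sample_log_weights(x, 5000)
    iwae_64 = torch.logsumexp(log_w_64, dim=0) - math.log(64)
    log_p_hat = torch.logsumexp(log_w_5k, dim=0) - math.log(5000)
    kl_estimate = log_p_hat - iwae_64    # estimates KL(Q_IS || P_IS) at K = 64
    return iwae_64, log_p_hat, kl_estimate
```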

Figure 5 shows the convergence of these metrics for each algorithm. Here we have considered the middle value for each of the parameters, namely $M = K = 8$ for PIWAE and MIWAE, and $\beta = 0.5$ for CIWAE. We see that PIWAE and MIWAE both comfortably outperformed, and CIWAE slightly outperformed, IWAE in terms of the IWAE-64 metric, despite IWAE being directly trained on this target. In terms of log p̂(x), PIWAE gave the best performance, followed by IWAE. In terms of the KL, we see that the VAE performed best, followed by MIWAE, with IWAE performing the worst. These results imply that, while IWAE is able to learn a generative network whose performance is similar to that of PIWAE, the inference network it learns is substantially worse than those of all the competing methods.

We next considered tuning the parameters for each of our algorithms as shown in Figure 6, for which we look at the final metric values after training. For MIWAE we see that as we increase $M$ (and correspondingly decrease $K$), the log p̂(x) metric gets worse, while the KL gets better. The IWAE-64 metric initially increases, before reducing again for the largest values of $M$, suggesting that intermediate values of $M$ give a better trade-off. For PIWAE, similar behavior to MIWAE is seen for the IWAE-64 and KL metrics. However, unlike for MIWAE, we see that log p̂(x) initially increases with $M$, such that PIWAE provides a uniform improvement over IWAE across these intermediate settings. CIWAE exhibits similar behavior as $\beta$ increases to that of MIWAE as $M$ increases, but there appears to be a larger degree of noise in the evaluations, while the optimal value of $\beta$, though non-zero, seems to be closer to the IWAE setting ($\beta = 0$) than for the other algorithms.

Figure 8: SNR of the inference network weights during training. All lines show the mean ± standard deviation over 20 randomly chosen weights per layer.

As an additional measure of the performance of the inference network that is distinct from any of the training targets, we also considered the effective sample size (ESS) (Owen, 2013) for the fully trained networks, defined as

$$\mathrm{ESS} := \frac{\left(\sum_{k=1}^{K} w_k\right)^2}{\sum_{k=1}^{K} w_k^2}. \tag{23}$$

The ESS is a measure of how many unweighted samples would be equivalent to the weighted sample set. A low ESS indicates that the inference network is struggling to perform effective inference for the generative network. The results, given in Figure 7, show that the ESSs for CIWAE, MIWAE, and the VAE were all significantly larger than for IWAE and PIWAE, with IWAE giving a particularly poor ESS.
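A short sketch of the ESS computation in (23) from log-weights, normalized by the number of samples drawn as in Figure 7:

```python
import torch

def normalized_ess(log_w):
    """ESS / K = (sum_k w_k)^2 / (K * sum_k w_k^2), computed stably in log space."""
    K = log_w.shape[0]
    log_num = 2.0 * torch.logsumexp(log_w, dim=0)     # log (sum_k w_k)^2
    log_den = torch.logsumexp(2.0 * log_w, dim=0)     # log sum_k w_k^2
    return torch.exp(log_num - log_den) / K
```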

Our final experiment looks at the SNR values for the inference networks. Here we took a number of different neural network gradient weights at different layers of the network and calculated empirical estimates for their SNRs at various points during the training. We then averaged these estimates over the different network weights, the results of which are given in Figure 8. This clearly shows the low SNR exhibited by the IWAE inference network, suggesting that our results from the simple Gaussian experiments carry over to the more complex neural network domain.

7 Conclusions

We have provided theoretical and empirical evidence that algorithmic approaches to increasing the tightness of the ELBO independently of the expressiveness of the inference network can be detrimental to learning, by reducing the signal-to-noise ratio of the inference network gradients. Experiments on a simple latent variable model confirmed our theoretical findings. We then exploited these insights to introduce three new estimators, MIWAE, PIWAE, and CIWAE, and showed that each of these can deliver improvements over IWAE, even when the metric used for this assessment is the IWAE target itself. In particular, PIWAE delivered simultaneous improvements in both the inference network and the generative network compared to IWAE.

References

Appendix A Proof of SNR Convergence Rates

Theorem 1 (restated from Section 3).

Proof.

We start by considering the variance of the estimators. We will first exploit the fact that each $\hat{Z}_{m,K} := \frac{1}{K}\sum_{k=1}^{K} w_{m,k}$ is independent and identically distributed, and then apply Taylor's theorem (this approach follows similar lines to the derivation of nested Monte Carlo convergence bounds in (Rainforth et al., 2017) and (Fort et al., 2017), and the derivation of the mean squared error for self-normalized importance sampling, see e.g. (Hesterberg, 1988)) to $1/\hat{Z}_{m,K}$ about $1/Z$, using $R\!\left(\hat{Z}_{m,K}\right)$ to indicate the remainder term, as follows:
$$\Delta_{M,K} = \frac{1}{M}\sum_{m=1}^{M}\frac{\nabla_{\theta,\phi}\hat{Z}_{m,K}}{\hat{Z}_{m,K}} = \frac{1}{M}\sum_{m=1}^{M}\nabla_{\theta,\phi}\hat{Z}_{m,K}\left(\frac{1}{Z} - \frac{\hat{Z}_{m,K} - Z}{Z^2} + R\!\left(\hat{Z}_{m,K}\right)\right).$$

Now we have, by the mean-value form of the remainder, that for some $\tilde{Z}_{m,K}$ between $\hat{Z}_{m,K}$ and $Z$,
$$R\!\left(\hat{Z}_{m,K}\right) = \frac{\left(\hat{Z}_{m,K} - Z\right)^2}{\tilde{Z}_{m,K}^{3}},$$
and therefore the remainder terms are dominated: their contributions to both the expectation and the variance of $\Delta_{M,K}$ vary with the square of the estimator error $\hat{Z}_{m,K} - Z$, whereas the other comparable terms vary only with the unsquared difference. The assumptions on the moments of the weights and their derivatives further guarantee that these terms are finite. More precisely, the coefficient $1/\tilde{Z}_{m,K}^{3}$ must be bounded with probability 1 as $K \to \infty$ to maintain our assumptions. It follows that the remainder contributes only higher-order terms, and thus that

$$\mathrm{Var}\!\left[\Delta_{M,K}\right] = \frac{\sigma\!\left[\nabla_{\theta,\phi}\,w_{1,1}\right]^2}{M K\, Z^2} + O\!\left(\frac{1}{MK^2}\right), \tag{24}$$

using the fact that the third and fourth order central moments of a Monte Carlo estimator both decrease at a rate $O\!\left(1/K^2\right)$.

Considering now the expected gradient estimate, and again using Taylor's theorem, this time to a higher number of terms, we have
$$\mathbb{E}\!\left[\Delta_{M,K}\right] = \nabla_{\theta,\phi}\log Z - \frac{1}{2K}\,\nabla_{\theta,\phi}\!\left(\frac{\mathrm{Var}\!\left[w_{1,1}\right]}{Z^2}\right) + O\!\left(\frac{1}{K^2}\right). \tag{25}$$

Using a similar process to the variance case, it is now straightforward to show that the remaining higher-order terms are $O\!\left(1/K^2\right)$ and thus similarly dominated (also giving us (11)).

Finally, by combining (24) and (25) and noting that $\nabla_\theta \log Z = \nabla_\theta Z / Z$, we have
$$\mathrm{SNR}_{M,K}(\theta) = \left|\frac{\mathbb{E}\!\left[\Delta_{M,K}^{\theta}\right]}{\sigma\!\left[\Delta_{M,K}^{\theta}\right]}\right| \tag{26}$$
$$= \sqrt{MK}\,Z\,\frac{\left|\nabla_\theta \log Z - \frac{1}{2K}\nabla_\theta\!\left(\mathrm{Var}\!\left[w_{1,1}\right]/Z^2\right) + O\!\left(1/K^2\right)\right|}{\sigma\!\left[\nabla_\theta w_{1,1}\right] + O\!\left(1/K\right)} \tag{27}$$
$$= \sqrt{MK}\,\frac{\left|\nabla_\theta Z\right|}{\sigma\!\left[\nabla_\theta w_{1,1}\right]} + O\!\left(\sqrt{\tfrac{M}{K}}\right) = O\!\left(\sqrt{MK}\right). \tag{28}$$

For $\mathrm{SNR}_{M,K}(\phi)$, because $\nabla_\phi Z = 0$, we instead have
$$\mathrm{SNR}_{M,K}(\phi) = \sqrt{MK}\,Z\,\frac{\left|\frac{1}{2K}\nabla_\phi\!\left(\mathrm{Var}\!\left[w_{1,1}\right]/Z^2\right) + O\!\left(1/K^2\right)\right|}{\sigma\!\left[\nabla_\phi w_{1,1}\right] + O\!\left(1/K\right)} = \sqrt{\frac{M}{K}}\,\frac{Z\left|\nabla_\phi \mathrm{Var}\!\left[w_{1,1}/Z\right]\right|}{2\,\sigma\!\left[\nabla_\phi w_{1,1}\right]} + O\!\left(\sqrt{\tfrac{M}{K^3}}\right) = O\!\left(\sqrt{\tfrac{M}{K}}\right), \tag{29}$$

and we are done. ∎

Appendix B Derivation of Optimal Parameters for Gaussian Experiment

To derive the optimal parameters for the Gaussian experiment, we first note that $\mathrm{ELBO}_{\mathrm{IS}}(\theta,\phi,x) = \log p_\theta(x) - \mathrm{KL}\!\left(Q_{\mathrm{IS}}\,\|\,P_{\mathrm{IS}}\right)$, where $Q_{\mathrm{IS}}$ is as per (4) and the form of the KL is taken from (Le et al., 2017). Next, we note that $\phi$ only controls the mean of the proposal so, while it is not possible to drive the KL to zero, it will be minimized for any particular $\theta$ when the means of $q_\phi(z|x)$ and $p_\theta(z|x)$ are the same. Furthermore, the corresponding minimum possible value of the KL is independent of $\theta$, and so we can calculate the optimum pair $(\theta^*, \phi^*)$ by first optimizing for $\theta$ and then choosing the matching $\phi$. The optimal $\theta$ maximizes $\sum_{n=1}^{N}\log p_\theta\!\left(x^{(n)}\right)$, giving $\theta^* = \mu^* = \frac{1}{N}\sum_{n=1}^{N} x^{(n)}$. As we straightforwardly have $p_\theta(z|x) = \mathcal{N}\!\left(z;\, \tfrac{1}{2}(x + \mu),\, \tfrac{1}{2}I\right)$, the KL is then minimized when the proposal mean matches the posterior mean, giving $\phi^* = (A^*, b^*)$, where $A^* = I/2$ and $b^* = \mu^*/2$.

Appendix C Additional Empirical Analysis of SNR

C.1 Histograms for VAE

To complete the picture of the effect of $M$ and $K$ on the distribution of the gradients, we generated histograms for the $K = 1$ (i.e. VAE) gradients as $M$ is varied. As shown in Figure 9a, we see the expected effect from the law of large numbers: the variance of the estimates decreases with $M$, but the expected value does not change.

(a) VAE inference network gradient estimates
(b) VAE generative network gradient estimates
Figure 9: Histograms of gradient estimates for the generative network and the inference network using the VAE ($K = 1$) objective with different values of $M$.

C.2 Convergence of RMSE for Generative Network

Figure 10: RMSE of the generative network gradient estimates with respect to $\nabla_\theta \log Z$.

As explained in the main paper, the SNR is not an entirely appropriate metric for the generative network: a low SNR is still highly problematic, but a high SNR does not necessarily indicate good performance. It is thus perhaps better to measure the quality of the gradient estimates for the generative network by looking at the RMSE to $\nabla_\theta \log Z$, i.e. $\sqrt{\mathbb{E}\!\left[\left\|\Delta_{M,K}^{\theta} - \nabla_\theta \log Z\right\|_2^2\right]}$. The convergence of this RMSE is shown in Figure 10, where the solid lines are the RMSE estimates and the shaded regions show the interquartile range of the individual estimates. We see that increasing $M$ in the VAE reduces the variance of the estimates but has a negligible effect on the RMSE due to the fixed bias. On the other hand, we see that increasing $K$ leads to a monotonic improvement, initially improving at a rate $O(1/K)$ (because the bias is the dominating term in this region), before settling to the standard Monte Carlo convergence rate of $O(1/\sqrt{K})$ (shown by the dashed lines).
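A short sketch of this RMSE computation, assuming a hypothetical `grad_sampler` and a reference gradient `true_grad` (e.g. an estimate of $\nabla_\theta \log Z$ obtained with a very large $K$):

```python
import torch

def grad_rmse(grad_sampler, true_grad, num_repeats=1000):
    """Root-mean-squared L2 error of gradient estimates relative to true_grad."""
    sq_errs = torch.stack([(grad_sampler() - true_grad).pow(2).sum()
                           for _ in range(num_repeats)])
    return sq_errs.mean().sqrt()
```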

C.3 Experimental Results for High Variance Regime

We now present empirical results for a case where our weights have higher variance. Instead of choosing a point close to the optimum by offsetting the parameters with a small standard deviation, we instead offset them using a much larger standard deviation. We further increased the proposal covariance to make it more diffuse. This is now a scenario where the model is far from its optimum and the proposal is a very poor match for the model, giving very high variance weights.

We see that the behavior is the same for variation in $M$, but somewhat distinct for variation in $K$. In particular, the SNR and DSNR only decrease slowly with $K$ for the inference network, while increasing $M$ no longer has much benefit for the SNR of the inference network. It is clear that, for this setup, the problem is very far from the asymptotic regime in $K$, such that our theoretical results no longer directly apply. Nonetheless, the high-level effect observed is still that the SNR of the inference network deteriorates, albeit slowly, as $K$ increases.

(a) IWAE inference network gradient estimates
(b) VAE inference network gradient estimates
(c) IWAE generative network gradient estimates
(d) VAE generative network gradient estimates
Figure 11: Histograms of gradient estimates as per Figure 1.
(a) Convergence of SNR for inference network
(b) Convergence of SNR for generative network
Figure 12: Convergence of signal-to-noise ratios of gradient estimates as per Figure 2.
(a) Convergence of dsnr for inference network
(b) Convergence of dsnr for generative network
Figure 13: Convergence of directional signal-to-noise ratio of gradients estimates as per Figure 3.
(a) Convergence of dsnr for inference network
(b) Convergence of dsnr for generative network
Figure 14: Convergence of the directional signal-to-noise ratio of gradient estimates where the true gradient is taken to be that of the limiting ($K \to \infty$) target, as per Figure 4.

Appendix D Convergence of Toy Gaussian Problem

We now assess the effect of the outlined changes in the quality of the gradient estimates on the final optimization for our toy Gaussian problem. Figure 15 shows the convergence of running Adam (Kingma and Ba, 2014) to optimize $\mu$, $A$, and $b$. This suggests that the effects observed predominantly transfer to the overall optimization problem. Interestingly, intermediate settings of $M$ and $K$ gave the best performance on learning not only the inference network parameters, but also the generative network parameters.

Figure 15: Convergence of the optimization for different values of $M$ and $K$. (Top, left) $\mathrm{ELBO}_{\mathrm{IS}}$ during training (note this represents a different metric for different $K$). (Top, right) Distance of the generative network parameters from the true maximizer. (Bottom) Distance of the inference network parameters from the true maximizer. Plots show means over repeats with ± one standard deviation. Optimization is performed using the Adam algorithm, with all parameters initialized by sampling from a uniform distribution.