Inference Suboptimality in Variational Autoencoders

01/10/2018 ∙ by Chris Cremer, et al.

Amortized inference has led to efficient approximate inference for large datasets. The quality of posterior inference is largely determined by two factors: a) the ability of the variational distribution to model the true posterior and b) the capacity of the recognition network to generalize inference over all datapoints. We analyze approximate inference in variational autoencoders in terms of these factors. We find that suboptimal inference is often due to amortizing inference rather than the limited complexity of the approximating distribution. We show that this is due partly to the generator learning to accommodate the choice of approximation. Furthermore, we show that the parameters used to increase the expressiveness of the approximation play a role in generalizing inference rather than simply improving the complexity of the approximation.


1 Introduction

In this paper, we analyze inference suboptimality: the mismatch between the true and approximate posterior. More specifically, we are interested in understanding what factors cause the gap between the marginal log-likelihood and the evidence lower bound (ELBO) in variational autoencoders (VAEs, Kingma & Welling (2014); Rezende et al. (2014)). We refer to this as the inference gap. Moreover, we break down the inference gap into two components: the approximation gap and the amortization gap. The approximation gap comes from the inability of the variational distribution family to exactly match the true posterior. The amortization gap refers to the difference caused by amortizing the variational parameters over the entire training set, instead of optimizing for each training example individually. We refer the reader to Table 1 for the definitions of the gaps and to Fig. 1 for a simple illustration of the gaps. In Fig. 1, $\mathcal{L}_{\text{VAE}}[q]$ refers to the ELBO evaluated using an amortized distribution $q$, as is typical of VAE training. In contrast, $\mathcal{L}_{\text{VAE}}[q^*]$ is the ELBO evaluated using the optimal approximation $q^*$ within its variational family.

There has been significant work on improving variational inference in VAEs through the development of expressive approximate posteriors (Rezende & Mohamed, 2015; Kingma et al., 2016; Ranganath et al., 2016; Tomczak & Welling, 2016, 2017). These works have shown that with more expressive approximate posteriors, the model learns a better distribution over the data. Our study aims to gain a better understanding of the relationship between expressive approximations and improved generative models.

Our experiments investigate how the choice of encoder, posterior approximation, decoder, and optimization affect the approximation and amortization gaps. We train VAE models in a number of settings on the MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017), and CIFAR-10 (Krizhevsky & Hinton, 2009) datasets.

Our contributions are: a) we investigate inference suboptimality in terms of the approximation and amortization gaps, providing insight to guide future improvements in VAE inference, b) we quantitatively demonstrate that the learned generative model accommodates the choice of approximation, and c) we demonstrate that parameterized functions that improve the expressiveness of the approximation play a significant role in reducing amortization error.

Figure 1: Gaps in Inference

| Term | Definition | VAE Formulation |
| Inference | $\log p(x) - \mathcal{L}[q]$ | $D_{KL}\!\left(q(z|x)\,\|\,p(z|x)\right)$ |
| Approximation | $\log p(x) - \mathcal{L}[q^*]$ | $D_{KL}\!\left(q^*(z|x)\,\|\,p(z|x)\right)$ |
| Amortization | $\mathcal{L}[q^*] - \mathcal{L}[q]$ | $\mathcal{L}_{\text{VAE}}[q^*] - \mathcal{L}_{\text{VAE}}[q]$ |

Table 1: Summary of Gap Terms. The middle column refers to the general case where our variational objective is a lower bound $\mathcal{L}[q]$ on the marginal log-likelihood. The rightmost column demonstrates the specific case in VAEs. $q^*(z|x)$ refers to the optimal approximation within a family $\mathcal{Q}$, i.e. $q^*(z|x) = \arg\max_{q \in \mathcal{Q}} \mathcal{L}[q]$.

2 Background

2.1 Inference in Variational Autoencoders

Let $x$ be the observed variable, $z$ the latent variable, and $p(x, z)$ their joint distribution. Given a dataset $X = \{x_1, x_2, \ldots, x_N\}$, we would like to maximize the marginal log-likelihood with respect to the model parameters $\theta$:

$$\log p_\theta(X) = \sum_{i=1}^{N} \log p_\theta(x_i) = \sum_{i=1}^{N} \log \int p_\theta(x_i, z_i)\, dz_i.$$

In practice, the marginal log-likelihood is computationally intractable due to the integration over the latent variable $z$. Instead, VAEs introduce an inference network $q(z|x)$ to approximate the true posterior $p(z|x)$ and optimize the ELBO with respect to model parameters $\theta$ and inference network parameters $\phi$ (parameterization subscripts omitted for brevity):

$$\log p(x) = \mathbb{E}_{q(z|x)}\!\left[\log \frac{p(x,z)}{q(z|x)}\right] + D_{KL}\!\left(q(z|x)\,\|\,p(z|x)\right) \qquad (1)$$
$$\geq \mathbb{E}_{q(z|x)}\!\left[\log \frac{p(x,z)}{q(z|x)}\right] = \mathcal{L}_{\text{VAE}}[q]. \qquad (2)$$

From the above equation, we see that the ELBO is tight when $q(z|x) = p(z|x)$. The choice of $q(z|x)$ is often a factorized Gaussian distribution for its simplicity and efficiency. By utilizing the inference network (also referred to as encoder or recognition network), VAEs amortize inference over the entire dataset. Furthermore, the overall model is trained by stochastically optimizing the ELBO using the reparametrization trick (Kingma & Welling, 2014).
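To make the training objective concrete, below is a minimal sketch of a single-sample reparameterized estimate of $\mathcal{L}_{\text{VAE}}[q]$ for a factorized-Gaussian $q(z|x)$. The PyTorch framework, the Bernoulli likelihood, and the toy linear encoder/decoder stand-ins are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: one-sample reparameterized estimate of L_VAE[q] for a factorized Gaussian q.
# Assumptions: PyTorch, Bernoulli likelihood on binarized pixels, toy linear encoder/decoder.
import torch

def elbo_ffg(x, enc_mu, enc_logvar, decoder):
    mu, logvar = enc_mu(x), enc_logvar(x)
    eps = torch.randn_like(mu)
    z = mu + torch.exp(0.5 * logvar) * eps                      # reparameterization trick
    logits = decoder(z)
    log_px_z = -torch.nn.functional.binary_cross_entropy_with_logits(
        logits, x, reduction="none").sum(dim=-1)                # log p(x|z)
    log_pz = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(dim=-1)   # log p(z)
    log_qz_x = torch.distributions.Normal(mu, torch.exp(0.5 * logvar)).log_prob(z).sum(dim=-1)
    return (log_px_z + log_pz - log_qz_x).mean()                # estimate of L_VAE[q]

# Hypothetical usage with linear maps standing in for the encoder/decoder nets:
x = torch.rand(8, 784).round()
enc_mu, enc_logvar = torch.nn.Linear(784, 20), torch.nn.Linear(784, 20)
decoder = torch.nn.Linear(20, 784)
(-elbo_ffg(x, enc_mu, enc_logvar, decoder)).backward()          # gradients flow through z
```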

2.2 Expressive Approximate Posteriors

There are a number of strategies for increasing the expressiveness of approximate posteriors, going beyond the original factorized-Gaussian. We briefly summarize normalizing flows and auxiliary variables.

2.2.1 Normalizing Flows

Normalizing flows (Rezende & Mohamed, 2015) are a change-of-variables procedure for constructing complex distributions by transforming probability densities through a series of invertible mappings. Specifically, if we transform a random variable $z$ with distribution $q(z)$ through an invertible mapping $f$, the resulting random variable $z' = f(z)$ has the distribution:

$$q(z') = q(z)\left|\det \frac{\partial z'}{\partial z}\right|^{-1}. \qquad (3)$$

By successively applying these transformations, we can build arbitrarily complex distributions. Stacking these transformations remains tractable because the determinant of a composition decomposes into a product of the individual determinants: $\det(AB) = \det(A)\det(B)$. An important property of these transformations is that we can take expectations with respect to the transformed density $q(z')$ without explicitly knowing its formula, due to the law of the unconscious statistician (LOTUS):

$$\mathbb{E}_{q(z')}\!\left[h(z')\right] = \mathbb{E}_{q(z)}\!\left[h(f(z))\right]. \qquad (4)$$

Using equations (3) and (4), the lower bound with the transformed approximation can be written as:

$$\mathcal{L}[q] = \mathbb{E}_{q(z)}\!\left[\log p(x, z') - \log q(z) + \log\left|\det \frac{\partial z'}{\partial z}\right|\right]. \qquad (5)$$

The main constraint on these transformations is that the determinant of their Jacobian needs to be easily computable.
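As a quick numerical illustration of Eqns. (3) and (4), the sketch below pushes samples from a standard normal through a one-dimensional affine map; the transform and all values are hypothetical, chosen only so the LOTUS estimate can be checked against a closed form.

```python
# Sketch: change of variables (Eqn. 3) and LOTUS (Eqn. 4) for a toy affine flow z' = a*z + b.
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 0.5                               # hypothetical invertible transform
z = rng.standard_normal(100_000)              # samples from the base density q(z) = N(0, 1)

def log_q_prime(zp):
    """Density of z' via Eqn. (3); here |det dz'/dz| = |a|."""
    z_inv = (zp - b) / a
    return -0.5 * (z_inv**2 + np.log(2 * np.pi)) - np.log(abs(a))

# LOTUS: E_{q(z')}[h(z')] estimated using base samples only, no explicit q(z') required.
h = lambda zp: zp**2
lotus_estimate = np.mean(h(a * z + b))        # Monte Carlo estimate
analytic_value = a**2 + b**2                  # closed form, 4.25
print(lotus_estimate, analytic_value)
```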

2.2.2 Auxiliary Variables

Deep generative models can be extended with auxiliary variables which leave the generative model unchanged but make the variational distribution more expressive. Just as hierarchical Bayesian models induce dependencies between data, hierarchical variational models can induce dependencies between latent variables. The addition of an auxiliary variable $v$ changes the lower bound to:

$$\mathcal{L}_{\text{aux}}[q] = \mathbb{E}_{q(z,v|x)}\!\left[\log \frac{p(x,z)\, r(v|x,z)}{q(z,v|x)}\right] \qquad (6)$$
$$= \mathcal{L}_{\text{VAE}}[q(z|x)] - \mathbb{E}_{q(z|x)}\!\left[D_{KL}\!\left(q(v|z,x)\,\|\,r(v|x,z)\right)\right], \qquad (7)$$

where $r(v|x,z)$ is called the reverse model. From Eqn. (7), we see that this bound is looser than the regular ELBO; however, the extra flexibility provided by the auxiliary variable can result in a higher lower bound. This idea has been employed in works such as auxiliary deep generative models (ADGM, Maaløe et al. (2016)), hierarchical variational models (HVM, Ranganath et al. (2016)) and Hamiltonian variational inference (HVI, Salimans et al. (2015)).
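Below is a small sketch of a Monte Carlo estimate of the bound in Eqn. (6), with factorized Gaussians standing in for every factor. The specific distributions, scales, and dependence structure are hypothetical, used only to show how the reverse model $r(v|x,z)$ enters the estimator.

```python
# Sketch: Monte Carlo estimate of the auxiliary-variable bound in Eqn. (6).
# Assumptions: toy Gaussian stand-ins for p(x|z), p(z), q(v|x), q(z|v,x), and r(v|x,z).
import torch
from torch.distributions import Normal

torch.manual_seed(0)
d = 2                                                    # latent dimensionality (hypothetical)
x = torch.rand(d)                                        # a toy "datapoint"

prior = Normal(torch.zeros(d), torch.ones(d))            # p(z)
q_v = Normal(torch.zeros(d), torch.ones(d))              # q(v|x)
q_z_given_v = lambda v: Normal(0.3 * v, torch.ones(d))   # q(z|v,x)
r_v_given_z = lambda z: Normal(0.3 * z, torch.ones(d))   # reverse model r(v|x,z)

n = 10_000
v = q_v.sample((n,))
z = q_z_given_v(v).sample()

log_p_xz = Normal(z, 0.5).log_prob(x).sum(-1) + prior.log_prob(z).sum(-1)   # log p(x,z)
log_q = q_v.log_prob(v).sum(-1) + q_z_given_v(v).log_prob(z).sum(-1)        # log q(z,v|x)
log_r = r_v_given_z(z).log_prob(v).sum(-1)                                  # log r(v|x,z)

L_aux = (log_p_xz + log_r - log_q).mean()                # estimate of Eqn. (6)
print(L_aux)
```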

3 Methods

3.1 Approximation and Amortization Gaps

The inference gap $\mathcal{G}$ is the difference between the marginal log-likelihood $\log p(x)$ and a lower bound $\mathcal{L}[q]$. Given the distribution in the family that maximizes the bound, $q^*(z|x) = \arg\max_{q \in \mathcal{Q}} \mathcal{L}[q]$, the inference gap decomposes as the sum of approximation and amortization gaps:

$$\mathcal{G} = \log p(x) - \mathcal{L}[q] = \underbrace{\log p(x) - \mathcal{L}[q^*]}_{\text{approximation}} \;+\; \underbrace{\mathcal{L}[q^*] - \mathcal{L}[q]}_{\text{amortization}}.$$

For VAEs, we can translate the gaps to KL divergences by rearranging Eqn. (1):

$$\mathcal{G}_{\text{VAE}} = D_{KL}\!\left(q(z|x)\,\|\,p(z|x)\right) = D_{KL}\!\left(q^*(z|x)\,\|\,p(z|x)\right) + \left(\mathcal{L}_{\text{VAE}}[q^*] - \mathcal{L}_{\text{VAE}}[q]\right). \qquad (8)$$
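As a concrete check of the decomposition, here is the arithmetic for the MNIST $q_{FFG}$ column of Table 2 written out as a short Python snippet (the values come straight from the table; only the variable names are invented).

```python
# Gap decomposition using the MNIST q_FFG column of Table 2 (all values in nats).
log_px_hat   = -89.80     # marginal log-likelihood estimate, log p̂(x)
elbo_optimal = -91.23     # L_VAE[q*], ELBO with the locally optimized FFG
elbo_amort   = -92.57     # L_VAE[q],  ELBO with the amortized encoder output

approximation_gap = log_px_hat - elbo_optimal     # 1.43
amortization_gap  = elbo_optimal - elbo_amort     # 1.34
inference_gap     = log_px_hat - elbo_amort       # 2.77 = 1.43 + 1.34
```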

3.2 Flexible Approximate Posteriors

Our experiments involve expressive approximations which use flow transformations and auxiliary variables. The flow transformation that we employ is of the same type as the transformations of Real NVP (Dinh et al., 2017). We partition the latent variable $z$ into two, $z_1$ and $z_2$, then perform the following transformations:

$$z_1' = z_1 \circ \sigma_1(z_2) + \mu_1(z_2) \qquad (9)$$
$$z_2' = z_2 \circ \sigma_2(z_1') + \mu_2(z_1'), \qquad (10)$$

where $\sigma_1, \sigma_2, \mu_1, \mu_2$ are differentiable mappings parameterized by neural nets and $\circ$ denotes the Hadamard or element-wise product. We partition the latent variable by simply indexing the elements of the first half and the second half. The determinant of the combined transformation's Jacobian, $\left|\det \frac{\partial z'}{\partial z}\right|$, can be easily evaluated. See Section 6.4 of the Supplementary material for a derivation. The lower bound of this approximation is the same as Eqn. (5). We refer to this approximation as $q_{Flow}$.
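A minimal sketch of the coupling transform in Eqns. (9) and (10) together with its log-determinant follows. The tiny numpy networks, the softplus used to keep the scales positive, and the dimensionalities are assumptions for illustration, not the configuration used in the experiments.

```python
# Sketch: one Real-NVP-style coupling step (Eqns. 9-10) and its log |det dz'/dz|.
import numpy as np

rng = np.random.default_rng(0)
d, half = 8, 4                                       # hypothetical latent dimensionality

def make_net():
    """Tiny one-hidden-layer net standing in for sigma_i / mu_i."""
    W1, W2 = rng.normal(size=(half, 16)), rng.normal(size=(16, half))
    return lambda h: np.tanh(h @ W1) @ W2

softplus = lambda a: np.log1p(np.exp(a))             # keeps the scales positive (assumption)
sigma1, mu1, sigma2, mu2 = make_net(), make_net(), make_net(), make_net()

def coupling_flow(z):
    z1, z2 = z[:half], z[half:]                      # index-based partition of z
    s1 = softplus(sigma1(z2))
    z1p = z1 * s1 + mu1(z2)                          # Eqn. (9)
    s2 = softplus(sigma2(z1p))
    z2p = z2 * s2 + mu2(z1p)                         # Eqn. (10)
    log_det = np.log(s1).sum() + np.log(s2).sum()    # determinant factorizes over the two steps
    return np.concatenate([z1p, z2p]), log_det

z = rng.standard_normal(d)                           # sample from the base q(z|x)
z_prime, log_det = coupling_flow(z)
```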

We also experiment with an approximation that combines flow transformations and auxiliary variables. Let $z$ be the variable of interest and $v$ the auxiliary variable. The flow is the same as equations (9) and (10), where $z_1$ is replaced with $z$ and $z_2$ with $v$. We refer to this approximate distribution as $q_{AF}$, where AF stands for auxiliary flow. We train this model by optimizing the following bound:

$$\mathcal{L}_{\text{AF}}[q] = \mathbb{E}_{q(z,v|x)}\!\left[\log \frac{p(x,z)\, r(v|x,z)}{q(z,v|x)}\right]. \qquad (11)$$

Note that this lower bound is looser as explained in Section 2.2.2. We refer readers to Section 6.1.2 in the Supplementary material for specific details of the flow configuration adopted in the experiments.

3.3 Marginal Log-Likelihood Estimation and Evidence Lower Bounds

In this section, we describe the estimates we use to compute the bounds of the inference gaps: $\mathcal{L}_{\text{VAE}}[q]$, $\mathcal{L}_{\text{VAE}}[q^*]$, and $\log \hat{p}(x)$. We use two bounds to estimate the marginal log-likelihood $\log p(x)$: IWAE (Burda et al., 2016) and AIS (Neal, 2001).

The IWAE bound takes multiple importance weighted samples from the variational distribution resulting in a tighter lower bound than the VAE bound. The IWAE bound is computed as:

$$\mathcal{L}_{\text{IWAE}}[q] = \mathbb{E}_{z_1,\ldots,z_k \sim q(z|x)}\!\left[\log \frac{1}{k}\sum_{i=1}^{k} \frac{p(x, z_i)}{q(z_i|x)}\right]. \qquad (12)$$

As the number of importance samples approaches infinity, the bound approaches the marginal log-likelihood. It is often used as an evaluation metric for generative models (Burda et al., 2016; Kingma et al., 2016). AIS is potentially an even tighter lower bound. AIS weights samples from distributions which are sequentially annealed from an initial proposal distribution to the true posterior. See Section 6.5 in the Supplementary material for further details regarding AIS. To compute the AIS bound, we use 100 chains, each with 10000 intermediate distributions, where each transition consists of one HMC trajectory with 10 leapfrog steps. The initial distribution for AIS is the prior, so that it is encoder-independent.
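For reference, here is a minimal sketch of the k-sample IWAE estimator in Eqn. (12), computed in log space with log-sum-exp for numerical stability; the callables and the toy Gaussian model used to exercise it are hypothetical.

```python
# Sketch: k-sample IWAE bound (Eqn. 12) for a single datapoint x.
# Assumptions: PyTorch; `sample_q`, `log_q`, `log_joint` are hypothetical callables.
import torch
from torch.distributions import Normal

def iwae_bound(x, sample_q, log_q, log_joint, k=5000):
    z = sample_q(x, k)                                   # (k, d) samples from q(z|x)
    log_w = log_joint(x, z) - log_q(x, z)                # (k,) importance log-weights
    return torch.logsumexp(log_w, dim=0) - torch.log(torch.tensor(float(k)))

# Toy Gaussian model to exercise the estimator:
d = 2
base = Normal(torch.zeros(d), torch.ones(d))
sample_q = lambda x, k: base.sample((k,))
log_q = lambda x, z: base.log_prob(z).sum(-1)
log_joint = lambda x, z: base.log_prob(z).sum(-1) + Normal(z, 0.5).log_prob(x).sum(-1)

x = torch.rand(d)
print(iwae_bound(x, sample_q, log_q, log_joint, k=1000))
```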

We estimate the marginal log-likelihood by independently computing our two tightest lower bounds and then taking the maximum: $\log \hat{p}(x) = \max\!\left(\mathcal{L}_{\text{AIS}}, \mathcal{L}_{\text{IWAE}}\right)$.

The $\mathcal{L}_{\text{VAE}}[q]$ and $\mathcal{L}_{\text{VAE}}[q^*]$ bounds are the standard ELBOs, $\mathcal{L}_{\text{VAE}}$, from Eqn. (2), computed with either the amortized $q$ or the optimal $q^*$ (see below). When computing $\mathcal{L}_{\text{VAE}}[q]$ and $\mathcal{L}_{\text{VAE}}[q^*]$, we use 5000 samples.

Figure 2: True Posterior and Approximate Distributions of a VAE with 2D latent space. The columns (A, B, C, D) represent four different datapoints. The green distributions are the true posterior distributions, highlighting the mismatch with the blue approximations. Amortized: Variational parameters learned over the entire dataset. Optimal: Variational parameters optimized for each individual datapoint. Flow: Using a flexible approximate distribution.

3.4 Local Optimization of the Approximate Distribution

To compute $\mathcal{L}_{\text{VAE}}[q^*]$, we optimize the parameters of the variational distribution for every datapoint. For the local optimization of $q_{FFG}$, we initialize the mean and variance at the prior, i.e. $\mathcal{N}(\mathbf{0}, \mathbf{I})$. We optimize the mean and variance using the Adam optimizer (Kingma & Ba, 2014). To determine convergence, after every 100 optimization steps, we compute the average of the previous 100 ELBO values and compare it to the best achieved average. If it does not improve for 10 consecutive iterations, the optimization is terminated. For $q_{Flow}$ and $q_{AF}$, the same process is used to optimize all of their parameters. All neural nets for the flow were initialized with a variant of the Xavier initialization (Glorot & Bengio, 2010). We use 100 Monte Carlo samples to compute the ELBO to reduce variance.
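A minimal sketch of this per-datapoint optimization loop and its stopping rule, assuming PyTorch and a hypothetical `elbo` callable that returns a 100-sample Monte Carlo ELBO estimate for a single datapoint.

```python
# Sketch: local optimization of a factorized-Gaussian q for one datapoint, with the
# "average of the last 100 ELBOs must improve within 10 checks" stopping rule.
import torch

def optimize_local_q(x, elbo, latent_dim, check_every=100, patience=10, max_steps=200_000):
    mu = torch.zeros(latent_dim, requires_grad=True)         # initialize at the prior N(0, I)
    logvar = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([mu, logvar])
    history, best_avg, stalled = [], -float("inf"), 0
    for step in range(1, max_steps + 1):
        opt.zero_grad()
        loss = -elbo(x, mu, logvar)                           # maximize the ELBO
        loss.backward()
        opt.step()
        history.append(-loss.item())
        if step % check_every == 0:
            avg = sum(history[-check_every:]) / check_every   # average of previous 100 ELBOs
            stalled = 0 if avg > best_avg else stalled + 1
            best_avg = max(best_avg, avg)
            if stalled >= patience:                           # no improvement for 10 checks
                break
    return mu.detach(), logvar.detach()
```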

3.5 Validation of Bounds

The soundness of our empirical analysis depends on the reliability of the marginal log-likelihood estimator. For general importance sampling based estimators, the sample variance of the normalized importance weights can serve as an indicator of accuracy (Geweke, 1989; Neal, 2001). This quantitative measure, however, can also be unreliable, e.g. when the proposal misses an important mode of the target distribution (Neal, 2001).

In this work, we follow Wu et al. (2017) and empirically validate our AIS estimates with Bidirectional Monte Carlo (BDMC, Grosse et al. (2015, 2016)). In addition to the lower bound provided by AIS, BDMC runs AIS chains backward from exact posterior samples to obtain an upper bound on the marginal log-likelihood. It should be noted that BDMC relies on the assumption that the distribution of the simulated data from the model roughly matches that of the real data, because the backward chain is initialized from exact posterior samples (Grosse et al., 2015).

For the MNIST and Fashion-MNIST datasets, BDMC gives a gap within 0.1 nats for a linear-schedule AIS with 100 importance samples on simulated datapoints. For 3-BIT CIFAR, the same AIS setting gives a gap within 1 nat with the sigmoidal annealing schedule (Grosse et al., 2015) on 100 simulated datapoints. Loosely speaking, this should give us confidence in how well our AIS lower bounds reflect the marginal log-likelihood computed on the real data.

|  | MNIST |  | Fashion-MNIST |  | 3-BIT CIFAR |  |
|  | $q_{FFG}$ | $q_{Flow}$ | $q_{FFG}$ | $q_{Flow}$ | $q_{FFG}$ | $q_{Flow}$ |
| $\log \hat{p}(x)$ | -89.80 | -88.94 | -97.47 | -97.41 | -816.9 | -820.56 |
| $\mathcal{L}_{\text{VAE}}[q^*_{Flow}]$ | -90.80 | -90.38 | -98.92 | -99.10 | -820.19 | -822.16 |
| $\mathcal{L}_{\text{VAE}}[q^*_{FFG}]$ | -91.23 | -113.54 | -100.53 | -132.46 | -831.65 | -861.62 |
| $\mathcal{L}_{\text{VAE}}[q]$ | -92.57 | -91.79 | -104.75 | -103.76 | -869.12 | -864.28 |
| Approximation | 1.43 | 1.44 | 3.06 | 1.69 | 14.75 | 1.60 |
| Amortization | 1.34 | 1.41 | 4.22 | 4.66 | 37.47 | 42.12 |
| Inference | 2.77 | 2.85 | 7.28 | 6.35 | 52.22 | 43.72 |

Table 2: Inference Gaps. The columns $q_{FFG}$ and $q_{Flow}$ refer to the variational distribution used for training the model. These lower bounds are computed on the training set and are in units of nats.

4 Related Work

Much of the earlier work on variational inference focused on optimizing the variational parameters locally for each datapoint, e.g. the original Stochastic Variational Inference scheme (SVI, Hoffman et al. (2013)). To scale inference to large datasets, most related works utilize inference networks to amortize the cost of inference over the entire dataset. Our work analyzes the error that these inference networks introduce.

Most relevant to our work is the recent work of Krishnan et al. (2017), which explicitly remarks on two sources of error in variational learning with inference networks, and proposes to optimize approximate inference locally from an initialization output by the inference network. They show improved training on high-dimensional, sparse data with the hybrid method, claiming that local optimization reduces the negative effects of random initialization in the inference network early on in training. Thus their work focuses on reducing the amortization gap early on in training. Similar to this idea, Hoffman (2017) proposes to perform approximate inference during model training with MCMC at an initialization given by a variational distribution. Our work provides a means of explaining these improvements in terms of the sources of inference suboptimality that they reduce.

5 Experimental Results

5.1 Intuition through Visualization

To begin, we would like to gain an intuitive visualization of the gaps presented in Section 3.1. To this end, we trained a VAE with a two-dimensional latent space on MNIST, and in Fig. 2 we show contour plots of various distributions in the latent space. The first row contains contour plots of the true posteriors for four different training datapoints (columns). We have selected these four examples to highlight different inference phenomena. The amortized fully-factorized Gaussian (FFG) row refers to the output of the recognition net, in this case an FFG approximation. Optimal FFG is the FFG that best fits the posterior of the datapoint. Optimal Flow is the optimal fit of a flexible distribution to the same posterior, where the flexible distribution we use is described in Section 3.2.

Posterior A is an example of a distribution where a FFG can fit relatively well. Posterior B is an example of a posterior with dependence between dimensions, demonstrating the limitation of having a factorized approximation. Posterior C highlights a shortcoming of performing amortization with a limited-capacity recognition network, where the amortized FFG shares little support with the true posterior. Posterior D is a bi-modal distribution which demonstrates the ability of the flexible approximation to fit to complex distributions, in contrast to the simple FFG approximation. These observations raise the following question: in more typical VAEs, is the amortization of inference the leading cause of the distribution mismatch, or is it the limited expressiveness of the approximation?

|  | Larger Encoder |  |  |  | Without Entropy Annealing |  |  |  |
|  | MNIST |  | Fashion-MNIST |  | MNIST |  | Fashion-MNIST |  |
|  | $q_{FFG}$ | $q_{Flow}$ | $q_{FFG}$ | $q_{Flow}$ | $q_{FFG}$ | $q_{Flow}$ | $q_{FFG}$ | $q_{Flow}$ |
| $\log \hat{p}(x)$ | -89.61 | -88.99 | -95.99 | -96.18 | -89.82 | -89.52 | -102.56 | -102.88 |
| $\mathcal{L}_{\text{VAE}}[q^*_{Flow}]$ | -90.65 | -90.44 | -97.40 | -97.91 | -90.96 | -90.45 | -103.73 | -104.02 |
| $\mathcal{L}_{\text{VAE}}[q^*_{FFG}]$ | -91.07 | -108.71 | -99.64 | -129.70 | -90.84 | -92.25 | -103.85 | -105.80 |
| $\mathcal{L}_{\text{VAE}}[q]$ | -92.18 | -91.19 | -102.73 | -101.67 | -92.33 | -91.75 | -106.90 | -107.01 |
| Approximation | 1.46 | 1.45 | 3.65 | 1.73 | 1.02 | 0.93 | 1.29 | 1.14 |
| Amortization | 1.11 | 0.75 | 3.09 | 3.76 | 1.49 | 1.30 | 3.05 | 2.29 |
| Inference | 2.56 | 2.20 | 6.74 | 5.49 | 2.51 | 2.23 | 4.34 | 4.13 |

Table 3: Left: Larger Encoder. Right: Models trained without entropy annealing. The columns $q_{FFG}$ and $q_{Flow}$ refer to the variational distribution used for training the model. The lower bounds are computed on the training set and are in units of nats.

5.2 Amortization vs Approximation Gap

In this section, we compare how much the approximation and amortization gaps each contribute to the total inference gap. Table 2 shows the results of inference on the training sets of MNIST, Fashion-MNIST and 3-BIT CIFAR (a binarized version of CIFAR-10; see Section 6.1.3 for details). For each dataset, we trained models with two different approximate posterior distributions: a fully-factorized Gaussian, $q_{FFG}$, and the flexible distribution, $q_{Flow}$. Due to the computational cost of optimizing the local parameters for each datapoint, our evaluation is performed on a subset of 1000 datapoints for MNIST and Fashion-MNIST and a subset of 100 datapoints for 3-BIT CIFAR.

For MNIST, we see that the amortization and approximation gaps each account for nearly half of the inference gap. On the more difficult Fashion-MNIST dataset, the amortization gap is larger than the approximation gap. For CIFAR, we see that the amortization gap is much more significant compared to the approximation gap. Thus, for the three datasets and model architectures that we consider, the amortization gap is likely to be the more prominent cause of inference suboptimality, especially when the dataset becomes more challenging to model. This indicates that improvements in inference will likely be a result of reducing amortization error, rather than approximation errors.

With these results in mind, would simply increasing the capacity of the encoder shrink the amortization gap? We examined this by training the MNIST and Fashion-MNIST models from above with larger encoders. See Section 6.1.2 for implementation details. Table 3 (left) shows the results of this experiment. Comparing to Table 2, we see that, for both datasets and both variational distributions, using a larger encoder decreases the inference gap, and the decrease is mainly due to a reduction in the amortization gap.

5.3 Influence of Flows on the Amortization Gap

The common reasoning for increasing the expressiveness of the approximate posterior is to minimize the difference between the true and approximate distributions, i.e. reduce the approximation gap. However, given that the expressive approximation is often accompanied by many additional parameters, we would like to know how much influence it has on the amortization error.

To investigate this, we trained a VAE on MNIST, discarded the encoder, then retrained encoders with different approximate distributions on the fixed decoder. We fixed the decoder so that the true posterior is constant for all the retrained encoders. The initial encoder was a two-layer MLP with a factorized Gaussian distribution. In order to emphasize a large amortization gap, the retrained encoders had no hidden layers (i.e. just linear transformations). For the retrained encoders, we tested three approximate distributions: fully-factorized Gaussian ($q_{FFG}$), auxiliary flow ($q_{AF}$), and flow ($q_{Flow}$). See Section 3.2 for the details of these distributions.

| Variational Family | $q_{FFG}$ | $q_{AF}$ | $q_{Flow}$ |
| $\log \hat{p}(x)$ | -84.70 | -84.70 | -84.70 |
| $\mathcal{L}_{\text{VAE}}[q^*]$ | -86.61 | -85.48 | -85.13 |
| $\mathcal{L}_{\text{VAE}}[q]$ | -129.83 | -98.58 | -97.99 |
| Approximation | 1.91 | 0.78 | 0.43 |
| Amortization | 43.22 | 13.10 | 12.86 |
| Inference | 45.13 | 13.88 | 13.29 |

Table 4: Influence of expressive approximations on the amortization gap. The parameters used to increase the flexibility of the approximate distribution also reduce the amortization gap.

The inference gaps of the retrained encoders on the training set are shown in Table 4. As expected, we observe that the small encoder with $q_{FFG}$ has a very large amortization gap. However, when we use $q_{AF}$ or $q_{Flow}$ as the approximate distribution, we see the approximation gap decrease, but more importantly, there is a significant decrease in the amortization gap. This indicates that the parameters used for increasing the complexity of the approximation also play a large role in diminishing the amortization error.

These results are expected given that the parameterization of the Flow distribution can be interpreted as an instance of the RevNet (Gomez et al., 2017), which demonstrated that Real NVP transformations (Dinh et al., 2017) can model complex functions similarly to typical MLPs. Thus the flow transformations we employ should be expected to increase the expressiveness of the approximation while also increasing the capacity of the encoder. The implication of this observation is that models which improve the flexibility of their variational approximation, and attribute their improved results to the increased expressiveness, may in fact owe part of those gains to a reduction in amortization error.

Figure 3: Inference gaps over training epochs on binarized Fashion-MNIST. Blue is the approximation gap. Orange is the amortization gap. Standard is a VAE with the FFG approximation. Flow is a VAE with a Flow approximation.

5.4 Influence of Approximate Posterior on True Posterior

To what extent does the posterior approximation affect the learned model? Turner & Sahani (2011) studied the biases in parameter learning induced by the variational approximation when learning via variational Expectation-Maximization. Similarly, we ask: does a factorized Gaussian approximation cause the true posterior to become more like a factorized Gaussian?

Burda et al. (2016) visually demonstrate that when trained with an importance-weighted approximate posterior, the resulting true posterior is more complex than those trained with factorized Gaussian approximations. Just as it is hard to evaluate a generative model by visually inspecting samples, it is hard to judge how “Gaussian” the true posterior is by visual inspection. We can quantitatively determine how close the posterior is to a fully-factorized Gaussian (FFG) by comparing the marginal log-likelihood estimate $\log \hat{p}(x)$ and the Optimal FFG bound $\mathcal{L}_{\text{VAE}}[q^*_{FFG}]$. This is equivalent to estimating the KL divergence between the optimal Gaussian and the true posterior, $D_{KL}\!\left(q^*_{FFG}(z|x)\,\|\,p(z|x)\right)$.
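Spelling out that estimate with the MNIST numbers from Table 2 (values from the table; variable names are just for illustration):

```python
# D_KL(q*_FFG || p(z|x)) estimated as log p̂(x) − L_VAE[q*_FFG], MNIST columns of Table 2 (nats).
kl_ffg_trained  = -89.80 - (-91.23)    # ≈ 1.43  for the model trained with q_FFG
kl_flow_trained = -88.94 - (-113.54)   # ≈ 24.60 for the model trained with q_Flow
```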

In Table 2 on MNIST, for the FFG-trained model, $\log \hat{p}(x) - \mathcal{L}_{\text{VAE}}[q^*]$ is nearly the same for both $q^*_{FFG}$ and $q^*_{Flow}$. In contrast, on the model trained with $q_{Flow}$, this gap is much larger for $q^*_{FFG}$ than for $q^*_{Flow}$. This suggests that the true posterior of an FFG-trained model is closer to FFG than the true posterior of the Flow-trained model. The same observation can be made on the Fashion-MNIST dataset. This implies that the decoder can learn a true posterior that better fits the approximation.

These observations justify our results of Section 5.2, which showed that the amortization error is often the main cause of inference suboptimality. One reason for this is that the generator accommodates the choice of approximation, thus reducing the approximation error.

| Generator Hidden Layers | 0 | 2 | 4 |
| $\log \hat{p}(x)$ | -100.52 | -84.78 | -82.19 |
| $\mathcal{L}_{\text{VAE}}[q^*_{FFG}]$ | -104.42 | -86.61 | -83.82 |
| Approximation Gap | 3.90 | 1.83 | 1.63 |

Table 5: Increased decoder capacity reduces the approximation gap.

Given that we have seen that the generator can accommodate the choice of approximation, our next question is whether a generator with more capacity increases its ability to fit to the approximation. To this end, we trained VAEs with decoders of different sizes and measured the approximation gaps on the training set. Specifically, we trained decoders with 0, 2, and 4 hidden layers on MNIST. See Table 5 for the results. We see that as the capacity of the decoder increases, the approximation gap decreases. This result implies that the more flexible the generator is, the less flexible the approximate distribution needs to be to ensure accurate inference.

5.5 Inference Generalization

How well does amortized inference generalize at test time? We address this question by visualizing the gaps on training and validation datapoints across the training epochs. In Fig. 3, the models are trained on 50000 binarized Fashion-MNIST datapoints and the gaps are computed on a subset of 100 training and validation datapoints. The top and bottom boundaries of the blue region represent $\log \hat{p}(x)$ and $\mathcal{L}_{\text{VAE}}[q^*]$. The bottom boundary of the orange region represents $\mathcal{L}_{\text{VAE}}[q]$. In other words, the blue region is the approximation gap and the orange is the amortization gap.

In Fig. 3, the Standard model (top left) refers to a VAE with latent size 20 trained with a factorized Gaussian approximate posterior. In this case, the encoder and decoder each have two hidden layers consisting of 200 hidden units. The Flow model (top right) augments the Standard model with a $q_{Flow}$ variational distribution. The Larger Decoder and Larger Encoder models use factorized Gaussian distributions and increase the number of hidden layers to three and the number of units in each layer to 500.

Firstly, we observe that for all models, the approximation gaps on the training and validation sets are roughly equivalent. This indicates that the true posteriors of the held-out data are similar to those of the training data. Secondly, we note that for all models, the encoder overfits more than the decoder. These observations resonate with the encoder-overfitting findings of Wu et al. (2017).

How does increasing decoder capacity affect inference on held-out data? We know from Section 5.4 that increasing generator capacity results in a posterior that better fits the approximation making posterior inference easier. Furthermore, the Larger Decoder plot of Fig. 3 shows that increasing generator capacity causes the model to be more prone to overfitting. Thus, there is a tradeoff between ease of inference and decoder overfitting.

5.5.1 Encoder Capacity and Approximation Expressiveness

We have seen in Sections 5.2 and 5.3 that expressive approximations as well as increasing encoder capacity can lead to a reduction in the amortization gap. This leads us to the following question: when should we increase encoder capacity versus increasing the expressiveness of the approximation? We answer this question in terms of how well each model can generalize its efficient inference (recognition network and variational distribution) to held-out data.

In Fig. 3, we see that the Flow model and the Larger Encoder model achieve similar $\log \hat{p}(x)$ on the validation set at the end of training. However, the $\mathcal{L}_{\text{VAE}}[q]$ bound of the Larger Encoder model is significantly lower than that of the Flow model, due to the encoder overfitting to the training data. Although they both model the data nearly equally well, the recognition net of the Larger Encoder model is no longer suitable for performing inference on the held-out data due to overfitting. Thus a potential rationale for utilizing expressive approximations is that they improve generalization to held-out data in comparison to increasing the encoder capacity.

We highlight that, in many scenarios, efficient test-time inference is not required and consequently encoder overfitting is not an issue, since we can use slower, encoder-independent methods to estimate $\log p(x)$, such as AIS, IWAE with local optimization, or potentially retraining the encoder on the held-out data. In contrast, when efficient test-time inference is required, encoder generalization is important and expressive approximations are likely advantageous.

5.6 Annealing the Entropy

Typical warm-up (Bowman et al., 2015; Sønderby et al., 2016) refers to annealing the weight of the $D_{KL}\!\left(q(z|x)\,\|\,p(z)\right)$ term during training. This can also be interpreted as performing maximum likelihood estimation (MLE) early on during training. This optimization technique is known to help prevent the latent variable from degrading to the prior (Burda et al., 2016; Sønderby et al., 2016). We employ a similar annealing scheme during training by annealing the entropy of the approximate distribution:

$$\mathbb{E}_{q(z|x)}\!\left[\log p(x, z)\right] + \beta\, \mathbb{H}\!\left[q(z|x)\right],$$

where $\beta$ is annealed from 0 to 1 over training. This can be interpreted as performing maximum a posteriori (MAP) estimation in the initial phase of training.
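A small sketch of this entropy-annealed objective for a factorized-Gaussian $q$, assuming PyTorch and a hypothetical linear warm-up schedule; the schedule length and the `log_joint` callable are placeholders, not the settings used in the paper.

```python
# Sketch: entropy-annealed objective E_q[log p(x,z)] + beta * H[q(z|x)], beta ramped 0 -> 1.
import math
import torch

def annealed_objective(x, mu, logvar, log_joint, step, warmup_steps=50_000):
    beta = min(1.0, step / warmup_steps)                 # hypothetical linear annealing schedule
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(mu)                  # reparameterized sample from q(z|x)
    log_p_xz = log_joint(x, z)                           # one-sample estimate of E_q[log p(x,z)]
    entropy = 0.5 * (logvar + math.log(2 * math.pi) + 1).sum()   # closed-form entropy of an FFG
    return log_p_xz + beta * entropy                     # beta = 0 recovers a MAP-like objective
```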

We find that warm-up techniques, such as annealing the entropy, are important for allowing the true posterior to be more complex. Table 3 (right) shows the results of models trained without the entropy annealing schedule. Comparing these results to Table 2, we observe that the difference between $\log \hat{p}(x)$ and $\mathcal{L}_{\text{VAE}}[q^*_{FFG}]$ is significantly smaller without entropy annealing. This indicates that the true posterior is more Gaussian when entropy annealing is not used. This suggests that, in addition to preventing the latent variable from degrading to the prior, entropy annealing allows the true posterior to better utilize the flexibility of the expressive approximation.

6 Conclusion

In this paper, we investigated how encoder capacity, approximation choice, decoder capacity, and model optimization influence inference suboptimality in terms of the approximation and amortization gaps. We found that the amortization gap can be a leading source of inference suboptimality and that the generator can reduce the approximation gap by learning a true posterior that fits the choice of approximation. We showed that the parameters used to increase the expressiveness of the approximation play a role in generalizing inference rather than simply improving the complexity of the approximation. We confirmed that increasing the capacity of the encoder reduces the amortization error. Additionally, we demonstrated that optimization techniques, such as entropy annealing, help the generative model to better utilize the flexibility of expressive variational distributions. Analyzing these gaps can be useful for guiding improvements in VAEs. Future work includes evaluating other types of expressive approximations, more complex likelihood functions, and more challenging datasets.

References

  • Bowman et al. (2015) Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A. M., Jozefowicz, R., and Bengio, S. Generating Sentences from a Continuous Space. ArXiv e-prints, November 2015.
  • Burda et al. (2016) Burda, Y., Grosse, R., and Salakhutdinov, R. Importance weighted autoencoders. In ICLR, 2016.
  • Clevert et al. (2015) Clevert, Djork-Arné, Unterthiner, Thomas, and Hochreiter, Sepp. Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289, 2015.
  • Dinh et al. (2017) Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using Real NVP. ICLR, 2017.
  • Geweke (1989) Geweke, John. Bayesian inference in econometric models using monte carlo integration. Econometrica: Journal of the Econometric Society, pp. 1317–1339, 1989.
  • Glorot & Bengio (2010) Glorot, Xavier and Bengio, Yoshua. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256, 2010.
  • Gomez et al. (2017) Gomez, Aidan N, Ren, Mengye, Urtasun, Raquel, and Grosse, Roger B. The reversible residual network: Backpropagation without storing activations. In Advances in Neural Information Processing Systems, pp. 2211–2221, 2017.
  • Grosse et al. (2015) Grosse, R., Ghahramani, Z., and Adams, R. P. Sandwiching the marginal likelihood using bidirectional monte carlo. arXiv preprint arXiv:1511.02543, 2015.
  • Grosse et al. (2016) Grosse, Roger B, Ancha, Siddharth, and Roy, Daniel M. Measuring the reliability of mcmc inference with bidirectional monte carlo. In Advances in Neural Information Processing Systems, pp. 2451–2459, 2016.
  • Hoffman (2017) Hoffman, Matthew D. Learning deep latent gaussian models with markov chain monte carlo. In International Conference on Machine Learning, pp. 1510–1519, 2017.
  • Hoffman et al. (2013) Hoffman, Matthew D, Blei, David M, Wang, Chong, and Paisley, John. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.
  • Jarzynski (1997) Jarzynski, C. Nonequilibrium equality for free energy differences. Physical Review Letters, 78(14):2690, 1997.
  • Kingma & Ba (2014) Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Kingma & Welling (2014) Kingma, D.P. and Welling, M. Auto-Encoding Variational Bayes. In ICLR, 2014.
  • Kingma et al. (2016) Kingma, D.P., Salimans, T., Jozefowicz, R., Chen, X., Sutskever, I., and Welling, M. Improving Variational Inference with Inverse Autoregressive Flow. NIPS, 2016.
  • Krishnan et al. (2017) Krishnan, R. G., Liang, D., and Hoffman, M. On the challenges of learning with inference networks on sparse, high-dimensional data. ArXiv e-prints, October 2017.
  • Krizhevsky & Hinton (2009) Krizhevsky, Alex and Hinton, Geoffrey. Learning multiple layers of features from tiny images. University of Toronto, 2009.
  • Larochelle & Bengio (2008) Larochelle, Hugo and Bengio, Yoshua. Classification using discriminative restricted boltzmann machines. In Proceedings of the 25th international conference on Machine learning, pp. 536–543. ACM, 2008.
  • LeCun et al. (1998) LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
  • Maaløe et al. (2016) Maaløe, L., Sønderby, CK., Sønderby, SK., and Winther, O. Auxiliary Deep Generative Models. ICML, 2016.
  • Neal (2001) Neal, R.M. Annealed importance sampling. Statistics and Computing, 2001.
  • Ranganath et al. (2016) Ranganath, R., Tran, D., and Blei, D. M. Hierarchical Variational Models. ICML, 2016.
  • Rezende & Mohamed (2015) Rezende, D.J. and Mohamed, S. Variational Inference with Normalizing Flows. In ICML, 2015.
  • Rezende et al. (2014) Rezende, D.J., Mohamed, S., and Wierstra, D. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. ICML, 2014.
  • Salakhutdinov & Murray (2008) Salakhutdinov, R. and Murray, I. On the quantitative analysis of deep belief networks. In Proceedings of the 25th international conference on Machine learning, pp. 872–879. ACM, 2008.
  • Salimans et al. (2015) Salimans, T., Kingma, D.P., and Welling, M. Markov chain monte carlo and variational inference: Bridging the gap. In ICML, 2015.
  • Sønderby et al. (2016) Sønderby, Casper Kaae, Raiko, Tapani, Maaløe, Lars, Sønderby, Søren Kaae, and Winther, Ole. Ladder variational autoencoders. In Advances in Neural Information Processing Systems, pp. 3738–3746, 2016.
  • Tomczak & Welling (2016) Tomczak, J. M. and Welling, M. Improving Variational Auto-Encoders using Householder Flow. ArXiv e-prints, November 2016.
  • Tomczak & Welling (2017) Tomczak, J. M. and Welling, M. Improving Variational Auto-Encoders using convex combination linear Inverse Autoregressive Flow. ArXiv e-prints, June 2017.
  • Turner & Sahani (2011) Turner, R and Sahani, M. Two problems with variational expectation maximisation for time-series models. inference and learning in dynamic models. Cambridge University Press, pp. 104–123, 2011.
  • Wu et al. (2017) Wu, Y., Burda, Y., Salakhutdinov, R., and Grosse, R. On the Quantitative Analysis of Decoder-Based Generative Models. ICLR, 2017.
  • Xiao et al. (2017) Xiao, Han, Rasul, Kashif, and Vollgraf, Roland. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. github.com/zalandoresearch/fashion-mnist, 2017.