1 Introduction
Deep generative models seek rich latent representations of data, and provide a mechanism for sampling new data. Generating novel and plausible data points is intrinsically valuable, while also beneficial for agents that plan and simulate interactions with their environment.
A popular approach to generative modeling is with variational autoencoders (VAE, [22]). A major challenge in VAEs, however, is that they assume a factorial posterior, which is widely known to limit their flexibility [31, 21, 27, 4, 17, 37, 3, 40]. Further, VAEs do not offer exact density estimation — a requirement in many statistical procedures.
Normalizing flows (NF) play an important role in the recent developments of both density estimation and variational inference [31]. Normalizing flows are smooth, invertible transformations with tractable Jacobians, which can map a complex data distribution to a simple distribution, such as a standard normal. In the context of variational inference, a normalizing flow transforms a simple, known base distribution into a more faithful representation of the true posterior. Flow-based models are also an attractive approach for density estimation because they provide exact density computation and sampling with only a single neural network pass (in some instances). Recent developments in NFs have focused on creating deeper, more complex transformations in order to increase the flexibility of the learned distribution.
Our contribution
In this work we propose a wider, not deeper approach for increasing the expressiveness of the posterior approximation. Our approach, gradient boosted flows (GBF), iteratively adds new NF components to a model based on gradient boosting, where each new NF component is fit to the residuals of the previously trained components. A weight is learned for each component of the GBF model, resulting in an approximate posterior that is a mixture model. Unlike recent work in boosted variational inference [15, 27], our approach is flow-based and can enhance deep generative models with flexible GBF approximate posteriors using the reparameterization trick [22, 32].
GBF complements a number of existing flows, improving performance at the cost of additional training cycles, not additional complexity. However, our analysis highlights the need for analytically invertible flows in order to efficiently boost flow-based models. We explore the "decoder shock" phenomenon, a challenge unique to VAEs that model the approximate posterior with GBF: when GBF begins training a new component, the distribution of samples passed to the VAE's decoder changes abruptly, causing a temporary increase in loss. We propose a training technique to combat "decoder shock" and show that performance steadily improves as more components are added to the model.
Our results demonstrate that GBF improves performance on density estimation tasks, and is capable of modeling data with multiple modes. Lastly, we augment the VAE with GBF variational posteriors, and show image modeling results on par with state-of-the-art NFs.
2 Background
2.1 Variational Inference
Approximate inference plays an important role in fitting complex probabilistic models. Variational inference, in particular, transforms inference into an optimization problem with the goal of finding a variational distribution q(z|x) that closely approximates the true posterior p(z|x), where x are the observed data, z the latent variables, and φ denotes the learned variational parameters [18, 41, 1]. Writing the log-likelihood of the data in terms of the approximate posterior reveals:
\log p(x) = \mathbb{E}_{q(z|x)}\left[ \log p(x, z) - \log q(z|x) \right] + \mathrm{KL}\left( q(z|x) \,\|\, p(z|x) \right) \quad (1)
Since the second term in (1) is the Kullback–Leibler (KL) divergence, which is non-negative, the first term forms a lower bound on the log-likelihood of the data, and hence is referred to as the evidence lower bound (ELBO).
2.2 Variational Autoencoder
Kingma and Welling, Rezende et al. show that a reparameterization of the ELBO can result in a differentiable bound that is amenable to optimization via stochastic gradients and backpropagation. Further, Kingma and Welling structure the inference problem as an autoencoder, introducing the variational autoencoder (VAE) and minimizing the negative ELBO \mathcal{F}(x). Rewriting \mathcal{F}(x) as:
\mathcal{F}(x) = -\mathbb{E}_{q(z|x)}\left[ \log p(x|z) \right] + \mathrm{KL}\left( q(z|x) \,\|\, p(z) \right) \quad (2)
shows the probabilistic decoder p(x|z), and highlights how the VAE encodes the latent variables z with the variational posterior q(z|x), but is regularized with the prior p(z).
2.3 Normalizing Flows
Normalizing flows are a method for density estimation and improving inference. In the original formulation, Rezende and Mohamed modify the VAE's posterior approximation, applying a chain of K transformations to the inference network's output z_0, giving:

z_K = f_K \circ f_{K-1} \circ \dots \circ f_1(z_0) \quad (3)

where each transformation f_k is an invertible, smooth mapping. By the chain rule and the inverse function theorem, the random variable z_K = f_K \circ \dots \circ f_1(z_0) has a computable density [36, 35]:

\log q_K(z_K) = \log q_0(z_0) - \sum_{k=1}^{K} \log \left| \det \frac{\partial f_k}{\partial z_{k-1}} \right| \quad (4)
Thus, a NF-based approximate posterior optimizes:

-\mathcal{F}(x) = \mathbb{E}_{q_0(z_0)}\left[ \log p(x, z_K) - \log q_0(z_0) + \sum_{k=1}^{K} \log \left| \det \frac{\partial f_k}{\partial z_{k-1}} \right| \right] \quad (5)

where q_0 is a known base distribution.
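As a concrete, purely illustrative check of the change-of-variables formula in (4), consider a single elementwise affine flow, for which the transformed density is also available in closed form (the flow parameters below are hypothetical):

```python
import numpy as np

def log_normal(v, mean=0.0, std=1.0):
    # log density of an isotropic normal, summed over dimensions
    return (-0.5 * np.log(2 * np.pi * std**2) - (v - mean) ** 2 / (2 * std**2)).sum()

# one elementwise affine flow z = a * z0 + b (hypothetical parameters)
a, b = 2.0, 0.5
z0 = np.array([0.3, -1.2, 0.7])
z = a * z0 + b

log_q0 = log_normal(z0)                  # base density N(0, I)
log_det = np.log(abs(a)) * z0.size       # log|det dz/dz0| of an elementwise scale
log_qK = log_q0 - log_det                # Eq. (4) with K = 1

# cross-check: z is exactly N(b, a^2 I), whose density is known in closed form
assert np.allclose(log_qK, log_normal(z, mean=b, std=abs(a)))
```

The same bookkeeping of base log-density minus log-determinant carries over to deeper flows, one term per transformation.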
Building on the planar and radial flows from [31], Sylvester flows generalize planar flows into a more expressive framework [40]. Inverse autoregressive flows (IAF, [21]) and masked autoregressive flows (MAF, [28]) scale to higher dimensions by exploiting the ordering of variables in the flow. Neural autoregressive flows (NAF, [16]) and the more compact Block NAF [6], replace affine transformations with autoregressive neural networks.
While all NFs are invertible, flows based on coupling layers like NICE [7] and its successor RealNVP [8] are analytically invertible. Glow replaced RealNVP's permutation operation with an invertible 1×1 convolution [20]. Neural spline flows provide a method for increasing the flexibility of both coupling and autoregressive transforms using monotonic rational-quadratic splines [9]. Non-linear squared flows [42] offer a highly multimodal transformation and are analytically invertible.
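To make the "analytically invertible" distinction concrete, the following toy sketch (our own illustration, with randomly initialized stand-ins for the scale and translation networks) shows how a RealNVP-style affine coupling layer is inverted exactly in one closed-form pass:

```python
import numpy as np

rng = np.random.default_rng(0)
W_s, W_t = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))  # toy "networks"

def scale_translate(za):
    # hypothetical scale s(za) and translation t(za) functions
    return np.tanh(za @ W_s), za @ W_t

def forward(z):
    za, zb = z[:2], z[2:]
    s, t = scale_translate(za)
    # affine coupling: za passes through unchanged, zb is transformed
    return np.concatenate([za, zb * np.exp(s) + t]), s.sum()  # output, log|det J|

def inverse(y):
    ya, yb = y[:2], y[2:]
    s, t = scale_translate(ya)        # ya == za, so s and t are recomputable
    return np.concatenate([ya, (yb - t) * np.exp(-s)])

z = rng.normal(size=4)
y, logdet = forward(z)
assert np.allclose(inverse(y), z)     # exact analytic inverse, no iteration needed
```

Because the untouched half of the input conditions the transformation of the other half, both directions cost a single network evaluation, which is the property GBF relies on in Section 4.2.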
2.4 Boosted Variational Inference
Gradient boosting [26, 10, 11, 12] considers the minimization of a loss \mathcal{L}(F), where F is a function representing the current model. Consider an additive perturbation F + \epsilon f around F, where f is a function representing a new component. A Taylor expansion as \epsilon \to 0:

\mathcal{L}(F + \epsilon f) = \mathcal{L}(F) + \epsilon \langle \nabla \mathcal{L}(F), f \rangle + O(\epsilon^2) \quad (6)

reveals the functional gradient \nabla \mathcal{L}(F); its negative is the direction that reduces the loss at the current solution.
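The functional-gradient view can be illustrated with the classic regression case: for squared loss the negative functional gradient is simply the residual, so each new boosting component fits the residuals of the current model. A small self-contained sketch (our own illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(6 * x) + 0.1 * rng.normal(size=x.size)

def fit_stump(x, r):
    # best single-split regression stump on the residuals r
    best = None
    for thr in np.linspace(0.05, 0.95, 19):
        left, right = r[x <= thr].mean(), r[x > thr].mean()
        sse = ((r - np.where(x <= thr, left, right)) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, thr, left, right)
    _, thr, left, right = best
    return lambda x: np.where(x <= thr, left, right)

F = np.zeros_like(y)
for _ in range(100):
    residual = y - F            # negative functional gradient of squared loss
    h = fit_stump(x, residual)  # new component fit to the residuals
    F = F + 0.3 * h(x)          # additive update with shrinkage

# the boosted additive model explains most of the variance
assert ((y - F) ** 2).mean() < 0.1 * ((y - y.mean()) ** 2).mean()
```

GBF applies the same residual-fitting idea, but the "components" are normalizing flows and the loss is the variational objective rather than squared error.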
By considering convex combinations (1 - \rho) F + \rho f of distributions F and f with weight \rho \in [0, 1], boosting can be applied outside of the classification and regression settings [33]. Recently, Miller et al. [27] and Guo et al. [15] introduced the idea of boosting variational inference, which improves a variational posterior by iteratively adding simple approximations.
[Figure 1: A toy density-matching example. GBF introduces a second component, which seeks a region of high probability that is not well modeled by the first component (mass is fit to the right ellipsoid). For this toy problem, fine-tuning the components with additional boosted training leads to an even better solution, adjusting the first component to fit the left ellipsoid and re-weighting the first component appropriately, as shown in the Fine-Tune Components panel.]

3 Gradient Boosted Flows
Gradient boosted flows (GBF) build on recent ideas in boosted variational inference in order to increase the flexibility of posteriors approximated with NFs. The GBF approximate posterior is constructed by successively adding new components based on gradient boosting, where each new component is a K-step normalizing flow that is fit to the functional gradient of the loss of the previously trained components.
Gradient boosting assigns a weight ρ_c to the new component, and we restrict ρ_c ∈ [0, 1] to make sure the model stays a valid probability distribution. The resulting variational posterior is a mixture model of the form:

G^{(c)}(z|x) = (1 - \rho_c) \, G^{(c-1)}(z|x) + \rho_c \, g^{(c)}(z|x) \quad (7)

where the new variational posterior G^{(c)} is a convex combination of the fixed components G^{(c-1)} (a mixture model) and the new component g^{(c)}. In our formulation of GBF we consider c = 1, …, C components, where C is fixed and finite.
During training, the first component g^{(1)} is fit using a traditional objective function for fitting NFs, and no boosting is applied. At stages c > 1, there are fixed components G^{(c-1)}, consisting of a convex combination of the K-step flow models from the previous stages, and the new component being trained g^{(c)}. Instead of jointly optimizing w.r.t. both g^{(c)} and ρ_c, we train g^{(c)} until convergence, and then optimize the corresponding weight ρ_c.
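The staged procedure above can be sketched as a training loop. This is our pseudocode-style reading of the procedure; `make_flow`, `train_to_convergence`, and `fit_weight` are hypothetical placeholders for the flow constructor, the boosted-objective optimizer, and the weight update of Section 3.3:

```python
def train_gbf(num_components, make_flow, train_to_convergence, fit_weight):
    components, weights = [], []
    for c in range(num_components):
        g = make_flow()  # new K-step flow component
        if c == 0:
            # first component: plain NF objective, no boosting
            train_to_convergence(g, fixed=None)
        else:
            # later stages: fit g to the residuals of the fixed mixture
            train_to_convergence(g, fixed=(components, weights))
        # only after g converges do we optimize its weight rho_c
        rho = 1.0 if c == 0 else fit_weight(components, weights, g)
        weights = [w * (1.0 - rho) for w in weights] + [rho]
        components.append(g)
    return components, weights

# smoke test with trivial stand-ins: weights always form a convex combination
comps, ws = train_gbf(3, make_flow=lambda: object(),
                      train_to_convergence=lambda g, fixed: None,
                      fit_weight=lambda comps, ws, g: 0.5)
assert len(comps) == 3 and abs(sum(ws) - 1.0) < 1e-12
```

Note how each new weight ρ rescales all previous weights by (1 − ρ), so the mixture in (7) remains a valid probability distribution at every stage.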
3.1 GBF Variational Bound
We seek a variational posterior that closely matches the true posterior, that is, we wish to minimize KL(G^{(c)}(z|x) ‖ p(z|x)). From (1), note that minimizing this KL divergence is equivalent to minimizing the negative ELBO. Thus, a GBF model has the objective:

\mathcal{F}(x) = -\mathbb{E}_{G^{(c)}(z|x)}\left[ \log p(x|z) \right] + \mathrm{KL}\left( G^{(c)}(z|x) \,\|\, p(z) \right) \quad (8)
In order to expand the approximate posterior term we first use the change of variables formula as in (4). Second, since expectations of mixtures can be written as a convex combination of expectations w.r.t. each component (see Appendix C for details), any expectation under the approximate posterior decomposes as:

\mathbb{E}_{G^{(c)}(z|x)}\left[ h(z) \right] = \sum_{j=1}^{c} w_j \, \mathbb{E}_{z_0 \sim q_0(z_0)}\left[ h\big(z_K^{(j)}\big) \right] \quad (9)

where w_j are the mixture weights implied by \rho_1, \dots, \rho_c, and the sample z_0 is transformed into z_K^{(j)} by choosing a single mixture component j based on the weights, and then applying component j's flow transformation z_K^{(j)} = f_K^{(j)} \circ \dots \circ f_1^{(j)}(z_0); hence, the outer sum over component indices reflects integrating over the choice of component. The expectation in (9) is with respect to a base distribution q_0 that is shared by all components.
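The decomposition in (9) rests on the fact that an expectation under a mixture is the convex combination of per-component expectations. A quick Monte Carlo check on a toy two-component model (our own example, with hypothetical affine "flows" applied to a shared base distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.3
f = lambda z: z ** 2          # any test function h(z)

n = 1_000_000
z0 = rng.normal(size=n)       # shared base distribution q_0
z1 = 2.0 * z0 + 1.0           # "flow" of the fixed component
z2 = 0.5 * z0 - 2.0           # "flow" of the new component

# sample the mixture by first choosing a component via Bernoulli(rho)
pick = rng.random(n) < rho
z_mix = np.where(pick, z2, z1)

lhs = f(z_mix).mean()                                     # E under the mixture
rhs = (1 - rho) * f(z1).mean() + rho * f(z2).mean()       # convex combination
assert abs(lhs - rhs) < 0.05
```

The same identity is what lets GBF evaluate the objective (8) component by component, with all components sharing one base distribution.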
3.2 Updates to New Boosting Components
Given the objective function in (8), we proceed with deriving updates for new components. At stage c, we assume G^{(c-1)} to be fixed, and the focus is learning g^{(c)} and ρ_c based on functional gradient descent (FGD). For a moment we disregard the random variable in the expectation of (8)¹, and consider the functional gradient w.r.t. g at G^{(c-1)}:

\nabla_g \mathcal{F} \big|_{g = G^{(c-1)}} = \log G^{(c-1)}(z|x) - \log p(x, z) \quad (10)

¹ A proper handling of this term is detailed in Section 4.1.

Since G^{(c-1)} are the fixed components, minimizing the loss can be achieved by choosing a new component that has the maximum inner product with the negative of the gradient. In other words, we choose a g^{(c)} such that:

g^{(c)} = \operatorname*{argmax}_{g \in \mathcal{G}} \; \big\langle g, \, -\nabla_g \mathcal{F} \big|_{g = G^{(c-1)}} \big\rangle

where \nabla_g \mathcal{F} denotes the functional gradient from (10), and \mathcal{G} denotes the family of K-step normalizing flows.
To avoid letting g^{(c)} degenerate to a point mass at the functional gradient's minimum, we add an entropy regularization term controlled by λ, hence g^{(c)} is:

g^{(c)} = \operatorname*{argmax}_{g \in \mathcal{G}} \; \mathbb{E}_{z \sim g(z|x)}\left[ \log p(x, z) - \log G^{(c-1)}(z|x) \right] + \lambda \, \mathbb{H}[g] \quad (11)

where \mathbb{H}[g] = -\mathbb{E}_{z \sim g}[\log g(z|x)] denotes the entropy.
Despite the differences in derivation, optimization of GBF has a similar structure to other flow-based VAEs. Specifically, with the addition of the entropy regularization term, (11) can be rearranged to show that the new component minimizes (shown here for λ = 1):

\mathbb{E}_{z \sim g^{(c)}(z|x)}\left[ \log G^{(c-1)}(z|x) - \log p(x|z) \right] + \mathrm{KL}\left( g^{(c)}(z|x) \,\|\, p(z) \right) \quad (12)

where z denotes a sample transformed by component c's flow. Hence, similar to the VAE objective from (2), a GBF has a KL-divergence regularization term between the new component and the prior. The key difference for GBF, however, is that the negative log-likelihood -\log p(x|z) is down-weighted for samples that are already explained by the fixed components G^{(c-1)}.
3.3 Updating Component Weights
[Algorithm 1: Stochastic gradient updates of the component weight ρ_c with a decaying learning rate.]
It follows that the weights on each component can be updated by taking the gradient of the loss with respect to ρ_c. At training iterate t we have:

\rho_c^{(t+1)} = \rho_c^{(t)} - \eta_t \left( \mathbb{E}_{z \sim g^{(c)}}\left[ \ell(z) \right] - \mathbb{E}_{z \sim G^{(c-1)}}\left[ \ell(z) \right] \right)

where we've defined:

\ell(z) = \log G^{(c)}(z|x) - \log p(x, z)

To estimate the gradient with Monte Carlo, we draw samples z_g ∼ g^{(c)} and z_G ∼ G^{(c-1)} and update ρ_c with stochastic gradient descent in Algorithm 1. To ensure stable convergence we follow [15] and implement a decaying learning rate.

Updating a component's weight with Algorithm 1 is only needed once after each component converges. We find, however, that results improve by "fine-tuning" components with additional training after the initial training pass. During the fine-tuning stage, for each c we train g^{(c)} and treat all other components as fixed. Figure 1 demonstrates this phenomenon: when a single flow is not flexible enough to model the target distribution, mode-covering behavior arises. Introducing the second component trained with the boosting objective improves results, and consequently the second component's weight is increased. Fine-tuning the first component leads to a better solution and assigns equal weight to the two components. We also witness improvements in VAEs with GBF variational posteriors after fine-tuning on real datasets (see Figure 3).
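The weight update can be sketched as projected stochastic gradient descent with a decaying step size. In this toy illustration (ours, not the authors' code), `estimate_grad` is a hypothetical stand-in for the Monte Carlo gradient estimate built from samples of g^{(c)} and G^{(c-1)}:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_grad(rho):
    # stand-in for the Monte Carlo gradient: in GBF this is a difference of
    # expected losses under samples from g^{(c)} and G^{(c-1)}; here we
    # pretend the true gradient is (rho - 0.4), corrupted by sampling noise
    return (rho - 0.4) + 0.1 * rng.normal()

rho = 0.9
for t in range(1, 2001):
    lr = 1.0 / (1.0 + 0.1 * t)           # decaying learning rate, as in [15]
    rho = rho - lr * estimate_grad(rho)  # stochastic gradient step
    rho = float(np.clip(rho, 0.0, 1.0))  # project so the mixture stays valid

assert abs(rho - 0.4) < 0.05             # converges near the optimum
```

The projection onto [0, 1] enforces the constraint on ρ_c from Section 3, and the decaying step size averages out the Monte Carlo noise over iterations.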
4 GBF Implementation Novelties
4.1 Reparameterization Trick with GBF
Because the objective (8) includes an intractable expectation with respect to a continuous-valued random variable, evaluating the functional gradient requires the reparameterization trick [22, 32]. We compute an unbiased estimate of the gradient by reparameterizing the latent variable z in terms of a known base distribution and a differentiable transformation. When q_0 = \mathcal{N}(\mu, \sigma^2), we have z_0 = \mu + \sigma \odot \epsilon where \epsilon \sim \mathcal{N}(0, I). Thus, we form the Monte Carlo estimator of the individual-datapoint negative ELBO, and write the functional gradient:

\nabla_g \mathcal{F} \approx \log G^{(c-1)}(z_g | x) - \log p(x, z_g) \quad (13)

where z_g denotes a sample from the new component g^{(c)}.
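A minimal reparameterization sketch (our own toy example, separate from the paper's model): writing z = μ + σε with ε ~ N(0, I) lets gradients pass through the sampling step, and for f(z) = z² the estimated gradient matches the analytic value dE[f]/dμ = 2μ:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.7

eps = rng.normal(size=1_000_000)
z = mu + sigma * eps          # z is now a differentiable function of mu
# pathwise gradient: d f(z) / d mu = f'(z) * dz/dmu = 2z * 1
grad_mu = (2 * z).mean()

# analytic check: E[z^2] = mu^2 + sigma^2, so dE/dmu = 2*mu
assert abs(grad_mu - 2 * mu) < 0.01
```

In GBF the same trick is applied per datapoint: sampling noise is drawn from the base distribution, pushed through the new component's flow, and gradients of (13) flow back to the flow parameters.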
4.2 Flows Compatible with Gradient Boosting
The main constraint in adopting GBF is that flows must be analytically invertible (see Figure 2). The focus of GBF is on training the new component g^{(c)}, but in order to draw samples z ∼ G^{(c-1)} we sample z_0 from the base distribution and transform according to:

z = f_K^{(j)} \circ \dots \circ f_1^{(j)}(z_0), \qquad j \sim \mathrm{Categorical}(w_1, \dots, w_{c-1})

However, by (12) updating g^{(c)} requires computing the likelihood G^{(c-1)}(z|x) of samples drawn from g^{(c)}, which cannot be done directly. Instead, we seek the point z_0 within the base distribution such that:

z_0 = \big(f_1^{(j)}\big)^{-1} \circ \dots \circ \big(f_K^{(j)}\big)^{-1}(z)

where j randomly chooses one of the fixed components. Then, under the change of variables formula, we approximate G^{(c-1)}(z|x) by:

G^{(c-1)}(z|x) \approx q_0(z_0) \prod_{k=1}^{K} \left| \det \frac{\partial f_k^{(j)}}{\partial z_{k-1}} \right|^{-1}

Since all NFs are by definition invertible, in theory all normalizing flows can be boosted. In practice, however, only flows that are analytically invertible can be boosted efficiently. Inverse autoregressive flows (IAF, [21]) and masked autoregressive flows (MAF, [28]) are invertible; however, they are D times slower to invert, where D is the dimensionality of z. In contrast, flows based on coupling layers, such as NICE [7] and RealNVP [8], as well as neural spline flows [9], non-linear squared flows [42], and flows based on lower-triangular matrices [38, 39], can be easily inverted and boosted.
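The cost asymmetry can be seen with a toy autoregressive map (a hypothetical transform, not a real IAF/MAF implementation): the forward pass vectorizes into one step, while the inverse must unroll a D-step loop because each z_i depends on the previously recovered z_{i-1}:

```python
import numpy as np

def forward(z):
    # y_i = z_i + tanh(z_{i-1}); all outputs computable in one parallel pass
    y = z.copy()
    y[1:] += np.tanh(z[:-1])
    return y

def inverse(y):
    # sequential: z_i needs the already-recovered z_{i-1}, so O(D) dependent steps
    z = np.empty_like(y)
    z[0] = y[0]
    for i in range(1, y.size):
        z[i] = y[i] - np.tanh(z[i - 1])
    return z

rng = np.random.default_rng(0)
z = rng.normal(size=16)
assert np.allclose(inverse(forward(z)), z)
```

Coupling-layer flows avoid this loop entirely, which is why they are the natural choice for the inversion step GBF performs when evaluating G^{(c-1)}.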
4.3 Decoder Shock in Gradient Boosted VAEs
One challenge in training VAEs using a GBF variational posterior follows from the sharing of one decoder between all components. During training the decoder naturally acclimates to receiving samples from a particular component (e.g. g^{(c)}). However, when a new stage begins, the decoder starts receiving samples from a different component g^{(c+1)}. At this point the loss jumps, a phenomenon we refer to as "decoder shock" (see Figure 3). The reasons for "decoder shock" are as follows.
First, the KL-annealing schedule is reset when g^{(c+1)} begins training. KL-annealing is an important technique for successfully training VAE models [2, 34]. By reducing the weight of the KL term in (2) during the initial epochs, the model is free to discover useful representations of the data before being penalized for complexity. Without KL-annealing, models may choose the "low hanging fruit" and rely purely on a powerful decoder [2, 34, 4, 30, 5]. Thus, by resetting the annealing schedule, the KL term in the loss increases.

Second, and more importantly, when g^{(c+1)} is introduced a sudden shift occurs in the distribution of samples passed to the decoder. Moreover, because the KL-annealing schedule has reset, g^{(c+1)} is free to create an approximate posterior that is much less constricted than that of the previous component g^{(c)}. Consequently, this causes a sharp increase in reconstruction errors.
A spike in loss between boosting stages is unique to GBF. Unlike other boosted models, with GBF there is a module (the decoder) that depends on the boosted modules; this dependency does not exist when boosting decision trees for regression or classification, for example. To overcome the "decoder shock" problem we propose a simple solution that deviates from a traditional boosting approach. Instead of only drawing samples from g^{(c)} during training, we blend in samples from the fixed components G^{(c-1)} too. By occasionally sampling from G^{(c-1)} we help the decoder remember past components and adjust to changes in the full approximate posterior G^{(c)}. In our experiments we find good results when annealing the sampling rate from 0 to 0.5 over the 1000 epochs during which g^{(c)} is trained. We emphasize that despite occasionally sampling from G^{(c-1)}, the parameter weights of G^{(c-1)} remain fixed; the samples are purely for the decoder's benefit.

5 Experiments
To demonstrate the flexibility of GBF, we highlight results on two density estimation tasks, as well as boost normalizing flows within a VAE for generative modeling of images on four datasets: Freyfaces (http://www.cs.nyu.edu/~roweis/data/frey_rawface.mat), Caltech 101 Silhouettes [25], Omniglot [23], and statically binarized MNIST [24].

In all of our experiments we boost RealNVP [8] transformations of varying flow lengths. RealNVP, and the closely related Nonlinear Independent Components Estimation (NICE, [7]), partitions the latent variables into subsets z = (z_a, z_b), and only modifies one subset with an affine coupling layer z_b' = z_b ⊙ exp(s(z_a)) + t(z_a), where s and t are scale and translation functions of z_a. We parameterize s and t as feed-forward networks with tanh activations and a single hidden layer. While coupling-layer models like RealNVP are less flexible and have been shown to be empirically inferior to planar flows in variational inference [31], RealNVP remains an attractive choice for boosting because it is trivially invertible, and gives exact log-likelihood computation, sampling, and inference with one forward pass.
5.1 Toy Density Problems
5.1.1 Density Estimation
In Figure 4 we apply GBF to the density estimation problems found in [20, 14, 6]. In this problem the model is given samples from an unknown 2-dimensional data distribution p*(x), and the goal is to transform a complex parametric model q_θ(x) of the data distribution into a simple, easy to evaluate distribution (i.e. a standard normal distribution). The parameters θ are found through maximum likelihood. For standard normalizing flows, the complex density is parameterized as a single flow q_θ(x), whereas the boosted models use the approximate posterior described in (7). Each flow is a sequence of RealNVP coupling layers [8] with a tanh activation and 128 hidden units, and flows are trained for 25k iterations using the Adam optimizer [19].

Results
We compare our results to a deep 8-flow RealNVP model. To show the flexibility of GBF, we boost 8, 4, and 2 components, where the components are RealNVP flows of length 1, 2, or 4, respectively. We choose these flow lengths because they highlight the result of boosting when no single component is flexible enough to perfectly model the data distribution. In each example the 8-flow RealNVP and the gradient boosted flow contain the same number of parameters; however, the gradient boosted model is able to achieve sharper results and more clearly defined multimodality.
5.1.2 Density Matching
In the density matching problem the model generates samples from a simple distribution q_0 (such as a standard normal) and transforms them into a complex distribution. The 2-dimensional target's analytical form p(z) is given, and parameters are learned by minimizing KL(q_θ ‖ p), where q_θ is formulated using the change of variables formula.
Results
Figure 5 highlights results on the density matching problem. For each of the four energy functions we compare our results to a deep 16-flow RealNVP model. The gradient boosted model is configured with two RealNVP components, each of length 4. In each case the gradient boosted flows provide an accurate density estimation with half as many total parameters. When the component flows are flexible enough to model most or all of the target density, components can overlap. However, by training the component weights the model down-weights new components that do not provide additional information.
Table 1: Negative ELBO and negative log-likelihood (NLL) for each dataset; lower is better.

| Model | MNIST ELBO | MNIST NLL | Freyfaces ELBO | Freyfaces NLL | Omniglot ELBO | Omniglot NLL | Caltech 101 ELBO | Caltech 101 NLL |
|---|---|---|---|---|---|---|---|---|
| VAE | – | – | – | – | – | – | – | – |
| Planar | – | – | – | – | – | – | – | 104.23 |
| Radial | – | – | – | – | – | – | – | – |
| Sylvester | 84.54 | 81.99 | 4.54 | 4.49 | 101.99 | 98.54 | 112.26 | 100.38 |
| IAF | – | – | – | – | – | – | – | – |
| RealNVP | – | 83.36 | – | – | – | – | – | – |
| GBF | – | 82.70 | – | 4.48 | – | 99.11 | – | – |
| GBF+ | – | 82.67 | – | 4.41 | – | 99.09 | – | 106.55 |
5.2 Modeling Real Data with Variational Autoencoders
Following [31], we employ NFs for improving VAEs [22]. We compare our model on the same image datasets as those used in [40]; however, we limit the computational complexity of the experiments by reducing the number of convolutional layers in the encoder and decoder of the VAEs from 14 layers to 6. In Table 1 we compare the performance of our gradient boosted flows to other normalizing flow architectures. Planar, radial, and Sylvester normalizing flows (SNF) each use the same number of flow steps, with SNF's bottleneck set to a fixed number of orthogonal vectors per orthogonal matrix. IAF is trained with single-hidden-layer MADE [13] transformations, using one of two hidden-unit sizes. RealNVP likewise uses transformations with one of two hidden-unit sizes in the tanh feed-forward network. For all models, the dimensionality of the flow is fixed.

Each baseline model in Table 1 is trained for 1000 epochs, annealing the KL term in the objective function over the first 250 epochs as in [2, 34]. The gradient boosted models apply the same training schedule to each component. We optimize using the Adam optimizer [19] with a decaying learning rate (0.5x decay with a patience of 250 steps). To evaluate the negative log-likelihood (NLL) we use importance sampling (as proposed in [32]) with 2000 importance samples. To ensure a fair comparison, the reported ELBO for GBF models is computed by (5), effectively dropping GBF's fixed-components term and setting the entropy regularization to λ = 1. Since GBF's variational posterior is a mixture model, we sample components from the mixture and average the ELBO calculation over samples drawn from G^{(c)}.
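The importance-sampling NLL estimate follows the standard construction from [32]: log p(x) ≈ logsumexp_s(log p(x, z_s) − log q(z_s|x)) − log S over S importance samples. The sketch below (our toy 1-D example, where the marginal is known exactly) verifies the estimator:

```python
import numpy as np

def log_normal(v, mean=0.0, std=1.0):
    return -0.5 * np.log(2 * np.pi * std**2) - (v - mean) ** 2 / (2 * std**2)

def logsumexp(a):
    m = a.max()
    return m + np.log(np.exp(a - m).sum())

rng = np.random.default_rng(0)

# toy model: z ~ N(0, 1) and x | z ~ N(z, 1), so the marginal is x ~ N(0, 2)
x, S = 0.8, 10_000

# proposal q(z|x): here the exact posterior N(x/2, 1/2), so weights are constant
q_loc, q_std = x / 2, np.sqrt(0.5)
z = rng.normal(loc=q_loc, scale=q_std, size=S)

# log importance weights: log p(z) + log p(x|z) - log q(z|x)
log_w = log_normal(z) + log_normal(x, mean=z) - log_normal(z, mean=q_loc, std=q_std)
log_px = logsumexp(log_w) - np.log(S)   # importance-sampled estimate of log p(x)

assert abs(log_px - log_normal(x, std=np.sqrt(2))) < 1e-6
```

With a learned posterior (such as a GBF mixture) the weights are no longer constant, and more importance samples tighten the estimate; the paper uses 2000.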
Results
In all results RealNVP, which is better suited to parametric density estimation tasks, performs the worst of the flow models. Nonetheless, applying gradient boosting to RealNVP improves the results significantly. On Freyfaces, the smallest dataset, consisting of just 1965 images, gradient boosted RealNVP gives the best performance, suggesting that GBF may help combat overfitting. For the larger Omniglot dataset of handwritten characters, Sylvester flows are superior; however, gradient boosting improves the RealNVP baseline considerably and achieves a negative log-likelihood comparable to Sylvester (99.09 versus 98.54). GBF improves on the baseline RealNVP, but both GBF's and IAF's results are notably higher than those of traditional flows like planar, radial, and Sylvester on the Caltech 101 Silhouettes dataset. Lastly, on MNIST we find that boosting improves the NLL of RealNVP from 83.36 to 82.67, which is on par with Sylvester flows.
On all datasets, fine-tuning GBF components (listed as GBF+ in Table 1) with an additional 50 epochs per component and recomputing the weights ρ_c further improves results. Fine-tuning allows each component in the mixture an opportunity to adjust to the components that were trained after it. As shown in the toy example in Figure 1, this adjustment can be crucial to producing a better-fitting approximate posterior. A likely explanation for this phenomenon is that GBF optimizes a likelihood-based objective and hence attempts to explain all of the data (as shown by the mode-covering behavior in Figure 1). Thus, components that overextended themselves during the initial training pass can focus on producing a tighter approximation of a subset of the posterior during the fine-tuning stage.
6 Conclusion
In this work we introduce gradient boosted flows, a technique for increasing the flexibility of flow-based variational posteriors through gradient boosting. GBF iteratively adds new NF components, where each new component is fit to the residuals of the previously trained components. We show that GBF is only constrained to analytically invertible flows, making GBF complementary to many existing NF models. In our experiments we demonstrated that GBF improves over the baseline single-component model, without increasing the depth of the model, and produces image modeling results on par with state-of-the-art flows. Further, we showed that GBF models used for density estimation create more flexible distributions with a fraction of the total parameters.
In the future we wish to further investigate the “decoder shock” phenomenon occurring when GBF is paired with a VAE. Future work may benefit from exploring other strategies for alleviating “decoder shock”, such as multiple decoders or different annealing strategies. Additionally, in our experiments we used RealNVP as the base component. Future work may consider other flows for boosting, as well as heterogeneous combinations of flows as the different components.
Acknowledgements
The research was supported by NSF grants OAC1934634, IIS1908104, IIS1563950, IIS1447566, IIS1447574, IIS1422557, CCF1451986. We thank the University of Minnesota Supercomputing Institute (MSI) for technical support.
References
 Blei et al. [2017] Blei, D. M., Kucukelbir, A., and McAuliffe, J. D. (2017). Variational Inference: A Review for Statisticians. Journal of the American Statistical Association, 112(518):859–877.
 Bowman et al. [2016] Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A., Jozefowicz, R., and Bengio, S. (2016). Generating Sentences from a Continuous Space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21, Berlin, Germany. Association for Computational Linguistics.
 Casale et al. [2018] Casale, F. P., Dalca, A. V., Saglietti, L., Listgarten, J., and Fusi, N. (2018). Gaussian Process Prior Variational Autoencoders. Advances in Neural Information Processing Systems, page 11.
 Chen et al. [2017] Chen, X., Kingma, D. P., Salimans, T., Duan, Y., Dhariwal, P., Schulman, J., Sutskever, I., and Abbeel, P. (2017). Variational Lossy Autoencoder. ICLR.
 Cremer et al. [2018] Cremer, C., Li, X., and Duvenaud, D. (2018). Inference Suboptimality in Variational Autoencoders. In International Conference on Machine Learning, Stockholm, Sweden.
 De Cao et al. [2019] De Cao, N., Titov, I., and Aziz, W. (2019). Block Neural Autoregressive Flow. 35th Conference on Uncertainty in Artificial Intelligence (UAI19).
 Dinh et al. [2015] Dinh, L., Krueger, D., and Bengio, Y. (2015). NICE: Nonlinear Independent Components Estimation. ICLR.
 Dinh et al. [2017] Dinh, L., SohlDickstein, J., and Bengio, S. (2017). Density estimation using Real NVP. ICLR.
 Durkan et al. [2019] Durkan, C., Bekasov, A., Murray, I., and Papamakarios, G. (2019). Neural Spline Flows. In Advances in Neural Information Processing Systems.
 Friedman et al. [2000] Friedman, J., Hastie, T., and Tibshirani, R. (2000). Additive logistic regression: A statistical view of boosting. The Annals of Statistics, 28(2):337–407.
 Friedman [2001] Friedman, J. H. (2001). Greedy function approximation: A gradient boosting machine. Annals of Statistics, pages 1189–1232.
 Friedman [2002] Friedman, J. H. (2002). Stochastic gradient boosting. Computational Statistics & Data Analysis, 38(4):367–378.
 Germain et al. [2015] Germain, M., Gregor, K., Murray, I., and Larochelle, H. (2015). MADE: Masked Autoencoder for Distribution Estimation. In International Conference on Machine Learning, volume 37, Lille, France.
 Grathwohl et al. [2019] Grathwohl, W., Chen, R. T. Q., Bettencourt, J., Sutskever, I., and Duvenaud, D. (2019). FFJORD: Freeform Continuous Dynamics for Scalable Reversible Generative Models. In International Conference on Learning Representations.
 Guo et al. [2016] Guo, F., Wang, X., Fan, K., Broderick, T., and Dunson, D. B. (2016). Boosting Variational Inference. In Advances in Neural Information Processing Systems, Barcelona, Spain.
 Huang et al. [2018a] Huang, C.W., Krueger, D., Lacoste, A., and Courville, A. (2018a). Neural Autoregressive Flows. In International Conference on Machine Learning, page 10, Stockholm, Sweden.
 Huang et al. [2018b] Huang, C.W., Tan, S., Lacoste, A., and Courville, A. (2018b). Improving Explorability in Variational Inference with Annealed Variational Objectives. In Advances in Neural Information Processing Systems, page 11, Montréal, Canada.
 Jordan et al. [1999] Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., and Saul, L. K. (1999). Introduction to variational methods for graphical models. Machine Learning, 37(2):183–233.
 Kingma and Ba [2015] Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization. ICLR.
 Kingma and Dhariwal [2018] Kingma, D. P. and Dhariwal, P. (2018). Glow: Generative Flow with Invertible 1x1 Convolutions. In Advances in Neural Information Processing Systems, Montréal, Canada.
 Kingma et al. [2016] Kingma, D. P., Salimans, T., Jozefowicz, R., Chen, X., Sutskever, I., and Welling, M. (2016). Improving Variational Inference with Inverse Autoregressive Flow. In Advances in Neural Information Processing Systems.
 Kingma and Welling [2014] Kingma, D. P. and Welling, M. (2014). AutoEncoding Variational Bayes. Proceedings of the 2nd International Conference on Learning Representations (ICLR), pages 1–14.
 Lake et al. [2015] Lake, B. M., Salakhutdinov, R., and Tenenbaum, J. B. (2015). Humanlevel concept learning through probabilistic program induction. Science, 350(6266):1332–1338.
 Larochelle and Murray [2011] Larochelle, H. and Murray, I. (2011). The Neural Autoregressive Distribution Estimator. International Conference on Artificial Intelligence and Statistics (AISTATS), 15:9.
 Marlin et al. [2010] Marlin, B. M., Swersky, K., Chen, B., and de Freitas, N. (2010). Inductive Principles for Restricted Boltzmann Machine Learning. 13th International Conference on Artificial Intelligence and Statistics (AISTATS), 9:8.
 Mason et al. [1999] Mason, L., Baxter, J., Bartlett, P. L., and Frean, M. R. (1999). Boosting Algorithms as Gradient Descent. In Advances in Neural Information Processing Systems, pages 512–518.
 Miller et al. [2017] Miller, A. C., Foti, N., and Adams, R. P. (2017). Variational Boosting: Iteratively Refining Posterior Approximations. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 2420–2429. PMLR.
 Papamakarios et al. [2017] Papamakarios, G., Pavlakou, T., and Murray, I. (2017). Masked Autoregressive Flow for Density Estimation. In Advances in Neural Information Processing Systems.
 Paszke et al. [2017] Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A. (2017). Automatic differentiation in PyTorch. In Advances in Neural Information Processing Systems, page 4.
 Rainforth et al. [2018] Rainforth, T., Kosiorek, A. R., Le, T. A., Maddison, C. J., Igl, M., Wood, F., and Teh, Y. W. (2018). Tighter Variational Bounds Are Not Necessarily Better. In International Conference on Machine Learning, Stockholm, Sweden.
 Rezende and Mohamed [2015] Rezende, D. J. and Mohamed, S. (2015). Variational Inference with Normalizing Flows. In International Conference on Machine Learning, volume 37, pages 1530–1538, Lille, France. PMLR.
 Rezende et al. [2014] Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In Proceedings of the 31st International Conference on Machine Learning, volume 32, pages 1278–1286, Beijing, China. PMLR.
 Rosset and Segal [2002] Rosset, S. and Segal, E. (2002). Boosting Density Estimation. In Advances in Neural Information Processing Systems, page 8.
 Sønderby et al. [2016] Sønderby, C. K., Raiko, T., Maaløe, L., Sønderby, S. K., and Winther, O. (2016). Ladder Variational Autoencoders. In Advances in Neural Information Processing Systems.
 Tabak and Turner [2013] Tabak, E. G. and Turner, C. V. (2013). A Family of Nonparametric Density Estimation Algorithms. Communications on Pure and Applied Mathematics, 66(2):145–164.
 Tabak and VandenEijnden [2010] Tabak, E. G. and VandenEijnden, E. (2010). Density estimation by dual ascent of the loglikelihood. Communications in Mathematical Sciences, 8(1):217–233.
 Tomczak and Welling [2018] Tomczak, J. and Welling, M. (2018). VAE with a VampPrior. In International Conference on Artificial Intelligence and Statistics (AISTATS), volume 84, Lanzarote, Spain.
 Tomczak and Welling [2016] Tomczak, J. M. and Welling, M. (2016). Improving Variational Auto-Encoders using Householder Flow. In Bayesian Deep Learning Workshop (NIPS 2016).
 Tomczak and Welling [2017] Tomczak, J. M. and Welling, M. (2017). Improving Variational Auto-Encoders using convex combination linear Inverse Autoregressive Flow. arXiv:1706.02326 [stat].
van den Berg et al. [2018] van den Berg, R., Hasenclever, L., Tomczak, J. M., and Welling, M. (2018). Sylvester Normalizing Flows for Variational Inference. In Uncertainty in Artificial Intelligence (UAI).
 Wainwright and Jordan [2007] Wainwright, M. J. and Jordan, M. I. (2007). Graphical Models, Exponential Families, and Variational Inference. Foundations and Trends® in Machine Learning, 1(1–2):1–305.
 Ziegler and Rush [2019] Ziegler, Z. M. and Rush, A. M. (2019). Latent Normalizing Flows for Discrete Sequences. In Advances in Neural Information Processing Systems.
Appendix A Dataset Details
In Section 5.2, VAEs are modified with GBF approximate posteriors to model four datasets: Freyfaces (https://github.com/y0ast/VariationalAutoencoder/blob/master/freyfaces.pkl), Caltech 101 Silhouettes (https://people.cs.umass.edu/~marlin/data/caltech101_silhouettes_28_split1.mat) [25], Omniglot (https://github.com/yburda/iwae/tree/master/datasets/OMNIGLOT) [23], and statically binarized MNIST (http://yann.lecun.com/exdb/mnist/) [24]. Details of these datasets are given below.
The Freyfaces dataset contains 1965 grayscale images of size $28 \times 20$ portraying one man's face in a variety of emotional expressions. Following van den Berg et al., we randomly split the dataset into 1565 training, 200 validation, and 200 test set images.
The Caltech 101 Silhouettes dataset contains 4100 training, 2264 validation, and 2307 test set images. Each image portrays the black and white silhouette of one of 101 objects, and is of size $28 \times 28$. As van den Berg et al. note, there is a large variety of objects relative to the training set size, resulting in a particularly difficult modeling challenge.
The Omniglot dataset contains 23000 training, 1345 validation, and 8070 test set images. Each image portrays one of 1623 handwritten characters from 50 different alphabets, and is of size $28 \times 28$. Images in Omniglot are dynamically binarized.
Finally, the MNIST dataset contains 50000 training, 10000 validation, and 10000 test set images. Each image is binary, of size $28 \times 28$, and portrays a handwritten digit.
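The difference between the static binarization used for MNIST and the dynamic binarization used for Omniglot can be sketched as follows. This is an illustrative numpy sketch, not the paper's preprocessing code; the helper names are hypothetical.

```python
import numpy as np

def static_binarize(images, threshold=0.5):
    """Fix each pixel to 0/1 once by thresholding (static binarization)."""
    return (images > threshold).astype(np.float32)

def dynamic_binarize(images, rng):
    """Resample each pixel as Bernoulli(intensity) every epoch (dynamic binarization)."""
    return (rng.random(images.shape) < images).astype(np.float32)

rng = np.random.default_rng(0)
batch = rng.random((64, 28 * 28))        # grayscale intensities in [0, 1]
static = static_binarize(batch)          # identical every epoch
dynamic = dynamic_binarize(batch, rng)   # a fresh sample every epoch
```

Dynamic binarization acts as a regularizer, since the model never sees the same binary image twice.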
Appendix B Model Architectures
In Section 5.2, we compute results on real datasets for the VAE and VAEs with a flow-based approximate posterior. In each model we use convolutional layers, where convolutional layers follow the PyTorch convention [29]. The encoder of these networks contains the following layers:
where $k$ is a kernel size, $p$ is a padding size, and $s$ is a stride size. The final convolutional layer is followed by a fully-connected layer that outputs parameters for the diagonal Gaussian distribution and amortized parameters of the flows (depending on the model).
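As a minimal single-channel sketch of one such convolutional layer with a gated activation (the actual models use multi-channel PyTorch layers; all names and shapes here are illustrative only):

```python
import numpy as np

def conv2d(x, w, b, stride=1, padding=0):
    """Minimal single-channel 2-D convolution (PyTorch-style cross-correlation)."""
    if padding:
        x = np.pad(x, padding)
    kh, kw = w.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * w) + b
    return out

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def gated_conv(x, w, v, b, c, stride=1, padding=0):
    """Gated activation: h = (w * x + b) elementwise-times sigmoid(v * x + c)."""
    return conv2d(x, w, b, stride, padding) * sigmoid(conv2d(x, v, c, stride, padding))

rng = np.random.default_rng(1)
x = rng.standard_normal((28, 28))   # one 28x28 input "image"
w, v = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
h = gated_conv(x, w, v, b=0.1, c=-0.2, stride=2, padding=1)  # 14x14 feature map
```

The sigmoid branch acts as a learned, per-location soft gate on the linear branch, which is why each gated layer needs two sets of weights.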
Similarly, the decoder mirrors the encoder using the following transposed convolutions:
where $o$ is an output padding size. The decoder's final layer is passed to a standard 2-dimensional convolutional layer to reconstruct the output, whereas the other convolutional layers listed above implement a gated activation function:
$$\mathbf{h}_l = (\mathbf{W}_l \ast \mathbf{x}_l + \mathbf{b}_l) \odot \sigma(\mathbf{V}_l \ast \mathbf{x}_l + \mathbf{c}_l),$$
where $\mathbf{x}_l$ and $\mathbf{h}_l$ are inputs and outputs of the $l$-th layer, respectively, $\mathbf{W}_l$ and $\mathbf{V}_l$ are weights of the $l$-th layer, $\mathbf{b}_l$ and $\mathbf{c}_l$ denote biases, $\ast$ is the convolution operator, $\sigma$ is the sigmoid activation function, and $\odot$ is an elementwise product.
Appendix C ELBO's Approximate Posterior with GBF
Augmenting a flow-based variational posterior with gradient boosting changes the corresponding ELBO term. To clarify the derivation of the approximate posterior term, we provide details on computing expectations of flow objects and mixtures.
(i) Expectations w.r.t. a Mixture.
Let $g^{(1:K)}(\mathbf{z} \mid \mathbf{x}) = \sum_{c=1}^{K} \rho_c\, g^{(c)}(\mathbf{z} \mid \mathbf{x})$ be a gradient boosted flow and consider any function $h(\mathbf{z})$. Then the expectation:
$$\mathbb{E}_{g^{(1:K)}}\left[h(\mathbf{z})\right] \overset{(a)}{=} \sum_{c=1}^{K} \rho_c\, \mathbb{E}_{g^{(c)}}\left[h(\mathbf{z})\right],$$
where (a) holds because $g^{(1:K)}$ is a finite convex combination, and reflects integrating over the choice of mixture component for each sample $\mathbf{z}$. Thus, the expectation of a function w.r.t. a mixture model is equivalent to a convex combination of expectations w.r.t. each component distribution $g^{(c)}$.
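This identity is easy to verify numerically. In the sketch below, two Gaussians stand in for the flow components, and $h(\mathbf{z}) = \mathbf{z}^2$; the direct Monte Carlo estimate over the mixture matches the convex combination of per-component estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = np.array([0.3, 0.7])    # convex mixture weights
mus = np.array([-2.0, 1.5])   # two unit-variance Gaussian components
h = lambda z: z ** 2          # any test function

# Direct Monte Carlo: pick a component per sample, then draw from it.
n = 200_000
comp = rng.choice(2, size=n, p=rho)
z = rng.normal(mus[comp], 1.0)
direct = h(z).mean()

# Convex combination of per-component expectations.
per_component = np.array([h(rng.normal(m, 1.0, n)).mean() for m in mus])
combined = rho @ per_component
```

For a unit-variance Gaussian, $\mathbb{E}[z^2] = \mu^2 + 1$, so both estimates should be near $0.3 \cdot 5 + 0.7 \cdot 3.25 = 3.775$.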
(ii) Expectations w.r.t. a Flow Transformation.
Recall that expectations w.r.t. a flow transformation can be written w.r.t. a base distribution. Specifically, let $h(\mathbf{z})$ be a function of $\mathbf{z} \sim g^{(c)}$, where $g^{(c)}$ is some approximate posterior component whose density transformation is a flow of length $k$. Then the transformed sample is computed by $\mathbf{z}_k = f_k^{(c)} \circ \cdots \circ f_1^{(c)}(\mathbf{z}_0)$ with $\mathbf{z}_0 \sim g_0$. Moreover, by the Law of the Unconscious Statistician (LOTUS), the expectation of $h$ w.r.t. $g^{(c)}$ is:
$$\mathbb{E}_{g^{(c)}}\left[h(\mathbf{z})\right] = \mathbb{E}_{g_0}\left[h\!\left(f_k^{(c)} \circ \cdots \circ f_1^{(c)}(\mathbf{z}_0)\right)\right].$$
In other words, we can write the expectation w.r.t. an unknown density $g^{(c)}$ as an expectation w.r.t. the base distribution $g_0$ if given the flow transformations $f_1^{(c)}, \ldots, f_k^{(c)}$.
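A toy check with a flow of length one: taking $f(z_0) = \exp(z_0)$ with a standard normal base gives a log-normal transformed density, so the base-space LOTUS estimate must agree with sampling the transformed density directly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
z0 = rng.standard_normal(n)                  # samples from the base distribution g_0

# Flow of length one: f(z0) = exp(z0), so the transformed density is
# LogNormal(0, 1).  LOTUS computes E_g[z] entirely in base space.
lotus = np.exp(z0).mean()                    # E_{g_0}[f(z0)], never touching g itself
direct = rng.lognormal(0.0, 1.0, n).mean()   # sampling the transformed density directly
analytic = np.exp(0.5)                       # known mean of LogNormal(0, 1)
```

The point of LOTUS in this setting is that the transformed density never needs to be written down; only the base samples and the transformation are required.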
GBF Approximate Posterior Term.
Combining the two facts above, the GBF approximate posterior term can be expanded as:
$$\mathbb{E}_{g^{(1:K)}}\left[\ln g^{(1:K)}(\mathbf{z} \mid \mathbf{x})\right] = \sum_{c=1}^{K} \rho_c\, \mathbb{E}_{g^{(c)}}\left[\ln g^{(c)}(\mathbf{z} \mid \mathbf{x})\right] = \mathbb{E}_{g_0}\left[\ln g_0(\mathbf{z}_0 \mid \mathbf{x})\right] - \sum_{c=1}^{K} \rho_c\, \mathbb{E}_{g_0}\left[\sum_{j=1}^{k} \ln \left|\det \frac{\partial f_j^{(c)}}{\partial \mathbf{z}_{j-1}}\right|\right] \quad (14)$$
where $\ln |\det \partial f_j^{(c)} / \partial \mathbf{z}_{j-1}|$ denotes the Jacobian term for component $c$'s density transformation. The final step highlights how every component shares the same base distribution $g_0$, and samples from this base distribution are transformed by the components. Thus, the sum over component indices, which integrates over the choice of mixture component for each sample, only applies to the change of variables term.
Appendix D Reparameterization Trick with GBF
A more detailed explanation of applying the reparameterization trick [22, 32] to the negative ELBO in (8) is shown below.
In (a) the approximate posterior has been expanded following Appendix C, and (b) follows by rewriting the expectation in terms of random noise $\epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ and defining $\mathbf{z}_0 = \mu + \sigma \odot \epsilon$. Under the reparameterization, the gradient and expectation operators are commutative, and we can form a simple Monte Carlo estimator of the objective. Finally, the functional gradient of the objective w.r.t. the new component follows.
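The commutativity of gradient and expectation under the reparameterization can be illustrated with a toy integrand. Here $h(\mathbf{z}_0) = \mathbf{z}_0^2$ stands in for the ELBO integrand (the actual objective is more involved), and the Monte Carlo gradients should recover the analytic values $2\mu$ and $2\sigma$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.8
eps = rng.standard_normal(500_000)    # noise drawn independently of (mu, sigma)

# Reparameterize the base sample so gradients can pass through it:
z0 = mu + sigma * eps

# Monte Carlo gradients of E[z0^2] w.r.t. the variational parameters:
grad_mu = (2.0 * z0).mean()           # d(z0^2)/d mu    = 2 * z0        -> 2 * mu
grad_sigma = (2.0 * z0 * eps).mean()  # d(z0^2)/d sigma = 2 * z0 * eps  -> 2 * sigma
```

Because the noise distribution does not depend on $(\mu, \sigma)$, differentiating inside the expectation is valid, which is exactly what makes the estimator low-variance and backpropagation-friendly.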
Appendix E Derivation of Component Weights
After the new component $g^{(c)}$ has been estimated, the mixture model still needs to estimate the corresponding weight $\rho_c$. Recall that $g^{(1:c)}$ can be written as the convex combination:
$$g^{(1:c)}(\mathbf{z} \mid \mathbf{x}) = (1 - \rho_c)\, g^{(1:c-1)}(\mathbf{z} \mid \mathbf{x}) + \rho_c\, g^{(c)}(\mathbf{z} \mid \mathbf{x}).$$
Then, holding $g^{(1:c-1)}$ and $g^{(c)}$ fixed, the objective function can be written as a function of $\rho_c$:
$$\mathcal{L}(\rho_c) = \mathbb{E}_{g^{(1:c)}}\left[\ln g^{(1:c)}(\mathbf{z} \mid \mathbf{x}) - \ln p(\mathbf{x}, \mathbf{z})\right] \quad (15)$$
The above expression can be used in a black-box line search method or, as we have done, in a stochastic gradient descent algorithm. Toward that end, taking the gradient of (15) w.r.t. $\rho_c$ yields the component weight updates shown in Section 3.3.
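The weight update can be sketched with a toy stand-in: two fixed Gaussian densities play the roles of the previously boosted mixture and the new component, and a sample-average log-density replaces the ELBO as the objective (full-batch gradient steps rather than true SGD, for simplicity). Since the samples come from a 0.25/0.75 mixture of the two components, the learned weight should approach 0.75.

```python
import numpy as np

rng = np.random.default_rng(0)
norm_pdf = lambda z, m: np.exp(-0.5 * (z - m) ** 2) / np.sqrt(2.0 * np.pi)

# Fixed, already-trained densities (stand-ins for the flow components):
g_prev = lambda z: norm_pdf(z, 0.0)   # previously boosted mixture g^{(1:c-1)}
g_new = lambda z: norm_pdf(z, 2.0)    # freshly fit component g^{(c)}

# Samples whose average log-density plays the role of the objective:
z = np.concatenate([rng.normal(0.0, 1.0, 1250), rng.normal(2.0, 1.0, 3750)])

rho, lr = 0.5, 0.05
for _ in range(500):
    a, b = g_prev(z), g_new(z)
    # Gradient of mean log[(1 - rho) * a + rho * b] w.r.t. rho:
    grad = np.mean((b - a) / ((1.0 - rho) * a + rho * b))
    rho = np.clip(rho + lr * grad, 1e-3, 1.0 - 1e-3)
```

The objective is concave in $\rho$ (a log of a function linear in $\rho$), so simple first-order updates on the clipped interval converge reliably, which is what makes the line-search and SGD options interchangeable here.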