Gradient Boosted Flows

02/27/2020 · by Robert Giaquinto, et al.

Normalizing flows (NF) are a powerful framework for approximating posteriors. By mapping a simple base density through invertible transformations, flows provide an exact method of density evaluation and sampling. The trend in normalizing flow literature has been to devise deeper, more complex transformations to achieve greater flexibility. We propose an alternative: Gradient Boosted Flows (GBF) model a variational posterior by successively adding new NF components by gradient boosting so that each new NF component is fit to the residuals of the previously trained components. The GBF formulation results in a variational posterior that is a mixture model, whose flexibility increases as more components are added. Moreover, GBFs offer a wider, not deeper, approach that can be incorporated to improve the results of many existing NFs. We demonstrate the effectiveness of this technique for density estimation and, by coupling GBF with a variational autoencoder, generative modeling of images.




1 Introduction

Deep generative models seek rich latent representations of data, and provide a mechanism for sampling new data. Generating novel and plausible data points is intrinsically valuable, while also beneficial for agents that plan and simulate interactions with their environment.

A popular approach to generative modeling is with variational autoencoders (VAE, [22]). A major challenge in VAEs, however, is that they assume a factorial posterior, which is widely known to limit their flexibility [31, 21, 27, 4, 17, 37, 3, 40]. Further, VAEs do not offer exact density estimation — a requirement in many statistical procedures.

Normalizing flows (NF) play an important role in the recent developments of both density estimation and variational inference [31]. Normalizing flows are smooth, invertible transformations with tractable Jacobians, which can map a complex data distribution to a simple distribution, such as a standard normal. In the context of variational inference, a normalizing flow transforms a simple, known base distribution into a more faithful representation of the true posterior. Flow-based models are also an attractive approach for density estimation because they provide exact density computation and sampling with only a single neural network pass (in some instances). Recent developments in NFs have focused on creating deeper, more complex transformations in order to increase the flexibility of the learned distribution.

Our contribution

In this work we propose a wider, not deeper approach for increasing the expressiveness of the posterior approximation. Our approach, gradient boosted flows (GBF), iteratively adds new NF components to a model based on gradient boosting, where each new NF component is fit to the residuals of the previously trained components. A weight is learned for each component of the GBF model, resulting in an approximate posterior that is a mixture model. Unlike recent work in boosted variational inference [15, 27], our approach is flow-based and can enhance deep generative models with flexible GBF approximate posteriors using the reparameterization trick [22, 32].

GBF complements a number of existing flows, improving performance at the cost of additional training cycles — not additional complexity. However, our analysis highlights the need for analytically invertible flows in order to efficiently boost flow-based models. We explore the “decoder shock” phenomenon — a challenge unique to VAEs that model the approximate posterior with GBF. When GBF begins training a new component, the distribution of samples passed to the VAE’s decoder changes abruptly, causing a temporary increase in loss. We propose a training technique to combat “decoder shock” and show that performance steadily improves as more components are added to the model.

Our results demonstrate that GBF improves performance on density estimation tasks and is capable of modeling data with multiple modes. Lastly, we augment the VAE with a GBF variational posterior, and show image modeling results on par with state-of-the-art NFs.

2 Background

2.1 Variational Inference

Approximate inference plays an important role in fitting complex probabilistic models. Variational inference, in particular, transforms inference into an optimization problem with the goal of finding a variational distribution $q_\phi(z \mid x)$ that closely approximates the true posterior $p(z \mid x)$, where $x$ are the observed data, $z$ the latent variables, and $\phi$ are learned parameters [18, 41, 1]. Writing the log-likelihood of the data in terms of the approximate posterior reveals:

$$\log p_\theta(x) = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x, z) - \log q_\phi(z \mid x)\big] + KL\big(q_\phi(z \mid x) \,\|\, p_\theta(z \mid x)\big). \tag{1}$$

Since the second term in (1) is the Kullback-Leibler (KL) divergence, which is non-negative, the first term forms a lower bound on the log-likelihood of the data, and is hence referred to as the evidence lower bound (ELBO).

2.2 Variational Autoencoder

Kingma and Welling [22] and Rezende et al. [32] show that a re-parameterization of the ELBO can result in a differentiable bound that is amenable to optimization via stochastic gradients and back-propagation. Further, Kingma and Welling structure the inference problem as an autoencoder, introducing the variational autoencoder (VAE) and minimizing the negative ELBO $-\mathcal{L}(x)$. Re-writing the negative ELBO as:

$$-\mathcal{L}(x) = -\mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] + KL\big(q_\phi(z \mid x) \,\|\, p(z)\big) \tag{2}$$

shows the probabilistic decoder $p_\theta(x \mid z)$, and highlights how the VAE encodes the latent variables with the variational posterior $q_\phi(z \mid x)$ but is regularized with the prior $p(z)$.
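As a hypothetical NumPy sketch of estimating this negative ELBO with a diagonal-Gaussian encoder and a Bernoulli decoder (all function names here are ours, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu, log_var):
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian encoder.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def bernoulli_nll(x, logits):
    # -log p(x|z) for a Bernoulli decoder parameterized by logits.
    return np.sum(np.logaddexp(0.0, logits) - x * logits, axis=-1)

def negative_elbo(x, mu, log_var, decode, n_samples=64):
    # Monte Carlo estimate of -ELBO = E_q[-log p(x|z)] + KL(q || p).
    eps = rng.standard_normal((n_samples,) + mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps          # reparameterized samples
    recon = np.mean([bernoulli_nll(x, decode(zi)) for zi in z], axis=0)
    return recon + gaussian_kl(mu, log_var)
```

Both terms of (2) appear directly: the reconstruction term is averaged over reparameterized samples, and the KL term is available in closed form for the Gaussian encoder.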

2.3 Normalizing Flows

Normalizing flows are a method for density estimation and improving inference. In the original formulation, Rezende and Mohamed [31] modify the VAE's posterior approximation, applying a chain of transformations $f_1, \dots, f_K$ to the inference network's output $z_0$, giving:

$$z_K = f_K \circ \cdots \circ f_1(z_0), \tag{3}$$

where each density transformation $f_k$ is an invertible, smooth mapping. By the chain rule and inverse function theorem, a random variable $z_k = f_k(z_{k-1})$ has a computable density [36, 35]:

$$q_k(z_k) = q_{k-1}(z_{k-1}) \left| \det \frac{\partial f_k}{\partial z_{k-1}} \right|^{-1}. \tag{4}$$

Thus, a NF-based approximate posterior optimizes:

$$-\mathcal{L}(x) = \mathbb{E}_{q_0(z_0 \mid x)}\Big[\log q_0(z_0 \mid x) - \sum_{k=1}^{K} \log \Big| \det \frac{\partial f_k}{\partial z_{k-1}} \Big| - \log p_\theta(x, z_K)\Big], \tag{5}$$

where $q_0$ is a known base distribution.
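A minimal NumPy check of the change-of-variables rule in (4) with a single 1-D affine flow (our toy example, not from the paper):

```python
import numpy as np

def std_normal_logpdf(z):
    # log density of the base distribution q0 = N(0, 1)
    return -0.5 * (z**2 + np.log(2 * np.pi))

# One affine flow step z1 = f(z0) = a*z0 + b, with Jacobian |df/dz0| = |a|.
a, b = 2.0, 0.5

def flow_logpdf(z1):
    # Change of variables: log q1(z1) = log q0(f^{-1}(z1)) - log|det df/dz0|.
    z0 = (z1 - b) / a
    return std_normal_logpdf(z0) - np.log(abs(a))
```

Here `flow_logpdf` is exactly the density of N(b, a²), and it integrates to one — the Jacobian correction is what keeps the transformed density normalized.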

Building on the planar and radial flows from [31], Sylvester flows generalize planar flows into a more expressive framework [40]. Inverse autoregressive flows (IAF, [21]) and masked autoregressive flows (MAF, [28]) scale to higher dimensions by exploiting the ordering of variables in the flow. Neural autoregressive flows (NAF, [16]) and the more compact Block NAF [6], replace affine transformations with autoregressive neural networks.

While all NFs are invertible, flows based on coupling layers like NICE [7] and its successor RealNVP [8] are analytically invertible. Glow replaces RealNVP's permutation operation with an invertible 1x1 convolution [20]. Neural spline flows provide a method for increasing the flexibility of both coupling and autoregressive transforms using monotonic rational-quadratic splines [9]. Non-linear squared flows [42] offer a highly multi-modal transformation and are analytically invertible.

2.4 Boosted Variational Inference

Gradient boosting [26, 10, 11, 12] considers the minimization of a loss $\mathcal{L}(g)$, where $g$ is a function representing the current model. Consider an additive perturbation around $g$ to $g + \epsilon h$, where $h$ is a function representing a new component. A Taylor expansion as $\epsilon \to 0$:

$$\mathcal{L}(g + \epsilon h) = \mathcal{L}(g) + \epsilon \left\langle \nabla_g \mathcal{L}(g),\, h \right\rangle + O(\epsilon^2) \tag{6}$$

reveals the functional gradient $\nabla_g \mathcal{L}(g)$, whose negative is the direction that reduces the loss at the current solution.
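The classic instance of this recipe is boosting regression stumps under squared loss, where the negative functional gradient is just the residual; a compact NumPy sketch (our illustration, not part of the paper):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)

def fit_stump(x, r):
    # Pick the split that best fits residuals r with two constants.
    best = None
    for s in np.linspace(0.05, 0.95, 19):
        left, right = r[x <= s].mean(), r[x > s].mean()
        loss = np.sum((np.where(x <= s, left, right) - r) ** 2)
        if best is None or loss < best[0]:
            best = (loss, s, left, right)
    _, s, left, right = best
    return lambda q: np.where(q <= s, left, right)

pred = np.zeros_like(y)
for _ in range(50):
    residual = y - pred          # negative functional gradient of squared loss
    h = fit_stump(x, residual)   # new component fit to the residuals
    pred += 0.5 * h(x)           # shrunken additive update
```

Each round fits the new component `h` to the current residuals and adds it with a small step, exactly the additive-perturbation view in (6).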

By considering convex combinations $(1 - \rho)\, g + \rho\, h$ of distributions $g$ and $h$ with weight $\rho \in [0, 1]$, boosting can be applied outside of the classification and regression settings [33]. Recently, Miller et al. [27] and Guo et al. [15] introduced the idea of boosting variational inference, which improves a variational posterior by iteratively adding simple approximations.

Figure 1: Example of GBF behavior, shown in three panels: 1 Component, 2 Components, and Fine-Tune Components. A simple affine flow (one scale and shift operation) is not flexible enough to model the target distribution and leads to mode-covering behavior, as shown in the 1 Component panel. In the 2 Components panel, GBF introduces a second component, which seeks a region of high probability that is not well modeled by the first component (mass is fit to the right ellipsoid). For this toy problem, fine-tuning the components with additional boosted training leads to an even better solution: the first component adjusts to fit the left ellipsoid and is re-weighted appropriately, as shown in the Fine-Tune Components panel.

3 Gradient Boosted Flows

Gradient boosted flows (GBF) build on recent ideas in boosted variational inference in order to increase the flexibility of posteriors approximated with NFs. The GBF approximate posterior is constructed by successively adding new components based on gradient boosting, where each new component is a $K$-step normalizing flow that is fit to the functional gradient of the loss from the previously trained components $q^{(c-1)}$.

Gradient boosting assigns a weight $\rho_c$ to the new component, and we restrict the weight to $\rho_c \in [0, 1]$ to make sure the model stays a valid probability distribution. The resulting variational posterior is a mixture model of the form:

$$q^{(c)}(z \mid x) = (1 - \rho_c)\, q^{(c-1)}(z \mid x) + \rho_c\, g_c(z \mid x), \tag{7}$$

where the new variational posterior $q^{(c)}$ is a convex combination of the fixed components $q^{(c-1)}$ (a mixture model) and the new component $g_c$. In our formulation of GBF we consider $C$ components, where $C$ is fixed and finite.

During training, the first component is fit using a traditional objective function for fitting NFs, and no boosting is applied. At stages $c = 2, \dots, C$, there are the fixed components $q^{(c-1)}$, consisting of a convex combination of the $K$-step flow models from the previous stages, and the new component being trained, $g_c$. Instead of jointly optimizing with respect to both $g_c$ and $\rho_c$, we train $g_c$ until convergence, and then optimize the corresponding weight $\rho_c$.
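As a toy illustration of the resulting mixture structure, with 1-D Gaussians standing in for the $K$-step flow components (all names and values here are ours):

```python
import numpy as np

def normal_logpdf(z, mu, sigma):
    return -0.5 * (((z - mu) / sigma) ** 2 + np.log(2 * np.pi)) - np.log(sigma)

components = [(-2.0, 0.7), (2.0, 0.7)]   # (mu, sigma) stand-ins for flow components
weights = [0.5, 0.5]                      # convex mixture weights, sum to 1

def mixture_logpdf(z):
    # log q^{(C)}(z): log of the convex combination of component densities.
    logps = [np.log(w) + normal_logpdf(z, m, s)
             for w, (m, s) in zip(weights, components)]
    return np.logaddexp.reduce(logps)
```

Even though each stand-in component is unimodal, the convex combination is bimodal — the flexibility gained by going wider rather than deeper.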

3.1 GBF Variational Bound

We seek a variational posterior that closely matches the true posterior; that is, we wish to minimize $KL\big(q^{(c)}(z \mid x) \,\|\, p(z \mid x)\big)$. From (1), note that minimizing this KL is equivalent to minimizing the negative ELBO. Thus, a GBF model has the objective:

$$\min_{g_c,\, \rho_c} \; \mathcal{F}\big(q^{(c)}\big) = \mathbb{E}_{q^{(c)}(z \mid x)}\big[\log q^{(c)}(z \mid x) - \log p_\theta(x, z)\big]. \tag{8}$$

In order to expand the approximate posterior term we first use the change of variables formula as in (4). Second, since expectations of mixtures can be written as a convex combination of expectations w.r.t. each component (see Appendix C for details), the approximate posterior term is:

$$\mathbb{E}_{q^{(c)}(z \mid x)}\big[\log q^{(c)}(z \mid x)\big] = \sum_{j=1}^{c} w_j\, \mathbb{E}_{q_0(z_0)}\big[\log q^{(c)}\big(f^{(j)}(z_0) \mid x\big)\big], \tag{9}$$

where the sample $z_0$ is transformed into $z^{(j)}$ by choosing a single mixture component $j$ based on the weights $w_1, \dots, w_c$, and then applying component $j$'s flow transformation $z^{(j)} = f^{(j)}(z_0)$; hence, the outer sum over component indices reflects integrating over the choice of component. The expectation in (9) is with respect to a base distribution $q_0$ that is shared by all components.

3.2 Updates to New Boosting Components

Figure 2: Gradient boosted flows increase model flexibility by adding new NF components. (a) Samples $z_0$ are drawn from the base distribution shown on the bottom, and (b) transformed into $z$ using the $K$-step flow transformation corresponding to the new component $g_c$ (only a 1-step flow is shown). By gradient boosting we fit the new component to the residuals of the fixed components $q^{(c-1)}$, requiring the likelihood of $z$ under the fixed components. Due to the change of variables formula, this likelihood is computed by (c) first mapping $z$ back to the base distribution using the inverse flow transformation of a fixed component, and then (d) evaluating the base density.

Given the objective function in (8), we proceed with deriving updates for new components. At stage $c$, we assume $q^{(c-1)}$ to be fixed, and the focus is learning $g_c$ and $\rho_c$ based on functional gradient descent (FGD). For a moment we disregard the random variable in the expectation of (8) (a proper handling of this term is detailed in Section 4.1), and consider the functional gradient w.r.t. $g$ at $q^{(c-1)}$:

$$\nabla_g \mathcal{F}\big|_{q^{(c-1)}}(z) = \log q^{(c-1)}(z \mid x) - \log p_\theta(x, z). \tag{10}$$

Since $q^{(c-1)}$ are the fixed components, minimizing the loss can be achieved by choosing a new component that has the maximum inner product with the negative of the gradient. In other words, we choose a $g_c$ such that:

$$g_c = \arg\max_{g \in \mathcal{G}} \; \big\langle g,\, -\nabla_g \mathcal{F}\big|_{q^{(c-1)}} \big\rangle,$$

where $\nabla_g \mathcal{F}\big|_{q^{(c-1)}}$ denotes the functional gradient from (10), and $\mathcal{G}$ denotes the family of $K$-step normalizing flows.

To avoid letting $g_c$ degenerate to a point mass at the functional gradient's minimum, we add an entropy regularization term controlled by $\lambda$; hence $g_c$ is:

$$g_c = \arg\max_{g \in \mathcal{G}} \; \mathbb{E}_{z \sim g}\big[\log p_\theta(x, z) - \log q^{(c-1)}(z \mid x)\big] + \lambda\, \mathbb{H}(g). \tag{11}$$

Despite the differences in derivation, optimization of GBF has a similar structure to other flow-based VAEs. Specifically, with the addition of the entropy regularization term, (11) can be rearranged to show the new component minimizes:

$$\mathbb{E}_{z \sim g}\Big[-\log p_\theta(x \mid z) + \log q^{(c-1)}(z \mid x) - (1 - \lambda) \log p(z)\Big] + \lambda\, KL\big(g(z \mid x) \,\|\, p(z)\big), \tag{12}$$

where $z \sim g$ denotes a sample transformed by component $c$'s flow. Hence, similar to the VAE objective from (2), a GBF has a KL-divergence regularization term between the new component and the prior. The key difference for GBF, however, is that the negative log-likelihood $-\log p_\theta(x \mid z)$ is down-weighted for samples that are already explained by the fixed portion of the model $q^{(c-1)}$.

3.3 Updating Component Weights


It follows that the weights on each component can be updated by taking the gradient of the loss with respect to $\rho_c$. At training iterate $t$ we have:

$$\rho_c^{(t)} = \rho_c^{(t-1)} - \eta_t \Big( \mathbb{E}_{z \sim g_c}\big[\ell(z)\big] - \mathbb{E}_{z \sim q^{(c-1)}}\big[\ell(z)\big] \Big),$$

where we've defined:

$$\ell(z) = \log q^{(c)}(z \mid x) - \log p_\theta(x, z).$$

To estimate the gradient with Monte Carlo, we draw samples $z_g \sim g_c$ and $z_q \sim q^{(c-1)}$, and update $\rho_c$ with stochastic gradient descent. To ensure a stable convergence we follow [15] and implement a decaying learning rate $\eta_t$.
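A rough Python/NumPy sketch of this weight update (function names and the toy losses below are illustrative stand-ins, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def update_rho(neg_elbo, sample_g, sample_q, steps=200, rho0=0.5):
    # Stochastic gradient descent on the mixture weight rho_c with a
    # decaying learning rate, keeping rho in [0, 1] so q stays a density.
    rho = rho0
    for t in range(1, steps + 1):
        zg, zq = sample_g(), sample_q()
        grad = neg_elbo(zg) - neg_elbo(zq)   # MC estimate of d(loss)/d(rho)
        rho = np.clip(rho - (1.0 / t) * grad, 0.0, 1.0)
    return rho
```

When the new component's samples attain lower loss than the fixed mixture's, the gradient estimate is negative and $\rho_c$ is pushed toward 1, giving the new component more weight.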

Updating a component's weight with this procedure is only needed once, after each component converges. We find, however, that results improve by "fine-tuning" components with additional training after the initial training pass. During the fine-tuning stage, for each $c = 1, \dots, C$ we train $g_c$ and treat all other components as fixed. Figure 1 demonstrates this phenomenon: when a single flow is not flexible enough to model the target distribution, mode-covering behavior arises. Introducing a second component trained with the boosting objective improves results, and consequently the second component's weight is increased. Fine-tuning the first component leads to a better solution and assigns equal weight to the two components. We also witness improvements in VAEs with GBF variational posteriors after fine-tuning on real datasets (see Figure 3).

4 GBF Implementation Novelties

4.1 Reparameterization Trick with GBF

Because the objective (8) includes an intractable expectation with respect to a continuous-valued random variable, evaluating the functional gradient requires the reparameterization trick [22, 32]. We compute an unbiased estimate of the gradient by reparameterizing the latent variable $z$ in terms of a known base distribution and a differentiable transformation: when the base distribution is Gaussian, we have $z_0 = \mu + \sigma \odot \epsilon$ where $\epsilon \sim \mathcal{N}(0, I)$. Thus, we form the Monte Carlo estimator of the individual data-point negative ELBO, and write the functional gradient:

$$\nabla_g \mathcal{F}\big|_{q^{(c-1)}}(z_g) = \log q^{(c-1)}(z_g \mid x) - \log p_\theta(x, z_g),$$

where $z_g$ denotes a sample from the new component $g_c$.
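A quick NumPy illustration of the pathwise (reparameterized) gradient on a toy objective $\mathbb{E}[z^2]$ rather than the ELBO (our example, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.8

# Reparameterize: z = mu + sigma * eps with eps ~ N(0, 1), so the gradient
# can flow through the sampling step to the variational parameter mu.
eps = rng.standard_normal(200_000)
z = mu + sigma * eps

# Pathwise gradient of E[z^2] w.r.t. mu: d/dmu (mu + sigma*eps)^2 = 2z.
grad_mc = np.mean(2 * z)
# Analytic check: E[z^2] = mu^2 + sigma^2, so d/dmu = 2*mu.
grad_true = 2 * mu
```

The Monte Carlo estimate is unbiased and concentrates around the analytic gradient as the number of samples grows.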

4.2 Flows Compatible with Gradient Boosting

The main constraint in adopting GBF is that flows must be analytically invertible (see Figure 2). The focus of GBF is on training the new component $g_c$; to draw samples from it, we sample $z_0$ from the base distribution and transform it according to the new component's flow:

$$z_g = f^{g_c}(z_0), \quad z_0 \sim q_0(z_0 \mid x).$$

However, by (12), updating $g_c$ requires computing the likelihood $q^{(c-1)}(z_g \mid x)$, which cannot be done directly. Instead, we seek the point $z_0'$ within the base distribution such that:

$$z_0' = \big(f^{(j)}\big)^{-1}(z_g),$$

where $j$ randomly chooses one of the fixed components. Then, under the change of variables formula, we approximate $q^{(c-1)}(z_g \mid x)$ by:

$$q^{(c-1)}(z_g \mid x) \approx q_0(z_0' \mid x)\, \Big| \det \frac{\partial f^{(j)}}{\partial z_0'} \Big|^{-1}.$$
Since all NFs are by definition invertible, in theory all normalizing flows can be boosted. In practice, however, only flows that are analytically invertible can be boosted efficiently. Inverse autoregressive flows (IAF, [21]) and masked autoregressive flows (MAF, [28]) are invertible; however, they are $D$ times slower to invert, where $D$ is the dimensionality of $z$. In contrast, flows based on coupling layers, such as NICE [7] and RealNVP [8], as well as neural spline flows [9], non-linear squared flows [42], and flows based on lower-triangular matrices [38, 39], can be easily inverted and boosted.
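To illustrate why coupling layers are attractive here, a minimal NumPy affine coupling layer (weights and dimensions below are arbitrary choices of ours) can be inverted exactly with a single pass:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4
W_s = 0.1 * rng.standard_normal((D // 2, D // 2))  # scale-net weights
W_t = 0.1 * rng.standard_normal((D // 2, D // 2))  # translation-net weights

def coupling_forward(z):
    # Affine coupling: pass z_a through unchanged, transform z_b given z_a.
    z_a, z_b = z[: D // 2], z[D // 2 :]
    s, t = np.tanh(z_a @ W_s), z_a @ W_t
    return np.concatenate([z_a, z_b * np.exp(s) + t])

def coupling_inverse(y):
    # Analytic inverse: recover z_b in one pass, no iterative solve needed.
    y_a, y_b = y[: D // 2], y[D // 2 :]
    s, t = np.tanh(y_a @ W_s), y_a @ W_t
    return np.concatenate([y_a, (y_b - t) * np.exp(-s)])
```

Because the untransformed half is available on both sides, `s` and `t` can be recomputed during inversion, which is exactly the property that makes coupling-layer flows cheap to boost. An autoregressive flow, by contrast, must recover one dimension at a time.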

4.3 Decoder Shock in Gradient Boosted VAEs

One challenge in training VAEs with a GBF variational posterior follows from the sharing of one decoder between all components. During training, the decoder naturally acclimates to receiving samples from a particular component (e.g. $g_{c-1}$). However, when a new stage begins, the decoder starts receiving samples from a different component, $g_c$. At this point the loss jumps, a phenomenon we refer to as "decoder shock" (see Figure 3). The reasons for "decoder shock" are as follows.

First, the KL-annealing schedule is reset when $g_c$ begins training. KL-annealing is an important technique for successfully training VAE models [2, 34]. By reducing the weight of the KL term in (2) during the initial epochs, the model is free to discover useful representations of the data before being penalized for complexity. Without KL-annealing, models may choose the "low hanging fruit" and rely purely on a powerful decoder [2, 34, 4, 30, 5]. Thus, by resetting the annealing schedule, the KL term in the loss increases.

Second, and more importantly, when $g_c$ is introduced, a sudden shift occurs in the distribution of samples passed to the decoder. Moreover, because the KL-annealing schedule has been reset, $g_c$ is free to create an approximate posterior that is much less constricted than that of the previous component $g_{c-1}$. Consequently, this causes a sharp increase in reconstruction errors.

A spike in loss between boosting stages is unique to GBF. Unlike other boosted models, with GBF there is a module (the decoder) that depends on the boosted modules; this dependency does not exist when boosting decision trees for regression or classification, for example. To overcome the "decoder shock" problem we propose a simple solution that deviates from a traditional boosting approach. Instead of only drawing samples from the new component $g_c$ during training, we blend in samples from the fixed components too. By occasionally sampling from $q^{(c-1)}$ we help the decoder remember past components and adjust to changes in the full approximate posterior $q^{(c)}$. In our experiments we find good results when annealing the sampling rate from 0 to 0.5 over the 1000 epochs that $g_c$ is trained. We emphasize that despite occasionally sampling from $q^{(c-1)}$, the parameter weights of $q^{(c-1)}$ remain fixed; the samples are purely for the decoder's benefit.
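A small Python sketch of this blended sampling schedule (the function names and the linear annealing shape are our assumptions; the text specifies only the 0 to 0.5 range over 1000 epochs):

```python
import numpy as np

rng = np.random.default_rng(0)

def fixed_sample_rate(epoch, total_epochs=1000, max_rate=0.5):
    # Linearly anneal the probability of feeding the decoder a sample from
    # the fixed components q^{(c-1)}, from 0 up to max_rate.
    return min(max_rate, max_rate * epoch / total_epochs)

def draw_for_decoder(epoch, sample_new, sample_fixed):
    # Mostly sample from the new component g_c, but occasionally sample
    # from the fixed mixture so the decoder does not forget it.
    if rng.random() < fixed_sample_rate(epoch):
        return sample_fixed()
    return sample_new()
```

Only the sampling is blended; the fixed components' parameters stay frozen throughout.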

Figure 3: Example of "decoder shock" on the Caltech 101 Silhouettes dataset, where new boosting components are introduced every 1000 epochs. While the loss on the test set decreases steadily as we add new components, the validation loss jumps dramatically when a new component is introduced, due to sudden changes in the distribution of samples passed to the decoder. We also highlight how fine-tuning components, by making a second pass with only 25 epochs over each component, improves results at very little computational cost.

5 Experiments

To demonstrate the flexibility of GBF, we highlight results on two density estimation tasks, as well as boost normalizing flows within a VAE for generative modeling of images on four datasets: Freyfaces, Caltech 101 Silhouettes [25], Omniglot [23], and statically binarized MNIST.
In all of our experiments we boost RealNVP [8] transformations of varying flow lengths. RealNVP, and the closely related Non-linear Independent Components Estimation (NICE, [7]), partitions the latent variables into two subsets $(z_a, z_b)$, and only modifies one subset with an affine coupling layer: $z_b' = z_b \odot \exp(s(z_a)) + t(z_a)$, where $s$ and $t$ are scale and translation functions of $z_a$. We parameterize $s$ and $t$ as feed-forward networks with TanH activations and a single hidden layer. While coupling-layer models like RealNVP are less flexible and have been shown to be empirically inferior to planar flows in variational inference [31], RealNVP remains an attractive choice for boosting because it is trivially invertible, and gives exact log-likelihood computation, sampling, and inference with one forward pass.

Figure 4: Density estimation for 2D toy data, with columns RealNVP (K=8), Single Component, and Gradient Boosted, and rows for configurations of C boosted components of flow length K: (C=8, K=1), (C=4, K=2), and (C=2, K=4). The second column shows a RealNVP model with K=8 flows [8]. In the final column is an equivalently sized Gradient Boosted Flow, consisting of C components each with flow length K (listed to the left of each row). For reference, GBF's first component (trained using standard methods) is shown in column three, highlighting how flows of length 1, 2, or 4 can be combined for a more refined and flexible model.

5.1 Toy Density Problems

5.1.1 Density Estimation

In Figure 4 we apply GBF to the density estimation problems found in [20, 14, 6]. In this problem the model is given samples from an unknown 2-dimensional data distribution, and the goal is to transform a complex parametric model of the data distribution into a simple, easy-to-evaluate distribution (i.e. a standard Normal distribution). The model parameters are found through maximum likelihood. For standard normalizing flows, the complex density is parameterized using the change of variables formula in (4), whereas the boosted models use the approximate posterior described in (7). Each flow is a sequence of RealNVP coupling layers [8] with a TanH activation and 128 hidden units, and flows are trained for 25k iterations using the Adam optimizer [19].
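As a minimal 1-D analogue of this setup (our illustration, not the paper's 2-D experiment), the maximum-likelihood fit of a single affine flow mapping data to a standard Normal is just standardization:

```python
import numpy as np

rng = np.random.default_rng(0)
data = 3.0 + 2.0 * rng.standard_normal(50_000)   # samples from an "unknown" data distribution

# For a single affine flow z = (x - b) / a mapping data to N(0, 1), the
# maximum-likelihood solution is standardization: b = mean, a = std.
b, a = data.mean(), data.std()

def model_logpdf(x):
    # Change of variables: log q(x) = log N(z; 0, 1) - log a.
    z = (x - b) / a
    return -0.5 * (z**2 + np.log(2 * np.pi)) - np.log(a)
```

With richer data distributions (like the multi-modal 2D targets in Figure 4), a single affine step is no longer sufficient, which is where deeper flows or boosted mixtures of flows come in.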


We compare our results to a deep 8-flow RealNVP model. To show the flexibility of GBF, we boost 8, 4, and 2 components, where the components are RealNVP flows of length 1, 2, or 4, respectively. We choose these flow lengths because they highlight the result of boosting when no one component is flexible enough to perfectly model the data distribution. In each example the 8-flow RealNVP and the gradient boosted flow contain the same number of parameters; however, the gradient boosted model is able to achieve sharper results and more clearly defined multi-modality.


Figure 5: Matching the energy functions from Table 1 of [31], with columns RealNVP (K=16) and Gradient Boosted. The middle columns show deep RealNVPs with K=16 flows. On the right, we show that GBF with two components of length K=4 (half as many parameters) can perform as well as or better than their deep counterpart.

5.1.2 Density Matching

In the density matching problem the model generates samples from a simple distribution (such as a standard Normal) and transforms them into a complex distribution. The 2-dimensional target's analytical form $p(z)$ is given, and parameters are learned by minimizing $KL\big(q_\theta(z) \,\|\, p(z)\big)$, where $q_\theta$ is formulated using the change of variables formula.
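A minimal NumPy sketch of this objective for a 1-D affine flow against a Gaussian target (the flow, target, and names here are our stand-ins for the 2-D energy functions):

```python
import numpy as np

rng = np.random.default_rng(0)

def reverse_kl(a, b, m=1.0, s=0.5, n=100_000):
    # Draw z0 ~ N(0, 1), push through the flow z = a*z0 + b, and estimate
    # KL(q || p) = E_q[log q(z) - log p(z)] by Monte Carlo, with log q
    # given by the change of variables formula.
    z0 = rng.standard_normal(n)
    z = a * z0 + b
    log_q = -0.5 * (z0**2 + np.log(2 * np.pi)) - np.log(abs(a))
    log_p = -0.5 * (((z - m) / s) ** 2 + np.log(2 * np.pi)) - np.log(s)
    return np.mean(log_q - log_p)
```

The estimate is (near) zero when the flow exactly matches the target (a = s, b = m) and positive otherwise, so gradient descent on (a, b) drives the flow toward the target density.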


Figure 5 highlights results on the density matching problem. For each of the four energy functions we compare our results to a deep 16-flow RealNVP model. The gradient boosted model is configured with two RealNVP components, each of length 4. In each case the gradient boosted flows provide an accurate density estimate with half as many total parameters. When the component flows are flexible enough to model most or all of the target density, components can overlap. However, by training the component weights, the model down-weights new components that do not provide additional information.

Model | MNIST | Freyfaces | Omniglot | Caltech 101
Planar | 104.23 | | |
Sylvester | 84.54 / 81.99 | 4.54 / 4.49 | 101.99 / 98.54 | 112.26 / 100.38
GBF | 82.70 | 4.48 | 99.11 |
GBF+ | 82.67 | 4.41 | 99.09 | 106.55

Table 1: Negative ELBO (−ELBO, lower is better) and negative log-likelihood (NLL, lower is better) results on the MNIST, Freyfaces, Omniglot, and Caltech 101 Silhouettes datasets; paired entries report −ELBO / NLL. For the Freyfaces dataset the results are reported in bits per dim; results for the other datasets are reported in nats. Planar, radial, and orthogonal Sylvester flows use a fixed flow depth, while IAF and RealNVP use the same number of flow steps with hidden unit sizes that vary by dataset. GBF models boost multiple RealNVP components. GBF+ indicates the GBF model with an additional pass over each component (50 epochs per component) to "fine-tune" and re-compute the weights $\rho_c$. The top 3 NLL results for each dataset are bolded.

5.2 Modeling Real Data with Variational Autoencoders

Following [31], we employ NFs for improving VAEs [22]. We compare our model on the same image datasets as those used in [40]; however, we limit the computational complexity of the experiments by reducing the number of convolutional layers in the encoder and decoder of the VAEs from 14 layers to 6. In Table 1 we compare the performance of our gradient boosted flows to other normalizing flow architectures. Planar, radial, and Sylvester normalizing flows (SNF) each use the same flow depth, with SNF's bottleneck set to a fixed number of orthogonal vectors per orthogonal matrix. IAF is trained with multiple transformations, each of which is a single-hidden-layer MADE [13], with the hidden unit size varying by dataset. RealNVP uses the same number of transformations, with matching hidden unit sizes in the TanH feed-forward network. For all models, the dimensionality of the flow is fixed.

Each baseline model in Table 1 is trained for 1000 epochs, annealing the KL term in the objective function over the first 250 epochs as in [2, 34]. The gradient boosted models apply the same training schedule to each component. We optimize using the Adam optimizer [19] with a decaying learning rate (decay of 0.5x with a patience of 250 steps). To evaluate the negative log-likelihood (NLL) we use importance sampling (as proposed in [32]) with 2000 importance samples. To ensure a fair comparison, the reported ELBO for GBF models is computed by (5), effectively dropping GBF's fixed-components term and setting the entropy regularization weight $\lambda$ to zero. Since GBF's variational posterior is a mixture model, we sample components from the mixture and average the ELBO calculation over samples drawn from the selected components.


In all results RealNVP, which is more ideally suited for parametric density estimation tasks, performs the worst of the flow models. Nonetheless, applying gradient boosting to RealNVP improves the results significantly. On Freyfaces, the smallest dataset consisting of just 1965 images, gradient boosted RealNVP gives the best performance, suggesting that GBF may help combat overfitting. For the larger Omniglot dataset of hand-written characters, Sylvester flows are superior; however, gradient boosting improves the RealNVP baseline considerably and achieves a negative log-likelihood comparable to Sylvester (99.09 versus 98.54). GBF improves on the baseline RealNVP, but both GBF's and IAF's results are notably higher than those of traditional flows like planar, radial, and Sylvester on the Caltech 101 Silhouettes dataset. Lastly, on MNIST we find that boosting improves the NLL of RealNVP from 83.36 to 82.67, which is on par with Sylvester flows.

On all datasets, fine-tuning GBF components (listed as GBF+ in Table 1) with an additional 50 epochs per component and re-computing the weights $\rho_c$ further improves results. Fine-tuning allows each component in the mixture an opportunity to adjust to the components that were trained after it. As shown in the toy example in Figure 1, this adjustment can be crucial to producing a better fitting approximate posterior. A likely explanation for this phenomenon is that GBF optimizes a likelihood-based objective and hence attempts to explain all of the data (as shown by the mode-covering behavior in Figure 1). Thus, components that over-extended themselves during the initial training pass can focus on producing a tighter approximation of a subset of the posterior during the fine-tuning stage.

6 Conclusion

In this work we introduce gradient boosted flows, a technique for increasing the flexibility of flow-based variational posteriors through gradient boosting. GBF iteratively adds new NF components, where each new component is fit to the residuals of the previously trained components. We show that GBF is constrained only to analytically invertible flows, making it complementary to many existing NF models. In our experiments we demonstrated that GBF models improve over their baseline single-component model without increasing the depth of the model, and produce image modeling results on par with state-of-the-art flows. Further, we showed that GBF models used for density estimation create more flexible distributions with a fraction of the total parameters.

In the future we wish to further investigate the “decoder shock” phenomenon occurring when GBF is paired with a VAE. Future work may benefit from exploring other strategies for alleviating “decoder shock”, such as multiple decoders or different annealing strategies. Additionally, in our experiments we used RealNVP as the base component. Future work may consider other flows for boosting, as well as heterogeneous combinations of flows as the different components.


The research was supported by NSF grants OAC-1934634, IIS-1908104, IIS-1563950, IIS-1447566, IIS-1447574, IIS-1422557, CCF-1451986. We thank the University of Minnesota Supercomputing Institute (MSI) for technical support.


  • Blei et al. [2017] Blei, D. M., Kucukelbir, A., and McAuliffe, J. D. (2017). Variational Inference: A Review for Statisticians. Journal of the American Statistical Association, 112(518):859–877.
  • Bowman et al. [2016] Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A., Jozefowicz, R., and Bengio, S. (2016). Generating Sentences from a Continuous Space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21, Berlin, Germany. Association for Computational Linguistics.
  • Casale et al. [2018] Casale, F. P., Dalca, A. V., Saglietti, L., Listgarten, J., and Fusi, N. (2018). Gaussian Process Prior Variational Autoencoders. Advances in Neural Information Processing Systems, page 11.
  • Chen et al. [2017] Chen, X., Kingma, D. P., Salimans, T., Duan, Y., Dhariwal, P., Schulman, J., Sutskever, I., and Abbeel, P. (2017). Variational Lossy Autoencoder. ICLR.
  • Cremer et al. [2018] Cremer, C., Li, X., and Duvenaud, D. (2018). Inference Suboptimality in Variational Autoencoders. In International Conference on Machine Learning, Stockholm, Sweden.
  • De Cao et al. [2019] De Cao, N., Titov, I., and Aziz, W. (2019). Block Neural Autoregressive Flow. 35th Conference on Uncertainty in Artificial Intelligence (UAI19).
  • Dinh et al. [2015] Dinh, L., Krueger, D., and Bengio, Y. (2015). NICE: Non-linear Independent Components Estimation. ICLR.
  • Dinh et al. [2017] Dinh, L., Sohl-Dickstein, J., and Bengio, S. (2017). Density estimation using Real NVP. ICLR.
  • Durkan et al. [2019] Durkan, C., Bekasov, A., Murray, I., and Papamakarios, G. (2019). Neural Spline Flows. In Advances in Neural Information Processing Systems.
  • Friedman et al. [2000] Friedman, J., Hastie, T., and Tibshirani, R. (2000). Additive logistic regression: A statistical view of boosting. The Annals of Statistics, 28(2):337–407.
  • Friedman [2001] Friedman, J. H. (2001). Greedy function approximation: A gradient boosting machine. Annals of statistics, pages 1189–1232.
  • Friedman [2002] Friedman, J. H. (2002). Stochastic gradient boosting. Computational Statistics & Data Analysis, 38(4):367–378.
  • Germain et al. [2015] Germain, M., Gregor, K., Murray, I., and Larochelle, H. (2015). MADE: Masked Autoencoder for Distribution Estimation. In International Conference on Machine Learning, volume 37, Lille, France.
  • Grathwohl et al. [2019] Grathwohl, W., Chen, R. T. Q., Bettencourt, J., Sutskever, I., and Duvenaud, D. (2019). FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models. In International Conference on Learning Representations.
  • Guo et al. [2016] Guo, F., Wang, X., Fan, K., Broderick, T., and Dunson, D. B. (2016). Boosting Variational Inference. In Advances in Neural Information Processing Systems, Barcelona, Spain.
  • Huang et al. [2018a] Huang, C.-W., Krueger, D., Lacoste, A., and Courville, A. (2018a). Neural Autoregressive Flows. In International Conference on Machine Learning, page 10, Stockholm, Sweden.
  • Huang et al. [2018b] Huang, C.-W., Tan, S., Lacoste, A., and Courville, A. (2018b). Improving Explorability in Variational Inference with Annealed Variational Objectives. In Advances in Neural Information Processing Systems, page 11, Montréal, Canada.
  • Jordan et al. [1999] Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., and Saul, L. K. (1999). Introduction to variational methods for graphical models. Machine Learning, 37(2):183–233.
  • Kingma and Ba [2015] Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization. ICLR.
  • Kingma and Dhariwal [2018] Kingma, D. P. and Dhariwal, P. (2018). Glow: Generative Flow with Invertible 1x1 Convolutions. In Advances in Neural Information Processing Systems, Montréal, Canada.
  • Kingma et al. [2016] Kingma, D. P., Salimans, T., Jozefowicz, R., Chen, X., Sutskever, I., and Welling, M. (2016). Improving Variational Inference with Inverse Autoregressive Flow. In Advances in Neural Information Processing Systems.
  • Kingma and Welling [2014] Kingma, D. P. and Welling, M. (2014). Auto-Encoding Variational Bayes. Proceedings of the 2nd International Conference on Learning Representations (ICLR), pages 1–14.
  • Lake et al. [2015] Lake, B. M., Salakhutdinov, R., and Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338.
  • Larochelle and Murray [2011] Larochelle, H. and Murray, I. (2011). The Neural Autoregressive Distribution Estimator. International Conference on Artificial Intelligence and Statistics (AISTATS), 15:9.
  • Marlin et al. [2010] Marlin, B. M., Swersky, K., Chen, B., and de Freitas, N. (2010). Inductive Principles for Restricted Boltzmann Machine Learning. 13th International Conference on Artificial Intelligence and Statistics (AISTATS), 9:8.
  • Mason et al. [1999] Mason, L., Baxter, J., Bartlett, P. L., and Frean, M. R. (1999). Boosting Algorithms as Gradient Descent. In Advances in Neural Information Processing Systems, pages 512–518.
  • Miller et al. [2017] Miller, A. C., Foti, N., and Adams, R. P. (2017). Variational Boosting: Iteratively Refining Posterior Approximations. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 2420–2429. PMLR.
  • Papamakarios et al. [2017] Papamakarios, G., Pavlakou, T., and Murray, I. (2017). Masked Autoregressive Flow for Density Estimation. In Advances in Neural Information Processing Systems.
  • Paszke et al. [2017] Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A. (2017). Automatic differentiation in PyTorch. In Advances in Neural Information Processing Systems, page 4.
  • Rainforth et al. [2018] Rainforth, T., Kosiorek, A. R., Le, T. A., Maddison, C. J., Igl, M., Wood, F., and Teh, Y. W. (2018). Tighter Variational Bounds Are Not Necessarily Better. In International Conference on Machine Learning, Stockholm, Sweden.
  • Rezende and Mohamed [2015] Rezende, D. J. and Mohamed, S. (2015). Variational Inference with Normalizing Flows. In International Conference on Machine Learning, volume 37, pages 1530–1538, Lille, France. PMLR.
  • Rezende et al. [2014] Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In Proceedings of the 31st International Conference on Machine Learning, volume 32, pages 1278–1286, Beijing, China. PMLR.
  • Rosset and Segal [2002] Rosset, S. and Segal, E. (2002). Boosting Density Estimation. In Advances in Neural Information Processing Systems, page 8.
  • Sønderby et al. [2016] Sønderby, C. K., Raiko, T., Maaløe, L., Sønderby, S. K., and Winther, O. (2016). Ladder Variational Autoencoders. In Advances in Neural Information Processing Systems.
  • Tabak and Turner [2013] Tabak, E. G. and Turner, C. V. (2013). A Family of Nonparametric Density Estimation Algorithms. Communications on Pure and Applied Mathematics, 66(2):145–164.
  • Tabak and Vanden-Eijnden [2010] Tabak, E. G. and Vanden-Eijnden, E. (2010). Density estimation by dual ascent of the log-likelihood. Communications in Mathematical Sciences, 8(1):217–233.
  • Tomczak and Welling [2018] Tomczak, J. and Welling, M. (2018). VAE with a VampPrior. In International Conference on Artificial Intelligence and Statistics (AISTATS), volume 84, Lanzarote, Spain.
  • Tomczak and Welling [2016] Tomczak, J. M. and Welling, M. (2016). Improving Variational Auto-Encoders using Householder Flow. In Bayesian Deep Learning Workshop (NIPS 2016).
  • Tomczak and Welling [2017] Tomczak, J. M. and Welling, M. (2017). Improving Variational Auto-Encoders using convex combination linear Inverse Autoregressive Flow. arXiv:1706.02326 [stat].
  • van den Berg et al. [2018] van den Berg, R., Hasenclever, L., Tomczak, J. M., and Welling, M. (2018). Sylvester Normalizing Flows for Variational Inference. Uncertainty in Artificial Intelligence (UAI).
  • Wainwright and Jordan [2007] Wainwright, M. J. and Jordan, M. I. (2007). Graphical Models, Exponential Families, and Variational Inference. Foundations and Trends® in Machine Learning, 1(1–2):1–305.
  • Ziegler and Rush [2019] Ziegler, Z. M. and Rush, A. M. (2019). Latent Normalizing Flows for Discrete Sequences. In Advances in Neural Information Processing Systems.

Appendix A Dataset Details

In Section 5.2, VAEs are modified with GBF approximate posteriors to model four datasets: Freyfaces, Caltech 101 Silhouettes [25], Omniglot [23], and statically binarized MNIST [24]. Details of these datasets are given below.

The Freyfaces dataset contains 1965 gray-scale images of size 28×20, portraying one man's face in a variety of emotional expressions. Following van den Berg et al., we randomly split the dataset into 1565 training, 200 validation, and 200 test set images.
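As an illustration, the random split described above can be reproduced by permuting image indices; the function name and seed below are our own, not from the paper:

```python
import numpy as np

def split_freyfaces(n_images=1965, seed=0):
    """Randomly split image indices into the 1565/200/200
    train/validation/test partition used for Freyfaces."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_images)
    return idx[:1565], idx[1565:1765], idx[1765:]

train_idx, val_idx, test_idx = split_freyfaces()
print(len(train_idx), len(val_idx), len(test_idx))  # 1565 200 200
```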

The Caltech 101 Silhouettes dataset contains 4100 training, 2264 validation, and 2307 test set images. Each image portrays the black and white silhouette of one of 101 objects, and is of size 28×28. As van den Berg et al. note, there is a large variety of objects relative to the training set size, resulting in a particularly difficult modeling challenge.

The Omniglot dataset contains 23000 training, 1345 validation, and 8070 test set images. Each image portrays one of 1623 hand-written characters from 50 different alphabets, and is of size 28×28. Images in Omniglot are dynamically binarized.

Finally, the MNIST dataset contains 50000 training, 10000 validation, and 10000 test set images. Each image is binary, of size 28×28, and portrays a hand-written digit.

Appendix B Model Architectures

In Section 5.2, we compute results on real datasets for the VAE and for VAEs with flow-based approximate posteriors. In each model we use convolutional layers following the PyTorch convention [29]. The encoder of these networks contains the following layers:

where $k$ is a kernel size, $p$ is a padding size, and $s$ is a stride size. The final convolutional layer is followed by a fully-connected layer that outputs the parameters of the diagonal Gaussian distribution and the amortized parameters of the flows (depending on the model).

Similarly, the decoder mirrors the encoder using the following transposed convolutions:

where $op$ is an outer (output) padding. The decoder's final layer is passed to a standard 2-dimensional convolutional layer to reconstruct the output, whereas the other convolutional layers listed above implement a gated activation function:

$$\mathbf{h}_l = \left(\mathbf{W}_l \ast \mathbf{h}_{l-1} + \mathbf{b}_l\right) \odot \sigma\left(\mathbf{V}_l \ast \mathbf{h}_{l-1} + \mathbf{c}_l\right),$$

where $\mathbf{h}_{l-1}$ and $\mathbf{h}_l$ are inputs and outputs of the $l$-th layer, respectively, $\mathbf{W}_l$ and $\mathbf{V}_l$ are weights of the $l$-th layer, $\mathbf{b}_l$ and $\mathbf{c}_l$ denote biases, $\ast$ is the convolution operator, $\sigma(\cdot)$ is the sigmoid activation function, and $\odot$ is an element-wise product.
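As a sketch, a gated convolutional layer can be implemented in PyTorch as two parallel convolutions, one of which gates the other through a sigmoid; the module name `GatedConv2d` and the layer sizes here are illustrative, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Convolution with a gated activation:
    h_l = (W * h + b) ⊙ sigmoid(V * h + c)."""
    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)

    def forward(self, x):
        # element-wise product of the feature map and its sigmoid gate
        return self.conv(x) * torch.sigmoid(self.gate(x))

x = torch.randn(8, 1, 28, 28)                       # a batch of 28×28 images
layer = GatedConv2d(1, 32, kernel_size=3, padding=1)
print(layer(x).shape)                               # torch.Size([8, 32, 28, 28])
```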

Appendix C ELBO’s Approximate Posterior with GBF

Augmenting a flow-based variational posterior with gradient boosting changes the corresponding ELBO term. To clarify the derivation of the approximate posterior term, we provide details on computing expectations w.r.t. flow transformations and mixtures.

(i) Expectations w.r.t. a Mixture.

Let $g^{(K)}(\mathbf{z}) = \sum_{k=1}^{K} \rho_k\, g_k(\mathbf{z})$ be a gradient boosted flow with convex weights $\rho_k \geq 0$, $\sum_{k=1}^{K} \rho_k = 1$, and consider any function $h(\mathbf{z})$. Then the expectation:

$$\mathbb{E}_{g^{(K)}}\left[h(\mathbf{z})\right] = \int \sum_{k=1}^{K} \rho_k\, g_k(\mathbf{z})\, h(\mathbf{z})\, d\mathbf{z} \overset{(a)}{=} \sum_{k=1}^{K} \rho_k \int g_k(\mathbf{z})\, h(\mathbf{z})\, d\mathbf{z} = \sum_{k=1}^{K} \rho_k\, \mathbb{E}_{g_k}\left[h(\mathbf{z})\right],$$

where (a) holds because $g^{(K)}$ is a finite convex combination, and the outer sum reflects integrating over the choice of mixture component for each sample $\mathbf{z}$. Thus, the expectation of a function w.r.t. a mixture model is equivalent to a convex combination of expectations w.r.t. each component distribution $g_k$.
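A quick Monte Carlo check of this identity, using an illustrative two-component Gaussian mixture and the test function $h(z) = z^2$ (whose component expectations have the closed form $\mu_k^2 + \sigma_k^2$); all constants below are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

rho = np.array([0.3, 0.7])                 # convex mixture weights
mus = np.array([-1.0, 2.0])                # component means
sigmas = np.array([0.5, 1.5])              # component std. deviations

# convex combination of closed-form component expectations E_{g_k}[z^2]
lhs = np.dot(rho, mus ** 2 + sigmas ** 2)

# Monte Carlo expectation w.r.t. the mixture: draw a component
# index for each sample, then draw from that component
n = 200_000
ks = rng.choice(len(rho), size=n, p=rho)
z = rng.normal(mus[ks], sigmas[ks])
rhs = (z ** 2).mean()

assert abs(lhs - rhs) < 0.1                # the two estimates agree
```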

(ii) Expectations w.r.t. a Flow Transformation.

Recall that expectations w.r.t. a flow transformation can be written as expectations w.r.t. a base distribution. Specifically, let $h$ be a function of $\mathbf{z}_T$, and $g_k$ some approximate posterior component whose density transformation is a flow of length $T$. Then the transformed sample is computed by $\mathbf{z}_T = f_T^{(k)} \circ \cdots \circ f_1^{(k)}(\mathbf{z}_0)$, with $\mathbf{z}_0$ drawn from the base distribution $g_0$. Moreover, by the Law of the Unconscious Statistician (LOTUS), the expectation of $h$ w.r.t. $g_k$ is:

$$\mathbb{E}_{g_k}\left[h(\mathbf{z}_T)\right] = \mathbb{E}_{g_0}\left[h\left(f_T^{(k)} \circ \cdots \circ f_1^{(k)}(\mathbf{z}_0)\right)\right].$$

In other words, we can write the expectation w.r.t. an unknown density $g_k$ as an expectation w.r.t. the base distribution $g_0$ if given the flow transformations $f_1^{(k)}, \ldots, f_T^{(k)}$.
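To make LOTUS concrete, here is a small numeric check with a "flow" of two invertible affine maps over a standard Gaussian base; the maps and constants are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# a flow of length T = 2: two invertible affine transformations
f1 = lambda z: 2.0 * z + 1.0
f2 = lambda z: 0.5 * z - 3.0          # composed: z_T = f2(f1(z0)) = z0 - 2.5

h = lambda z: z ** 2                  # a function of the transformed sample z_T

# LOTUS: E_{g_k}[h(z_T)] = E_{g_0}[h(f2(f1(z0)))] with base g_0 = N(0, 1)
z0 = rng.normal(size=500_000)
mc = h(f2(f1(z0))).mean()

# z_T ~ N(-2.5, 1), so E[h(z_T)] = (-2.5)^2 + 1 = 7.25 exactly
assert abs(mc - 7.25) < 0.05
```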

GBF Approximate Posterior Term.

From the two facts above, it follows that the GBF approximate posterior term can be expanded as:

$$\mathbb{E}_{g^{(K)}}\left[\log g(\mathbf{z} \mid \mathbf{x})\right] = \sum_{k=1}^{K} \rho_k\, \mathbb{E}_{g_k}\left[\log g_k(\mathbf{z}_T \mid \mathbf{x})\right] = \mathbb{E}_{g_0}\left[\log g_0(\mathbf{z}_0)\right] - \sum_{k=1}^{K} \rho_k\, \mathbb{E}_{g_0}\left[\sum_{t=1}^{T} \log\left|\det \frac{\partial f_t^{(k)}}{\partial \mathbf{z}_{t-1}}\right|\right],$$

where $\log|\det \partial f_t^{(k)} / \partial \mathbf{z}_{t-1}|$ denotes the Jacobian term for component $k$'s density transformation. The final step highlights how every component shares the same base distribution $g_0$, and samples from this base distribution are transformed by the components. Thus, the sum over component indices, which integrates over the choice of mixture component for each sample, only applies to the change of variables term.

Appendix D Reparameterization Trick with GBF

A more detailed explanation of applying the reparameterization trick [22, 32] to the negative ELBO in (8) is shown below.

In (a) the approximate posterior has been expanded following (C), and (b) follows by rewriting the expectation in terms of random noise $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ and defining $\mathbf{z}_0 = \boldsymbol{\mu} + \boldsymbol{\sigma} \odot \boldsymbol{\epsilon}$. Under the reparameterization, the gradient and expectation operators are commutative, and we can form a simple Monte Carlo estimator of the gradient. Finally, the functional gradient of the objective w.r.t. the new component, evaluated at the current approximation, follows by differentiating this estimator.
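The commutativity of gradient and expectation under the reparameterization can be verified with a short PyTorch sketch; the quadratic objective below is a stand-in for illustration, not the paper's ELBO:

```python
import torch

torch.manual_seed(0)
mu = torch.zeros(2, requires_grad=True)
log_sigma = torch.zeros(2, requires_grad=True)

# reparameterize: z0 = mu + sigma ⊙ eps with eps ~ N(0, I)
eps = torch.randn(100_000, 2)
z0 = mu + log_sigma.exp() * eps

# a stand-in Monte Carlo objective: E[ ||z0 - target||^2 ]
target = torch.tensor([1.0, -1.0])
loss = ((z0 - target) ** 2).sum(dim=1).mean()
loss.backward()

# gradients flow through the sampling step to the variational
# parameters; analytically, d loss / d mu = 2 (mu - target)
print(mu.grad)
```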

Appendix E Derivation of Component Weights

After the new component $g_K$ has been estimated, the mixture model still needs to estimate its weight $\rho_K$. Recall that the GBF posterior $g^{(K)}$ can be written as the convex combination:

$$g^{(K)}(\mathbf{z} \mid \mathbf{x}) = (1 - \rho_K)\, g^{(K-1)}(\mathbf{z} \mid \mathbf{x}) + \rho_K\, g_K(\mathbf{z} \mid \mathbf{x}), \qquad \rho_K \in [0, 1].$$

Then, holding the trained components fixed, the objective function can be written as a function of $\rho_K$ alone:


The above expression can be used in a black-box line search method or, as we have done, in a stochastic gradient descent algorithm. Toward that end, taking the gradient of (E) w.r.t. $\rho_K$ yields the component weight updates shown in Section 3.3.
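As a sketch of the stochastic gradient approach, the weight of a newly added component can be optimized through a sigmoid so that it stays in $(0, 1)$. This is a toy stand-in with two fixed Gaussian components and a maximum-likelihood objective, not the paper's ELBO or its exact update rule from Section 3.3:

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_pdf(z, mu, sigma):
    return np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# data drawn from a known mixture with true weight 0.7 on the new component
data = np.concatenate([rng.normal(-2, 1, 3000), rng.normal(2, 1, 7000)])

w = 0.0                                    # unconstrained; rho = sigmoid(w)
lr = 0.5
for _ in range(400):
    batch = rng.choice(data, size=256)
    rho = 1.0 / (1.0 + np.exp(-w))
    g_prev = gaussian_pdf(batch, -2, 1)    # previously trained component
    g_new = gaussian_pdf(batch, 2, 1)      # newly added component
    mix = (1 - rho) * g_prev + rho * g_new
    grad_rho = -((g_new - g_prev) / mix).mean()   # d(-log mix)/d rho
    w -= lr * grad_rho * rho * (1 - rho)          # chain rule through sigmoid

rho = 1.0 / (1.0 + np.exp(-w))
print(round(rho, 2))                       # close to the true weight 0.7
```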