1 Introduction
Stochastic variational inference (Hoffman et al., 2013) allows for posterior inference in increasingly large and complex problems using stochastic gradient ascent. In continuous latent variable models, variational inference can be made particularly efficient through amortized inference, in which inference networks amortize the cost of calculating the variational posterior for a data point (Gershman and Goodman, 2014). A particularly successful class of models is the variational autoencoder (VAE), in which both the generative model and the inference network are given by neural networks, and sampling from the variational posterior is efficient through the non-centered parameterization (Kingma and Welling, 2014), also known as the reparameterization trick (Kingma and Welling, 2013; Rezende et al., 2014).

Despite its success, variational inference has drawbacks compared to other inference methods such as MCMC. Variational inference searches for the best posterior approximation within a parametric family of distributions. Hence, the true posterior distribution can only be recovered exactly if it happens to be in the chosen family. In particular, with widely used simple variational families such as diagonal covariance Gaussian distributions, the variational approximation is likely to be insufficient. More complex variational families enable better posterior approximations, resulting in improved model performance. Therefore, designing tractable and more expressive variational families is an important problem in variational inference (Nalisnick et al., 2016; Salimans et al., 2015; Tran et al., 2015).

Rezende and Mohamed (2015) introduced a general framework for constructing more flexible variational distributions, called normalizing flows. Normalizing flows transform a base density through a number of invertible parametric transformations with tractable Jacobians into more complicated distributions. They proposed two classes of normalizing flows: planar flows and radial flows. While effective for small problems, these can be hard to train, and often many transformations are required to achieve good performance. For planar flows, Kingma et al. (2016) argue that this is due to the fact that the transformation acts as a bottleneck, warping one direction at a time. Having a large number of flows makes the inference network very deep and harder to train, empirically resulting in suboptimal performance. Kingma et al. (2016) proposed inverse autoregressive flows (IAF), achieving state-of-the-art results on dynamically binarized MNIST at the time of publication. While very successful, each transformation in IAF depends on the datapoint only through a context vector; the flow parameters themselves are independent of the datapoint.
Paper contribution. In this paper, we use Sylvester's determinant identity to introduce Sylvester normalizing flows (SNFs). This family of flows generalizes planar flows, removing the single-unit bottleneck. We compare a number of different variants of SNFs and show that they compare favorably against planar flows and IAFs. We show that one specific variant of SNF is related to IAF, with the main difference being the amortization strategy of the flow parameters. Besides the usual requirement of having flexible transformations, this demonstrates the importance of having data-dependent flow parameters. Note that this concept generalizes to applying normalizing flows to any conditional distribution, in the sense that the transformation parameters should be functions of the conditioning variable.
2 Variational Inference
Consider a probabilistic model with observations $x$, continuous latent variables $z$, and model parameters $\theta$. In generative modeling we are often interested in performing maximum (marginal) likelihood learning of the parameters $\theta$ of the latent-variable model $p_\theta(x, z)$. This requires marginalization over the unobserved latent variables $z$. Unfortunately, this integration is generally intractable. Variational inference (Jordan et al., 1999) instead introduces a variational approximation $q_\phi(z|x)$ to the posterior, to construct a lower bound on the log marginal likelihood:
(1)  $\ln p_\theta(x) = \ln \int p_\theta(x|z)\, p(z)\, \mathrm{d}z$

(2)  $\phantom{\ln p_\theta(x)} \geq \mathbb{E}_{q_\phi(z|x)}\left[\ln p_\theta(x|z)\right] - \mathrm{KL}\left(q_\phi(z|x)\,\|\,p(z)\right)$

(3)  $\phantom{\ln p_\theta(x)} = -\mathcal{F}(x)$
This bound is known as the evidence lower bound (ELBO), and $\mathcal{F}(x)$ is referred to as the variational free energy. In equation (2), the first term represents the reconstruction error, and the second term is the Kullback-Leibler (KL) divergence from the approximate posterior to the prior distribution, which acts as a regularizer. In this paper we consider variational autoencoders (VAEs), where both $p_\theta(x|z)$ and $q_\phi(z|x)$ are distributions whose parameters are given by neural networks. That is, we perform amortized inference such that $q(z|x) = q_\phi(z|x)$, with $\phi$ the parameters of the encoder neural network. The parameters $\theta$ and $\phi$ of the generative model and inference model, respectively, are trained jointly through stochastic minimization of $\mathcal{F}(x)$, which can be made efficient through the reparameterization trick (Kingma and Welling, 2013; Rezende et al., 2014).
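As a concrete illustration, the two terms of Eq. (2) can be estimated with a single reparameterized sample. Below is a minimal NumPy sketch (not the paper's implementation) for a toy Gaussian decoder $p_\theta(x|z) = \mathcal{N}(x; z, I)$ and a standard normal prior; `diag_gaussian_elbo_terms` is a hypothetical helper name:

```python
import numpy as np

rng = np.random.default_rng(0)

def diag_gaussian_elbo_terms(x, mu, log_var):
    """One reparameterized sample of the two ELBO terms in Eq. (2),
    for a toy decoder p(x|z) = N(x; z, I) and prior p(z) = N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps          # reparameterization trick
    # reconstruction term: log N(x; z, I), up to an additive constant
    recon = -0.5 * np.sum((x - z) ** 2)
    # KL(q || p) between a diagonal Gaussian and N(0, I), in closed form
    kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
    return recon, kl

x = np.zeros(4)
recon, kl = diag_gaussian_elbo_terms(x, mu=np.zeros(4), log_var=np.zeros(4))
assert kl == 0.0  # q equals the prior, so the KL term vanishes
```

The KL term is available in closed form only because both $q$ and the prior are Gaussian; with the flow-based posteriors introduced below it must instead be estimated from samples.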
From equation (1) we see that the better the variational approximation to the posterior, the tighter the ELBO. The simplest, but probably most widely used, choice of variational distribution is a diagonal-covariance Gaussian of the form $q_\phi(z|x) = \mathcal{N}\left(z; \mu(x), \mathrm{diag}(\sigma^2(x))\right)$. However, with such simple variational distributions the ELBO will be fairly loose, resulting in biased maximum likelihood estimates of the model parameters (see Fig. 1) and harming generative performance. Thus, for variational inference to work well, more flexible approximate posterior distributions are needed.

2.1 Normalizing Flows
Rezende and Mohamed (2015) propose a way to construct more flexible posteriors by transforming a simple base distribution with a series of invertible transformations (known as normalizing flows) with easily computable Jacobians (Tabak and Turner, 2013; Tabak and Vanden-Eijnden, 2010). The resulting transformed density after one such transformation is as follows:
(4)  $q(z') = q(z)\left|\det \frac{\partial f}{\partial z}\right|^{-1}$

where $z' = f(z)$, and $f$ is an invertible function. In general, the cost of computing the Jacobian determinant will be $\mathcal{O}(D^3)$. However, it is possible to design transformations with more efficiently computable Jacobians.
This strategy is used in variational inference as follows: first, a stochastic variable $z_0$ is drawn from a simple base posterior distribution such as a diagonal Gaussian $q(z_0|x) = \mathcal{N}\left(z_0; \mu(x), \mathrm{diag}(\sigma^2(x))\right)$. The sample is then transformed with a number of flows. After applying $K$ flows, the final latent stochastic variables are given by $z_K = f_K \circ \cdots \circ f_1(z_0)$. The corresponding log-density is then given by:

(5)  $\ln q(z_K|x) = \ln q(z_0|x) - \sum_{k=1}^{K} \ln \left|\det \frac{\partial f_k(z_{k-1}; \lambda_k)}{\partial z_{k-1}}\right|$
where $\lambda_k$ are the parameters of the $k$-th transformation. Note that in order to achieve a flexible amortization strategy, the flow parameters can be made dependent on the input data: $\lambda_k = \lambda_k(x)$ (Rezende and Mohamed, 2015). Given a variational posterior parametrized by a normalizing flow of length $K$, the variational objective can be rewritten as:
(6)  $\mathcal{F}(x) = \mathbb{E}_{q(z_K|x)}\left[\ln q(z_K|x) - \ln p(x, z_K)\right]$

(7)  $\phantom{\mathcal{F}(x)} = \mathbb{E}_{q(z_0|x)}\left[\ln q(z_0|x) - \sum_{k=1}^{K} \ln \left|\det \frac{\partial z_k}{\partial z_{k-1}}\right| - \ln p(x, z_K)\right]$
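Eq. (5) amounts to accumulating log-Jacobian terms while pushing a sample through the flows. A minimal sketch, with a toy elementwise affine map standing in for a real flow (`flow_log_density` and `affine_flow` are illustrative names, not from the paper):

```python
import numpy as np

def flow_log_density(z0, log_q0, flows):
    """Eq. (5): push z0 through K invertible maps, accumulating
    log q(z_K) = log q(z_0) - sum_k log|det dz_k/dz_{k-1}|.
    Each flow returns (z_k, log_det_jacobian)."""
    z, log_q = z0, log_q0
    for f in flows:
        z, log_det = f(z)
        log_q -= log_det
    return z, log_q

# toy invertible map: elementwise z' = a*z + b, with log|det| = D*log|a|
def affine_flow(a, b):
    return lambda z: (a * z + b, z.size * np.log(abs(a)))

z0 = np.ones(3)
zK, log_q = flow_log_density(z0, log_q0=0.0, flows=[affine_flow(2.0, 1.0)] * 2)
assert np.allclose(zK, 2.0 * (2.0 * z0 + 1.0) + 1.0)
assert np.isclose(log_q, -2 * 3 * np.log(2.0))
```

The same accumulation plugs directly into Eq. (7): the summed log-determinants are exactly the correction between $\ln q(z_0|x)$ and $\ln q(z_K|x)$.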
Normalizing flows are frequently applied to amortized variational inference. Instead of learning the parameters of the posterior distribution for each data point separately (such as $\mu$ and $\sigma$ for a Gaussian posterior), the input-dependence of the posterior distribution parameters is modeled through an encoder/inference network. When performing amortized inference for normalizing flows, the flow parameters determine the final distribution, and should thus also be considered functions of the datapoint $x$. This can be achieved through the use of hypernetworks (Ha et al., 2016).
Rezende and Mohamed (2015) introduced a normalizing flow, called planar flow, for which the Jacobian determinant can be computed efficiently. A single planar flow transformation is given by:

(8)  $z' = z + u\, h(w^T z + b)$

Here, $u, w \in \mathbb{R}^D$, $b \in \mathbb{R}$, and $h$ is a suitable smooth activation function.
Rezende and Mohamed (2015) show that for $h = \tanh$, transformations of this kind are invertible as long as $u^T w \geq -1$. By the matrix determinant lemma, the Jacobian determinant of this transformation is given by:

(9)  $\det \frac{\partial z'}{\partial z} = \det\left(I_D + u\, h'(w^T z + b)\, w^T\right) = 1 + u^T w\, h'(w^T z + b)$

where $h'$ denotes the derivative of $h$. This determinant can be computed in $\mathcal{O}(D)$ time.
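The planar transformation and its $\mathcal{O}(D)$ log-Jacobian of Eqs. (8)-(9) can be sketched as follows. This is a NumPy illustration with $h = \tanh$; the check against the explicit $D \times D$ Jacobian is included only for validation:

```python
import numpy as np

def planar_flow(z, u, w, b):
    """Planar flow of Eq. (8) with h = tanh, and its log-Jacobian
    via the matrix determinant lemma (Eq. (9)), computed in O(D)."""
    a = w @ z + b
    z_new = z + u * np.tanh(a)
    psi = (1.0 - np.tanh(a) ** 2) * w        # h'(w^T z + b) * w
    log_det = np.log(np.abs(1.0 + u @ psi))  # log|1 + u^T w h'(a)|
    return z_new, log_det

rng = np.random.default_rng(1)
D = 5
z, u, w = rng.standard_normal((3, D))
z_new, log_det = planar_flow(z, u, w, b=0.1)

# validate against the explicit D x D Jacobian I + u h'(a) w^T
a = w @ z + 0.1
J = np.eye(D) + np.outer(u, (1 - np.tanh(a) ** 2) * w)
assert np.isclose(log_det, np.log(np.abs(np.linalg.det(J))))
```

Note that `u * np.tanh(a)` scales a single scalar activation by the vector $u$: the whole transformation warps only one direction, which is exactly the bottleneck discussed below.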
In practice, many planar flow transformations are required to transform a simple base distribution into a flexible distribution, especially for high-dimensional latent spaces. Kingma et al. (2016) argue that this is related to the term $u\, h(w^T z + b)$ in Eq. (8), which effectively acts as a single-neuron MLP. In the next section we derive a generalization of planar flows which does not have a single-neuron bottleneck, while still maintaining the property of an efficiently computable Jacobian determinant.
3 Sylvester Normalizing Flows
Consider the following more general transformation, similar to a single-layer MLP with $M$ hidden units and a residual connection:

(10)  $z' = z + A\, h(Bz + b)$

with $A \in \mathbb{R}^{D \times M}$, $B \in \mathbb{R}^{M \times D}$, $b \in \mathbb{R}^M$, and $M \leq D$. The Jacobian determinant of this transformation can be obtained using Sylvester's determinant identity, which is a generalization of the matrix determinant lemma.
Theorem 1 (Sylvester's determinant identity).
For all $A \in \mathbb{R}^{D \times M}$ and $B \in \mathbb{R}^{M \times D}$,

(11)  $\det\left(I_D + AB\right) = \det\left(I_M + BA\right)$

where $I_D$ and $I_M$ are the $D$- and $M$-dimensional identity matrices, respectively.
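Sylvester's determinant identity is easy to verify numerically; the sketch below contrasts the $\mathcal{O}(D^3)$ and $\mathcal{O}(M^3)$ determinant computations:

```python
import numpy as np

# Numerical check of Sylvester's determinant identity (Eq. (11)):
# det(I_D + A B) = det(I_M + B A) for A in R^{D x M}, B in R^{M x D}.
rng = np.random.default_rng(2)
D, M = 8, 3
A = rng.standard_normal((D, M))
B = rng.standard_normal((M, D))

lhs = np.linalg.det(np.eye(D) + A @ B)   # O(D^3)
rhs = np.linalg.det(np.eye(M) + B @ A)   # O(M^3), much cheaper when M << D
assert np.isclose(lhs, rhs)
```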
When $M < D$, the computation of the determinant of a $D \times D$ matrix is thus reduced to the computation of the determinant of an $M \times M$ matrix.
Using Sylvester’s determinant identity, the Jacobian determinant of the transformation in Eq. (10) is given by:

(12)  $\det \frac{\partial z'}{\partial z} = \det\left(I_M + \mathrm{diag}\left(h'(Bz + b)\right) BA\right)$
Since Sylvester’s determinant identity plays a crucial role in the proposed family of normalizing flows, we will refer to them as Sylvester normalizing flows.
3.1 Parameterization of A and B
In general, the transformation in (10) will not be invertible. Therefore, we propose the following special case of the above transformation:

(13)  $z' = z + QR\, h(\tilde{R} Q^T z + b)$

where $R$ and $\tilde{R}$ are upper triangular $M \times M$ matrices, and $Q \in \mathbb{R}^{D \times M}$ with the columns of $Q$ forming an orthonormal set of vectors. By Theorem 1, the determinant of the Jacobian of this transformation reduces to:

(14)  $\det \frac{\partial z'}{\partial z} = \det\left(I_M + \mathrm{diag}\left(h'(\tilde{R} Q^T z + b)\right) \tilde{R} R\right)$
which can be computed in $\mathcal{O}(M)$, since $\tilde{R} R$ is also upper triangular. The following theorem gives a sufficient condition for this transformation to be invertible.
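A minimal sketch of one Sylvester transformation (Eq. (13)) together with its cheap log-Jacobian (Eq. (14)); `sylvester_flow` is an illustrative name, and the explicit-Jacobian check is included only to validate the $\mathcal{O}(M)$ shortcut:

```python
import numpy as np

def sylvester_flow(z, Q, R, R_tilde, b):
    """One Sylvester transformation (Eq. (13)) with h = tanh, returning the
    transformed z and the log-Jacobian of Eq. (14), computed in O(M)."""
    a = R_tilde @ (Q.T @ z) + b
    z_new = z + Q @ (R @ np.tanh(a))
    # diag(h'(a)) R_tilde R is upper triangular, so the determinant is the
    # product of (1 + diagonal entries)
    h_prime = 1.0 - np.tanh(a) ** 2
    diag = 1.0 + h_prime * np.diag(R_tilde) * np.diag(R)
    return z_new, np.sum(np.log(np.abs(diag)))

rng = np.random.default_rng(3)
D, M = 6, 2
Q, _ = np.linalg.qr(rng.standard_normal((D, M)))      # orthonormal columns
R = np.triu(rng.standard_normal((M, M)))
R_tilde = np.triu(rng.standard_normal((M, M)))
z = rng.standard_normal(D)
b = rng.standard_normal(M)
z_new, log_det = sylvester_flow(z, Q, R, R_tilde, b)

# validate against the explicit D x D Jacobian I + Q R diag(h'(a)) R_tilde Q^T
a = R_tilde @ (Q.T @ z) + b
J = np.eye(D) + Q @ R @ np.diag(1.0 - np.tanh(a) ** 2) @ R_tilde @ Q.T
assert np.isclose(log_det, np.log(np.abs(np.linalg.det(J))))
```

Here the QR decomposition is only a convenient way to fabricate a matrix with orthonormal columns for the sketch; during training the paper maintains orthogonality differently, as described in Section 3.2.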
Theorem 2.
Let $R$ and $\tilde{R}$ be upper triangular $M \times M$ matrices. Let $h: \mathbb{R} \to \mathbb{R}$ be a smooth function with bounded, positive derivative. Then, if the diagonal entries of $R$ and $\tilde{R}$ satisfy $r_{ii} \tilde{r}_{ii} > -1/\|h'\|_\infty$ and $\tilde{R}$ is invertible, the transformation given by (13) is invertible.
Proof.

Case 1: $R$ and $\tilde{R}$ diagonal.

Recall that one-dimensional real functions with strictly positive derivatives are invertible. The columns of $Q$ are orthonormal and span a subspace $W$ of $\mathbb{R}^D$. Let $W^\perp$ denote its orthogonal complement. We can decompose $z = z_W + z_{W^\perp}$, where $z_W \in W$ and $z_{W^\perp} \in W^\perp$. Similarly, we can decompose $z' = z'_W + z'_{W^\perp}$. Clearly, $z'_{W^\perp} = z_{W^\perp}$. Hence the transformation only acts on $W$, and it suffices to consider its effect on $z_W$. Multiplying (13) by $Q^T$ from the left gives:

(15)  $\hat{z}' = \hat{z} + R\, h(\tilde{R} \hat{z} + b)$

where the vectors $\hat{z} = Q^T z$ and $\hat{z}' = Q^T z'$ are the respective coordinates of $z_W$ and $z'_W$ w.r.t. the columns of $Q$. The dimensions in (15) are completely independent, and each dimension is transformed by a real function $f_i(\hat{z}_i) = \hat{z}_i + r_{ii}\, h(\tilde{r}_{ii} \hat{z}_i + b_i)$. Consider a single dimension of (15). Since $r_{ii} \tilde{r}_{ii} > -1/\|h'\|_\infty$, we have $f'_i > 0$ and thus $f_i$ is invertible. Since all dimensions are independent and the transformation is invertible in each dimension, we can find $\hat{z} = f^{-1}(\hat{z}')$. Hence we can write the inverse of the transformation as:

(16)  $z = z' - QR\, h\left(\tilde{R} f^{-1}(Q^T z') + b\right)$
Case 2: $R$ triangular, $\tilde{R}$ diagonal.

Let us now consider the case where $R$ is an upper triangular matrix. By the argument for the diagonal case above, it suffices to consider the effect of the transformation in $W$. Multiplying (13) by $Q^T$ from the left gives:

(17)  $\hat{z}' = \hat{z} + R\, h(\tilde{R} \hat{z} + b)$

where the vectors $\hat{z}$ and $\hat{z}'$ contain the respective coordinates of $z$ and $z'$ w.r.t. the columns of $Q$. As in the diagonal case, consider the functions $f_i(x) = x + r_{ii}\, h(\tilde{r}_{ii} x + b_i)$. Since $r_{ii} \tilde{r}_{ii} > -1/\|h'\|_\infty$, we have $f'_i > 0$ and thus $f_i$ is invertible. Let us rewrite (17) in terms of $f_i$:

(18)  $\hat{z}'_M = \hat{z}_M + r_{MM}\, h(\tilde{r}_{MM} \hat{z}_M + b_M) = f_M(\hat{z}_M)$

(19)  $\hat{z}'_i = \hat{z}_i + r_{ii}\, h(\tilde{r}_{ii} \hat{z}_i + b_i) + \sum_{j > i} r_{ij}\, h(\tilde{r}_{jj} \hat{z}_j + b_j)$

(20)  $\phantom{\hat{z}'_i} = f_i(\hat{z}_i) + \sum_{j > i} r_{ij}\, h(\tilde{r}_{jj} \hat{z}_j + b_j)$

Since $f_M$ is invertible, we can write $\hat{z}_M = f_M^{-1}(\hat{z}'_M)$. Now suppose we have expressed $\hat{z}_{i+1}, \ldots, \hat{z}_M$ in terms of $\hat{z}'$. Then

(21)  $\hat{z}_i = f_i^{-1}\Big(\hat{z}'_i - \sum_{j > i} r_{ij}\, h(\tilde{r}_{jj} \hat{z}_j + b_j)\Big)$

Thus we have expressed $\hat{z}_i$ in terms of $\hat{z}'$. By induction, we can express $\hat{z}$ in terms of $\hat{z}'$, and hence the transformation is invertible.
Case 3: $R$ and $\tilde{R}$ triangular.

Now consider the general case where $\tilde{R}$ is also triangular. As before, we only need to consider the effect of the transformation in $W$:

(22)  $\hat{z}' = \hat{z} + R\, h(\tilde{R} \hat{z} + b)$

Let $g$ be the function $g(v) = v + \tilde{R} R\, h(v + b)$. By assumption, $\tilde{R}$ is invertible with inverse $\tilde{R}^{-1}$. Multiplying (22) by $\tilde{R}$ gives:

(23)  $\tilde{R} \hat{z}' = \tilde{R} \hat{z} + \tilde{R} R\, h(\tilde{R} \hat{z} + b) = g(\tilde{R} \hat{z})$

Since $\tilde{R} R$ is upper triangular with diagonal entries $r_{ii} \tilde{r}_{ii}$, $g$ is covered by case 2 considered before and is invertible. Thus, $\hat{z}$ can be written as:

(24)  $\hat{z} = \tilde{R}^{-1} g^{-1}(\tilde{R} \hat{z}')$

Hence the transformation in (22) is invertible. ∎
3.2 Preserving Orthogonality of Q

Orthogonality is a convenient property mathematically, but hard to maintain in practice. In this paper we consider three different flows based on the theorem above and various ways to preserve the orthogonality of $Q$. The first two use explicit differentiable constructions of orthogonal matrices, while the third variant assumes a specific fixed permutation matrix as the orthogonal matrix.
Orthogonal Sylvester flows.
First, we consider Sylvester flows using matrices with orthogonal columns (OSNF). In this flow we can choose $M < D$, and thus introduce a flexible bottleneck. Similar to Hasenclever et al. (2017), we ensure orthogonality of $Q$ by applying the following differentiable iterative procedure proposed by Björck and Bowie (1971) and Kovarik (1970):

(25)  $Q^{(k+1)} = Q^{(k)}\left(I + \tfrac{1}{2}\left(I - Q^{(k)T} Q^{(k)}\right)\right)$

with a sufficient condition for convergence given by $\|Q^{(0)T} Q^{(0)} - I\|_2 < 1$. Here, the 2-norm of a matrix $X$ refers to $\|X\|_2 = \sigma_{\max}(X)$, with $\sigma_{\max}(X)$ representing the largest singular value of $X$. In our experimental evaluations we ran the iterative procedure until $\|Q^{(k)T} Q^{(k)} - I\|_F \leq \epsilon$, with $\|\cdot\|_F$ the Frobenius norm and $\epsilon$ a small convergence threshold. We observed that running this procedure for a few tens of steps was sufficient to ensure convergence with respect to this threshold. To minimize the computational overhead introduced by orthogonalization, we perform it in parallel for all flows. Since this orthogonalization procedure is differentiable, it allows for the calculation of gradients with respect to $Q^{(0)}$ by backpropagation, allowing any standard optimization scheme such as stochastic gradient descent to be used for updating the flow parameters.
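The orthogonalization iteration of Eq. (25) can be sketched as follows; the initial rescaling by the 2-norm is one simple way to satisfy the convergence condition (an assumption of this sketch, not necessarily the paper's choice):

```python
import numpy as np

def bjorck_orthogonalize(Q, tol=1e-6, max_steps=200):
    """Iterative orthogonalization of Eq. (25):
    Q^(k+1) = Q^(k) (I + 0.5 (I - Q^(k)^T Q^(k))),
    run until ||Q^T Q - I||_F falls below tol. The iteration converges
    when ||Q^T Q - I||_2 < 1, which the rescaling below guarantees."""
    Q = Q / np.linalg.norm(Q, ord=2)          # largest singular value -> 1
    I = np.eye(Q.shape[1])
    for _ in range(max_steps):
        err = Q.T @ Q - I
        if np.linalg.norm(err, ord="fro") < tol:
            break
        Q = Q @ (I - 0.5 * err)               # same as Q (I + 0.5 (I - Q^T Q))
    return Q

rng = np.random.default_rng(4)
Q = bjorck_orthogonalize(rng.standard_normal((8, 3)))
assert np.allclose(Q.T @ Q, np.eye(3), atol=1e-5)
```

Because every step is a matrix product, the same procedure written in an autodiff framework is differentiable end-to-end, which is what makes it usable inside the amortized flow.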
Householder Sylvester flows.
Second, we study Householder Sylvester flows (HSNF), where the orthogonal matrices are constructed as products of Householder reflections. Householder transformations are reflections about hyperplanes. Let $v \in \mathbb{R}^D$; then the reflection about the hyperplane orthogonal to $v$ is given by:

(26)  $z' = \left(I - 2 \frac{v v^T}{\|v\|^2}\right) z$

It is worth noting that a single Householder transformation is very cheap to compute, as it only requires $D$ parameters. Chaining together several Householder transformations results in more general orthogonal matrices, and it can be shown (Bischof and Sun, 1997; Sun and Bischof, 1995) that any $D \times D$ orthogonal matrix can be written as a product of at most $D$ Householder transformations. In our Householder Sylvester flow, the number of Householder transformations $H$ is a hyperparameter that trades off the number of parameters against the generality of the orthogonal transformation. Note that the use of Householder transformations forces us to use $M = D$, since Householder transformations result in square matrices.

Triangular Sylvester flows.
Third, we consider triangular Sylvester flows (TSNF), in which the orthogonal matrices $Q$ alternate per transformation between the identity matrix and the permutation matrix that reverses the order of $z$. This is equivalent to alternating between lower and upper triangular $R$ and $\tilde{R}$ for each flow.

3.3 Data-Dependent Flow Parameters
As previously mentioned, the parameters of the base distribution as well as the flow parameters can be functions of the data point $x$ (Rezende and Mohamed, 2015). Figure 2 (left) shows a diagram of one SNF step and the amortization procedure. The inference network takes a datapoint $x$ as input and outputs the mean and variance of $z_0$, such that $z_0 \sim \mathcal{N}\left(z_0; \mu(x), \mathrm{diag}(\sigma^2(x))\right)$. Several SNF transformations are then applied, such that $z_K = f_K \circ \cdots \circ f_1(z_0)$, producing a flexible posterior distribution for $z_K$. All of the flow parameters ($R_k$, $\tilde{R}_k$, $Q_k$ and $b_k$ for each transformation $k$) are produced as an output of a hypernetwork (attached to the inference network) and are thus functions of $x$.

4 Related Work
4.1 Normalizing Flows for Variational Inference
A number of invertible transformations with tractable Jacobians have been proposed in recent years. Rezende and Mohamed (2015) first discussed such transformations in the context of stochastic variational inference, coining the term normalizing flows.
Rezende and Mohamed (2015) proposed two different parametric families of transformations with tractable Jacobians: planar and radial flows. While effective for small problems, these transformations are hard to scale to large latent spaces and often require a large number of transformations. The transformation corresponding to planar flows is given in Eq. (8).
More recently, a successful class of flows called Inverse Autoregressive Flows was introduced in (Kingma et al., 2016). As the name suggests, one IAF transformation can be seen as the inverse of an autoregressive transformation. Consider the following autoregressive transformation:
(27)  $z_i = \mu_i(z_{1:i-1}) + \sigma_i(z_{1:i-1}) \cdot \epsilon_i, \qquad \epsilon_i \sim \mathcal{N}(0, 1)$

This transformation models the distribution over the variable $z$ with an autoregressive factorization $p(z) = \prod_i p(z_i | z_{1:i-1})$. Since the parameters of the transformation for $z_i$ depend on $z_{1:i-1}$, this procedure requires $D$ sequential steps to sample a single vector $z$. This is undesirable for variational inference, where sampling occurs for every forward pass.
However, the inverse transformation (which exists if $\sigma_i > 0$ for all $i$) is easy to sample from:

(28)  $\epsilon_i = \dfrac{z_i - \mu_i(z_{1:i-1})}{\sigma_i(z_{1:i-1})}$

For this inverse transformation, $\epsilon_i$ no longer depends on the transformation of $\epsilon_j$ for $j < i$. Hence, this transformation can be computed in parallel. Rewriting $s_i = 1/\sigma_i$ and $m_i = -\mu_i/\sigma_i$ yields the IAF transformation:

(29)  $z'_i = s_i(z_{1:i-1})\, z_i + m_i(z_{1:i-1})$
Starting from a diagonal Gaussian sample $z_0$, multiple IAF transformations can be stacked on top of each other to produce flexible probability distributions.
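The contrast between the sequential sampling of Eq. (27) and the parallel inverse of Eq. (28) can be illustrated with fixed, hand-picked autoregressive functions (a toy stand-in for learned MADE networks):

```python
import numpy as np

# Toy illustration of Eqs. (27)-(28): mu_i and sigma_i depend only on the
# previous coordinate here, instead of on learned networks.
def mu(prev):
    return 0.5 * prev

def sigma(prev):
    return np.ones_like(prev)  # must be strictly positive

def autoregressive_sample(eps):
    """Eq. (27): D sequential steps to turn noise eps into z."""
    z = np.zeros_like(eps)
    for i in range(len(eps)):
        prev = z[i - 1] if i > 0 else 0.0
        z[i] = mu(prev) + sigma(prev) * eps[i]
    return z

def inverse_transform(z):
    """Eq. (28): the inverse is computable in parallel, since each
    eps_i only needs the already-known preceding coordinates of z."""
    prev = np.concatenate(([0.0], z[:-1]))
    return (z - mu(prev)) / sigma(prev)

eps = np.array([1.0, -2.0, 0.5])
z = autoregressive_sample(eps)
assert np.allclose(inverse_transform(z), eps)   # round trip recovers the noise
```

The loop in `autoregressive_sample` is exactly the $D$-step cost that makes the forward autoregressive direction unattractive for variational inference, while `inverse_transform` touches all coordinates at once.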
If $m$ and $s$ depend linearly on $z$, IAF can model full-covariance Gaussian distributions. In order to move away from Gaussian distributions to more flexible distributions, it is important that $m$ and $s$ are nonlinear functions of $z$.
In practice, wide MADEs (Germain et al., 2015) or deep PixelCNN layers (van den Oord et al., 2016) are needed to increase the flexibility of IAF transformations. This results in transformations with a large number of parameters. As shown in Figure 2 (right), amortization is achieved through a context $c(x)$ that is fed into the autoregressive networks as an additional input at every IAF step.
Our triangular Sylvester flows are strongly related to mean-only IAF transformations ($s_i = 1$). As mentioned in Kingma et al. (2016), between every IAF transformation the order of $z$ is reversed, in order to ensure that on average all dimensions get warped equally. In TSNF, the same effect is achieved by using the permutation matrix that reverses the order of $z$ in every other transformation as the orthogonal matrix. However, mean-only IAF is a volume-preserving transformation, i.e. the determinant of its Jacobian has absolute value one. TSNF is not volume-preserving, due to the nonzero elements on the diagonals of $R$ and $\tilde{R}$. Note that Kingma et al. (2016) showed that the difference in performance between mean-only IAF and the general IAF transformation was negligible.
The most important difference between IAF and TSNF is the way parameters are amortized. In TSNF, $R$, $\tilde{R}$ and $b$ are directly amortized functions of the input $x$ (see Fig. 2). This is equivalent to amortizing the MADE parameters in mean-only IAF. Having input-dependent MADE parameters allows for flexible transformations with fewer parameters.
Householder Sylvester flows can also be seen as a nonlinear extension of Householder flows (Tomczak and Welling, 2016). Householder flows are volume-preserving flows, which transform a variational posterior with a diagonal covariance matrix into a full-covariance posterior. Householder flows are a special case of HSNF if the nonlinearity $h$ is taken to be the identity, $R$ and $\tilde{R}$ reduce to identity matrices, and the residual connection in Eq. (13) is left out.
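Chaining Householder reflections into an orthogonal matrix, as used by both Householder flows and HSNF, can be sketched as follows; `householder_product` is an illustrative helper that applies the reflection of Eq. (26) once per parameter vector:

```python
import numpy as np

def householder_product(V):
    """Build an orthogonal D x D matrix as a product of Householder
    reflections, one per row of V: H_v = I - 2 v v^T / ||v||^2."""
    D = V.shape[1]
    Q = np.eye(D)
    for v in V:
        # right-multiply by H_v without forming the D x D reflection matrix
        Q = Q - 2.0 * np.outer(Q @ v, v) / (v @ v)
    return Q

rng = np.random.default_rng(5)
Q = householder_product(rng.standard_normal((4, 6)))   # 4 reflections in R^6
assert np.allclose(Q.T @ Q, np.eye(6))
```

Each reflection costs only $\mathcal{O}(D)$ parameters, which is the parameter-efficiency argument made for HSNF above.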
4.2 Normalizing Flows for Density Estimation
A number of invertible transformations have been proposed in the context of density estimation. Note that density estimation requires the inverse of the flow to be tractable. Having a provably invertible transformation is not the same as being able to compute the inverse.
For density estimation with normalizing flows, we are interested in maximizing the log-likelihood of the data:

(30)  $\ln p(x) = \ln q\left(f^{-1}(x)\right) + \ln \left|\det \frac{\partial f^{-1}(x)}{\partial x}\right|$

Thus, the goal is to transform a complicated data distribution back into a simple distribution. In general, both directions of an invertible transformation need not be tractable. Hence, methods developed for density estimation are generally not directly applicable to variational inference.
Non-linear independent components estimation (NICE, Dinh et al. (2014)), the related Real NVP (Dinh et al., 2016), and Masked Autoregressive Flow (MAF, Papamakarios et al. (2017)) are recent examples of normalizing flows for density estimation.
In NICE, each transformation splits the variables into two disjoint subsets $(z_A, z_B)$. One of the subsets is transformed as $z'_A = z_A + f(z_B)$, while $z_B$ is left unchanged. In the next transformation, a different subset of variables is transformed. This results in a transformation which is trivially invertible and has a tractable Jacobian. Real NVP uses the same fundamental idea. Appealingly, because of the tractable inverse, NICE and Real NVP can generate data and estimate densities with one forward pass. However, due to the fact that only a subset of the variables is updated in each transformation, many transformations are needed in practice. Rezende and Mohamed (2015) compared NICE to planar flows in the context of variational inference and found that planar flows empirically perform better.
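The additive coupling layer of NICE, and the reason its inverse is free, can be sketched as follows (illustrative helper names; a `tanh` stands in for the coupling network):

```python
import numpy as np

def nice_coupling(z, shift_net, swap=False):
    """Additive coupling layer: split z into halves (z_a, z_b), transform
    one half as z_a' = z_a + f(z_b), leave the other unchanged. The
    Jacobian is triangular with unit diagonal, so log|det| = 0."""
    d = len(z) // 2
    a, b = (z[d:], z[:d]) if swap else (z[:d], z[d:])
    a = a + shift_net(b)
    return np.concatenate((b, a)) if swap else np.concatenate((a, b))

def nice_coupling_inverse(z, shift_net, swap=False):
    """The inverse just subtracts the same shift, recomputed from the
    untouched half."""
    d = len(z) // 2
    a, b = (z[d:], z[:d]) if swap else (z[:d], z[d:])
    a = a - shift_net(b)
    return np.concatenate((b, a)) if swap else np.concatenate((a, b))

f = lambda b: np.tanh(b)          # stand-in for a small neural network
z = np.array([0.3, -1.2, 2.0, 0.7])
out = nice_coupling(z, f, swap=True)
assert np.allclose(nice_coupling_inverse(out, f, swap=True), z)
```

The `swap` flag alternates which half is updated between layers, mirroring the alternation of subsets described above.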
Finally, Papamakarios et al. (2017) showed that fitting an MAF can be seen as fitting an implicit IAF from the data distribution to the base distribution. However, generating data from an MAF density model requires $D$ sequential passes, making it unappealing for variational inference.
5 Number of Parameters
Here, we briefly compare the number of parameters needed by planar flows, IAF, and the three Sylvester normalizing flows. We denote the size of the stochastic variables by $D$, and the number of output units of the inference network by $E$.
Planar flows use amortized parameters $u_k, w_k \in \mathbb{R}^D$ and $b_k \in \mathbb{R}$ for each flow transformation. Therefore, the number of amortized outputs related to flow transformations is equal to $K(2D + 1)$.
For the implementation of IAF described in Section 6, the inference network needs to produce a context of size $H$, where $H$ denotes the width of the MADE layers. The MADE layers themselves contain learnable parameters whose number scales quadratically in $H$ per flow, so the total number of flow-related learnable parameters is dominated by a term of order $K H^2$.
In the case of orthogonal Sylvester flows with a bottleneck of size $M$, we require $K\left(DM + M(M + 1) + M\right)$ amortized parameters. For Householder Sylvester flows with $H$ Householder reflections per flow transformation, $K\left(HD + D(D + 1) + D\right)$ parameters are needed. Finally, for triangular Sylvester flows, $K\left(D(D + 1) + D\right)$ parameters require optimization.
Planar flows require the smallest number of parameters but generally yield worse results. IAFs, on the other hand, require a number of parameters that is quadratic in the width of the MADE layers, which has to be quite large for good results. In contrast, for SNFs the number of parameters is quadratic in the dimension of the latent space; while large, this can still be amortized.
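The scaling argument above can be made concrete with indicative counts. These formulas follow the parameterizations sketched in this section but are not the paper's exact bookkeeping, so treat them as order-of-magnitude estimates:

```python
# Indicative amortized-parameter counts per model (assumed formulas,
# not the paper's exact accounting).
def planar_params(D, K):
    # u, w in R^D and b in R, per flow
    return K * (2 * D + 1)

def osnf_params(D, M, K):
    # Q (D x M), two M x M triangular matrices, and b in R^M, per flow
    return K * (D * M + M * (M + 1) + M)

def tsnf_params(D, K):
    # two D x D triangular matrices and b in R^D, per flow
    return K * (D * (D + 1) + D)

D, K = 64, 16
assert planar_params(D, K) < osnf_params(D, 32, K) < tsnf_params(D, K)
```

With $D = 64$ and $K = 16$ flows, planar flows need on the order of $10^3$ amortized outputs while the Sylvester variants need on the order of $10^4$ to $10^5$, which is large but still practical to amortize.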
6 Experiments
We perform empirical studies of the performance of Sylvester flows on four datasets: statically binarized MNIST, Freyfaces, Omniglot and Caltech 101 Silhouettes. The baseline model is a plain VAE with a fully factorized Gaussian distribution. We furthermore compare against planar flows and Inverse Autoregressive Flows of different sizes.
We use annealing to optimize the lower bound, where the prefactor of the KL divergence is linearly increased from 0 to 1 during the first 100 epochs, as suggested by Bowman et al. (2015) and Sønderby et al. (2016). The same learning rate was used in all experiments. In order to obtain estimates of the negative log-likelihood we used importance sampling (as proposed in Rezende et al. (2014)). Unless otherwise stated, 5000 importance samples were used.

In order to assess the performance of the different flows properly, we use the same base encoder and decoder architecture for all models. We use gated convolutions and transposed convolutions as base layers for the encoder and decoder, respectively. The inference network consists of several gated convolution layers that produce a hidden unit vector. After being flattened, these hidden units act as input to two fully connected layers that predict the mean and variance of $z_0$. For planar and Sylvester flows, the flattened hidden units are passed to a separate linear layer that outputs the amortized flow parameters. For IAF, the flattened hidden units are passed to a linear layer to produce the context vector $c(x)$. For details of the architecture see Section A of the appendix. In all models the latent space is of dimension $D = 64$.
We use the following implementation for each IAF transformation:^1 one IAF transformation first applies one MADE layer (denoted MaskedLinear) followed by a nonlinearity to the input $z$, upscaling it to a hidden variable $h$ of size $H$. At this point the context vector $c(x)$ is added to the hidden units, after which two more masked layers are applied to produce the mean $m$ and gate $s$ of the IAF transformation:

(31)  $z' = \sigma(s) \odot z + (1 - \sigma(s)) \odot m$

Here, $\sigma(\cdot)$ denotes the sigmoid activation function. Kingma et al. (2016) mention that the gated form of IAF in Eq. (31) is more stable than the form of Eq. (29). Note that the size of $c(x)$ scales with the width $H$ of the MADE layers.

^1 This implementation is based on the open source code for IAF available at https://github.com/openai/iaf
Table 1: Negative ELBO and estimated negative log-likelihood (NLL) on statically binarized MNIST for the VAE baseline and the Planar, IAF, OSNF, HSNF and TSNF models.
Table 2: Negative ELBO and estimated NLL on Freyfaces, Omniglot and Caltech 101 Silhouettes for the VAE baseline and the Planar, IAF, OSNF, HSNF and TSNF models.
6.1 MNIST
Figure 3 shows the dependence of the negative evidence lower bound (or free energy) on the number of flows and the type of flow for static MNIST. The exact numbers corresponding to the figure are shown in Section B in the appendix.
For all models the performance improves as a function of the number of flows. For 4 flows, the difference between the baseline VAE and planar flows is very small. However, planar flows clearly benefit from more flow transformations.
For IAF, three different widths of the MADE layers were used: 320, 640 and 1280. Surprisingly, for 4 flows the widest IAF with 1280 hidden units is outperformed by an IAF with 640 hidden units in the MADE layers. We expect this to be due to the fact that the wider model has more parameters and can therefore be harder to train, as indicated by its larger standard deviation.
All three Sylvester flows outperform IAF and planar flows. For orthogonal Sylvester flows, we show results for $M = 16$ and $M = 32$ orthogonal vectors per orthogonal matrix, corresponding to bottlenecks of size 16 and 32, respectively, for a latent space of size $D = 64$. Clearly, a larger bottleneck improves performance. For Householder Sylvester flows we experimented with two settings for the number of Householder reflections per orthogonal matrix. Since the results were nearly indistinguishable between these two variants, we have left out one of the curves to avoid clutter. OSNF with $M = 32$, HSNF and TSNF seem to perform on par.
In Table 1, the negative evidence lower bound and the estimated negative log-likelihood are shown for the baseline VAE, together with all flow models with 16 flows. The reported result for IAF is for a MADE width of 1280. The OSNF model has a bottleneck of $M = 32$, and HSNF contains 8 Householder reflections per orthogonal matrix. Again, all Sylvester flows outperform planar flows and IAF, both in terms of the free energy and the negative log-likelihood.
As discussed in Section 4, TSNF is closely related to mean-only IAF, but with the MADE parameters produced by a hypernetwork that depends on the input data $x$. The fact that TSNF outperforms IAF indicates that having data-dependent flow parameters directly leads to a more flexible transformation than taking a very wide MADE with a data-dependent context as an additional input.
6.2 Freyfaces, Omniglot and Caltech 101 Silhouettes
We further assess the performance of the different models on Freyfaces, Omniglot and Caltech 101 Silhouettes. The results are shown in Table 2. The model settings are the same as those used for Table 1.^2

^2 For Caltech 101 Silhouettes we used 2000 importance samples for the estimation of the negative log-likelihood.
Freyfaces is a very small dataset of around 2000 faces. All normalizing flows increase performance, with planar flows yielding the best result, closely followed by triangular and Householder Sylvester flows. We expect planar flows to perform best in this case because, having the fewest parameters, they are the least sensitive to overfitting.
For Omniglot and Caltech 101 Silhouettes the results are clearer, with the Sylvester normalizing flow family yielding the best performance. Both HSNF and TSNF perform better than OSNF. This could be attributed to the fact that OSNF has a bottleneck of $M = 32$ for a latent space of size $D = 64$. The IAF scores for Caltech 101 are surprisingly bad. We suspect this is due to the large number of parameters that need to be trained for IAF(1280). Therefore, we also evaluated MADEs of width 320 with 16 flows. The resulting free energy and estimated negative log-likelihood only slightly improve on the results of the 1280-wide IAFs.
7 Conclusion
We present a new family of normalizing flows: Sylvester normalizing flows. These flows generalize planar flows, while maintaining an efficiently computable Jacobian determinant through the use of Sylvester's determinant identity. We ensure invertibility of the flows through the use of orthogonal and triangular parameter matrices. Three variants of Sylvester flows are investigated. First, orthogonal Sylvester flows use an iterative procedure to maintain orthogonality of parameter matrices. Second, Householder Sylvester flows use Householder reflections to construct orthogonal matrices. Third, triangular Sylvester flows alternate between fixed permutation and identity matrices for the orthogonal matrices. We show that triangular Sylvester flows are closely related to mean-only IAF with data-dependent MADE parameters. While performing comparably to planar flows and IAF on the Freyfaces dataset, our proposed family of flows improves significantly upon planar flows and IAF on the other three datasets.
Acknowledgements
We would like to thank Christos Louizos for helping with the implementation of inverse autoregressive flows, and Diederik Kingma for fruitful discussions. LH is funded by the UK EPSRC OxWaSP CDT through grant EP/L016710/1. JMT is funded by the European Commission within the MSCIF (Grant No. 702666). RvdB is funded by SAP SE.
References
 Bischof and Sun (1997) Christian Bischof and Xiaobai Sun. On orthogonal block elimination. Technical Report MCS-P450-0794, Argonne National Laboratory, Argonne, IL, October 1997.
 Björck and Bowie (1971) Åke Björck and Clazett Bowie. An iterative algorithm for computing the best estimate of an orthogonal matrix. SIAM Journal on Numerical Analysis, 8(2):358–364, 1971.
 Bowman et al. (2015) Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.
 Dinh et al. (2014) Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
 Dinh et al. (2016) Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. arXiv preprint arXiv:1605.08803, 2016.
 Germain et al. (2015) Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: Masked Autoencoder for Distribution Estimation. ICML, pages 881–889, 2015.
 Gershman and Goodman (2014) Samuel Gershman and Noah Goodman. Amortized inference in probabilistic reasoning. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 36, 2014.
 Ha et al. (2016) David Ha, Andrew Dai, and Quoc V. Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
 Hasenclever et al. (2017) Leonard Hasenclever, Jakub Tomczak, Rianne van den Berg, and Max Welling. Variational inference with orthogonal normalizing flows. 2017.

 Hoffman et al. (2013) Matthew D. Hoffman, David M. Blei, Chong Wang, and John Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14:1303–1347, 2013.
 Jordan et al. (1999) Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
 Kingma and Welling (2014) Diederik Kingma and Max Welling. Efficient gradient-based inference through transformations between Bayes nets and neural nets. ICML, pages 1782–1790, 2014.
 Kingma and Welling (2013) Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
 Kingma et al. (2016) Diederik P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved Variational Inference with Inverse Autoregressive Flow. NIPS, pages 4743–4751, 2016.
 Kovarik (1970) Zdislav Kovarik. Some iterative methods for improving orthonormality. SIAM Journal on Numerical Analysis, 7(3):386–389, 1970.

 Nalisnick et al. (2016) Eric Nalisnick, Lars Hertel, and Padhraic Smyth. Approximate inference for deep latent Gaussian mixtures. In NIPS Workshop on Bayesian Deep Learning, 2016.
 Papamakarios et al. (2017) George Papamakarios, Theo Pavlakou, and Iain Murray. Masked Autoregressive Flow for Density Estimation. NIPS, pages 2335–2344, 2017.
 Rezende and Mohamed (2015) Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. ICML, pages 1530–1538, 2015.
 Rezende et al. (2014) Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
 Salimans et al. (2015) Tim Salimans, Diederik Kingma, and Max Welling. Markov Chain Monte Carlo and variational inference: Bridging the gap. ICML, pages 1218–1226, 2015.
 Sønderby et al. (2016) Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder Variational Autoencoders. arXiv preprint arXiv:1602.02282, 2016.
 Sun and Bischof (1995) Xiaobai Sun and Christian Bischof. A basis-kernel representation of orthogonal matrices. SIAM Journal on Matrix Analysis and Applications, 16(4):1184–1196, 1995.
 Tabak and Turner (2013) EG Tabak and Cristina V Turner. A family of nonparametric density estimation algorithms. Communications on Pure and Applied Mathematics, 66(2):145–164, 2013.
 Tabak and Vanden-Eijnden (2010) Esteban G Tabak and Eric Vanden-Eijnden. Density estimation by dual ascent of the log-likelihood. Communications in Mathematical Sciences, 8(1):217–233, 2010.
 Tomczak and Welling (2016) Jakub M Tomczak and Max Welling. Improving Variational Autoencoders using Householder Flow. arXiv preprint arXiv:1611.09630, 2016.
 Tran et al. (2015) Dustin Tran, Rajesh Ranganath, and David M Blei. The variational Gaussian process. arXiv preprint arXiv:1511.06499, 2015.
 van den Oord et al. (2016) Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Koray Kavukcuoglu, Oriol Vinyals, and Alex Graves. Conditional image generation with PixelCNN decoders. NIPS, pages 4790–4798, 2016.
Appendix A Architecture
In the experiments we used convolutional layers for both the encoder and the decoder. Moreover, we used the gated activation function for the convolutional layers:
\[
\mathbf{h}_{l} = \left( \mathbf{W}_{l} \ast \mathbf{h}_{l-1} + \mathbf{b}_{l} \right) \odot \sigma\left( \mathbf{V}_{l} \ast \mathbf{h}_{l-1} + \mathbf{c}_{l} \right),
\]
where $\mathbf{h}_{l-1}$ and $\mathbf{h}_{l}$ are inputs and outputs of the $l$-th layer, respectively, $\mathbf{W}_{l}$ and $\mathbf{V}_{l}$ are weights of the $l$-th layer, $\mathbf{b}_{l}$ and $\mathbf{c}_{l}$ denote biases, $\ast$ is the convolution operator, and $\sigma(\cdot)$ is the sigmoid activation function.
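As a minimal illustration of the gated activation (a NumPy sketch under our notation, not the actual implementation; the function and variable names are ours), the output is an elementwise product of one convolution pre-activation with a sigmoid gate computed from a second, parallel pre-activation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_activation(pre_w, pre_v):
    """Gated activation: the first pre-activation (W * h + b) is
    modulated elementwise by a sigmoid gate of the second (V * h + c)."""
    return pre_w * sigmoid(pre_v)

# Toy example: two (channels, height, width) pre-activation maps,
# as would be produced by two parallel convolutions.
pre_w = np.full((4, 8, 8), 2.0)
pre_v = np.zeros((4, 8, 8))           # sigmoid(0) = 0.5, so the gate is half-open
out = gated_activation(pre_w, pre_v)  # every entry is 2.0 * 0.5 = 1.0
```

In practice both pre-activations come from separate convolutions over the same input, doubling the number of filters per gated layer.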
We used the following architecture of the encoder ($k$ is a kernel size, $p$ is a padding size, and $s$ is a stride size); we use the PyTorch convention of defining convolutional layers.
Notice that the last layer acts as a fully-connected layer. Finally, fully-connected linear layers were used to parameterize the diagonal Gaussian distribution and the amortized parameters of the flow.
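The final step above can be sketched as follows (a hedged NumPy illustration, with hypothetical names; the paper's actual heads are linear layers on top of the encoder features): the two fully-connected heads output a mean and a log-variance, and a posterior sample is drawn with the reparameterization trick.

```python
import numpy as np

def sample_diag_gaussian(mu, log_var, rng):
    """Reparameterized sample z = mu + sigma * eps, with eps ~ N(0, I),
    where sigma = exp(0.5 * log_var) is the diagonal standard deviation."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)

# The encoder's fully-connected heads would produce these two vectors.
mu = np.array([0.5, -1.0, 2.0])
log_var = np.zeros(3)  # unit variance
z = sample_diag_gaussian(mu, log_var, rng)
```

Sampling this way keeps the draw differentiable with respect to `mu` and `log_var`, which is what makes amortized stochastic gradient training possible.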
The decoder mirrors the structure of the encoder, using transposed convolutional layers with an additional output padding parameter.
A.1 Description of datasets
In the experiments we used the following four image datasets: static MNIST (http://yann.lecun.com/exdb/mnist/), OMNIGLOT (https://github.com/yburda/iwae/blob/master/datasets/OMNIGLOT/chardata.mat), Caltech 101 Silhouettes (https://people.cs.umass.edu/~marlin/data/caltech101_silhouettes_28_split1.mat), and Frey Faces (http://www.cs.nyu.edu/~roweis/data/frey_rawface.mat). Frey Faces contains images of size and all other datasets contain images.
MNIST consists of handwritten digits split into 60,000 training datapoints and 10,000 test images. In order to perform model selection, we put aside 10,000 images from the training set.
OMNIGLOT is a dataset containing 1,623 handwritten characters from 50 different alphabets. Each character is represented by about 20 images, which makes the problem very challenging. The dataset is split into 24,345 training datapoints and 8,070 test images. We randomly picked 1,345 training examples for validation. During training we applied dynamic binarization of the data, similarly to dynamic MNIST.
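Dynamic binarization, as used above, treats each grayscale pixel intensity in $[0, 1]$ as a Bernoulli probability and resamples a fresh binary image every epoch, which acts as a form of data augmentation. A minimal NumPy sketch (the function name is ours, not from the paper's code):

```python
import numpy as np

def dynamic_binarize(batch, rng):
    """Resample binary pixels, with grayscale intensities in [0, 1]
    acting as Bernoulli probabilities; called anew every epoch."""
    return (rng.random(batch.shape) < batch).astype(np.float32)

rng = np.random.default_rng(0)
batch = np.array([[0.0, 1.0, 0.5],
                  [0.9, 0.1, 0.0]])
binary = dynamic_binarize(batch, rng)  # entries are exactly 0.0 or 1.0
```

Pixels with intensity 0 always map to 0 and pixels with intensity 1 always map to 1; intermediate intensities flip between epochs.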
Caltech 101 Silhouettes contains images representing silhouettes of objects from 101 classes. Each image is a filled, black polygon of an object on a white background. There are 4,100 training images, 2,264 validation datapoints, and 2,307 test examples. The dataset is characterized by a small training sample size and many classes, which makes the learning problem difficult.
Frey Faces is a dataset of faces of one person with different emotional expressions. The dataset consists of nearly 2,000 grayscale images. We randomly split them into 1,565 training images, validation images, and test images. We repeated the experiment times.
Appendix B MNIST experiments
Model | ELBO
VAE | 
Planar () | 
Planar () | 
Planar () | 
IAF (, ) | 
IAF (, ) | 
IAF (, ) | 
IAF (, ) | 
IAF (, ) | 
IAF (, ) | 
IAF (, ) | 
IAF (, ) | 
IAF (, ) | 
OSNF (, ) | 
OSNF (, ) | 
OSNF (, ) | 
OSNF (, ) | 
OSNF (, ) | 
OSNF (, ) | 
HSNF (, ) | 
HSNF (, ) | 
HSNF (, ) | 
HSNF (, ) | 
HSNF (, ) | 
HSNF (, ) | 
TSNF () | 
TSNF () | 
TSNF () | 