1 Introduction
Maximum likelihood is a core machine learning paradigm that poses learning as a distribution alignment problem. However, it is often unclear what family of distributions should be used to fit high-dimensional continuous data. In this regard, the change of variables theorem offers an appealing way to construct flexible distributions that allow tractable exact sampling and efficient evaluation of their densities. This class of models is generally referred to as invertible or flow-based generative models
(Deco and Brauer, 1995; Rezende and Mohamed, 2015). With invertibility as its core design principle, flow-based models have been shown to be capable of generating realistic images (Kingma and Dhariwal, 2018) and can achieve density estimation performance on par with competing state-of-the-art approaches (Ho et al., 2019). In applications, they have been used to study adversarial robustness (Jacobsen et al., 2019) and to train hybrid models with both generative and classification capabilities (Nalisnick et al., 2019) using a weighted maximum likelihood objective.
However, existing high-dimensional flow-based models rely on specialized architectures such as coupling blocks (Dinh et al., 2014, 2017) or solving a differential equation (Grathwohl et al., 2019). Such approaches have a strong inductive bias that can hinder their application to other tasks, such as learning representations that are suitable for both generative and discriminative tasks.
Recent work by Behrmann et al. (2019) showed that residual networks (He et al., 2016) can be made invertible by simply enforcing a Lipschitz constraint, allowing a very successful discriminative deep network architecture to be used for unsupervised flow-based modeling. Unfortunately, density evaluation requires computing an infinite series. The fixed-truncation estimator used by Behrmann et al. (2019) introduces substantial bias that is tightly coupled with the expressiveness of the network; since bias enters both the objective and its gradients, such training cannot be said to perform maximum likelihood.
In this work, we introduce Residual Flows, a flow-based generative model that produces an unbiased estimate of the log density and allows memory-efficient backpropagation through the log density computation. This lets us use expressive architectures and train via maximum likelihood. Furthermore, we propose and experiment with activation functions that avoid gradient saturation, and with induced mixed norms for Lipschitz-constrained neural networks.
2 Background
Maximum likelihood estimation.
To perform maximum likelihood with stochastic gradient descent, it is sufficient to have an unbiased estimator for the gradient as
$\nabla_\theta\, \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log p_\theta(x)\big] = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\nabla_\theta \log p_\theta(x)\big]$   (1)
where $p_{\mathrm{data}}$ is the unknown data distribution, which can be sampled from, and $p_\theta$ is the model distribution. An unbiased estimator of the gradient immediately follows from an unbiased estimator of the log density function, $\log p_\theta(x)$.
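As a minimal illustration of (1) (our toy example, not from the paper), the sketch below fits the mean of a unit-variance Gaussian by stochastic gradient ascent, using an unbiased per-batch estimate of the gradient of the log density:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=5000)   # samples from the unknown p_data

# Model: unit-variance Gaussian with unknown mean mu.
# log p_mu(x) = -0.5 * (x - mu)^2 + const, so  d/dmu log p_mu(x) = x - mu.
mu = 0.0
lr = 0.1
for _ in range(500):
    batch = rng.choice(data, size=64)   # x ~ p_data
    grad = float(np.mean(batch - mu))   # unbiased estimate of the gradient in (1)
    mu += lr * grad                     # stochastic gradient ascent on the likelihood
```

After training, `mu` is close to the true mean of 2 despite each gradient being a noisy, but unbiased, estimate.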
Change of variables theorem.
With an invertible transformation $f$, the change of variables

$\log p(x) = \log p(f(x)) + \log \left| \det \frac{\partial f(x)}{\partial x} \right|$   (2)

captures the change in density of the transformed samples. A simple base distribution such as a standard normal is often used for $p(f(x))$. Tractable evaluation of (2) allows flow-based models to be trained using the maximum likelihood objective (1). In contrast, variational autoencoders (Kingma and Welling, 2014) can only optimize a stochastic lower bound, and generative adversarial networks (Goodfellow et al., 2014) require an extra discriminator network for training.

Invertible residual networks (i-ResNets).
Residual networks are composed of simple transformations $y = x + g(x)$. Behrmann et al. (2019) noted that this transformation is invertible by the Banach fixed point theorem if $g$ is contractive, i.e. has Lipschitz constant strictly less than unity, which was enforced using spectral normalization (Miyato et al., 2018; Gouk et al., 2018).
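The inverse can then be computed with the fixed-point iteration x ← y − g(x), whose convergence rate is governed by the Lipschitz constant. A minimal one-dimensional sketch, with g(x) = 0.5·tanh(x) as an arbitrary contractive choice of ours:

```python
import math

# A contractive residual branch: g(x) = 0.5 * tanh(x) has Lipschitz constant 0.5 < 1.
def g(x):
    return 0.5 * math.tanh(x)

def f(x):
    return x + g(x)

def invert(y, iters=50):
    """Invert f(x) = x + g(x) via the Banach fixed-point iteration x <- y - g(x)."""
    x = y                  # any initial point works; contraction guarantees convergence
    for _ in range(iters):
        x = y - g(x)
    return x

y = f(1.3)
x_rec = invert(y)          # recovers the original input 1.3
```

The recovered input matches the original to numerical precision, since the error shrinks by a factor of at most 0.5 per iteration.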
Applying i-ResNets to the change of variables (2), the identity
$\log p(x) = \log p(f(x)) + \operatorname{tr}\left( \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}\, [J_g(x)]^k \right)$   (3)
was shown, where $J_g(x) = \frac{\partial g(x)}{\partial x}$. Furthermore, the Skilling-Hutchinson estimator (Skilling, 1989; Hutchinson, 1990) was used to estimate the trace in the power series. Behrmann et al. (2019) used a fixed truncation to approximate the infinite series in (3). The Lipschitz constant of $g$ bounds the spectral radius of $J_g$, and therefore determines the convergence rate of this power series. As such, the fixed truncation estimator introduces substantial bias in the log density and is unreliable if the Lipschitz constant is close to one, thus requiring a careful balance between bias and expressiveness. Without decoupling the objective and estimation bias, i-ResNets end up optimizing for the bias without improving the actual maximum likelihood objective (see Figure 1).
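For concreteness, the following numpy sketch (using a toy Jacobian of ours, not a trained network) implements this fixed-truncation approach: the truncated power series of (3) combined with the Skilling-Hutchinson trace estimator, compared against the exact log-determinant:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.normal(size=(d, d))
J = 0.4 * A / np.linalg.norm(A, 2)             # Jacobian with spectral norm 0.4 < 1

exact = np.log(np.linalg.det(np.eye(d) + J))   # exact log det(I + J), for reference

def truncated_logdet(J, n_terms=20, n_probes=20000):
    """Fixed truncation of (3): sum_{k=1}^{n} (-1)^{k+1}/k * tr(J^k),
    with tr(J^k) replaced by the Skilling-Hutchinson estimate E[v^T J^k v]."""
    v = rng.normal(size=(J.shape[0], n_probes))  # Gaussian probe vectors, one per column
    est, w = 0.0, v
    for k in range(1, n_terms + 1):
        w = J @ w                                # w = J^k v, column-wise
        est += (-1) ** (k + 1) / k * np.mean(np.sum(v * w, axis=0))
    return est
```

With a spectral norm of 0.4, twenty terms make the truncation error negligible; with a Lipschitz constant near one, far more terms would be needed, which is exactly the bias problem described above.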
3 Residual Flows
3.1 Unbiased Log Density Estimation for Maximum Likelihood Estimation
Evaluation of the exact log density function in (3) requires infinite time due to the power series. Instead, we rely on randomization to derive an unbiased estimator that can be computed in finite time (with probability one) based on an existing concept (Kahn, 1955).
To illustrate the idea, let $\Delta_k$ denote the $k$-th term of an infinite series, and suppose we always evaluate the first term, then flip a coin $b \sim \mathrm{Bernoulli}(q)$ to determine whether we stop or continue evaluating the remaining terms. By reweighting the remaining terms by $\frac{1}{1-q}$, we obtain an unbiased estimator

$\Delta_1 + \mathbb{E}_{b \sim \mathrm{Bernoulli}(q)}\left[ \mathbb{1}_{[b=0]}\, \frac{\sum_{k=2}^{\infty} \Delta_k}{1-q} \right] = \sum_{k=1}^{\infty} \Delta_k.$
This unbiased estimator has probability $q$ of being evaluated in finite time. We can obtain an estimator that is evaluated in finite time with probability one by applying this process infinitely many times to the remaining terms. Directly sampling the number of evaluated terms, we obtain the appropriately named “Russian roulette” estimator (Kahn, 1955)
$\sum_{k=1}^{\infty} \Delta_k = \mathbb{E}_{n \sim p(N)}\left[ \sum_{k=1}^{n} \frac{\Delta_k}{\mathbb{P}(N \geq k)} \right]$   (4)
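A small self-contained sketch of (4) on a scalar series (our example, chosen so the true sum is ln 2): sample the truncation point from a geometric distribution and reweight each term by 1/P(N ≥ k), and the empirical mean of the estimator matches the true sum:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy infinite series: sum_{k>=1} 0.5^k / k = ln(2).
def delta(k):
    return 0.5 ** k / k

# N ~ Geometric(q): P(N = k) = q * (1-q)^(k-1), hence P(N >= k) = (1-q)^(k-1).
q = 0.5

def roulette_sample():
    n = rng.geometric(q)                       # random truncation point
    return sum(delta(k) / (1 - q) ** (k - 1)   # reweight each term by 1 / P(N >= k)
               for k in range(1, n + 1))

estimate = float(np.mean([roulette_sample() for _ in range(100000)]))
```

Each individual sample evaluates only a few terms, yet the average converges to ln 2 ≈ 0.6931, demonstrating unbiasedness despite finite compute per sample.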
We note that the explanation above is only meant as an intuitive guide and not a formal derivation. The peculiarities of dealing with infinite quantities dictate that we must make assumptions on $\Delta_k$, $p(N)$, or both in order for the equality in (4) to hold. While many existing works have made different assumptions depending on their specific applications of (4), we state our result as a theorem where the only condition is that $p(N)$ must have support over all of the indices.
Theorem 1 (Unbiased log density estimator).
Let $f(x) = x + g(x)$ with $\mathrm{Lip}(g) < 1$ and let $N$ be a random variable with support over the positive integers. Then

$\log p(x) = \log p(f(x)) + \mathbb{E}_{n, v}\left[ \sum_{k=1}^{n} \frac{(-1)^{k+1}}{k}\, \frac{v^\top [J_g(x)]^k v}{\mathbb{P}(N \geq k)} \right],$   (5)

where $n \sim p(N)$ and $v \sim \mathcal{N}(0, I)$.

Here we have also used the Skilling-Hutchinson trace estimator (Skilling, 1989; Hutchinson, 1990) to estimate the trace of the matrices $J_g^k$. A detailed proof is given in Appendix B.
Note that since $J_g$ is constrained to have a spectral radius less than unity, the power series converges exponentially. The variance of the Russian roulette estimator is small when the infinite series exhibits fast convergence (Rhee and Glynn, 2015; Beatson and Adams, 2019), and in practice, we did not have to tune $p(N)$ for variance reduction. Instead, in our experiments, we compute two terms exactly and then use the unbiased estimator on the remaining terms with a single sample from $p(N) = \mathrm{Geom}(0.5)$. This results in an expected compute cost of 4 terms, which is less than the 5 to 10 terms that Behrmann et al. (2019) used for their biased estimator.

Theorem 1 forms the core of Residual Flows, as we can now perform maximum likelihood training by backpropagating through (5) to obtain unbiased gradients. The price we pay for the unbiased estimator is variable compute and memory, as each sample of the log density uses a random number of terms in the power series.
3.2 Memory-Efficient Backpropagation
Memory can be a scarce resource, and running out of memory due to a large sample from the unbiased estimator can halt training unexpectedly. To this end, we propose two methods to reduce the memory consumption during training.
To see how naïve backpropagation can be problematic, note that the gradient w.r.t. parameters $\theta$, obtained by directly differentiating through the power series (5), can be expressed as
$\frac{\partial}{\partial \theta} \log \det\big(I + J_g(x, \theta)\big) = \mathbb{E}_{n, v}\left[ \sum_{k=1}^{n} \frac{(-1)^{k+1}}{k\, \mathbb{P}(N \geq k)}\, \frac{\partial\big(v^\top [J_g(x, \theta)]^k v\big)}{\partial \theta} \right]$   (6)
Unfortunately, this estimator requires each term to be stored in memory because $\frac{\partial}{\partial \theta}$ needs to be applied to each term. The total memory cost is then $\mathcal{O}(n \cdot m)$, where $n$ is the number of computed terms and $m$ is the number of residual blocks in the entire network. This is extremely memory-hungry during training, and a large random sample of $n$ can occasionally result in running out of memory.
Neumann gradient series.
Instead, we can specifically express the gradients as a power series derived from a Neumann series (see Appendix C). Applying the Russian roulette and trace estimators, we obtain the following theorem.
Theorem 2 (Unbiased log-determinant gradient estimator).
Let $\mathrm{Lip}(g) < 1$ and let $N$ be a random variable with support over the positive integers. Then

$\frac{\partial}{\partial \theta} \log \det\big(I + J_g(x, \theta)\big) = \mathbb{E}_{n, v}\left[ \left( \sum_{k=0}^{n} \frac{(-1)^k}{\mathbb{P}(N \geq k)}\, v^\top [J_g(x, \theta)]^k \right) \frac{\partial J_g(x, \theta)}{\partial \theta}\, v \right],$   (7)

where $n \sim p(N)$ and $v \sim \mathcal{N}(0, I)$.
As the power series in (7) does not need to be differentiated through, using this form reduces the memory requirement by a factor of $n$. This is especially useful when using the unbiased estimator, as the memory will be constant regardless of the number of terms we draw from $p(N)$.
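The identity underlying the Neumann gradient series can be checked numerically. The sketch below (a toy parameterized Jacobian of ours, not a network) verifies that tr((I + J)⁻¹ ∂J/∂θ), expanded as the Neumann series Σ_k (−1)^k tr(J^k ∂J/∂θ), matches a finite-difference derivative of log det(I + J(θ)):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
A = rng.normal(size=(d, d))
A = 0.5 * A / np.linalg.norm(A, 2)     # spectral norm of A is 0.5
theta = 0.8                            # so ||J||_2 = 0.4 < 1 for J = theta * A

def logdet(t):
    return np.log(np.linalg.det(np.eye(d) + t * A))

# Finite-difference reference for d/dtheta log det(I + J(theta)).
eps = 1e-6
fd_grad = (logdet(theta + eps) - logdet(theta - eps)) / (2 * eps)

# Neumann-series gradient: tr((I + J)^{-1} dJ/dtheta), with the inverse
# expanded as (I + J)^{-1} = sum_{k>=0} (-1)^k J^k (converges since ||J||_2 < 1).
J = theta * A
dJ = A                                 # dJ/dtheta
grad = 0.0
Jk = np.eye(d)                         # running power J^k
for k in range(60):
    grad += (-1) ** k * np.trace(Jk @ dJ)
    Jk = Jk @ J
```

The key point for memory is that the summation itself never needs to be differentiated; only the single factor ∂J/∂θ carries gradients.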
Backward-in-forward: early computation of gradients.
We can further reduce memory by partially performing backpropagation during the forward evaluation. By taking advantage of $\log \det(I + J_g(x, \theta))$ being a scalar quantity, the partial derivative from the objective $\mathcal{L}$ is
$\frac{\partial \mathcal{L}}{\partial \theta} = \frac{\partial \mathcal{L}}{\partial \log \det(I + J_g(x, \theta))}\, \frac{\partial \log \det(I + J_g(x, \theta))}{\partial \theta}$   (8)
For every residual block, we compute $\frac{\partial \log \det(I + J_g(x, \theta))}{\partial \theta}$ along with the forward pass, release the memory for the computation graph, then simply multiply by $\frac{\partial \mathcal{L}}{\partial \log \det(I + J_g(x, \theta))}$ later during the main backward pass. This reduces memory by another factor of $m$, to $\mathcal{O}(1)$, with negligible overhead.
Note that while these two tricks remove the memory cost of backpropagating through the log-determinant terms, computing the path-wise derivatives from $f(x)$ still requires the same amount of memory as a single evaluation of the residual network. Table 1 shows that the memory consumption can be enormous for naïve backpropagation, and using large networks would have been intractable.
3.3 Avoiding Gradient Saturation with the LipSwish Activation Function
As the log density depends on the first derivatives through the Jacobian $J_g$, the gradients for training depend on second derivatives. Similar to the phenomenon of saturated activation functions, Lipschitz-constrained activation functions can have a gradient saturation problem. For instance, the ELU activation used by Behrmann et al. (2019) achieves its highest Lipschitz constant when $\mathrm{ELU}'(z) = 1$, but this occurs where the second derivative is exactly zero over a very large region, implying there is a trade-off between a large Lipschitz constant and non-vanishing gradients.
We thus desire two properties from our activation functions $\phi(z)$:

1. The first derivatives must be bounded as $|\phi'(z)| \leq 1$ for all $z$.

2. The second derivatives should not asymptotically vanish when $|\phi'(z)|$ is close to one.

While many activation functions satisfy condition 1, most do not satisfy condition 2. We argue that the ELU and softplus activations are suboptimal due to gradient saturation.
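Condition 1 is easy to probe numerically. Swish, discussed next, satisfies condition 2, but its maximal slope slightly exceeds one; dividing by 1.1 restores the bound. A quick numerical check (our sketch):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def swish(z):
    return z * sigmoid(z)

# Numerical derivative of Swish on a fine grid.
z = np.linspace(-10.0, 10.0, 200001)
d_swish = np.gradient(swish(z), z)

max_slope = float(np.abs(d_swish).max())   # ~1.0998: Swish violates condition 1
max_scaled = max_slope / 1.1               # dividing by 1.1 brings the slope under one
```

The maximal slope of Swish is about 1.0998, so the rescaled variant has a derivative bounded strictly below one while keeping non-vanishing curvature near the maximal-slope region.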
We find that good activation functions satisfying condition 2 are smooth and non-monotonic functions, such as Swish (Ramachandran et al., 2017). However, Swish by default does not satisfy condition 1, as $\max_z |\frac{d}{dz}\mathrm{Swish}(z)| \approx 1.1$. But scaling via

$\mathrm{LipSwish}(z) := \mathrm{Swish}(z)/1.1 = z\, \sigma(\beta z)/1.1,$   (9)

where $\sigma$ is the sigmoid function, results in $|\frac{d}{dz}\mathrm{LipSwish}(z)| \leq 1$ for all values of $\beta$ and $z$. LipSwish is a simple modification to Swish that exhibits a Lipschitz constant of less than unity. In our experiments, we parameterize $\beta$ to be strictly positive by passing it through softplus. One caveat of using LipSwish is that it cannot be computed in-place, resulting in double the memory usage of ELU (Table 1).

3.4 Generalizing i-ResNets via Induced Mixed Norms
Behrmann et al. (2019) used spectral normalization (Miyato et al., 2018), which relies on power iteration to approximate the spectral norm, to enforce the Lipschitz constraint on $g$. Specifically, this bounds the spectral norm of the Jacobian via the submultiplicativity property. If $g$ is a neural network with pre-activations defined recursively using $h_i = W_i z_{i-1} + b_i$ and $z_i = \phi(h_i)$, with $z_0 = x$, then the data-independent upper bound
$\|J_g(x)\|_2 = \|W_L \Phi_{L-1} W_{L-1} \cdots \Phi_1 W_1\|_2 \leq \|W_L\|_2\, \|W_{L-1}\|_2 \cdots \|W_1\|_2$   (10)
holds, where $\Phi_i$ are diagonal matrices containing the first derivatives of the activation functions. The inequality in (10) is a result of using a submultiplicative norm and assuming that the activation functions have Lipschitz constant less than unity. However, any induced matrix norm satisfies the submultiplicativity property, including mixed norms $\|W\|_{p \to q} = \sup_{\|x\|_p = 1} \|Wx\|_q$, where the input and output spaces have different vector norms.
As long as $g$ maps back to the original normed (complete) vector space, the Banach fixed point theorem used in the proof of invertibility of residual blocks (Behrmann et al., 2019) still holds. As such, we can choose arbitrary norm orders $p_0, \ldots, p_L$ such that

$\|W_L\|_{p_{L-1} \to p_L}\, \|W_{L-1}\|_{p_{L-2} \to p_{L-1}} \cdots \|W_1\|_{p_0 \to p_1} < 1, \qquad p_0 = p_L.$   (11)
We use a more general form of power iteration (Johnston, 2016) for estimating induced mixed norms, which reduces to the standard power iteration for $p = q = 2$. Furthermore, the special cases $p = q = 1$ and $p = q = \infty$ are of particular interest, as these matrix norms can be computed exactly (Tropp, 2004). Additionally, we can optimize the norm orders during training by backpropagating through the modified power method. Lastly, we emphasize that convergence of the infinite series (3) is guaranteed for any induced matrix norm, as induced norms still upper bound the spectral radius (Horn and Johnson, 2012).
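A sketch of the standard $p = q = 2$ power iteration alongside the exactly computable $p = q = 1$ and $p = q = \infty$ induced norms (the mixed-norm generalization itself is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 4))

def spectral_norm(W, iters=200):
    """Estimate ||W||_2 (the p = q = 2 induced norm) by power iteration."""
    v = rng.normal(size=W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ (W @ v))

sigma = spectral_norm(W)
exact_2 = np.linalg.norm(W, 2)          # largest singular value, for reference

# The p = q = 1 and p = q = inf induced norms have exact closed forms:
norm_1 = np.abs(W).sum(axis=0).max()    # maximum absolute column sum
norm_inf = np.abs(W).sum(axis=1).max()  # maximum absolute row sum
```

Dividing each weight matrix by (a constant slightly above) its estimated norm is how the Lipschitz constraint is enforced in practice.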
[Figure 1: (a) On Checkerboard 2D, learned norms and the $\infty$-norm improve upon spectral norm, allowing more expressive models with fewer blocks. Standard deviation across 3 random seeds. (b) On CIFAR-10, learning the norm orders gives a small performance gain, and the $\infty$-norm performs much worse than spectral norm. Comparisons are made using identical initialization.]

Figure 1(a) shows that we obtain some performance gain by using either learned norms or the infinity norm on a difficult 2D dataset, where similar performance can be achieved using fewer residual blocks. While the infinity norm works well with fully connected layers, we find that it does not perform as well as the spectral norm for convolutional networks. Instead, Figure 1(b) shows that learned norms obtain only a marginal improvement on CIFAR-10. Ultimately, while the idea of generalizing spectral normalization via learnable norm orders is interesting in its own right, we found the improvements to be very marginal. More details are in Appendix D.
4 Related Work
Estimation of Infinite Series.
Our derivation of the unbiased estimator follows the general approach of randomized truncation (Kahn, 1955). This paradigm of estimation has been repeatedly rediscovered and applied in many fields, including solving stochastic differential equations (McLeish, 2011; Rhee and Glynn, 2012, 2015), ray tracing for rendering paths of light (Arvo and Kirk, 1990), and estimating the limiting behavior of optimization problems (Tallec and Ollivier, 2017; Beatson and Adams, 2019), among many other applications. Some recent works use Chebyshev polynomials to estimate the spectral functions of symmetric matrices (Han et al., 2018; Adams et al., 2018; Ramesh and LeCun, 2018; Boutsidis et al., 2008). These works estimate quantities similar to those presented in this work, but a key difference is that the Jacobian in our power series is not symmetric. We also note that the works which rediscovered the random truncation approach (McLeish, 2011; Rhee and Glynn, 2015; Han et al., 2018) made assumptions on $p(N)$ in order for it to be applicable to general infinite series. Fortunately, since the power series in Theorems 1 and 2 converge fast enough, we were able to use a different set of assumptions requiring only that $p(N)$ has sufficient support, adapted from BouchardCôté (2018) (details in Appendix B).
Memory-efficient Backpropagation.
The issue of computing gradients in a memoryefficient manner was explored by Gomez et al. (2017) and Chang et al. (2018) for residual networks with an architecture devised by Dinh et al. (2014), and explored by Chen et al. (2018) for a continuous analogue of residual networks. These works focus on the pathwise gradients from the output of the network, whereas we focus on the gradients from the logdeterminant term in the change of variables equation specifically for generative modeling.
Invertible Deep Networks.
Flow-based generative models are a density estimation approach that has invertibility as its core design principle (Rezende and Mohamed, 2015; Deco and Brauer, 1995). Most recent work on flows focuses on designing maximally expressive architectures while maintaining invertibility and tractable log-determinant computation (Dinh et al., 2014, 2017; Kingma and Dhariwal, 2018). An alternative route has been taken by Neural ODEs, which show that a Jacobian trace can be computed instead of a log determinant if the transformation is specified by an ordinary differential equation
(Chen et al., 2018; Grathwohl et al., 2019). Invertible architectures are also of interest for discriminative problems, as their information-preservation properties make them suitable candidates for analyzing and regularizing learned representations (Jacobsen et al., 2019).

5 Experiments
5.1 Density & Generative Modeling
We use an architecture similar to Behrmann et al. (2019), except without the immediate invertible downsampling (Dinh et al., 2017) at the image pixel level. Removing this substantially increases the amount of memory required (shown in Table 1), as there are more spatial dimensions at every layer, but increases overall performance. We also increase the bound on the Lipschitz constant of each weight matrix to 0.98, whereas Behrmann et al. (2019) used 0.90 to reduce the error of the biased estimator. More detailed explanations are in Appendix E.
Unlike prior works that use multiple GPUs, large batch sizes, and a few hundred epochs, Residual Flow models are trained with the standard batch size of 64 and converge in roughly 300-350 epochs for MNIST and CIFAR-10. Most network settings can fit on a single GPU (see Table 1), though we use 4 GPUs in our experiments to speed up training. In comparison, Glow was trained for 1800 epochs (Kingma and Dhariwal, 2018) and Flow++ reportedly had not fully converged after 400 epochs (Ho et al., 2019). We exceed Glow’s performance (3.35) on CIFAR-10 at around 50 epochs (Figure 6).

Table 2 reports the bits per dimension ($-\log_2 p(x)/d$, where $d = h \times w \times c$ is the input dimensionality) on the standard benchmark datasets MNIST, CIFAR-10, and downsampled ImageNet. We achieve performance competitive with state-of-the-art flow-based models on these datasets. For evaluation, we computed 20 terms of the power series (3) and used the unbiased estimator (5) to estimate the remaining terms. This reduces the standard deviation of the unbiased estimator to a negligible level.

We are also competitive with state-of-the-art flow-based models in terms of sample quality. Figure 3 shows random samples from the model trained on CelebA. Furthermore, samples from a Residual Flow trained on CIFAR-10 are slightly more globally coherent (Figure 4) than those of PixelCNN and variationally dequantized Flow++, even though our likelihood is worse. This is only an informal comparison, and it is well known that visual fidelity and log-likelihood are not necessarily indicative of each other (Theis et al., 2015), but we believe residual networks may have a better inductive bias than coupling blocks or autoregressive architectures as generative models. More samples are in Appendix A.
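For reference, converting a log density in nats to bits per dimension can be sketched as follows (the numeric value is hypothetical, and we ignore the normalization and dequantization conventions involved in a proper evaluation):

```python
import math

def bits_per_dim(log_prob_nats, shape):
    """Convert log p(x) in nats to bits per dimension for an input of the given shape."""
    d = 1
    for s in shape:
        d *= s                       # total dimensionality, d = c * h * w
    return -log_prob_nats / (d * math.log(2))

# Hypothetical value: a CIFAR-10 image (3x32x32) with log p(x) = -7000 nats.
bpd = bits_per_dim(-7000.0, (3, 32, 32))
```

Lower values indicate a better density model, since fewer bits are needed to encode each dimension under the model.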
5.2 Ablation Experiments
We report ablation experiments for the unbiased estimator and the LipSwish activation function in Table 6. Even in settings where the Lipschitz constant and bias are relatively low, we observe a significant improvement from using the unbiased estimator. Training the larger i-ResNet model on CIFAR-10 results in the biased estimator ignoring the actual likelihood objective altogether. In this setting, the biased estimate was lower than 0.8 bits/dim by 50 epochs, but the actual bits/dim wildly oscillates above 3.66 bits/dim and seems to never converge. Using LipSwish not only converges much faster but also results in better performance than softplus or ELU, especially in the high-Lipschitz settings (Figure 6 and Table 6).
5.3 Hybrid Modeling
Next, we experiment on joint training of continuous and discrete data. Of particular interest is the ability to learn both a generative model and a classifier, referred to as a hybrid model
(Nalisnick et al., 2019). Let $x$ denote the data and $y$ a categorical random variable. The maximum likelihood objective can be separated as $\log p(x, y) = \log p(x) + \log p(y \mid x)$, where $\log p(x)$ is modeled using a flow-based generative model and $\log p(y \mid x)$ is a classifier network that shares learned features with the generative model. However, it is often the case that accuracy is the metric of interest and log-likelihood is only used as a surrogate training objective. In this case, Nalisnick et al. (2019) suggest a weighted maximum likelihood objective,

$\mathbb{E}_{(x, y) \sim p_{\mathrm{data}}}\big[ \lambda \log p(x) + \log p(y \mid x) \big],$   (12)
where $\lambda$ is a scaling constant. As $y$ is much lower-dimensional than $x$, setting $\lambda < 1$ emphasizes classification, and setting $\lambda = 0$ results in a classification-only model, which can be compared against.
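The weighted objective (12) is straightforward to compute from per-example log densities and class log-probabilities; a minimal sketch with hypothetical values of ours:

```python
import numpy as np

def hybrid_objective(log_px, log_py_given_x, lam):
    """Weighted maximum likelihood objective of equation (12), averaged over a batch.
    lam = 1 recovers joint maximum likelihood; lam = 0 is pure classification."""
    return float(np.mean(lam * log_px + log_py_given_x))

# Hypothetical per-example log densities and class log-probabilities for a batch of 4.
log_px = np.array([-2100.0, -2050.0, -2200.0, -1980.0])
log_py = np.array([-0.1, -0.4, -0.2, -0.05])

full = hybrid_objective(log_px, log_py, lam=1.0)       # generative + discriminative
clf_only = hybrid_objective(log_px, log_py, lam=0.0)   # classification-only baseline
```

The contrast between the two settings makes clear why a small λ is needed when accuracy is the target: the log density term is orders of magnitude larger than the classification term.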
As Nalisnick et al. (2019) perform approximate Bayesian inference and use different architectures than we do, we perform our own experiments to compare residual blocks to coupling blocks (Dinh et al., 2017) as well as 1×1 convolutions (Kingma and Dhariwal, 2018). We use the same architecture as in the density estimation experiments and append a classification branch that takes features at the final output of multiple scales (see details in Appendix E). For comparisons to coupling blocks, we use the same architecture for the coupling networks, except we use ReLU activations and no longer constrain the Lipschitz constant.
Tables 3 & 4 show our experimental results. We outperform Nalisnick et al. (2019) on both pure classification and hybrid modeling. Furthermore, on MNIST we are able to obtain a decent classifier even when $\lambda = 1$. In general, we find that residual blocks perform much better than coupling blocks at learning representations for both generative and discriminative tasks. Coupling blocks have very high bits per dimension when $\lambda = \frac{1}{D}$ while performing worse at classification when $\lambda = 1$, suggesting that they have restricted flexibility and can only perform one task well at a time.
6 Conclusion
We have shown that invertible residual networks can be turned into powerful generative models. The proposed unbiased flowbased generative model, coined Residual Flow, achieves competitive or better performance compared to alternative flowbased models in density estimation, sample quality, and hybrid modeling. More generally, we gave a recipe for introducing stochasticity in order to construct tractable flowbased models with a different set of constraints on layer architectures than competing approaches, which rely on exact logdeterminant computations. This opens up a new design space of expressive but Lipschitzconstrained architectures that has yet to be explored.
Acknowledgments
Jens Behrmann gratefully acknowledges the financial support from the German Science Foundation for RTG 2224 “π3: Parameter Identification - Analysis, Algorithms, Applications”.
References
 Adams et al. [2018] Ryan P Adams, Jeffrey Pennington, Matthew J Johnson, Jamie Smith, Yaniv Ovadia, Brian Patton, and James Saunderson. Estimating the spectral density of large implicit matrices. arXiv preprint arXiv:1802.03451, 2018.
 Arvo and Kirk [1990] James Arvo and David Kirk. Particle transport and image synthesis. ACM SIGGRAPH Computer Graphics, 24(4):63–66, 1990.
 Beatson and Adams [2019] Alex Beatson and Ryan P. Adams. Efficient optimization of loops and limits with randomized telescoping sums. In International Conference on Machine Learning, 2019.
 Behrmann et al. [2019] Jens Behrmann, Will Grathwohl, Ricky T. Q. Chen, David Duvenaud, and JörnHenrik Jacobsen. Invertible residual networks. International Conference on Machine Learning, 2019.
 BouchardCôté [2018] Alexandre BouchardCôté. Topics in probability assignment 1. https://www.stat.ubc.ca/~bouchard/courses/stat547fa201819//files/assignment1solution.pdf, 2018. Accessed: 20190522.

Boutsidis et al. [2008] Christos Boutsidis, Michael W Mahoney, and Petros Drineas. Unsupervised feature selection for principal components analysis. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 61–69. ACM, 2008.
 Chang et al. [2018] Bo Chang, Lili Meng, Eldad Haber, Lars Ruthotto, David Begert, and Elliot Holtham. Reversible architectures for arbitrarily deep residual neural networks. In AAAI Conference on Artificial Intelligence, 2018.
 Chen et al. [2018] Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. Advances in Neural Information Processing Systems, 2018.
 Deco and Brauer [1995] Gustavo Deco and Wilfried Brauer. Nonlinear higherorder statistical decorrelation by volumeconserving neural architectures. Neural Networks, 8(4):525–535, 1995.
 Dinh et al. [2014] Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: Nonlinear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
 Dinh et al. [2017] Laurent Dinh, Jascha SohlDickstein, and Samy Bengio. Density estimation using real nvp. International Conference on Learning Representations, 2017.
 Gomez et al. [2017] Aidan N Gomez, Mengye Ren, Raquel Urtasun, and Roger B Grosse. The reversible residual network: Backpropagation without storing activations. In Advances in neural information processing systems, pages 2214–2224, 2017.
 Goodfellow et al. [2014] Ian Goodfellow, Jean PougetAbadie, Mehdi Mirza, Bing Xu, David WardeFarley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
 Gouk et al. [2018] Henry Gouk, Eibe Frank, Bernhard Pfahringer, and Michael Cree. Regularisation of neural networks by enforcing lipschitz continuity. arXiv preprint arXiv:1804.04368, 2018.
 Grathwohl et al. [2019] Will Grathwohl, Ricky T. Q. Chen, Jesse Bettencourt, Ilya Sutskever, and David Duvenaud. Ffjord: Freeform continuous dynamics for scalable reversible generative models. International Conference on Learning Representations, 2019.
 Han et al. [2018] Insu Han, Haim Avron, and Jinwoo Shin. Stochastic chebyshev gradient descent for spectral optimization. In Conference on Neural Information Processing Systems. 2018.

He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
 Ho et al. [2019] Jonathan Ho, Xi Chen, Aravind Srinivas, Yan Duan, and Pieter Abbeel. Flow++: Improving flow-based generative models with variational dequantization and architecture design. International Conference on Machine Learning, 2019.
 Horn and Johnson [2012] Roger A Horn and Charles R Johnson. Matrix analysis. Cambridge University Press, 2012.
 Hutchinson [1990] Michael F Hutchinson. A stochastic estimator of the trace of the influence matrix for Laplacian smoothing splines. Communications in StatisticsSimulation and Computation, 19(2):433–450, 1990.
 Jacobsen et al. [2019] JörnHenrik Jacobsen, Jens Behrmann, Richard Zemel, and Matthias Bethge. Excessive invariance causes adversarial vulnerability. International Conference on Learning Representations, 2019.
 Johnston [2016] Nathaniel Johnston. QETLAB: A MATLAB toolbox for quantum entanglement, version 0.9. http://qetlab.com, January 2016.
 Kahn [1955] Herman Kahn. Use of different monte carlo sampling techniques. 1955.
 Kingma and Ba [2015] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
 Kingma and Welling [2014] Diederik P Kingma and Max Welling. Autoencoding variational bayes. International Conference on Learning Representations, 2014.
 Kingma and Dhariwal [2018] Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pages 10215–10224, 2018.
 Loshchilov and Hutter [2019] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019.
 McLeish [2011] Don McLeish. A general method for debiasing a monte carlo estimator. Monte Carlo Methods and Applications, 17(4):301–315, 2011.
 Miyato et al. [2018] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018.
 Nalisnick et al. [2019] Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Hybrid models with deep and invertible features. International Conference on Machine Learning, 2019.

Oord et al. [2016] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. International Conference on Machine Learning, 2016.
 Petersen and Pedersen [2012] K. B. Petersen and M. S. Pedersen. The matrix cookbook, 2012.
 Polyak and Juditsky [1992] Boris T. Polyak and Anatoli Juditsky. Acceleration of stochastic approximation by averaging. 1992.
 Ramachandran et al. [2017] Prajit Ramachandran, Barret Zoph, and Quoc V Le. Searching for activation functions. arXiv preprint arXiv:1710.05941, 2017.
 Ramesh and LeCun [2018] Aditya Ramesh and Yann LeCun. Backpropagation for implicit spectral densities. Conference on Neural Information Processing Systems, abs/1806.00499, 2018.
 Rezende and Mohamed [2015] Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedings of the 32nd International Conference on Machine Learning, pages 1530–1538, 2015.
 Rhee and Glynn [2012] Changhan Rhee and Peter W Glynn. A new approach to unbiased estimation for sde’s. In Proceedings of the Winter Simulation Conference, page 17. Winter Simulation Conference, 2012.
 Rhee and Glynn [2015] Changhan Rhee and Peter W Glynn. Unbiased estimation with square root convergence for sde models. Operations Research, 63(5):1026–1043, 2015.

Skilling [1989] John Skilling. The eigenvalues of mega-dimensional matrices. In Maximum Entropy and Bayesian Methods, pages 455–466. Springer, 1989.
 Tallec and Ollivier [2017] Corentin Tallec and Yann Ollivier. Unbiasing truncated backpropagation through time. arXiv preprint arXiv:1705.08209, 2017.
 Theis et al. [2015] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.
 Tropp [2004] Joel Aaron Tropp. Topics in sparse approximation. In PhD thesis. University of Texas, 2004.
 Zhang et al. [2019] Guodong Zhang, Chaoqi Wang, Bowen Xu, and Roger Grosse. Three mechanisms of weight decay regularization. In International Conference on Learning Representations, 2019.
Appendix A Random Samples
Appendix B Proofs
We start by formulating a lemma, which gives the condition under which the randomized truncated series is an unbiased estimator in a fairly general setting. Afterwards, we study our specific estimator and prove that the assumption of the lemma is satisfied.
Note that similar conditions have been stated in previous works, e.g. in McLeish (2011) and Rhee and Glynn (2012). However, we use the condition from BouchardCôté (2018), which only requires $p(N)$ to have sufficient support.
To make the derivations self-contained, we reformulate the conditions from BouchardCôté (2018) in the following way:
Lemma 3 (Unbiased randomized truncated series).
Let $Y$ be a real random variable with $Y = \lim_{k \to \infty} Y_k$ almost surely for some sequence of random variables $(Y_k)_{k \geq 0}$. Further, let $\Delta_0 = Y_0$ and $\Delta_k = Y_k - Y_{k-1}$ for $k \geq 1$. Assume

$\sum_{k=0}^{\infty} \mathbb{E}\big[|\Delta_k|\big] < \infty,$

and let $N$ be a random variable with support over the positive integers and $\mathbb{P}(N \geq k) > 0$ for all $k$. Then for

$\hat{Y} = \sum_{k=0}^{n} \frac{\Delta_k}{\mathbb{P}(N \geq k)}, \qquad n \sim p(N),$

it holds that $\mathbb{E}[\hat{Y}] = \mathbb{E}[Y]$.
Proof.
First, denote the partial sums

$S_n = \sum_{k=0}^{n} |\Delta_k|,$

where $S_n \geq \big|\sum_{k=0}^{n} \Delta_k\big|$ by the triangle inequality. Since $S_n$ is non-decreasing, the monotone convergence theorem allows swapping the expectation and limit as $n \to \infty$. Furthermore, it is

$\mathbb{E}\Big[\lim_{n \to \infty} S_n\Big] = \lim_{n \to \infty} \mathbb{E}[S_n] = \sum_{k=0}^{\infty} \mathbb{E}\big[|\Delta_k|\big] < \infty,$

where the assumption is used in the last step. Using the above, the dominated convergence theorem can be used to swap the limit and expectation for $\sum_{k=0}^{n} \Delta_k$. Using similar derivations as above, it is

$\mathbb{E}_{n \sim p(N)}\big[\hat{Y}\big] = \mathbb{E}_{n}\left[\sum_{k=0}^{\infty} \mathbb{1}_{[n \geq k]}\, \frac{\Delta_k}{\mathbb{P}(N \geq k)}\right] = \sum_{k=0}^{\infty} \frac{\mathbb{P}(N \geq k)\, \mathbb{E}[\Delta_k]}{\mathbb{P}(N \geq k)} = \mathbb{E}\Big[\lim_{n \to \infty} \sum_{k=0}^{n} \Delta_k\Big] = \mathbb{E}[Y],$

where we used the definitions of $\Delta_k$ and $\hat{Y}$. ∎
Proof (Theorem 1).
To simplify notation, we denote $J := J_g(x)$. Furthermore, let

$Y_n = \sum_{k=1}^{n} \frac{(-1)^{k+1}}{k}\, v^\top J^k v, \qquad v \sim \mathcal{N}(0, I),$

denote the real random variable, and let $\Delta_0 = Y_0 = 0$ and $\Delta_k = Y_k - Y_{k-1}$ for $k \geq 1$, as in Lemma 3. To prove the claim of the theorem, we can use Lemma 3, and only need to prove that its assumption holds in this specific case.

In order to exchange summation and expectation via Fubini's theorem, we need to prove that $\sum_{k=1}^{\infty} \mathbb{E}\big[|\Delta_k|\big] < \infty$. Using the assumption $\mathrm{Lip}(g) < 1$, it is

$\mathbb{E}_v\big[|\Delta_k|\big] = \frac{1}{k}\, \mathbb{E}_v\big[|v^\top J^k v|\big] \leq \frac{1}{k}\, \mathbb{E}_v\big[\|v\|_2^2\big]\, \|J\|_2^k = \frac{d\, \mathrm{Lip}(g)^k}{k}$

for an arbitrary $x$. Hence,

$\sum_{k=1}^{\infty} \mathbb{E}\big[|\Delta_k|\big] \leq d \sum_{k=1}^{\infty} \frac{\mathrm{Lip}(g)^k}{k} = -d \log\big(1 - \mathrm{Lip}(g)\big) < \infty.$   (13)
∎
Proof. (Theorem 2)
The result can be proven in a fashion analogous to the proof of Theorem 1, which is why we only present a shortened version.
By obtaining the bound
$$\mathbb{E}|\Delta_k| = \frac{\mathbb{E}\,|v^\top J^k v|}{k} \leq \frac{\mathbb{E}\big[\|v\|_2^2\big]\, L^k}{k} = \frac{d\, L^k}{k},$$
Fubini's theorem can be applied to swap the expectation and summation. Furthermore, by using the trace estimation and similar bounds as in the proof of Theorem 1, the assumption from Lemma 3 can be proven. ∎
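Both theorems rest on the Hutchinson trace estimator, which replaces an exact trace with a Monte Carlo average of quadratic forms $v^\top A v$; a minimal numpy sanity check (the matrix, its dimension, and the sample count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
A = rng.standard_normal((d, d))

# Hutchinson: tr(A) = E[v^T A v] for any v with E[v v^T] = I;
# here v has i.i.d. Rademacher (+/-1) entries.
n_samples = 100_000
v = rng.choice([-1.0, 1.0], size=(n_samples, d))
est = np.einsum('ni,ij,nj->n', v, A, v).mean()
print(est, np.trace(A))  # Monte Carlo estimate vs. exact trace
```

The estimator is unbiased for any sample count; more samples only reduce the variance.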
Appendix C Memory-Efficient Gradient Estimation of the Log-Determinant
Derivation of the gradient estimator via differentiating the power series:
$$\frac{\partial}{\partial \theta_i} \log \det\big(I + J_g(x, \theta)\big) = \frac{\partial}{\partial \theta_i} \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} \operatorname{tr}\big(J_g^k\big) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} \operatorname{tr}\Big(\frac{\partial J_g^k}{\partial \theta_i}\Big)$$
Derivation of the memory-efficient gradient estimator:
$$\frac{\partial}{\partial \theta_i} \log \det\big(I + J_g(x, \theta)\big) = \operatorname{tr}\bigg(\Big[\frac{\partial \log \det(I + J_g)}{\partial (I + J_g)}\Big]^\top \frac{\partial (I + J_g)}{\partial \theta_i}\bigg) \quad (14)$$
$$= \operatorname{tr}\bigg((I + J_g)^{-1}\, \frac{\partial J_g}{\partial \theta_i}\bigg) \quad (15)$$
$$= \operatorname{tr}\bigg(\Big[\sum_{k=0}^{\infty} (-1)^k J_g^k\Big] \frac{\partial J_g}{\partial \theta_i}\bigg) \quad (16)$$
Note that (14) follows from the chain rule of differentiation; for the derivative of the determinant in (15), see Petersen and Pedersen (2012), eq. (46). Furthermore, (16) follows from the properties of a Neumann series, which converges due to $\operatorname{Lip}(g) < 1$. Hence, if we are able to compute the trace exactly, both approaches will return the same values for a given truncation. However, when the trace is estimated via the Hutchinson trace estimator, the two estimators are not equal in general.
Another difference between the two approaches is the memory consumption of the corresponding computational graph. In the memory-efficient estimator, the summation is not tracked for the gradient, which allows computing the gradient with constant memory (constant with respect to the number of truncation terms).
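To make the equivalence of (15) and (16) concrete, the Neumann-series form of the gradient can be checked numerically on a toy linear map $J = \theta A$ with $\|J\|_2 < 1$; the dimensions, scaling, and truncation below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
A = 0.5 * A / np.linalg.norm(A, 2)   # rescale so ||A||_2 = 0.5
theta = 0.9
J = theta * A                        # toy Jacobian with ||J||_2 < 1

# Exact gradient: d/dtheta log det(I + theta*A) = tr((I + theta*A)^{-1} A)
exact = np.trace(np.linalg.solve(np.eye(d) + J, A))

# Neumann series for (I + J)^{-1} = sum_{k>=0} (-J)^k, truncated
approx, term = 0.0, np.eye(d)
for _ in range(60):
    approx += np.trace(term @ A)
    term = term @ (-J)
print(exact, approx)  # the truncated series matches the exact inverse
```

Because $\|J\|_2 < 1$, the truncation error of the Neumann series decays geometrically, so a modest number of terms already agrees with the exact inverse to high precision.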
Appendix D Generalized Spectral Norm
Using different induced norms on Checkerboard 2D.
We experimented with the 2D checkerboard dataset, which is rather difficult two-dimensional data to fit a flow-based model to, due to the discontinuous nature of the true distribution. We used brute-force computation of the log-determinant for the change of variables (which, in the 2D case, is faster than the unbiased estimator). In the 2D case, we found that the generalized induced norm always outperforms or at least matches the spectral norm (i.e., the induced 2-norm). Figure 12 shows the learned densities with 200 residual blocks. The color represents the magnitude of the learned density, with brighter values indicating larger values. The generalized-norm model produces density estimates that are more evenly spread out across the space, whereas the spectral norm model focuses its density on the regions in between.
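For reference, the induced operator norms compared here have simple closed forms for orders 1, 2, and ∞; a small numpy example (the matrix is an arbitrary illustration):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

# Induced p-norms ||A||_p = max_{x != 0} ||Ax||_p / ||x||_p:
norm_1   = np.abs(A).sum(axis=0).max()  # p = 1:   max absolute column sum
norm_2   = np.linalg.norm(A, 2)         # p = 2:   largest singular value
norm_inf = np.abs(A).sum(axis=1).max()  # p = inf: max absolute row sum
print(norm_1, norm_2, norm_inf)
```

Only the 2-norm requires an iterative computation (or an SVD); the 1- and ∞-norms are exact column/row reductions, which is part of what makes non-Euclidean induced norms attractive for normalization.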
Learning norm orders on CIFAR10.
We parameterized each norm order through a learned weight passed through a squashing function, which bounds the norm orders to a fixed interval. We tried two different setups: one where all norm orders are free to change (conditioned on them satisfying the constraints in (11)), and another where all states within each residual block share the same order. Figure 13 shows the improvement in bits from using learned norms. The gain in performance is marginal, and the final models only marginally outperformed spectral norm in bits/dim. Interestingly, we found that the learned norm orders stayed around 2 (shown in Figure 14), especially for the input and output spaces of each residual block, i.e., between blocks. This may suggest that spectral norm, or a norm with order close to 2, is already optimal in this setting.
Appendix E Experiment Setup
We use the standard setup of passing the data through an "unsquashing" layer (we used the logit transform (Dinh et al., 2017)), followed by alternating between multiple residual blocks and squeeze layers (Dinh et al., 2017). We use activation normalization (Kingma and Dhariwal, 2018) before and after every residual block. Each residual connection consists of

LipSwish → 3×3 Conv → LipSwish → 1×1 Conv → LipSwish → 3×3 Conv

with a fixed number of hidden dimensions. Below are the architectures for each dataset.
MNIST.
With α = 1e−5.
Image → LogitTransform(α) → 16×ResBlock → [ Squeeze → 16×ResBlock ] × 2
CIFAR-10.
With .
Image → LogitTransform(α) → 16×ResBlock → [ Squeeze → 16×ResBlock ] × 2
SVHN.
With .
Image → LogitTransform(α) → 16×ResBlock → [ Squeeze → 16×ResBlock ] × 2
ImageNet 32×32.
With .
Image → LogitTransform(α) → 32×ResBlock → [ Squeeze → 32×ResBlock ] × 2
ImageNet 64×64.
With .
Image → Squeeze → LogitTransform(α) → 32×ResBlock → [ Squeeze → 32×ResBlock ] × 2
CelebA 5-bit 64×64.
With .
Image → Squeeze → LogitTransform(α) → 16×ResBlock → [ Squeeze → 16×ResBlock ] × 3
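The LipSwish activation appearing in every block above is, to our understanding, the Swish activation x·σ(x) rescaled by 1/1.1 so that it is at most 1-Lipschitz; this definition is reproduced here as an assumption, since the appendix itself does not restate it:

```python
import numpy as np

def lipswish(x):
    # Swish x * sigmoid(x), rescaled by 1/1.1 so the maximum slope
    # (about 1.0998 for Swish) drops just below 1.
    return x / (1.1 * (1.0 + np.exp(-x)))

# Numerically verify the Lipschitz bound with finite differences.
x = np.linspace(-10.0, 10.0, 100_001)
slopes = np.diff(lipswish(x)) / np.diff(x)
print(slopes.max())  # slightly below 1
```

A 1-Lipschitz activation is needed so that the Lipschitz constant of the whole residual branch is controlled by the spectral norms of its convolutions alone.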
For density modeling on MNIST and CIFAR10, we added 4 fully connected residual blocks at the end of the network, with intermediate hidden dimensions of 128. These residual blocks were not used in the hybrid modeling experiments or on other datasets.
For hybrid modeling on CIFAR10, we replaced the logit transform with normalization by the standard preprocessing of subtracting the mean and dividing by the standard deviation across the training data. The MNIST and SVHN architectures for hybrid modeling were the same as those for density modeling.
For augmenting our flowbased model with a classifier in the hybrid modeling experiments, we added an additional branch after every squeeze layer and at the end of the network. Each branch consisted of
3×3 Conv → ActNorm → ReLU → AdaptiveAveragePooling
where the adaptive average pooling averages across all spatial dimensions, resulting in a vector with dimension equal to the number of channels. The outputs at every scale were concatenated together and fed into a linear softmax classifier.
Adaptive number of power iterations.
We used spectral normalization for convolutions (Gouk et al., 2018). To account for variable weight updates during training, we implemented an adaptive version of spectral normalization in which we performed as many power iterations as needed until the relative change in the estimated spectral norm was sufficiently small. As this also reduces the number of iterations when weight updates are small, it did not result in higher time complexity.
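A minimal sketch of this adaptive stopping rule for power iteration; the tolerance, matrix shapes, and the exact form of the stopping criterion are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def spectral_norm_adaptive(W, tol=1e-6, max_iters=5000, seed=0):
    # Power iteration for the largest singular value of W, stopping once
    # the relative change of the estimate falls below `tol`.
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(W.shape[1])
    u /= np.linalg.norm(u)
    sigma_prev = 0.0
    for _ in range(max_iters):
        v = W @ u
        v /= np.linalg.norm(v)
        u = W.T @ v
        sigma = np.linalg.norm(u)   # current spectral-norm estimate
        u /= sigma
        if abs(sigma - sigma_prev) / sigma < tol:
            break
        sigma_prev = sigma
    return sigma

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 32))
print(spectral_norm_adaptive(W), np.linalg.norm(W, 2))
```

In training, the iterate `u` would be cached between steps, so when the weights barely change, only a few iterations are needed before the criterion triggers.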
Optimization.
For stochastic gradient descent, we used Adam (Kingma and Ba, 2015), with weight decay applied outside the adaptive learning rate computation (Loshchilov and Hutter, 2019; Zhang et al., 2019). We used Polyak averaging (Polyak and Juditsky, 1992) of the parameters for evaluation.
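A minimal sketch of the two optimization details above: decoupled (AdamW-style) weight decay applied outside the adaptive step, and a Polyak/EMA copy of the parameters for evaluation. The learning rate, decay values, and the toy objective are placeholders, not the paper's settings:

```python
import numpy as np

lr, wd, ema_decay = 1e-3, 1e-4, 0.999   # placeholder hyperparameters
theta = np.ones(4)                       # parameters
theta_ema = theta.copy()                 # Polyak/EMA copy used at evaluation

for step in range(1000):
    grad = 2.0 * theta          # gradient of a toy objective ||theta||^2
    adam_step = grad            # stand-in for Adam's adaptive step
    theta = theta - lr * adam_step
    theta = theta - lr * wd * theta      # decoupled weight decay
    theta_ema = ema_decay * theta_ema + (1.0 - ema_decay) * theta

print(theta[0], theta_ema[0])
```

The key point of decoupling is that the decay term multiplies the parameters directly rather than being folded into the gradient that Adam rescales; the EMA copy smooths the training trajectory for evaluation.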
Preprocessing.
For density estimation experiments, we used random horizontal flipping for CIFAR-10 and CelebA. For hybrid modeling and classification experiments, we used random cropping after reflection padding with 4 pixels for SVHN and CIFAR-10; for CIFAR-10, we additionally used random horizontal flipping.