Residual Flows for Invertible Generative Modeling

06/06/2019 · Ricky T. Q. Chen et al. · University of Toronto · Universität Bremen

Flow-based generative models parameterize probability distributions through an invertible transformation and can be trained by maximum likelihood. Invertible residual networks provide a flexible family of transformations where only Lipschitz conditions rather than strict architectural constraints are needed for enforcing invertibility. However, prior work trained invertible residual networks for density estimation by relying on biased log-density estimates whose bias increased with the network's expressiveness. We give a tractable unbiased estimate of the log density, and reduce the memory required during training by a factor of ten. Furthermore, we improve invertible residual blocks by proposing the use of activation functions that avoid gradient saturation and generalizing the Lipschitz condition to induced mixed norms. The resulting approach, called Residual Flows, achieves state-of-the-art performance on density estimation amongst flow-based models, and outperforms networks that use coupling blocks at joint generative and discriminative modeling.


1 Introduction


Figure 1: i-ResNets suffer from substantial bias when using expressive networks, whereas Residual Flows perform maximum likelihood in a principled manner with unbiased stochastic gradients.

Maximum likelihood is a core machine learning paradigm that poses learning as a distribution alignment problem. However, it is often unclear what family of distributions should be used to fit high-dimensional continuous data. In this regard, the change of variables theorem offers an appealing way to construct flexible distributions that allow tractable exact sampling and efficient evaluation of their density. This class of models is generally referred to as invertible or flow-based generative models (Deco and Brauer, 1995; Rezende and Mohamed, 2015).

With invertibility as its core design principle, flow-based models have been shown to be capable of generating realistic images (Kingma and Dhariwal, 2018) and can achieve density estimation performance on par with competing state-of-the-art approaches (Ho et al., 2019). They have also been applied to study adversarial robustness (Jacobsen et al., 2019) and to train hybrid models with both generative and classification capabilities (Nalisnick et al., 2019) using a weighted maximum likelihood objective.

However, existing high-dimensional flow-based models rely on specialized architectures such as coupling blocks (Dinh et al., 2014, 2017) or solving a differential equation (Grathwohl et al., 2019). Such approaches have a strong inductive bias that can hinder their application in other tasks, such as learning representations that are suitable for both generative and discriminative tasks.

Recent work by Behrmann et al. (2019) showed that residual networks (He et al., 2016) can be made invertible by simply enforcing a Lipschitz constraint, allowing a highly successful discriminative deep network architecture to be used for unsupervised flow-based modeling. Unfortunately, density evaluation requires computing an infinite series. The fixed truncation estimator used by Behrmann et al. (2019) incurs substantial bias that is tightly coupled with the expressiveness of the network; since bias is introduced into the objective and its gradients, the resulting procedure cannot be said to perform maximum likelihood.

In this work, we introduce Residual Flows, a flow-based generative model that produces an unbiased estimate of the log density and has memory-efficient backpropagation through the log density computation. This allows us to use expressive architectures and train via maximum likelihood. Furthermore, we propose and experiment with activation functions that avoid gradient saturation and with induced mixed norms for Lipschitz-constrained neural networks.

2 Background

Maximum likelihood estimation.

To perform maximum likelihood with stochastic gradient descent, it is sufficient to have an unbiased estimator of the gradient, as

\nabla_\theta \, \mathbb{E}_{x \sim p_{\text{data}}(x)} [\log p_\theta(x)] = \mathbb{E}_{x \sim p_{\text{data}}(x)} [\nabla_\theta \log p_\theta(x)],   (1)

where p_data is the unknown data distribution, which can be sampled from, and p_θ is the model distribution. An unbiased estimator of the gradient immediately follows from an unbiased estimator of the log density function, log p_θ(x).

Change of variables theorem.

With an invertible transformation f, the change of variables

\log p(x) = \log p(f(x)) + \log \left| \det \frac{df(x)}{dx} \right|   (2)

captures the change in density of the transformed samples. A simple base distribution, such as a standard normal, is often used for p(f(x)). Tractable evaluation of (2) allows flow-based models to be trained using the maximum likelihood objective (1). In contrast, variational autoencoders (Kingma and Welling, 2014) can only optimize a stochastic lower bound, and generative adversarial networks (Goodfellow et al., 2014) require an extra discriminator network for training.
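As an illustration, the following is a minimal sketch (in PyTorch, not the authors' code) of training by maximum likelihood through the change of variables, assuming a hypothetical `flow` module that returns both f(x) and the log-determinant term:

```python
import torch
import torch.distributions as D

# Standard normal base distribution for 2-dimensional data (an example choice).
base = D.Independent(D.Normal(torch.zeros(2), torch.ones(2)), 1)

def nll(flow, x):
    z, logdet = flow(x)                  # z = f(x), logdet = log|det df(x)/dx|
    log_px = base.log_prob(z) + logdet   # change of variables, Eq. (2)
    return -log_px.mean()                # Monte Carlo estimate of the objective in Eq. (1)

# Training step: optimizer.zero_grad(); nll(flow, batch).backward(); optimizer.step()
```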

Invertible residual networks (i-ResNets).

Residual networks are composed of simple transformations y = x + g(x). Behrmann et al. (2019) noted that this transformation is invertible by the Banach fixed point theorem if g is contractive, i.e. has Lipschitz constant strictly less than unity, which was enforced using spectral normalization (Miyato et al., 2018; Gouk et al., 2018).

Applying i-ResNets to the change of variables (2), the identity

\log p(x) = \log p(f(x)) + \operatorname{tr}\!\left( \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} [J_g(x)]^k \right)   (3)

was shown, where J_g(x) = dg(x)/dx. Furthermore, the Skilling–Hutchinson estimator (Skilling, 1989; Hutchinson, 1990) was used to estimate the trace inside the power series. Behrmann et al. (2019) used a fixed truncation to approximate the infinite series in (3). The Lipschitz constant of g bounds the spectral radius of J_g(x) and therefore determines the convergence rate of this power series. As such, the fixed truncation estimator introduces substantial bias in the log density and is unreliable when the Lipschitz constant is close to one, requiring a careful balance between bias and expressiveness. Without decoupling the objective from the estimation bias, i-ResNets end up optimizing for the bias without improving the actual maximum likelihood objective (see Figure 1).
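For intuition, a minimal sketch of the fixed-point inversion that the Banach argument licenses, assuming a contractive module `g` with Lip(g) < 1 (a hypothetical callable, not the authors' implementation):

```python
import torch

def invert_residual_block(g, y, n_iters=100, tol=1e-6):
    """Solve y = x + g(x) for x by fixed-point iteration."""
    x = y.clone()
    for _ in range(n_iters):
        x_next = y - g(x)                 # contraction mapping; converges since Lip(g) < 1
        if (x_next - x).abs().max() < tol:
            return x_next
        x = x_next
    return x
```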

3 Residual Flows

3.1 Unbiased Log Density Estimation for Maximum Likelihood Estimation

Evaluation of the exact log density function in (3) requires infinite time due to the power series. Instead, we rely on randomization to derive an unbiased estimator that can be computed in finite time (with probability one) based on an existing concept (Kahn, 1955).

To illustrate the idea, let Δ_k denote the k-th term of an infinite series, and suppose we always evaluate the first term and then flip a coin b ∼ Bernoulli(q) to determine whether we stop or continue evaluating the remaining terms. By reweighting the remaining terms by 1/(1−q), we obtain an unbiased estimator

\mathbb{E}\!\left[ \Delta_1 + \mathbf{1}_{\{b=0\}} \frac{\sum_{k=2}^{\infty} \Delta_k}{1-q} \right] = \Delta_1 + \frac{1-q}{1-q} \sum_{k=2}^{\infty} \Delta_k = \sum_{k=1}^{\infty} \Delta_k.

This unbiased estimator has probability q of being evaluated in finite time. We can obtain an estimator that is evaluated in finite time with probability one by applying this process infinitely many times to the remaining terms. Directly sampling the number of evaluated terms, we obtain the appropriately named “Russian roulette” estimator (Kahn, 1955)

\sum_{k=1}^{\infty} \Delta_k = \mathbb{E}_{n \sim p(N)}\!\left[ \sum_{k=1}^{n} \frac{\Delta_k}{\mathbb{P}(N \ge k)} \right].   (4)

We note that the explanation above is only meant as an intuitive guide and not a formal derivation. The peculiarities of dealing with infinite quantities dictate that we must make assumptions on the series, on p(N), or both in order for the equality in (4) to hold. While many existing works have made different assumptions depending on their specific applications of (4), we state our result as a theorem where the only condition is that p(N) must have support over all of the indices.
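The following sketch illustrates the randomized truncation in Eq. (4) on a generic, fast-converging series, using a geometric distribution for N as an example choice of p(N); this is only an illustration of the estimator, not the model code:

```python
import numpy as np

def russian_roulette(delta, q=0.5, rng=np.random.default_rng()):
    """Unbiased estimate of sum_{k>=1} delta(k), assuming the series converges fast enough."""
    n = rng.geometric(q)                     # N has support over the positive integers
    total = 0.0
    for k in range(1, n + 1):
        p_geq_k = (1 - q) ** (k - 1)         # P(N >= k) for a Geometric(q) variable
        total += delta(k) / p_geq_k          # reweight term k as in Eq. (4)
    return total

# Example: estimate sum_{k>=1} 0.3**k = 0.3 / 0.7
est = np.mean([russian_roulette(lambda k: 0.3 ** k) for _ in range(10000)])
```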

Theorem 1 (Unbiased log density estimator).

Let f(x) = x + g(x) with Lip(g) < 1 and let N be a random variable with support over the positive integers. Then

\log p(x) = \log p(f(x)) + \mathbb{E}_{n, v}\!\left[ \sum_{k=1}^{n} \frac{(-1)^{k+1}}{k} \frac{v^{\top} [J_g(x)]^{k} v}{\mathbb{P}(N \ge k)} \right],   (5)

where n ∼ p(N) and v ∼ N(0, I).

Here we have also used the Skilling–Hutchinson trace estimator (Skilling, 1989; Hutchinson, 1990) to estimate the trace of the matrices [J_g(x)]^k. A detailed proof is given in Appendix B.

Note that since Lip(g) < 1, J_g is constrained to have spectral radius less than unity, so the power series converges exponentially. The variance of the Russian roulette estimator is small when the infinite series exhibits fast convergence (Rhee and Glynn, 2015; Beatson and Adams, 2019), and in practice we did not have to tune p(N) for variance reduction. Instead, in our experiments, we compute two terms exactly and then use the unbiased estimator on the remaining terms with a single sample from p(N) = Geom(0.5). This results in an expected compute cost of 4 terms, which is less than the 5 to 10 terms that Behrmann et al. (2019) used for their biased estimator.

Theorem 1 forms the core of Residual Flows, as we can now perform maximum likelihood training by backpropagating through (5) to obtain unbiased gradients. The price we pay for the unbiased estimator is variable compute and memory, as each sample of the log density uses a random number of terms in the power series.
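To make this concrete, below is a sketch (not the reference implementation) of a single-sample estimate of the log-determinant term in Eq. (5), assuming a residual branch `g`, inputs with a leading batch dimension, and a geometric p(N); Jacobian powers are formed through repeated vector-Jacobian products rather than explicit matrices:

```python
import torch

def log_det_estimator(g, x, q=0.5):
    """One-sample estimate of the log det(I + J_g(x)) term in Eq. (5)."""
    n = int(torch.distributions.Geometric(probs=q).sample().item()) + 1   # n ~ p(N), support {1, 2, ...}
    v = torch.randn_like(x)                  # Hutchinson probe vector
    x = x.detach().requires_grad_(True)      # treat x as a fresh leaf for this sketch
    gx = g(x)
    w = v
    logdet = 0.0
    for k in range(1, n + 1):
        # v^T J_g^k is built one vector-Jacobian product at a time; create_graph=True keeps
        # the estimate differentiable (the memory this costs is addressed in Section 3.2).
        w = torch.autograd.grad(gx, x, grad_outputs=w, retain_graph=True, create_graph=True)[0]
        p_geq_k = (1 - q) ** (k - 1)         # P(N >= k)
        logdet = logdet + (-1) ** (k + 1) / (k * p_geq_k) * (w * v).flatten(1).sum(1)
    return logdet                            # add log p(f(x)) separately to obtain log p(x)
```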

3.2 Memory-Efficient Backpropagation

Memory can be a scarce resource, and running out of memory due to a large sample from the unbiased estimator can halt training unexpectedly. To this end, we propose two methods to reduce the memory consumption during training.

To see how naïve backpropagation can be problematic, the gradient with respect to the parameters θ, obtained by directly differentiating through the power series (5), can be expressed as

\frac{\partial}{\partial \theta} \log \det\!\left(I + J_g(x, \theta)\right) = \mathbb{E}_{n, v}\!\left[ \sum_{k=1}^{n} \frac{(-1)^{k+1}}{k} \frac{\partial \left( v^{\top} [J_g(x, \theta)]^{k} v \right) / \partial \theta}{\mathbb{P}(N \ge k)} \right].   (6)

Unfortunately, this estimator requires each term to be stored in memory because ∂/∂θ needs to be applied to each term. The total memory cost is then O(n · m), where n is the number of computed terms and m is the number of residual blocks in the entire network. This is extremely memory-hungry during training, and a large random sample of n can occasionally result in running out of memory.

Neumann gradient series.

Instead, we can specifically express the gradients as a power series derived from a Neumann series (see Appendix C). Applying the Russian roulette and trace estimators, we obtain the following theorem.

Theorem 2 (Unbiased log-determinant gradient estimator).

Let Lip(g) < 1 and let N be a random variable with support over the positive integers. Then

\frac{\partial}{\partial \theta} \log \det\!\left(I + J_g(x, \theta)\right) = \mathbb{E}_{n, v}\!\left[ \left( \sum_{k=0}^{n} \frac{(-1)^{k}}{\mathbb{P}(N \ge k)} v^{\top} [J_g(x, \theta)]^{k} \right) \frac{\partial J_g(x, \theta)}{\partial \theta} \, v \right],   (7)

where n ∼ p(N) and v ∼ N(0, I).

As the power series in (7) does not need to be differentiated through, using it reduces the memory requirement by a factor of n. This is especially useful when using the unbiased estimator, as the memory will be constant regardless of the number of terms we draw from p(N).
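A sketch of how the Neumann-series gradient can be realized in practice, under the same assumptions as the previous sketch: the bracketed sum in Eq. (7) is accumulated without building a computation graph, and only one final vector-Jacobian product is differentiated with respect to the parameters. The returned scalar is a surrogate, used only for its gradients:

```python
import torch

def neumann_logdet_surrogate(g, x, q=0.5):
    """Scalar surrogate whose parameter gradients follow a single sample of Eq. (7)."""
    n = int(torch.distributions.Geometric(probs=q).sample().item()) + 1
    v = torch.randn_like(x)
    x = x.detach().requires_grad_(True)
    gx = g(x)

    # Accumulate the bracketed Neumann sum. create_graph stays False here, so no
    # computation graph is stored for these vector-Jacobian products.
    w, acc = v, v.clone()                              # acc starts at the k = 0 term
    for k in range(1, n + 1):
        w = torch.autograd.grad(gx, x, grad_outputs=w, retain_graph=True)[0]
        acc = acc + (-1) ** k / (1 - q) ** (k - 1) * w  # divide by P(N >= k)

    # acc^T J_g v, differentiated w.r.t. the parameters of g, yields acc^T (dJ_g/dtheta) v.
    u = torch.autograd.grad(gx, x, grad_outputs=acc, create_graph=True)[0]
    return (u * v).flatten(1).sum()   # backpropagate this scalar; its value is not the log det
```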

Backward-in-forward: early computation of gradients.

We can further reduce memory by partially performing backpropagation during the forward evaluation. By taking advantage of log det(I + J_g(x, θ)) being a scalar quantity, the partial derivative from the objective L is

\frac{\partial \mathcal{L}}{\partial \theta} = \frac{\partial \mathcal{L}}{\partial \log \det(I + J_g(x, \theta))} \cdot \frac{\partial \log \det(I + J_g(x, \theta))}{\partial \theta}.   (8)

For every residual block, we compute ∂ log det(I + J_g(x, θ))/∂θ along with the forward pass, release the memory for the computation graph, then simply multiply by ∂L/∂ log det(I + J_g(x, θ)) later during the main backprop. This reduces memory by another factor of m, to O(1), with negligible overhead.
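A conceptual sketch of this trick for a single block, with a hypothetical `logdet_estimate` helper standing in for the power-series computation; the path-wise graph through f(x) = x + g(x) is kept separately and is unaffected:

```python
import torch

def logdet_grads_in_forward(logdet_estimate, x, params):
    """Compute the log-det gradients eagerly and free their graph."""
    ld = logdet_estimate(x)                               # builds a small graph over the power series
    dld_dtheta = torch.autograd.grad(ld.sum(), params)    # graph is freed after this call
    return ld.detach(), dld_dtheta

# Later, during the main backward pass, scale by the scalar dL/d(logdet)
# (e.g. -1/batch_size for a mean negative log-likelihood) and accumulate:
# for p, g_ld in zip(params, dld_dtheta):
#     p.grad = g_ld * dL_dlogdet if p.grad is None else p.grad + g_ld * dL_dlogdet
```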

Note that while these two tricks remove the memory cost of backpropagating through the log-determinant terms, computing the path-wise derivatives from f(x) still requires the same amount of memory as a single evaluation of the residual network. Table 1 shows that the memory consumption can be enormous for naïve backpropagation, and using large networks would have been intractable.

| Method | MNIST ELU | MNIST LipSwish | CIFAR-10† ELU | CIFAR-10† LipSwish | CIFAR-10 ELU | CIFAR-10 LipSwish | Relative |
|---|---|---|---|---|---|---|---|
| Naïve Backprop | 92.0 | 192.1 | 33.3 | 66.4 | 120.2 | 263.5 | 100% |
| Neumann Gradient | 13.4 | 31.2 | 5.5 | 11.3 | 17.6 | 40.8 | 15.7% |
| Backward-in-Forward | 8.7 | 19.8 | 3.8 | 7.4 | 11.5 | 26.1 | 10.3% |
| Both Combined | 4.9 | 13.6 | 3.0 | 5.9 | 6.6 | 18.0 | 7.1% |

Table 1: Memory usage (GB) per minibatch of 64 samples when computing 10 terms of the corresponding power series. †Uses immediate downsampling before any residual blocks.

3.3 Avoiding Gradient Saturation with the LipSwish Activation Function

As the log density depends on the first derivatives through the Jacobian J_g, the gradients used for training depend on second derivatives. Analogously to the phenomenon of saturated activation functions, Lipschitz-constrained activation functions can have a gradient saturation problem. For instance, the ELU activation used by Behrmann et al. (2019) achieves its largest Lipschitz constant where its first derivative equals one, but this occurs exactly where the second derivative is zero, over a very large region, implying there is a trade-off between a large Lipschitz constant and non-vanishing gradients.

We thus desire two properties from our activation functions φ(z):

  1. The first derivative must be bounded as |φ′(z)| ≤ 1 for all z.

  2. The second derivative should not asymptotically vanish when |φ′(z)| is close to one.

While many activation functions satisfy condition 1, most do not satisfy condition 2. We argue that the ELU and softplus activations are suboptimal due to gradient saturation.

We find that good activation functions satisfying condition 2 are smooth and non-monotonic functions, such as Swish (Ramachandran et al., 2017). However, Swish by default does not satisfy condition 1, as max_z |(d/dz) Swish(z)| ≈ 1.1. Scaling by this bound via

\text{LipSwish}(z) := \text{Swish}(z)/1.1 = z \cdot \sigma(\beta z) / 1.1,   (9)

where σ is the sigmoid function, results in |(d/dz) LipSwish(z)| ≤ 1 for all values of z. LipSwish is a simple modification to Swish that has a Lipschitz constant of at most one. In our experiments, we parameterize β to be strictly positive by passing it through softplus. One caveat of using LipSwish is that it cannot be computed in-place, resulting in roughly double the memory usage of ELU (Table 1).
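A sketch of LipSwish as an nn.Module, following Eq. (9); the initialization of the β parameter is an assumption chosen so that softplus(β) ≈ 1 at the start of training:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LipSwish(nn.Module):
    def __init__(self):
        super().__init__()
        # Raw parameter; softplus(0.5413) ≈ 1.0, i.e. standard Swish at initialization.
        self.beta_raw = nn.Parameter(torch.tensor([0.5413]))

    def forward(self, z):
        beta = F.softplus(self.beta_raw)            # keep beta strictly positive
        return z * torch.sigmoid(beta * z) / 1.1    # LipSwish(z) = Swish(z) / 1.1
```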

3.4 Generalizing i-ResNets via Induced Mixed Norms

Behrmann et al. (2019) used spectral normalization (Miyato et al., 2018), which relies on power iteration to approximate the spectral norm, to enforce the Lipschitz constraint on g. Specifically, this bounds the spectral norm of the Jacobian J_g by the sub-multiplicativity property. If g is a neural network with pre-activations defined recursively as h_i = W_i z_{i−1} + b_i and z_i = φ(h_i), with z_0 = x, then the data-independent upper bound

\|J_g(x)\|_2 = \|W_L D_{L-1} W_{L-1} \cdots D_1 W_1\|_2 \le \prod_{i=1}^{L} \|W_i\|_2   (10)

holds, where the D_i are diagonal matrices containing the first derivatives of the activation functions. The inequality in (10) is a result of using a sub-multiplicative norm and assuming that the activation functions have Lipschitz constant at most one. However, any induced matrix norm satisfies the sub-multiplicativity property, including mixed norms \|W\|_{p \to q} = \sup_{\|x\|_p = 1} \|W x\|_q, where the input and output spaces have different vector norms.

As long as g maps back to the original normed (complete) vector space, the Banach fixed point theorem used in the proof of invertibility of residual blocks (Behrmann et al., 2019) still holds. As such, we can choose arbitrary norm orders p_0, …, p_L with p_L = p_0 such that

\prod_{i=1}^{L} \|W_i\|_{p_{i-1} \to p_i} < 1.   (11)

We use a more general form of power iteration (Johnston, 2016) for estimating induced mixed norms, which reduces to the standard power iteration when p = q = 2. Furthermore, the special cases p = 1 and p = ∞ are of particular interest, as these matrix norms can be computed exactly (Tropp, 2004). Additionally, we can also optimize the norm orders during training by backpropagating through the modified power method. Lastly, we emphasize that the convergence of the infinite series (3) is guaranteed for any induced matrix norm, as they all upper bound the spectral radius (Horn and Johnson, 2012).
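For reference, the following sketch shows the standard power iteration for the spectral norm, i.e. the p = q = 2 case to which the generalized power method reduces (the mixed-norm variant is not shown here):

```python
import torch

def spectral_norm_power_iteration(W, n_iters=20):
    """Estimate the largest singular value of a weight matrix W."""
    u = torch.randn(W.shape[0])
    for _ in range(n_iters):
        v = W.t() @ u
        v = v / v.norm()
        u = W @ v
        u = u / u.norm()
    return u @ W @ v        # approximately ||W||_2
```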

Figure 2: Lipschitz constraints with different induced matrix norms. (a) Checkerboard 2D: both learned norm orders and the ∞-norm improve upon spectral norm, allowing more expressive models with fewer blocks; standard deviation shown across 3 random seeds. (b) CIFAR-10: learning the norm orders gives a small performance gain, and the ∞-norm performs much worse than spectral norm (p = 2). Comparisons are made using identical initialization.

Figure 2(a) shows that we obtain some performance gain by using either learned norms or the infinity norm on a difficult 2D dataset, where similar performance can be achieved using fewer residual blocks. While the infinity norm works well with fully connected layers, we find that it does not perform as well as the spectral norm for convolutional networks. Instead, Figure 2(b) shows that learned norms obtain a marginal improvement on CIFAR-10. Ultimately, while the idea of generalizing spectral normalization via learnable norm orders is interesting enough in its own right to be communicated here, we found that the improvements are very marginal. More details are in Appendix D.

4 Related Work

Estimation of Infinite Series.

Our derivation of the unbiased estimator follows the general approach of randomized truncation (Kahn, 1955). This paradigm of estimation has been repeatedly rediscovered and applied in many fields, including the solution of stochastic differential equations (McLeish, 2011; Rhee and Glynn, 2012, 2015), ray tracing for rendering paths of light (Arvo and Kirk, 1990), and estimating the limiting behavior of optimization problems (Tallec and Ollivier, 2017; Beatson and Adams, 2019), among many other applications. Some recent works use Chebyshev polynomials to estimate spectral functions of symmetric matrices (Han et al., 2018; Adams et al., 2018; Ramesh and LeCun, 2018; Boutsidis et al., 2008). These works estimate quantities similar to those presented here, but a key difference is that the Jacobian in our power series is not symmetric. We also note that works which rediscovered the random truncation approach (McLeish, 2011; Rhee and Glynn, 2015; Han et al., 2018) made assumptions on the individual terms in order to apply it to general infinite series. Fortunately, since the power series in Theorems 1 and 2 converge fast enough, we were able to make use of a different set of assumptions, requiring only that p(N) has sufficient support, which was adapted from Bouchard-Côté (2018) (details in Appendix B).

Memory-efficient Backpropagation.

The issue of computing gradients in a memory-efficient manner was explored by Gomez et al. (2017) and Chang et al. (2018) for residual networks with an architecture devised by Dinh et al. (2014), and explored by Chen et al. (2018) for a continuous analogue of residual networks. These works focus on the path-wise gradients from the output of the network, whereas we focus on the gradients from the log-determinant term in the change of variables equation specifically for generative modeling.

Invertible Deep Networks.

Flow-based generative models are a density estimation approach that has invertibility as its core design principle (Rezende and Mohamed, 2015; Deco and Brauer, 1995). Most recent work on flows focuses on designing maximally expressive architectures while maintaining invertibility and tractable log-determinant computation (Dinh et al., 2014, 2017; Kingma and Dhariwal, 2018). An alternative route has been taken by Neural ODEs, which show that a Jacobian trace can be computed instead of a log determinant if the transformation is specified by an ordinary differential equation (Chen et al., 2018; Grathwohl et al., 2019). Invertible architectures are also of interest for discriminative problems, as their information-preservation properties make them suitable candidates for analyzing and regularizing learned representations (Jacobsen et al., 2019).

5 Experiments

5.1 Density & Generative Modeling

We use a similar architecture to Behrmann et al. (2019), except without the immediate invertible downsampling (Dinh et al., 2017) at the image pixel level. Removing this substantially increases the amount of memory required (shown in Table 1), as there are more spatial dimensions at every layer, but improves overall performance. We also increase the bound on the Lipschitz constant of each weight matrix, whereas Behrmann et al. (2019) used a smaller bound to reduce the error of the biased estimator. More detailed explanations are in Appendix E.

Unlike prior works that use multiple GPUs, large batch sizes, and a few hundred epochs, Residual Flow models are trained with the standard batch size of 64 and converge in roughly 300-350 epochs for MNIST and CIFAR-10. Most network settings can fit on a single GPU (see Table 1), though we use 4 GPUs in our experiments to speed up training. In comparison, Glow was trained for 1800 epochs (Kingma and Dhariwal, 2018) and Flow++ was reported to not have fully converged after 400 epochs (Ho et al., 2019). We exceed Glow's performance (3.35 bits/dim) on CIFAR-10 at around 50 epochs (Figure 6).

Table 2 reports the bits per dimension (−log₂ p(x)/d, where x ∈ ℝ^d) on the standard benchmark datasets MNIST, CIFAR-10, and downsampled ImageNet. We achieve competitive performance with state-of-the-art flow-based models on all datasets. For evaluation, we computed 20 terms of the power series (3) and used the unbiased estimator (5) to estimate the remaining terms. This reduces the standard deviation of the unbiased estimator to a negligible level.
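For completeness, a small sketch of the conversion from a per-example log-likelihood in nats to the bits-per-dimension numbers reported below:

```python
import math

def bits_per_dim(log_px_nats, num_dims):
    """Convert log p(x) in nats into bits per dimension."""
    return -log_px_nats / (num_dims * math.log(2))
```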

We are also competitive with state-of-the-art flow-based models with regard to sample quality. Figure 3 shows random samples from the model trained on CelebA. Furthermore, samples from the Residual Flow trained on CIFAR-10 are slightly more globally coherent (Figure 4) than those of PixelCNN and variational dequantized Flow++, even though our likelihood is worse. This is only an informal comparison, and it is well known that visual fidelity and log-likelihood are not necessarily indicative of each other (Theis et al., 2015), but we believe residual networks may have a better inductive bias than coupling blocks or autoregressive architectures as generative models. More samples are in Appendix A.

| Model | MNIST | CIFAR-10 | ImageNet 32×32 | ImageNet 64×64 |
|---|---|---|---|---|
| Real NVP (Dinh et al., 2017) | 1.06 | 3.49 | 4.28 | 3.98 |
| Glow (Kingma and Dhariwal, 2018) | 1.05 | 3.35 | 4.09 | 3.81 |
| FFJORD (Grathwohl et al., 2019) | 0.99 | 3.40 | — | — |
| Flow++ (Ho et al., 2019) | — | 3.29 (3.09) | — (3.86) | — (3.69) |
| i-ResNet (Behrmann et al., 2019) | 1.05 | 3.45 | — | — |
| Residual Flow (Ours) | 0.97 | 3.29 | 4.02 | 3.78 |

Table 2: Results [bits/dim] on standard benchmark datasets for density estimation. Values in parentheses are for models that used “variational dequantization” (Ho et al., 2019), which we do not compare against.


Figure 3: Qualitative samples. Real data (left) and random samples (right) from a model trained on 5-bit 64×64 CelebA. The most visually appealing samples were picked out of 5 random batches.

Figure 4: Random samples from CIFAR-10 models: real data, Residual Flow (3.29 bits/dim), PixelCNN (3.14 bits/dim), and variational dequantized Flow++ (3.08 bits/dim).

5.2 Ablation Experiments

We report ablation experiments for the unbiased estimator and the LipSwish activation function in Figure 6. Even in settings where the Lipschitz constant and bias are relatively low, we observe a significant improvement from using the unbiased estimator. Training the larger i-ResNet model on CIFAR-10 results in the biased estimator completely ignoring the actual likelihood objective: the biased estimate dropped below 0.8 bits/dim within 50 epochs, while the actual bits/dim wildly oscillated above 3.66 bits/dim and never appeared to converge. Using LipSwish not only converges much faster but also results in better performance than softplus or ELU, especially in the high-Lipschitz settings (Figures 5 and 6).

Figure 5: Effect of activation functions on CIFAR-10.

| Training Setting | MNIST | CIFAR-10† | CIFAR-10 |
|---|---|---|---|
| i-ResNet + ELU | 1.05 | 3.45 | 3.66–4.78 |
| Residual Flow + ELU | 1.00 | 3.40 | 3.32 |
| Residual Flow + LipSwish | 0.97 | 3.39 | 3.29 |

Figure 6: Ablation results [bits/dim]. †Uses immediate downsampling before any residual blocks.

5.3 Hybrid Modeling

MNIST:

| Block Type | Acc (λ=0) | BPD (λ=1/D) | Acc (λ=1/D) | BPD (λ=1) | Acc (λ=1) |
|---|---|---|---|---|---|
| Nalisnick et al. (2019) | 99.33% | 1.26 | 97.78% | — | — |
| Coupling | 99.50% | 1.18 | 98.45% | 1.04 | 95.42% |
| + 1×1 Conv | 99.56% | 1.15 | 98.93% | 1.03 | 94.22% |
| Residual | 99.53% | 1.01 | 99.46% | 0.99 | 98.69% |

SVHN:

| Block Type | Acc (λ=0) | BPD (λ=1/D) | Acc (λ=1/D) | BPD (λ=1) | Acc (λ=1) |
|---|---|---|---|---|---|
| Nalisnick et al. (2019) | 95.74% | 2.40 | 94.77% | — | — |
| Coupling | 96.27% | 2.73 | 95.15% | 2.21 | 46.22% |
| + 1×1 Conv | 96.72% | 2.61 | 95.49% | 2.17 | 46.58% |
| Residual | 96.72% | 2.29 | 95.79% | 2.06 | 58.52% |

Table 3: Comparison of residual vs. coupling blocks for the hybrid modeling task (MNIST above, SVHN below). Acc is classification accuracy, BPD is bits per dimension, and λ is the weighting in (12).

Next, we experiment on joint training of continuous and discrete data. Of particular interest is the ability to learn both a generative model and a classifier, referred to as a hybrid model (Nalisnick et al., 2019). Let x be the data and y a categorical random variable. The maximum likelihood objective can be separated into log p(x, y) = log p(x) + log p(y | x), where p(x) is modeled using a flow-based generative model and p(y | x) is a classifier network that shares learned features with the generative model. However, it is often the case that accuracy is the metric of interest and log-likelihood is only used as a surrogate training objective. In this case, Nalisnick et al. (2019) suggest a weighted maximum likelihood objective,

\mathbb{E}_{(x, y) \sim p_{\text{data}}}\big[ \lambda \log p(x) + \log p(y \mid x) \big],   (12)

where λ is a scaling constant. As y is much lower dimensional than x, setting λ < 1 emphasizes classification, and setting λ = 0 results in a classification-only model which can be compared against.
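A sketch of this weighted objective as a training loss, with hypothetical `flow_log_prob` and `classifier_logits` helpers standing in for the generative model and the classification branch:

```python
import torch.nn.functional as F

def hybrid_loss(flow_log_prob, classifier_logits, x, y, lam=1.0):
    """Negative of the weighted objective in Eq. (12); lam = 0 is classification-only."""
    log_px = flow_log_prob(x)                                              # log p(x) from the flow
    log_py_given_x = -F.cross_entropy(classifier_logits(x), y, reduction="none")
    return -(lam * log_px + log_py_given_x).mean()
```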

| Block Type | Acc (λ=0) | BPD (λ=1/D) | Acc (λ=1/D) | BPD (λ=1) | Acc (λ=1) |
|---|---|---|---|---|---|
| Coupling | 89.77% | 4.30 | 87.58% | 3.54 | 67.62% |
| + 1×1 Conv | 90.82% | 4.09 | 87.96% | 3.47 | 67.38% |
| Residual | 91.78% | 3.62 | 90.47% | 3.39 | 70.32% |

Table 4: Hybrid modeling results on CIFAR-10.

As Nalisnick et al. (2019) perform approximate Bayesian inference and use different architectures than ours, we perform our own experiments to compare residual blocks to coupling blocks (Dinh et al., 2017) as well as 1×1 convolutions (Kingma and Dhariwal, 2018). We use the same architecture as in the density estimation experiments and append a classification branch that takes features at the final output of multiple scales (see details in Appendix E). For comparisons to coupling blocks, we use the same architecture for the networks inside each block, except that we use ReLU activations and no longer constrain the Lipschitz constant.

Tables 3 and 4 show our experimental results. We outperform Nalisnick et al. (2019) on both pure classification and hybrid modeling. Furthermore, on MNIST we are able to obtain a decent classifier even when λ = 1. In general, we find that residual blocks perform much better than coupling blocks at learning representations for both generative and discriminative tasks. Coupling blocks have very high bits per dimension when λ = 1/D while performing worse at classification when λ = 1, suggesting that they have restricted flexibility and can only perform one task well at a time.

6 Conclusion

We have shown that invertible residual networks can be turned into powerful generative models. The proposed unbiased flow-based generative model, coined Residual Flow, achieves competitive or better performance compared to alternative flow-based models in density estimation, sample quality, and hybrid modeling. More generally, we gave a recipe for introducing stochasticity in order to construct tractable flow-based models with a different set of constraints on layer architectures than competing approaches, which rely on exact log-determinant computations. This opens up a new design space of expressive but Lipschitz-constrained architectures that has yet to be explored.

Acknowledgments

Jens Behrmann gratefully acknowledges financial support from the German Science Foundation for RTG 2224 “Parameter Identification - Analysis, Algorithms, Applications”.

References

  • Adams et al. [2018] Ryan P Adams, Jeffrey Pennington, Matthew J Johnson, Jamie Smith, Yaniv Ovadia, Brian Patton, and James Saunderson. Estimating the spectral density of large implicit matrices. arXiv preprint arXiv:1802.03451, 2018.
  • Arvo and Kirk [1990] James Arvo and David Kirk. Particle transport and image synthesis. ACM SIGGRAPH Computer Graphics, 24(4):63–66, 1990.
  • Beatson and Adams [2019] Alex Beatson and Ryan P. Adams. Efficient optimization of loops and limits with randomized telescoping sums. In International Conference on Machine Learning, 2019.
  • Behrmann et al. [2019] Jens Behrmann, Will Grathwohl, Ricky T. Q. Chen, David Duvenaud, and Jörn-Henrik Jacobsen. Invertible residual networks. International Conference on Machine Learning, 2019.
  • Bouchard-Côté [2018] Alexandre Bouchard-Côté. Topics in probability assignment 1. https://www.stat.ubc.ca/~bouchard/courses/stat547-fa2018-19//files/assignment1-solution.pdf, 2018. Accessed: 2019-05-22.
  • Boutsidis et al. [2008] Christos Boutsidis, Michael W Mahoney, and Petros Drineas. Unsupervised feature selection for principal components analysis. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 61–69. ACM, 2008.
  • Chang et al. [2018] Bo Chang, Lili Meng, Eldad Haber, Lars Ruthotto, David Begert, and Elliot Holtham. Reversible architectures for arbitrarily deep residual neural networks. In AAAI Conference on Artificial Intelligence, 2018.
  • Chen et al. [2018] Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. Advances in Neural Information Processing Systems, 2018.
  • Deco and Brauer [1995] Gustavo Deco and Wilfried Brauer. Nonlinear higher-order statistical decorrelation by volume-conserving neural architectures. Neural Networks, 8(4):525–535, 1995.
  • Dinh et al. [2014] Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
  • Dinh et al. [2017] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. International Conference on Learning Representations, 2017.
  • Gomez et al. [2017] Aidan N Gomez, Mengye Ren, Raquel Urtasun, and Roger B Grosse. The reversible residual network: Backpropagation without storing activations. In Advances in neural information processing systems, pages 2214–2224, 2017.
  • Goodfellow et al. [2014] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
  • Gouk et al. [2018] Henry Gouk, Eibe Frank, Bernhard Pfahringer, and Michael Cree. Regularisation of neural networks by enforcing lipschitz continuity. arXiv preprint arXiv:1804.04368, 2018.
  • Grathwohl et al. [2019] Will Grathwohl, Ricky T. Q. Chen, Jesse Bettencourt, Ilya Sutskever, and David Duvenaud. Ffjord: Free-form continuous dynamics for scalable reversible generative models. International Conference on Learning Representations, 2019.
  • Han et al. [2018] Insu Han, Haim Avron, and Jinwoo Shin. Stochastic chebyshev gradient descent for spectral optimization. In Conference on Neural Information Processing Systems. 2018.
  • He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
  • Ho et al. [2019] Jonathan Ho, Xi Chen, Aravind Srinivas, Yan Duan, and Pieter Abbeel. Flow++: Improving flow-based generative models with variational dequantization and architecture design. International Conference on Machine Learning, 2019.
  • Horn and Johnson [2012] Roger A Horn and Charles R Johnson. Matrix analysis. Cambridge University Press, 2012.
  • Hutchinson [1990] Michael F Hutchinson. A stochastic estimator of the trace of the influence matrix for Laplacian smoothing splines. Communications in Statistics-Simulation and Computation, 19(2):433–450, 1990.
  • Jacobsen et al. [2019] Jörn-Henrik Jacobsen, Jens Behrmann, Richard Zemel, and Matthias Bethge. Excessive invariance causes adversarial vulnerability. International Conference on Learning Representations, 2019.
  • Johnston [2016] Nathaniel Johnston. QETLAB: A MATLAB toolbox for quantum entanglement, version 0.9. http://qetlab.com, January 2016.
  • Kahn [1955] Herman Kahn. Use of different monte carlo sampling techniques. 1955.
  • Kingma and Ba [2015] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
  • Kingma and Welling [2014] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. International Conference on Learning Representations, 2014.
  • Kingma and Dhariwal [2018] Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pages 10215–10224, 2018.
  • Loshchilov and Hutter [2019] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019.
  • McLeish [2011] Don McLeish. A general method for debiasing a monte carlo estimator. Monte Carlo Methods and Applications, 17(4):301–315, 2011.
  • Miyato et al. [2018] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018.
  • Nalisnick et al. [2019] Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Hybrid models with deep and invertible features. International Conference on Machine Learning, 2019.
  • Oord et al. [2016] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. International Conference on Machine Learning, 2016.
  • Petersen and Pedersen [2012] K. B. Petersen and M. S. Pedersen. The matrix cookbook, 2012.
  • Polyak and Juditsky [1992] Boris T. Polyak and Anatoli Juditsky. Acceleration of stochastic approximation by averaging. 1992.
  • Ramachandran et al. [2017] Prajit Ramachandran, Barret Zoph, and Quoc V Le. Searching for activation functions. arXiv preprint arXiv:1710.05941, 2017.
  • Ramesh and LeCun [2018] Aditya Ramesh and Yann LeCun. Backpropagation for implicit spectral densities. Conference on Neural Information Processing Systems, abs/1806.00499, 2018.
  • Rezende and Mohamed [2015] Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedings of the 32nd International Conference on Machine Learning, pages 1530–1538, 2015.
  • Rhee and Glynn [2012] Chang-han Rhee and Peter W Glynn. A new approach to unbiased estimation for sde’s. In Proceedings of the Winter Simulation Conference, page 17. Winter Simulation Conference, 2012.
  • Rhee and Glynn [2015] Chang-han Rhee and Peter W Glynn. Unbiased estimation with square root convergence for sde models. Operations Research, 63(5):1026–1043, 2015.
  • Skilling [1989] John Skilling. The eigenvalues of mega-dimensional matrices. In Maximum Entropy and Bayesian Methods, pages 455–466. Springer, 1989.
  • Tallec and Ollivier [2017] Corentin Tallec and Yann Ollivier. Unbiasing truncated backpropagation through time. arXiv preprint arXiv:1705.08209, 2017.
  • Theis et al. [2015] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.
  • Tropp [2004] Joel Aaron Tropp. Topics in Sparse Approximation. PhD thesis, University of Texas, 2004.
  • Zhang et al. [2019] Guodong Zhang, Chaoqi Wang, Bowen Xu, and Roger Grosse. Three mechanisms of weight decay regularization. In International Conference on Learning Representations, 2019.

Appendix A Random Samples

Figure 7: Random samples from CIFAR-10 models (real data, Residual Flow, PixelCNN, Flow++). PixelCNN (Oord et al., 2016) and Flow++ samples reprinted from Ho et al. (2019), with permission.

Figure 8: Real data and random samples from the Residual Flow trained on MNIST.

Figure 9: Real data and random samples from the Residual Flow trained on ImageNet 32×32.

Figure 10: Real data and random samples from the Residual Flow trained on ImageNet 64×64.

Figure 11: Real data and random samples from the Residual Flow trained on 5-bit CelebA 64×64.

Appendix B Proofs

We start by formulating a lemma that gives the condition under which the randomized truncated series is an unbiased estimator in a fairly general setting. Afterwards, we study our specific estimator and prove that the assumption of the lemma is satisfied.

Note that similar conditions have been stated in previous works, e.g. in McLeish (2011) and Rhee and Glynn (2012). However, we use the condition from Bouchard-Côté (2018), which only requires p(N) to have sufficient support.

To make the derivations self-contained, we restate the conditions from Bouchard-Côté (2018) in the following way:

Lemma 3 (Unbiased randomized truncated series).

Let be a real random variable with for some . Further, let and for . Assume

and let be a random variable with support over the positive integers and . Then for

it holds

Proof.

First, denote

where the bound follows from the triangle inequality. Since the sequence is non-decreasing, the monotone convergence theorem allows swapping the expectation and the limit. Furthermore, we have

where the assumption is used in the last step. Using the above, the dominated convergence theorem can be used to swap the limit and the expectation. Using similar derivations as above, we have

where we used the definitions from above. ∎

Proof.

(Theorem 1)
To simplify notation, we denote . Furthermore, let

denote the real random variable and let and for , as in Lemma 3. To prove the claim of the theorem, we can use Lemma 3 and we only need to prove that the assumption holds for this specific case.

In order to exchange summation and expectation via Fubini's theorem, we need to prove the required bound for all terms. Using the assumption, we have

for an arbitrary index. Hence,

(13)

Since the expectations of the quadratic forms equal the corresponding traces via the Skilling–Hutchinson trace estimator (Hutchinson, 1990; Skilling, 1989), we have

To show that (13) is bounded, we derive the bound

where λ_i denote the eigenvalues and ρ(·) the spectral radius. Inserting this bound into (13) and using Lip(g) < 1 yields

Hence, the assumption of Lemma 3 is verified. ∎

Proof.

(Theorem 2)
The result can be proven in an analogous fashion to the proof of Theorem 1, which is why we only present a short version without all steps.

By obtaining the bound

Fubini's theorem can be applied to swap the expectation and summation. Furthermore, by using the trace estimation and similar bounds as in the proof of Theorem 1, the assumption of Lemma 3 can be verified. ∎

Appendix C Memory-Efficient Gradient Estimation of Log-Determinant

Derivation of gradient estimator via differentiating power series:

Derivation of memory-efficient gradient estimator:

(14)
(15)
(16)

Note that (14) follows from the chain rule of differentiation; for the derivative of the determinant in (15), see Petersen and Pedersen (2012, eq. 46). Furthermore, (16) follows from the properties of a Neumann series, which converges because the spectral radius of J_g is less than one.

Hence, if we are able to compute the trace exactly, both approaches return the same values for a given truncation. However, when estimating the trace via the Hutchinson trace estimator, the two estimators are not equal in general:

Another difference between the two approaches is the memory consumption of the corresponding computational graph. The summation in the memory-efficient estimator is not tracked for the gradient, which allows computing the gradient with constant memory (constant with respect to the truncation).

Appendix D Generalized Spectral Norm

Figure 12: Learned densities on Checkerboard 2D. Panels: data; p = 2 (5.13 bits); p = ∞ (5.09 bits).

Using different induced p-norms on Checkerboard 2D.

We experimented with the Checkerboard 2D dataset, which is rather difficult for a flow-based model to fit due to the discontinuous nature of the true distribution. We used brute-force computation of the log-determinant for the change of variables (which, in the 2D case, is faster than the unbiased estimator). In the 2D case, we found that the ∞-norm always outperforms or at least matches the 2-norm (i.e. spectral norm). Figure 12 shows the learned densities with 200 residual blocks. The color represents the magnitude of the learned density, with brighter values indicating larger values. The ∞-norm model produces density estimates that are more evenly spread out across the space, whereas the spectral norm model concentrates more of its density in the regions between the true modes.
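A sketch of such a brute-force 2D computation, assuming a flow `f` acting on batches of 2D points:

```python
import torch
from torch.autograd.functional import jacobian

def exact_logdet_2d(f, x):
    """Exact log|det df/dx| per example for 2D inputs x of shape (batch, 2)."""
    logdets = []
    for xi in x:
        J = jacobian(f, xi.unsqueeze(0)).reshape(2, 2)   # full 2x2 Jacobian for one sample
        logdets.append(torch.linalg.slogdet(J)[1])       # log of the absolute determinant
    return torch.stack(logdets)
```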


Figure 13: Improvement from using generalized spectral norm on CIFAR-10.

Learning norm orders on CIFAR-10.

We parameterize each norm order through a learned weight, which bounds the orders to a valid range. We tried two setups: one where all norm orders are free to change (subject to satisfying the constraints in (11)), and another where the states within each residual block share the same order. Figure 13 shows the improvement in bits from using learned norms. The gain in performance is marginal, and the final models only outperformed spectral norm by a small fraction of a bit per dimension. Interestingly, we found that the learned norms stayed close to 2, as shown in Figure 14, especially for the input and output spaces of g, i.e. between blocks. This may suggest that spectral norm, or a norm with order close to 2, is already optimal in this setting.


Figure 14: Learned norm orders on CIFAR-10. Each residual block is visualized as a single line. The input and two hidden states for each block use different normed spaces. We observe multiple trends: (i) the norms for the first hidden states are consistently higher than the input, and lower for the second. (ii) The orders for the hidden states drift farther away from 2 as depth increases. (iii) The ending order of one block and the starting order of the next are generally consistent and close to 2.

Appendix E Experiment Setup

We use the standard setup of passing the data through an “unsquashing” layer (we used the logit transform (Dinh et al., 2017)), followed by alternating multiple residual blocks and squeeze layers (Dinh et al., 2017). We use activation normalization (Kingma and Dhariwal, 2018) before and after every residual block. Each residual connection consists of

LipSwish → 3×3 Conv → LipSwish → 1×1 Conv → LipSwish → 3×3 Conv

with a fixed number of hidden channels. Below are the architectures for each dataset, where α denotes the logit transform parameter.

MNIST.

With α = 1e-5.

Image → LogitTransform(α) → 16×ResBlock → [ Squeeze → 16×ResBlock ] × 2

CIFAR-10.

Image → LogitTransform(α) → 16×ResBlock → [ Squeeze → 16×ResBlock ] × 2

SVHN.

Image → LogitTransform(α) → 16×ResBlock → [ Squeeze → 16×ResBlock ] × 2

ImageNet 32×32.

Image → LogitTransform(α) → 32×ResBlock → [ Squeeze → 32×ResBlock ] × 2

ImageNet 64×64.

Image → Squeeze → LogitTransform(α) → 32×ResBlock → [ Squeeze → 32×ResBlock ] × 2

CelebA 5-bit 64×64.

Image → Squeeze → LogitTransform(α) → 16×ResBlock → [ Squeeze → 16×ResBlock ] × 3

For density modeling on MNIST and CIFAR-10, we added 4 fully connected residual blocks at the end of the network, with intermediate hidden dimensions of 128. These residual blocks were not used in the hybrid modeling experiments or on other datasets.

For hybrid modeling on CIFAR-10, we replaced the logit transform with normalization by the standard preprocessing of subtracting the mean and dividing by the standard deviation across the training data. The MNIST and SVHN architectures for hybrid modeling were the same as those for density modeling.

For augmenting our flow-based model with a classifier in the hybrid modeling experiments, we added an additional branch after every squeeze layer and at the end of the network. Each branch consisted of

3×3 Conv → ActNorm → ReLU → AdaptiveAveragePooling((1, 1))

where the adaptive average pooling averages across all spatial dimensions and results in a vector with dimension equal to the number of channels. The outputs at every scale were concatenated together and fed into a linear softmax classifier.
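A sketch of one such branch; activation normalization is not a PyTorch built-in, so an ActNorm module is passed in (nn.Identity() serves as a stand-in here), and `c` is the channel count at that scale:

```python
import torch.nn as nn

def classification_branch(c, actnorm=None):
    return nn.Sequential(
        nn.Conv2d(c, c, kernel_size=3, padding=1),
        actnorm if actnorm is not None else nn.Identity(),  # ActNorm in the actual model
        nn.ReLU(),
        nn.AdaptiveAvgPool2d((1, 1)),   # average over all spatial dimensions
        nn.Flatten(),                   # -> vector of dimension c per example
    )
```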

Adaptive number of power iterations.

We used spectral normalization for convolutions (Gouk et al., 2018). To account for variable weight updates during training, we implemented an adaptive version of spectral normalization in which we perform as many power iterations as needed until the relative change in the estimated spectral norm is sufficiently small. As this also reduces the number of iterations when weight updates are small, it did not result in higher time complexity.
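A sketch of the adaptive stopping criterion for a weight matrix (the convolution case uses the operator-norm iterations of Gouk et al. (2018), not shown here):

```python
import torch
import torch.nn.functional as F

def adaptive_spectral_norm(W, u, rel_tol=1e-3, max_iters=100):
    """Power iteration that stops once the spectral norm estimate stabilizes."""
    sigma_prev = None
    for _ in range(max_iters):
        v = F.normalize(W.t() @ u, dim=0)
        u = F.normalize(W @ v, dim=0)
        sigma = torch.dot(u, W @ v)
        if sigma_prev is not None and abs(sigma - sigma_prev) / sigma < rel_tol:
            break
        sigma_prev = sigma
    return sigma, u   # u is persisted across training steps as a warm start
```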

Optimization.

For stochastic gradient descent, we used Adam (Kingma and Ba, 2015) with a fixed learning rate and with weight decay applied outside the adaptive learning rate computation (Loshchilov and Hutter, 2019; Zhang et al., 2019). We used Polyak averaging (Polyak and Juditsky, 1992) of the parameters for evaluation.

Preprocessing.

For density estimation experiments, we used random horizontal flipping for CIFAR-10 and CelebA. For hybrid modeling and classification experiments, we used random cropping after reflection padding with 4 pixels for SVHN and CIFAR-10; CIFAR-10 also included random horizontal flipping.