1 Introduction
Reversible deep networks are one of the first classes of deep networks that provide statistical guarantees and access to key quantities like densities and gradients for generative modelling. Most notably, they allow for the exact computation of the determinant of their Jacobian, crucial for maximum-likelihood based generative models (dinh2016density; kingma2018glow; chen2018neural). Further, they provide guarantees on the mutual information between input and intermediate outputs, making them a suitable tool to analyze invariants of learned representations (jacobsen2018irevnet; anonymous2019excessive). The ability to invert deep networks alleviates the need to store intermediate activations during backpropagation (gomez2017reversible; Chang2018ReversibleAF) and offers a promising approach to inverse problems (ardizzone2018analyzing). However, architectures used to achieve reversibility are typically restricted to have analytical inverses (dinh2014nice). Reversible networks rely on fixed dimension-splitting heuristics, but common splittings interleaved with non-volume-conserving elements are constraining, and their choice has a large impact on performance (kingma2018glow; dinh2016density). This makes building reversible networks a difficult task, while at the same time it is unclear how such exotic designs relate to widely used architectures like ResNets (he2016identity).

Recently, instead of specifying a discrete architecture, it was proposed to parametrize the derivative of the hidden states by a neural network (chen2018neural; ffjord). This approach amounts to learning the Lipschitz dynamics of an ordinary differential equation (ODE). While the ODE over a finite time interval is invertible (chen2018neural), the discretization relies on adaptive ODE solvers to guarantee accurate solutions. This induces computational costs which are hard to control and even grow during training (ffjord). In contrast, ResNets have fixed computational cost.

To close this gap, we leverage the viewpoint of ResNets as an Euler discretization of ODEs (haberRuthotto; ruthottoHaber; luODE; naisnet) and prove that invertible ResNets can be constructed by simply changing the normalization scheme of standard ResNets. This approach allows unconstrained architectures for each residual block, while only requiring a Lipschitz constant smaller than one for each block. We numerically compute their inverse, which has constant memory cost. Finally, we show that invertible ResNets perform on par with their non-invertible counterparts on classifying MNIST and CIFAR10 images.
2 Unifying ResNets, Invertible Networks and ODEs
The remarkable similarity of ResNets and Euler methods for ODEs,

    x_{t+1} = x_t + h f_{θ_t}(x_t),   (1)

with activations (states) x_t ∈ ℝ^d, layer index (time step) t, scaling (step size) h > 0 and residual block f_{θ_t} (the dynamics of the ODE), has attracted growing research at the intersection of deep learning and dynamical systems (luODE; haberRuthotto; ruthottoHaber; chen2018neural). However, little attention has been paid to the dynamics backwards in time,

    x_t = x_{t+1} - h f_{θ_t}(x_t),   (2)

which amounts to the implicit backward Euler discretization. In particular, solving the dynamics backwards in time would implement an inverse of the corresponding ResNet. As stated in the following theorem, a simple condition suffices to make the dynamics solvable and thus renders the ResNet invertible.
Theorem 1 (Sufficient condition for invertible ResNets).
Let F_θ : ℝ^d → ℝ^d with F_θ = F_θ^T ∘ … ∘ F_θ^1 denote a ResNet with blocks F_θ^t = I + g_{θ_t}. Then, the ResNet F_θ is invertible if

    Lip(g_{θ_t}) < 1  for all t = 1, …, T,   (3)

where Lip(g_{θ_t}) is the Lipschitz constant of g_{θ_t}.
Proof.
Since the ResNet F_θ is a composition of T functions, it is invertible if each block F_θ^t is invertible. Fix t, write g := g_{θ_t}, let y be arbitrary and consider the backward Euler discretization x = y - g(x). Rewriting this as a fixed-point iteration yields

    x^0 := y,   x^{k+1} := y - g(x^k),   (4)

where x* = lim_{k→∞} x^k is the fixed point if the iteration converges, and by construction F_θ^t(x*) = y. As g is an operator on a Banach space, the contraction condition Lip(g) < 1 guarantees convergence due to the Banach fixed-point theorem. ∎
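The proof is constructive: the fixed-point iteration (4) is itself an algorithm for inverting a block. A minimal NumPy sketch with a hypothetical contractive residual branch g(x) = tanh(Ax), scaled so that ‖A‖₂ = 0.5 (and hence Lip(g) ≤ 0.5 < 1):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
A *= 0.5 / np.linalg.norm(A, 2)  # scale so that ||A||_2 = 0.5

def g(x):
    # residual branch with Lip(g) <= ||A||_2 = 0.5 < 1, since |tanh'| <= 1
    return np.tanh(A @ x)

def block(x):
    # one residual block F(x) = x + g(x)
    return x + g(x)

def inverse(y, n_iter=100):
    # fixed-point iteration (4): x^0 = y, x^{k+1} = y - g(x^k)
    x = y.copy()
    for _ in range(n_iter):
        x = y - g(x)
    return x

x = rng.normal(size=4)
y = block(x)
x_rec = inverse(y)
print(np.max(np.abs(x - x_rec)))  # on the order of machine precision
```

Because the iteration contracts the error by a factor Lip(g) per step, the reconstruction converges linearly and is exact up to floating-point precision after a moderate number of steps.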
The condition above was also stated in (anonymous2019information) (Appendix D); however, their proof restricts the domain of the residual block to be bounded and applies only to linear operators, as the inverse was given by a convergent Neumann series. Note that the condition is sufficient but not necessary; other approaches (dinh2014nice; dinh2016density; jacobsen2018irevnet; Chang2018ReversibleAF; kingma2018glow) rely on triangular structures to create analytical inverses. After proving the above condition for invertible ResNets, we discuss connections to related approaches:
Invertibility and ODEs: While ResNets are only guaranteed to be invertible if each residual block is contractive (Lip(g_{θ_t}) < 1), ODEs given by

    dx(t)/dt = f(x(t), t)   (5)

with Lipschitz continuous f are reversible (chen2018neural). By choosing a scaling h (time steps t) such that h · Lip(f) < 1, bijectivity of the ODE is preserved under the Euler discretization.
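A one-dimensional example illustrates why the step-size condition matters. With the dynamics f(x) = -x (so Lip(f) = 1), the explicit Euler step is bijective when h · Lip(f) < 1 but collapses when h · Lip(f) = 1:

```python
# Explicit Euler step F(x) = x + h*f(x) for the dynamics f(x) = -x, Lip(f) = 1.
f = lambda x: -x
F = lambda x, h: x + h * f(x)

# h*Lip(f) = 0.5 < 1: F(x) = 0.5*x is bijective, distinct inputs stay distinct.
print(F(1.0, 0.5), F(2.0, 0.5))  # 0.5 1.0

# h*Lip(f) = 1: F(x) = 0 maps every input to zero, so the step is not injective.
print(F(1.0, 1.0), F(2.0, 1.0))  # 0.0 0.0
```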
[Figure 4: Maximal singular value of each layer's convolutional operator for various CIFAR10-trained ResNets. Left: vanilla and BatchNorm ResNet singular values; roughly two thirds of the layers have singular values well above one, making the blocks non-contractive, so the baseline ResNets are likely not invertible. Right: singular values for our 4 spectrally normalized ResNets; the regularization is effective and in every case each ResNet block remains a contraction.]

Stability of ODEs: There are two main approaches to study the stability of ODEs: 1) behavior for t → ∞ and 2) Lipschitz stability over finite time intervals [0, T]. Based on time-invariant dynamics f(x(t)), (naisnet) constructed asymptotically stable ResNets using anti-symmetric layers such that Re(λ_i(J_f(x))) ≤ 0 (with Re(λ_i) denoting the real part of the eigenvalues, ρ(·) the spectral radius and J_f(x) the Jacobian at point x). By projecting weights based on the Gershgorin circle theorem, they further controlled the spectral radius, yielding asymptotically stable ResNets with shared weights over layers.

On the other hand, (haberRuthotto; ruthottoHaber) considered time-dependent dynamics f(x(t), t) corresponding to standard ResNets. Similarly, by using anti-symmetric layers they induced stability via the stability theory of time-invariant ODEs. Yet, the applicability of such an approach to non-linear, time-dependent dynamics is unclear, as they only refer to kinematic eigenvalues (ascher1995numerical), which generalize the theory of linear time-invariant ODEs to some subclasses (e.g. diagonally dominant matrices, Example 3.7 in (ascher1995numerical)) of time-dependent ODEs. Hence, a full understanding of the asymptotic stability of ODEs associated with standard ResNets remains open.
Contrarily, initial value problems on [0, T] are well-posed for Lipschitz continuous dynamics (ascher2008numerical). Thus, the invertible ResNet with Lip(g_{θ_t}) < 1 can be understood as a stabilizer of an ODE for a fixed step size h, without a restriction to anti-symmetric layers as in (ruthottoHaber; haberRuthotto; naisnet).
Turning discrete into continuous dynamics: The view of deep networks as dynamics over time offers two fundamental learning approaches: 1) direct learning of continuous ODE dynamics parametrized by neural networks as in (chen2018neural; ffjord), and 2) indirect learning of ODE dynamics using discretization architectures like ResNets (haberRuthotto; ruthottoHaber; luODE; naisnet).
By fixing the ResNet F_θ, the dynamics f_{θ_t} are only fixed at the time points t = 1, …, T corresponding to each block. For example, a linear interpolation in time turns the discretization back into a continuous set of dynamics. However, in this indirect approach the dynamics depend on the discretization, while the direct approach (chen2018neural; ffjord) learns the ODE and adapts the discretization based on the fixed ODE. Thus, the effect of multi-step discretization schemes (luODE) in the indirect approach is unclear, as the nature of the discretization changes the underlying ODE dynamics.

3 Experiments
We show that standard and normalized ResNets perform on par in image classification, while normalized ResNets are guaranteed to be invertible. We train a standard pre-activation ResNet (he2016identity) with 55 bottleneck blocks on CIFAR10 and MNIST. All experiments use identical settings for all hyperparameters. We replace the subsampling of strided convolution layers with "invertible downsampling" operations (jacobsen2018irevnet) to allow invertibility; see appendix A for training and architectural details. To obtain the numerical inverse, we apply 100 fixed-point iterations for each block. This is only to guarantee full convergence; in practice far fewer iterations suffice. The trade-off between reconstruction error and number of iterations is analyzed in appendix B.

Table 1: Classification and reconstruction results for two baseline ResNets and four spectrally normalized ResNets (normalization coefficient decreasing from left to right).

ResNet:                             Vanilla   BN    | spectrally normalized (coefficient decreasing)
Classification % error, CIFAR10:     6.95    6.00   |   6.30     8.23    17.71    24.66
Classification % error, MNIST:       0.61    0.65   |   0.64     0.61     1.12     1.65
Reconstruction (normed L2), CIFAR10:  1.4     inf   |  1.3e-7   9.5e-8   1.9e-8   7.2e-9
Reconstruction (normed L2), MNIST:   2.5e-7   1.0   |  2.2e-7   1.9e-7   1.6e-7   7.3e-8
Guaranteed inverse:                   No      No    |   Yes      Yes      Yes      Yes
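For reference, the "invertible downsampling" used in place of strided convolutions can be realized as a space-to-depth reshuffle, i.e. a bijective permutation of pixels into channels (a NumPy sketch; jacobsen2018irevnet describe the original operation):

```python
import numpy as np

def invertible_downsample(x):
    # space-to-depth "squeeze": (N, C, H, W) -> (N, 4C, H/2, W/2).
    # Every 2x2 spatial patch becomes 4 channels, so no information is lost.
    n, c, h, w = x.shape
    x = x.reshape(n, c, h // 2, 2, w // 2, 2)
    x = x.transpose(0, 1, 3, 5, 2, 4)
    return x.reshape(n, 4 * c, h // 2, w // 2)

x = np.arange(16.0).reshape(1, 1, 4, 4)
y = invertible_downsample(x)
print(y.shape)  # (1, 4, 2, 2)
```

Since the operation is a pure permutation of entries, its inverse is the corresponding depth-to-space reshuffle, unlike a strided convolution, which discards information.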
Satisfying the Lipschitz condition: We implement residual blocks as g = W_3 ∘ φ ∘ W_2 ∘ φ ∘ W_1, where the W_i are convolutional layers and φ denotes ReLU. Thus, Lip(g) < 1 if ‖W_i‖_2 < 1 for each i, where ‖·‖_2 denotes the spectral norm. Unlike (miyato2018spectral), we directly estimate the spectral norm of each convolutional operator W_i by performing a power iteration using the convolution and its transpose, as proposed in (gouk). Note that a power iteration on the parameter matrix as in (miyato2018spectral) only gives a bound on the spectral norm of the convolutional operator, see (tsuzuku2018lipschitz). We use only one power iteration during SGD training, which yields an underestimate σ̃_i of ‖W_i‖_2. Then, we normalize the weights as W_i ← c W_i / σ̃_i if c / σ̃_i < 1, for a normalization coefficient c < 1. Since σ̃_i is an underestimate, ‖W_i‖_2 ≤ c is not guaranteed by this procedure. Thus, after training we compute ‖W_i‖_2 exactly using an SVD on the Fourier-transformed parameter matrix following (singularValConv) to show that ‖W_i‖_2 < 1 holds in all cases.

Classification and reconstruction results for two baseline ResNets (with and without BatchNorm) and four invertible ResNets with different spectral normalization coefficients are shown in Table 1. The results illustrate that our proposed invertible ResNets perform on par with the baselines for larger normalization coefficients in terms of classification performance, while being provably invertible. When applying very conservative normalization (small c), the classification error becomes higher on both datasets.
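The power-iteration normalization described above can be sketched on a dense matrix standing in for the linearized convolutional operator (a simplification: here the power iteration is run to convergence, whereas during training only one step per SGD update is used):

```python
import numpy as np

def spectral_norm_estimate(W, n_iter=100, seed=0):
    # power iteration on W^T W; returns an (under)estimate of ||W||_2
    rng = np.random.default_rng(seed)
    x = rng.normal(size=W.shape[1])
    for _ in range(n_iter):
        x = W.T @ (W @ x)
        x /= np.linalg.norm(x)
    return np.linalg.norm(W @ x)

def normalize(W, c=0.9):
    # W <- c * W / sigma_tilde, applied only if c / sigma_tilde < 1
    sigma = spectral_norm_estimate(W)
    return W * (c / sigma) if sigma > c else W

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 16))
W_hat = normalize(W)
print(np.linalg.norm(W_hat, 2))  # close to the target c = 0.9
```

Because the power-iteration value is an underestimate of the true spectral norm, the rescaled norm can slightly exceed c; this is why the exact post-training check via the SVD of the Fourier-transformed kernel is needed.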
The normalized L2 reconstruction errors show that our regularization is effective and the inverse is close to exact. Intriguingly, our analysis also reveals that the ResNet without BatchNorm is invertible after training on MNIST, whereas the BatchNorm ResNet is not. Further, both ResNets, with and without BatchNorm, are not invertible after training on CIFAR10, as can also be seen from the singular value plots in figure 4.
See figure 3 for qualitative reconstruction results with 100 fixed-point iteration steps. Note that the reconstruction error decays quickly and the errors are already imperceptible after 5-20 iterations, which corresponds to the cost of 5-20 forward passes and empirically to 0.15-0.75 seconds for reconstructing 100 CIFAR10 images. Computing the inverse is fast even for the largest normalization coefficient, but becomes faster with stronger regularization. The number of iterations needed for full convergence is approximately cut in half when reducing the spectral normalization coefficient by 0.2; see appendix B for a detailed plot.
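The reported halving of the iteration count is consistent with the linear convergence of the fixed-point iteration: the error after k steps scales like Lip(g)^k, so fewer steps are needed for smaller contraction constants. A toy one-dimensional check with a hypothetical linear block g(x) = -c·x (so Lip(g) = c):

```python
def iterations_to_invert(c, tol=1e-7):
    # invert F(x) = x + g(x) with g(x) = -c * x (Lip(g) = c) at y = F(1.0)
    g = lambda x: -c * x
    x_true = 1.0
    y = x_true + g(x_true)
    x, k = y, 0
    while abs(x - x_true) > tol:
        x = y - g(x)  # fixed-point step; error shrinks by a factor c per step
        k += 1
    return k

for c in (0.9, 0.7, 0.5, 0.3):
    print(c, iterations_to_invert(c))
```

The iteration counts decrease sharply with c, roughly matching the observed behavior that each 0.2 reduction of the coefficient about halves the number of iterations.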
[Figure 3: Qualitative reconstruction results. Rows: CIFAR samples, vanilla ResNet inverse, normalized ResNet inverse; MNIST samples, vanilla ResNet inverse, normalized ResNet inverse.]
In summary, we observe that invertibility without additional constraints is possible but unlikely, and it is hard to predict whether a given network will have this property. For our proposed normalized ResNets, however, we have a theoretical guarantee of the existence of an inverse without harming classification performance.
Acknowledgements
We gratefully acknowledge the financial support from the German Science Foundation for RTG 2224 "Parameter Identification - Analysis, Algorithms, Applications".
References
 (1) Anonymous. Excessive invariance causes adversarial vulnerability. In Submitted to International Conference on Learning Representations, 2019. under review.
 (2) Anonymous. Information regularized neural networks. In Submitted to International Conference on Learning Representations, 2019. under review.
 (3) Lynton Ardizzone, Jakob Kruse, Sebastian Wirkert, Daniel Rahner, Eric W Pellegrini, Ralf S Klessen, Lena Maier-Hein, Carsten Rother, and Ullrich Köthe. Analyzing inverse problems with invertible neural networks. arXiv preprint arXiv:1808.04730, 2018.
 (4) U.M. Ascher. Numerical methods for evolutionary differential equations. Computational science and engineering. Society for Industrial and Applied Mathematics, 2008.
 (5) U.M. Ascher, R.M.M. Mattheij, and R.D. Russell. Numerical Solution of Boundary Value Problems for Ordinary Differential Equations. Classics in Applied Mathematics. Society for Industrial and Applied Mathematics, 1995.

 (6) Bo Chang, Lili Meng, Eldad Haber, Lars Ruthotto, David Begert, and Elliot Holtham. Reversible architectures for arbitrarily deep residual neural networks. Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
 (7) Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. Advances in Neural Information Processing Systems, 2018.
 (8) Marco Ciccone, Marco Gallieri, Jonathan Masci, Christian Osendorfer, and Faustino J. Gomez. NAIS-Net: Stable deep networks from non-autonomous differential equations. arXiv preprint arXiv:1804.07209, 2018.
 (9) Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
 (10) Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. International Conference on Learning Representations, 2017.
 (11) Aidan N Gomez, Mengye Ren, Raquel Urtasun, and Roger B Grosse. The reversible residual network: Backpropagation without storing activations. Advances in Neural Information Processing Systems, 2017.
 (12) Henry Gouk, Eibe Frank, Bernhard Pfahringer, and Michael Cree. Regularisation of neural networks by enforcing Lipschitz continuity. arXiv preprint arXiv:1804.04368, 2018.
 (13) Will Grathwohl, Ricky T. Q. Chen, Jesse Bettencourt, Ilya Sutskever, and David Duvenaud. FFJORD: Free-form continuous dynamics for scalable reversible generative models. arXiv preprint arXiv:1810.01367, 2018.
 (14) Eldad Haber and Lars Ruthotto. Stable architectures for deep neural networks. Inverse Problems, 34(1):014004, 2018.

 (15) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pages 630–645. Springer, 2016.
 (16) Jörn-Henrik Jacobsen, Arnold W.M. Smeulders, and Edouard Oyallon. i-RevNet: Deep invertible networks. In International Conference on Learning Representations, 2018.
 (17) Diederik P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. Advances in Neural Information Processing Systems, 2018.
 (18) Yiping Lu, Aoxiao Zhong, Quanzheng Li, and Bin Dong. Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations. arXiv preprint arXiv:1710.10121, 2017.
 (19) Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018.
 (20) Lars Ruthotto and Eldad Haber. Deep neural networks motivated by partial differential equations. arXiv preprint arXiv:1804.04272, 2018.
 (21) Hanie Sedghi, Vineet Gupta, and Philip M. Long. The singular values of convolutional layers. arXiv preprint arXiv:1805.10408, 2018.
 (22) Yusuke Tsuzuku, Issei Sato, and Masashi Sugiyama. Lipschitz-margin training: Scalable certification of perturbation invariance for deep neural networks. Advances in Neural Information Processing Systems, 2018.
Appendix A Experimental Details
We use pre-activation ResNets with 55 convolutional bottleneck blocks with 3 convolution layers each and kernel sizes of 3x3, 1x1 and 3x3, respectively. In the BatchNorm version, we apply batch normalization before every ReLU activation function. The multiplier for the bottleneck is 4. The network has 2 downsampling stages, after 18 and 36 blocks, and also concatenates zeros to its input to increase the number of channels of the input by a factor of 4, a strategy initially described as injective i-RevNet [16] and also used in Glow on MNIST [17].

We train for 200 epochs with momentum SGD and a learning rate of 0.1, decayed by a factor of 0.2 after 60, 120 and 160 epochs. Weight decay is set to 5e-4 and dropout with probability 0.1 is applied in the residual block, which is also taken into account when choosing the normalization coefficients. We apply shifts for MNIST and shifts and flips for CIFAR10 as random data augmentation during training. The inputs for MNIST are normalized to [-0.5, 0.5] and for CIFAR10 as well, after preprocessing each image by subtracting the mean and dividing by the standard deviation of the training set.
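The zero-concatenation ("injective padding") described above can be sketched as follows; it is injective, so the input can be recovered exactly by slicing the original channels back out:

```python
import numpy as np

def injective_pad(x, factor=4):
    # concatenate zero channels: (N, C, H, W) -> (N, factor*C, H, W)
    n, c, h, w = x.shape
    pad = np.zeros((n, (factor - 1) * c, h, w), dtype=x.dtype)
    return np.concatenate([x, pad], axis=1)

x = np.ones((2, 1, 28, 28))
y = injective_pad(x)
print(y.shape)  # (2, 4, 28, 28)
print(np.array_equal(y[:, :1], x))  # True: input recoverable by slicing
```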