Before the introduction of residual (skip-connection) networks, training nets beyond a certain depth with gradient descent was limited by the vanishing gradient problem [hochreiter1991a, bengio1994a]. These very deep networks (VDNNs) have skip connections that provide shortcuts for the gradient to flow back through hundreds of layers. Unfortunately, training them still requires extensive hyper-parameter tuning, and, even if there were a principled way to determine the optimal number of layers or processing depth for a given task, that depth would still be fixed for all input patterns.
Recently, several researchers have started to view VDNNs from a dynamical systems perspective. Haber and Ruthotto [haber2017] analyzed the stability of ResNets by framing them as an Euler integration of an ODE, and Lu et al. [lu2018beyond] showed how using other numerical integration methods induces various existing network architectures such as PolyNet [zhang2017], FractalNet [larsson2016] and RevNet [gomez2017]. A fundamental problem with the dynamical systems underlying these architectures is that they are autonomous: the input pattern sets the initial condition, directly affecting only the first processing stage. This means that if the system converges, there is either exactly one fixed point or exactly one limit cycle [strogatz2014]. Neither case is desirable from a learning perspective, because a dynamical system should have input-dependent convergence properties so that its representations are useful for learning. One possible approach to achieve this is a non-autonomous system where, at each iteration, the system is forced by an external input.
This paper introduces a novel network architecture, called the "Non-Autonomous Input-Output Stable Network" (NAIS-Net), that is derived from a dynamical system that is both time-invariant (weights are shared) and non-autonomous.¹ NAIS-Net is a general residual architecture where a block (see Figure 1) is the unrolling of a time-invariant system, and non-autonomy is implemented by having the external input applied to each of the unrolled processing stages in the block through skip connections. ResNets are similar to NAIS-Net except that ResNets are time-varying and only receive the external input at the first layer of the block.

¹The DenseNet architecture [lang1988, huang2017densely] is non-autonomous, but time-varying.
With this design, we can derive sufficient conditions under which the network exhibits input-dependent equilibria that are globally asymptotically stable for every initial condition. More specifically, in Section 3, we prove that with tanh activations NAIS-Net has exactly one input-dependent equilibrium, while with ReLU activations it has multiple stable equilibria per input pattern. Moreover, the NAIS-Net architecture allows not only the internal stability of the system to be analyzed but, more importantly, the input-output stability: the difference between the representations generated by two different inputs belonging to a bounded set will also be bounded at each stage of the unrolling.²

²In the supplementary material, we also show that these results hold for both shared and unshared weights.
In Section 4, we provide an efficient implementation that enforces the stability conditions for both fully-connected and convolutional layers in the stochastic optimization setting. These implementations are compared experimentally with ResNets on the CIFAR-10 and CIFAR-100 datasets in Section 5, showing that NAIS-Nets achieve comparable classification accuracy with a much smaller generalization gap. NAIS-Nets can also be 10 to 20 times deeper than the original ResNet without increasing the total number of network parameters, and, by stacking several stable NAIS-Net blocks, models that implement pattern-dependent processing depth can be trained without requiring any normalization at each step (except when there is a change in layer dimensionality, to speed up training).
The next section presents a more formal treatment of the dynamical systems perspective of neural networks, and a brief overview of work to date in this area.
2 Background and Related Work
Representation learning is about finding a mapping from input patterns to encodings that disentangle the underlying variational factors of the input set. With such an encoding, a large portion of typical supervised learning tasks (e.g. classification and regression) should be solvable using just a simple model like logistic regression. A key characteristic of such a mapping is its invariance to input transformations that do not alter these factors for a given input.³ In particular, random perturbations of the input should in general not be drastically amplified in the encoding. In the field of control theory, this property is central to stability analysis, which investigates the properties of dynamical systems under which they converge to a single steady state without exhibiting chaos [khalil2001, strogatz2014, sontag_book].

³Such invariance conditions can be very powerful inductive biases on their own: for example, requiring invariance to time transformations in the input leads to popular RNN architectures [tallec2018a].
Recurrent neural networks (RNNs) suffer from analogous vanishing and exploding gradient problems, which led to the development of Long Short-Term Memory [hochreiter1997b] to alleviate the former. More recently, general conditions for RNN stability have been presented [zilly17a, kanai2017a, laurent_recurrent_2016, vorontsov2017] based on general insights related to matrix norm analysis. Input-output stability [khalil2001] has also been analyzed for simple RNNs [steil1999input, knight_stability_2008, haschke2005input, singh2016stability].
Recently, the stability of deep feed-forward networks was more closely investigated, mostly due to adversarial attacks [szegedy2013intriguing] on trained networks. It turns out that sensitivity to (adversarial) input perturbations in the inference process can be avoided by enforcing certain conditions on the spectral norms of the weight matrices [cisse_parseval_2017, yoshida2017a]. Additionally, special properties of the spectral norm of weight matrices mitigate instabilities during the training of Generative Adversarial Networks [miyato2018a].
That is, in order to compute a vector representation $x(t+1)$ at layer $t+1$ (or time $t+1$ for recurrent networks), the previous representation $x(t)$ is additively updated
with some non-linear transformation $f$ of $x(t)$ which depends on parameters $\theta(t)$. The reason usually given for why Eq. (1) allows VDNNs to be trained is that the explicit identity connections avoid the vanishing gradient problem.
The semantics of the forward path are, however, still considered unclear. A recent interpretation is that these feed-forward architectures implement iterative inference [greff2016, jastrzebski2017]. This view is reinforced by observing that Eq. (1) is a forward Euler discretization [ascher1998a]
of the ordinary differential equation (ODE) $\dot{x}(t) = f(x(t), \theta)$, if $\theta(t) = \theta$ for all $t$ in Eq. (1). This connection between dynamical systems and feed-forward architectures was also observed recently by several other authors [weinan2017a]. This point of view leads to a large family of new network architectures that are induced by various numerical integration methods [lu2018beyond]. Moreover, stability problems in both the forward as well as the backward path of VDNNs have been addressed by relying on well-known analytical approaches for continuous-time ODEs [haber2017, chang2017multi]. In the present paper, we instead address the problem directly in discrete time, meaning that our stability result is preserved by the network implementation. With the exception of [liao_bridging_2016], none of this prior research considers time-invariant, non-autonomous systems.
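The Euler connection can be made concrete with a small numerical sketch. Everything here is illustrative (the vector field `f`, the step size, and the horizon are arbitrary choices, not from the paper): the residual-style additive update is exactly a forward Euler step.

```python
import numpy as np

def f(x):
    # Example vector field (hypothetical choice): the contracting linear ODE x' = -x.
    return -x

def euler_unroll(x0, h, steps):
    # Forward Euler: x(t+1) = x(t) + h * f(x(t)) -- the same additive
    # update pattern as a residual block with step size h.
    x = x0
    for _ in range(steps):
        x = x + h * f(x)
    return x

x0 = np.array([1.0, -2.0])
# With h = 0.1 and 100 steps we approximate the ODE solution at time T = 10,
# which for x' = -x is x0 * exp(-10), i.e. close to zero.
approx = euler_unroll(x0, 0.1, 100)
exact = x0 * np.exp(-10.0)
```

Shrinking the step size `h` (while increasing the number of steps) tightens the match with the continuous-time trajectory, which is the sense in which deeper residual stacks refine the same underlying flow.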
3 Non-Autonomous Input-Output Stable Nets (NAIS-Nets)
This section provides stability conditions for both fully-connected and convolutional NAIS-Net layers. We formally prove that NAIS-Net provides a non-trivial input-dependent output for each iteration as well as in the asymptotic case ($t \to \infty$). The following dynamical system:
is used throughout the paper, where $x(t)$ is the latent state, $u$ is the network input, and $\theta$ collects the network parameters. For ease of notation, in the remainder of the paper the explicit dependence on the parameters, $\theta$, will be omitted.
Fully Connected NAIS-Net Layer.
Our fully connected layer is defined by
where $A$ and $B$ are the state and input transfer matrices, and $b$ is a bias. The activation, $\sigma(\cdot)$, is a vector of (element-wise) instances of an activation function. In this paper, we only consider the hyperbolic tangent ($\tanh$) and Rectified Linear Unit (ReLU) activation functions. Note that by setting $B = 0$ after the first layer and taking a unit step, the original ResNet formulation is obtained.
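A minimal sketch of the fully connected NAIS-Net unroll follows. The dimensions, step size, and the rescaling used to make $A$ stable are illustrative assumptions (the actual projection is given in Section 4); the point is that the input $u$ re-enters at every iteration, so the unrolled block converges to an input-dependent equilibrium.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, h = 8, 4, 1.0  # state size, input size, step size (illustrative)

# A = -R^T R - eps*I: symmetric negative definite, rescaled so that its
# eigenvalues stay in a range satisfying the stability condition.
R = rng.standard_normal((n, n))
RtR = R.T @ R
A = -(RtR * (0.8 / np.linalg.norm(RtR, 2))) - 0.1 * np.eye(n)
B = 0.5 * rng.standard_normal((n, m))
b = np.zeros(n)

def nais_block(u, steps):
    # Non-autonomous unroll of Eq. (3): the external input u is applied
    # at every iteration, unlike a ResNet block where it only sets the
    # initial condition.
    x = np.zeros(n)
    for _ in range(steps):
        x = x + h * np.tanh(A @ x + B @ u + b)
    return x

u1, u2 = rng.standard_normal(m), rng.standard_normal(m)
x1, x2 = nais_block(u1, 500), nais_block(u2, 500)
# Different inputs settle onto different (input-dependent) equilibria.
```

Unrolling the same block longer leaves the state essentially unchanged once the equilibrium is reached, which is the behavior Theorem 1 formalizes.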
Convolutional NAIS-Net Layer.
The architecture can be easily extended to Convolutional Networks by replacing the matrix multiplications in Eq. (3) with a convolution operator:
Consider the case of $C$ channels. The convolutional layer in Eq. (4) can be rewritten, for each latent map $c \in \{1, \dots, C\}$, in the equivalent form:
where: $X^{(c)}(t)$ is the layer state matrix for channel $c$, $U^{(i)}$ is the layer input data matrix for channel $i$ (where an appropriate zero padding has been applied), $C^{(c,j)}$ is the state convolution filter from state channel $j$ to state channel $c$, $D^{(c,i)}$ is its equivalent for the input, and $b_c$ is a bias. The activation, $\sigma$, is still applied element-wise. The convolution has a fixed stride, a fixed filter size, and a zero padding chosen such that the spatial dimension of the state is preserved across layers.⁴

⁴If the input and state dimensions do not match, then the input can be extended with an appropriate number of constant zeros (not connected).
Convolutional layers can be rewritten in the same form as fully connected layers (see proof of Lemma 1 in the supplementary material). Therefore, the stability results in the next section will be formulated for the fully connected case, but apply to both.
Here, the stability conditions for NAIS-Nets, which were instrumental to their design, are laid out. We are interested in using a cascade of unrolled NAIS-Net blocks (see Figure 1), where each block is described by either Eq. (3) or Eq. (4). Since we are dealing with a cascade of dynamical systems, stability of the entire network can be enforced by making each block stable [khalil2001].
The state-transfer Jacobian for layer $t$ is defined as:

$J(x(t), u) = \partial x(t+1) / \partial x(t) = I + \sigma'(\Delta x(t)) A$, (6)

where the argument of the activation function, $A x(t) + B u + b$, is denoted as $\Delta x(t)$. Take an arbitrarily small scalar $\bar{\sigma} > 0$ and define the set of pairs $(x, u)$ for which the activations are not saturated as:
Theorem 1 below proves that the non-autonomous residual network produces a bounded output given a bounded, possibly noisy, input, and that the network state converges to a constant value as the number of layers tends to infinity, if the following stability condition holds:
For any pair $(x, u)$ in this set, the Jacobian satisfies:

$\rho(J(x, u)) < 1$,

where $\rho(\cdot)$ is the spectral radius.
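Condition 1 can be checked numerically for a given layer. The sketch below assumes a $\tanh$ activation and the Jacobian form $J = I + h\,\mathrm{diag}(\sigma'(\Delta x))\,A$ from Eq. (6); the matrices and the pre-activation point are illustrative.

```python
import numpy as np

def state_jacobian(A, preact, h=1.0):
    # Jacobian of x + h*tanh(A x + B u + b) with respect to x at a given
    # pre-activation: J = I + h * diag(tanh'(preact)) @ A,
    # where tanh'(z) = 1 - tanh(z)^2.
    D = np.diag(1.0 - np.tanh(preact) ** 2)
    return np.eye(A.shape[0]) + h * D @ A

def satisfies_condition1(A, preact, h=1.0):
    # Condition 1: spectral radius of the state-transfer Jacobian < 1.
    rho = max(abs(np.linalg.eigvals(state_jacobian(A, preact, h))))
    return rho < 1.0

n = 6
rng = np.random.default_rng(1)
R = rng.standard_normal((n, n))
RtR = R.T @ R
# Stable choice: symmetric negative definite A with eigenvalues in (-1, 0).
A_stable = -(RtR * (0.8 / np.linalg.norm(RtR, 2))) - 0.1 * np.eye(n)
# Unstable choice: positive diagonal pushes eigenvalues of J above 1.
A_unstable = 1.5 * np.eye(n)
preact = rng.standard_normal(n)
```

For the stable parameterization the check passes at any non-saturated pre-activation, while the positive-definite counterexample violates it.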
The steady states, $\bar{x}$, are determined by a continuous function of $u$. This means that a small change in $u$ cannot result in a very different $\bar{x}$. For $\tanh$ activation, $\bar{x}$ depends linearly on $u$; therefore the block needs to be unrolled for a finite number of iterations, $T$, for the mapping to be non-linear. That is not the case for ReLU, which can be unrolled indefinitely and still provide a piecewise affine mapping.
In Theorem 1, the input-output (IO) gain function, $\gamma(\cdot)$, describes the effect of norm-bounded input perturbations on the network trajectory. This gain provides insight into the level of robust invariance of the classification regions to changes in the input data with respect to the training set. In particular, as the gain is decreased, the perturbed solution will be closer to the solution obtained from the training set. This can lead to increased robustness and generalization with respect to a network that does not satisfy Condition 1. Note that the IO gain, $\gamma(\cdot)$, is linear, and hence the block IO map is Lipschitz even for an infinite unroll length. The IO gain depends directly on the norm of the state-transfer Jacobian in Eq. (8), as indicated in Theorem 1.⁵

⁵See the supplementary material for additional details and all proofs, where the untied case is also covered.
(Asymptotic stability for shared weights)

If Condition 1 holds, then NAIS-Net with ReLU or $\tanh$ activations is asymptotically stable with respect to input-dependent equilibrium points. More formally:
The trajectory satisfies $\|x(t) - \bar{x}\| \le \rho^t \|x(0) - \bar{x}\|$ for some $\rho < 1$, where $\|\cdot\|$ is a suitable matrix norm.
With $\tanh$ activation, the steady state $\bar{x}$ is independent of the initial state, and it is a linear function of the input. The network is globally asymptotically stable.
With ReLU activation, $\bar{x}$ is given by a continuous piecewise affine function of $x(0)$ and $u$. The network is locally asymptotically stable with respect to each equilibrium $\bar{x}$.
If the activation is $\tanh$, then the network is globally input-output (robustly) stable for any additive input perturbation. The trajectory is described by:

where $\gamma(\cdot)$ is the input-output gain. If the perturbation is norm-bounded, then the following set is robustly positively invariant:
If the activation is ReLU, then the network is globally input-output practically stable. In other words, we have:

The constant appearing above is the norm-ball radius for the input perturbation.
4 Implementation

Enforcing Condition 1 directly during training is non-trivial. One possible approach is to relax the constraint to a singular value constraint [kanai2017a], which is applicable to both fully connected as well as convolutional layer types [yoshida2017a]. However, this approach is only applicable if the identity matrix in the Jacobian (Eq. (6)) is scaled by a factor smaller than one [kanai2017a]. In this work we instead fulfill the spectral radius constraint directly.
The basic intuition for the presented algorithms is the fact that, for a simple Jacobian of the form $I + A$, Condition 1 is fulfilled if $A$ has eigenvalues with real part in $(-2, 0)$ and imaginary part in the unit circle. In the supplementary material we prove that the following algorithms fulfill Condition 1 following this intuition. Note that, in the following, the presented procedures are to be performed for each block of the network.
In the fully connected case, we restrict the matrix $A$ to be symmetric and negative definite by choosing the following parameterization:

where $R$ is trained, and $\epsilon > 0$ is a hyper-parameter. Then, we propose a bound on the Frobenius norm, $\|R^\top R\|_F$. Algorithm 1, performed during training, implements the following:⁶

⁶A more relaxed condition is sufficient for Theorem 1 to hold locally (see the supplementary material).
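The reprojection idea can be sketched as follows. This is a hedged stand-in, not the paper's exact Algorithm 1: the precise bound and constants are derived in the supplementary material, and `1 - eps` below is an illustrative choice.

```python
import numpy as np

def reproject_fc(R, eps=0.05):
    """Illustrative reprojection for A = -R^T R - eps*I after a gradient step.

    If the Frobenius norm of R^T R exceeds the bound, rescale R so the
    constraint holds again. The exact bound used in Algorithm 1 of the
    paper is derived in its supplementary material; 1 - eps is a stand-in.
    """
    bound = 1.0 - eps
    fro = np.linalg.norm(R.T @ R, 'fro')
    if fro > bound:
        # ||(c R)^T (c R)||_F = c^2 * ||R^T R||_F, so this scaling
        # brings the norm exactly back onto the bound.
        R = R * np.sqrt(bound / fro)
    return R

rng = np.random.default_rng(2)
R = reproject_fc(3.0 * rng.standard_normal((5, 5)))
A = -R.T @ R - 0.05 * np.eye(5)  # negative definite by construction
```

Because the projection is a single rescaling, it adds negligible cost per training step, which is what makes the procedure practical in the stochastic optimization setting.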
The symmetric parametrization assumed in the fully connected case cannot be used for a convolutional layer. We will instead make use of the following result:
The convolutional layer of Eq. (4), with suitable zero padding and filter size, has a Jacobian of the form of Eq. (6). The diagonal elements of this matrix are the central elements of the convolutional filters mapping each state channel into itself; the other elements in each row are the remaining filter values feeding that state channel.
To fulfill the stability condition, the first step is to pin the central element of each self-mapping filter to a trainable parameter constrained to a stable range by a hyper-parameter. Then we suitably bound the $\infty$-norm of the Jacobian by constraining the remaining filter elements. The steps are summarized in Algorithm 2, which is inspired by the Gershgorin circle theorem [Horn:2012:MA:2422911]. The following result is obtained:
Note that the algorithm complexity scales with the number of filters. A simple design choice for the layer is to fix the central filter element, which removes one hyper-parameter.⁷

⁷Fixing the central element removes the need for a hyper-parameter but does not necessarily reduce conservativeness, as it further constrains the remaining elements of the filter bank. This is further discussed in the supplementary material.
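A hypothetical sketch of the Gershgorin-style projection follows. The filter-bank shape, the pinned value `eta`, and the margin `delta` are illustrative assumptions, not the paper's exact Algorithm 2; the point is the mechanism of pinning the diagonal and rescaling the row sum.

```python
import numpy as np

def gershgorin_rescale(filters, eta=0.5, delta=0.05):
    """Illustrative Gershgorin-style projection for a state filter bank.

    filters: array of shape (C, C, k, k) mapping C state channels to C
    state channels. The central tap of each channel's self-map filter is
    pinned to -eta (the diagonal of the equivalent matrix A), and the
    remaining taps feeding each channel are rescaled so the Gershgorin
    row sum keeps every eigenvalue of I + A strictly inside the unit
    circle (for 0 < eta <= 1).
    """
    C, _, k, _ = filters.shape
    c0 = k // 2
    out = filters.copy()
    for c in range(C):
        out[c, c, c0, c0] = -eta  # pin the diagonal element
        # Off-diagonal row sum: all taps feeding channel c except the pinned one.
        row = np.abs(out[c]).sum() - eta
        limit = eta - delta  # Gershgorin: need row sum < |diagonal|
        if row > limit:
            out[c] *= limit / row
            out[c, c, c0, c0] = -eta  # restore the pinned center after scaling
    return out

rng = np.random.default_rng(3)
F = gershgorin_rescale(rng.standard_normal((4, 4, 3, 3)))
```

As in the fully connected case, the projection is cheap: one absolute row sum and one rescale per output channel, so the cost scales with the number of filters.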
5 Experiments

Experiments were conducted comparing NAIS-Net with ResNet, and variants thereof, using both fully-connected (MNIST, Section 5.1) and convolutional (CIFAR-10/100, Section 5.2) architectures to quantitatively assess the performance advantage of having a VDNN where stability is enforced.
Figure 3: Single neuron trajectory and convergence. (Left) Average loss of NAIS-Net and different residual architectures over the unroll length (cross-entropy loss vs. processing depth). Note that both ResNet-SH-Stable and NAIS-Net satisfy the stability conditions for convergence, but only NAIS-Net is able to learn, showing the importance of non-autonomy. (Right) Activation of a single NAIS-Net neuron for input samples from each class on MNIST. Trajectories differ not only with respect to the actual steady state but also with respect to the convergence time.
5.1 Preliminary Analysis on MNIST
For the MNIST dataset [lecun1998mnist], a single-block NAIS-Net was compared with 9 different 30-layer ResNet variants, each with a different combination of the following features: SH (shared weights, i.e. time-invariant), NA (non-autonomous, i.e. input skip connections), BN (with Batch Normalization), and Stable (stability enforced by Algorithm 1). For example, ResNet-SH-NA-BN refers to a 30-layer ResNet that is time-invariant because weights are shared across all layers (SH), non-autonomous because it has skip connections from the input to all layers (NA), and uses batch normalization (BN). Since NAIS-Net is time-invariant, non-autonomous, and input/output stable (i.e. SH-NA-Stable), the chosen ResNet variants represent ablations of these three features. For instance, ResNet-SH-NA is a NAIS-Net without I/O stability being enforced by the reprojection step described in Algorithm 1, and ResNet-NA is a non-stable NAIS-Net that is time-variant, i.e. has non-shared weights, etc. The NAIS-Net was unrolled for the same number of iterations for all input patterns. All networks were trained using stochastic gradient descent with momentum and a fixed learning rate, for 150 epochs.
NAIS-Net achieved the highest test accuracy, with ResNet-SH-BN second best; without BatchNorm (ResNet-SH), accuracy was considerably lower (all results averaged over 10 runs).
After training, the behavior of each network variant was analyzed by passing the activations at each layer through the softmax classifier and measuring the cross-entropy loss. The loss at each iteration describes the trajectory of each sample in the latent space: the closer the sample is to the correct steady state, the closer the loss is to zero (see Figure 3). All variants initially refine their predictions at each iteration, since the loss tends to decrease at each layer, but at different rates. However, NAIS-Net is the only one that does so monotonically, without the loss increasing as depth grows. Figure 3 shows how neuron activations in NAIS-Net converge to different steady-state activations for different input patterns, instead of all converging to zero as is the case with ResNet-SH-Stable, confirming the results of [haber2017]. Importantly, NAIS-Net is able to learn even with the stability constraint, showing that non-autonomy is key to obtaining representations that are stable and good for learning the task.
NAIS-Net also allows networks of unbounded processing depth to be trained without any feature normalization steps. Note that BN actually speeds up loss convergence, especially for ResNet-SH-NA-BN (i.e. an unstable NAIS-Net). Adding BN makes the behavior very similar to NAIS-Net, because BN also implicitly normalizes the Jacobian; however, it does not ensure that the Jacobian's eigenvalues are in the stability region.
5.2 Image Classification on CIFAR-10/100
Experiments on image classification were performed on standard image recognition benchmarks CIFAR-10 and CIFAR-100 krizhevsky2009cifar . These benchmarks are simple enough to allow for multiple runs to test for statistical significance, yet sufficiently complex to require convolutional layers.
The following standard architecture was used to compare NAIS-Net with ResNet⁸: three sets of stacked residual blocks with an increasing number of filters per set. NAIS-Net was tested in two versions: NAIS-Net1, where each block is unrolled just once, for a total processing depth of 108, and NAIS-Net10, where each block is unrolled 10 times, for a total processing depth of 540. The initial learning rate was decreased by a fixed factor at scheduled epochs, and the experiments were run for 450 epochs. Note that each block in the ResNet of [he2015a] has two convolutions (plus BatchNorm and ReLU) whereas NAIS-Net unrolls with a single convolution. Therefore, to make the comparison of the two architectures as fair as possible by using the same number of parameters, a single convolution was also used for ResNet.

⁸https://github.com/tensorflow/models/tree/master/official/resnet
Table 4 compares the performance on the two datasets, averaged over 5 runs. For CIFAR-10, NAIS-Net and ResNet performed similarly, and unrolling NAIS-Net for more than one iteration had little effect. This was not the case for CIFAR-100, where NAIS-Net10 improves over NAIS-Net1. Moreover, although the mean accuracy is slightly lower than ResNet's, the variance is considerably lower. Figure 4 shows that NAIS-Net is less prone to overfitting than a classic ResNet, reducing the generalization gap by 33%. This is a consequence of the stability constraint, which imparts a degree of robust invariance to input perturbations (see Section 3). It is also important to note that NAIS-Net can be unrolled to hundreds of layers and still train without any problems.
5.3 Pattern-Dependent Processing Depth
For simplicity, the number of unrolling steps per block in the previous experiments was fixed. A more general and potentially more powerful setup is to have the processing depth adapt automatically. Since NAIS-Net blocks are guaranteed to converge to a pattern-dependent steady state after an indeterminate number of iterations, processing depth can be controlled dynamically by terminating the unrolling process whenever the distance between a layer representation, $x(t)$, and that of the immediately previous layer, $x(t-1)$, drops below a specified threshold. With this mechanism, NAIS-Net can determine the processing depth for each input pattern. Intuitively, one could speculate that similar input patterns would require similar processing depth in order to be mapped to the same region in latent space. To explore this hypothesis, NAIS-Net was trained on CIFAR-10 with a fixed unrolling threshold, and at test time the network was unrolled using the same threshold.
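The termination rule can be sketched as follows. The block dynamics, tolerance, and iteration cap below are illustrative stand-ins, not the trained CIFAR-10 model; the mechanism is simply "unroll until the state stops moving".

```python
import numpy as np

def unroll_until_converged(step, x0, tol=1e-3, max_steps=500):
    """Unroll a block until the state change drops below tol.

    step: function mapping the current state to the next state (one
    unrolled layer). Returns the converged state and the
    pattern-dependent depth actually used. The tolerance value is
    illustrative; the paper's threshold differs.
    """
    x = x0
    for t in range(1, max_steps + 1):
        x_next = step(x)
        if np.linalg.norm(x_next - x) < tol:
            return x_next, t
        x = x_next
    return x, max_steps

# Toy contractive block (illustrative stand-in for Eq. (3), with B = I).
n = 6
rng = np.random.default_rng(4)
R = rng.standard_normal((n, n))
RtR = R.T @ R
A = -(RtR * (0.8 / np.linalg.norm(RtR, 2))) - 0.1 * np.eye(n)
u = rng.standard_normal(n)

x_star, depth = unroll_until_converged(lambda x: x + np.tanh(A @ x + u),
                                       np.zeros(n))
```

Because stability guarantees convergence, the loop is certain to terminate for a small enough tolerance, and the returned `depth` varies with the input `u`, which is exactly the pattern-dependent depth exploited here.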
The resulting figure shows selected images from four different classes, organized according to the final network depth used to classify them after training. The qualitative differences seen from low to high depth suggest that NAIS-Net is using processing depth as an additional degree of freedom so that, for a given training run, the network learns to use models of different complexity (depth) for different types of inputs within each class. To be clear, the hypothesis is not that depth correlates with some notion of input complexity where the same images are always classified at the same depth across runs.
6 Conclusion

We presented NAIS-Net, a non-autonomous residual architecture that can be unrolled until the latent space representation converges to a stable, input-dependent state. This is achieved thanks to its stability and non-autonomy properties. We derived stability conditions for the model and proposed two efficient reprojection algorithms, for fully-connected and convolutional layers respectively, that keep the network parameters within the set of feasible solutions during training.
NAIS-Net achieves asymptotic stability and, as a consequence, input-output stability. Stability makes the model more robust, and we observe a substantial reduction of the generalization gap without negatively impacting performance. The question of scalability to benchmarks such as ImageNet [deng2009a] will be a main topic of future work.
We believe that cross-breeding machine learning and control theory will open up many new interesting avenues for research, and that more robust and stable variants of commonly used neural networks, both feed-forward and recurrent, will be possible.
We want to thank Wojciech Jaśkowski, Rupesh Srivastava and the anonymous reviewers for their comments on the idea and initial drafts of the paper.
- (1) U. M. Ascher and L. R. Petzold. Computer methods for ordinary differential equations and differential-algebraic equations, volume 61. Siam, 1998.
- (2) P. Baldi and K. Hornik. Universal approximation and learning of trajectories using oscillators. In Advances in Neural Information Processing Systems, pages 451–457, 1996.
- (3) E. Battenberg, J. Chen, R. Child, A. Coates, Y. Gaur, Y. Li, H. Liu, S. Satheesh, D. Seetapun, A. Sriram, and Z. Zhu. Exploring neural transducers for end-to-end speech recognition. CoRR, abs/1707.07413, 2017.
- (4) Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, 5(2):157–166, 1994.
- (5) Bo Chang, Lili Meng, Eldad Haber, Frederick Tung, and David Begert. Multi-level residual networks from dynamical systems view. arXiv preprint arXiv:1710.10348, 2017.
- (6) K. Cho, B. Van Merriënboer, C. Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
- (7) M. Cisse, P. Bojanowski, E. Grave, Y. Dauphin, and N. Usunier. Parseval networks: Improving robustness to adversarial examples. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 854–863, Sydney, Australia, 06–11 Aug 2017. PMLR.
- (8) J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
- (9) K. Doya. Bifurcations in the learning of recurrent neural networks. In Circuits and Systems, 1992. ISCAS’92. Proceedings., 1992 IEEE International Symposium on, volume 6, pages 2777–2780. IEEE, 1992.
- (10) David Duvenaud, Oren Rippel, Ryan Adams, and Zoubin Ghahramani. Avoiding pathologies in very deep networks. In Artificial Intelligence and Statistics, pages 202–210, 2014.
- (11) M. Figurnov, A. Sobolev, and D. Vetrov. Probabilistic adaptive computation time. CoRR, abs/1712.00386, 2017.
- (12) Marco Gallieri. LASSO-MPC – Predictive Control with $\ell_1$-Regularised Least Squares. Springer-Verlag, 2016.
- (13) A. Gomez, M. Ren, R. Urtasun, and R. B. Grosse. The reversible residual network: Backpropagation without storing activations. In NIPS, 2017.
- (14) A. Graves. Adaptive computation time for recurrent neural networks. CoRR, abs/1603.08983, 2016.
- (15) K. Greff, R. K. Srivastava, and J. Schmidhuber. Highway and residual networks learn unrolled iterative estimation. arXiv preprint arXiv:1612.07771, 2016.
- (16) K. Gregor and Y. LeCun. Learning fast approximations of sparse coding. In International Conference on Machine Learning (ICML), 2010.
- (17) E. Haber and L. Ruthotto. Stable architectures for deep neural networks. arXiv preprint arXiv:1705.03341, 2017.
- (18) R. Haschke and J. J. Steil. Input space bifurcation manifolds of recurrent neural networks. Neurocomputing, 64:25–38, 2005.
- (19) K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. arXiv preprint arXiv:1502.01852, 2015.
- (20) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, Dec 2016.
- (21) S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Technische Universität München, 1991. Advisor: J. Schmidhuber.
- (22) S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
- (23) R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, New York, NY, USA, 2nd edition, 2012.
- (24) G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
- (25) S. Jastrzebski, D. Arpit, N. Ballas, V. Verma, T. Che, and Y. Bengio. Residual connections encourage iterative inference. arXiv preprint arXiv:1710.04773, 2017.
- (26) S. Kanai, Y. Fujiwara, and S. Iwamura. Preventing gradient explosions in gated recurrent units. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 435–444. Curran Associates, Inc., 2017.
- (27) H. K. Khalil. Nonlinear Systems. Pearson Education, 3rd edition, 2014.
- (28) J. N. Knight. Stability analysis of recurrent neural networks with applications. Colorado State University, 2008.
- (29) A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
- (30) A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), 2012.
- (31) J.K. Lang and M. J. Witbrock. Learning to tell two spirals apart. In D. Touretzky, G. Hinton, and T. Sejnowski, editors, Proceedings of the Connectionist Models Summer School, pages 52–59, Mountain View, CA, 1988.
- (32) G. Larsson, M. Maire, and G. Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016.
- (33) T. Laurent and J. von Brecht. A recurrent neural network without chaos. arXiv preprint arXiv:1612.06212, 2016.
- (34) Y. LeCun, C. Cortes, and C. J. C. Burges. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
- (35) Qianli Liao and Tomaso Poggio. Bridging the gaps between residual learning, recurrent neural networks and visual cortex. arXiv preprint arXiv:1604.03640, 2016.
- (36) Y. Lu, A. Zhong, D. Bin, and Q. Li. Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations, 2018.
- (37) T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral normalization for generative adversarial networks. International Conference on Learning Representations, 2018.
- (38) F. Monti, D. Boscaini, J. Masci, E. Rodolà, J. Svoboda, and M. M. Bronstein. Geometric deep learning on graphs and manifolds using mixture model CNNs. In CVPR, 2017.
- (39) R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pages 1310–1318, 2013.
- (40) J. Singh and N. Barabanov. Stability of discrete time recurrent neural networks and nonlinear optimization problems. Neural Networks, 74:58–72, 2016.
- (41) E. Sontag. Mathematical Control Theory: Deterministic Finite Dimensional Systems. Springer-Verlag, 2nd edition, 1998.
- (42) R. K. Srivastava, K. Greff, and J. Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, May 2015.
- (43) Jochen J Steil. Input Output Stability of Recurrent Neural Networks. Cuvillier Göttingen, 1999.
- (44) S. H. Strogatz. Nonlinear dynamics and chaos: with applications to physics, biology, chemistry, and engineering. Westview Press, 2nd edition, 2015.
- (45) I. Sutskever, O. Vinyals, and Le. Q. V. Sequence to sequence learning with neural networks. CoRR, abs/1409.3215, 2014.
- (46) C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
- (47) C. Tallec and Y. Ollivier. Can recurrent neural networks warp time? International Conference on Learning Representations, 2018.
- (48) A. Veit and S. Belongie. Convolutional networks with adaptive computation graphs. CoRR, 2017.
- (49) E. Vorontsov, C. Trabelsi, S. Kadoury, and C. Pal. On orthogonality and learning recurrent networks with long term dependencies. arXiv preprint arXiv:1702.00071, 2017.
- (50) E. Weinan. A proposal on machine learning via dynamical systems. Communications in Mathematics and Statistics, 5(1):1–11, 2017.
- (51) Pei Yuan Wu. Products of positive semidefinite matrices. Linear Algebra and Its Applications, 1988.
- (52) Y. Yoshida and T. Miyato. Spectral norm regularization for improving the generalizability of deep learning. arXiv preprint arXiv:1705.10941, 2017.
- (53) X. Zhang, Z. Li, C. C. Loy, and D. Lin. Polynet: A pursuit of structural diversity in very deep networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3900–3908. IEEE, 2017.
- (54) S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. S. Torr. Conditional random fields as recurrent neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 1529–1537, 2015.
- (55) J. G. Zilly, R. K. Srivastava, J. Koutník, and J. Schmidhuber. Recurrent highway networks. In ICML2017, pages 4189–4198. PMLR, 2017.
Appendix A Basic Definitions for the Tied Weight Case
Recall, from the main paper, that the stability of a NAIS-Net block with fully connected or convolutional architecture can be analyzed by means of the following vectorised representation:

$x(k+1) = x(k) + h\,\sigma\big(A x(k) + B u + b\big),$
where $k$ is the unroll index for the considered block. Since the blocks are cascaded, stability of each block implies stability of the full network. Hence, this supplementary material focuses on theoretical results for a single block.
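The single-block unroll can be sketched numerically. Below is a minimal NumPy sketch of the time-invariant, non-autonomous iteration, assuming a $\tanh$ activation and illustrative weights $A$, $B$, $b$ chosen for this example (in particular, a contractive $A = -0.5 I$); these values are not from the paper:

```python
import numpy as np

def nais_block(x0, u, A, B, b, h=1.0, K=100):
    """Unroll one time-invariant, non-autonomous block:
    x(k+1) = x(k) + h * tanh(A x(k) + B u + b).
    The same external input u re-enters at every unrolled stage
    (the input skip connection), so the system is non-autonomous."""
    x = x0
    for _ in range(K):
        x = x + h * np.tanh(A @ x + B @ u + b)
    return x

# Illustrative 2-D example with a stable (contractive) state matrix.
rng = np.random.default_rng(0)
A = -0.5 * np.eye(2)            # keeps the unrolled map a contraction
B = rng.standard_normal((2, 3))
b = np.zeros(2)
u = rng.standard_normal(3)

# Two different initial states, same input u.
x_far = nais_block(np.ones(2) * 5.0, u, A, B, b)
x_near = nais_block(np.zeros(2), u, A, B, b)

# Both trajectories converge to the same input-dependent fixpoint.
print(np.linalg.norm(x_far - x_near))
```

Because the input forces every stage, the fixpoint depends on $u$: changing $u$ moves the attractor, which is exactly the input-dependent convergence behaviour motivated in the introduction.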
A.1 Relevant Sets and Operators
Denote the slope of the activation function vector, $\sigma$, as the diagonal matrix, $\sigma'$, with entries: $\sigma'_{(i,i)} = \partial \sigma_i / \partial x_i$.
The following definitions will be used to obtain the stability results:
In particular, the set is such that the activation function is not saturated, since its derivative has a non-zero lower bound on it.
A.1.2 Linear Algebra Elements
The notation $\|\cdot\|$ is used to denote a suitable matrix norm, which will be characterized on a case-by-case basis. The same norm is used consistently throughout definitions, assumptions and proofs.
We will often use the following fact:

Consider two matrices, $A$ and $\bar{A} = \lambda I + A$, with $\lambda$ being a complex scalar. If $\mu$ is an eigenvalue of $A$, then $\lambda + \mu$ is an eigenvalue of $\bar{A}$.

Indeed, given any eigenvalue $\mu$ of $A$ with corresponding eigenvector $v$, we have that:

$\bar{A} v = (\lambda I + A) v = \lambda v + \mu v = (\lambda + \mu) v.$
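This eigenvalue-shift fact is easy to verify numerically; the sketch below uses an arbitrary random matrix and shift, purely for illustration:

```python
import numpy as np

# If mu is an eigenvalue of A, then lam + mu is an eigenvalue of lam*I + A.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
lam = 0.7

mu = np.linalg.eigvals(A)
mu_shifted = np.linalg.eigvals(lam * np.eye(4) + A)

# eigvals returns eigenvalues in no guaranteed order, so sort both spectra
# (lexicographically, real part first) before comparing them.
match = np.allclose(np.sort_complex(mu + lam), np.sort_complex(mu_shifted))
print(match)
```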
Throughout the material, the notation $M_{(i)}$ is used to denote the $i$-th row of a matrix $M$.
A.1.3 Non-autonomous Behaviour Set
The following set will be considered throughout the paper:
(Non-autonomous behaviour set) The set is referred to as the set of fully non-autonomous behaviour in the extended state-input space; its set-projection onto the state space is the set of fully non-autonomous behaviour in the state space. This is the only set in which every output dimension of the ResNet with input skip connections can be directly influenced by the input, given a non-zero matrix $B$.[9]

[9] The concept of controllability is not introduced here. In the case of deep networks, we just need $B$ to be non-zero to provide input skip connections. For the general case of time-series identification and control, please refer to the definitions in (41).
Note that, for a $\tanh$ activation, the derivative is strictly positive everywhere, so the condition holds over the entire space. For a ReLU activation, on the other hand, for each layer we have:
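The contrast between the two activations can be checked numerically: the $\tanh$ slope is strictly positive for every pre-activation value, while the ReLU slope is exactly zero on the inactive side (a small illustrative check, not from the paper):

```python
import numpy as np

z = np.linspace(-5.0, 5.0, 1001)

tanh_slope = 1.0 - np.tanh(z) ** 2      # d/dz tanh(z): positive everywhere
relu_slope = (z > 0).astype(float)      # d/dz max(0, z): 0 for z <= 0, else 1

tanh_ok = tanh_slope.min() > 0.0        # tanh never saturates completely
print(tanh_ok)
print(relu_slope.min(), relu_slope.max())
```

This is why, for $\tanh$, the non-autonomous behaviour condition holds globally, whereas for ReLU it must be stated per layer, over the region where units are active.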
A.2 Stability Definitions for Tied Weights
This section provides a summary of definitions borrowed from control theory that are used to describe and derive our main result. The following definitions have been adapted from (41) and refer to the general dynamical system:

$x(k+1) = f(x(k), u), \quad x(0) = x_0. \quad (20)$
Since we are dealing with a cascade of dynamical systems (see Figure 1 in the main paper), stability of the entire network can be enforced by having stable blocks. In the remainder of this material, we will therefore address a single unroll. We will cover both the tied and untied weight cases, starting from the latter as it is the most general.
A.2.1 Describing Functions
The following functions are instrumental to describe the desired behaviour of the network output at each layer or time step.
($\mathcal{K}$-function) A continuous function $\alpha : \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$ is said to be a $\mathcal{K}$-function ($\alpha \in \mathcal{K}$) if it is strictly increasing, with $\alpha(0) = 0$.
($\mathcal{K}_\infty$-function) A continuous function $\alpha : \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$ is said to be a $\mathcal{K}_\infty$-function ($\alpha \in \mathcal{K}_\infty$) if it is a $\mathcal{K}$-function and if it is radially unbounded, that is, $\alpha(r) \to \infty$ as $r \to \infty$.
($\mathcal{KL}$-function) A continuous function $\beta : \mathbb{R}_{\ge 0} \times \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$ is said to be a $\mathcal{KL}$-function ($\beta \in \mathcal{KL}$) if it is a $\mathcal{K}$-function in its first argument, if it is positive definite and non-increasing in its second argument, and if $\beta(r, t) \to 0$ as $t \to \infty$.
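Simple canonical instances of these classes help build intuition (illustrative choices, not from the paper): $\alpha(r) = r$ is a $\mathcal{K}_\infty$-function; $\alpha(r) = r/(1+r)$ is a $\mathcal{K}$-function that is not $\mathcal{K}_\infty$, since it is bounded by 1; and $\beta(r, t) = r e^{-t}$ is a $\mathcal{KL}$-function:

```python
import numpy as np

alpha_Kinf = lambda r: r                  # strictly increasing, alpha(0) = 0, unbounded
alpha_K    = lambda r: r / (1.0 + r)      # strictly increasing, alpha(0) = 0, bounded by 1
beta_KL    = lambda r, t: r * np.exp(-t)  # K-function in r, decaying to 0 in t

r = np.linspace(0.0, 10.0, 101)
kinf_ok = alpha_Kinf(0.0) == 0.0 and np.all(np.diff(alpha_Kinf(r)) > 0)
k_ok    = alpha_K(0.0) == 0.0 and np.all(alpha_K(r) < 1.0)
kl_ok   = beta_KL(5.0, 50.0) < 1e-20      # vanishes as t grows

print(kinf_ok, k_ok, kl_ok)
```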
The following definitions are given for time-invariant RNNs, namely DNNs with tied weights. They can also be generalised to the case of untied-weight DNNs and time-varying RNNs by considering worst-case conditions over the layer (time) index $k$. In this case the properties are said to hold uniformly for all $k$. This is done in Section B. The tied-weight case follows.
A.2.2 Invariance, Stability and Robustness
(Positively Invariant Set) A set $\mathcal{X} \subseteq \mathbb{R}^n$ is said to be positively invariant (PI) for a dynamical system under an input $\bar{u}$ if $x(0) \in \mathcal{X} \Rightarrow x(k) \in \mathcal{X}$, $\forall k \ge 0$.
(Robustly Positively Invariant Set) The set $\mathcal{X}$ is said to be robustly positively invariant (RPI) to additive input perturbations $\omega$ if $\mathcal{X}$ is PI for any input $\bar{u} + \omega$.
(Asymptotic Stability) The system Eq. (20) is called Globally Asymptotically Stable around its equilibrium point $\bar{x}$ if it satisfies the following two conditions:
Stability. Given any $\epsilon > 0$, $\exists\, \delta > 0$ such that if $\|x(0) - \bar{x}\| < \delta$, then $\|x(k) - \bar{x}\| < \epsilon$, $\forall k \ge 0$.
Attractivity. $\exists\, \delta' > 0$ such that if $\|x(0) - \bar{x}\| < \delta'$, then $x(k) \to \bar{x}$ as $k \to \infty$.
If only the first condition is satisfied, then the system is globally stable. If both conditions are satisfied only for some neighbourhood of $\bar{x}$, then the stability properties hold only locally and the system is said to be locally asymptotically stable.
Local stability in a PI set $\mathcal{X}$ is equivalent to the existence of a $\mathcal{KL}$-function $\beta$ and a finite constant $c \ge 0$ such that:

$\|x(k) - \bar{x}\| \le \beta(\|x(0) - \bar{x}\|, k) + c, \quad \forall x(0) \in \mathcal{X},\ \forall k \ge 0.$
If $c = 0$, then the system is asymptotically stable. If the positively-invariant set is $\mathbb{R}^n$, then stability holds globally.
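These conditions can be made concrete with a linear scalar toy system $x(k+1) = a x(k) + b \bar{u}$ with $|a| < 1$ (a hedged illustration, not the NAIS-Net dynamics): trajectories satisfy $\|x(k) - \bar{x}\| = |a|^k \|x(0) - \bar{x}\|$, i.e. a $\mathcal{KL}$ bound with $\beta(r, k) = r |a|^k$ and $c = 0$, so the system is globally asymptotically stable around $\bar{x} = b\bar{u}/(1-a)$:

```python
import numpy as np

a, b, u_bar = 0.8, 1.0, 2.0
x_bar = b * u_bar / (1.0 - a)            # equilibrium: x_bar = a*x_bar + b*u_bar

def trajectory(x0, K=80):
    xs = [x0]
    for _ in range(K):
        xs.append(a * xs[-1] + b * u_bar)
    return np.array(xs)

xs = trajectory(x0=-7.0)
k = np.arange(len(xs))
kl_bound = abs(xs[0] - x_bar) * a ** k   # beta(r, k) = r * |a|^k, with c = 0

bound_holds = np.all(np.abs(xs - x_bar) <= kl_bound + 1e-9)
attractive = abs(xs[-1] - x_bar) < 1e-5
print(bound_holds, attractive)
```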
Define the system output as $y(k) = g(x(k))$, where $g$ is a continuous, Lipschitz function. Input-to-Output stability provides a natural extension of asymptotic stability to systems with inputs or additive uncertainty.[10]

[10] Here we will consider only the simple case of $g(x) = x$, therefore we can simply use notions of Input-to-State Stability (ISS).
(Input-Output (practical) Stability) Given an RPI set $\mathcal{X}$, a constant nominal input $\bar{u}$ and a nominal steady state $\bar{x} \in \mathcal{X}$ such that $\bar{x} = f(\bar{x}, \bar{u})$, the system Eq. (20) is said to be input-output (practically) stable to bounded additive input perturbations (IOpS) in $\mathcal{X}$ if there exist a $\mathcal{KL}$-function $\beta$, a $\mathcal{K}$-function $\gamma$ and a constant $\zeta \ge 0$ such that, $\forall x(0) \in \mathcal{X}$ and $\forall k \ge 0$:

$\|y(k) - \bar{y}\| \le \beta(\|x(0) - \bar{x}\|, k) + \gamma\!\left(\max_{t \le k} \|\omega(t)\|\right) + \zeta.$
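A linear scalar toy system $x(k+1) = a x(k) + b(\bar{u} + \omega(k))$ with $|a| < 1$ illustrates this definition (a hedged sketch with $g(x) = x$, not the NAIS-Net dynamics): starting at the nominal steady state, the output deviation caused by a bounded perturbation $|\omega(k)| \le \bar{\omega}$ stays within the gain $\gamma(s) = b s / (1 - a)$, which for this system plays the role of the $\mathcal{K}$-function in the IOpS bound:

```python
import numpy as np

a, b, u_bar = 0.8, 1.0, 2.0
x_bar = b * u_bar / (1.0 - a)            # nominal steady state

rng = np.random.default_rng(2)
omega_max = 0.5
omega = rng.uniform(-omega_max, omega_max, size=200)  # bounded perturbation

x = x_bar                                # start exactly at the steady state
deviations = []
for w in omega:
    x = a * x + b * (u_bar + w)          # perturbed input u_bar + omega(k)
    deviations.append(abs(x - x_bar))

gamma = b * omega_max / (1.0 - a)        # gain gamma(s) = b*s/(1-a)
within_gain = max(deviations) <= gamma + 1e-9
print(within_gain)
```

The deviation recursion is $d(k+1) = a\, d(k) + b\, \omega(k)$, so $|d(k)| \le b\,\bar{\omega}\,(1 - a^k)/(1 - a) \le \gamma(\bar{\omega})$, matching the perturbation term of the IOpS inequality with no transient (since $x(0) = \bar{x}$).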