NAIS-Net: Stable Deep Networks from Non-Autonomous Differential Equations

04/19/2018 ∙ Marco Ciccone et al. ∙ NNAISENSE, Politecnico di Milano

This paper introduces the "Non-Autonomous Input-Output Stable Network" (NAIS-Net), a very deep architecture where each stacked processing block is derived from a time-invariant non-autonomous dynamical system. Non-autonomy is implemented by skip connections from the block input to each of the unrolled processing stages, which allow stability to be enforced so that blocks can be unrolled adaptively to a pattern-dependent processing depth. We prove that the network is globally asymptotically stable, so that for every initial condition there is exactly one input-dependent equilibrium assuming tanh units, and multiple stable equilibria for ReLU units. An efficient implementation that enforces the derived stability conditions, for both fully-connected and convolutional layers, is also presented. Experimental results show that NAIS-Net exhibits stability in practice, yielding a significant reduction in generalization gap compared to ResNets.


1 Introduction

Deep neural networks are now the state-of-the-art in a variety of challenging tasks, ranging from object recognition to natural language processing and graph analysis krizhevsky2012a ; BattenbergCCCGL17 ; zilly17a ; SutskeverVL14 ; MontiBMRSB17 . With enough layers, they can, in principle, learn arbitrarily complex abstract representations through an iterative process greff2016 where each layer transforms the output from the previous layer non-linearly until the input pattern is embedded in a latent space where inference can be done efficiently.

Until the advent of Highway srivastava2015 and Residual (ResNet) he2015b networks, training networks beyond a certain depth with gradient descent was limited by the vanishing gradient problem hochreiter1991a ; bengio1994a . These very deep networks (VDNNs) have skip connections that provide shortcuts for the gradient to flow back through hundreds of layers. Unfortunately, training them still requires extensive hyper-parameter tuning, and, even if there were a principled way to determine the optimal number of layers or processing depth for a given task, that depth would still be fixed for all patterns.

Recently, several researchers have started to view VDNNs from a dynamical systems perspective. Haber and Ruthotto haber2017 analyzed the stability of ResNets by framing them as an Euler integration of an ODE, and lu2018beyond showed how using other numerical integration methods induces various existing network architectures such as PolyNet zhang2017 , FractalNet larsson2016 and RevNet gomez2017 . A fundamental problem with the dynamical systems underlying these architectures is that they are autonomous: the input pattern sets the initial condition, only directly affecting the first processing stage. This means that if the system converges, there is either exactly one fixpoint or exactly one limit cycle strogatz2014 . Neither case is desirable from a learning perspective because a dynamical system should have input-dependent convergence properties so that representations are useful for learning. One possible approach to achieve this is to have a non-autonomous system where, at each iteration, the system is forced by an external input.
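To make the distinction concrete, the two regimes can be written side by side (a schematic only, using the state/input notation x(k), u introduced in Section 3):

```latex
% Autonomous unroll: the input only sets the initial condition,
% so all inputs share the same fixpoints or limit cycles.
x(0) = u, \qquad x(k+1) = f\big(x(k)\big)
% Non-autonomous unroll: the input forces every stage,
% so the equilibria (when they exist) can depend on u.
x(0) = x_0, \qquad x(k+1) = f\big(x(k),\, u\big)
```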

Figure 1: NAIS-Net architecture. Each block represents a time-invariant iterative process: the first layer in the i-th block, x_i(1), is unrolled into a pattern-dependent number, K_i, of processing stages, using weight matrices A_i and B_i. The skip connections from the input, u_i, to all layers in block i make the process non-autonomous. Blocks can be chained together (each block modeling a different latent space) by passing the final latent representation, x_i(K_i), of block i as the input u_{i+1} to block i+1.

This paper introduces a novel network architecture, called the “Non-Autonomous Input-Output Stable Network” (NAIS-Net), that is derived from a dynamical system that is both time-invariant (weights are shared) and non-autonomous. (The DenseNet architecture lang1988 ; huang2017densely is non-autonomous, but time-varying.) NAIS-Net is a general residual architecture where a block (see Figure 1) is the unrolling of a time-invariant system, and non-autonomy is implemented by having the external input applied to each of the unrolled processing stages in the block through skip connections. ResNets are similar to NAIS-Net except that ResNets are time-varying and only receive the external input at the first layer of the block.

With this design, we can derive sufficient conditions under which the network exhibits input-dependent equilibria that are globally asymptotically stable for every initial condition. More specifically, in Section 3, we prove that with tanh activations NAIS-Net has exactly one input-dependent equilibrium, while with ReLU activations it has multiple stable equilibria per input pattern. Moreover, the NAIS-Net architecture allows not only the internal stability of the system to be analyzed but, more importantly, its input-output stability: the difference between the representations generated by two different inputs belonging to a bounded set is also bounded at each stage of the unrolling. (In the supplementary material, we also show that these results hold for both shared and unshared weights.)

In Section 4, we provide an efficient implementation that enforces the stability conditions for both fully-connected and convolutional layers in the stochastic optimization setting. These implementations are compared experimentally with ResNets on the CIFAR-10 and CIFAR-100 datasets in Section 5, showing that NAIS-Nets achieve comparable classification accuracy with a much smaller generalization gap. NAIS-Nets can also be 10 to 20 times deeper than the original ResNet without increasing the total number of network parameters, and, by stacking several stable NAIS-Net blocks, models that implement pattern-dependent processing depth can be trained without requiring any normalization at each step (except when there is a change in layer dimensionality, to speed up training).

The next section presents a more formal treatment of the dynamical systems perspective of neural networks, and a brief overview of work to date in this area.

2 Background and Related Work

Representation learning is about finding a mapping from input patterns to encodings that disentangle the underlying variational factors of the input set. With such an encoding, a large portion of typical supervised learning tasks (e.g. classification and regression) should be solvable using just a simple model like logistic regression. A key characteristic of such a mapping is its invariance to input transformations that do not alter these factors for a given input. (Such invariance conditions can be very powerful inductive biases on their own: for example, requiring invariance to time transformations in the input leads to popular RNN architectures tallec2018a .) In particular, random perturbations of the input should in general not be drastically amplified in the encoding. In the field of control theory, this property is central to stability analysis, which investigates the properties of dynamical systems under which they converge to a single steady state without exhibiting chaos khalil2001 ; strogatz2014 ; sontag_book .

In machine learning, stability has long been central to the study of recurrent neural networks (RNNs) with respect to the vanishing hochreiter1991a ; bengio1994a ; pascanu2013a and exploding doya1992bifurcations ; baldi1996universal ; pascanu2013a gradient problems, leading to the development of Long Short-Term Memory hochreiter1997b to alleviate the former. More recently, general conditions for RNN stability have been presented zilly17a ; kanai2017a ; laurent_recurrent_2016 ; vorontsov2017 based on general insights related to matrix norm analysis. Input-output stability khalil2001 has also been analyzed for simple RNNs steil1999input ; knight_stability_2008 ; haschke2005input ; singh2016stability .

Recently, the stability of deep feed-forward networks was more closely investigated, mostly due to adversarial attacks szegedy2013intriguing on trained networks. It turns out that sensitivity to (adversarial) input perturbations in the inference process can be avoided by ensuring certain conditions on the spectral norms of the weight matrices cisse_parseval_2017 ; yoshida2017a . Additionally, special properties of the spectral norm of weight matrices mitigate instabilities during the training of Generative Adversarial Networks miyato2018a .

Almost all successfully trained VDNNs hochreiter1997b ; he2015b ; srivastava2015 ; cho2014learning share the following core building block:

x_{l+1} = x_l + f(x_l, θ_l).     (1)

That is, in order to compute a vector representation x_{l+1} at layer l+1 (or time step l+1 for recurrent networks), additively update x_l with some non-linear transformation f of x_l which depends on parameters θ_l. The reason usually given for why Eq. (1) allows VDNNs to be trained is that the explicit identity connections avoid the vanishing gradient problem.

The semantics of the forward path are, however, still considered unclear. A recent interpretation is that these feed-forward architectures implement iterative inference greff2016 ; jastrzebski2017 . This view is reinforced by observing that Eq. (1) is a forward Euler discretization ascher1998a of the ordinary differential equation (ODE) ẋ(t) = f(x(t), θ), if θ_l = θ for all l in Eq. (1). This connection between dynamical systems and feed-forward architectures was recently also observed by several other authors weinan2017a . This point of view leads to a large family of new network architectures that are induced by various numerical integration methods lu2018beyond . Moreover, stability problems in both the forward and the backward path of VDNNs have been addressed by relying on well-known analytical approaches for continuous-time ODEs haber2017 ; chang2017multi . In the present paper, we instead address the problem directly in discrete time, meaning that our stability result is preserved by the network implementation. With the exception of liao_bridging_2016 , none of this prior research considers time-invariant, non-autonomous systems.
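As a quick sanity check of this correspondence (a standard textbook derivation, not specific to any one architecture), a forward Euler step with unit step size recovers Eq. (1):

```latex
\dot{x}(t) = f\big(x(t), \theta\big)
\;\;\xrightarrow{\text{forward Euler, step } h}\;\;
x_{l+1} = x_l + h\, f(x_l, \theta)
\;\;\overset{h=1}{=}\;\; x_l + f(x_l, \theta).
```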

Conceptually, our work shares similarities with approaches that build networks according to iterative algorithms gregor2010a ; zheng2015a and with recent ideas investigating pattern-dependent processing time graves2016a ; veit2017a ; figurnov2017a .

3 Non-Autonomous Input-Output Stable Nets (NAIS-Nets)

This section provides stability conditions for both fully-connected and convolutional NAIS-Net layers. We formally prove that NAIS-Net provides a non-trivial input-dependent output for each iteration as well as in the asymptotic case (k → ∞). The following dynamical system:

x(k+1) = f(x(k), u, θ),     (2)

is used throughout the paper, where x(k) ∈ R^n is the latent state, u ∈ R^m is the network input, and θ denotes the parameters. For ease of notation, in the remainder of the paper the explicit dependence on the parameters, θ, will be omitted.

Fully Connected NAIS-Net Layer.

Our fully connected layer is defined by

x(k+1) = x(k) + h σ(A x(k) + B u + b),     (3)

where A ∈ R^{n×n} and B ∈ R^{n×m} are the state and input transfer matrices, b ∈ R^n is a bias, and h > 0 is a step size. The activation σ(·) is a vector of (element-wise) instances of a scalar activation function. In this paper, we only consider the hyperbolic tangent, tanh, and Rectified Linear Unit (ReLU) activation functions. Note that by setting B = 0 and the step h = 1, the original ResNet formulation is obtained.
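A minimal NumPy sketch of the unrolled update in Eq. (3); the unroll length K, the step size h, the weight scales, and the zero initial state are illustrative assumptions of this sketch, not values prescribed by the paper:

```python
import numpy as np

def nais_fc_block(u, A, B, b, h=1.0, K=30, activation=np.tanh):
    """Unroll one fully connected NAIS-Net block: x(k+1) = x(k) + h*sigma(A x(k) + B u + b)."""
    x = np.zeros(A.shape[0])                       # zero initial latent state (chosen for simplicity)
    for _ in range(K):
        x = x + h * activation(A @ x + B @ u + b)  # the input u re-enters at every stage (non-autonomy)
    return x

# Toy usage: n = 4 latent units, m = 3 input features.
rng = np.random.default_rng(0)
R = 0.1 * rng.standard_normal((4, 4))
A = -R.T @ R - 0.01 * np.eye(4)                    # symmetric negative-definite parameterization (Section 4)
B = 0.1 * rng.standard_normal((4, 3))
x_final = nais_fc_block(rng.standard_normal(3), A, B, np.zeros(4))
```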

Convolutional NAIS-Net Layer.

The architecture can be easily extended to convolutional networks by replacing the matrix multiplications in Eq. (3) with a convolution operator:

X(k+1) = X(k) + h σ(C ∗ X(k) + D ∗ U + E).     (4)

Consider the case of n_C channels. The convolutional layer in Eq. (4) can be rewritten, for each latent map c, in the equivalent form:

X_c(k+1) = X_c(k) + h σ( Σ_i C_{i→c} ∗ X_i(k) + Σ_j D_{j→c} ∗ U_j + E_c ),     (5)

where X_c(k) is the layer state matrix for channel c, U_j is the layer input data matrix for channel j (to which an appropriate zero padding has been applied), C_{i→c} is the state convolution filter from state channel i to state channel c, D_{j→c} is its equivalent for the input, and E_c is a bias. The activation, σ, is still applied element-wise. The state convolution has a fixed stride s = 1, a filter size f_s, and a zero padding chosen so that the spatial dimensions of the state are preserved. (If the input and state dimensions do not match, then U can be extended with an appropriate number of constant zeros (not connected).)

Convolutional layers can be rewritten in the same form as fully connected layers (see proof of Lemma 1 in the supplementary material). Therefore, the stability results in the next section will be formulated for the fully connected case, but apply to both.
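For illustration, one unroll step of the convolutional update in Eq. (4) can be sketched with PyTorch's functional API (assumptions of this sketch: tanh activation, stride 1 with "same" zero padding, and tensor names following Eq. (4)):

```python
import torch
import torch.nn.functional as F

def nais_conv_step(X, U, C, D, E, h=1.0):
    """One unroll step of a convolutional NAIS-Net block:
    X(k+1) = X(k) + h * tanh(C * X(k) + D * U + E), with '*' denoting convolution."""
    pre = (F.conv2d(X, C, padding="same")        # state filters C: (n_c, n_c, f, f)
           + F.conv2d(U, D, padding="same")      # input filters D: (n_c, n_u, f, f), skip connection from U
           + E.view(1, -1, 1, 1))                # per-channel bias E
    return X + h * torch.tanh(pre)

# Toy usage: batch of 2, 8 state channels, 3 input channels, 16x16 maps.
X = torch.zeros(2, 8, 16, 16)
U = torch.randn(2, 3, 16, 16)
C = 0.05 * torch.randn(8, 8, 3, 3)
D = 0.05 * torch.randn(8, 3, 3, 3)
E = torch.zeros(8)
for _ in range(10):                              # unroll the block; U is re-applied at every step
    X = nais_conv_step(X, U, C, D, E)
```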

Stability Analysis.

Here we lay out the stability conditions for NAIS-Nets that were instrumental to their design. We are interested in using a cascade of unrolled NAIS-Net blocks (see Figure 1), where each block is described by either Eq. (3) or Eq. (4). Since we are dealing with a cascade of dynamical systems, stability of the entire network can be enforced by making each block stable khalil2001 .

The state-transfer Jacobian for layer k is defined as:

J(x(k), u) = ∂x(k+1)/∂x(k) = I + h D(Δ(k)) A,     (6)

where D(Δ(k)) is the diagonal matrix of activation slopes σ'(Δ(k)), and the argument of the activation function, A x(k) + B u + b, is denoted as Δ(k). Take an arbitrarily small scalar ε > 0 and define the set of pairs (x, u) for which the activations are not saturated as:

P = { (x, u) : σ'(Δ_i) ≥ ε, ∀ i = 1, …, n }.     (7)

Theorem 1 below proves that the non-autonomous residual network produces a bounded output given a bounded, possibly noisy, input, and that the network state converges to a constant value as the number of layers tends to infinity, if the following stability condition holds:

Condition 1.

For any (x, u) ∈ P, the Jacobian satisfies:

ρ( J(x, u) ) < 1,     (8)

where ρ(·) is the spectral radius.
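The condition can be probed numerically for a given layer. The sketch below assumes a tanh activation, step size h = 1, and the negative-definite parameterization of A introduced later in Section 4; the weight scales are arbitrary:

```python
import numpy as np

def state_jacobian(A, delta, h=1.0):
    """State-transfer Jacobian of Eq. (6) for tanh: J = I + h * diag(tanh'(delta)) @ A."""
    D = np.diag(1.0 - np.tanh(delta) ** 2)   # tanh'(z) = 1 - tanh(z)^2
    return np.eye(A.shape[0]) + h * D @ A

def spectral_radius(M):
    return np.max(np.abs(np.linalg.eigvals(M)))

rng = np.random.default_rng(1)
R = 0.3 * rng.standard_normal((5, 5))
A = -R.T @ R - 0.01 * np.eye(5)              # symmetric negative-definite parameterization (Section 4)
delta = rng.standard_normal(5)               # pre-activation A x + B u + b at some (x, u)
print(spectral_radius(state_jacobian(A, delta)) < 1.0)   # Condition 1 at this point
```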

The steady states, x̄, are determined by a continuous function of the input u. This means that a small change in u cannot result in a very different x̄. For tanh activation, x̄ depends linearly on u, therefore the block needs to be unrolled for a finite number of iterations, K, for the resulting mapping to be non-linear. That is not the case for ReLU, which can be unrolled indefinitely and still provide a piece-wise affine mapping.
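The linear dependence in the tanh case follows directly from Eq. (3): at a steady state the residual term must vanish, so

```latex
\bar{x} = \bar{x} + h\,\tanh(A\bar{x} + Bu + b)
\;\Longrightarrow\; \tanh(A\bar{x} + Bu + b) = 0
\;\Longrightarrow\; \bar{x} = -A^{-1}(Bu + b),
```

with A invertible because it is negative definite under the parameterization of Section 4.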

In Theorem 1, the Input-Output (IO) gain function, γ(·), describes the effect of norm-bounded input perturbations on the network trajectory. This gain provides insight into the level of robust invariance of the classification regions to changes in the input data with respect to the training set. In particular, as the gain is decreased, the perturbed solution will be closer to the solution obtained from the training set. This can lead to increased robustness and generalization with respect to a network that does not satisfy Condition 1. Note that the IO gain, γ(·), is linear, and hence the block IO map is Lipschitz even for an infinite unroll length. The IO gain depends directly on the norm of the state-transfer Jacobian in Eq. (8), as indicated by the term ‖J‖ in Theorem 1. (See the supplementary material for additional details and all proofs, where the untied case is also covered.)

Theorem 1.

(Asymptotic stability for shared weights)
If Condition 1 holds, then NAIS-Net with ReLU or tanh activations is Asymptotically Stable with respect to input-dependent equilibrium points. More formally:

x(k) → x̄,  with x̄ depending on the input u (and, for ReLU, on x(0)),  as k → ∞.     (9)

The trajectory is described by ‖x(k) − x̄‖ ≤ (‖J‖)^k ‖x(0) − x̄‖, where ‖·‖ is a suitable matrix norm.

In particular:

  • With tanh activation, the steady state x̄ is independent of the initial state, and it is a linear function of the input, namely, x̄ = −A^{-1}(B u + b). The network is Globally Asymptotically Stable.

    With ReLU activation, x̄ is given by a continuous piecewise affine function of x(0) and u. The network is Locally Asymptotically Stable with respect to each x̄.

  • If the activation is tanh, then the network is Globally Input-Output (robustly) Stable for any additive input perturbation ω. The trajectory is described by:

    ‖x(k) − x̄‖ ≤ (‖J‖)^k ‖x(0) − x̄‖ + γ(‖ω‖),     (10)

    where γ(·) is the input-output gain. For any ρ̄ ≥ 0, if ‖ω‖ ≤ ρ̄ then the following set is robustly positively invariant (RPI):

    X = { x : ‖x − x̄‖ ≤ γ(ρ̄) }.     (11)
  • If the activation is ReLU, then the network is Globally Input-Output practically Stable. In other words, we have:

    ‖x(k) − x̄‖ ≤ (‖J‖)^k ‖x(0) − x̄‖ + γ(‖ω‖) + ζ.     (12)

    The constant ζ is the norm ball radius for the initial condition, x(0).

  Input: R, δ.
  if ‖R^T R‖_F > 1 − δ then
     R̃ ← sqrt(1 − δ) · R / sqrt(‖R^T R‖_F)
  else
     R̃ ← R
  end if
  Output: R̃
Algorithm 1 Fully Connected Reprojection
  Input: filters C, trainable parameters η, and δ.
  for each feature map c do
     set the central element of C_{c→c} from η_c
     compute the absolute sum of the remaining filter elements mapping into channel c
     if this sum violates the bound required by Condition 1 then
         rescale those remaining filter elements so that the bound holds
     end if
  end for
  Output: projected filters C̃, central elements
Algorithm 2 CNN Reprojection
Figure 2: Proposed algorithms for enforcing stability.

4 Implementation

In general, an optimization problem with a spectral radius constraint as in Eq. (8) is hard kanai2017a . One possible approach is to relax the constraint to a singular value constraint kanai2017a , which is applicable to both fully connected and convolutional layer types yoshida2017a . However, this approach is only applicable if the identity matrix in the Jacobian (Eq. (6)) is scaled by a factor kanai2017a . In this work we instead fulfil the spectral radius constraint directly.

The basic intuition for the presented algorithms is the fact that for a simple Jacobian of the form I + A, Condition 1 is fulfilled if A has eigenvalues with real part in (−2, 0) and imaginary part in the unit circle. In the supplementary material we prove that the following algorithms fulfil Condition 1 following this intuition. Note that, in the following, the presented procedures are to be performed for each block of the network.

Fully-connected blocks.

In the fully connected case, we restrict the matrix A to be symmetric and negative definite by choosing the following parameterization:

A = −R^T R − εI,     (13)

where R is trained, and ε > 0 is a hyper-parameter. Then, we propose a bound on the Frobenius norm, ‖R^T R‖_F. Algorithm 1, performed during training, implements the following (a more relaxed condition is sufficient for Theorem 1 to hold locally; see the supplementary material):

Theorem 2.

(Fully-connected weight projection)
Given R, the projection R̃ = sqrt(1 − δ) · R / sqrt(‖R^T R‖_F), with δ ∈ (0, 1), ensures that the resulting A = −R̃^T R̃ − εI satisfies Condition 1, and therefore Theorem 1 holds.

Note that a weaker bound on ‖R^T R‖_F is also sufficient for stability; however, the projection from Theorem 2 makes the trajectory free from oscillations (critically damped), see Figure 3. This is further discussed in the Appendix.
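A small NumPy sketch of this reprojection step as stated above (treat it as an illustration of the rule rather than the reference implementation; the hyper-parameter values are arbitrary):

```python
import numpy as np

def fc_reprojection(R, delta):
    """Algorithm 1 (sketch): rescale R so that ||R^T R||_F <= 1 - delta."""
    fro = np.linalg.norm(R.T @ R, ord="fro")
    if fro > 1.0 - delta:
        R = np.sqrt(1.0 - delta) * R / np.sqrt(fro)
    return R

def build_A(R, eps):
    """Parameterization of Eq. (13): A is symmetric and negative definite."""
    return -R.T @ R - eps * np.eye(R.shape[0])

# After every SGD update of R:
rng = np.random.default_rng(2)
R = fc_reprojection(rng.standard_normal((6, 6)), delta=0.1)
A = build_A(R, eps=0.01)
assert np.linalg.norm(R.T @ R, ord="fro") <= 1.0 - 0.1 + 1e-9
```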

Convolutional blocks.

The symmetric parametrization assumed in the fully connected case cannot be used for a convolutional layer. We will instead make use of the following result:

Lemma 1.

The convolutional layer Eq. (4), with zero padding and filter size f_s, has a Jacobian of the form Eq. (6). The diagonal elements of this matrix are the central elements of the convolutional filters mapping each state channel c into itself, denoted by C_{c→c}. The other elements in the row associated with a unit of channel c are the remaining filter values mapping into channel c.

To fulfill the stability condition, the first step is to set the central element of each filter C_{c→c} through a trainable parameter η_c constrained to a range determined by the hyper-parameter δ. Then we suitably bound the ∞-norm of the Jacobian by constraining the remaining filter elements. The steps are summarized in Algorithm 2, which is inspired by the Gershgorin circle theorem Horn:2012:MA:2422911 . The following result is obtained:

Theorem 3.

(Convolutional weight projection)
Algorithm 2 fulfils Condition 1 for the convolutional layer, hence Theorem 1 holds.

Note that the algorithm complexity scales with the number of filters. A simple design choice for the layer is to fix the central filter elements rather than train them. (This removes the need for the additional trainable parameter, but it does not necessarily reduce conservativeness, as it further constrains the remaining elements of the filter bank. This is further discussed in the supplementary material.)
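The sketch below illustrates the Gershgorin-style idea behind Algorithm 2 under our reading of the text: for each output channel, the off-center filter taps are rescaled so that their absolute sum stays within a margin. The specific bound, the handling of the central elements, and the parameter names are assumptions of this sketch, not the paper's exact procedure:

```python
import numpy as np

def cnn_reprojection(C, delta):
    """Gershgorin-style rescaling (sketch): for every output channel c, shrink the
    off-center filter taps feeding c so that their absolute sum is at most 1 - delta."""
    C = C.copy()                                        # shape: (out_ch, in_ch, f, f)
    out_ch, _, f, _ = C.shape
    center = f // 2
    for c in range(out_ch):
        center_tap = C[c, c, center, center]
        row_sum = np.abs(C[c]).sum() - abs(center_tap)  # all taps feeding channel c, minus the central one
        if row_sum > 1.0 - delta:
            scale = (1.0 - delta) / row_sum
            saved = center_tap
            C[c] *= scale                               # shrink every tap feeding channel c ...
            C[c, c, center, center] = saved             # ... but leave the central element untouched
    return C

C_proj = cnn_reprojection(np.random.default_rng(3).standard_normal((8, 8, 3, 3)), delta=0.1)
```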

5 Experiments

Experiments were conducted comparing NAIS-Net with ResNet, and variants thereof, using both fully-connected (MNIST, Section 5.1) and convolutional (CIFAR-10/100, Section 5.2) architectures, to quantitatively assess the performance advantage of having a VDNN where stability is enforced.

Figure 3: Single neuron trajectory and convergence. (Left) Average cross-entropy loss of NAIS-Net and the different residual architectures over the unroll length (processing depth). Note that both ResNet-SH-Stable and NAIS-Net satisfy the stability conditions for convergence, but only NAIS-Net is able to learn, showing the importance of non-autonomy. (Right) Activation of a single NAIS-Net neuron for input samples from each class on MNIST. Trajectories not only differ with respect to the actual steady state but also with respect to the convergence time.

5.1 Preliminary Analysis on MNIST

For the MNIST dataset lecun1998mnist a single-block NAIS-Net was compared with 9 different 30-layer ResNet variants, each with a different combination of the following features: SH (shared weights, i.e. time-invariant), NA (non-autonomous, i.e. input skip connections), BN (with Batch Normalization), Stable (stability enforced by Algorithm 1). For example, ResNet-SH-NA-BN refers to a 30-layer ResNet that is time-invariant because weights are shared across all layers (SH), non-autonomous because it has skip connections from the input to all layers (NA), and uses batch normalization (BN). Since NAIS-Net is time-invariant, non-autonomous, and input/output stable (i.e. SH-NA-Stable), the chosen ResNet variants represent ablations of these three features. For instance, ResNet-SH-NA is a NAIS-Net without I/O stability being enforced by the reprojection step described in Algorithm 1, and ResNet-NA is a non-stable NAIS-Net that is time-varying, i.e. with non-shared weights, etc. The NAIS-Net was unrolled for the same fixed number of iterations for all input patterns. All networks were trained using stochastic gradient descent with momentum and a constant learning rate, for 150 epochs.

Results.

Test accuracy for NAIS-Net was the highest, with ResNet-SH-BN second best, while without BatchNorm (ResNet-SH) accuracy was considerably lower (results averaged over 10 runs).

After training, the behavior of each network variant was analyzed by passing the activation, x(k), through the softmax classifier and measuring the cross-entropy loss. The loss at each iteration describes the trajectory of each sample in the latent space: the closer the sample is to the correct steady state, the closer the loss is to zero (see Figure 3). All variants initially refine their predictions at each iteration, since the loss tends to decrease at each layer, but at different rates. However, NAIS-Net is the only one that does so monotonically, without the loss increasing again as the unroll approaches its final step. Figure 3 shows how neuron activations in NAIS-Net converge to different steady-state activations for different input patterns, instead of all converging to zero as is the case with ResNet-SH-Stable, confirming the results of haber2017 . Importantly, NAIS-Net is able to learn even with the stability constraint, showing that non-autonomy is key to obtaining representations that are stable and useful for learning the task.

NAIS-Net also allows training with unbounded processing depth, without any feature normalization steps. Note that BN actually speeds up loss convergence, especially for ResNet-SH-NA-BN (i.e., the unstable NAIS-Net). Adding BN makes the behavior very similar to NAIS-Net because BN also implicitly normalizes the Jacobian, but it does not ensure that its eigenvalues are in the stability region.

5.2 Image Classification on CIFAR-10/100

Experiments on image classification were performed on standard image recognition benchmarks CIFAR-10 and CIFAR-100 krizhevsky2009cifar . These benchmarks are simple enough to allow for multiple runs to test for statistical significance, yet sufficiently complex to require convolutional layers.

Setup.

The following standard architecture was used to compare NAIS-Net with ResNet (https://github.com/tensorflow/models/tree/master/official/resnet): three sets of residual blocks with increasing filter counts, stacked to form the full network. NAIS-Net was tested in two versions: NAIS-Net1, where each block is unrolled just once, for a total processing depth of 108, and NAIS-Net10, where each block is unrolled 10 times, for a total processing depth of 540. The initial learning rate was decreased by a constant factor at fixed epochs, and the experiments were run for 450 epochs. Note that each block in the ResNet of he2015a has two convolutions (plus BatchNorm and ReLU) whereas NAIS-Net unrolls with a single convolution. Therefore, to make the comparison of the two architectures as fair as possible by using the same number of parameters, a single convolution was also used for ResNet.

Results.
Model        CIFAR-10 (train / test)      CIFAR-100 (train / test)
ResNet       99.86±0.03 / 91.72±0.38      97.42±0.06 / 66.34±0.82
NAIS-Net1    99.37±0.08 / 91.24±0.10      86.90±1.47 / 65.00±0.52
NAIS-Net10   99.50±0.02 / 91.25±0.46      86.91±0.42 / 66.07±0.24
Figure 4: CIFAR Results. (Left) Classification accuracy on the CIFAR-10 and CIFAR-100 datasets, averaged over 5 runs. (Right) Generalization gap on CIFAR-10: dotted curves (training set) are very similar for the two networks, but NAIS-Net has a considerably lower test curve (solid).

Table 4 compares the performance on the two datasets, averaged over 5 runs. For CIFAR-10, NAIS-Net and ResNet performed similarly, and unrolling NAIS-Net for more than one iteration had little effect. This was not the case for CIFAR-100, where NAIS-Net10 improves over NAIS-Net1 by about one percentage point in test accuracy (66.07% vs. 65.00%). Moreover, although its mean accuracy is slightly lower than ResNet's, the variance is considerably lower. Figure 4 shows that NAIS-Net is less prone to overfitting than a classic ResNet, reducing the generalization gap by 33%. This is a consequence of the stability constraint, which imparts a degree of robust invariance to input perturbations (see Section 3). It is also important to note that NAIS-Net can unroll up to 540 layers and still train without any problems.

5.3 Pattern-Dependent Processing Depth

For simplicity, the number of unrolling steps per block in the previous experiments was fixed. A more general and potentially more powerful setup is to have the processing depth adapt automatically. Since NAIS-Net blocks are guaranteed to converge to a pattern-dependent steady state after an indeterminate number of iterations, processing depth can be controlled dynamically by terminating the unrolling process whenever the distance between a layer representation, x(k), and that of the immediately previous layer, x(k−1), drops below a specified threshold. With this mechanism, NAIS-Net can determine the processing depth for each input pattern. Intuitively, one could speculate that similar input patterns would require similar processing depth in order to be mapped to the same region in latent space. To explore this hypothesis, NAIS-Net was trained on CIFAR-10 with a fixed unrolling threshold. At test time the network was unrolled using the same threshold.
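A sketch of this early-termination rule (the threshold value, the Euclidean norm, the maximum depth, and the weight initialization are illustrative assumptions):

```python
import numpy as np

def adaptive_unroll(u, A, B, b, h=1.0, tol=1e-4, max_steps=100):
    """Unroll a NAIS-Net block until successive states are closer than `tol`,
    returning the final state and the pattern-dependent depth actually used."""
    x = np.zeros(A.shape[0])
    for k in range(1, max_steps + 1):
        x_next = x + h * np.tanh(A @ x + B @ u + b)
        if np.linalg.norm(x_next - x) < tol:   # ||x(k) - x(k-1)|| below threshold: stop unrolling
            return x_next, k
        x = x_next
    return x, max_steps

# Toy usage with arbitrary, small random weights.
rng = np.random.default_rng(4)
R = 0.2 * rng.standard_normal((8, 8))
A, B, b = -R.T @ R - 0.01 * np.eye(8), 0.2 * rng.standard_normal((8, 5)), np.zeros(8)
x_star, depth = adaptive_unroll(rng.standard_normal(5), A, B, b)
```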

Figure 5 shows selected images from four different classes, organized according to the final network depth used to classify them after training. The qualitative differences seen from low to high depth suggest that NAIS-Net is using processing depth as an additional degree of freedom so that, for a given training run, the network learns to use models of different complexity (depth) for different types of inputs within each class. To be clear, the hypothesis is not that depth correlates with some notion of input complexity where the same images are always classified at the same depth across runs.

(a) frog
(b) bird
(c) ship
(d) airplane
Figure 5: Image samples with corresponding NAIS-Net depth. The figure shows samples from CIFAR-10 grouped by final network depth, for four different classes. The qualitative differences evident in images inducing different final depths indicate that NAIS-Net adapts processing systematically according to characteristics of the data. For example, “frog” images with textured background are processed with fewer iterations than those with plain background. Similarly, “ship” and “airplane” images having a predominantly blue color are processed with lower depth than those that are grey/white, and “bird” images are grouped roughly according to bird size, with larger species such as ostriches and turkeys being classified with greater processing depth. A higher-definition version of the figure is available in the supplementary materials.

6 Conclusions

We presented NAIS-Net, a non-autonomous residual architecture that can be unrolled until the latent space representation converges to a stable, input-dependent state. This is achieved thanks to its stability and non-autonomy properties. We derived stability conditions for the model and proposed two efficient reprojection algorithms, for fully-connected and convolutional layers respectively, that keep the network parameters within the set of feasible solutions during training.

NAIS-Net achieves asymptotic stability and, as a consequence, input-output stability. Stability makes the model more robust, and we observe a considerable reduction of the generalization gap without negatively impacting performance. The question of scalability to benchmarks such as ImageNet deng2009a will be a main topic of future work.

We believe that cross-breeding machine learning and control theory will open up many new interesting avenues for research, and that more robust and stable variants of commonly used neural networks, both feed-forward and recurrent, will be possible.

Acknowledgements

We want to thank Wojciech Jaśkowski, Rupesh Srivastava and the anonymous reviewers for their comments on the idea and initial drafts of the paper.

References

Appendix A Basic Definitions for the Tied Weight Case

Recall, from the main paper, that the stability of a NAIS-Net block with fully connected or convolutional architecture can be analyzed by means of the following vectorised representation:

x(k+1) = x(k) + h σ(A x(k) + B u + b),     (14)

where k is the unroll index for the considered block. Since the blocks are cascaded, stability of each block implies stability of the full network. Hence, this supplementary material focuses on theoretical results for a single block.

A.1 Relevant Sets and Operators

A.1.1 Notation

Denote the slope of the activation function vector, σ, as the diagonal matrix, D(Δ), with entries:

D_{ii}(Δ) = σ'(Δ_i),  D_{ij}(Δ) = 0 for i ≠ j.     (15)

The following definitions will be used to obtain the stability results, where ε > 0:

P = { (x, u) : σ'(Δ_i(x, u)) ≥ ε, ∀ i }.     (16)

In particular, the set P is such that the activation function is not saturated, as its derivative has a non-zero lower bound.

A.1.2 Linear Algebra Elements

The notation ‖·‖ is used to denote a suitable matrix norm. This norm will be characterized specifically on a case-by-case basis. The same norm will be used consistently throughout definitions, assumptions and proofs.

We will often use the following:

Lemma 2.

(Eigenvalue shift)

Consider two matrices A ∈ C^{n×n} and Ā = A + λ_0 I, with λ_0 ∈ C being a complex scalar. If λ is an eigenvalue of A, then λ + λ_0 is an eigenvalue of Ā.

Proof.

Given any eigenvalue λ of A with corresponding eigenvector v, we have that:

Ā v = (A + λ_0 I) v = A v + λ_0 v = λ v + λ_0 v = (λ + λ_0) v.     (17)

Throughout the material, the notation A_i is used to denote the i-th row of a matrix A.

A.1.3 Non-Autonomous Behaviour Set

The following set will be considered throughout the paper:

Definition A.1.

(Non-autonomous behaviour set) The set P is referred to as the set of fully non-autonomous behaviour in the extended state-input space, and its set-projection over x, namely,

P_x = { x : ∃ u such that (x, u) ∈ P },     (18)

is the set of fully non-autonomous behaviour in the state space. This is the only set in which every output dimension of the ResNet with input skip connection can be directly influenced by the input, given a non-zero matrix B. (The concept of controllability is not introduced here. In the case of deep networks we just need B to be non-zero to provide input skip connections. For the general case of time-series identification and control, please refer to the definitions in [41].)

Note that, for a tanh activation, we simply have that P covers the whole state-input space where the pre-activations remain bounded. For a ReLU activation, on the other hand, for each layer we have:

P = { (x, u) : (A x + B u + b)_i > 0, ∀ i }.     (19)

A.2 Stability Definitions for Tied Weights

This section provides a summary of definitions borrowed from control theory that are used to describe and derive our main result. The following definitions have been adapted from [12] and refer to the general dynamical system:

x(k+1) = f(x(k), u).     (20)

Since we are dealing with a cascade of dynamical systems (see Figure 1 in the main paper), stability of the entire network can be enforced by having stable blocks [27]. In the remainder of this material, we will therefore address a single unroll. We will cover both the tied and untied weight cases, starting from the latter as it is the most general.

A.2.1 Describing Functions

The following functions are instrumental to describe the desired behaviour of the network output at each layer or time step.

Definition A.2.

(K-function) A continuous function α : R_{≥0} → R_{≥0} is said to be a K-function (α ∈ K) if it is strictly increasing, with α(0) = 0.

Definition A.3.

(K∞-function) A continuous function α : R_{≥0} → R_{≥0} is said to be a K∞-function (α ∈ K∞) if it is a K-function and if it is radially unbounded, that is α(r) → ∞ as r → ∞.

Definition A.4.

(KL-function) A continuous function β : R_{≥0} × R_{≥0} → R_{≥0} is said to be a KL-function (β ∈ KL) if it is a K-function in its first argument, it is positive definite and non-increasing in its second argument, and if β(r, t) → 0 as t → ∞.

The following definitions are given for time-invariant RNNs, namely DNNs with tied weights. They can also be generalised to the case of untied-weight DNNs and time-varying RNNs by considering worst-case conditions over the layer (time) index k. In this case the properties are said to hold uniformly for all k. This is done in Section B. The tied-weight case follows.

A.2.2 Invariance, Stability and Robustness

Definition A.5.

(Positively Invariant Set) A set X ⊆ R^n is said to be positively invariant (PI) for a dynamical system under an input u if

x(0) ∈ X  ⇒  x(k) ∈ X, ∀ k ≥ 0.     (21)
Definition A.6.

(Robustly Positively Invariant Set) The set X ⊆ R^n is said to be robustly positively invariant (RPI) to additive input perturbations ω ∈ W if X is PI for any perturbed input u + ω, ω ∈ W.

Definition A.7.

(Asymptotic Stability) The system Eq. (20) is called Globally Asymptotically Stable around its equilibrium point x̄ if it satisfies the following two conditions:

  1. Stability. Given any ε > 0, there exists δ_1 > 0 such that if ‖x(0) − x̄‖ < δ_1, then ‖x(k) − x̄‖ < ε for all k ≥ 0.

  2. Attractivity. There exists δ_2 > 0 such that if ‖x(0) − x̄‖ < δ_2, then x(k) → x̄ as k → ∞.

If only the first condition is satisfied, then the system is globally stable. If both conditions are satisfied only for some x(0), then the stability properties hold only locally and the system is said to be locally asymptotically stable.

Local stability in a PI set X is equivalent to the existence of a KL-function β and a finite constant c ≥ 0 such that:

‖x(k) − x̄‖ ≤ β(‖x(0) − x̄‖, k) + c,  ∀ x(0) ∈ X, k ≥ 0.     (22)

If c = 0 then the system is asymptotically stable. If the positively invariant set is X = R^n, then stability holds globally.

Define the system output as y(k) = g(x(k)), where g is a continuous, Lipschitz function. Input-to-Output stability provides a natural extension of asymptotic stability to systems with inputs or additive uncertainty. (Here we will consider only the simple case of y(k) = x(k), therefore we can simply use notions of Input-to-State Stability (ISS).)

Definition A.8.

(Input-Output (practical) Stability) Given an RPI set X, a constant nominal input ū and a nominal steady state x̄ such that x̄ = f(x̄, ū), the system Eq. (20) is said to be input-output (practically) stable to bounded additive input perturbations (IOpS) in X if there exist a KL-function β, a K-function γ and a constant ζ > 0 such that:

‖x(k) − x̄‖ ≤ β(‖x(0) − x̄‖, k) + γ(‖ω‖) + ζ,  ∀ x(0) ∈ X, k ≥ 0.     (23)
Definition A.9.

(Input-Output (Robust) Stability) Given an RPI set X, a constant nominal input ū and a nominal steady state x̄ such that x̄ = f(x̄, ū), the system Eq. (20) is said to be input-output (robustly) stable to bounded additive input perturbations (IOS) in X if there exist a KL-function β and a K-function γ such that:

‖x(k) − x̄‖ ≤ β(‖x(0) − x̄‖, k) + γ(‖ω‖),  ∀ x(0) ∈ X, k ≥ 0.     (24)

A.3 Jacobian Condition for Stability

The 1-step state transfer Jacobian for tied weights is:

J(x(k), u) = I + h D(Δ(k)) A.     (25)

Recall, from the main paper, that the following stability condition is introduced:

Condition 2.

(Condition 1 from main paper)

For any (x, u) ∈ P, the Jacobian satisfies:

ρ( J(x, u) ) < 1,     (26)

where ρ(·) is the spectral radius.

A.4 Stability Result for Tied Weights

Stability of NAIS-Net is described in Theorem 1 from the main paper. For the sake of completeness, a longer version of this theorem is introduced here and will be proven in Section C. Note that proving the following result is equivalent to proving Theorem 1 in the main paper.

Theorem 1.

(Asymptotic stability for shared weights)

If Condition 2 holds, then NAIS-Net with ReLU or tanh activations is Asymptotically Stable with respect to input-dependent equilibrium points. More formally:

x(k) → x̄,  with x̄ depending on the input u (and, for ReLU, on x(0)),  as k → ∞.     (27)

The trajectory is described by:

‖x(k) − x̄‖ ≤ (‖J‖)^k ‖x(0) − x̄‖,     (28)

where ‖·‖ is a suitable matrix norm for which the bound implied by Eq. (26) holds.

In particular:

  1. With tanh activation, the steady state is independent of the initial state, namely, x̄ = x̄(u). The steady state is given by:

    x̄ = −A^{-1}(B u + b).

    The network is Globally Asymptotically Stable with respect to x̄.

  2. If the activation is tanh, then the network is Globally Input-Output (robustly) Stable for any input perturbation ω. The trajectory is described by:

    ‖x(k) − x̄‖ ≤ (‖J‖)^k ‖x(0) − x̄‖ + γ(‖ω‖),     (29)

    with input-output gain

    γ(‖ω‖) = (h ‖B‖ / (1 − ‖J‖)) ‖ω‖.     (30)

    If ‖ω‖ ≤ ρ̄, then the following set is robustly positively invariant:

    X = { x : ‖x − x̄‖ ≤ γ(ρ̄) }.     (31)

    In other words:

    x(0) ∈ X  ⇒  x(k) ∈ X, ∀ k ≥ 0, ∀ ‖ω‖ ≤ ρ̄.     (32)
  3. In the case of ReLU activation, the network admits multiple equilibria, x̄, and is Locally Asymptotically Stable with respect to each x̄. In other words, x(k) → x̄(u, x(0)) as k → ∞.

    The steady state is determined by a continuous piece-wise linear function of the input and of the initial condition. All network steady states are also contained in the set:

    X̄ = { x : A x + B ū + b ≤ 0 (element-wise) }.     (33)

    Moreover, if (x(k), u) ∈ P is verified for some k and for a finite unroll length, K, then over those steps the network follows the affine dynamics:

    x(k+1) = x(k) + h (A x(k) + B u + b).     (34)
  4. If the activation is ReLU, then the network is Globally Input-Output practically Stable. The trajectory is described by:

    ‖x(k) − x̄‖ ≤ (‖J‖)^k ‖x(0) − x̄‖ + γ(‖ω‖) + ζ.     (35)

    The input-output gain is given by Eq. (30), and the constant ζ is the norm ball radius for the initial condition, namely, ‖x(0) − x̄‖ ≤ ζ.

    If the additive input perturbation satisfies ‖ω‖ ≤ ρ̄, then the state converges to the ultimate bound:

    X_ub = { x : ‖x − x̄‖ ≤ γ(ρ̄) + ζ }.     (36)

    In other words:

    x(k) → X_ub as k → ∞,     (37)

    for any input perturbation, ω ∈ W. The perturbed steady state for a constant ω depends also on the initial condition and it is a point in the set:

    { x : A x + B (ū + ω) + b ≤ 0 (element-wise) },     (38)

    where ω ∈ W and ū is the nominal input.

Appendix B NAIS-Net with Untied Weights

B.1 Proposed Network with Untied Weights

The proposed network architecture with skip connections and our robust stability results can be extended to the untied weight case. In particular, a single NAIS-Net block is analysed, where the weights are not shared throughout the unroll.

B.1.1 Fully Connected Layers

Consider in the following the deep ResNet with input skip connections and untied weights:

x(k+1) = x(k) + h σ(A(k) x(k) + B(k) u + b(k)),     (39)

where k indicates the layer, u is the input data, x(k) is the state, and σ is a continuous, differentiable function with bounded slope. The activation operator σ is a vector of (element-wise) instances of a non-linear activation function. In the case of tied weights, the DNN in Eq. (39) can be seen as a finite unroll of an RNN, where the layer index becomes a time index and the input is passed through the RNN at each time step. This is fundamentally a difference equation, also known as a discrete-time dynamical system. The same can be said for the untied case, with the difference that here the weights of the RNN are time-varying (a time-varying dynamical system).

B.1.2 Convolutional Layers

For convolutional networks, the proposed layer architecture can be extended as:

X(k+1) = X(k) + h σ(C(k) ∗ X(k) + D(k) ∗ U + E(k)),     (40)

where X_c(k) is the layer state matrix for channel c, while U_j is the layer input data matrix for channel j (where an appropriate zero padding has been applied) at layer k. An equivalent representation to Eq. (40), for a given layer, can be computed in a similar way as done for the tied weight case in Appendix D.2. In particular, denote the matrix entries for the filter tensors C(k) and D(k) as follows: C_{i→c}(k) is the state convolution filter from state channel i to state channel c, D_{j→c}(k) is the input convolution filter from input channel j to state channel c, and E_c(k) is a bias matrix for the state channel c.

Once again, convolutional layers can be analysed in a similar way to fully connected layers, by means of the following vectorised representation:

x(k+1) = x(k) + h σ(A(k) x(k) + B(k) u + b(k)).     (41)

By means of the vectorised representation in Eq. (41), the theoretical results proposed in this section hold for both fully connected and convolutional layers.

B.2 Non-Autonomous Set

Recall that, for a tanh activation, each layer has its own non-autonomous set, covering the region where the pre-activations remain bounded. For a ReLU activation, we have instead:

P(k) = { (x, u) : (A(k) x + B(k) u + b(k))_i > 0, ∀ i }.     (42)

B.3 Stability Definitions for Untied Weights

For the case of untied weights, let us consider the origin as a reference point (x̄ = 0), as no other steady state is possible without assuming convergence of the weights for k → ∞. This is true if u = 0 and if b(k) = 0 for all k. The following definition is given for stability that is verified uniformly with respect to the changing weights:

Definition B.1.

(Uniform Stability and Uniform Robustness) Consider Eq. (39) and x̄ = 0. The network origin is said to be uniformly asymptotically stable (or simply uniformly stable) and, respectively, uniformly practically stable (IOpS) or uniformly Input-Output Stable (IOS) if, respectively, Definitions A.7, A.8 and A.9 hold with a unique set of describing functions for all possible values of the layer-specific weights, (A(k), B(k), b(k)).

B.4 Jacobian Condition for Stability

The state transfer Jacobian for untied weights is:

J(k) = I + h D(Δ(k)) A(k).     (43)

The following assumption extends our results to the untied weight case:

Condition 3.

For any (x, u) ∈ P(k), the Jacobian satisfies:

ρ( J(k) ) < 1, ∀ k,     (44)

where ρ(·) is the spectral radius.

The condition in Eq. (44) can be enforced during training for each layer using the procedures presented in the paper.

B.5 Stability Result for Untied Weights

Recall that we have taken the origin as the reference equilibrium point, namely, x̄ = 0 is a steady state if u = 0 and b(k) = 0. Without loss of generality, we will assume b(k) = 0 and treat the input as a disturbance, ω, for the robust case. The following result is obtained:

Theorem 2.

(Main result for untied weights)

If Condition 3 holds, then NAIS-Net with untied weights and with ReLU or tanh activations is Globally Uniformly Stable. In other words, there is a set X_ub that is an ultimate bound, namely:

x(k) → X_ub as k → ∞, for all x(0).     (45)

The describing functions are:

β(‖x(0)‖, k) = α^k ‖x(0)‖,  γ(s) = (h sup_k ‖B(k)‖ / (1 − α)) s,  with α = sup_k ‖J(k)‖ < 1,     (46)

where ‖·‖ is the matrix norm providing the tightest bound to the left-hand side of Eq. (44), where the spectral radius is defined.

In particular, we have:

  1. If the activation is tanh, then the network is Globally Uniformly Input-Output robustly Stable for any input perturbation ω ∈ W. Under no input action, namely if ω = 0, the origin is Globally Asymptotically Stable. If W = B(ρ̄), where B(ρ̄) is the norm ball of radius ρ̄, then the following set is RPI:

    X = { x : ‖x‖ ≤ γ(ρ̄) }.     (47)
  2. If the activation is ReLU, then the network with zero nominal input is Globally Uniformly Input-Output practically Stable for input perturbations in a compact set W. The describing functions are given by Eq. (46) and the constant term is ζ = c̄, where c̄ is the norm ball radius for the initial condition, namely,

    X_0 = { x : ‖x‖ ≤ c̄ }, with x(0) ∈ X_0.     (48)

    This set is PI under no input action.

    If W = B(ρ̄), where B(ρ̄) is the norm ball of radius ρ̄, then the state converges to the following ultimate bound:

    X_ub = { x : ‖x‖ ≤ γ(ρ̄) + ζ }.     (49)

Note that, if the network consists of a combination of fully connected and convolutional layers, then a single norm inequality with corresponding β and γ can be obtained by means of matrix norm identities. For instance, since any two matrix norms ‖·‖_a and ‖·‖_b satisfy ‖·‖_a ≤ c ‖·‖_b for some c > 0, one could consider the global describing function obtained from the more conservative of the layer-wise bounds. Similarly for the gain γ.

Appendix C Stability Proofs

Stability of a system can be assessed, for instance, by use of the so-called Lyapunov indirect method [44], [27], [41]: if the linearised system around an equilibrium is stable, then the original system is also stable. This also applies to linearisations around all possible trajectories if there is a single equilibrium point, as in our case. In the following, the bias term is sometimes omitted without loss of generality (one can add it to the input). Note that the proofs are also valid for RNNs with varying input sequences u(k), with asymptotic results holding for converging input sequences, u(k) → ū. Let us consider the case of b = 0, without loss of generality, as the bias can be absorbed into the input.

The untied case is covered first. The proposed results make use of the 1-step state transfer Jacobian in Eq. (43). Note that this is not the same as the full input-to-output Jacobian, for instance the one defined in [10]. The full Jacobian also contains the input-to-state map (derived below), which does not affect stability. The input-to-state map will be used later on to investigate robustness as well as the asymptotic network behaviour. First, we will focus on stability, which is determined by the 1-step state transfer map. For the sake of brevity, denote the layer state Jacobian from Eq. (43) as:

J(k) = I + h D(Δ(k)) A(k).

Define the discrete-time convolution sum as:

x_forced(k) = Σ_{j=0}^{k-1} H(k−1−j) v(j).

The above represents the forced response of a linear time-invariant (LTI) system, namely the response to an input from a zero initial condition, where H is the (in the LTI case stationary) impulse response. Conversely, the free response of an autonomous system from a non-zero initial condition is given by:

x_free(k) = ( Π_{j=0}^{k-1} J(j) ) x(0).

The free response tends to zero for an asymptotically stable system. Considering linearised dynamics allows us to use superposition; in other words, the linearised system response can be analysed as the sum of the free and forced responses:

x(k) = x_free(k) + x_forced(k).

Note that this is not true for the original network, just for its linearisation.

For the considered network, the forced response of the linearised (time-varying) system to an impulse at time j, evaluated at time k, is given by:

H(k, j) = ( Π_{i=j+1}^{k-1} J(i) ) h D(Δ(j)) B(j).     (50), (51)

Therefore, the forced response of the linearised system is:

x_forced(k) = Σ_{j=0}^{k-1} H(k, j) u(j).     (52)

Note that: