1 Introduction
Deep learning initially transformed data representations by nested compositions of affine transformations followed by nonlinear activations; the affine transformation can be, for example, a fully connected weight matrix or a convolution operation. Residual networks [6] introduce an identity skip connection that bypasses these transformations, allowing the nonlinear activation to act as a perturbation term about the identity. Veit et al. [13] introduced an algebraic structure showing that residual networks can be understood as the collection of all possible forward-pass paths of subnetworks, although this algebraic structure ignores the intuition that the nonlinear activation acts as a perturbation from the identity. Lin and Jegelka [9] showed that a residual network with a single node per layer and ReLU activation can act as a universal approximator, where it learns something similar to a piecewise linear finite-mesh approximation of the data manifold. Recent work consistent with the original intuition of learning perturbations from the identity has shown that residual networks, with their first-order perturbation term, can be formulated as a finite difference approximation of a first-order differential equation [5]. This has the interesting consequence that residual networks are smooth dynamic equations through the layers of the network. Additionally, one may then define entire classes of differentiable transformations over the layers, and induce network architectures from their finite difference approximations.
Work by Chang et al. [3] also considered residual neural networks as forward difference approximations to transformations. This work has been extended to develop new network architectures by using central differencing, as opposed to forward differencing, to approximate the set of coupled first-order differential equations, called the Midpoint Network [2]. Similarly, other researchers have used different numerical schemes to approximate the first-order ordinary differential equations, such as the linear multistep method, to develop the Linear Multistep architecture [10]. This is different from the previous work [5], where entire classes of finite difference approximations to $n$-th order differential equations are defined. Haber and Ruthotto [4] considered how stability techniques from finite difference methods can be applied to improve first- and second-order smooth neural networks. For example, they suggest requiring that the real parts of the eigenvalues of the Jacobians of the transformations be approximately equal to zero. This ensures that little information about the signal is lost, and that the input data does not diverge as it progresses through the network.
In the current work, closed form solutions are found in Section 2 for the state space representations of both general $n$-th order network architectures and general additive densely connected network architectures [7], where a summation operation replaces the concatenation operation. The reason for this replacement is that the concatenation operation explicitly increases the embedding dimension, while the summation operation implicitly increases the embedding dimension. It is then shown in Section 3 that the embedding dimension of an $n$-th order network is increased by a factor of $n$ when compared to an equivalent zeroth-order (standard) network or first-order (residual) network, and thus the number of parameters needed to maintain transformations on the same embedding dimension is reduced by up to a factor of $n^2$. Section 4 presents the results of experiments for validation of the proposed theory, while the details are provided in the Appendix. The paper is concluded in Section 5 along with recommendations for future research.
2 Smooth Network Architectures
This section develops a relation between skip connections in network architectures and algebraic structures of dynamical systems of equations. The network architecture can be thought of as a map $h: \mathcal{X}^0 \rightarrow \mathcal{X}^L$ over the data manifold $\mathcal{M}$, where $\mathcal{X}^0$ is the set of input data/initial conditions and $\mathcal{X}^L$ is the output set for an $L$-layer deep neural network. We will write $x^l$ to denote the coordinate representation of the data manifold at layer $l$. In fact the manifold is a Riemannian manifold, as it has the additional structure of possessing a smoothly varying metric on its cotangent bundle [5]; however, for the current purpose we will only consider the manifold's structure to be smooth.
In order to reduce notational burden, as well as to keep the analysis as general as possible, we will denote the layer nonlinearity as the map $f: x^l \mapsto f(x^l)$, where $x^l$ is the output of layer $l$. For example, if it is a fully connected layer with bias and sigmoid nonlinearity, then $f(x^l) = \sigma(W^l x^l + b^l)$; or, if it is a convolution block in a residual network, then

$f(x^l) = \mathrm{BN}(W_2^l \ast \mathrm{LReLU}(\mathrm{BN}(W_1^l \ast x^l)))$

where $\ast$ is the convolution operation, $W_1^l$ and $W_2^l$ are the learned filter banks, and LReLU and BN are the leaky-ReLU activation and batch-normalization functions. The nonlinear function $f$ can be thought of as a forcing function, in the sense of dynamical systems theory. A standard architecture without skip connections has the following form:

$x^{l+1} = f(x^l) \qquad (1)$
The first subsection will define and review smooth residual [6] architectures. The second subsection expands on the first to define and study the entire class of $n$-th order architectures [5], and develops the state space formulation for these architectures to show that the effective embedding dimension increases by a multiple of $n$. Similarly, the third subsection will develop the state space formulation for densely connected networks [7], and will show that for these dense networks with $n$ layerwise skip connections, the effective embedding dimension again increases by a multiple of $n$.
2.1 Residual Networks as Dynamical Equations
The residual network [6] has a single skip connection and is therefore simply a first-order dynamic transformation:

$x^{l+1} = x^l + h\,f(x^l) \qquad (2)$

The $h$ term on the right hand side of Equation 2 is explicitly introduced here to remind us that $h\,f(x^l)$ is a perturbation term. The accuracy of this assumption is verified by experiment in Section 4.2.
If the equation is defined over $t \in [0, 1]$, then the partitioning of the dynamical system [5] takes the following form:

$x^{t+h} = x^t + h\,f(x^t) \qquad (3)$

where $h$ can in general vary with $t$, as long as $h$ still goes to zero as the number of layers $L \rightarrow \infty$. To reduce notation, this paper will write $h = 1/L$ for all layers. Notation is slightly changed here, by taking $t \in \{0, 1/L, 2/L, \ldots, 1\}$ and indexing the layers by the fractional index $t$ instead of the integer index $l$; however, this is inherent to switching between finite difference equations and continuous differential equations.
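Equation 3 is exactly a forward-Euler step of a differential equation. The following minimal sketch illustrates this correspondence; the forcing function $f(x) = -x$ is an arbitrary stand-in for a learned layer, not part of the original experiments:

```python
import math

def residual_forward(x0, f, L):
    """Run a residual stack x <- x + (1/L) * f(x): L forward-Euler steps
    of the differential equation dx/dt = f(x) over t in [0, 1]."""
    x, h = x0, 1.0 / L
    for _ in range(L):
        x = x + h * f(x)
    return x

# Stand-in forcing function: f(x) = -x, whose exact flow at t = 1
# is x0 * exp(-1).
f = lambda x: -x
coarse = residual_forward(1.0, f, 10)
fine = residual_forward(1.0, f, 1000)
exact = math.exp(-1.0)
# The deeper "network" tracks the continuous trajectory more closely.
assert abs(fine - exact) < abs(coarse - exact)
```

This is the sense in which a residual network is a finite difference approximation: as depth grows with $h = 1/L$, the layerwise updates converge to a smooth trajectory.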
2.2 Architectures Induced from Smooth Transformations
Following the work of Hauser and Ray [5], we will call network architectures $n$-th order architectures depending on how many times the finite difference operators have been applied to the coordinate representation $x^l$.
We define the forward and backward finite difference operators to be $\Delta^+ x^l := x^{l+1} - x^l$ and $\Delta^- x^l := x^l - x^{l-1}$, respectively. Furthermore, to see the various order derivatives of $x$ at the layer $l$, we use these finite difference operators to make the finite difference approximations for $n = 1$, $n = 2$ and general $n$, while explicitly writing the perturbation term in terms of $h$.
$\Delta^+ x^l = x^{l+1} - x^l = h\,f(x^l) \qquad (4a)$

$\Delta^+ \Delta^- x^l = x^{l+1} - 2x^l + x^{l-1} = h^2 f(x^l) \qquad (4b)$

$\Delta^+ (\Delta^-)^{n-1} x^l = \sum_{j=0}^{n} (-1)^j \binom{n}{j}\, x^{l+1-j} = h^n f(x^l) \qquad (4c)$
The notation $(\Delta^-)^{n-1}$ is defined as $n-1$ applications of the operator $\Delta^-$, and $\binom{n}{j}$ is the binomial coefficient, read as $n$ choose $j$. We take one forward difference and the remaining as backward differences so that the next layer (forward) is a function of the previous layers (backward).
From this formulation, depending on the order of smoothness, the network implicitly creates interior/ghost elements, borrowing language from finite difference methods, to properly define the initial conditions. One can view a ghost element as a pseudo-element that lies outside the domain, used to control the gradient. For example, with a second-order architecture from Equation 4b, one needs the initial position and velocity in order to be able to define $x^{l+1}$ as a function of $x^l$ and $x^{l-1}$. In the next subsection it will be shown that the dense network [7] can be interpreted as providing the interior/ghost elements needed to initialize the dynamical equation.
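The general update of Equation 4c can be sketched by unrolling the binomial coefficients; for $n = 1$ it reduces to the residual update, and for $n = 2$ to $x^{l+1} = 2x^l - x^{l-1} + h^2 f(x^l)$. The forcing function below is a toy stand-in, not a learned layer:

```python
import math

def nth_order_step(history, f, n, h):
    """One layer of an n-th order network (Equation 4c):
    x^{l+1} = h^n f(x^l) - sum_{j=1..n} (-1)^j C(n, j) x^{l+1-j},
    where `history` holds [x^{l-n+1}, ..., x^l] (most recent last)."""
    xl = history[-1]
    acc = (h ** n) * f(xl)
    for j in range(1, n + 1):
        acc -= ((-1) ** j) * math.comb(n, j) * history[-j]
    return acc

f = lambda x: -x
h = 0.1
x_prev, x_cur = 1.0, 1.1

# n = 1 reduces to the residual update x + h f(x).
assert abs(nth_order_step([x_cur], f, 1, h) - (x_cur + h * f(x_cur))) < 1e-12
# n = 2 gives x^{l+1} = 2 x^l - x^{l-1} + h^2 f(x^l).
assert abs(nth_order_step([x_prev, x_cur], f, 2, h)
           - (2 * x_cur - x_prev + h * h * f(x_cur))) < 1e-12
```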
To see the equivalent state space formulation of the $n$-th order equation defined by Equation 4c, first we define the states as the various order finite differences of the transformation at layer $l$:

$u_0^l := x^l \qquad (5a)$

$u_1^l := \Delta^- x^l \qquad (5b)$

$u_{n-1}^l := (\Delta^-)^{n-1} x^l \qquad (5c)$
We then have the recursive relation $u_i^{l+1} = u_i^l + u_{i+1}^{l+1}$, initialized at the base case $u_{n-1}^{l+1} = u_{n-1}^l + h^n f(u_0^l)$ from Equation 4c, as the means to find the closed form solution by induction. Assuming $u_{i+1}^{l+1} = \sum_{j=i+1}^{n-1} u_j^l + h^n f(u_0^l)$, we have the following:

$u_i^{l+1} = u_i^l + u_{i+1}^{l+1} = u_i^l + \sum_{j=i+1}^{n-1} u_j^l + h^n f(u_0^l) = \sum_{j=i}^{n-1} u_j^l + h^n f(u_0^l) \qquad (6)$

The first equality follows from the recursive relation and the second from the induction hypothesis, anchored at the base case. This shows that the state space formulation of the $n$-th order neural network is given by:

$u_i^{l+1} = \sum_{j=i}^{n-1} u_j^l + h^n f(u_0^l), \qquad i = 0, 1, \ldots, n-1 \qquad (7)$
In matrix form, the state space formulation is as follows:
$\begin{pmatrix} u_0 \\ u_1 \\ \vdots \\ u_{n-1} \end{pmatrix}^{l+1} = \begin{pmatrix} I & I & \cdots & I \\ 0 & I & \cdots & I \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & I \end{pmatrix} \begin{pmatrix} u_0 \\ u_1 \\ \vdots \\ u_{n-1} \end{pmatrix}^{l} + h^n \begin{pmatrix} f(u_0^l) \\ f(u_0^l) \\ \vdots \\ f(u_0^l) \end{pmatrix} \qquad (8)$
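The equivalence between the direct $n$-th order update and its first-order state space form (Equations 7 and 8) can be checked numerically; the forcing function and values below are toy stand-ins:

```python
def state_space_step(u, f, h):
    """One step of the equivalent first-order system (Equations 7/8):
    u_i^{l+1} = sum_{j >= i} u_j^l + h^n f(u_0^l)."""
    n = len(u)
    force = (h ** n) * f(u[0])
    return [sum(u[i:]) + force for i in range(n)]

# Toy second-order (n = 2) check against x^{l+1} = 2 x^l - x^{l-1} + h^2 f(x^l).
f = lambda x: -x
h = 0.1
x_prev, x_cur = 1.0, 1.1
u = [x_cur, x_cur - x_prev]          # u_0 = x^l, u_1 = backward difference
u_next = state_space_step(u, f, h)
x_next = 2 * x_cur - x_prev + h * h * f(x_cur)
assert abs(u_next[0] - x_next) < 1e-12             # positions agree
assert abs(u_next[1] - (x_next - x_cur)) < 1e-12   # u_1 stays the difference
```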
We use the notation where $I$ is the $d \times d$ identity matrix and $0$ is the $d \times d$ matrix of all zeros. From Equation 7, and equivalently Equation 8, it is understood that if there are $d$ nodes at layer $l$, i.e. $f$ maps $\mathbb{R}^d$ to $\mathbb{R}^d$, then an $n$-th order smooth neural network can be represented in state space form as a map from $\mathbb{R}^{nd}$ to $\mathbb{R}^{nd}$. Furthermore, it is seen that the $n$ state variables are transformed by the shared activation function $f$, which has a $d \times d$ parameter matrix, as opposed to a full $nd \times nd$ parameter matrix, thus reducing the number of parameters by a factor of $n^2$.

2.3 Additive Dense Network for General $n$
The additive dense network, which is inspired by the dense network [7], is defined for general $n$ by the following system of equations:

$x^{l+1} = \sum_{j=0}^{n-1} (-1)^j \binom{n}{j+1}\, x^{l-j} + \sum_{j=0}^{n-1} f_j(x^{l-j}) \qquad (9)$

Here each layer receives an additive skip connection from, and applies its own forcing function $f_j$ to, each of the previous $n$ layers; for $n = 1$ this reduces to the residual network.
To put this into a state space form, we will need to transform it into a system of finite difference equations. The general $n$-th order difference equation, with one forward difference and all of the remaining differences backward, is used because from a dense network perspective the value at layer $l+1$ (forward) is a function of layers $l, l-1, \ldots, l-n+1$ (backward):

$\Delta^+ (\Delta^-)^{n-1} x^l = (\Delta^-)^n x^{l+1} = \sum_{j=0}^{n} (-1)^j \binom{n}{j}\, x^{l+1-j} \qquad (10)$

$\Delta^+ (\Delta^-)^{n-1} x^l = \sum_{j=0}^{n-1} f_j(x^{l-j}) \qquad (11)$
Notice that we used the operator identity of Equation 10. Equation 11 is equivalent to the additive dense network formulation of Equation 9, only reformulated into a form that lends itself to interpretation through finite differencing. We then define the network states as the various order finite differences across layers:

$u_k^l := (\Delta^-)^k x^l = \sum_{j=0}^{k} (-1)^j \binom{k}{j}\, x^{l-j}, \qquad k = 0, 1, \ldots, n-1 \qquad (12)$
We still need to find the representations of the $x^{l-k}$'s in terms of the states $u_j^l$. To do this, we will use the property of binomial inversions of sequences [11]:

$u_k^l = \sum_{j=0}^{k} (-1)^j \binom{k}{j}\, x^{l-j} \;\;\Longrightarrow\;\; x^{l-k} = \sum_{j=0}^{k} (-1)^j \binom{k}{j}\, u_j^l \qquad (13)$

The left hand side of Equation 13 is the definition of the states from Equation 12, written explicitly as the backward difference of the sequence of $x^{l-j}$'s, and the implication arrow is the binomial inversion of sequences. This gives the representation of the $x^{l-k}$'s in terms of the states $u_j^l$.
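The binomial inversion of Equation 13 is an involution: applying the signed-binomial transform twice recovers the original sequence. This can be checked directly on an arbitrary stand-in sequence:

```python
import math

def binomial_transform(seq):
    """Signed binomial transform b_k = sum_j (-1)^j C(k, j) a_j,
    i.e. the k-th backward difference of the sequence (Equation 12)."""
    return [sum((-1) ** j * math.comb(k, j) * seq[j] for j in range(k + 1))
            for k in range(len(seq))]

x = [3.0, 1.0, 4.0, 1.0, 5.0]      # stand-in for x^l, x^{l-1}, ...
u = binomial_transform(x)          # states u_k (backward differences)
recovered = binomial_transform(u)  # inversion recovers the x's (Equation 13)
assert all(abs(a - b) < 1e-9 for a, b in zip(recovered, x))
```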
It is now straightforward to find the state space representation of the general $n$-th order additive dense network:

$u_i^{l+1} = \sum_{k=i}^{n-1} u_k^l + \sum_{j=0}^{n-1} f_j\Big( \sum_{k=0}^{j} (-1)^k \binom{j}{k}\, u_k^l \Big) \qquad (14)$
Equation 14 is true for each $i = 0, 1, \ldots, n-1$, and so may be more clear when written as a matrix equation:

$\begin{pmatrix} u_0 \\ u_1 \\ \vdots \\ u_{n-1} \end{pmatrix}^{l+1} = \begin{pmatrix} I & I & \cdots & I \\ 0 & I & \cdots & I \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & I \end{pmatrix} \begin{pmatrix} u_0 \\ u_1 \\ \vdots \\ u_{n-1} \end{pmatrix}^{l} + \begin{pmatrix} I & I & \cdots & I \\ I & I & \cdots & I \\ \vdots & & & \vdots \\ I & I & \cdots & I \end{pmatrix} \begin{pmatrix} f_0(x^l) \\ f_1(x^{l-1}) \\ \vdots \\ f_{n-1}(x^{l-n+1}) \end{pmatrix} \qquad (15)$
Remember that if there are $d$ nodes per layer, then each $u_k^l$ lies in $\mathbb{R}^d$, and so these matrices are block matrices whose blocks are $d \times d$ multiples of the identity. For example, in the binomial inversion of Equation 13 that supplies the arguments of the $f_j$'s, the $(j, k)$ entry is the $d \times d$ matrix with the number $(-1)^k \binom{j}{k}$ along the diagonal, for $0 \le k \le j \le n-1$, while each remaining entry is the $d \times d$ matrix of all zeros.
Equation 14, and equivalently Equation 15, is the state space representation of the additive dense network for general $n$. It is seen that by introducing $n$ lags into the dense network, the dimension of the state space increases by a multiple of $n$ for the equivalent first-order system, since we are concatenating all of the $u_k^l$'s to define the complete state of the system as $u^l := (u_0^l, u_1^l, \ldots, u_{n-1}^l)$, which lies in $\mathbb{R}^{nd}$.
Using notation from dynamical systems and control theory, this can also be represented succinctly as follows:

$u^{l+1} = A\, u^l + \sum_{j=0}^{n-1} B_j\, f_j(x^{l-j}) \qquad (16)$

where $A$ is the first block matrix of Equation 15 and $B_j$ is defined as the $j$-th block column of the second block matrix of Equation 15. It is seen that the neural network activations $f_j$, for all $j$, act as the controller of this system as it moves forward in layers (analogous to time). In this sense, the gradient descent training process learns a controller that maps the data from input to target.
3 Network Capacity and Skip Connections
The objective of this section is to partially explain why imposing higher-order skip connections on the network architecture is likely to be beneficial. A first-order system has one state variable, e.g. position, while a second-order system has two state variables, e.g. position and velocity. In general, an $n$-th order system has $n$ state variables, namely $u_0, u_1, \ldots, u_{n-1}$.
Recall that when $f$ maps $\mathbb{R}^d$ to $\mathbb{R}^d$, the equivalent first-order system of an $n$-th order system maps $\mathbb{R}^{nd}$ to $\mathbb{R}^{nd}$. This holds since each of the $n$ functions mapping $\mathbb{R}^d$ to $\mathbb{R}^d$ operates independently of the others through its independently learned weight matrices, and so their concatenation spans $\mathbb{R}^{nd}$.
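The parameter counting here can be made concrete with a small sketch; the layer width and order below are arbitrary stand-ins:

```python
def params_nth_order(d, n):
    """An n-th order network learns one d x d weight matrix per layer,
    even though its effective state is nd-dimensional."""
    return d * d

def params_equiv_first_order(d, n):
    """A standard first-order network acting directly on the full
    nd-dimensional state needs a dense nd x nd weight matrix."""
    return (n * d) * (n * d)

d, n = 64, 3
assert params_equiv_first_order(d, n) == n * n * params_nth_order(d, n)
# i.e. the higher-order skip connections save a factor of n^2 parameters
# while maintaining the same embedding dimension.
```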
This immediately implies that the weight matrix for transforming the $n$-th order system is $d \times d$, while the weight matrix for transforming the equivalent first-order system is $nd \times nd$. Therefore, by imposing $n$ skip connections on the network architecture, from a dynamical systems perspective, we only need to learn up to $1/n^2$ as many parameters to maintain the same embedding dimension, when compared to the equivalent zeroth- or first-order system. Also notice that the matrix transforming the $x^{l-j}$'s to the state vectors $u_k^l$'s is a lower block-triangular matrix with nonzero diagonal, and so it is full rank; thus the state variables defined by this transformation do not introduce degeneracies.

4 Numerical Experiments
This section describes experiments designed to understand and validate the proposed theory. The simulations were run in TensorFlow [1], trained via error backpropagation [12] with gradients estimated by the Adam optimizer [8].

4.1 Visualizing Implicit Dimensions
An experiment was conducted to visualize and understand the implicit dimensions induced by the higher-order dynamical system. One-dimensional data was constructed such that part of the data is the red class while the rest is the blue class, and the blue data is separated so that half is to the left of the red data and half is to the right. No sequence of continuous single-neuron transformations can put this data into a form that is linearly separable by a hyperplane, so at best one could achieve only limited accuracy. This is the case with the standard residual network, as seen in Figure 2(a): the first-order architecture has only one state variable, namely position, and therefore cannot place a hyperplane to linearly separate the data along the positional dimension.
In contrast, the second-order architecture has two state variables, namely position and velocity, and therefore its equivalent first-order system is two-dimensional. When visualizing both state variables, one sees that the data does in fact get shaped such that a hyperplane along only the positional dimension can correctly separate the data. If one were only looking at the positional state variable, i.e. the output of the single node, it would seem as if the red and blue curves were passing through each other; however, in the entire two-dimensional space we see that this is not the case. Even though this network has only a single node per layer, and the weight matrices are just single scalars, the equivalent first-order dynamical system has two dimensions, and therefore the one-dimensional data can be twisted in this two-dimensional phase space into a form such that it is linearly separable along the one positional dimension.
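The mechanism can be sketched in a few lines: under a second-order (position-velocity) update, two one-dimensional points may swap their ordering in position, which no continuous first-order one-dimensional flow can do. The dynamics below are a hand-picked force-free illustration, not a trained network:

```python
def second_order_rollout(x0, v0, steps, h=0.1):
    """Roll out x^{l+1} = 2 x^l - x^{l-1}, i.e. constant-velocity motion:
    the force-free case of a second-order network."""
    x_prev, x = x0, x0 + h * v0
    for _ in range(steps):
        x_prev, x = x, 2 * x - x_prev
    return x

# Two points: "blue" starts left of "red" but carries a larger velocity.
blue = second_order_rollout(x0=0.0, v0=2.0, steps=20)
red = second_order_rollout(x0=1.0, v0=0.0, steps=20)
# In position alone the trajectories appear to pass through each other,
# but in (position, velocity) phase space they never collide.
assert blue > red
```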
4.2 Estimating the Magnitude of the Perturbations
The purpose of this subsection is to quantify the magnitude of the perturbation, and thereby validate the perturbation approximations being made. In order for $h\,f(x^l)$ to be a valid perturbation from the identity transformation, we require $\|h\,f(x^l)\| \ll \|x^l\|$. Additionally, assuming the image travels a constant distance from input to output, one would expect the average size of the perturbation to scale as $1/L$. That is, as one increases the number of layers $L$, the average size of each partition region (mesh size) should shrink as $1/L$. Experiments were conducted on MNIST, measuring the size of the perturbation term for networks with two sections of residual blocks, with a varying number of blocks in each section; the results are seen in Figure 4. Details of this experiment are given in the appendix. Several conclusions are drawn from this experiment and are discussed below.
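The measured quantity $\|h\,f(x^l)\| / \|x^l\|$ is simple to sketch; here random (untrained) weights and $f(x) = \tanh(Wx)$ serve as stand-ins for the trained convolutional blocks:

```python
import numpy as np

def perturbation_sizes(x, weights, L):
    """Relative perturbation ||h f(x)|| / ||x|| at each of L residual layers,
    with f(x) = tanh(W x) as a stand-in forcing function."""
    h, sizes = 1.0 / L, []
    for W in weights:
        step = h * np.tanh(W @ x)
        sizes.append(np.linalg.norm(step) / np.linalg.norm(x))
        x = x + step
    return sizes

rng = np.random.default_rng(0)
d = 16
means = {}
for L in (10, 100):
    weights = [rng.normal(scale=1 / np.sqrt(d), size=(d, d)) for _ in range(L)]
    means[L] = float(np.mean(perturbation_sizes(rng.normal(size=d), weights, L)))

assert means[10] < 1.0         # perturbations are small relative to the state
assert means[100] < means[10]  # and shrink with depth, roughly as 1/L
```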

It is seen that the magnitude of the perturbation term, for sufficiently large $L$, is in fact much less than one. At least in this setting, this experimentally validates the intuition that residual networks learn perturbations from the identity function.

It is seen that as the number of layers $L$ increases, the magnitude of the perturbation goes as $1/L$, suggesting that there exists a total distance the image travels as it passes through the network. This implies that the image can be interpreted as moving along a trajectory from input to output, in which case the network is a finite difference approximation to the differential equation governing this trajectory. Performing a linear regression on the measured perturbation sizes yields the "computational distance" the image travels through each section; the distance through the first section is larger, which may suggest that the first section is more important when processing the image than the second. Taken literally, it would imply that the average MNIST image travels a fixed total "computational distance" from the low-level input representation to the high-level abstract output representation. This measure is a depth-invariant computational distance that the data travels through the network.

The above analytical approach suggests a systematic way of determining the depth of a network when designing new architectures. If one requires a certain minimum mesh size, then after estimating the computational distances one can calculate the minimum number of layers required to achieve a mesh of this size. For example, in this MNIST experiment, a given required average mesh size determines how many layers each section needs, with the first section requiring more layers than the second in proportion to its larger computational distance.
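This depth rule can be sketched directly; the per-section distances and mesh size below are hypothetical stand-ins, not the measured MNIST values:

```python
import math

def layers_needed(computational_distance, mesh_size):
    """Minimum number of layers so that the average per-layer step
    (mesh size) is at most the required value."""
    return math.ceil(computational_distance / mesh_size)

# Hypothetical per-section distances, e.g. estimated by linear regression.
distances = {"section_1": 3.0, "section_2": 1.0}
required_mesh = 0.05
depths = {k: layers_needed(d, required_mesh) for k, d in distances.items()}
assert depths["section_1"] == 60 and depths["section_2"] == 20
```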
4.3 Comparison of Various Order Network Architectures
The purpose of this subsection is to experimentally compare the classification performance of the various order architectures described in this paper. The architectures tested are the $n$-th order networks for several values of $n$, as well as the additive dense network for several values of $n$; note that the first-order additive dense network is the same as the first-order (residual) network. In all of the experiments, the ResNet architecture was first designed to work well, and then, under these exact conditions, the described skip connections were introduced, changing nothing else. Further details of the experiments can be found in the appendix.
Table 1: Test errors on CIFAR-10 and SVHN for the various order networks and the three additive dense networks.
It is seen in Table 1 that on both CIFAR-10 and SVHN, most of the smooth architectures perform similarly well, one of the higher-order architectures performs much more poorly, and the three additive dense networks perform fairly well; the architecture achieving the lowest test error differs between CIFAR-10 and SVHN. A likely reason why one architecture performs significantly worse than the rest is that it imposes significant restrictions on how data flows through the network, so the network does not have sufficient flexibility in how it can process the data.
5 Conclusions and Future Work
This paper has developed a theory of skip connections in neural networks in the state space setting of dynamical systems, with appropriate algebraic structures. This theory was applied to find closed form solutions for the state space representations of both $n$-th order networks and additive dense networks. This immediately shows that these $n$-th order network architectures are equivalent, from a dynamical systems perspective, to defining $n$ coupled first-order systems. By design, this reduces the number of parameters needed by up to a factor of $n^2$ while retaining the same state space embedding dimension as the equivalent zeroth- and first-order networks.
Three experiments were conducted to validate and understand the proposed theory. The first used a carefully designed dataset such that, restricted to a certain number of nodes, the neural network can only properly separate the classes by using the implicit state variables, such as velocity, in addition to position. The second experiment, on MNIST, measured the magnitude of the perturbation term with a varying number of layers, yielding a depth-invariant computational distance that the data travels from the low-level input representation to the high-level output representation. The third experiment compared the various order architectures on benchmark image classification tasks. This paper explains in part why skip connections have been so successful, and further motivates the development of architectures of these types.
While there are many possible directions for further theoretical and experimental research, the authors suggest the following topics of future work:

Rigorous design of network architectures from the algebraic properties of the state space model, as opposed to engineering intuitions.

Analysis of the topologies of data manifolds to determine relationships between data manifolds and minimum embedding dimension, in a similar manner to the Whitney embedding theorems.

Investigations of the computational distance for different, more complex data sets. As mentioned before, this invariant measure could be potentially used to systematically define the depth of the network, as well as to characterize the complexity of the data.
Acknowledgments
Samer Saab Jr has been supported by the Walker Fellowship from the Applied Research Laboratory at the Pennsylvania State University. The work reported here has been supported in part by the U.S. Air Force Office of Scientific Research (AFOSR) under Grant Nos. FA9550-15-1-0400 and FA9550-18-1-0135 in the area of dynamic data-driven application systems (DDDAS). Any opinions, findings and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the sponsoring agencies.
Appendix: Description of Numerical Experiments
For the experiment of Section 4.2, no data augmentation was used, and the batch size was held constant. In the network, each block has the form $\mathrm{BN}(W_2 \ast \mathrm{LReLU}(\mathrm{BN}(W_1 \ast x)))$, where $\ast$ is the convolution operation, $W_1$ and $W_2$ are the learned filters, and LReLU and BN are the leaky-ReLU activation and batch-normalization functions. For specifying image sizes, we use the notation height $\times$ width $\times$ channels. The first section of the network operates at a constant feature map size; a strided convolution is then applied to downsample the feature maps for the second section. After the second section, global average pooling was performed to reduce the feature maps to vectors, which were fed into a fully connected layer for softmax classification.
For Section 4.3, the batch size was updated automatically when a trailing window of the validation error stopped decreasing. In CIFAR-10, a portion of the training samples was used for validation, while in SVHN a random collection of the training and extra data was used for training, with the remainder used for validation. The only data augmentation used during training was that the images were flipped left-right, padded with zeros and randomly cropped back to their original size.
In the networks of Section 4.3, each section of constant feature map size contained residual blocks, all having forcing functions of the same convolutional block form as above. The first, second and third sections operate on successively smaller images, with downsampling by convolution strides and the number of channels increased by using larger filter banks. Global average pooling was then performed on the last layers to reduce the feature maps to vectors, each of which was fed into a fully connected layer with leaky-ReLU applied, and then a final fully connected layer for softmax classification.
References
 [1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Largescale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
 [2] Bo Chang, Lili Meng, Eldad Haber, Lars Ruthotto, David Begert, and Elliot Holtham. Reversible architectures for arbitrarily deep residual neural networks. arXiv preprint arXiv:1709.03698, 2017.
 [3] Bo Chang, Lili Meng, Eldad Haber, Frederick Tung, and David Begert. Multilevel residual networks from dynamical systems view. arXiv preprint arXiv:1710.10348, 2017.
 [4] Eldad Haber and Lars Ruthotto. Stable architectures for deep neural networks. Inverse Problems, 34(1):014004, 2017.
 [5] Michael Hauser and Asok Ray. Principles of riemannian geometry in neural networks. In Advances in Neural Information Processing Systems, pages 2804–2813, 2017.

 [6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
 [7] Gao Huang, Zhuang Liu, Kilian Q Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, volume 1, page 3, 2017.
 [8] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 [9] Hongzhou Lin and Stefanie Jegelka. Resnet with oneneuron hidden layers is a universal approximator. arXiv preprint arXiv:1806.10909, 2018.
 [10] Yiping Lu, Aoxiao Zhong, Quanzheng Li, and Bin Dong. Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations. arXiv preprint arXiv:1710.10121, 2017.
 [11] Helmut Prodinger. Some information about the binomial transform. 1993.
 [12] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning internal representations by error propagation. Technical report, California Univ San Diego La Jolla Inst for Cognitive Science, 1985.
 [13] Andreas Veit, Michael J Wilber, and Serge Belongie. Residual networks behave like ensembles of relatively shallow networks. In Advances in Neural Information Processing Systems, pages 550–558, 2016.