State Space Representations of Deep Neural Networks

06/11/2018 ∙ by Michael Hauser, et al. ∙ Penn State University

This paper deals with neural networks as dynamical systems governed by differential or difference equations. It shows that the introduction of skip connections into network architectures, such as residual networks and dense networks, turns a system of static equations into a system of dynamical equations with varying levels of smoothness on the layer-wise transformations. Closed form solutions for the state space representations of general dense networks, as well as k^th order smooth networks, are found in general settings. Furthermore, it is shown that imposing k^th order smoothness on a network architecture with d-many nodes per layer increases the state space dimension by a multiple of k, and so the effective embedding dimension of the data manifold is k · d-many dimensions. It follows that network architectures of these types reduce the number of parameters needed to maintain the same embedding dimension by a factor of k^2 when compared to an equivalent first-order, residual network, significantly motivating the development of network architectures of these types. Numerical simulations were run to validate parts of the developed theory.


1 Introduction

The way in which deep learning was initially used to transform data representations was by nested compositions of affine transformations followed by nonlinear activations, where the affine transformation can be, for example, a fully connected weight matrix or a convolution operation. Residual networks [6] introduce an identity skip connection that bypasses these transformations, thus allowing the nonlinear activation to act as a perturbation term about the identity. Veit et al. [13] introduced an algebraic structure showing that residual networks can be understood as the collection of all possible forward-pass paths of subnetworks, although this algebraic structure ignores the intuition that the nonlinear activation acts as a perturbation from the identity. Lin and Jegelka [9] showed that a residual network with a single node per layer and ReLU activation can act as a universal approximator, where it learns something similar to a piecewise linear finite-mesh approximation of the data manifold.

Recent work consistent with the original intuition of learning perturbations from the identity has shown that residual networks, with their first-order perturbation term, can be formulated as a finite difference approximation of a first-order differential equation [5]. This has the interesting consequence that residual networks are smooth dynamic equations through the layers of the network. Additionally, one may then define entire classes of differentiable transformations over the layers, and then induce network architectures from their finite difference approximations.
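To make this finite-difference reading concrete, the following Python/NumPy sketch unrolls a residual update and shows the forward-Euler scheme it corresponds to; the forcing function f and its parameters here are hypothetical stand-ins for a learned layer, not the networks used later in the paper.

    import numpy as np

    def f(x, t):
        # hypothetical layer nonlinearity acting as the forcing term f_l(x_l)
        return np.tanh(0.5 * x) - 0.1 * t * x

    # residual network: x_{l+1} = x_l + f_l(x_l), one update per layer
    L = 10
    x = np.array([1.0, -2.0])
    for l in range(L):
        x = x + f(x, l)

    # the same recursion is exactly the forward-Euler discretization, with unit
    # step size, of the ordinary differential equation dx/dt = f(x, t)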

Work by Chang et al. [3] also considered residual neural networks as forward-difference approximations to continuous transformations. This work has been extended to develop new network architectures by using central differencing, as opposed to forward differencing, to approximate the set of coupled first-order differential equations, yielding the Midpoint Network [2]. Similarly, other researchers have used different numerical schemes to approximate the first-order ordinary differential equations, such as the linear multistep method used to develop the Linear Multistep architecture [10]. This is different from the previous work [5], where entire classes of finite-difference approximations to k^th order differential equations are defined. Haber and Ruthotto [4] considered how stability techniques from finite difference methods can be applied to improve first- and second-order smooth neural networks. For example, they suggest requiring that the real parts of the eigenvalues of the layer-wise Jacobians be approximately equal to zero. This ensures that little information about the signal is lost, and that the input data does not diverge as it progresses through the network.

In the current work, Section 2 finds closed form solutions for the state space representations of both general C^k network architectures and general additive densely connected network architectures [7], in which a summation operation replaces the concatenation operation. The reason for this choice is that the concatenation operation explicitly increases the embedding dimension, while the summation operation implicitly increases the embedding dimension. It is then shown in Section 3 that the embedding dimension of a k^th order network is increased by a factor of k when compared to an equivalent C^0 (standard) network or C^1 (residual) network, and thus the number of parameters that need to be learned is reduced by a factor of k^2 while maintaining transformations on the same embedding dimension. Section 4 presents the results of experiments for validation of the proposed theory, while the details are provided in the Appendix. The paper is concluded in Section 5 along with recommendations for future research.

2 Smooth Network Architectures

This section develops a relation between skip connections in network architectures and the algebraic structure of dynamical systems of equations. The network architecture can be thought of as a map acting on the data manifold, taking the set of input data/initial conditions to the output set of an L-layer deep neural network. We will write x_l to denote the coordinate representation of the data manifold at layer l. In fact the data manifold is a Riemannian manifold, as it has the additional structure of possessing a smoothly varying metric on its cotangent bundle [5]; however, for the current purpose this additional structure is not needed.

In order to reduce the notational burden, as well as to keep the analysis as general as possible, we will denote the layer-l nonlinearity by the map f_l, where x_l is the output of layer l. For example, if it is a fully connected layer with bias and sigmoid nonlinearity then f_l(x_l) = σ(W_l x_l + b_l), or if it is a convolution block in a residual network then f_l is built from two convolutions with learned filter banks, each followed by batch-normalization (BN) and a leaky-ReLU (LReLU) activation. The nonlinear function f_l can be thought of as a forcing function, in the sense of dynamical systems theory.

A standard architecture without skip connections has the following form:

x_{l+1} = f_l(x_l)     (1)

The first subsection defines and reviews smooth residual [6] architectures. The second subsection expands on the first to define and study the entire class of C^k architectures [5], and develops the state space formulation for these architectures to show that the effective embedding dimension increases by a multiple of k for architectures of this type. Similarly, the third subsection develops the state space formulation for densely connected networks [7], and shows that for dense networks with n-many layer-wise skip connections, the effective embedding dimension again increases by a multiple of n.

2.1 Residual Networks as Dynamical Equations

The residual network [6] has a single skip connection and is therefore a simple first-order dynamic transformation:

x_{l+1} = x_l + f_l(x_l)     (2)

The forcing term f_l(x_l) on the right hand side of Equation 2 is to be read as a perturbation about the identity. The accuracy of this assumption is verified by experiment in Section 4.2.

If the transformation is instead defined over the unit interval t ∈ [0, 1], then the partitioning of the dynamical system [5] takes the following form:

x_{t+δt} = x_t + δt · f_t(x_t)     (3)

where the step size δt can in general vary with t, so long as it still goes to zero as the number of layers L → ∞. To reduce notation, this paper will write δt = 1/L for all t. Notations are slightly changed here, by indexing the layers with the fractional index t = l/L instead of the integer index l; however, this is inherent to switching between finite difference equations and continuous differential equations.
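A minimal numerical check of this partition, assuming the step size is taken uniformly as δt = 1/L and using a hypothetical forcing function f: as L grows, the layer-wise trajectory approaches a fixed continuous path, which is the sense in which the residual recursion becomes a differential equation.

    import numpy as np

    def f(x, t):
        # hypothetical forcing function, for illustration only
        return np.sin(x) + t

    def unroll(L, x0=0.1):
        # x_{t + 1/L} = x_t + (1/L) * f_t(x_t), with t = l/L
        x = x0
        for l in range(L):
            x = x + (1.0 / L) * f(x, l / L)
        return x

    for L in (4, 16, 64, 256, 1024):
        print(L, unroll(L))   # the outputs converge as the mesh size 1/L -> 0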

2.2 Architectures Induced from Smooth Transformations

Figure 1: (a) A C^2 architecture is a second-order difference equation, derived from Δ_- Δ_+ x_l = f_l(x_l). (b) The equivalent first-order state-space model of the network, with states y^0_l = x_l (position) and y^1_l = Δ_- x_l (velocity). If the second-order model has d-many nodes, i.e. f_l maps ℝ^d to ℝ^d, then its state-space representation maps ℝ^{2d} to ℝ^{2d}. The state-space model is updated as y^1_{l+1} = y^1_l + f_l(y^0_l) and y^0_{l+1} = y^0_l + y^1_{l+1}.

Following the work of Hauser and Ray [5], we will refer to network architectures as C^k architectures, depending on how many times the finite difference operators have been applied to the layer-wise sequence x_l.

We define the forwards and backwards finite difference operators to be Δ_+ x_l := x_{l+1} - x_l and Δ_- x_l := x_l - x_{l-1}, respectively. Furthermore, to expose the various order derivatives of x at layer l, we use these finite difference operators to make the finite difference approximations for k = 1, k = 2 and general k, while explicitly writing the perturbation term in terms of f_l(x_l):

Δ_+ x_l = f_l(x_l)     (4a)
Δ_- Δ_+ x_l = f_l(x_l)     (4b)
Δ_-^{k-1} Δ_+ x_l = ∑_{j=0}^{k} (-1)^j (k choose j) x_{l+1-j} = f_l(x_l)     (4c)

The notation Δ_-^{k-1} denotes (k-1)-many applications of the operator Δ_-, and (k choose j) is the binomial coefficient, read as k-choose-j. We take one forwards difference and the remaining k-1 as backwards differences so that the next layer (forwards) is a function of the previous layers (backwards).

From this formulation, depending on the order of smoothness, the network implicitly creates interior/ghost elements, borrowing language from finite difference methods, to properly define the initial conditions. One can view a ghost element as a pseudo-element that lies outside the domain and is used to control the gradient there. For example, with a C^2 architecture from Equation 4b, one needs both the initial position and the initial velocity in order to define x_{l+1} as a function of x_l and x_{l-1}. In the next subsection it will be shown that the dense network [7] can be interpreted as supplying the interior/ghost elements needed to initialize the dynamical equation.
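The binomial expansion in Equation 4c can be checked numerically. The sketch below applies the operators Δ_+ and Δ_- directly to an arbitrary test sequence and compares the result against the closed-form sum; the test sequence and the choice k = 4 are illustrative.

    import numpy as np
    from math import comb

    x = np.random.randn(30)          # arbitrary test sequence x_0, ..., x_29
    k, l = 4, 10

    # apply the operators directly: one forward difference, then (k-1) backward differences
    s = x[1:] - x[:-1]               # (Delta_+ x)_l = x_{l+1} - x_l
    for _ in range(k - 1):           # (Delta_- s)_l = s_l - s_{l-1}
        s = np.concatenate([[np.nan], s[1:] - s[:-1]])
    lhs = s[l]

    # closed-form binomial expansion of Delta_-^{k-1} Delta_+ x at layer l (Equation 4c)
    rhs = sum((-1) ** j * comb(k, j) * x[l + 1 - j] for j in range(k + 1))
    assert np.isclose(lhs, rhs)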

To see the equivalent state space formulation of the k^th order equation defined by Equation 4c, first we define the states as the various order finite differences of the transformation at layer l:

y^0_l := x_l     (5a)
y^1_l := Δ_- x_l     (5b)
y^j_l := Δ_-^j x_l,   j = 0, 1, ..., k-1     (5c)

We then have the recursive relation y^j_{l+1} = y^j_l + y^{j+1}_{l+1}, initialized at the base case y^{k-1}_{l+1} = y^{k-1}_l + f_l(y^0_l) from Equation 4c, as the means to find the closed form solution by induction. Assuming y^{j+1}_{l+1} = ∑_{i=j+1}^{k-1} y^i_l + f_l(y^0_l), we have the following:

y^j_{l+1} = y^j_l + y^{j+1}_{l+1} = ∑_{i=j}^{k-1} y^i_l + f_l(y^0_l)     (6)

The first equality follows from the recursive relation and the second from the inductive assumption. This shows that the state space formulation of the C^k neural network is given by:

y^j_{l+1} = ∑_{i=j}^{k-1} y^i_l + f_l(y^0_l),   j = 0, 1, ..., k-1     (7)
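A short consistency check of Equation 7, under the state definitions y^j_l = Δ_-^j x_l used above: iterating the state-space recursion reproduces the same trajectory as iterating the k^th order difference equation of Equation 4c directly. The layer nonlinearity below is a hypothetical shared tanh layer, chosen only to make the check runnable.

    import numpy as np
    from math import comb

    k, L, d = 3, 12, 5
    rng = np.random.default_rng(0)
    W = rng.normal(size=(d, d)) / np.sqrt(d)

    def f(x):
        # hypothetical layer nonlinearity, shared across layers for simplicity
        return np.tanh(W @ x)

    # direct k-th order recursion:  sum_{j=0}^{k} (-1)^j (k choose j) x_{l+1-j} = f(x_l)
    x = [rng.normal(size=d) for _ in range(k)]      # initial/ghost elements x_0, ..., x_{k-1}
    for l in range(k - 1, L):
        x.append(f(x[l]) - sum((-1) ** j * comb(k, j) * x[l + 1 - j] for j in range(1, k + 1)))

    # equivalent state-space recursion (Equation 7):  y^j_{l+1} = sum_{i=j}^{k-1} y^i_l + f(y^0_l)
    y = [sum((-1) ** i * comb(j, i) * x[k - 1 - i] for i in range(j + 1)) for j in range(k)]
    for l in range(k - 1, L):
        f_val = f(y[0])
        y = [sum(y[i] for i in range(j, k)) + f_val for j in range(k)]

    assert np.allclose(y[0], x[L])                  # both recursions give the same x_L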

In matrix form, the state space formulation is as follows:

[ y^0_{l+1}     ]   [ I_d  I_d  ...  I_d ] [ y^0_l     ]   [ f_l(y^0_l) ]
[ y^1_{l+1}     ] = [ 0_d  I_d  ...  I_d ] [ y^1_l     ] + [ f_l(y^0_l) ]     (8)
[ ...           ]   [ ...       ...      ] [ ...       ]   [ ...        ]
[ y^{k-1}_{l+1} ]   [ 0_d  0_d  ...  I_d ] [ y^{k-1}_l ]   [ f_l(y^0_l) ]

We use the notation where I_d is the d × d identity matrix and 0_d is the d × d matrix of all zeros. From Equation 7, and equivalently Equation 8, it is understood that if there are d-many nodes at layer l, i.e. f_l maps ℝ^d to ℝ^d, then a k^th-order smooth neural network can be represented in state space form as a map from ℝ^{k · d} to ℝ^{k · d}. Furthermore, it is seen that the k · d-many state variables are transformed by the shared activation function f_l, which has a d × d parameter matrix, as opposed to a full (k · d) × (k · d) parameter matrix, thus reducing the number of parameters by a factor of k^2.

The schematic of the C^2 architecture, with its equivalent first-order state-space representation, is given in Figure 1. The C^2 architecture is given by Equation 4b, which can be conveniently rewritten as x_{l+1} = 2 x_l - x_{l-1} + f_l(x_l). Setting y^0_l := x_l and y^1_l := Δ_- x_l, the state-space model is updated as y^1_{l+1} = y^1_l + f_l(y^0_l) and y^0_{l+1} = y^0_l + y^1_{l+1}. Thus, given that f_l maps ℝ^d to ℝ^d, the state-space model will map ℝ^{2d} to ℝ^{2d}.
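The equivalence described in Figure 1 can be written out directly. The following minimal sketch, with scalar states and a hypothetical tanh forcing term, checks at every layer that the two-state update reproduces the second-order recursion x_{l+1} = 2 x_l - x_{l-1} + f_l(x_l).

    import numpy as np

    def f(x):
        return np.tanh(0.7 * x)          # hypothetical single-node forcing term

    L = 20
    x_prev, x_curr = 0.0, 0.3            # ghost element and initial condition
    y0, y1 = x_curr, x_curr - x_prev     # position and velocity states

    for l in range(L):
        # second-order form:  x_{l+1} = 2 x_l - x_{l-1} + f(x_l)
        x_next = 2 * x_curr - x_prev + f(x_curr)
        # equivalent state-space form (Figure 1)
        y1 = y1 + f(y0)
        y0 = y0 + y1
        x_prev, x_curr = x_curr, x_next
        assert np.isclose(y0, x_curr)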

2.3 Additive Dense Network for General n

Figure 2: (a) The block diagram of an additive dense network architecture. (b) Its equivalent state-space model. If the model has d-many nodes at each layer l, i.e. f_l maps ℝ^d to ℝ^d, then its state-space representation acts on a state space whose dimension is d multiplied by the number of retained lags. Note that the concatenation block of the standard dense network has been replaced with a summation block, although in the state space form it is seen that using a summation block still leads to the states being implicitly concatenated.

The additive dense network, which is inspired by the dense network [7], is defined for a general number of lags n by the following system of equations:

(9)

To put this into state-space form, we will need to transform it into a system of finite difference equations. The general n^th-order difference equation, with one forward difference and all of the remaining differences backwards, is used because from a dense network perspective the value at layer l+1 (forward) is a function of the preceding layers (backwards).

(10)

Substituting Equation 9 into Equation 10 yields the following:

(11)

Equation 11 is equivalent to the additive dense network formulation of Equation 9, only rewritten in a form that lends itself to interpretation via finite differencing. We then define the network states as the various order finite differences across layers:

y^j_l := Δ_-^j x_l     (12)

We still need to find the representations of the x_l's in terms of the states y^j_l. To do this, we will use the property of binomial inversion of sequences [11]:

y^j_l = ∑_{i=0}^{j} (-1)^i (j choose i) x_{l-i}   ⟹   x_{l-j} = ∑_{i=0}^{j} (-1)^i (j choose i) y^i_l     (13)

The left hand side of Equation 13 is the definition of the states from Equation 12, written explicitly as backwards differences of the sequence of layer outputs, and the implication arrow is the binomial inversion of sequences. This gives the representation of the x_l's in terms of the states y^j_l.
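The binomial inversion invoked here is, in its standard signed form, an involution: applying the same signed binomial sum twice recovers the original sequence [11]. A quick numerical check on a random sequence:

    import numpy as np
    from math import comb

    rng = np.random.default_rng(1)
    b = rng.normal(size=8)

    # forward transform: a_n = sum_i (-1)^i (n choose i) b_i
    a = np.array([sum((-1) ** i * comb(n, i) * b[i] for i in range(n + 1)) for n in range(len(b))])

    # binomial inversion: b_n = sum_i (-1)^i (n choose i) a_i recovers the original sequence
    b_rec = np.array([sum((-1) ** i * comb(n, i) * a[i] for i in range(n + 1)) for n in range(len(a))])
    assert np.allclose(b, b_rec)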

It is now straightforward to find the state space representation of the general n^th-order additive dense network.

(14)

Equation 14 holds for each state component, and so may be clearer when written as a matrix equation:

(15)

Remember that if there are d-many nodes per layer, then each x_l lies in ℝ^d and so the entries of these matrices are themselves d × d blocks: a scalar entry denotes that scalar placed along the diagonal of a d × d block, and a zero entry denotes the d × d matrix of all zeros.

Equation 14, and equivalently Equation 15, is the state-space representation of the additive dense network for general n. It is seen that by introducing n-many lags into the dense network, the dimension of the state space increases by a multiple of n for the equivalent first-order system, since we are concatenating all of the states to define the complete state of the system, which maps ℝ^{n · d} to ℝ^{n · d}.

Using the notation from dynamical systems and control theory, this can also be represented succinctly as follows:

(16)

where each row of the input term is taken from the second block-matrix of Equation 15. It is seen that the neural network activations act as the controller of this system as it moves forward in layers (analogous to time). In this sense, the gradient descent training process is learning a controller that maps the data from input to target.

Notice from the state space formulation in Equation 15 that, when n = 1, the additive dense network collapses to the residual network of Equation 2. Also notice from Equation 14 the corresponding higher-order form taken by additive dense networks for larger n.
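Independently of the exact form of Equations 9 through 15, the dimension-counting argument can be illustrated generically: any layer update with n lags becomes a first-order system once the lagged values are stacked into a single state vector, which is the sense in which the summation block still implicitly concatenates the states. The update rule g below is an arbitrary, hypothetical n-lag additive map used only for illustration.

    import numpy as np

    n, d, L = 3, 4, 10
    rng = np.random.default_rng(0)
    A = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(n)]

    def g(history):                                   # history = [x_l, x_{l-1}, ..., x_{l-n+1}]
        # hypothetical n-lag additive update, for illustration only
        return sum(np.tanh(A[i] @ history[i]) for i in range(n))

    x0 = [rng.normal(size=d) for _ in range(n)]

    # n-lag recursion on R^d
    hist = list(x0)
    for _ in range(L):
        hist = [g(hist)] + hist[:-1]

    # the same recursion as a first-order system on the stacked state in R^{n*d}
    z = np.concatenate(x0)
    for _ in range(L):
        parts = [z[i * d:(i + 1) * d] for i in range(n)]
        z = np.concatenate([g(parts)] + parts[:-1])

    assert np.allclose(z[:d], hist[0])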

3 Network Capacity and Skip Connections

The objective of this section is to partially explain why imposing higher-order skip connections on the network architecture is likely to be beneficial. A first order system has one state variable, e.g. position, while a second order system has two state variables, e.g. position and velocity. In general, a k^th order system has k-many state variables, for k = 1, 2, 3, ....

Recall that when f_l maps ℝ^d to ℝ^d, the equivalent first-order system of a k^th order network maps ℝ^{k · d} to ℝ^{k · d}. This holds since each of the k-many component maps from ℝ^d to ℝ^d operates independently of the others through its independently learned weight matrices, and so their concatenation spans ℝ^{k · d}.

This immediately implies that the weight matrix for transforming the k^th order system is of size d × d, while the weight matrix for transforming the equivalent first-order system is of size (k · d) × (k · d). Therefore, by imposing k-many skip connections on the network architecture, from a dynamical systems perspective we only need to learn 1/k^2-times as many parameters to maintain the same embedding dimension, when compared to the equivalent zeroth- or first-order system. Also notice that the matrix transforming the x_l's into the state vectors y_l is lower block-triangular with identity blocks along its diagonal, and so it is full rank; the state variables defined by this transformation therefore do not introduce degeneracies.
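The parameter-count argument reduces to simple arithmetic: a k^th order network reuses one d × d weight matrix per layer inside the forcing function, while a generic first-order network acting directly on the same k · d-dimensional state would need a (k · d) × (k · d) matrix. The numbers below are illustrative.

    d, k = 64, 4

    params_kth_order   = d * d               # shared d x d matrix in the forcing function
    params_first_order = (k * d) * (k * d)   # full matrix on the equivalent k*d-dim state

    print(params_first_order / params_kth_order)   # = k**2 = 16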

4 Numerical Experiments

This section describes experiments designed to understand and validate the proposed theory. The simulations were run in TensorFlow [1], trained via error backpropagation [12] with gradients estimated by the Adam optimizer [8].

4.1 Visualizing Implicit Dimensions

Figure 3: Experiments comparing how single-node-per-layer architectures linearly separate one-dimensional data: (a) a C^1 architecture; (b) a C^2 architecture. The horizontal axis is the position x_l, i.e. the value of the single node at layer l, while the vertical axis is the velocity Δ_- x_l; at the input layer the velocity is set equal to zero. The C^1 architecture has only one state variable, namely position, and is therefore unable to properly separate the data. In comparison, the C^2 architecture, while still having only a single node per layer, has two state variables, namely position and velocity, and is therefore able to use both of these to correctly separate the data along the positional dimension.

An experiment was conducted to visualize and understand these implicit dimensions induced by the higher-order dynamical system. The one-dimensional data was constructed such that one portion of the data belongs to the red class while the rest belongs to the blue class, with the blue data split so that half lies to the left of the red data and half lies to the right. It might seem that there is no sequence of single-neuron transformations that would put this data into a form that can be linearly separated by a hyperplane, and that the achievable accuracy is therefore limited. This is the case with the standard residual network, as seen in Figure 3(a). The C^1 architecture only has one state variable, namely position, and therefore cannot place a hyperplane to linearly separate the data along the positional dimension.

In contrast, the C^2 architecture has two state variables, namely position and velocity, and therefore its equivalent first-order system is two-dimensional. When visualizing both state variables, one sees that the data does in fact get shaped such that a hyperplane along the positional dimension alone can correctly separate it. If one were only looking at the positional state variable, i.e. the output of the single node, it would seem as if the red and blue curves were passing through each other; however, in the full two-dimensional space we see that this is not the case. Even though this network only has a single node per layer, and the weight matrices are just scalars, the equivalent first-order dynamical system is two-dimensional, and therefore the one-dimensional data can be twisted in this two-dimensional phase space into a form that is linearly separable along only the one positional dimension.
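The single-node dynamics described here can be sketched directly. The data below is an illustrative stand-in for the toy dataset (red between two blue groups), and the weights are untrained placeholders; the point is only that a C^2 network with one node per layer carries a two-dimensional (position, velocity) state.

    import numpy as np

    rng = np.random.default_rng(0)
    # illustrative one-dimensional data: the red class sits between two blue groups
    red  = rng.uniform(-1.0, 1.0, size=100)
    blue = np.concatenate([rng.uniform(-3.0, -1.5, size=50),
                           rng.uniform( 1.5,  3.0, size=50)])

    def forcing(x, w, b):
        return np.tanh(w * x + b)             # scalar single-node nonlinearity

    def c2_forward(x0, weights, biases):
        # single node per layer, C^2 dynamics on the (position, velocity) state space
        pos, vel = x0, np.zeros_like(x0)
        for w, b in zip(weights, biases):
            vel = vel + forcing(pos, w, b)    # y^1_{l+1} = y^1_l + f_l(y^0_l)
            pos = pos + vel                   # y^0_{l+1} = y^0_l + y^1_{l+1}
        return pos, vel

    # untrained example run; with trained scalar weights the classes separate along
    # the positional axis while passing around each other in the (pos, vel) plane
    L = 8
    pos, vel = c2_forward(np.concatenate([red, blue]),
                          0.1 * rng.normal(size=L), np.zeros(L))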

4.2 Estimating the Magnitude of the Perturbations

Figure 4: Experiments on MNIST measuring the size of the perturbation term for a C^1 (residual) network: (a) average perturbation size from the first residual section; (b) average perturbation size from the second residual section. The same basic network structure was used, with two sections of constant feature map size. The magnitude of the perturbation term is measured against the number of residual blocks per section. If each image travels a fixed total computational distance through the network, the average mesh size should go as the inverse of the number of blocks. The depth-invariant computational distance was fit by linear regression separately for the first and second sections.

The purpose of this subsection is to quantify the magnitude of the perturbation, and thereby validate the perturbation approximation being made. In order for f_l(x_l) to be a valid perturbation of the identity in the transformation x_{l+1} = x_l + f_l(x_l), we require that the perturbation be small relative to the state, i.e. that the magnitude of f_l(x_l) be much less than that of x_l. Additionally, assuming the image travels a constant total distance from input to output, one would expect the average size of the perturbation to scale as 1/L: as the number of layers L increases, the average size of each partition region (mesh size) should shrink as 1/L. Experiments were conducted on MNIST, measuring the size of the perturbation term for a network with two sections of residual blocks while varying the number of blocks per section; the results are seen in Figure 4. Details of this experiment are given in the appendix, and a sketch of the measurement and fit appears after the list below. Several conclusions drawn from this experiment are discussed below.

  • It is seen that the magnitude of the perturbation term, for sufficiently large L, is in fact much less than one. At least in this setting, this experimentally validates the intuition that residual networks learn perturbations of the identity function.

  • It is seen that as the number of layers L increases, the magnitude of the perturbation goes as 1/L, suggesting that there exists a total distance the image travels as it passes through the network. This implies that the image can be interpreted as moving along a trajectory from input to output, in which case the network is a finite difference approximation to the differential equation governing this trajectory. Performing a linear regression on the measured perturbation sizes yields the "computational distance" traveled through each section; this distance is larger for the first section than for the second, which may suggest that the first section is more important when processing the image. Taken literally, it would imply that the average MNIST image travels a fixed total "computational distance" from the low-level input representation to the high-level abstract output representation. This measure is a depth-invariant computational distance that the data travels through the network.

  • The above analysis suggests a systematic way of determining the depth of a network when designing new architectures. If one requires a certain minimum mesh size, then after estimating the per-section computational distances one can calculate the minimum number of layers required to achieve a mesh of that size; in this MNIST experiment, the first section would require more layers than the second to reach the same mesh size.
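A sketch of how such a measurement and fit could be implemented; the function names are illustrative and the example values are synthetic, not the measurements reported in Figure 4. The quantity measured is the relative perturbation size, and the fit is a least-squares estimate of S in the model eps_L ≈ S/L.

    import numpy as np

    def relative_perturbation(xs, fs):
        # xs: block inputs x_l, fs: block outputs f_l(x_l); returns ||f_l(x_l)|| / ||x_l||
        return np.array([np.linalg.norm(f) / np.linalg.norm(x) for x, f in zip(xs, fs)])

    def fit_computational_distance(depths, eps):
        # least-squares fit of the model eps_L ~ S / L for the computational distance S
        depths, eps = np.asarray(depths, float), np.asarray(eps, float)
        return float(np.sum(eps / depths) / np.sum(1.0 / depths ** 2))

    # synthetic illustration only: recover S = 3 from eps exactly equal to 3/L
    L_values = [2, 4, 8, 16, 32]
    eps_meas = [3.0 / L for L in L_values]
    print(fit_computational_distance(L_values, eps_meas))   # ~ 3.0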

4.3 Comparison of various order network architectures

The purpose of this subsection is to experimentally compare the classification performance of the various order architectures described in this paper. The architectures tested are the C^k networks for several orders k, as well as the additive dense network for several numbers of lags n; note that the additive dense network with a single lag is the same as the C^1 (residual) network. In all of the experiments, the ResNet architecture was first designed to work well, and then, under these exact conditions, the described skip connections were introduced, changing nothing else. Further details of the experiments can be found in the appendix.

add-dense add-dense add-dense
CIFAR10
SVHN
Table 1: Test errors of our implementations of the various architectures on both CIFAR10 and SVHN. All networks had three sections, operating on feature maps of spatial sizes 32 × 32, 16 × 16 and 8 × 8 respectively (sizes denoted by height × width × number of channels), with each section containing the same number of residual blocks. Training procedures were kept constant for all experiments; only the skip connections were changed.

It is seen in Table 1 that, on both CIFAR10 and SVHN, three of the C^k architectures perform similarly well, one architecture performs much more poorly, and the three additive dense networks perform fairly well. The architecture achieving the lowest test error differs between CIFAR10 and SVHN. A likely reason why the worst-performing architecture lags significantly behind the rest is that it imposes stronger restrictions on how data flows through the network, so the network does not have sufficient flexibility in how it can process the data.

5 Conclusions and Future Work

This paper has developed a theory of skip connections in neural networks in the state space setting of dynamical systems, with appropriate algebraic structures. This theory was then applied to find closed form solutions for the state space representations of both C^k networks and additive dense networks. This immediately shows that these k^th-order network architectures are equivalent, from a dynamical systems perspective, to k-many coupled first-order systems. By design, this reduces the number of parameters that need to be learned by a factor of k^2 while retaining the same state space embedding dimension as the equivalent C^0 and C^1 networks.

Three experiments were conducted to validate and understand the proposed theory. The first used a carefully designed dataset such that, restricted to a certain number of nodes, the neural network is only able to properly separate the classes by using the implicit state variables, such as velocity, in addition to position. The second experiment, on MNIST, measured the magnitude of the perturbation term for varying numbers of layers, resulting in a depth-invariant computational distance that the data travels from the low-level input representation to the high-level output representation. The third experiment compared various order architectures on benchmark image classification tasks. This paper explains in part why skip connections have been so successful, and further motivates the development of architectures of these types.

While there are many possible directions for further theoretical and experimental research, the authors suggest the following topics of future work:

  • Rigorous design of network architectures from the algebraic properties of the state space model, as opposed to engineering intuitions.

  • Analysis of the topologies of data manifolds to determine relationships between data manifolds and minimum embedding dimension, in a similar manner to the Whitney embedding theorems.

  • Investigations of the computational distance for different, more complex data sets. As mentioned before, this invariant measure could potentially be used to systematically define the depth of the network, as well as to characterize the complexity of the data.

Acknowledgments

Samer Saab Jr has been supported by the Walker Fellowship from the Applied Research Laboratory at the Pennsylvania State University. The work reported here has been supported in part by the U.S. Air Force Office of Scientific Research (AFOSR) under Grant Nos. FA9550-15-1-0400 and FA9550-18-1-0135 in the area of dynamic data-driven application systems (DDDAS). Any opinions, findings and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the sponsoring agencies.

Appendix:  Description of Numerical Experiments

For the experiment of Section 4.2, no data augmentation was used and a constant batch size was used throughout training. Each residual block is built from two convolutions with learned filter banks, each followed by batch-normalization (BN) and a leaky-ReLU (LReLU) activation. For specifying image sizes, we use the notation height × width × number of channels. The first section of the network, of constant feature map size, operates on 28 × 28 feature maps; a strided convolution is then applied to downsample the maps for the second section. After the second section, global average pooling was performed to reduce the feature maps to a vector with one entry per channel, which was fed into a fully connected layer for softmax classification.

For Section 4.3, the batch size was updated automatically whenever a trailing window of the validation error stopped decreasing. For CIFAR10, a portion of the training samples was held out for validation, while for SVHN a random subset of the combined training and extra data was used for training, with the remainder used for validation. The only data augmentation used during training was that the images were flipped left-right, padded with zeros and randomly cropped back to 32 × 32.

In the networks of Section 4.3, each section of constant feature map size contained the same number of residual blocks, all having forcing functions built from convolutions, batch-normalization and leaky-ReLU activations as described above. The first, second and third sections operate on feature maps of spatial sizes 32 × 32, 16 × 16 and 8 × 8, respectively, with downsampling performed by strided convolutions and the number of channels increased by the accompanying filter banks. Global average pooling was then performed on the final feature maps to reduce them to a vector with one entry per channel; this vector was fed into a fully connected layer, a leaky-ReLU was applied, and a final fully connected layer was used for softmax classification.
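For concreteness, a hedged TensorFlow/Keras sketch of a residual block of the kind described above; the kernel size, the exact ordering of the convolution, batch-normalization and leaky-ReLU operations, and the layer API are assumptions for illustration rather than the exact configuration used in the experiments.

    import tensorflow as tf

    def residual_block(x, filters):
        # forcing function f_l: two convolutions with batch-normalization and leaky-ReLU
        f = tf.keras.layers.Conv2D(filters, 3, padding="same")(x)
        f = tf.keras.layers.BatchNormalization()(f)
        f = tf.keras.layers.LeakyReLU()(f)
        f = tf.keras.layers.Conv2D(filters, 3, padding="same")(f)
        f = tf.keras.layers.BatchNormalization()(f)
        f = tf.keras.layers.LeakyReLU()(f)
        # residual (C^1) update: x_{l+1} = x_l + f_l(x_l)
        return tf.keras.layers.Add()([x, f])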

References