## 1 Introduction

Skip connections are extra connections between nodes in different layers of a neural network that skip one or more layers of nonlinear processing. The introduction of skip (or residual) connections has substantially improved the training of very deep neural networks he2015 ; he2016 ; huang2016 ; srivastava2015 . Despite informal intuitions put forward to motivate skip connections, a clear understanding of how these connections improve training has been lacking. Such understanding is invaluable both in its own right and for the possibilities it might offer for further improvements in training very deep neural networks. In this paper, we attempt to shed light on this question. We argue that skip connections improve the training of deep networks partly by eliminating the singularities inherent in the loss landscapes of deep networks. These singularities are caused by the non-identifiability of subsets of parameters when nodes in the network either get eliminated or collapse into each other, called elimination and overlap singularities, respectively wei2008 . Previous work has identified these singularities and has shown that they significantly slow down learning in shallow neural networks saad1995 ; amari2006 ; wei2008 . We show that skip connections eliminate these singularities and provide evidence suggesting that they improve training by ameliorating the learning slow-down caused by the singularities.

## 2 Results

### 2.1 Singularities in fully-connected nets and how skip connections break them

In fully-connected layers, two types of singularity have been identified in previous work amari2006 ; wei2008 : elimination and overlap singularities, both related to the non-identifiability of the model. The Hessian of the loss function becomes singular at both types of singularity (Supplementary Note 1); such points are sometimes called degenerate or higher-order saddles anandkumar2016 . Elimination singularities arise when a hidden unit is effectively killed, e.g. when its incoming (or outgoing) weights become zero (Figure 1a). This makes the outgoing (or incoming) connections of the unit non-identifiable. Overlap singularities are caused by the permutation symmetry of the hidden units at a given layer and they arise when two units become identical, e.g. when their incoming weights become identical (Figure 1b). In this case, the outgoing connections of the units are no longer identifiable individually (only their sum is identifiable).

How do skip connections eliminate these singularities? Skip connections between adjacent layers break the elimination singularities by ensuring that the units are active at least for some inputs, even when their adjustable incoming or outgoing connections become zero (Figure 1a; right). They also eliminate the overlap singularities by breaking the permutation symmetry of the hidden units at a given layer (Figure 1b; right). Thus, even when the adjustable incoming weights of two units become identical, the units do not collapse into each other, since their distinct skip connections still disambiguate them.
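The overlap singularity can be made concrete in a few lines of NumPy: when two hidden units share incoming weights, their activities coincide for every input, so the loss gradients with respect to their outgoing weights are identical and only the sum of the outgoing weights is identifiable; a distinct per-unit skip input keeps the activities apart. A minimal sketch (the toy layer sizes and the skip vector `s` are illustrative, not from the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# Toy layer: 3 inputs -> 2 hidden units -> 1 output, squared-error loss.
x = rng.normal(size=3)           # a single input example
W = rng.normal(size=(2, 3))
W[1] = W[0]                      # overlap: both units share incoming weights
v = rng.normal(size=2)           # outgoing weights
target = 1.0

h = relu(W @ x)                  # identical activities: h[0] == h[1]
grad_v = (v @ h - target) * h    # dLoss/dv_i = error * h_i

# Identical gradients for v[0] and v[1]: only v[0] + v[1] is identifiable.
print(np.allclose(grad_v[0], grad_v[1]))   # True

# Distinct per-unit skip inputs (standing in for skip connections from a
# previous layer) keep the two units' activities, and hence their
# gradients, distinct.
s = rng.normal(size=2)
h_skip = relu(W @ x) + s
grad_v_skip = (v @ h_skip - target) * h_skip
```

With the skip inputs added, `h_skip[0] != h_skip[1]`, so the two outgoing weights are again individually identifiable.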

### 2.2 Why are singularities harmful for learning?

The effect of singularities on gradient-based learning has been analyzed in detail previously for shallow networks amari2006 ; wei2008 . Figure 2a shows the simplified two-hidden-unit model analyzed in wei2008 and its reduction to a two-dimensional system. Both types of singularity give rise to degenerate manifolds in the loss landscape, represented by the lines in Figure 2b corresponding to the overlap and elimination singularities, respectively. The elimination manifolds divide the overlap manifolds into stable and unstable segments. According to the analysis presented in wei2008 , these manifolds give rise to two types of plateaus in the learning dynamics: on-singularity plateaus, which are caused by the random walk behavior of stochastic gradient descent (SGD) along a stable segment of the overlap manifolds (thick segment in Figure 2b) until it escapes the stable segment (Figure 2d), and near-singularity plateaus, which cause a general slowing of the dynamics near the overlap manifolds, even when the initial location is not within the basin of attraction of the stable segment (Figure 2c).

### 2.3 Plain networks are more degenerate than networks with skip connections

To investigate the relationship between degeneracy, training difficulty and skip connections in deep networks, we conducted several experiments with deep fully-connected networks. To measure degeneracy, we estimated the eigenvalue density of the Hessian during training for different network architectures. Specifically, we estimated the probability of small eigenvalues, which reflects the dimensionality of the degenerate parameter space. We compared three different architectures: (i) the plain architecture is a fully-connected feedforward network with no skip connections, described by the equation:

$$\mathbf{x}_{l+1} = f(\mathbf{W}_{l+1} \mathbf{x}_l)$$

where $f(\cdot)$ is the ReLU nonlinearity and $\mathbf{x}_0$ denotes the input layer. (ii) The residual architecture introduces identity skip connections between adjacent layers (note that we do not allow skip connections from the input layer):

$$\mathbf{x}_{l+1} = f(\mathbf{W}_{l+1} \mathbf{x}_l) + \mathbf{x}_l, \qquad l \geq 1$$

(iii) The hyper-residual architecture adds skip connections between each layer and all layers above it:

$$\mathbf{x}_{l+1} = f(\mathbf{W}_{l+1} \mathbf{x}_l) + \mathbf{x}_l + \sum_{k=1}^{l-1} \mathbf{S}_k \mathbf{x}_k, \qquad l \geq 1$$

The skip connectivity from the immediately preceding layer is set to be the identity, whereas the remaining skip connectivity matrices $\mathbf{S}_k$ are fixed, but allowed to be different from the identity (see Supplementary Note 2 for further details). This architecture is inspired by the DenseNet architecture huang2016 . In both architectures, each layer projects skip connections to layers above it. However, in the DenseNet architecture, the skip connectivity matrices are learned, whereas in the hyper-residual architecture considered here, they are fixed.
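The three update rules can be summarized in a single forward-pass sketch (equal layer widths and randomly drawn fixed skip matrices are simplifying assumptions made here, not the paper's exact setup):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward(x0, Ws, mode="plain", skips=None):
    """Forward pass through fully-connected layers.

    plain:          x_{l+1} = f(W_{l+1} x_l)
    residual:       x_{l+1} = f(W_{l+1} x_l) + x_l               (no skip from the input)
    hyper-residual: x_{l+1} = f(W_{l+1} x_l) + x_l + sum_k S_k x_k   (fixed S_k)
    """
    xs = [x0]
    for l, W in enumerate(Ws):
        h = relu(W @ xs[-1])
        if mode in ("residual", "hyper-residual") and l >= 1:
            h = h + xs[-1]                    # identity skip from the previous layer
        if mode == "hyper-residual":
            for k in range(1, l):             # fixed skips from all lower hidden layers
                h = h + skips[(k, l)] @ xs[k]
        xs.append(h)
    return xs[-1]

rng = np.random.default_rng(1)
n, L = 8, 4
Ws = [0.1 * rng.normal(size=(n, n)) for _ in range(L)]
skips = {(k, l): 0.1 * rng.normal(size=(n, n)) for l in range(L) for k in range(1, l)}
x = rng.normal(size=n)
y = forward(x, Ws, "hyper-residual", skips)
```

Note that, as in the text, the skip matrices are fixed (not learned), and no skip connections originate from the input layer.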

In the experiments of this subsection, the networks all had 20 hidden layers (followed by a softmax layer at the top) and 128 hidden units in each hidden layer; hence, the networks had the same total number of parameters. The biases were initialized to zero and the weights were initialized with the Glorot normal initialization scheme glorot2010 . The networks were trained on the CIFAR-100 dataset (with coarse labels) using the Adam optimizer kingma2015 . Because we are mainly interested in understanding how singularities, and their removal, change the shape of the loss landscape and consequently affect the optimization difficulty, we primarily monitor the training accuracy rather than the test accuracy in the results reported below.

To estimate the eigenvalue density of the Hessian in our high-dimensional parameter spaces, we first estimated the first four moments of the spectral density using the method of Skilling skilling1989 and fit the estimated moments with a flexible mixture density model (see Supplementary Note 3 for details) consisting of a narrow Gaussian component to capture the bulk of the spectral density and a skew-normal density to capture the tails (see Figure 3c for example fits). Figure 3b shows the evolution of the tail probability, i.e. the weight of the skew-normal component under the mixture, during training. A large tail probability at a particular point during optimization indicates a less degenerate model. By this measure, the hyper-residual architecture is the least degenerate and the plain architecture the most degenerate. The differences between the architectures are most prominent early in training. Importantly, the hyper-residual architecture has the highest training accuracy and the plain architecture the lowest (Figure 3a), consistent with our hypothesis that the degeneracy of a model increases the training difficulty.

### 2.4 Training accuracy is related to distance from degenerate manifolds

To establish a more direct relationship between the elimination and overlap singularities and training difficulty, we exploited the natural variability in training the same model caused by the stochasticity of SGD and random initialization. Specifically, we trained 80 plain networks (30 hidden layers, 128 neurons per layer) on CIFAR-100 using different random initializations and random mini-batch selection. Training performance varied widely across runs. We compared the best 10 and the worst 10 runs (measured by mean accuracy over 100 training epochs, Figure 4a). The worst networks were significantly closer to the elimination singularities, as measured by the average $\ell_2$-norm of the incoming weights of their hidden units (Figure 4c), and significantly closer to the overlap singularities, as measured by the average positive correlation between the incoming weights of their hidden units (Figure 4d).

### 2.5 Benefits of skip connections are not explained by good initialization alone

To investigate whether the benefits of skip connections can be explained by favorable initialization of the parameters, we introduced a malicious initialization scheme for the residual network by subtracting the identity matrix from the initial weight matrices: $\mathbf{W}_l \rightarrow \mathbf{W}_l - \mathbf{I}$. If the benefits of skip connections were primarily due to favorable initialization, this malicious initialization would be expected to cancel the effects of the skip connections at initialization and hence significantly deteriorate performance. However, the malicious initialization only had a small adverse effect on the performance of the residual network (Figure 5; ResMalInit), suggesting that the benefits of skip connections cannot be explained by favorable initialization alone. This result reveals a fundamental weakness in previous explanations of the benefits of skip connections based purely on linear models hardt2016 ; li2016 . In Supplementary Note 4 we show that skip connections do not eliminate the singularities in deep linear networks, but only shift the landscape so that typical initializations are farther from the singularities. Thus, in linear networks, any benefits of skip connections are due entirely to better initialization. In contrast, skip connections genuinely eliminate the singularities in nonlinear networks (Supplementary Note 1). The malicious initialization of the residual network does reduce its performance, suggesting that "ghosts" of these singularities still exist in the loss landscape, but the fact that the performance reduction is only slight suggests that the skip connections indeed alter the landscape around these ghosts to alleviate the learning slow-down that would otherwise take place near them.

### 2.6 Alternative ways of eliminating the singularities

If the success of skip connections can be attributed, at least partly, to eliminating singularities, then alternative ways of eliminating them should also improve training. We tested this hypothesis by introducing a particularly simple way of eliminating singularities: for each layer, we drew random target biases from a Gaussian distribution, $\mathcal{N}(\mu, \sigma^2)$, and put an $\ell_2$-norm penalty on the learned biases deviating from those targets. This breaks the permutation symmetry between units and eliminates the overlap singularities. In addition, positive values of $\mu$ decrease the average threshold of the units and make the elimination of units less likely (but not impossible), hence reducing the elimination singularities. Note that setting $\mu = 0$ and $\sigma = 0$ corresponds to the standard $\ell_2$-norm regularization of the biases, which does not eliminate any of the overlap or elimination singularities; hence, we expect the performance to be worse in this case than in cases with properly eliminated singularities. On the other hand, although larger values of $\mu$ and $\sigma$ in general correspond to greater elimination of singularities, the network also has to perform well in the classification task, and very large $\mu$, $\sigma$ values might be inconsistent with the latter requirement. Therefore, we expect the performance to be optimal for intermediate values of $\mu$ and $\sigma$. In the experiments reported below, we optimized the hyperparameters $\mu$, $\sigma$ and $\lambda$, i.e. the mean and the standard deviation of the target bias distribution and the strength of the bias regularization term, through random search bergstra2012 .

Putting a prior over the biases indirectly puts a prior over the unit activities. More complicated joint priors over hidden unit responses that favor decorrelated cogswell2015 or clustered liao2016 responses have been proposed before. Although the primary motivation for these regularization schemes was to improve the generalizability or interpretability of the learned representations, they can potentially be understood from a singularity elimination perspective as well. For example, a prior that favors decorrelated responses can facilitate the breaking of permutation symmetries between hidden units, even though it does not directly break those symmetries itself (unlike our bias regularizer).
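The regularizer itself is one line: an $\ell_2$ penalty anchoring each layer's biases to fixed random targets drawn once from $\mathcal{N}(\mu, \sigma^2)$. A sketch with hypothetical hyperparameter values (the actual $\mu$, $\sigma$ and penalty strength are tuned by random search in the paper):

```python
import numpy as np

def bias_reg_penalty(biases, target_biases, lam):
    """L2 penalty pulling each layer's learned biases toward fixed random
    targets. The targets are drawn once from N(mu, sigma^2) and then kept
    fixed; distinct targets break the permutation symmetry between the
    units of a layer."""
    return lam * sum(np.sum((b - t) ** 2) for b, t in zip(biases, target_biases))

rng = np.random.default_rng(0)
mu, sigma, lam = 0.5, 0.3, 1e-3                      # hypothetical values
targets = [rng.normal(mu, sigma, size=128) for _ in range(30)]  # 30 layers, 128 units
biases = [np.zeros(128) for _ in range(30)]          # biases at initialization
penalty = bias_reg_penalty(biases, targets, lam)
```

Setting `mu = sigma = 0` makes every target zero, recovering the standard $\ell_2$ regularization of the biases that, as noted above, eliminates no singularities.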

We trained 30-layer feedforward networks on the CIFAR-10 and CIFAR-100 datasets. Figure 5a-b shows the training accuracy of the different models on the two datasets. For both datasets, among the models shown in Figure 5, the residual network performs best and the plain network worst. Our simple singularity elimination scheme of bias regularization (BiasReg, cyan) significantly improves performance over the plain network. Importantly, the standard $\ell_2$-norm regularization of the biases (BiasL2Reg ($\mu = \sigma = 0$), magenta) does not improve performance over the plain network. These results are consistent with the singularity elimination hypothesis.

There is still a significant performance gap between our BiasReg network and the residual network despite the fact that both break degeneracies. This can be partly attributed to the fact that the residual network breaks the degeneracies more effectively than the BiasReg network (Figure 5c). Secondly, even in models that completely eliminate the singularities, the learning speed would still depend on the behavior of the gradient norms, and the residual network fares better than the BiasReg network in this respect as well. At the beginning of training, the gradient norms with respect to the layer activities do not diminish in the earlier layers of the residual network (Figure 6a, Epoch 0), demonstrating that it effectively solves the vanishing gradients problem hochreiter1991 ; bengio1994 . On the other hand, both in the plain network and in the BiasReg network, the gradient norms decay quickly as one descends from the top of the network. Moreover, as training progresses (Figure 6a, Epochs 1 and 2), the gradient norms are larger for the residual network than for the plain or the BiasReg network. Importantly, the same patterns hold even for the maliciously initialized residual network (Figure 6a; ResMalInit), suggesting that skip connections boost the gradient norms near the ghosts of singularities as well and reduce the learning slow-down that would otherwise take place near them. Adding a single batch normalization layer ioffe2015 in the middle of the BiasReg network alleviates the vanishing gradients problem for this network and brings its performance closer to that of the residual network (Figure 6a-b; BiasReg+BN).

### 2.7 Non-identity skip connections

If the singularity elimination hypothesis is correct, there should be nothing special about identity skip connections: skip connections other than the identity should also lead to training improvements as long as they eliminate singularities. For the permutation symmetry breaking of the hidden units, the crucial condition is that the skip connection vector for each unit should disambiguate that unit from all other units in that layer. Mathematically, this corresponds to an orthogonality condition on the skip connectivity matrix. We therefore tested random dense orthogonal matrices as skip connectivity matrices. Random dense orthogonal matrices performed slightly better than identity skip connections on both the CIFAR-10 and CIFAR-100 datasets (Figure 7a, black vs. blue). This is because, even with skip connections, units can be deactivated for some inputs because of the ReLU nonlinearity (recall that we do not allow skip connections from the input layer). When this happens to a single unit at layer $l$, that unit is effectively eliminated for that subset of inputs, which in turn eliminates the skip connection to the corresponding unit at layer $l+1$ if the skip connectivity is the identity. This causes a potential elimination singularity for that particular unit. With dense skip connections, however, this possibility is reduced, since all units in the previous layer are used. Moreover, when two distinct units at layer $l$ are deactivated together, identity skips cannot disambiguate the corresponding units at the next layer, causing a potential overlap singularity. With dense orthogonal skips, on the other hand, because all units at layer $l$ are used, even if some of them are deactivated, the units at layer $l+1$ can still be disambiguated by the remaining active units. Figure 7b confirms for the CIFAR-100 dataset that, throughout most of training, the hidden units of the network with dense orthogonal skip connections have a lower probability of zero responses than those of the network with identity skip connections.

Next, we gradually decreased the degree of "orthogonality" of the skip connectivity matrix to see how orthogonality affects performance. Starting from a random dense orthogonal matrix, we first divided the matrix into two halves and copied the first half onto the second half; starting from $n$ orthonormal vectors, this reduces the number of distinct orthonormal vectors to $n/2$. We continued in this way until all columns of the matrix were repeats of a single unit vector. We predicted that as the number of distinct orthonormal vectors in the skip connectivity matrix decreases, performance should deteriorate, because the permutation symmetry-breaking capacity of the skip connectivity matrix is reduced. Figure 7 shows the results for networks with 128 hidden units per layer. Darker colors correspond to "more orthogonal" matrices (e.g. 128 means all 128 vectors are orthonormal to each other) and the blue line is the identity skip connectivity. More orthogonal skip connectivity matrices yield better performance, consistent with our hypothesis.

The less orthogonal skip matrices also suffer from the vanishing gradients problem, so their failure could be partly attributed to vanishing gradients. To control for this effect, we also designed skip connectivity matrices with eigenvalues on the unit circle (hence with eigenvalue spectra equivalent to that of an orthogonal matrix), but with varying degrees of orthogonality (see Supplementary Note 5 for details). More specifically, the columns (or rows) of an orthogonal matrix are orthonormal to each other, hence the covariance matrix of these vectors is the identity matrix. We designed matrices where this covariance matrix was allowed to have non-zero off-diagonal values, reflecting the fact that the vectors are no longer orthogonal. By controlling the magnitude of the correlations between the vectors, we manipulated their degree of orthogonality. We achieved this by setting the eigenvalue spectrum of the covariance matrix with a single parameter that controls the degree of orthogonality: a zero value of the parameter corresponds to the identity covariance matrix, hence to an orthonormal set of vectors, whereas larger values correspond to gradually more correlated vectors. This orthogonality manipulation was done while fixing the eigenvalue spectrum of the skip connectivity matrix to be on the unit circle; hence, the effects of this manipulation cannot be attributed to any change in the eigenvalue spectrum, but only to the degree of orthogonality of the skip vectors. The results of this experiment are shown in Figure 8. More orthogonal skip connectivity matrices still perform better than less orthogonal ones, even when their eigenvalue spectrum is fixed, suggesting that the results of the earlier experiment (Figure 7) cannot be explained solely by the vanishing gradients problem.
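The halving construction is easy to reproduce: a matrix whose columns are copies of $k$ distinct orthonormal vectors has rank $k$, quantifying its reduced symmetry-breaking capacity. A sketch (the QR-based sampling of orthonormal vectors is our choice, not necessarily the paper's exact procedure):

```python
import numpy as np

def skip_matrix(n, n_distinct, seed=0):
    """n x n skip connectivity matrix whose columns are copies of
    `n_distinct` random orthonormal vectors (n_distinct must divide n).
    n_distinct == n gives a fully orthogonal matrix; smaller values mimic
    the halving/copying procedure described in the text."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.normal(size=(n, n_distinct)))  # orthonormal columns
    return np.concatenate([Q] * (n // n_distinct), axis=1)

for k in [128, 64, 32, 1]:
    S = skip_matrix(128, k)
    # Rank equals the number of distinct orthonormal vectors: fewer
    # distinct vectors, less capacity to disambiguate the 128 units.
    print(k, np.linalg.matrix_rank(S))
```

The fully distinct case (`n_distinct == n`) recovers a random dense orthogonal matrix, while `n_distinct == 1` corresponds to the most degenerate endpoint of the manipulation.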

## 3 Discussion

In this paper, we proposed a novel explanation for the benefits of skip connections in terms of the elimination of singularities. Our results suggest that elimination of singularities contributes at least partly to the success of skip connections. However, we emphasize that singularity elimination is not the only factor explaining the benefits of skip connections. Even in completely non-degenerate models, other independent factors such as the behavior of gradient norms would affect training performance. Indeed, we presented evidence suggesting that skip connections are also quite effective at dealing with the problem of vanishing gradients and not every form of singularity elimination can be expected to be equally good at dealing with such additional problems that beset the training of deep networks.

We only performed experiments with fully-connected networks, but we note that limited receptive field sizes and weight sharing among the units of a single feature channel also reduce the permutation symmetry of a given layer in convolutional neural networks. The symmetry is not entirely eliminated: individual units no longer have permutation symmetry in this case, but feature channels do, and these are far fewer in number than the individual units. Similarly, a recent extension of the residual architecture called ResNeXt xie2016 uses parallel, segregated processing streams inside the "bottleneck" blocks, which can again be seen as a way of reducing the permutation symmetry inside the block.

Our results highlight a potential disadvantage of the highly redundant, over-parametrized models currently favored in deep learning: degenerate manifolds in the loss landscapes of such models can slow down learning (cf. anandkumar2016 ). The results reported here suggest that it could be useful for neural network researchers to pay closer attention to the degeneracies inherent in their models. As a general design principle, we recommend reducing the degeneracies in a model as much as possible without sacrificing the model's expressive capacity.

Acknowledgments: We thank Guangyu Robert Yang for helpful discussions and comments on an earlier version of this paper. EO and XP were supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.

## References

- (1) Amari S, Park H, Ozeki T (2006) Singularities affect dynamics of learning in neuromanifolds. Neural Comput 18(5):1007-65.
- (2) Anandkumar A, Ge R (2016) Efficient approaches for escaping higher order saddle points in non-convex optimization. arxiv:1602.05908.
- (3) Bengio Y, Simard P, Frasconi P (1994) Learning long-term dependencies with gradient descent is difficult. IEEE Trans Neural Netw 5(2):157-66.
- (4) Bergstra J, Bengio Y (2012) Random search for hyper-parameter optimization. JMLR 13:281-305.
- (5) Cogswell M, Ahmed F, Girshick R, Zitnick L, Batra D (2015) Reducing overfitting in deep networks by decorrelating representations. arxiv:1511.06068.
- (6) Glorot X, Bengio Y (2010) Understanding the difficulty of training deep feedforward neural networks. AISTATS 9:249-256.
- (7) Hardt M, Ma T (2016) Identity matters in deep learning. arXiv:1611.04231.
- (8) He K, Zhang X, Ren S, Sun J (2015) Deep residual learning for image recognition. arXiv:1512.03385.
- (9) He K, Zhang X, Ren S, Sun J (2016) Identity mappings in deep residual networks. arXiv:1603.05027.
- (10) Hochreiter S (1991) Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Institut f. Informatik, Technische Univ. Munich.
- (11) Huang G, Liu Z, Weinberger KQ, van der Maaten L (2016) Densely connected convolutional networks. arXiv:1608.06993.
- (12) Ioffe S, Szegedy C (2015) Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167.
- (13) Kingma DP, Ba JL (2014) Adam: a method for stochastic optimization. arXiv:1412.6980.
- (14) Li S, Jiao J, Han Y, Weissman T (2016) Demystifying ResNet. arXiv:1611.01186.
- (15) Liao R, Schwing AG, Zemel RS, Urtasun R (2016) Learning deep parsimonious representations. Advances in Neural Information Processing Systems, 2016, 5076-5084.
- (16) Pearlmutter BA (1994) Fast exact multiplication by the Hessian. Neural Comput 6(1):147-160.
- (17) Saad D, Solla SA (1995) On-line learning in soft committee machines. Phys Rev E 52:4225.
- (18) Saxe AM, McClelland JL, Ganguli S (2013) Exact solutions to the nonlinear dynamics of learning in deep linear networks. arXiv:1312.6120.
- (19) Skilling J (1989) The eigenvalues of mega-dimensional matrices. In Skilling J, ed., Maximum Entropy and Bayesian Methods, pp. 455-466. Kluwer Academic Publishers.
- (20) Srivastava RK, Greff K, Schmidhuber J (2015) Training very deep networks. Advances in Neural Information Processing Systems, 2015, 2377-2385.
- (21) Wei H, Zhang J, Cousseau F, Ozeki T, Amari S (2008) Dynamics of learning near singularities in layered networks. Neural Comput 20(3):813-43.
- (22) Xie S, Girshick R, Dollar P, Tu Z, He K (2016) Aggregated residual transformations for deep neural networks. arXiv:1611.05431.

## Supplementary Materials

### Supplementary Note 1: Singularity of the Hessian in non-linear multilayer networks

Because the cost function can be expressed as a sum over training examples, it is enough to consider the cost for a single example, $E = E(\mathbf{x}_L)$, where the layer activities $\mathbf{x}_l$ are defined recursively as $\mathbf{x}_l = f(\mathbf{W}_l \mathbf{x}_{l-1})$ for $l = 1, \ldots, L$, with $\mathbf{x}_0$ the input. We denote the inputs to the units at layer $l$ by the vector $\mathbf{h}_l$: $\mathbf{h}_l = \mathbf{W}_l \mathbf{x}_{l-1}$. We ignore the biases for simplicity. The derivative of the cost function with respect to a single weight $w_{ij}$ between unit $j$ at layer $l-1$ and unit $i$ at layer $l$ is given by:

$$\frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial x_{l,i}}\, f'(h_{l,i})\, x_{l-1,j} \tag{1}$$

Now, consider a different connection $w_{ij'}$ between the same output unit $i$ at layer $l$ and a different input unit $j'$ at layer $l-1$. The crucial thing to note is that if the units $j$ and $j'$ have the same set of incoming weights, then the derivative of the cost function with respect to $w_{ij'}$ becomes identical to its derivative with respect to $w_{ij}$: $\partial E / \partial w_{ij'} = \partial E / \partial w_{ij}$. This is because in this condition $x_{l-1,j} = x_{l-1,j'}$ for all possible inputs, and all the remaining terms in Equation 1 are independent of the input index $j$. Thus, the columns (or rows) corresponding to the connections $w_{ij}$ and $w_{ij'}$ in the Hessian become identical, making the Hessian degenerate. This is a re-statement of the simple observation that when the units $j$ and $j'$ have the same set of incoming weights, the parameters $w_{ij}$ and $w_{ij'}$ become non-identifiable (only their sum is identifiable). Thus, this corresponds to an overlap singularity.

Moreover, it is easy to see from Equation 1 that when the presynaptic unit $j$ is always zero, i.e. $x_{l-1,j} = 0$ for all inputs, so that the unit is effectively killed, the column (or row) of the Hessian corresponding to the parameter $w_{ij}$ becomes the zero vector for any $i$, and thus the Hessian becomes singular. This is a re-statement of the simple observation that when the unit $j$ is always zero, its outgoing connections $w_{ij}$ are no longer identifiable. This corresponds to an elimination singularity.

In the residual case, the only thing that changes in Equation 1 is that the factors $x_{l-1,j}$ on the right-hand side become $f(h_{l-1,j}) + x_{l-2,j}$, where the second term comes from the identity skip connection (with the identity interpreted as a rectangular identity matrix of the appropriate size when adjacent layers differ in width). The overlap singularities are eliminated, because $x_{l-1,j}$ and $x_{l-1,j'}$ cannot be the same for all possible inputs in the residual case (even when the adjustable incoming weights of these units are identical). Similarly, the elimination singularities are also eliminated, because $x_{l-1,j}$ cannot be identically zero for all possible inputs (even when the adjustable incoming weights of this unit are all zero), assuming that the corresponding unit at the previous layer is not always zero, which, in turn, is guaranteed by its own identity skip connection if the corresponding unit at the layer below is not always zero, and so on, all the way down to the first hidden layer.
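The argument above can be checked numerically on a tiny network. The sketch below is an assumption-laden toy (tanh in place of ReLU so that second derivatives are smooth, two hidden units, one training example): it builds the finite-difference Hessian with respect to the outgoing weights and confirms that overlapping incoming weights make its rows identical, and hence the Hessian singular.

```python
import numpy as np

# Toy network y = v . tanh(W x), squared loss on a single example. When the
# two hidden units share incoming weights, the Hessian rows/columns for
# v[0] and v[1] coincide, so the Hessian restricted to v is singular.
x, target = np.array([0.3, -0.7]), 1.0
W = np.array([[0.5, -0.2], [0.5, -0.2]])   # overlapping incoming weights

def loss(v):
    return 0.5 * (v @ np.tanh(W @ x) - target) ** 2

def hessian(f, v, eps=1e-4):
    """Central finite-difference Hessian of f at v (exact here up to
    roundoff, since the loss is quadratic in v)."""
    n = len(v)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * eps, np.eye(n)[j] * eps
            H[i, j] = (f(v + e_i + e_j) - f(v + e_i - e_j)
                       - f(v - e_i + e_j) + f(v - e_i - e_j)) / (4 * eps ** 2)
    return H

H = hessian(loss, np.array([0.4, -0.9]))
print(np.allclose(H[0], H[1], atol=1e-6))   # True: identical rows, singular Hessian
```

Since the loss is quadratic in the outgoing weights, the Hessian here is just the outer product of the (identical) hidden activities, which makes the degeneracy explicit.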

### Supplementary Note 2: Simulation details

In Figure 3, for the skip connections between non-adjacent layers in the hyper-residual networks, we used matrices of the type labeled "32" in Figure 7, i.e. matrices whose columns consist of four copies of a set of 32 orthonormal vectors. We found that these matrices performed slightly better than orthogonal matrices.

We augmented the training data in both CIFAR-10 and CIFAR-100 by adding reflected versions of each training image, i.e. their mirror images. This yields a total of 100000 training images for both datasets. The test data were not augmented, consisting of 10000 images in both cases. We used the standard splits of the data into training and test sets.

For the BiasReg network of Figures 5-6, random hyperparameter search returned separate values of $\mu$ and $\sigma$ for the target bias distributions on CIFAR-10 and CIFAR-100.

### Supplementary Note 3: Estimating the eigenvalue spectral density of the Hessian in deep networks

We use Skilling’s moment matching method skilling1989 to estimate the eigenvalue spectra of the Hessian. We first estimate the first few non-central moments of the density by computing $\hat{m}_k = \frac{1}{N}\, \mathbf{r}^\top \mathbf{H}^k \mathbf{r}$, where $\mathbf{r}$ is a random vector drawn from the standard multivariate Gaussian with zero mean and identity covariance, $\mathbf{H}$ is the Hessian and $N$ is the dimensionality of the parameter space. Because the standard multivariate Gaussian is rotationally symmetric and the Hessian is a symmetric matrix, it is easy to show that $\hat{m}_k$ gives an unbiased estimate of the $k$-th moment of the spectral density:

$$\mathbb{E}[\hat{m}_k] = \frac{1}{N} \sum_{i=1}^{N} \lambda_i^k \longrightarrow \int \lambda^k p(\lambda)\, \mathrm{d}\lambda \tag{2}$$

where $\lambda_i$ are the eigenvalues of the Hessian, and $p(\lambda)$ is the spectral density of the Hessian as $N \rightarrow \infty$. In Equation 2, we make use of the fact that the squared components of $\mathbf{r}$ in the eigenbasis of the Hessian are random variables with expected value $1$.

Despite appearances, the products in $\mathbf{r}^\top \mathbf{H}^k \mathbf{r}$ do not require the computation of the Hessian explicitly and can instead be computed efficiently as follows:

$$\mathbf{v}_0 = \mathbf{r}, \qquad \mathbf{v}_k = \mathbf{H} \mathbf{v}_{k-1} \tag{3}$$

where the Hessian-times-vector computation can be performed without computing the Hessian explicitly through Pearlmutter’s $\mathcal{R}$-operator pearlmutter1994 . In terms of the vectors $\mathbf{v}_k$, the estimates of the moments are given by the following:

$$\hat{m}_{2k} = \frac{1}{N}\, \mathbf{v}_k^\top \mathbf{v}_k, \qquad \hat{m}_{2k+1} = \frac{1}{N}\, \mathbf{v}_k^\top \mathbf{v}_{k+1} \tag{4}$$
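The moment estimator can be sketched with plain Hessian-vector products. In the sketch below, the `hvp` callable stands in for Pearlmutter's R-operator, and the simpler $\mathbf{r}^\top \mathbf{H}^k \mathbf{r}$ form of the estimate is used (the pairing of the $\mathbf{v}_k$ in Equation 4 is a variance-reduced refinement of it); a small explicit symmetric matrix serves as a sanity check:

```python
import numpy as np

def spectral_moments(hvp, dim, n_moments=4, n_probes=200, seed=0):
    """Estimate m_k = (1/N) tr(H^k) = (1/N) sum_i lambda_i^k using only
    Hessian-vector products, averaged over Gaussian probe vectors r.
    E[r^T H^k r] = tr(H^k) because E[r r^T] = I."""
    rng = np.random.default_rng(seed)
    moments = np.zeros(n_moments)
    for _ in range(n_probes):
        vs = [rng.normal(size=dim)]          # v_0 = r
        for _ in range(n_moments):           # v_k = H v_{k-1}; H never formed
            vs.append(hvp(vs[-1]))
        for k in range(1, n_moments + 1):
            moments[k - 1] += vs[0] @ vs[k] / dim
    return moments / n_probes

# Sanity check on a small explicit symmetric matrix; here H is formed only
# to define the hvp and to compute the exact moments for comparison.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 50))
H = (A + A.T) / 2
est = spectral_moments(lambda v: H @ v, 50)
exact = [np.trace(np.linalg.matrix_power(H, k)) / 50 for k in range(1, 5)]
```

In a real network, `hvp` would be supplied by automatic differentiation, so the cost is a handful of backward passes per probe rather than anything quadratic in the parameter count.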

For the results shown in Figure 3, we use 20-layer fully-connected feedforward networks; for the remaining simulations, we use 30-layer fully-connected networks.

We estimate the first four moments of the Hessian and fit the estimated moments with a parametric density model. The parametric density model we use is a mixture of a narrow Gaussian distribution (to capture the bulk of the density) and a skew-normal distribution (to capture the tails):

$$p(\lambda) = (1 - \pi)\, \mathcal{N}(\lambda;\, \mu_0, \sigma_0) + \pi\, \mathcal{SN}(\lambda;\, \xi, \omega, \alpha) \tag{5}$$

with 4 parameters in total: the mixture weight $\pi$, and the location $\xi$, scale $\omega$ and shape $\alpha$ parameters of the skew-normal distribution. We fix the mean $\mu_0$ and standard deviation $\sigma_0$ of the Gaussian component to small constant values. Since the densities are heavy-tailed, the moments are dominated by the tail behavior of the model, hence the fits are not very sensitive to the precise choice of the parameters of the Gaussian component. The moments of our model can be computed in closed form. We had difficulty fitting the parameters of the model with gradient-based methods, hence we used a simple grid search method instead, with the values of $\pi$ and $\alpha$ logarithmically spaced, the values of $\xi$ and $\omega$ linearly spaced, and the same number of values evaluated along each parameter dimension.

The estimated moments ranged over several orders of magnitude. To make sure that the optimization gave roughly equal weight to fitting each moment, we minimized a normalized objective function:

$$\sum_{k=1}^{4} \left( \frac{\hat{m}_k - m_k(\boldsymbol{\theta})}{\hat{m}_k} \right)^2 \tag{6}$$

where $m_k(\boldsymbol{\theta})$ is the model-derived estimate of the $k$-th moment.

### Supplementary Note 4: Dynamics of learning in linear networks with skip connections

To get a better analytic understanding of the effects of skip connections on the learning dynamics, we turn to linear networks. In an -layer linear plain network, the input-output mapping is given by (again ignoring the biases for simplicity):

$$y = W_{L-1} W_{L-2} \cdots W_1\, x \tag{7}$$

where $x$ and $y$ are the input and output vectors, respectively. In linear residual networks with identity skip connections between adjacent layers, the input-output mapping becomes:

$$y = (W_{L-1} + I)(W_{L-2} + I) \cdots (W_1 + I)\, x \tag{8}$$

Finally, in hyper-residual linear networks where all skip connection matrices are assumed to be the identity, the input-output mapping is given by:

$$y = \big(W_{L-1} + (L-1)I\big)\big(W_{L-2} + (L-2)I\big) \cdots (W_1 + I)\, x \tag{9}$$

In the derivations to follow, we do not have to assume that the connectivity matrices are square. If they are rectangular, the identity matrix $I$ should be interpreted as a rectangular identity matrix of the appropriate size. This corresponds to zero-padding the layers when they are not the same size, as is usually done in practice.
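Concretely, the three mappings can be composed layer by layer. The sketch below (not from the paper's code) uses a rectangular identity to implement the zero-padding just described, and takes the hyper-residual layer $l$ to be $W_l + lI$, consistent with the energy functions derived below.

```python
import numpy as np

def rect_eye(rows, cols):
    """Rectangular identity: zero-pads when adjacent layer widths differ."""
    I = np.zeros((rows, cols))
    np.fill_diagonal(I, 1.0)
    return I

def forward(weights, x, mode="plain"):
    """Compose the plain (Eq. 7), residual (Eq. 8), or hyper-residual (Eq. 9) map."""
    h = x
    for l, W in enumerate(weights, start=1):
        I = rect_eye(*W.shape)
        if mode == "plain":
            h = W @ h
        elif mode == "residual":
            h = (W + I) @ h
        elif mode == "hyper":
            h = (W + l * I) @ h
    return h
```

With all weights set to zero, the plain network outputs zero, while the residual network reduces to a chain of (rectangular) identities, illustrating why residual units stay active even when their adjustable weights vanish.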

Three-layer networks: The dynamics of learning in plain linear networks with no skip connections were analyzed in saxe2013 . For a three-layer network ($L = 3$), the learning dynamics can be expressed by the following differential equations saxe2013 :

$$\tau \frac{d a^\alpha}{dt} = \left(s_\alpha - a^\alpha \cdot b^\alpha\right) b^\alpha - \sum_{\beta \neq \alpha} \left(a^\alpha \cdot b^\beta\right) b^\beta \tag{10}$$

$$\tau \frac{d b^\alpha}{dt} = \left(s_\alpha - a^\alpha \cdot b^\alpha\right) a^\alpha - \sum_{\beta \neq \alpha} \left(b^\alpha \cdot a^\beta\right) a^\beta \tag{11}$$

Here $a^\alpha$ and $b^\alpha$ are $n$-dimensional column vectors (where $n$ is the number of hidden units) connecting the hidden layer to the $\alpha$-th input and output modes, respectively, of the input-output correlation matrix, and $s_\alpha$ is the corresponding singular value (see saxe2013 for further details). The first term on the right-hand side of Equations 10-11 facilitates cooperation between $a^\alpha$ and $b^\alpha$ corresponding to the same input-output mode $\alpha$, while the second term encourages competition between vectors corresponding to different modes. In the simplest scenario where there are only two input and output modes, the learning dynamics of Equations 10, 11 reduces to:

$$\tau \dot{a}^1 = \left(s_1 - a^1 \cdot b^1\right) b^1 - \left(a^1 \cdot b^2\right) b^2 \tag{12}$$

$$\tau \dot{b}^1 = \left(s_1 - a^1 \cdot b^1\right) a^1 - \left(b^1 \cdot a^2\right) a^2 \tag{13}$$

$$\tau \dot{a}^2 = \left(s_2 - a^2 \cdot b^2\right) b^2 - \left(a^2 \cdot b^1\right) b^1 \tag{14}$$

$$\tau \dot{b}^2 = \left(s_2 - a^2 \cdot b^2\right) a^2 - \left(b^2 \cdot a^1\right) a^1 \tag{15}$$
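These four coupled equations are easy to integrate directly. The following sketch (assumed notation, illustrative singular values and step size) runs Euler steps of Equations 12-15 from a small random initialization.

```python
import numpy as np

def step(a1, b1, a2, b2, s1, s2, lr):
    """One Euler step of the two-mode dynamics (Equations 12-15, tau = 1)."""
    da1 = (s1 - a1 @ b1) * b1 - (a1 @ b2) * b2
    db1 = (s1 - a1 @ b1) * a1 - (b1 @ a2) * a2
    da2 = (s2 - a2 @ b2) * b2 - (a2 @ b1) * b1
    db2 = (s2 - a2 @ b2) * a2 - (b2 @ a1) * a1
    return a1 + lr * da1, b1 + lr * db1, a2 + lr * da2, b2 + lr * db2

def simulate(n_steps=20000, lr=1e-2, seed=0):
    """Two modes (s1 > s2), two hidden units, small random initialization."""
    rng = np.random.default_rng(seed)
    s1, s2 = 3.0, 1.0
    a1, b1, a2, b2 = (0.01 * rng.standard_normal(2) for _ in range(4))
    for _ in range(n_steps):
        a1, b1, a2, b2 = step(a1, b1, a2, b2, s1, s2, lr)
    return a1, b1, a2, b2
```

Over training, the cooperative overlaps approach the singular values ($a^1 \cdot b^1 \to s_1$, $a^2 \cdot b^2 \to s_2$) while the competitive overlaps decay toward zero.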

How does adding skip connections between adjacent layers change the learning dynamics? Considering again a three-layer network ($L = 3$) with only two input and output modes, a straightforward extension of Equations 12-15 shows that the learning dynamics changes as follows:

$$\tau \dot{a}^1 = \left(s_1 - (a^1 + v^1) \cdot (b^1 + u^1)\right) (b^1 + u^1) - \left((a^1 + v^1) \cdot (b^2 + u^2)\right) (b^2 + u^2) \tag{16}$$

$$\tau \dot{b}^1 = \left(s_1 - (a^1 + v^1) \cdot (b^1 + u^1)\right) (a^1 + v^1) - \left((b^1 + u^1) \cdot (a^2 + v^2)\right) (a^2 + v^2) \tag{17}$$

$$\tau \dot{a}^2 = \left(s_2 - (a^2 + v^2) \cdot (b^2 + u^2)\right) (b^2 + u^2) - \left((a^2 + v^2) \cdot (b^1 + u^1)\right) (b^1 + u^1) \tag{18}$$

$$\tau \dot{b}^2 = \left(s_2 - (a^2 + v^2) \cdot (b^2 + u^2)\right) (a^2 + v^2) - \left((b^2 + u^2) \cdot (a^1 + v^1)\right) (a^1 + v^1) \tag{19}$$

where $v^1$ and $v^2$ are orthonormal vectors (similarly for $u^1$ and $u^2$). The derivation proceeds essentially identically to the corresponding derivation for plain networks in saxe2013 . The only differences are: (i) we substitute the plain weight matrices with their residual counterparts, $W_1 + I$ and $W_2 + I$; and (ii) when changing the basis from the canonical basis for the weight matrices $W_1$, $W_2$ to the input and output modes of the input-output correlation matrix, $u^\alpha$ and $v^\alpha$, we note that:

$$(W_1 + I)\, V = W_1 V + V \tag{20}$$

$$(W_2 + I)^\top U = W_2^\top U + U \tag{21}$$

where $U$ and $V$ are orthogonal matrices and the vectors $a^\alpha$, $b^\alpha$, $v^\alpha$ and $u^\alpha$ in Equations 16-19 correspond to the $\alpha$-th columns of the matrices $W_1 V$, $W_2^\top U$, $V$ and $U$, respectively.

Figure S1 shows, for two different initializations, the evolution of the variables $a^\alpha$ and $b^\alpha$ in plain and residual networks with two input-output modes and two hidden units. When the variables are initialized to small random values, the dynamics in the plain network initially evolves slowly (Figure S1a, blue), whereas it is much faster in the residual network (Figure S1a, red). This effect is attributable to two factors. First, the added orthonormal vectors $u^\alpha$ and $v^\alpha$ increase the initial velocity of the variables in the residual network. Second, even when we equalize the initial norms of the vectors $a^\alpha$ and $a^\alpha + v^\alpha$ (and those of the vectors $b^\alpha$ and $b^\alpha + u^\alpha$) in the plain and the residual networks, respectively, we still observe an advantage for the residual network (Figure S1b). This is because the cooperative and competitive terms are orthogonal to each other in the residual network (or close to orthogonal, depending on the initialization of $a^\alpha$ and $b^\alpha$; see the right-hand sides of Equations 16-19), whereas in the plain network they are not necessarily orthogonal and hence can cancel each other (Equations 12-15), slowing down convergence.
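The first factor, the larger initial velocity, can be checked directly. The sketch below (assumed notation) evaluates the right-hand sides of Equations 12 and 16 at a small random initialization, taking the columns of the identity as the orthonormal skip vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
s1 = 3.0
# Small random initialization of the mode vectors (two hidden units):
a1, b1, a2, b2 = (0.01 * rng.standard_normal(2) for _ in range(4))
# Orthonormal skip vectors (here simply the canonical basis):
v1, v2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
u1, u2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Initial velocity of a^1 in the plain network (Equation 12)...
da1_plain = (s1 - a1 @ b1) * b1 - (a1 @ b2) * b2
# ...and in the residual network (Equation 16):
da1_res = (s1 - (a1 + v1) @ (b1 + u1)) * (b1 + u1) \
        - ((a1 + v1) @ (b2 + u2)) * (b2 + u2)
print(np.linalg.norm(da1_plain), np.linalg.norm(da1_res))
```

Near the origin the plain velocity is proportional to the small vector $b^1$, while the residual velocity is of order $s_1 - v^1 \cdot u^1$ times a near-unit vector, hence much larger.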

Singularity of the Hessian in linear three-layer networks: The dynamics in Equations 10, 11 can be interpreted as gradient descent ($\tau \dot{a}^\alpha = -\partial E / \partial a^\alpha$, $\tau \dot{b}^\alpha = -\partial E / \partial b^\alpha$) on the following energy function:

$$E = \frac{1}{2} \sum_\alpha \left(s_\alpha - a^\alpha \cdot b^\alpha\right)^2 + \frac{1}{2} \sum_\alpha \sum_{\beta \neq \alpha} \left(a^\alpha \cdot b^\beta\right)^2 \tag{22}$$

This energy function is invariant to a (simultaneous) permutation of the elements of the vectors $a^\alpha$ and $b^\alpha$ for all $\alpha$. This causes degenerate manifolds in the loss landscape. Specifically, for the permutation symmetry of hidden units, these manifolds are the hyperplanes $a_i^\alpha = a_j^\alpha$ (for all $\alpha$), for each pair of hidden units $i$, $j$ (similarly, the hyperplanes $b_i^\alpha = b_j^\alpha$), that make the model non-identifiable. Formally, these correspond to the singularities of the Hessian or the Fisher information matrix. Indeed, we shall quickly check below that when $a_i^\alpha = a_j^\alpha$ or $b_i^\alpha = b_j^\alpha$ for all $\alpha$, for any pair of hidden units $i$, $j$, the Hessian becomes singular (overlap singularities). The Hessian also has additional singularities at the hyperplanes $b_i^\alpha = 0$ (for all $\alpha$) for any unit $i$ and at $a_i^\alpha = 0$ (for all $\alpha$) for any unit $i$ (elimination singularities). Starting from the energy function in Equation 22, we take the derivative with respect to a single input-to-hidden layer weight, $a_i^\alpha$:

$$\frac{\partial E}{\partial a_i^\alpha} = -\left(s_\alpha - a^\alpha \cdot b^\alpha\right) b_i^\alpha + \sum_{\beta \neq \alpha} \left(a^\alpha \cdot b^\beta\right) b_i^\beta \tag{23}$$

and the second derivatives are as follows:

$$\frac{\partial^2 E}{\partial a_i^\alpha \, \partial a_j^\alpha} = \sum_\beta b_i^\beta b_j^\beta \tag{24}$$

$$\frac{\partial^2 E}{\partial a_i^\alpha \, \partial a_j^\beta} = 0 \qquad (\beta \neq \alpha) \tag{25}$$

Note that the second derivatives are independent of the mode index $\alpha$, reflecting the fact that the energy function is invariant to a permutation of the mode indices. Furthermore, when $b_i^\beta = b_j^\beta$ for all $\beta$, the columns in the Hessian corresponding to $a_i^\alpha$ and $a_j^\alpha$ become identical, causing an additional degeneracy reflecting the non-identifiability of $a_i^\alpha$ and $a_j^\alpha$. A similar derivation establishes that $a_i^\alpha = a_j^\alpha$ for all $\alpha$ also leads to a degeneracy in the Hessian, this time reflecting the non-identifiability of $b_i^\alpha$ and $b_j^\alpha$. These correspond to the overlap singularities.

In addition, it is easy to see from Equations 24, 25 that when $b_i^\beta = 0$ for all $\beta$, the right-hand sides of both equations become identically zero, reflecting the non-identifiability of $a_i^\alpha$ for all $\alpha$. A similar derivation shows that when $a_i^\alpha = 0$ for all $\alpha$, the columns of the Hessian corresponding to $b_i^\alpha$ become identically zero for all $\alpha$, this time reflecting the non-identifiability of $b_i^\alpha$ for all $\alpha$. These correspond to the elimination singularities.
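A small numerical illustration, in the assumed notation of Equation 24: the relevant Hessian block is $BB^\top$, where row $i$ of $B$ holds unit $i$'s outgoing weights across modes, and both an overlap and an elimination make this block singular.

```python
import numpy as np

# Row i of B holds hidden unit i's outgoing weights (b_i^1, ..., b_i^m).
B = np.array([[1.0, 2.0, 0.0],
              [1.0, 2.0, 0.0],    # overlap: unit 1 duplicates unit 0
              [0.0, 0.0, 0.0]])   # elimination: unit 2's outgoing weights vanish
H = B @ B.T                       # Hessian block of Equation 24
# Duplicate rows of B make two rows/columns of H identical, and the zero
# row makes an entire row/column of H vanish, so H is singular:
print(np.linalg.matrix_rank(H))   # 1, far below full rank 3
```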

When we add skip connections between adjacent layers, i.e. in the residual architecture, the energy function changes as follows:

$$E = \frac{1}{2} \sum_\alpha \left(s_\alpha - (a^\alpha + v^\alpha) \cdot (b^\alpha + u^\alpha)\right)^2 + \frac{1}{2} \sum_\alpha \sum_{\beta \neq \alpha} \left((a^\alpha + v^\alpha) \cdot (b^\beta + u^\beta)\right)^2 \tag{26}$$

and straightforward algebra yields the following second derivatives:

$$\frac{\partial^2 E}{\partial a_i^\alpha \, \partial a_j^\alpha} = \sum_\beta \left(b_i^\beta + u_i^\beta\right)\left(b_j^\beta + u_j^\beta\right) \tag{27}$$

$$\frac{\partial^2 E}{\partial a_i^\alpha \, \partial a_j^\beta} = 0 \qquad (\beta \neq \alpha) \tag{28}$$

Unlike in the plain network, setting $b_i^\beta = b_j^\beta$ for all $\beta$, or setting $b_i^\beta = 0$ for all $\beta$, does not lead to a degeneracy here, thanks to the orthogonal skip vectors $u^\beta$. However, this just shifts the locations of the singularities. In particular, the residual network suffers from the same overlap and elimination singularities as the plain network when we make the change of variables $\tilde{a}^\alpha = a^\alpha + v^\alpha$ and $\tilde{b}^\alpha = b^\alpha + u^\alpha$.
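Continuing the numerical illustration in the same assumed notation: with identity skip connections the block becomes $(B+U)(B+U)^\top$ (Equation 27, taking $U = I$), and the same overlap and elimination patterns no longer produce a degeneracy.

```python
import numpy as np

B = np.array([[1.0, 2.0, 0.0],
              [1.0, 2.0, 0.0],    # overlapping units
              [0.0, 0.0, 0.0]])   # eliminated unit
U = np.eye(3)                     # orthonormal skip vectors u_i
H_res = (B + U) @ (B + U).T       # Hessian block of Equation 27
# The skip vectors disambiguate the units, so the block stays full rank:
print(np.linalg.matrix_rank(H_res))  # 3
```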

Networks with more than three layers: As shown in saxe2013 , in linear networks with more than a single hidden layer, assuming that there are orthogonal matrices $R_{l+1}$ and $R_l$ for each layer $l$ that diagonalize the initial weight matrix of the corresponding layer (i.e. $R_{l+1}^\top W_l R_l$ is a diagonal matrix), the dynamics of different singular modes decouple from each other and each mode evolves according to gradient descent dynamics in an energy landscape described by saxe2013 :

$$E_s = \frac{1}{2} \left(s - \prod_{i=1}^{L-1} a_i\right)^2 \tag{29}$$

where $a_i$ can be interpreted as the strength of mode $s$ at layer $i$ and $L$ is the total number of layers. In residual networks, assuming further that the orthogonal matrices satisfy $R_{l+1} = R_l$, the energy function changes to:

$$E_s = \frac{1}{2} \left(s - \prod_{i=1}^{L-1} (a_i + 1)\right)^2 \tag{30}$$

and in hyper-residual networks, it is:

$$E_s = \frac{1}{2} \left(s - \prod_{i=1}^{L-1} (a_i + i)\right)^2 \tag{31}$$

Figure S2a illustrates the effect of skip connections on the phase portrait of a three-layer network. The two axes, $a$ and $b$, represent the mode strength variables at the first and second layers, respectively: i.e. $a \equiv a_1$ and $b \equiv a_2$. The plain network has a saddle point at $(a, b) = (0, 0)$ (Figure S2a; left). The dynamics around this point is slow, hence starting from small random values causes initially very slow learning. The network funnels the dynamics through the unstable manifold to the stable hyperbolic solution corresponding to $ab = s$. Identity skip connections between adjacent layers in the residual architecture move the saddle point to $(a, b) = (-1, -1)$ (Figure S2a; middle). This speeds up the dynamics around the origin, but not as much as in the hyper-residual architecture, where the saddle point is moved further away from the origin and the main diagonal, to $(a, b) = (-1, -2)$ (Figure S2a; right). We found these effects to be more pronounced in deeper networks. Figure S2b shows the dynamics of learning in 10-layer linear networks, demonstrating a clear advantage for the residual architecture over the plain architecture and for the hyper-residual architecture over the residual architecture.
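As a quick check of the saddle locations just described (a sketch assuming the energies of Equations 29-31 with $L = 3$; the function names are illustrative), the gradient of each mode-strength energy vanishes at the corresponding point, while the origin is no longer a critical point once skips are added.

```python
import numpy as np

s = 2.0

def grad(c1, c2, a, b):
    """Gradient of the two-variable energy E = 0.5 * (s - (a + c1) * (b + c2))**2."""
    r = s - (a + c1) * (b + c2)
    return np.array([-r * (b + c2), -r * (a + c1)])

# Plain network (c1 = c2 = 0): saddle at the origin.
assert np.allclose(grad(0, 0, 0.0, 0.0), 0.0)
# Residual network (c1 = c2 = 1): saddle moved to (-1, -1).
assert np.allclose(grad(1, 1, -1.0, -1.0), 0.0)
# Hyper-residual network (c1 = 1, c2 = 2): saddle moved off the diagonal.
assert np.allclose(grad(1, 2, -1.0, -2.0), 0.0)
# After adding skips, the gradient at the origin is no longer zero:
assert np.linalg.norm(grad(1, 1, 0.0, 0.0)) > 0.5
```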

Singularity of the Hessian in reduced linear multilayer networks with skip connections: The derivative of the cost function of a linear multilayer residual network (Equation 30) with respect to the mode strength variable at layer $l$, $a_l$, is given by (suppressing the mode index and taking $\tau = 1$):

$$\frac{\partial E}{\partial a_l} = -\left(s - \prod_{i=1}^{L-1} (a_i + 1)\right) \prod_{i \neq l} (a_i + 1) \tag{32}$$

and the second derivatives are:

$$\frac{\partial^2 E}{\partial a_l^2} = \left(\prod_{i \neq l} (a_i + 1)\right)^2 \tag{33}$$

$$\frac{\partial^2 E}{\partial a_l \, \partial a_k} = \prod_{i \neq l} (a_i + 1) \prod_{i \neq k} (a_i + 1) - \left(s - \prod_{i=1}^{L-1} (a_i + 1)\right) \prod_{i \neq l,k} (a_i + 1) \qquad (k \neq l) \tag{34}$$

It is easy to check that the columns (or rows) corresponding to $a_l$ and $a_k$ in the Hessian become identical when $a_l = a_k$, making the Hessian degenerate. The hyper-residual architecture does not eliminate these degeneracies, but shifts them to different locations in the parameter space by adding distinct constants, $l$ and $k$, to $a_l$ and $a_k$ (and likewise to all other variables).

### Supplementary Note 5: Designing skip connectivity matrices with varying degrees of orthogonality and with eigenvalues on the unit circle

We generated the covariance matrix of the eigenvectors as $\Sigma = Q \Lambda_\Sigma Q^\top$, where $Q$ is a random orthogonal matrix and $\Lambda_\Sigma$ is the diagonal matrix of eigenvalues, as explained in the main text. We find the correlation matrix through $C = D^{-1/2} \Sigma D^{-1/2}$, where $D$ is the diagonal matrix of the variances: i.e. $D_{ii} = \Sigma_{ii}$. We take the Cholesky decomposition of the correlation matrix, $C = L L^\top$. Then the designed skip connectivity matrix is given by $S = G \Lambda G^{-1}$ with $G = F L^\top$, where $\Lambda$ and $F$ are the matrices of eigenvalues and eigenvectors of another randomly generated orthogonal matrix, $O$: i.e. $O = F \Lambda F^{-1}$. With this construction, $S$ has the same eigenvalue spectrum as $O$ (eigenvalues on the unit circle); however, the eigenvectors of $S$ are linear combinations of the eigenvectors of $O$ such that their correlation matrix is given by $C$. Thus, the eigenvectors of $S$ are not orthogonal to each other unless $C = I$. Larger values of the correlation parameter defined in the main text yield more correlated, hence less orthogonal, eigenvectors.

## References

- (1) Pearlmutter BA (1994) Fast exact multiplication by the Hessian. Neural Comput 6(1):147-60.
- (2) Saxe AM, McClelland JM, Ganguli S (2013) Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv:1312.6120.
- (3) Skilling J (1989) The eigenvalues of mega-dimensional matrices. In Skilling J. ed., Maximum Entropy and Bayesian Methods, pp. 455-466. Kluwer Academic Publishers.
