Positively Scale-Invariant Flatness of ReLU Neural Networks

03/06/2019 ∙ by Mingyang Yi, et al.

It was empirically confirmed by Keskar et al. [SharpMinima] that flatter minima generalize better. However, for the popular ReLU network, a sharp minimum can also generalize well [SharpMinimacan]. This demonstrates that the existing definitions of flatness fail to account for the complex geometry of ReLU neural networks, because they do not cover the positively scale-invariant (PSI) property of ReLU networks. In this paper, we formalize how the PSI property causes problems for the existing definitions of flatness, and propose a new description of flatness: PSI-flatness. PSI-flatness is defined on the values of basis paths [GSGD] instead of the weights. Values of basis paths have been shown to be PSI-variables that are sufficient to represent ReLU neural networks, which ensures the PSI property of PSI-flatness. We then study the relation between PSI-flatness and generalization theoretically and empirically. First, we formulate a generalization bound based on PSI-flatness, which shows the generalization error decreasing with the ratio between the largest and smallest basis path values. That is to say, a minimum with balanced basis path values is more likely to be flat and to generalize better. Finally, we visualize the PSI-flatness of the loss surface around two learned models, which indicates that the minimum with smaller PSI-flatness can indeed generalize better.


1 Introduction

Deep neural networks (DNNs) with rectified linear unit (ReLU) activations have achieved huge success in many fields of machine learning, such as computer vision [20, 12, 6, 7], natural language processing [1, 4, 22], and information systems [2, 23]. It is surprising that DNNs can generalize well to unseen data despite their overwhelming capacity. Explaining this phenomenon is a challenging topic, because the classical uniform bounds based on Rademacher complexity or VC dimension for analyzing generalization error appear too loose in practice [25]. Many works seek new measures related to the generalization of DNNs [10, 17, 5].

It is widely believed that a flat minimum of the loss function found by stochastic gradient based methods results in good generalization ability [10, 13]. A minimum is flat if there is a wide region around it with roughly the same error. However, Dinh et al. [3] argue that for ReLU neural networks there is no specific link between the existing definitions of flatness [10, 9, 24] and the generalization error, by constructing counterexamples based on the positive homogeneity of the ReLU function. The counterexamples reveal that minima can still generalize well even with a large flatness value (a large flatness value means the loss surface is sharp). Hence, the existing definitions of flatness fail to account for the complex geometry of ReLU NNs, because a well-defined measure of flatness should at least cover the positive homogeneity of the ReLU network.

It is well known that ReLU NNs are positively scale-invariant (PSI) [19], which is a more general property than positive homogeneity. More specifically, for a ReLU network, PSI means that if the ingoing weights of a hidden node are multiplied by a positive constant and the outgoing weights of that node are divided by the same constant at the same time, the output of the network remains unchanged under such a transformation. Recent works [19, 16, 26] have coped with this property using paths or basis paths of ReLU NNs.
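The invariance can be checked numerically on a toy one-hidden-layer network (a minimal sketch; the array shapes and the helper name `psi_transform` are our own illustration, not notation from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer ReLU net: x -> relu(W @ x) -> V @ hidden
W = rng.normal(size=(4, 3))   # ingoing weights of the 4 hidden nodes
V = rng.normal(size=(2, 4))   # outgoing weights of the 4 hidden nodes

def forward(W_, V_, x):
    return V_ @ np.maximum(W_ @ x, 0.0)

# Positively scaling transformation on hidden node k:
# multiply its ingoing weights by c > 0, divide its outgoing weights by c.
def psi_transform(W_, V_, k, c):
    W2, V2 = W_.copy(), V_.copy()
    W2[k, :] *= c
    V2[:, k] /= c
    return W2, V2

x = rng.normal(size=3)
W2, V2 = psi_transform(W, V, k=1, c=7.3)

# The outputs agree because relu(c * z) = c * relu(z) for c > 0.
assert np.allclose(forward(W, V, x), forward(W2, V2, x))
```

The assertion passes for any positive constant, which is exactly the PSI property stated above.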

We analyze the existing definitions of flatness and show that if a definition of flatness of a ReLU NN does not satisfy PSI, there exist two networks with the same generalization error but different measures of flatness. Thus, such a definition is an inappropriate measure. We claim that a well-defined flatness should cover the PSI property of ReLU networks, and a function of PSI-variables is naturally PSI. Hence, it is suitable to measure the flatness of a ReLU network via PSI-variables.

In fact, values of basis paths have been proved to be PSI-variables [16]. We leverage them to define PSI-flatness for ReLU networks. Specifically, a path is a connection between an input node and an output node of a ReLU network. The value of a path is defined as the product of the weights along the path, and the basis paths are a maximal linearly independent subset of all paths (the details are introduced in Section 2). We propose three definitions of PSI-flatness in accordance with three generally accepted definitions of flatness in weight space. We uniformly call them PSI-flatness; they reveal the local variation of the loss surface of a model represented by the values of basis paths instead of the weights. Compared with the original definitions [10, 9, 18], PSI-flatness captures the PSI property of ReLU networks. Hence, we claim that PSI-flatness is a suitable measure of flatness.

We further confirm the relationship between PSI-flatness and generalization error. Theoretically, we leverage a PAC-Bayes generalization error bound to bridge PSI-flatness and generalization ability. We show that minima with a small ratio between the largest and smallest basis path values correspond to small PSI-flatness and therefore better generalization. Empirically, we visualize the loss surface represented by basis path values, following a visualization strategy in weight space similar to that proposed by Goodfellow et al. [8]. We compare the loss surface around a well-generalizing point and a poorly generalizing point. The results show that the point that generalizes well indeed lies in a flatter valley. The experimental results confirm our conclusion that minima with smaller PSI-flatness have better generalization ability.

1.1 Related Work

Flatness is a popular topic for understanding generalization both empirically and theoretically. Experimental results [10, 13] illustrate that SGD with a large batch size tends to produce "sharp" minima that generalize worse. Based on this observation, many measures have been proposed to describe flatness [10, 9, 18, 21]. Unfortunately, the existing definitions of flatness were questioned by Dinh et al. [3], who construct counterexamples to illustrate that these definitions are ill-posed. Hence they suggest that an appropriate measure should consider reparameterization.

Some works focus on the path-value based reparameterization of ReLU NNs. Neyshabur et al. [19] introduce the positively scaling transformation and first propose the path-parameterized method. Because path values overlap, Meng et al. [16] show that a subset of path values, called basis path values, is sufficient to represent ReLU networks. They claim that the space composed of basis path values is a PSI space.

Existing references on these two topics are mutually independent: none of them considers using a reparameterization method to describe flatness appropriately. In this paper, we study how the PSI property of ReLU influences the definition of geometrical measures, and leverage values of basis paths to propose a proper definition of flatness. In addition, we analyze the properties of PSI-flatness theoretically and empirically.

2 Background

2.1 Existing Definitions of Flatness

Given the loss function and a minimum, there are many generally accepted measures describing flatness around the minimum. We focus on three definitions: the ε-weight flatness [10], the trace-weight flatness [9], and the expected-weight flatness [17]. In this paper, unless otherwise stated, the loss means the empirical loss.

Definition 2.1 (Weight flatness)

Let B be a Euclidean ball centered on a minimum with radius ε. Then, for a non-negative valued loss function, the ε-weight flatness is defined as

(1)

Meanwhile, we can define the trace-weight flatness as the trace of the Hessian of the loss at the minimum, and the expected-weight flatness at a point as the expected increase of the loss under a random perturbation, where the perturbation is a random vector.

From the definitions of weight flatness, we see that minima with small weight flatness are flat minima.

2.2 Basis Paths of ReLU NN

Figure 1: A simple ReLU network with one hidden node. Suppose the path values are p1, p2, p3, p4; we can see the inner-dependency between them, i.e., p1·p4 = p2·p3.

The outputs of a ReLU NN with multiple layers can be calculated by alternately applying the layer weight matrices and the entry-wise non-linear ReLU activation function. The ReLU NN can be regarded as a directed graph. We use a path to denote a route starting from an input node and pointing to an output node, crossing one hidden node at each layer; the paths starting from the i-th input and pointing to the j-th output form a set. Then the j-th output can be represented as

(2)

where each path contributes the product of the weights it passes, gated by the activations of the hidden nodes it passes. We call the product of the weights along a path its path value, and the product of its activation indicators its activation status.

However, path values overlap with each other (see Figure 1). Meng et al. [16] disentangle the relation of paths and give a more compact representation of the ReLU network. They regard a path as a 0/1 vector with the same dimension as the number of parameters of the network, where each element represents whether the corresponding weight is passed by the path: the component is 1 if the weight is passed by the path, and 0 otherwise. All the path vectors of a ReLU network compose a matrix, denoted A. The basis path is defined as follows.

Definition 2.2 (Basis Path)

[16] A subset of all paths is a basis path set if the path vectors in it compose a maximal linearly independent group of the rows of matrix A.
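The definition can be checked numerically on a tiny one-hidden-layer MLP (a sketch under our own weight indexing): the rank of the path matrix, and hence the number of basis paths, comes out below the total number of paths.

```python
import itertools
import numpy as np

# One-hidden-layer MLP: n_in inputs, H hidden nodes, n_out outputs.
n_in, H, n_out = 2, 2, 2

# Index the weights: first-layer weight (hidden k, input i),
# second-layer weight (output j, hidden k).
w1 = {(k, i): idx for idx, (k, i) in enumerate(itertools.product(range(H), range(n_in)))}
w2 = {(j, k): len(w1) + idx for idx, (j, k) in enumerate(itertools.product(range(n_out), range(H)))}
m = len(w1) + len(w2)                      # total number of weights

# Each path (input i -> hidden k -> output j) is a 0/1 vector over weights.
paths = []
for i, k, j in itertools.product(range(n_in), range(H), range(n_out)):
    v = np.zeros(m)
    v[w1[(k, i)]] = 1.0
    v[w2[(j, k)]] = 1.0
    paths.append(v)
A = np.array(paths)                        # the path matrix

print(A.shape[0])                          # 8 paths in total
print(np.linalg.matrix_rank(A))            # 6 basis paths
```

Here the rank is 6 = m − H (8 weights minus 2 hidden nodes), matching the count of basis paths stated in Section 4.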

Here, we briefly describe the skeleton method for identifying the basis paths of a multilayer perceptron (MLP), which is designed in Meng et al. [16]. It first identifies the skeleton weights: for an MLP with equal width, the skeleton weights are the diagonal elements of the weight matrix at each layer. Then the paths that contain at most one non-skeleton weight are the basis paths. Figure 2 shows an example, in which the red lines are skeleton weights. Based on equation (2) and Theorem 3.6 in Meng et al. [16], the loss function can be defined in the basis path value space as long as the free skeleton weights keep their signs unchanged. The free skeleton weights are the skeleton weights (red lines) not in the first layer of Figure 2; our later conclusions are built under this circumstance.

Figure 2: An MLP network with equal width. The red lines of the network are skeleton weights. A path that contains at most one non-skeleton weight is a basis path.

Basis paths constructed using this method can be classified into two types: one type contains only skeleton weights, and the other contains exactly one non-skeleton weight. When the values of the free skeleton weights are fixed, there is a bijection between the weights and the basis paths: each non-skeleton weight maps to the basis path containing it, and each first-layer skeleton weight maps to the basis path containing only skeleton weights that passes through it. We will use these two mappings in Section 5 and Section 6.

3 Analysis of Existing Flatness

One important property of the ReLU activation function is positive scale-invariance: σ(c·x) = c·σ(x) for any c > 0. Dinh et al. [3] point out that, given a ReLU NN, the outputs of the network are invariant under the transformation that multiplies the weight matrix of one layer by c and the weight matrix of the adjacent layer by 1/c. This makes an infinite set of weights unidentifiable. Using this transformation, Dinh et al. [3] construct minima that generalize well but have arbitrarily large weight flatness, which reveals that these definitions are uninformative about the generalization error.

We consider a more general transformation, the positively scaling transformation, and we claim that weight flatness is ill-posed under this more extensive circumstance. First, we give a detailed description of the positively scaling transformation. For a hidden neuron of a NN, the transformation multiplies its input weights by a positive constant and divides its output weights by the same constant. A positively scaling transformation of the whole ReLU NN is defined as the composition of such transformations over all hidden neurons:

(3)

for a vector of positive constants, one per hidden neuron.

Obviously, the family of transformations defined in Definition 5 of Dinh et al. [3] is a subclass of positively scaling transformations, so the following results, obtained under positively scaling transformations, are stronger and naturally hold for that family.

We define the invariant variables of positively scaling transformations as follows.

Definition 3.1

If a function of the weights is unchanged by every positively scaling transformation, we say it is an invariant variable of positively scaling transformations, abbreviated as a PSI-variable.

Because of the PSI property of the ReLU network, the loss function is a PSI-variable. Thus, properties related to the loss function (such as generalization ability) need to take the PSI property of the network into account. The next theorem shows that if a measure of a ReLU NN is not PSI, then it is unsuitable for analyzing the generalization error.

Theorem 3.1

Given a ReLU NN model, if a measure is not invariant to all positively scaling transformations, then there exists another model whose measure takes a different value but whose output equals that of the original model for any input.

Proof. Suppose the measure changes under some specific positively scaling transformation. By choosing the other model to be the network obtained by applying this transformation, we can easily verify that the two models produce the same output for any input, by the PSI property of the ReLU network, while their measures differ. Then we get the conclusion.

Combining Definition 3.1, the definitions of flatness in Section 2.1, and Theorem 3.1, we have the following theorem, which reveals that weight flatness is not suitable because its three variants are not PSI-variables.

Theorem 3.2

The three definitions of weight flatness in Definition 2.1 are not PSI-variables.

Proof. First, we prove the non-PSI property of the ε-weight flatness. Suppose a hidden node at some layer has incoming and outgoing weight vectors with non-zero elements. There exists a positively scaling transformation with a small enough constant that makes the norm of the incoming weights smaller than the radius ε. A perturbation within the ε-ball can then zero out the incoming weights, making the network output constant, so the maximal loss in the ball is at least as high as that of a constant function, which makes the value of the ε-weight flatness relatively large.
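The effect can be illustrated with a Monte-Carlo estimate of ε-sharpness (a rough stand-in for the ε-weight flatness; the toy network, data, and sampling scheme are our own illustration): two PSI-equivalent networks have identical loss but very different sharpness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer ReLU net: y = V @ relu(W @ x)
W = rng.normal(size=(3, 2))
V = rng.normal(size=(2, 3))
X = rng.normal(size=(2, 16))          # 16 input samples
Y = V @ np.maximum(W @ X, 0.0)        # targets: the net fits them exactly

def loss(W_, V_):
    out = V_ @ np.maximum(W_ @ X, 0.0)
    return np.mean((out - Y) ** 2)

def eps_sharpness(W_, V_, eps=0.05, trials=2000):
    """Monte-Carlo estimate of the max loss increase in an eps-ball."""
    base, worst = loss(W_, V_), 0.0
    for _ in range(trials):
        d = rng.normal(size=W_.size + V_.size)
        d *= eps / np.linalg.norm(d)
        dW = d[:W_.size].reshape(W_.shape)
        dV = d[W_.size:].reshape(V_.shape)
        worst = max(worst, loss(W_ + dW, V_ + dV) - base)
    return worst

# PSI-equivalent network: scale hidden node 0's ingoing weights by c,
# its outgoing weights by 1/c -- the outputs and the loss are unchanged.
c = 100.0
W2, V2 = W.copy(), V.copy()
W2[0, :] *= c
V2[:, 0] /= c

assert np.isclose(loss(W, V), loss(W2, V2))
print(eps_sharpness(W, V), eps_sharpness(W2, V2))  # very different values
```

The rescaled network reports a far larger sharpness even though it computes exactly the same function, which is the non-PSI failure the theorem describes.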

For the trace-weight flatness, we choose the transformation analogously. Without loss of generality, we assume the weight parameter is partitioned into the incoming weights of the chosen neuron, its outgoing weights, and the parameters unrelated to the neuron. Then, we have

(4)

where the relevant dimensions are those of the incoming weight vector and of the parameters unrelated to the neuron. Denoting the Hessian in the corresponding block form as

(5)

where the blocks have the same dimensions as the corresponding parameter groups, we have

(6)

Since the Hessian at a minimum is positive semi-definite, its diagonal blocks have non-negative trace. Letting the scaling constant go to zero, the trace of the transformed Hessian diverges, while the trace of the original Hessian is a finite number.

Finally, for the expected-weight flatness, as long as the random perturbation can take negative values, we can construct a transformation as in the proof for the ε-flatness, which ensures that a point with zeroed incoming weights lies on the line between the minimum and the perturbed point. By the continuity of the loss, we can then achieve a relatively large expected flatness.

Theorem 3.2 illustrates that the ill-posed problem of weight flatness relies not only on the positive homogeneity property of the ReLU function, but is also caused by the more general PSI property. Hence, just as Dinh et al. [3] discussed, an appropriate measure of flatness needs to be defined in some reparameterized space. More specifically, we point out that flatness for ReLU networks should be described using reparameterized PSI-variables.

4 PSI-flatness

We have already discussed that the PSI property of ReLU networks causes the ill-posed problem of the existing definitions of flatness. Hence, in this section, we aim to find a positively scale-invariant measure that reflects the flatness of the loss surface of a ReLU NN.

The following theorem shows that a measure of a ReLU model is PSI if it is defined on variables that are invariant to positively scaling transformations.

Theorem 4.1

Given a ReLU NN model and a group of variables that are invariant to all positively scaling transformations, if a measure is a function of these variables, then the measure has the positively scale-invariant property.

Proof. If the variables are PSI-variables, their values are unchanged by any positively scaling transformation. Hence any function of them is also unchanged, which means the measure has the positively scale-invariant property.

According to Theorem 4.1, defining a measure on PSI-variables is a way to find a suitable definition of flatness. There are many variables satisfying PSI; however, the variables we need should also be sufficient to represent the ReLU NN, that is to say, the loss function can be totally calculated from these variables. For example, the set of constant functions is a counterexample: it is composed of PSI-variables but is not sufficient to represent the loss function.

Fortunately, Meng et al. [16] have proven that values of basis paths are PSI-variables and are sufficient to represent the ReLU NN. Thus, the values of basis paths described in Section 2.2 are exactly what we need. It has been proved that the number of basis paths is m − H, where m is the dimension of the weight vector and H is the number of hidden nodes. Let a vector collect the values of the basis paths. We correct the definitions of weight flatness in Definition 2.1 by defining them on the values of basis paths.
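The PSI property of path values is elementary to verify: each path value is a product of one ingoing and one outgoing weight per hidden node it crosses, so the scaling constants cancel. A minimal numeric check (shapes and names are our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Path values in a one-hidden-layer net: value(i -> k -> j) = W[k, i] * V[j, k].
W = rng.normal(size=(2, 2))
V = rng.normal(size=(2, 2))

def path_values(W_, V_):
    return np.array([[W_[k, i] * V_[j, k]
                      for i in range(2) for j in range(2)]
                     for k in range(2)])

# Rescale hidden node 0: ingoing weights *= c, outgoing weights /= c.
c = 3.5
W2, V2 = W.copy(), V.copy()
W2[0, :] *= c
V2[:, 0] /= c

# Every path value (hence every basis path value) is unchanged:
# (c * w) * (v / c) = w * v.
assert np.allclose(path_values(W, V), path_values(W2, V2))
```

Since basis path values are a subset of path values, the same cancellation makes them PSI-variables.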

Definition 4.1 (PSI-flatness)

Represent the ReLU NN by the values of its basis paths. Let B be a Euclidean ball in the basis path value space centered on a minimum with radius ε. Then, for a non-negative valued loss function, the PSI-ε-flatness is defined as

(7)

Meanwhile, the PSI-trace flatness is defined as the trace of the Hessian matrix of the loss with respect to the basis path values. And the PSI-expected flatness is defined via the expected loss under a random perturbation of the basis path values.

We collectively call the three definitions PSI-flatness. From the definitions, we see that minima with small PSI-flatness are flat minima. We use the prefix PSI because the space composed of basis path values is named the PSI-space.

According to Theorem 4.1 and the PSI property of basis path values, all three types of PSI-flatness are positively scale-invariant. Therefore, they capture the PSI geometrical property of the loss surface of ReLU networks, and it is suitable to leverage them to study the generalization ability of ReLU NNs.

Up to now, we have given a positive answer to Dinh et al. [3]: we have found a definition of flatness for ReLU NNs that has the PSI property.

5 Generalization Error Bound Based on PSI-flatness

Though we have given an appropriate measure of flatness, the relationship between PSI-flatness and generalization has remained unclear up to now. A perspective connecting weight flatness to generalization error is revealed by Neyshabur et al. [18], which relies on PAC-Bayes theory [14, 15]. Motivated by their study, the relationship between PSI-flatness and generalization can be derived.

In fact, given a prior distribution over the hypothesis space that is independent of the training data, with probability at least 1 − δ, we have

(8)

where the two sides involve the expected loss and the empirical loss respectively, n is the number of training data, and the basis path value vector is the minimum learned from the training data. We would like to give some explanation of this generalization bound. The generalization error is described by a perturbed generalization error rather than the exact generalization error. A small perturbation variable makes the perturbed generalization error close to the real generalization error. However, if the norm of the perturbation tends to zero, the KL term in equation (8) goes to infinity and makes the bound vacuous. Hence, there is a trade-off when choosing the perturbation from the perspective of equation (8). Even so, the result gives a quantitative description of the generalization error.

Next, we analyze the relationships among the three types of PSI-flatness, which allows us to analyze all three at the same time.

On one hand, by Taylor expansion we can easily derive that the PSI-ε flatness is larger than a quantity proportional to the trace of the Hessian of the loss with respect to the basis path values. On the other hand, if the perturbation in the PSI-expected flatness is supported within the ε-ball, the PSI-ε flatness is larger than the PSI-expected flatness. In summary, the PSI-ε flatness is the strongest measure of "flatness". Hence, combining equation (8) with an upper bound on the PSI-ε flatness can describe the generalization error.

If the empirical loss at the minimum is close to zero, the denominator of the PSI-ε flatness defined in equation (7) will also be close to zero. Hence, we use

(9)

to approximate the PSI-ε flatness, where the loss is the empirical loss. Next, we analyze the upper bound of this quantity. We need the following assumptions to derive our result.

Assumption 5.1

The norm of the input to every layer can be upper bounded by a constant.

Assumption 5.2

The loss function is Lipschitz continuous with respect to the output vector:

(10)
Theorem 5.1

Under Assumptions 5.1 and 5.2, for a ReLU NN with a given number of hidden layers, input dimension, and equal width in each hidden layer, we have

(11)

Here the maximum is taken over the basis path values, and the perturbation radius is smaller than a constant decided by the path values and the input data.

Actually, we can obtain the exact scale of this constant; it is given in the detailed proof of the theorem in the Appendix.

From the theorem, we can analyze how the structure of the NN and the values of the basis paths influence the bound. As the size (both width and depth) of the network becomes large, the bound becomes large, which means the loss surface becomes sharp. The ratio of the largest basis path value to the smallest basis path value, which reflects the variability of the basis path values, also influences the bound. More specifically, a minimum of a NN with balanced basis path values is a flatter minimum, which leads to better generalization.

We now give a proof sketch of Theorem 5.1. We need a lemma that reveals how to use the values of basis paths to calculate the values of non-basis paths. The following lemma is motivated by Proposition 1 in [26]; we formulate it as follows.

Lemma 5.1

For a NN with a given number of layers and basis paths, the value of every non-basis path can be represented as:

(12)

where the factors are values of basis paths, and the exponents in equation (12) are non-negative integers satisfying fixed sum constraints.

The lemma gives an explicit expression for the non-basis path values. It helps to estimate the difference between the perturbed and unperturbed path values in Theorem 5.1.

Here we give a sketch of the proof of Theorem 5.1; the detailed proof can be found in the supplementary material.

Sketch of Proof for Theorem 5.1: The main purpose is to bound the loss difference when the basis path values are perturbed within the ε-ball. Suppose the maximum in Theorem 5.1 is achieved at a perturbed point inside the ball. Then, we divide the proof into three steps.

Step 1: By Assumption 5.2, we have

(13)

For all input data, if the output of each neuron in the network keeps the same sign after perturbation, then the activation status remains unchanged. Under this condition, by equation (2), we have

(14)

where the sum runs over all paths. According to the definition of the empirical loss, we can bound the difference of the outputs and hence the difference of the loss. Next, we control the scale of the perturbation to ensure the activation status is unchanged for all inputs, and bound the resulting path-value differences.

Step 2: The activation status of a path is decided by the outputs of the hidden neurons it crosses. For all input data, if the output of each neuron in the network keeps the same sign after perturbation, then the activation status remains unchanged. By controlling the scale of the perturbation, we can achieve this.

Let an indicator vector with the same dimension as the number of hidden nodes at a layer represent which neurons are activated; for example, an indicator whose first two entries are 1 represents that the first two hidden neurons of layer 1 are activated while the others are not. We use a second indicator vector for the NN after perturbation. Next, we analyze the condition on the perturbation under which the two indicators agree for every layer and every input. We have

(15)

where the terms involve the value of the basis path connecting a hidden node in layer 1 with an input node, and the perturbation to the corresponding path value. We can derive an upper bound on the perturbation which forces the indicator difference in equation (15) to equal zero. Similarly, we can analyze the deeper layers. After that, we obtain the condition for an unchanged activation status after perturbation.

Step 3: We want to bound the difference of the path values for the NN before and after the perturbation. It is easy to verify the bound for basis paths. Based on Lemma 5.1, for any non-basis path, the difference of the path value is

(16)

where the factors are the corresponding basis path values from Lemma 5.1 and their perturbations. We can derive an upper bound on (16) based on the basis path values and the perturbation radius.

6 Visualizing Loss Landscapes

In this section, we visualize the loss landscape represented by basis path values to study the relationship between generalization and PSI-flatness. Previous work [13] has visualized loss landscapes around minima in weight space; however, those results cannot be applied directly to PSI-flatness, because the loss function is now defined in a totally different space. In order to further confirm the claim that minima with smaller PSI-flatness generalize better, we visualize the loss landscape represented by basis path values around minima found by stochastic gradient descent (SGD).

First, we present the visualization method for PSI-flatness. The training loss of a ReLU NN can be represented by the basis path value vector as

(17)

where the sum runs over the training data. Since the basis path value vector is usually high dimensional, visualization is possible only in one or two dimensions. Here we discuss two-dimensional contour plots of the loss landscape.

A general approach to visualizing a two-dimensional loss landscape is the "Random Direction" method used in [8]. We generalize the method to PSI space. For a model, we choose two random direction vectors whose dimension equals the number of basis paths, independently sampled from a Gaussian distribution. Then we plot the graph of the function

(18)

over the two-dimensional coordinate system spanned by the two directions. Obviously, visualizing this function amounts to visualizing the loss after adding perturbations to the basis path values. Intuitively, a flatter landscape is more robust to perturbations, so the plot should be flatter.
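The procedure can be sketched as follows, with a stand-in quadratic loss in place of the projected forward pass described in Section 6 (all names and sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

B = 50                                    # number of basis paths (toy size)
b_star = rng.normal(size=B)               # stand-in for learned basis path values

def loss_in_psi_space(b):
    # Stand-in loss on basis path values; a real run would project the
    # perturbed values back to weights and evaluate by forward propagation.
    return float(np.sum((b - b_star) ** 2))

# Two random directions in PSI space, as in the "Random Direction" method.
d1 = rng.normal(size=B)
d2 = rng.normal(size=B)

alphas = np.linspace(-1.0, 1.0, 41)
betas = np.linspace(-1.0, 1.0, 41)
Z = np.array([[loss_in_psi_space(b_star + a * d1 + b * d2)
               for a in alphas] for b in betas])

# Z can now be passed to a contour/surface plotter; the flatter the
# landscape around the center, the smaller the PSI-flatness.
print(Z.shape, Z[20, 20])                 # (41, 41) 0.0 at the minimum
```

The grid `Z` is exactly what the 3D plots and contour maps of Figure 3 visualize, with the stand-in loss replaced by the projected forward pass.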

Figure 3: "128" refers to the model trained with batch size 128; "2048" refers to the model trained with batch size 2048; the corresponding test accuracies are reported in the text. Panels (a)(b) show the 3D plots of the perturbed loss for the two models, and panels (c)(d) show the corresponding contour maps.

Since we want to verify whether generalization ability is related to PSI-flatness, the first step is to obtain two minima with different generalization performance. It is widely accepted that SGD with a small batch size produces minima that generalize better than SGD with a large batch size [13, 10]. Thus we train two ReLU NNs with the same structure via SGD with different batch sizes.

The model we choose is a stacked deep CNN, PlainNet-18, with ReLU activations [6]. We use SGD to train the model on the CIFAR-10 dataset [11]. The training strategy is the same as in the experiments reported in Section 5 of [16] except for the batch size. The batch sizes are chosen as 128 and 2048 respectively. Both experiments make the training loss close to zero. The model trained with batch size 128 achieves a higher test accuracy than the model trained with batch size 2048.

In order to visualize the loss landscapes represented by basis path values, we independently sample two random direction vectors from a Gaussian distribution. Then, for each pair of coordinates, we compute the value of the perturbed loss.

A simple method to obtain this value is to leverage the feed-forward calculation. Since the explicit expression of the loss based on values of basis paths is computationally costly, we project the perturbed values of basis paths back to weights in order to calculate the loss. Here we leverage the skeleton method of Meng et al. [16] to design the projection.

In general, under the framework of the skeleton method, there exists a bijection between weights and values of basis paths when the values of some weights are fixed. So if we perturb the basis path values, the resulting projection in weight space can be obtained as

(19)

where the right-hand side gives the corresponding perturbation in weight space.

For a basis path that contains no non-skeleton weight, we project its perturbed value onto the un-fixed skeleton weight. Solving for that weight from the perturbed basis path value, we have

(20)

Similarly, we project the value of a basis path that contains one non-skeleton weight onto the non-skeleton weight. We have

(21)

where the remaining factor denotes the skeleton weight in the path that has been updated in equation (20) (according to the theory in [16], for a basis path that contains a non-skeleton weight, there is only one skeleton weight that is not a free variable with a fixed value).

We can directly calculate the new weights after the projection according to equations (20) and (21). The value of the perturbed loss can then be easily obtained by forward propagation for every pair of coordinates.

For the two trained models, we show two kinds of figures: the 3D plots of the perturbed loss (shown in Figure 3(a)(b)) and the contour maps (shown in Figure 3(c)(d)). All the pictures are plotted with the same height interval.

From Figure 3, we clearly observe that the minimum with better test accuracy lies in a "flatter" region under the PSI-flatness measurement. This reveals that PSI-flatness is indeed related to generalization, which accords with our theoretical result.

7 Conclusion

This paper focuses on the ill-posed problem of flatness raised by Dinh et al. [3]. We give the geometrical concept of flatness a new definition and name the measure PSI-flatness. We prove that PSI-flatness is a positively scale-invariant measure for ReLU NNs and is thus more suitable for studying generalization. We also analyze the relationship between PSI-flatness and generalization: we quantitatively build a connection between PSI-flatness and the generalization error under PAC-Bayes theory, and give an upper bound on PSI-flatness that provides an angle for studying generalization from the basis path perspective. Finally, we visualize the loss surface in PSI space and show that PSI-flat minima indeed generalize better, while this fails to hold in some cases in weight space. In the future, we will study the influence of other reparameterizations of NN models on geometrical measures.

References

Appendix A Proof of Theorem 5.1

First, we prove Lemma 5.1 of the main paper. The proof is under the framework of the skeleton method. Without loss of generality, we set the fixed skeleton weight value to 1.

Lemma 5.1 For a neural network with a given number of layers and basis paths, the value of every non-basis path can be represented as:

(22)

where the factors are values of basis paths, and the exponents in equation (22) are non-negative integers satisfying fixed sum constraints.

Proof of Lemma 5.1: By the construction of the skeleton weights, any weight parameter not in the first layer can be represented as the ratio between two basis path values: one of a basis path containing only skeleton weights and one of a basis path containing one non-skeleton weight. A weight in the first layer is directly represented by one basis path value. Hence, the definition of the path value implies the conclusion.

Now we give the complete proof of Theorem 5.1. According to the skeleton method, a perturbation of the basis path values induces a perturbation of the weights, which we write as

(23)

Here the first case concerns the skeleton weight in the first layer, while the second concerns a non-skeleton weight together with the first-layer skeleton weight lying on the same basis path, and the corresponding perturbation in weight space.

Theorem 5.1 Under Assumptions 5.1 and 5.2, for a ReLU neural network with a given number of hidden layers, input dimension, and equal width in each hidden layer, we have

(24)

where the maximum is taken over the basis path values, provided the perturbation radius is smaller than a constant decided by the path values and the input data.

Proof of Theorem 5.1: Without loss of generality, we assume the upper bound in Assumption 5.1 is 1; the proof is otherwise unchanged. Suppose the maximum is attained at a perturbed point inside the ball, with the symbols following the main text. For any input data, we first give the appropriate scale of the perturbation that keeps the activation status of the first hidden layer unchanged. We notice that

(25)

where the terms involve the value of the basis path connecting a hidden neuron in the second layer with an input neuron in the first layer, and the perturbation to the corresponding path value. We notice that

(26)

where we assume the input to each neuron is not exactly zero. Choosing the perturbation small enough, the sign of each output in the first hidden layer is preserved.

To make sure the activation status of the second hidden layer is unchanged, we have

(27)

where the first quantities are respectively the outputs, and the outputs after perturbation, of the hidden neurons in the second layer. The path values in equation (27) are the values of the basis paths connecting neurons in layer 3 with neurons in layer 2, together with their corresponding perturbations. Finally, the remaining factors in equation (27) are the value of the basis path containing only skeleton weights that crosses the path, and its perturbation. The result is derived by the skeleton method. Then we have

(28)

Now, we analyze the last term in equation (28). If we choose the perturbation to make the activation status unchanged in the second layer, then we have

(29)

We see the