1 Introduction
Deep neural networks (DNNs) with rectified linear unit (ReLU) activations have achieved huge success in many fields of machine learning, such as computer vision
[20, 12, 6, 7], natural language processing [1, 4, 22], and information systems [2, 23]. It is surprising that DNNs can generalize well to unseen data despite their overwhelming capacity. Explaining this phenomenon is a challenging topic, because the classical uniform bounds based on Rademacher complexity or VC dimension for analyzing generalization error appear too loose in practice [25]. Many works seek new measures that relate to the generalization of DNNs [10, 17, 5]. It is widely believed that a flat minimum of the loss function found by stochastic gradient based methods results in good generalization ability
[10, 13]. A minimum is flat if there is a wide region around it with roughly the same error. However, Dinh et al. [3] argue that for ReLU neural networks, the existing definitions of flatness [10, 9, 24] have no specific link to generalization error, by constructing counterexamples based on the positive homogeneity of the ReLU function. The counterexamples reveal that minima can still generalize well even with a large flatness value.^1 Hence, existing definitions of flatness fail to account for the complex geometry of ReLU NNs, because a well-defined measure of flatness should at least cover the positive homogeneity of ReLU networks. It is well known that ReLU NNs are positively scale-invariant (PSI) [19], which is a more general property than positive homogeneity. More specifically, for a ReLU network, PSI means that if the ingoing weights of a hidden node are multiplied by a positive constant while the outgoing weights of that node are divided by the same constant, the output of the network remains unchanged. Recent works [19, 16, 26] have coped with this property using paths or basis paths of ReLU NNs.

^1 When the flatness measure has a large value, it means the loss surface is sharp.
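The PSI property described above is easy to check numerically. The following sketch (a hypothetical two-layer MLP in NumPy; the node index and scaling constant are arbitrary illustrative choices) multiplies the ingoing weights of one hidden node by c, divides its outgoing weights by c, and verifies that the network output is unchanged:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Hypothetical 2-layer ReLU MLP: y = W2 @ relu(W1 @ x).
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # ingoing weights of 4 hidden nodes
W2 = rng.standard_normal((2, 4))   # outgoing weights
x = rng.standard_normal(3)

y = W2 @ relu(W1 @ x)

# Positively scaling transformation at hidden node j = 1:
# relu(c * z) = c * relu(z) for c > 0 (positive homogeneity),
# so scaling in by c and out by 1/c cancels exactly.
c = 5.0
W1s, W2s = W1.copy(), W2.copy()
W1s[1, :] *= c        # ingoing weights multiplied by c
W2s[:, 1] /= c        # outgoing weights divided by c
ys = W2s @ relu(W1s @ x)

assert np.allclose(y, ys)  # output is invariant, although the weights changed
```

The weights themselves change under the transformation; only the input–output map is preserved, which is exactly why weight-space measures can be misleading.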
We analyze the existing definitions of flatness and show that if a definition of flatness for ReLU NNs does not satisfy PSI, there exist two networks with the same generalization error but different flatness values; such a definition is therefore an inappropriate measure. We claim that a well-defined flatness should cover the PSI property of ReLU networks, and a function of PSI-variables is naturally PSI. Hence, it is suitable to measure the flatness of ReLU networks via PSI-variables.
In fact, values of basis paths have been proved to be PSI-variables [16]. We leverage them to define a PSI-flatness for ReLU networks. Specifically, a path is a connection between an input node and an output node of a ReLU network. The value of a path is defined as the product of the weights along the path, and basis paths are a maximal linearly independent subset of all paths (the details will be introduced in Section 2). We propose three definitions of PSI-flatness corresponding to three generally accepted definitions of flatness in weight space. We uniformly call them PSI-flatness; they reveal the local variation of the loss surface of the model represented by values of basis paths instead of weights. Compared with the original definitions [10, 9, 18], PSI-flatness captures the PSI property of ReLU networks. Hence, we claim that PSI-flatness is a suitable measure of flatness.
We further confirm the relationship between PSI-flatness and generalization error. Theoretically, we leverage a PAC-Bayes generalization error bound to bridge PSI-flatness and generalization ability. We show that minima with a small ratio between the largest and smallest basis path values correspond to small PSI-flatness and therefore better generalization. Empirically, we visualize the loss surface represented by basis path values, following a visualization strategy similar to the one proposed for weight space by Goodfellow et al. [8]. We compare the loss surfaces around a well-generalized point and a poorly generalized point. The results show that the point that generalizes well indeed lies in a flatter valley. These experimental results confirm our conclusion that minima with smaller PSI-flatness have better generalization ability.
1.1 Related Work
Flatness is a popular topic for understanding generalization both empirically and theoretically. Experimental results [10, 13] illustrate that SGD with large batch size tends to produce "sharp" minima that generalize worse. Based on this observation, many measures have been proposed to describe flatness [10, 9, 18, 21]. Unfortunately, the existing definitions of flatness are questioned by Dinh et al. [3], who construct counterexamples to illustrate that these definitions are ill-posed. Hence they suggest that an appropriate measure should consider reparameterization.
Some works focus on path-value-based reparameterization of ReLU NNs. Neyshabur et al. [19] introduce the positively scaling transformation and first propose the path-parameterization method. Because path values overlap, Meng et al. [16] show that a subset of path values, called basis path values, is sufficient to represent ReLU networks. They claim that the space composed of basis path values is a PSI space.
Existing work on these two topics has been largely independent; none of these papers consider using a reparameterization method to describe flatness appropriately. In this paper, we study how the PSI property of ReLU networks influences the definition of geometrical measures, and leverage values of basis paths to propose a proper definition of flatness. In addition, we analyze the properties of PSI-flatness theoretically and empirically.
2 Background
2.1 Existing Definitions of Flatness
Given a loss function L and a minimum w*, there are several generally accepted measures describing the flatness around the minimum. We focus on three definitions: weight flatness [10], trace-weight flatness [9], and expected-weight flatness [17]. In this paper, unless otherwise stated, the loss refers to the empirical loss.
Definition 2.1 (Weight flatness)
Let B(w*, r) be a Euclidean ball centered on a minimum w* with radius r. Then, for a non-negative valued loss function L, the weight flatness is defined as

(1)  F_r(L; w*) = ( max_{w ∈ B(w*, r)} L(w) − L(w*) ) / L(w*).
Meanwhile, we can define the trace-weight flatness as Tr(H(w*)), where H(w*) is the Hessian of L at w*. And the expected-weight flatness at a point w* is E_U[L(w* + U)] − L(w*), where U
is a random vector.
From the definitions of weight flatness, we see that minima with small weight flatness are flat minima.
2.2 Basis Paths of ReLU NN
The output of a ReLU NN with L layers can be calculated as N_w(x) = W_L σ(W_{L−1} σ(⋯ σ(W_1 x))), where
σ(·) is the entrywise nonlinear ReLU activation function. The ReLU NN can be regarded as a directed graph. We use p
to denote a path starting from an input node and pointing to an output node, crossing one hidden node at each layer. P_{ij} denotes the set of paths starting from the i-th input and pointing to the j-th output. Then the j-th output can be represented as

(2)  N_w^j(x) = Σ_i Σ_{p ∈ P_{ij}} v_p · a_p(x) · x_i,

where v_p is the product of the weights that path p passes by, and a_p(x) indicates whether every hidden node that p passes by is activated, i.e., whether the output of each such node at its layer is positive. We call v_p the path value and a_p the activation status.
However, path values overlap with each other (see Figure 1). Meng et al. [16] disentangle the relation of paths and give a more compact representation of ReLU networks. They regard a path as a vector with the same dimension as the number of parameters of the network, where each element represents whether the corresponding weight is passed by the path or not. If a weight is passed by path p, the corresponding component of the path vector is 1; otherwise, it is 0. All the path vectors of a ReLU network compose a matrix, denoted as A. The basis path is defined as follows.
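The path-vector construction can be illustrated on a toy network of our own choosing (a 2-2-2 MLP, not an example from the paper). The sketch below builds the matrix A of path indicator vectors and checks that its rank, i.e., the size of a maximal linearly independent subset of paths, equals m − H = 8 − 2 = 6, matching the basis-path count discussed in Section 4:

```python
import itertools
import numpy as np

# Toy 2-2-2 ReLU MLP: W1 (2x2, input -> hidden) and W2 (2x2, hidden -> output).
# Flatten the 8 weights; each path (input i -> hidden j -> output k) gets an
# indicator vector marking the two weights it passes through.
def weight_index(layer, row, col):
    return layer * 4 + row * 2 + col   # W1 entries at indices 0-3, W2 at 4-7

paths = []
for i, j, k in itertools.product(range(2), range(2), range(2)):
    v = np.zeros(8)
    v[weight_index(0, j, i)] = 1   # W1[j, i]
    v[weight_index(1, k, j)] = 1   # W2[k, j]
    paths.append(v)

A = np.stack(paths)                 # 8 paths x 8 weights
rank = np.linalg.matrix_rank(A)

# m - H = 8 weights - 2 hidden nodes = 6 basis paths.
assert rank == 6
```

The two "missing" dimensions correspond exactly to the two positively scaling degrees of freedom, one per hidden node.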
Definition 2.2 (Basis Path)
[16] A subset of the total paths is a basis path set if the path vectors in it compose a maximal linearly independent group of the rows of matrix A.
Here, we briefly describe the skeleton method for identifying the basis paths of a multi-layer perceptron (MLP), which is designed in Meng et al. [16]. It first identifies the skeleton weights. For an MLP with equal width, the skeleton weights are the diagonal elements of the weight matrix at each layer. Then the paths that contain at most one non-skeleton weight are basis paths. Figure 2 shows an example, where the red lines are skeleton weights. Based on equation (2) and Theorem 3.6 in Meng et al. [16], the loss function can be defined in the basis path value space as long as the free skeleton weights keep their signs unchanged. The free skeleton weights are the red lines not in the first layer of Figure 2; our later conclusions are built under this circumstance. Basis paths constructed by this method can be classified into two types: one contains only skeleton weights, and the other contains exactly one non-skeleton weight. When the values of the free skeleton weights are fixed, there is a bijection between weights and basis paths. We use one mapping to associate each non-skeleton weight with the basis path that contains it, and another to associate each all-skeleton basis path with the weight it contains at the first layer. We will use these two mappings in Section 5 and Section 6.

3 Analysis of Existing Flatness
One important property of the ReLU activation function is positive scale-invariance: σ(cz) = cσ(z) for any c > 0. Dinh et al. [3] point out that, given a ReLU NN, the outputs of the network are invariant under the transformation T_α: (W_l, W_{l+1}) ↦ (αW_l, α^{−1}W_{l+1}), where W_l and W_{l+1} are the weight matrices of two adjacent layers. This makes the infinite set of weights unidentifiable. Using T_α, Dinh et al. [3] construct minima that generalize well but have infinite weight flatness, which reveals that these definitions are uninformative about generalization error.
We consider a more general transformation: the positively scaling transformation. We claim that weight flatness is ill-posed under this more general setting. First, we give a detailed description of the positively scaling transformation. For a hidden neuron
of a NN, the transformation g_c multiplies its ingoing weights by c and divides its outgoing weights by c, where c > 0. For all hidden neurons of a ReLU NN, a positively scaling transformation is defined as

(3)  g_{c_1, …, c_H} = g_{c_1} ∘ ⋯ ∘ g_{c_H}

for a vector (c_1, …, c_H) with positive entries, where ∘ means function composition.
Obviously, the family of transformations defined in Definition 5 of Dinh et al. [3] is a subclass of positively scaling transformations, so the following results obtained under positively scaling transformations are stronger and naturally hold for that subclass.
We define the invariant variables of positively scaling transformations as follows.
Definition 3.1
If a function f satisfies f(g(w)) = f(w) for every positively scaling transformation g, we say f is an invariant variable of positively scaling transformations, abbreviated as a PSI-variable.
Because of the PSI property of ReLU networks, the loss function is a PSI-variable. Thus, properties related to the loss function (such as generalization ability) need to take the PSI property of the network into account. The next theorem shows that if a measure of a ReLU NN is not PSI, it is unsuitable for analyzing the generalization error.
Theorem 3.1
Given a ReLU NN model with weights w, if a measure μ is not invariant to all positively scaling transformations, there exists another model w' satisfying μ(w') ≠ μ(w) and N_{w'}(x) = N_w(x) for any input x.
Proof. Suppose μ(g(w)) ≠ μ(w) for a specific positively scaling transformation g. By choosing w' = g(w), we can easily verify that N_{w'}(x) = N_w(x) for any input x by the PSI property of ReLU networks. Then we get the conclusion.
Combining the definitions of flatness in Section 2.1 with Theorem 3.1, we obtain the following theorem, which reveals that the weight flatness measures are unsuitable because they are not PSI-variables.
Theorem 3.2
The three weight flatness measures defined in Definition 2.1 are not PSI-variables.
Proof. First, we prove the non-PSI property of the weight flatness. Consider the hidden nodes at some layer. For a fixed weight vector with non-zero elements, there exists a positively scaling transformation that rescales these nodes arbitrarily; we use w_in and w_out to denote the ingoing and outgoing weight vectors of a node, respectively. By choosing the scaling constants appropriately, the loss restricted to the ball around the transformed minimum can be made at least as high as a constant function, which makes the value of the weight flatness relatively large.

For the trace-weight flatness, we choose a particular scaling constant c. Without loss of generality, we assume the weight parameter is partitioned into the blocks (w_in, w_out, w_rest). Then, we have

(4)

where the block dimensions match those of w_in, w_out, and the parameters w_rest unrelated to the neuron. Denoting the transformed Hessian as

(5)

where the blocks have the same dimensions as those of w_in and w_out, we have

(6)

Since this is the Hessian at a minimum point, it is positive semi-definite. Letting c go to zero, the trace of the transformed Hessian diverges, but the trace of the original Hessian is a finite number.

Finally, for the expected flatness, as long as the random perturbation can take negative values, we can construct a transformation as in the proof of the weight flatness, which ensures a high-loss point lies on the line between the minimum and the perturbed point. By the continuity of the loss, we can then achieve a relatively large expected flatness.
Theorem 3.2 illustrates that the ill-posed problem of weight flatness does not only rely on the positive homogeneity of the ReLU function, but is also caused by the more general PSI property of ReLU networks. Hence, just as Dinh et al. [3] discussed, an appropriate measure of flatness needs to be defined in some reparameterized space. More specifically, we point out that for ReLU networks one should use reparameterized PSI-variables to describe flatness.
4 PSIflatness
We have already discussed that the PSI property of ReLU networks causes the ill-posed problem of the existing definitions of flatness. Hence, in this section, we aim to seek a positively scale-invariant measure that reflects the flatness of the loss surface of ReLU NNs.
The following theorem shows that a measure of ReLU model is PSI if it is defined on the invariant variables to positively scaling transformation.
Theorem 4.1
Given a ReLU NN model with weights w and a group of variables v_1(w), …, v_m(w) that are invariant to all positively scaling transformations, if a measure μ is a function of these variables, i.e., μ(w) = h(v_1(w), …, v_m(w)), then μ has the positively scale-invariant property.
Proof. If v_1, …, v_m are PSI-variables, they satisfy v_i(g(w)) = v_i(w) for any positively scaling transformation g. So we have μ(g(w)) = h(v_1(g(w)), …, v_m(g(w))) = h(v_1(w), …, v_m(w)) = μ(w).
That means μ has the positively scale-invariant property.
According to Theorem 4.1, defining a measure on PSI-variables is a way to obtain a suitable definition of flatness. Many variables satisfy PSI; however, the variables we need should also be sufficient to represent the ReLU NN, that is, the loss function should be fully computable from them. For example, the set of constant functions is a counterexample: it is composed of PSI-variables but not sufficient to represent the loss function.
Fortunately, Meng et al. [16] have proven that values of basis paths are PSI-variables and are sufficient to represent the ReLU NN. Thus, values of basis paths as described in Section 2.2 are exactly what we need. It has been proved that the number of basis paths is m − H, where m is the dimension of the weight vector and H is the number of hidden nodes. Let v denote the basis path value vector. We adapt the weight flatness of Definition 2.1 by defining it on values of basis paths.
Definition 4.1 (PSIflatness)
Represent the ReLU NN by values of basis paths, with loss L(v). Let B(v*, r) be a Euclidean ball centered on a minimum v* with radius r. Then, for a non-negative valued loss function L, the PSI-flatness is defined as

(7)  F_r(L; v*) = ( max_{v ∈ B(v*, r)} L(v) − L(v*) ) / L(v*).

Meanwhile, the PSI-trace flatness is Tr(H(v*)), where H(v*) is the Hessian matrix of L with respect to v. And the PSI-expected flatness at v* is E_U[L(v* + U)] − L(v*), where L is the loss function and U is a random vector.
We collectively call these three definitions PSI-flatness. From the definitions, we see that minima with small PSI-flatness are flat minima. We name them with the prefix PSI because the space composed of basis path values is called the PSI-space.
According to Theorem 4.1 and the PSI property of basis path values, all three types of PSI-flatness are positively scale-invariant. Therefore, they capture the PSI geometrical property of the loss surface of ReLU networks, and it is suitable to leverage them to study the generalization ability of ReLU NNs.
We thus give a positive answer to Dinh et al. [3]: we find a definition of flatness for ReLU NNs that has the PSI property.
5 Generalization Error Bound Based on PSIflatness
Although we have given an appropriate measure of flatness, the relationship between PSI-flatness and generalization has so far remained unclear. A connection between weight flatness and generalization error is revealed by Neyshabur et al. [18], relying on PAC-Bayes theory [14, 15]. Motivated by their study, the relationship between PSI-flatness and generalization can be derived.
In fact, given a prior distribution P over the hypothesis space that is independent of the training data, with probability at least 1 − δ we have

(8)  E_u[L(v* + u)] ≤ E_u[L̂(v* + u)] + 4 √( (KL(v* + u ‖ P) + ln(2n/δ)) / n ),

where L and L̂ are respectively the expected loss and the empirical loss, n is the number of training data, and the basis path value vector v* is the minimum of L̂ learned from the training data. We would like to give some explanation of this generalization bound. The generalization error is described by the perturbed generalization error E_u[L(v* + u)] − E_u[L̂(v* + u)] rather than the exact generalization error L(v*) − L̂(v*). A small perturbation variable u makes the perturbed generalization error close to the real generalization error. However, if the norm of u tends to zero, the KL term in equation (8) goes to infinity and makes the bound vacuous. Hence, there is a trade-off when choosing u from the perspective of equation (8). Even so, the result gives a quantitative description of the generalization error.
Next we analyze the relationships among the three PSI-flatness measures, which allows us to treat them at the same time.
On one hand, by Taylor expansion we can easily derive that the PSI flatness of equation (7) is larger than a trace-type quantity involving the Hessian H(v*) of L. On the other hand, if the perturbation U in the PSI-expected flatness is restricted to the ball B(v*, r), the PSI flatness is larger than the PSI-expected flatness. In summary, the PSI flatness of equation (7) is the strongest of the three measures of "flatness". Hence, combining equation (8) with an upper bound on this PSI flatness can describe the generalization error.
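This ordering can be sanity-checked on a toy quadratic loss, where the Taylor expansion is exact. The sketch below (a symmetric PSD matrix H standing in for the Hessian; the dimension, radius, and seed are arbitrary choices) estimates the max-loss flatness over a ball by sampling its boundary, and compares it with the expected flatness and the eigenvalue bound (r²/2)·λ_max:

```python
import numpy as np

# Toy quadratic loss L(v* + delta) - L(v*) = 0.5 * delta^T H delta around a
# minimum; H plays the role of the Hessian in the Taylor expansion.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
H = M @ M.T                              # symmetric PSD "Hessian"
lam_max = np.linalg.eigvalsh(H)[-1]      # largest eigenvalue

r = 0.1
# Sample perturbations uniformly on the sphere of radius r.
deltas = rng.standard_normal((20000, 5))
deltas = r * deltas / np.linalg.norm(deltas, axis=1, keepdims=True)
quad = 0.5 * np.einsum('ni,ij,nj->n', deltas, H, deltas)

max_flatness = np.max(quad)              # max-loss flatness (sampled)
expected_flatness = np.mean(quad)        # expected flatness (sampled)

# For a quadratic, the exact max over the ball is (r**2 / 2) * lam_max;
# sampling approaches it from below, and the mean never exceeds the max.
assert expected_flatness <= max_flatness
assert max_flatness <= 0.5 * r**2 * lam_max + 1e-12
```

The inequality chain mirrors the argument in the text: the max-based flatness dominates the expectation-based one, and both are controlled by the curvature.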
We use v to denote the vector composed of values of basis paths. If v* is a minimum, the denominator of the PSI flatness defined in equation (7) will be close to zero. Hence, we use

(9)  F(v*) = max_{v ∈ B(v*, r)} L̂(v) − L̂(v*)

to approximate the PSI flatness, where L̂ is the empirical loss. Next, we analyze the upper bound of F(v*). We need the following assumptions to derive our result.
Assumption 5.1
The norm of the input to every layer can be upper bounded by a constant C.
Assumption 5.2
The loss function ℓ(·, y) is Lipschitz continuous with respect to the output vector:

(10)  |ℓ(o_1, y) − ℓ(o_2, y)| ≤ β ‖o_1 − o_2‖.
Theorem 5.1
Actually, we can get the exact scale of the upper bound; it can be found in the detailed proof of the theorem in the Appendix.
From the theorem, we can analyze how the structure of the NN and the values of basis paths influence F(v*). As the size (both width and depth) of the network becomes large, the value of the bound becomes large, which means the loss surface becomes sharp. The ratio of the largest to the smallest basis path value, which reflects the variability of basis path values, also influences the bound. More specifically, a minimum of a NN equipped with balanced basis path values is a flatter minimum, which leads to better generalization.
We will give a proof sketch of Theorem 5.1. We need a lemma revealing how to use the values of basis paths to calculate the values of non-basis paths. The following lemma is motivated by Proposition 1 in [26]. We formulate it as follows.
Lemma 5.1
For a NN with L layers whose number of basis paths is m − H, the value of every non-basis path p can be represented as

(12)  v_p = (Π_j v_{b_j}^{α_j}) / (Π_k v_{b_k}^{β_k}),

where the v_{b_j}, v_{b_k} are values of basis paths, and the exponents α_j, β_k in equation (12) are non-negative integers determined by the network structure.
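Lemma 5.1 can be verified numerically on a toy 2-2-2 MLP with skeleton weights on the diagonals, as in Section 2.2. The specific path indices below are our own illustrative choice: the non-basis path 0 → 1 → 0 (two non-skeleton weights) is expressed through three basis path values with exponents +1, +1, −1:

```python
import numpy as np

# Toy 2-2-2 ReLU MLP: skeleton weights are the diagonal entries of W1 and W2.
rng = np.random.default_rng(2)
W1 = rng.uniform(0.5, 1.5, (2, 2))   # W1[j, i]: input i -> hidden j
W2 = rng.uniform(0.5, 1.5, (2, 2))   # W2[k, j]: hidden j -> output k

# Non-basis path 0 -> 1 -> 0 passes two non-skeleton weights W1[1,0], W2[0,1].
p_nonbasis = W1[1, 0] * W2[0, 1]

# Basis paths (each contains at most one non-skeleton weight):
p1 = W1[1, 0] * W2[1, 1]   # non-skeleton W1[1,0], skeleton W2[1,1]
p2 = W1[1, 1] * W2[0, 1]   # skeleton W1[1,1], non-skeleton W2[0,1]
p3 = W1[1, 1] * W2[1, 1]   # all-skeleton basis path through hidden node 1

# v_p = p1 * p2 / p3, i.e., exponents (+1, +1, -1) in the sense of Lemma 5.1.
assert np.isclose(p_nonbasis, p1 * p2 / p3)
```

The shared skeleton factors W1[1,1] and W2[1,1] cancel in the ratio, leaving exactly the product of the two non-skeleton weights on the non-basis path.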
The lemma gives an explicit expression of non-basis path values. It helps to estimate the difference between the path values before and after perturbation in Theorem 5.1. Here we give a sketch of the proof of Theorem 5.1; the detailed proof can be found in the supplementary material.
Sketch of Proof for Theorem 5.1: The main purpose is to bound the loss increase L̂(v* + Δ) − L̂(v*) for a perturbation Δ of the basis path values with ‖Δ‖ ≤ r. Suppose Δ achieves the maximum in equation (9). Then we divide the proof into three steps.
Step 1: By Assumption 5.2, we have
(13) 
For all input data, if the output of each neuron in the network keeps the same sign after perturbation, then the activation status keeps unchanged. Under this condition, by equation (2), we have

(14)

where the sum runs over the total paths. According to the definition of the empirical loss, we can bound the difference of the outputs and hence of the loss. Next, we control the scale of the perturbation to ensure that the activation status is unchanged for all inputs, and give a bound on the path-value differences.
Step 2: The activation status of a path is decided by the outputs of the hidden neurons it crosses. For all input data, if the output of each neuron in the network keeps the same sign after perturbation, then the activation status keeps unchanged. By controlling the scale of the perturbation, we can achieve this.
Let s_l represent the indicator vector with the same dimension as the number of hidden nodes at layer l. For example, s_1 = (1, 1, 0, …, 0) represents that the first two hidden neurons of layer 1 are activated while the others are not. We use s̃_l to denote the indicator vector for the NN after perturbation. Next, we analyze the condition on the perturbation that makes s̃_l = s_l for any l and any input. We have

(15)

where v_{jk} is the value of the basis path connecting the j-th hidden node in layer 1 and the k-th input node, and δ_{jk} is the perturbation of the corresponding path value. We can derive an upper bound on the perturbation which forces the sign-change term in equation (15) to equal zero. Similarly, we can analyze the conditions that keep the activation status of the deeper layers unchanged. After that, we obtain the condition for unchanged activation status after perturbation.
Step 3: We want to bound the difference of the path values of the NN before and after the perturbation. It is easy to verify the bound for basis paths. Based on Lemma 5.1, for any non-basis path p, the difference of the path value is

(16)

where the v_{b_j} are the corresponding basis path values in Lemma 5.1 and the δ_j are perturbations. We can derive an upper bound for (16) based on the basis path values and the scale of the perturbation.
6 Visualizing Loss Landscapes
In this section, we visualize the loss landscape represented by basis path values to study the relationship between generalization and PSI-flatness. To study the relationship between flatness and generalization, previous work [13]
has visualized the loss landscapes around minima in weight space. However, those results cannot be applied directly to PSI-flatness, because the loss function is defined in a totally different space. In order to further confirm the claim that minima with smaller PSI-flatness generalize better, we visualize the loss landscape represented by basis path values around the minima found by stochastic gradient descent (SGD).
First, we present the visualization method for PSI-flatness. The training loss of a ReLU NN can be represented by the basis path value vector v as

(17)  L̂(v) = (1/n) Σ_{i=1}^n ℓ(N_v(x_i), y_i),

where the (x_i, y_i) are the training data. Since v is usually high dimensional, visualization is possible only in one or two dimensions. Here we discuss two-dimensional contour plots of the loss landscape.
A general approach for visualizing a two-dimensional loss landscape is the "Random Direction" method used in [8]. We generalize the method to the PSI space. For a model v*, we choose two random direction vectors d_1 and d_2 with dimension m − H (the number of basis paths), independently sampled from a Gaussian distribution. Then we plot the graph of the function

(18)  f(α, β) = L̂(v* + α d_1 + β d_2)

with coordinates α and β. Obviously, visualizing f is visualizing the loss function after adding perturbations to the basis path values. Intuitively, a flatter landscape is more robust to perturbations, so the plot of f should be flatter.
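The random-direction procedure can be sketched as follows. Here loss_fn is a hypothetical placeholder for the loss over basis path values, and the dimension, grid ranges, and seed are arbitrary choices:

```python
import numpy as np

# Placeholder stand-in for the loss L(v) over basis path values.
def loss_fn(v):
    return float(np.sum(v ** 2))

rng = np.random.default_rng(3)
v_star = rng.standard_normal(10)     # point to visualize around (e.g. an SGD solution)
d1 = rng.standard_normal(10)         # two random direction vectors
d2 = rng.standard_normal(10)

alphas = np.linspace(-1.0, 1.0, 51)
betas = np.linspace(-1.0, 1.0, 51)
# f(alpha, beta) = loss at v_star + alpha*d1 + beta*d2, as in equation (18).
grid = np.array([[loss_fn(v_star + a * d1 + b * d2) for b in betas]
                 for a in alphas])

# grid can now be fed to a contour or 3D surface plot.
assert grid.shape == (51, 51)
assert np.isclose(grid[25, 25], loss_fn(v_star))   # center of the grid is v_star
```

In practice the grid would be rendered with a contour or surface plot; a flatter minimum shows wider, more widely spaced level sets around the center.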
Since we want to verify whether the generalization ability is related to PSI-flatness, the first step is to obtain two minima with different generalization performance. It is widely accepted that SGD with small batch size produces minima that generalize better than SGD with large batch size [13, 10]. Thus we train two ReLU NNs with the same structure via SGD with different batch sizes.
The model we choose is a stacked deep CNN, PlainNet18, with ReLU activations [6]. We use SGD to train the model on the CIFAR-10 dataset [11]. The training strategy is the same as in the experiments reported in Section 5 of [16] except for the batch size. Both experiments drive the training loss close to zero, and the model trained with the smaller batch size achieves the higher test accuracy.
In order to visualize the loss landscapes represented by basis path values, we independently sample the two random direction vectors d_1 and d_2 from a Gaussian distribution. Then, for given (α, β), we compute the value of f(α, β).
A simple method to obtain the value of f is to leverage the feed-forward calculation. Since the explicit expression of the loss based on values of basis paths is computationally costly, we project the perturbed values of basis paths back to weights in order to calculate f. Here we leverage the skeleton method in Meng et al. [16] to design the projection.
In general, under the framework of the skeleton method, there exists a bijection between weights and values of basis paths when the values of some weights are fixed. So if we perturb the basis path values, the resulting projection onto the weights can be obtained as
(19) 
where the right-hand side denotes the corresponding perturbation in weight space.
For a basis path that contains no non-skeleton weight, we project its perturbed value onto the unfixed skeleton weight. Solving for that weight from the path-value equation, we have
(20) 
Similarly, we project the value of a basis path that contains one non-skeleton weight onto that non-skeleton weight. We have

(21)

where the denominator involves the skeleton weight in the path that has already been updated in equation (20).^3

^3 According to the theory in [16], for a basis path that contains a non-skeleton weight, there is only one skeleton weight that is not a free variable with fixed value.
We can directly calculate the new weights after the projection according to equations (20) and (21). The value of f(α, β) can then be easily obtained by forward propagation for every (α, β).
For the two trained models, we show two kinds of figures: the 3D plots of f (shown in Figure 2(c)) and the contour maps of f (shown in Figure 2(d)). All the pictures are plotted with the same height interval.
From Figure 3, we clearly observe that the minimum with better test accuracy lies in a "flatter" region under the PSI-flatness measure. This reveals that PSI-flatness is indeed related to generalization, which accords with our theoretical results.
7 Conclusion
This paper focuses on the ill-posed problem of flatness raised in Dinh et al. [3]. We give the geometrical concept of flatness a new definition and name the measure PSI-flatness. We prove that PSI-flatness is a positively scale-invariant measure for ReLU NNs and thus more suitable for studying generalization. We also analyze the relationship between PSI-flatness and generalization. More specifically, we quantitatively build a connection between PSI-flatness and generalization error under PAC-Bayes theory. Then we give an upper bound on PSI-flatness, which provides an angle for studying generalization from the basis path perspective. Finally, we visualize the loss surface in PSI space and show that PSI-flat minima indeed generalize better, while this fails to hold in some cases in weight space. In the future, we will study the influence of other reparameterizations of NN models on geometrical measures.
References
 [1] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

 [2] H.-T. Cheng, L. Koc, J. Harmsen, T. Shaked, T. Chandra, H. Aradhye, G. Anderson, G. Corrado, W. Chai, M. Ispir, et al. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, pages 7–10. ACM, 2016.
 [3] L. Dinh, R. Pascanu, S. Bengio, and Y. Bengio. Sharp minima can generalize for deep nets. In ICML 2017, arXiv:1703.04933, 2017.
 [4] J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122, 2017.
 [5] M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient descent. arXiv preprint arXiv:1509.01240, 2015.

[6]
K. He, X. Zhang, S. Ren, and J. Sun.
Deep residual learning for image recognition.
In
Proceedings of the IEEE conference on computer vision and pattern recognition
, pages 770–778, 2016.  [7] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In CVPR, volume 1, page 3, 2017.
 [8] I. J. Goodfellow, O. Vinyals, and A. M. Saxe. Qualitatively characterizing neural network optimization problems. In ICLR 2015, 2015.
 [9] S. Jastrzębski, Z. Kenton, D. Arpit, N. Ballas, A. Fischer, Y. Bengio, and A. Storkey. Three factors influencing minima in SGD. arXiv preprint arXiv:1711.04623, 2018.
 [10] N. S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, and P. T. P. Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In ICLR 2017, arXiv:1609.04836, 2017.
 [11] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. CIFAR, 2009.
 [12] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
 [13] H. Li, Z. Xu, G. Taylor, C. Studer, and T. Goldstein. Visualizing the loss landscape of neural nets. arXiv preprint arXiv:1712.09913, 2018.

[14]
D. A. McAllesterl.
Some pacbayesian theorems.
In Proceedings of the eleventh annual conference on Computational learning theory
, pages 230–234, 1998.  [15] D. A. McAllesterl. Pacbayesian model averaging. In Proceedings of the eleventh annual conference on Computational learning theory, pages 164–170, 1999.
 [16] Q. Meng, S. Zheng, H. Zhang, W. Chen, Z.-M. Ma, and T.-Y. Liu. G-SGD: Optimizing ReLU neural networks in its positively scale-invariant space. arXiv preprint arXiv:1802.03713, 2018.
 [17] B. Neyshabur, S. Bhojanapalli, D. McAllester, and N. Srebro. Exploring generalization in deep learning. In Advances in Neural Information Processing Systems, pages 5947–5956, 2017.
 [18] B. Neyshabur, S. Bhojanapalli, D. McAllester, and N. Srebro. Exploring generalization in deep learning. arXiv preprint arXiv:1706.08947, 2017.
 [19] B. Neyshabur, R. Salakhutdinov, and N. Srebro. Path-SGD: Path-normalized optimization in deep neural networks. arXiv preprint arXiv:1506.02617, 2015.
 [20] K. Simonyan and A. Zisserman. Very deep convolutional networks for largescale image recognition. arXiv preprint arXiv:1409.1556, 2014.
 [21] S. L. Smith and Q. V. Le. A bayesian perspective on generalization and stochastic gradient descent. In ICLR’2018 arXiv:1710.06451, 2018.
 [22] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017.
 [23] R. Wang, B. Fu, G. Fu, and M. Wang. Deep & cross network for ad click predictions. In Proceedings of the ADKDD’17, page 12. ACM, 2017.
 [24] L. Wu, C. Ma, and W. E. How SGD selects the global minima in over-parameterized learning: A dynamical stability perspective. In Advances in Neural Information Processing Systems 31 (NeurIPS 2018), 2018.
 [25] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.
 [26] S. Zheng, Q. Meng, H. Zhang, W. Chen, N. Yu, and T.-Y. Liu. Capacity control of ReLU neural networks by basis-path norm. arXiv preprint arXiv:1809.07122, 2018.
Appendix A Proof of Theorem 5.1
First, we prove Lemma 5.1 of the main paper. The proof is carried out under the framework of the skeleton method. Without loss of generality, we set the fixed skeleton weight values to 1.
Lemma 5.1 For a neural network with L layers and m − H basis paths, the value of every non-basis path can be represented as

(22)  v_p = (Π_j v_{b_j}^{α_j}) / (Π_k v_{b_k}^{β_k}),

where the v_{b_j}, v_{b_k} are values of basis paths, and the exponents in equation (22) are non-negative integers.
Proof of Lemma 5.1: By the construction of skeleton weights, every weight parameter not in the first layer can be represented as the ratio of two basis path values, namely a basis path containing one non-skeleton weight and a basis path containing only skeleton weights. Mathematically speaking, a weight parameter not in the first layer equals v_{b_1}/v_{b_2}, where v_{b_1} and v_{b_2} are basis path values. A weight in the first layer is directly represented by one basis path value. Hence, the definition of the path value implies the conclusion.
Now we give the complete proof of Theorem 5.1. According to the skeleton method, a perturbation of the basis path values induces a perturbation in weight space. We have

(23)

Here w_s denotes a skeleton weight in the first layer, while w denotes a non-skeleton weight; w_s is the first-layer skeleton weight located in the same basis path as the non-skeleton weight, and δw is the perturbation of w in weight space.
Theorem 5.1 Under Assumptions 5.1 and 5.2, for an h-hidden-layer ReLU neural network with d-dimensional input and equal width for each hidden layer, we have

(24)

where the maximum is taken over the basis path values,^4 when the perturbation is smaller than a constant decided by the path values and the input data.

^4 Here v_{b_j} denotes the j-th basis path value.
Proof of Theorem 5.1: Without loss of generality, we assume the upper bound in Assumption 5.1 is 1; the proof is otherwise unchanged. Suppose the maximum is attained at a perturbation of the basis path values. The symbols follow the main text. For any input data, we first give the appropriate scale of the perturbation that keeps the activation status of the first hidden layer unchanged. We notice that
(25)  
where v_{jk} is the value of the basis path connecting the j-th hidden neuron in layer two and the k-th input neuron in the first layer, and δ_{jk} is the perturbation of the corresponding path value. We notice that
(26)  
where we assume that the input to each neuron is not exactly zero. Choosing the perturbation small enough, the sign of each first-hidden-layer output is unchanged.
To make sure the activation status of the second hidden layer is unchanged, we require

(27)

where the two sides compare the outputs before and after perturbation of the j-th hidden neurons in the second layer. The path values in equation (27) are the values of the basis paths connecting the k-th neuron in layer 3 and the j-th neuron in layer 2, together with their corresponding perturbations; the remaining terms in equation (27) are the value of the basis path containing only skeleton weights that crosses the path, and its perturbation. The result is derived by the skeleton method. Then we have
(28)  
Now, we analyze the last term in equation (28). If we choose the perturbation so as to keep the activation status unchanged in the second layer, then we have

(29)
We see the