# Fisher-Rao Metric, Geometry, and Complexity of Neural Networks

We study the relationship between geometry and capacity measures for deep neural networks from an invariance viewpoint. We introduce a new notion of capacity --- the Fisher-Rao norm --- that possesses desirable invariance properties and is motivated by Information Geometry. We discover an analytical characterization of the new capacity measure, through which we establish norm-comparison inequalities and further show that the new measure serves as an umbrella for several existing norm-based complexity measures. We discuss upper bounds on the generalization error induced by the proposed measure. Extensive numerical experiments on CIFAR-10 support our theoretical findings. Our theoretical analysis rests on a key structural lemma about partial derivatives of multi-layer rectifier networks.



## 1 Introduction

Beyond their remarkable representation and memorization ability, deep neural networks empirically perform well in out-of-sample prediction. This intriguing out-of-sample generalization property poses two fundamental theoretical questions:

• What are the complexity notions that control the generalization aspects of neural networks?

• Why does stochastic gradient descent, or other variants, find parameters with small complexity?

In this paper we approach the generalization question for deep neural networks from a geometric invariance vantage point. The motivation behind invariance is twofold: (1) The specific parametrization of the neural network is arbitrary and should not impact its generalization power. As pointed out in (Neyshabur et al., 2015a), for example, there are many continuous operations on the parameters of ReLU nets that will result in exactly the same prediction, and thus generalization can only depend on the equivalence class obtained by identifying parameters under these transformations. (2) Although flatness of the loss function has been linked to generalization (Hochreiter and Schmidhuber, 1997), existing definitions of flatness are neither invariant to nodewise re-scalings of ReLU nets nor to general coordinate transformations (Dinh et al., 2017) of the parameter space, which calls into question their utility for describing generalization.

It is thus natural to argue for a purely geometric characterization of generalization that is invariant under the aforementioned transformations and additionally resolves the conflict between flat minima and the requirement of invariance. Information geometry is concerned with the study of geometric invariances arising in the space of probability distributions, so we will leverage it to motivate a particular geometric notion of complexity — the Fisher-Rao norm. From an algorithmic point of view, the steepest descent induced by this geometry is precisely the natural gradient (Amari, 1998). From the generalization viewpoint, the Fisher-Rao norm naturally incorporates distributional aspects of the data and harmoniously unites elements of flatness and norm which have been argued to be crucial for explaining generalization (Neyshabur et al., 2017).

Statistical learning theory equips us with many tools to analyze out-of-sample performance. The Vapnik-Chervonenkis dimension is one possible complexity notion, yet it may be too large to explain generalization in over-parametrized models, since it scales with the size (dimension) of the network. In contrast, under an additional distributional margin assumption, the Perceptron (a one-layer network) enjoys a dimension-free error guarantee, with an $\ell_2$ norm playing the role of "capacity". These observations (going back to the 60's) have led to the theory of large-margin classifiers, applied to kernel methods, boosting, and neural networks (Anthony and Bartlett, 1999). In particular, the analysis of Koltchinskii and Panchenko (2002) combines the empirical margin distribution (quantifying how well the data can be separated) and the Rademacher complexity of a restricted subset of functions. This in turn raises the capacity control question: what is a good notion of the restrictive subset of parameter space for neural networks? Norm-based capacity control provides a possible answer and is being actively studied for deep networks (Krogh and Hertz, 1992; Neyshabur et al., 2015b, a; Bartlett et al., 2017; Neyshabur et al., 2017), yet the invariances are not always reflected in these capacity notions. In general, it is very difficult to answer the question of which capacity measure is superior. Nevertheless, we will show that our proposed Fisher-Rao norm serves as an umbrella for the previously considered norm-based capacity measures, and it appears to shed light on possible answers to the above question.

Much of the difficulty in analyzing neural networks stems from their unwieldy recursive definition interleaved with nonlinear maps. In analyzing the Fisher-Rao norm, we prove an identity for the partial derivatives of the neural network that appears to open the door to some of the geometric analysis. In particular, we prove that any stationary point of the empirical objective with hinge loss that perfectly separates the data must also have a large margin. Such an automatic large-margin property of stationary points may link the algorithmic facet of the problem with the generalization property. The same identity gives us a handle on the Fisher-Rao norm and allows us to prove a number of facts about it. Since we expect that the identity may be useful in deep network analysis, we start by stating this result and its implications in the next section. In Section 3 we introduce the Fisher-Rao norm and establish through norm-comparison inequalities that it serves as an umbrella for existing norm-based measures of capacity. Using these norm-comparison inequalities we bound the generalization error of various geometrically distinct subsets of the Fisher-Rao ball and provide a rigorous proof of generalization for deep linear networks. Extensive numerical experiments are performed in Section 5, demonstrating the superior properties of the Fisher-Rao norm.

## 2 Geometry of Deep Rectified Networks

###### Definition 1.

The function class realized by the feedforward neural network architecture of depth $L$ with coordinate-wise activation functions $\{\sigma_t\}_{t=1}^{L+1}$ is defined as the set of functions $f_\theta$ (with $x \in \mathbb{R}^p$ and $f_\theta(x) \in \mathbb{R}^K$; it is possible to generalize the architecture to include linear pre-processing operations such as zero-padding and average pooling) with

$$f_\theta(x) = \sigma_{L+1}\big(\sigma_L(\cdots\sigma_2(\sigma_1(x^T W^0)W^1)W^2\cdots)W^L\big), \tag{2.1}$$

where the parameter vector $\theta = \mathrm{vec}(W^0, \dots, W^L)$ ($\theta \in \Theta^L \subseteq \mathbb{R}^d$) and

$$\Theta^L = \big\{W^0 \in \mathbb{R}^{p\times k_1},\, W^1 \in \mathbb{R}^{k_1\times k_2},\, \dots,\, W^{L-1} \in \mathbb{R}^{k_{L-1}\times k_L},\, W^L \in \mathbb{R}^{k_L\times K}\big\}.$$

For simplicity of calculations, we have set all bias terms to zero. (In practice, we found that setting the bias to zero does not significantly impact results on image classification tasks such as MNIST and CIFAR-10.) We also assume throughout the paper that

$$\sigma(z) = \sigma'(z)\, z \tag{2.2}$$

for all the activation functions, which includes ReLU $\sigma(z) = \max\{z, 0\}$, "leaky" ReLU $\sigma(z) = \max\{z, \alpha z\}$ with $0 < \alpha < 1$, and linear activations as special cases.
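The identity (2.2) is easy to check numerically. The sketch below (plain NumPy, with an illustrative leaky-ReLU slope $\alpha = 0.1$ that is not from the paper) verifies it for ReLU and leaky ReLU on a grid of inputs.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_grad(z):
    # (sub)gradient of ReLU: 1 for z > 0, else 0
    return (z > 0).astype(float)

def leaky_relu(z, alpha=0.1):
    return np.where(z > 0, z, alpha * z)

def leaky_relu_grad(z, alpha=0.1):
    return np.where(z > 0, 1.0, alpha)

z = np.linspace(-3, 3, 101)
# the structural assumption (2.2): sigma(z) = sigma'(z) * z
assert np.allclose(relu(z), relu_grad(z) * z)
assert np.allclose(leaky_relu(z), leaky_relu_grad(z) * z)
```

Any positively homogeneous activation of degree one satisfies (2.2); a sigmoid or tanh does not.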

To make the exposition of the structural results concise, we define the following intermediate quantities for the network (2.1). The output value of the $t$-th hidden layer is denoted $O^t(x)$, and the corresponding input (pre-activation) value is $N^t(x)$, with $O^t(x) = \sigma_t(N^t(x))$ and $N^{t+1}(x) = O^t(x)\, W^t$. By definition, $O^0(x) = x$, and the final output is $f_\theta(x) = O^{L+1}(x)$. For any vector, the subscript $i$ denotes the $i$-th coordinate.

Given a loss function $\ell(\cdot, \cdot)$, the statistical learning problem can be phrased as optimizing the unobserved population loss

$$L(\theta) := \mathbb{E}_{(X,Y)\sim \mathcal{P}}\, \ell(f_\theta(X), Y), \tag{2.3}$$

based on i.i.d. samples $\{(X_i, Y_i)\}_{i=1}^N$ from the unknown joint distribution $\mathcal{P}$. The unregularized empirical objective function is denoted by

$$\hat{L}(\theta) := \hat{\mathbb{E}}\, \ell(f_\theta(X), Y) = \frac{1}{N}\sum_{i=1}^N \ell(f_\theta(X_i), Y_i). \tag{2.4}$$

We first establish the following structural result for neural networks. It will be clear in later sections that the lemma is motivated by the study of the Fisher-Rao norm, formally defined in Eqn. (3.1) below, and by information geometry. For the moment, however, let us provide a different viewpoint. For linear functions $f_\theta(x) = \langle \theta, x\rangle$, we clearly have that $\langle \nabla_\theta f_\theta(x), \theta\rangle = f_\theta(x)$. Remarkably, a direct analogue of this simple statement holds for neural networks, even if over-parametrized.

###### Lemma 2.1 (Structure in Gradient).

Given a single data input $x$, consider the feedforward neural network in Definition 1 with activations satisfying (2.2). Then for any $0 \le t \le s \le L$, one has the identity

$$\sum_{i\in[k_t],\, j\in[k_{t+1}]} \frac{\partial O^{s+1}}{\partial W^t_{ij}}\, W^t_{ij} = O^{s+1}(x). \tag{2.5}$$

In particular, summing (2.5) with $s = L$ over $t = 0, \dots, L$ yields

$$\sum_{t=0}^{L}\, \sum_{i\in[k_t],\, j\in[k_{t+1}]} \frac{\partial O^{L+1}}{\partial W^t_{ij}}\, W^t_{ij} = (L+1)\, O^{L+1}(x). \tag{2.6}$$

Lemma 2.1 reveals the structural constraints in the gradients of rectified networks. In particular, even though the gradients lie in an over-parametrized high-dimensional space, many equality constraints are induced by the network architecture. Before we unveil the surprising connection between Lemma 2.1 and the proposed Fisher-Rao norm, let us take a look at a few immediate corollaries of this result. The first corollary establishes a large-margin property of stationary points that separate the data.
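One way to sanity-check the identity (2.6) numerically: scaling every weight matrix of a ReLU network by $c > 0$ scales the output by $c^{L+1}$, so the directional derivative of $f_\theta$ along $\theta$ equals $(L+1) f_\theta(x)$ by Euler's theorem for homogeneous functions. The toy network below (random weights and input, two hidden layers so $L + 1 = 3$; an illustration, not the paper's code) verifies this with a finite difference.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, weights):
    h = x
    for W in weights[:-1]:
        h = relu(h @ W)
    return h @ weights[-1]        # linear output layer

x = rng.standard_normal(3)
weights = [rng.standard_normal((3, 4)),
           rng.standard_normal((4, 4)),
           rng.standard_normal((4, 1))]
L = len(weights) - 1              # number of hidden layers

# <grad_theta f, theta> equals the derivative of c -> f(c * theta) at c = 1
eps = 1e-6
g = lambda c: forward(x, [c * W for W in weights])[0]
directional = (g(1 + eps) - g(1 - eps)) / (2 * eps)
assert np.isclose(directional, (L + 1) * g(1.0))
```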

###### Corollary 2.1 (Large Margin Stationary Points).

Consider the binary classification problem with $Y \in \{-1, +1\}$, and a neural network whose output layer has only one unit. Choose the hinge loss $\ell(f_\theta(x), y) = \max\{0, 1 - y f_\theta(x)\}$. If a certain parameter $\theta$ satisfies two properties:

1. $\theta$ is a stationary point for $\hat{L}(\theta)$, in the sense that $\nabla_\theta \hat{L}(\theta) = 0$;

2. $\theta$ separates the data, in the sense that $Y_i f_\theta(X_i) > 0$ for all $i \in [N]$,

then it must be that $\theta$ is a large margin solution: for all $i \in [N]$,

$$Y_i f_\theta(X_i) \ge 1.$$

The same result holds for the population criterion $L(\theta)$, in which case condition 1 is stated as $\nabla_\theta L(\theta) = 0$, condition 2 as $Y f_\theta(X) > 0$ almost surely, and the conclusion is $Y f_\theta(X) \ge 1$ almost surely.

###### Proof.

Observe that $\frac{\partial \ell(f, y)}{\partial f} = -y$ if $y f < 1$, and $\frac{\partial \ell(f, y)}{\partial f} = 0$ if $y f > 1$. Using Eqn. (2.6) when the output layer has only one unit, we find

$$\langle \nabla_\theta \hat{L}(\theta), \theta\rangle = (L+1)\, \hat{\mathbb{E}}\left[\frac{\partial \ell(f_\theta(X), Y)}{\partial f_\theta(X)}\, f_\theta(X)\right] = (L+1)\, \hat{\mathbb{E}}\left[-Y f_\theta(X)\, \mathbf{1}_{\{Y f_\theta(X) < 1\}}\right].$$

For a stationary point $\theta$, we have $\nabla_\theta \hat{L}(\theta) = 0$, which implies the left-hand side of the above equation is $0$. Now recall that the second condition, that $\theta$ separates the data, implies $Y_i f_\theta(X_i) > 0$ for every point in the data set. In this case, the right-hand side equals zero if and only if $Y_i f_\theta(X_i) \ge 1$ for all $i \in [N]$. ∎

Granted, the above corollary can be proved from first principles without the use of Lemma 2.1, but the proof reveals a quantitative statement about stationarity along the particular direction $\theta$.

In the second corollary, we consider linear networks.

###### Corollary 2.2 (Stationary Points for Deep Linear Networks).

Consider linear neural networks with $\sigma(z) = z$ and the square loss function. Then all stationary points $\theta$ that satisfy

$$\nabla_\theta \hat{L}(\theta) = \nabla_\theta \hat{\mathbb{E}}\left[\frac{1}{2}\big(f_\theta(X) - Y\big)^2\right] = 0$$

must also satisfy

$$\big\langle w(\theta),\, \mathbf{X}^T\mathbf{X}\, w(\theta) - \mathbf{X}^T\mathbf{Y}\big\rangle = 0,$$

where $w(\theta) := \prod_{t=0}^{L} W^t$, and $\mathbf{X} \in \mathbb{R}^{N\times p}$, $\mathbf{Y} \in \mathbb{R}^{N}$ are the data matrices.

###### Proof.

The proof follows from applying Lemma 2.1:

$$0 = \theta^T \nabla_\theta \hat{L}(\theta) = (L+1)\, \hat{\mathbb{E}}\left[\Big(X^T\prod_{t=0}^{L} W^t - Y\Big)\, X^T\prod_{t=0}^{L} W^t\right],$$

which means $\langle w(\theta), \mathbf{X}^T\mathbf{X}\, w(\theta) - \mathbf{X}^T\mathbf{Y}\rangle = 0$. ∎

###### Remark 2.1.

This simple lemma is not quite asserting that all stationary points are global optima: global optima satisfy $\mathbf{X}^T\mathbf{X}\, w(\theta) - \mathbf{X}^T\mathbf{Y} = 0$, while we only proved that stationary points satisfy $\langle w(\theta), \mathbf{X}^T\mathbf{X}\, w(\theta) - \mathbf{X}^T\mathbf{Y}\rangle = 0$.

## 3 Fisher-Rao Norm and Geometry

In this section, we propose a new notion of complexity of neural networks that can be motivated by geometric invariance considerations, specifically the Fisher-Rao metric of information geometry. We postpone this motivation to Section 3.3 and instead start with the definition and some properties. Detailed comparisons with the known norm-based capacity measures, as well as generalization results, are deferred to Section 4.

### 3.1 An analytical formula

###### Definition 2.

The Fisher-Rao norm of a parameter $\theta$ is defined by the quadratic form

$$\|\theta\|_{\mathrm{fr}}^2 := \langle \theta,\, I(\theta)\, \theta\rangle, \quad \text{where } I(\theta) = \mathbb{E}\big[\nabla_\theta \ell(f_\theta(X), Y) \otimes \nabla_\theta \ell(f_\theta(X), Y)\big]. \tag{3.1}$$

The underlying distribution for the expectation in the above definition has been left ambiguous because it will be useful to specialize to different distributions depending on the context. Even though we call the above quantity the “Fisher-Rao norm,” it should be noted that it does not satisfy the triangle inequality. The following Theorem unveils a surprising identity for the Fisher-Rao norm.

###### Theorem 3.1 (Fisher-Rao norm).

Assume the loss function $\ell(\cdot, \cdot)$ is smooth in the first argument. The following identity holds for a feedforward neural network (Definition 1) with $L$ hidden layers and activations satisfying (2.2):

$$\|\theta\|_{\mathrm{fr}}^2 = (L+1)^2\, \mathbb{E}\left[\Big\langle \frac{\partial \ell(f_\theta(X), Y)}{\partial f_\theta(X)},\, f_\theta(X)\Big\rangle^2\right]. \tag{3.2}$$

The proof of the Theorem relies mainly on the geometric Lemma 2.1 that describes the gradient structure of multi-layer rectified networks.

###### Remark 3.1.

In the case when the output layer has only one node, Theorem 3.1 reduces to the simple formula

$$\|\theta\|_{\mathrm{fr}}^2 = (L+1)^2\, \mathbb{E}\left[\Big(\frac{\partial \ell(f_\theta(X), Y)}{\partial f_\theta(X)}\Big)^2 f_\theta(X)^2\right]. \tag{3.3}$$
###### Proof of Theorem 3.1.

Using the definition of the Fisher-Rao norm and the chain rule,

$$\|\theta\|_{\mathrm{fr}}^2 = \mathbb{E}\big[\langle \theta,\, \nabla_\theta \ell(f_\theta(X), Y)\rangle^2\big] = \mathbb{E}\left[\Big\langle \frac{\partial \ell(f_\theta(X), Y)}{\partial f_\theta(X)},\, \nabla_\theta f_\theta(X)^T \theta\Big\rangle^2\right].$$

By Lemma 2.1,

$$\nabla_\theta f_\theta(X)^T \theta = \nabla_\theta O^{L+1}(X)^T \theta = \sum_{t=0}^{L}\, \sum_{i\in[k_t],\, j\in[k_{t+1}]} \frac{\partial O^{L+1}}{\partial W^t_{ij}}\, W^t_{ij} = (L+1)\, O^{L+1}(X) = (L+1)\, f_\theta(X).$$

Combining the above equalities, we obtain

$$\|\theta\|_{\mathrm{fr}}^2 = (L+1)^2\, \mathbb{E}\left[\Big\langle \frac{\partial \ell(f_\theta(X), Y)}{\partial f_\theta(X)},\, f_\theta(X)\Big\rangle^2\right]. \qquad ∎$$
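The identity of Theorem 3.1 (in the one-output form of Remark 3.1) can be checked numerically for, say, the squared loss $\ell = \frac{1}{2}(f - y)^2$, where $\partial\ell/\partial f = f - y$. A sketch under the empirical measure, with a random one-hidden-layer ReLU net and finite differences standing in for $\langle\theta, \nabla_\theta\ell\rangle$ (toy data and sizes, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, weights, scale=1.0):
    # evaluating at scale*theta lets us take directional derivatives along theta
    h = x
    for W in weights[:-1]:
        h = relu(h @ (scale * W))
    return (h @ (scale * weights[-1]))[0]

def sq_loss(f, y):
    return 0.5 * (f - y) ** 2

weights = [rng.standard_normal((3, 5)), rng.standard_normal((5, 1))]
L = len(weights) - 1
X = rng.standard_normal((50, 3))
Y = rng.standard_normal(50)

eps = 1e-6
lhs, rhs = 0.0, 0.0
for x, y in zip(X, Y):
    # <theta, grad_theta loss> = d/dc loss(f(c*theta)) at c = 1
    d = (sq_loss(forward(x, weights, 1 + eps), y)
         - sq_loss(forward(x, weights, 1 - eps), y)) / (2 * eps)
    lhs += d ** 2                                   # definition (3.1)
    f = forward(x, weights)
    rhs += ((L + 1) * (f - y) * f) ** 2             # formula (3.3)
lhs /= len(X)
rhs /= len(X)
assert np.isclose(lhs, rhs, rtol=1e-3)
```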

Before illustrating how the explicit formula in Theorem 3.1 can be viewed as a unified "umbrella" for many of the known norm-based capacity measures, let us point out one simple invariance property of the Fisher-Rao norm, which follows as a direct consequence of Thm. 3.1. This property is not satisfied by the $\ell_2$ norm, spectral norm, path norm, or group norm.

###### Corollary 3.1 (Invariance).

If two parameters $\theta_1, \theta_2$ are equivalent, in the sense that $f_{\theta_1} = f_{\theta_2}$ as functions, then their Fisher-Rao norms are equal, i.e.,

$$\|\theta_1\|_{\mathrm{fr}} = \|\theta_2\|_{\mathrm{fr}}.$$
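To see the invariance concretely: rescaling a hidden unit's incoming weights by $c > 0$ and its outgoing weights by $1/c$ leaves $f_\theta$ unchanged (and hence, by Theorem 3.1, the Fisher-Rao norm), while the $\ell_2$ norm of $\theta$ changes. A minimal NumPy sketch with a toy one-hidden-layer net:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, W0, W1):
    return relu(x @ W0) @ W1

W0 = rng.standard_normal((3, 4))
W1 = rng.standard_normal((4, 1))
X = rng.standard_normal((10, 3))

# nodewise rescaling of hidden unit 0: incoming weights by c, outgoing by 1/c
c = 5.0
W0s, W1s = W0.copy(), W1.copy()
W0s[:, 0] *= c
W1s[0, :] /= c

# the function (and hence the Fisher-Rao norm) is unchanged ...
assert np.allclose(forward(X, W0, W1), forward(X, W0s, W1s))
# ... but the squared l2 norm of the parameters is not
l2 = lambda *Ws: sum((W ** 2).sum() for W in Ws)
assert not np.isclose(l2(W0, W1), l2(W0s, W1s))
```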

### 3.2 Norms and geometry

In this section we employ Theorem 3.1 to reveal the relationship among different norms and their corresponding geometries. Norm-based capacity control is an active field of research for understanding why deep learning generalizes well, including the $\ell_2$ norm (weight decay) in (Krogh and Hertz, 1992; Krizhevsky et al., 2012), the path norm in (Neyshabur et al., 2015a), the group norm in (Neyshabur et al., 2015b), and the spectral norm in (Bartlett et al., 2017). All these norms are closely related to the Fisher-Rao norm, despite the fact that they capture distinct inductive biases and different geometries.

For simplicity, we will showcase the derivation with the absolute loss function and an output layer with only one node ($K = 1$). The argument can be readily adapted to the general setting. We will show that the Fisher-Rao norm serves as a lower bound for all the norms considered in the literature, with a pre-factor whose meaning will be clear in Section 4.1. In addition, the Fisher-Rao norm enjoys an interesting umbrella property: by considering a more constrained geometry (motivated by algebraic norm-comparison inequalities), the Fisher-Rao norm motivates new norm-based capacity control methods.

The main theorem we will prove is informally stated as follows.

###### Theorem 3.2 (Norm comparison, informal).

Denoting by $|||\cdot|||$ any one of: (1) the spectral norm, (2) the matrix induced norm, (3) the group norm, or (4) the path norm, we have

$$\frac{1}{L+1}\|\theta\|_{\mathrm{fr}} \le |||\theta|||$$

for any $\theta$. The specific norms (1)-(4) are formally introduced in Definitions 3-6.

The detailed proof of the above theorem will be the main focus of Section 4.1. Here we give a sketch of how the results are proved.

###### Lemma 3.1 (Matrix form).

Under the assumption (2.2) on the activations,

$$f_\theta(x) = x^T W^0 D^1(x) W^1 D^2(x) \cdots D^L(x) W^L D^{L+1}(x), \tag{3.4}$$

where $D^t(x) := \mathrm{diag}\big[\sigma_t'(N^t(x))\big]$ for $t \in [L+1]$. In addition, for ReLU activations, $D^t(x)$ is a diagonal matrix with diagonal elements being either $0$ or $1$.

###### Proof of Lemma 3.1.

Since $\sigma(z) = \sigma'(z)z$, we have $O^t(x) = \sigma_t(N^t(x)) = N^t(x)\, D^t(x)$. The proof is completed via induction over the layers. ∎
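Lemma 3.1 amounts to replacing each ReLU by a data-dependent $0/1$ diagonal mask. A minimal numerical check for one hidden layer (linear output activation, so the final mask $D^{L+1}$ is the identity and is omitted; toy sizes, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(6)

def relu(z):
    return np.maximum(z, 0.0)

W0 = rng.standard_normal((3, 5))
W1 = rng.standard_normal((5, 1))
x = rng.standard_normal(3)

# standard forward pass
out = relu(x @ W0) @ W1

# matrix form of Lemma 3.1: D1 holds the 0/1 activation pattern at x
D1 = np.diag((x @ W0 > 0).astype(float))
out_matrix = x @ W0 @ D1 @ W1
assert np.allclose(out, out_matrix)
```

Note that $D^1$ depends on $x$: the factorization is linear only once the activation pattern is fixed.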

For the absolute loss, one has $\big|\frac{\partial \ell(f_\theta(X), Y)}{\partial f_\theta(X)}\big| = 1$, and therefore Theorem 3.1 simplifies to

$$\|\theta\|_{\mathrm{fr}}^2 = (L+1)^2\, \mathbb{E}_{X\sim \mathcal{P}}\big[v(\theta, X)^T X X^T v(\theta, X)\big], \tag{3.5}$$

where $v(\theta, x) := W^0 D^1(x) W^1 D^2(x) \cdots W^L D^{L+1}(x) \in \mathbb{R}^p$, so that $f_\theta(x) = x^T v(\theta, x)$. The norm-comparison results are thus established through a careful decomposition of the data-dependent vector $v(\theta, X)$, in distinct ways according to the norm/geometry being compared.

### 3.3 Motivation and invariance

In this section, we will provide the original intuition and motivation for our proposed Fisher-Rao norm from the viewpoint of geometric invariance.

Information geometry and the Fisher-Rao metric

Information geometry provides a window into geometric invariances when we adopt a generative framework where the data-generating process belongs to the parametric family $\mathcal{M} = \{p_\theta : \theta \in \Theta\}$ indexed by the parameters of the neural network architecture. The Fisher-Rao metric on $\mathcal{M}$ is defined in terms of a local inner product for each value of $\theta$ as follows. For each $\alpha, \beta \in \mathbb{R}^d$ define the corresponding tangent vectors $\bar\alpha := \frac{d}{dt}\, p_{\theta + t\alpha}\big|_{t=0}$ and $\bar\beta := \frac{d}{dt}\, p_{\theta + t\beta}\big|_{t=0}$. Then for all $\alpha, \beta \in \mathbb{R}^d$ we define the local inner product

$$\langle \bar\alpha, \bar\beta\rangle_{p_\theta} := \int \frac{\bar\alpha}{p_\theta}\, \frac{\bar\beta}{p_\theta}\, p_\theta, \tag{3.6}$$

where the integral is over the sample space. The above inner product extends to a Riemannian metric on the space of positive densities called the Fisher-Rao metric. (Bauer et al. (2016) showed that it is essentially the unique metric that is invariant under the diffeomorphism group.) The relationship between the Fisher-Rao metric and the Fisher information matrix in the statistics literature follows from the identity

$$\langle \bar\alpha, \bar\beta\rangle_{p_\theta} = \langle \alpha,\, I(\theta)\, \beta\rangle. \tag{3.7}$$

Notice that the Fisher information matrix induces a semi-inner product, unlike the Fisher-Rao metric, which is non-degenerate (the null space of $I(\theta)$ is mapped to the origin under $\alpha \mapsto \bar\alpha$). If we make the additional modeling assumption that $p_\theta(x, y) = p(x)\, p_\theta(y \mid x)$, then the Fisher information becomes

$$I(\theta) = \mathbb{E}_{(X,Y)\sim P_\theta}\big[\nabla_\theta \log p_\theta(Y|X) \otimes \nabla_\theta \log p_\theta(Y|X)\big]. \tag{3.8}$$

If we now identify our loss function as $\ell(f_\theta(x), y) = -\log p_\theta(y \mid x)$, then the Fisher-Rao metric coincides with the Fisher-Rao norm when $\alpha = \beta = \theta$. In fact, our Fisher-Rao norm encompasses the Fisher-Rao metric and generalizes it to the case when the model is misspecified.

Flatness

Having identified the geometric origin of the Fisher-Rao norm, let us study the implications for generalization of flat minima. Dinh et al. (2017) argued by way of counter-example that the existing measures of flatness are inadequate for explaining the generalization capability of multi-layer neural networks. Specifically, by utilizing the invariance property of multi-layer rectified networks under non-negative nodewise rescalings, they proved that the Hessian eigenvalues of the loss function can be made arbitrarily large, thereby weakening the connection between flat minima and generalization. They also identified a more general problem which afflicts Hessian-based measures of generalization for any network architecture and activation function: the Hessian is sensitive to network parametrization, whereas generalization should be invariant under general coordinate transformations. Our proposal can be motivated by the following fact (recall that the Fisher information can be viewed as a variance as well as a curvature), which relates flatness to geometry (under appropriate regularity conditions):

$$\mathbb{E}_{(X,Y)\sim P_\theta}\big\langle \theta,\, \mathrm{Hess}_\theta[\ell(f_\theta(X), Y)]\, \theta\big\rangle = \mathbb{E}_{(X,Y)\sim P_\theta}\big\langle \theta,\, \nabla_\theta \ell(f_\theta(X), Y)\big\rangle^2 = \|\theta\|_{\mathrm{fr}}^2. \tag{3.9}$$

In other words, the Fisher-Rao norm evades the nodewise-rescaling issue because it is exactly invariant under linear re-parametrizations. The Fisher-Rao norm moreover possesses an "infinitesimal invariance" property under non-linear coordinate transformations, which can be seen by passing to the infinitesimal form, where non-linear coordinate invariance is realized exactly by the following infinitesimal line element:

$$ds^2 = \sum_{i,j\in[d]} [I(\theta)]_{ij}\, d\theta_i\, d\theta_j. \tag{3.10}$$

Comparing (3.9) with the above line element reveals the geometric interpretation of the Fisher-Rao norm as an approximate geodesic distance from the origin. It is important to realize that our definition of flatness (3.9) differs from that of (Dinh et al., 2017), who employed the Hessian of the loss. Unlike the Fisher-Rao norm, the norm induced by the Hessian of the loss does not enjoy the infinitesimal invariance property (it holds only at critical points).

There exists a close relationship between the Fisher-Rao norm and the natural gradient. In particular, natural gradient descent is simply the steepest-descent direction induced by the Fisher-Rao geometry of $\mathcal{M}$. Indeed, the natural gradient can be expressed as a semi-norm-penalized iterative optimization scheme as follows:

$$\theta_{t+1} = \operatorname*{argmin}_{\theta\in\mathbb{R}^d}\left[\langle \theta - \theta_t,\, \nabla\hat{L}(\theta_t)\rangle + \frac{1}{2\eta_t}\|\theta - \theta_t\|^2_{I(\theta_t)}\right] = \theta_t - \eta_t\, I(\theta_t)^{+}\, \nabla\hat{L}(\theta_t), \tag{3.11}$$

where $I(\theta_t)^{+}$ denotes the pseudo-inverse. We remark that the positive semi-definite matrix $I(\theta_t)$ changes with $t$. We emphasize an "invariance" property of the natural gradient under re-parametrization and an "approximate invariance" property under over-parametrization, which is not satisfied by classic gradient descent. The formal statement and its proof are deferred to Lemma 6.1 in Section 6.2. The invariance property is desirable: in multi-layer ReLU networks there are many equivalent re-parametrizations of the problem, such as nodewise rescalings, which may slow down the optimization process. The advantage of the natural gradient is also illustrated empirically in Section 5.5.
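The two expressions in (3.11) agree: minimizing the penalized linearization in closed form yields the pseudo-inverse update. A small numerical sketch, using a synthetic positive-definite stand-in for $I(\theta_t)$ and an illustrative step size (none of these values are from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
g = rng.standard_normal(d)                    # stands in for grad L_hat(theta_t)
A = rng.standard_normal((d, d))
I_theta = A @ A.T + 0.1 * np.eye(d)           # a PD stand-in for I(theta_t)
theta_t = rng.standard_normal(d)
eta = 0.5

def penalized(theta):
    # the bracketed objective in (3.11)
    dlt = theta - theta_t
    return g @ dlt + 0.5 / eta * dlt @ I_theta @ dlt

# closed-form minimizer: theta_t - eta * I(theta_t)^+ g
step = theta_t - eta * np.linalg.pinv(I_theta) @ g

# the closed form should beat random candidates on the penalized objective
for _ in range(100):
    cand = step + 0.1 * rng.standard_normal(d)
    assert penalized(step) <= penalized(cand) + 1e-9
```

Since the quadratic is strictly convex here, the closed-form point is the unique minimizer; the random-candidate loop is just a spot-check.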

## 4 Capacity Control and Generalization

In this section, we discuss in full detail the questions of geometry, capacity measures, and generalization. First, let us define the empirical Rademacher complexity for a parameter set $\Theta$, conditioned on the data $\{X_i\}_{i=1}^N$, as

$$\mathcal{R}_N(\Theta) = \mathbb{E}_{\epsilon}\, \sup_{\theta\in\Theta}\, \frac{1}{N}\sum_{i=1}^N \epsilon_i\, f_\theta(X_i), \tag{4.1}$$

where the $\epsilon_i$ are i.i.d. Rademacher random variables.

### 4.1 Norm Comparison

Let us collect some definitions before stating each norm-comparison result. For a vector $v$, the vector $\ell_p$ norm is denoted $\|v\|_p$, $p \ge 1$. For a matrix $W$, $\|W\|_\sigma$ denotes the spectral norm; $\|W\|_{p\to q}$ denotes the matrix induced norm, for $p, q \ge 1$; and $\|W\|_{p,q}$ denotes the matrix group norm, for $p, q \ge 1$.

#### 4.1.1 Spectral norm.

###### Definition 3 (Spectral norm).

Define the following "spectral norm":

$$\|\theta\|_\sigma := \left[\mathbb{E}\Big(\|X\|^2 \prod_{t=1}^{L+1}\|D^t(X)\|_\sigma^2\Big)\right]^{1/2} \prod_{t=0}^{L}\|W^t\|_\sigma. \tag{4.2}$$

We have the following norm comparison Lemma.

###### Lemma 4.1 (Spectral norm).
$$\frac{1}{L+1}\|\theta\|_{\mathrm{fr}} \le \|\theta\|_\sigma.$$
###### Remark 4.1.

Spectral norm as a capacity control has been considered in (Bartlett et al., 2017). Lemma 4.1 shows that the spectral norm serves as a more stringent constraint than the Fisher-Rao norm. Let us provide an explanation of the pre-factor here. Define the unit ball induced by the Fisher-Rao norm geometry

$$B_{\mathrm{fr}}(1) := \big\{\theta : \mathbb{E}[v(\theta, X)^T X X^T v(\theta, X)] \le 1\big\} = \Big\{\theta : \frac{1}{L+1}\|\theta\|_{\mathrm{fr}} \le 1\Big\}.$$

From Lemma 4.1, if the expectation is taken over the empirical measure, then, because $\|D^t(X)\|_\sigma \le 1$ for ReLU activations, we obtain

$$\frac{1}{L+1}\|\theta\|_{\mathrm{fr}} \le \left[\hat{\mathbb{E}}\Big(\|X\|^2\prod_{t=1}^{L+1}\|D^t(X)\|^2_\sigma\Big)\right]^{1/2}\prod_{t=0}^{L}\|W^t\|_\sigma \le \big[\hat{\mathbb{E}}\|X\|^2\big]^{1/2}\prod_{t=0}^{L}\|W^t\|_\sigma,$$

which implies

$$\Big\{\theta : \prod_{t=0}^{L}\|W^t\|_\sigma \le \frac{1}{[\hat{\mathbb{E}}\|X\|^2]^{1/2}}\Big\} \subset B_{\mathrm{fr}}(1).$$

From Theorem 1.1 in (Bartlett et al., 2017), we know that the subset of parameters characterized by the spectral norm enjoys the following upper bound on the Rademacher complexity under mild conditions: for any $r > 0$,

$$\mathcal{R}_N\Big(\Big\{\theta : \prod_{t=0}^{L}\|W^t\|_\sigma \le r\Big\}\Big) \precsim r\cdot\big[\hat{\mathbb{E}}\|X\|^2\big]^{1/2}\cdot\frac{\mathrm{Polylog}}{\sqrt{N}}. \tag{4.3}$$

Plugging in $r = [\hat{\mathbb{E}}\|X\|^2]^{-1/2}$, we have

$$\mathcal{R}_N\Big(\Big\{\theta : \prod_{t=0}^{L}\|W^t\|_\sigma \le \frac{1}{[\hat{\mathbb{E}}\|X\|^2]^{1/2}}\Big\}\Big) \precsim \frac{1}{[\hat{\mathbb{E}}\|X\|^2]^{1/2}}\cdot\big[\hat{\mathbb{E}}\|X\|^2\big]^{1/2}\cdot\frac{\mathrm{Polylog}}{\sqrt{N}} \to 0. \tag{4.4}$$

Interestingly, the additional factor $[\hat{\mathbb{E}}\|X\|^2]^{1/2}$ in Theorem 1.1 of (Bartlett et al., 2017) exactly cancels with our pre-factor in the norm comparison. The above calculations show that a subset of $B_{\mathrm{fr}}(1)$, induced by the spectral-norm geometry, has good generalization error.
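The chain of inequalities above is easy to spot-check: for ReLU activations $\|D^t(X)\|_\sigma \le 1$, and under the absolute loss with $K = 1$ the identity (3.3) gives $\|\theta\|_{\mathrm{fr}} = (L+1)\,[\hat{\mathbb{E}} f_\theta(X)^2]^{1/2}$ under the empirical measure. The sketch below (random toy ReLU net, not the paper's experiment) verifies $\frac{1}{L+1}\|\theta\|_{\mathrm{fr}} \le [\hat{\mathbb{E}}\|X\|^2]^{1/2}\prod_t\|W^t\|_\sigma$:

```python
import numpy as np

rng = np.random.default_rng(4)

def relu(z):
    return np.maximum(z, 0.0)

weights = [rng.standard_normal((3, 6)), rng.standard_normal((6, 6)),
           rng.standard_normal((6, 1))]
L = len(weights) - 1
X = rng.standard_normal((100, 3))

# forward pass over the whole batch
H = X
for W in weights[:-1]:
    H = relu(H @ W)
F = (H @ weights[-1]).ravel()

# Fisher-Rao norm under absolute loss: |d loss / d f| = 1, so
# ||theta||_fr = (L + 1) * sqrt(E_hat[f^2])
fr = (L + 1) * np.sqrt(np.mean(F ** 2))

# spectral upper bound: sqrt(E_hat ||X||^2) * prod_t ||W^t||_sigma
spec = np.sqrt(np.mean(np.sum(X ** 2, axis=1))) * np.prod(
    [np.linalg.norm(W, 2) for W in weights])
assert fr / (L + 1) <= spec + 1e-9
```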

#### 4.1.2 Group norm.

###### Definition 4 (Group norm).

Define the following "group norm", for $p, q \ge 1$:

$$\|\theta\|_{p,q} := \left[\mathbb{E}\Big(\|X\|_{p^*}^2\prod_{t=1}^{L+1}\|D^t(X)\|^2_{q\to p^*}\Big)\right]^{1/2}\prod_{t=0}^{L}\|W^t\|_{p,q}, \tag{4.5}$$

where $1/p + 1/p^* = 1$. Here $\|\cdot\|_{q\to p^*}$ denotes the matrix induced norm.

###### Lemma 4.2 (Group norm).

It holds that

$$\frac{1}{L+1}\|\theta\|_{\mathrm{fr}} \le \|\theta\|_{p,q}. \tag{4.6}$$
###### Remark 4.2.

Group norm as a capacity measure has been considered in (Neyshabur et al., 2015b). Lemma 4.2 shows that the group norm serves as a more stringent constraint than the Fisher-Rao norm. Again, let us provide an explanation of the pre-factor here.

Note that

$$\prod_{t=1}^{L+1}\|D^t(X)\|_{q\to p^*} \le \prod_{t=1}^{L+1} k_t^{[\frac{1}{p^*}-\frac{1}{q}]_+},$$

because

$$\|D^t(X)\|_{q\to p^*} = \max_{v\ne 0}\frac{\|v^T D^t(X)\|_{p^*}}{\|v\|_q} \le \max_{v\ne 0}\frac{\|v\|_{p^*}}{\|v\|_q} \le k_t^{[\frac{1}{p^*}-\frac{1}{q}]_+}.$$

From Lemma 4.2, if the expectation is taken over the empirical measure, we know that in the case when $k_t = k$ for all $1 \le t \le L$,

$$\frac{1}{L+1}\|\theta\|_{\mathrm{fr}} \le \left[\hat{\mathbb{E}}\Big(\|X\|_{p^*}^2\prod_{t=1}^{L+1}\|D^t(X)\|^2_{q\to p^*}\Big)\right]^{1/2}\prod_{t=0}^{L}\|W^t\|_{p,q} \le \max_i\|X_i\|_{p^*}\,\big(k^{[\frac{1}{p^*}-\frac{1}{q}]_+}\big)^{L}\prod_{t=0}^{L}\|W^t\|_{p,q},$$

which implies

$$\Big\{\theta : \prod_{t=0}^{L}\|W^t\|_{p,q} \le \frac{1}{\big(k^{[\frac{1}{p^*}-\frac{1}{q}]_+}\big)^{L}\max_i\|X_i\|_{p^*}}\Big\} \subset B_{\mathrm{fr}}(1).$$

By Theorem 1 in (Neyshabur et al., 2015b), we know that a subset of parameters (different from the subset induced by the spectral geometry) characterized by the group norm satisfies the following upper bound on the Rademacher complexity: for any $r > 0$,

$$\mathcal{R}_N\Big(\Big\{\theta : \prod_{t=0}^{L}\|W^t\|_{p,q} \le r\Big\}\Big) \precsim r\cdot\big(2k^{[\frac{1}{p^*}-\frac{1}{q}]_+}\big)^{L}\max_i\|X_i\|_{p^*}\cdot\frac{\mathrm{Polylog}}{\sqrt{N}}. \tag{4.7}$$

Plugging in $r = \big[(k^{[\frac{1}{p^*}-\frac{1}{q}]_+})^{L}\max_i\|X_i\|_{p^*}\big]^{-1}$, we have

$$\mathcal{R}_N\Big(\Big\{\theta : \prod_{t=0}^{L}\|W^t\|_{p,q} \le \frac{1}{(k^{[\frac{1}{p^*}-\frac{1}{q}]_+})^{L}\max_i\|X_i\|_{p^*}}\Big\}\Big) \precsim \frac{2^{L}\,(k^{[\frac{1}{p^*}-\frac{1}{q}]_+})^{L}\max_i\|X_i\|_{p^*}}{(k^{[\frac{1}{p^*}-\frac{1}{q}]_+})^{L}\max_i\|X_i\|_{p^*}}\cdot\frac{\mathrm{Polylog}}{\sqrt{N}} \to 0. \tag{4.8}$$

Once again, we point out that the intriguing combinatorial factor $(k^{[\frac{1}{p^*}-\frac{1}{q}]_+})^{L}\max_i\|X_i\|_{p^*}$ in Theorem 1 of Neyshabur et al. (2015b) exactly cancels with our pre-factor in the norm comparison. The above calculations show that another subset of $B_{\mathrm{fr}}(1)$, induced by the group-norm geometry, has good generalization error (without additional factors).
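The vector-norm comparison used above, $\|v\|_{p^*} \le k^{[\frac{1}{p^*}-\frac{1}{q}]_+}\|v\|_q$ for $v \in \mathbb{R}^k$, is easy to spot-check numerically for a few $(p^*, q)$ pairs:

```python
import numpy as np

rng = np.random.default_rng(7)
k = 20
v = rng.standard_normal(k)

# ||v||_{p*} <= k^{max(1/p* - 1/q, 0)} ||v||_q
for p_star, q in [(1.0, 2.0), (2.0, 1.0), (2.0, np.inf), (4.0, 2.0)]:
    lhs = np.linalg.norm(v, p_star)
    rhs = k ** max(1 / p_star - 1 / q, 0.0) * np.linalg.norm(v, q)
    assert lhs <= rhs + 1e-9
```

When $p^* \ge q$ the exponent is zero and the inequality is the usual monotonicity of $\ell_p$ norms; when $p^* < q$ the dimension factor $k^{1/p^* - 1/q}$ is tight for the all-ones vector.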

#### 4.1.3 Path norm.

###### Definition 5 (Path norm).

Define the following "path norm", for $q \ge 1$:

$$\|\pi(\theta)\|_q := \left[\mathbb{E}\Big(\sum_{i_0, i_1, \dots, i_L}\Big|X_{i_0}\prod_{t=1}^{L+1}D^t_{i_t}(X)\Big|^{q^*}\Big)^{2/q^*}\right]^{1/2}\cdot\Big(\sum_{i_0, i_1, \dots, i_L}\prod_{t=0}^{L}\big|W^t_{i_t i_{t+1}}\big|^q\Big)^{1/q}, \tag{4.9}$$

where $1/q + 1/q^* = 1$ and the indices range over $i_0 \in [p],\, i_1 \in [k_1],\, \dots,\, i_L \in [k_L]$. Here $\pi(\theta)$ is a notation for all the paths (from input to output) through the weights.

###### Lemma 4.3 (Path-q norm).

The following inequality holds for any $q \ge 1$:

$$\frac{1}{L+1}\|\theta\|_{\mathrm{fr}} \le \|\pi(\theta)\|_q. \tag{4.10}$$
###### Remark 4.3.

The path norm has been investigated in (Neyshabur et al., 2015a), where the definition is

$$\Big(\sum_{i_0, i_1, \dots, i_L}\prod_{t=0}^{L}\big|W^t_{i_t i_{t+1}}\big|^q\Big)^{1/q}.$$

Again, let us provide an intuitive explanation for our pre-factor

$$\left[\mathbb{E}\Big(\sum_{i_0, i_1, \dots, i_L}\Big|X_{i_0}\prod_{t=1}^{L+1}D^t_{i_t}(X)\Big|^{q^*}\Big)^{2/q^*}\right]^{1/2},$$

here for the case $q = 1$ (so $q^* = \infty$). Due to Lemma 4.3, when the expectation is over the empirical measure,

$$\frac{1}{L+1}\|\theta\|_{\mathrm{fr}} \le \left[\hat{\mathbb{E}}\Big(\sum_{i_0, \dots, i_L}\Big|X_{i_0}\prod_{t=1}^{L+1}D^t_{i_t}(X)\Big|^{q^*}\Big)^{2/q^*}\right]^{1/2}\cdot\Big(\sum_{i_0, \dots, i_L}\prod_{t=0}^{L}\big|W^t_{i_t i_{t+1}}\big|^q\Big)^{1/q} \le \max_i\|X_i\|_\infty\cdot\sum_{i_0, \dots, i_L}\prod_{t=0}^{L}\big|W^t_{i_t i_{t+1}}\big|,$$

which implies

$$\Big\{\theta : \sum_{i_0, \dots, i_L}\prod_{t=0}^{L}\big|W^t_{i_t i_{t+1}}\big| \le \frac{1}{\max_i\|X_i\|_\infty}\Big\} \subset B_{\mathrm{fr}}(1).$$

By Corollary 7 in (Neyshabur et al., 2015b), we know that for any $r > 0$, the Rademacher complexity of the path-$1$ norm ball satisfies

$$\mathcal{R}_N\Big(\Big\{\theta : \sum_{i_0, \dots, i_L}\prod_{t=0}^{L}\big|W^t_{i_t i_{t+1}}\big| \le r\Big\}\Big) \precsim r\cdot 2^{L}\max_i\|X_i\|_\infty\cdot\frac{\mathrm{Polylog}}{\sqrt{N}}.$$

Plugging in $r = [\max_i\|X_i\|_\infty]^{-1}$, we find that the subset of the Fisher-Rao norm ball induced by the path-$1$ norm geometry satisfies

$$\mathcal{R}_N\Big(\Big\{\theta : \sum_{i_0, \dots, i_L}\prod_{t=0}^{L}\big|W^t_{i_t i_{t+1}}\big| \le \frac{1}{\max_i\|X_i\|_\infty}\Big\}\Big) \precsim \frac{1}{\max_i\|X_i\|_\infty}\cdot 2^{L}\max_i\|X_i\|_\infty\cdot\frac{\mathrm{Polylog}}{\sqrt{N}} \to 0.$$

Once again, the additional factor $\max_i\|X_i\|_\infty$ appearing in the Rademacher complexity bound of (Neyshabur et al., 2015b) cancels with our pre-factor in the norm comparison.
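The path-$1$ comparison rests on the pointwise bound $|f_\theta(x)| \le \|x\|_\infty \cdot \sum_{\text{paths}}\prod_t |W^t_{i_t i_{t+1}}|$, obtained by expanding the matrix form (3.4) over paths and bounding each $0/1$ mask entry by $1$. A one-hidden-layer spot-check (toy random net, illustrative only):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)

def relu(z):
    return np.maximum(z, 0.0)

W0 = rng.standard_normal((3, 4))
W1 = rng.standard_normal((4, 1))
X = rng.standard_normal((50, 3))
F = (relu(X @ W0) @ W1).ravel()

# path-1 norm: sum over all input -> output paths of |W0_{i0 i1} * W1_{i1 0}|
path1 = sum(abs(W0[i0, i1] * W1[i1, 0])
            for i0, i1 in product(range(3), range(4)))

# pointwise bound |f(x)| <= ||x||_inf * path-1 norm, checked over the batch
assert np.max(np.abs(F)) <= np.max(np.abs(X)) * path1 + 1e-9
```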

#### 4.1.4 Matrix induced norm.

###### Definition 6 (Induced norm).

Define the following "matrix induced norm", for $p, q \ge 1$, as

$$\|\theta\|_{p\to q} := \left[\mathbb{E}\Big(\|X\|_p^2\prod_{t=1}^{L+1}\|D^t(X)\|^2_{q\to p}\Big)\right]^{1/2}\prod_{t=0}^{L}\|W^t\|_{p\to q}. \tag{4.11}$$

###### Lemma 4.4 (Matrix induced norm).

For any $p, q \ge 1$, the following inequality holds:

$$\frac{1}{L+1}\|\theta\|_{\mathrm{fr}} \le \|\theta\|_{p\to q}.$$

Remark that $\|D^t(X)\|_{q\to p}$ may contain dependence on the layer width $k_t$ when $p \ne q$. This motivates us to consider the following generalization of the matrix induced norm, where the norm for each layer can be different.

###### Definition 7 (Chain of induced norm).

Define the following “chain of induced norm” ball, for a chain of

 ∥θ∥P:=[E(∥X∥2p0L+1∏t