1 Introduction
Deep learning has enabled hierarchical neural networks to achieve excellent performance in various practical applications [1]. To proceed further, it would be beneficial to give a more theoretical elucidation of why and how deep neural networks (DNNs) work well in practice. In particular, it would be useful not only to clarify individual models and phenomena but also to explore unified theoretical frameworks that could be applied to a wide class of deep networks. One widely used approach for this purpose is to consider deep networks with random connectivity and a large width limit [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. For instance, Poole et al. [3] proposed a useful indicator to explain the expressivity of DNNs. Regarding the trainability of DNNs, Schoenholz et al. [4]
extended this theory to backpropagation and found that the vanishing and explosive gradients obey a universal law. These studies are powerful in the sense that they do not depend on particular model architectures, such as the number of layers or activation functions.
Unfortunately, such universal frameworks have not yet been established for many other topics. One is the geometric structure of the parameter space. For instance, a loss landscape without spurious local minima is important for easier optimization and is theoretically guaranteed in single-layer models [15], shallow piecewise-linear ones [16], and extremely wide deep networks whose number of training samples is smaller than the width [17]. Flat global minima have been reported to be related to generalization ability through empirical experiments showing that networks with such minima give better generalization performance [18, 19]. However, theoretical analysis of the flat landscape has been limited to shallow rectified linear unit (ReLU) networks [20, 21]. Thus, a remaining subject of interest is to theoretically reveal the geometric structure of the parameter space that is truly common among various deep networks.
To establish the foundation of this universal perspective on the parameter space, this study analytically investigates the Fisher information matrix (FIM). As is overviewed in Section 2.1, the FIM plays an essential role in the geometry of the parameter space and is a fundamental quantity in both statistics and machine learning.
1.1 Main results
This study analyzes the FIM of deep networks with random weights and biases, which are widely used settings for analyzing the phenomena of DNNs [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. First, we analytically obtain novel statistics of the FIM, namely, the mean (Theorem 1), variance (Theorem 3), and maximum of the eigenvalues (Theorem 4). These are universal among a wide class of shallow and deep networks with various activation functions, and they can be obtained from simple iterative computations of macroscopic variables. To our surprise, in the limit of a large network width M, the mean of the eigenvalues asymptotically decreases as O(1/M), while the variance remains O(1) and the maximum eigenvalue grows as O(M). Since the eigenvalues are nonnegative, these results mean that most of the eigenvalues are close to zero while the eigenvalue distribution has an extremely long tail. Because the FIM defines the Riemannian metric of the parameter space, the derived statistics imply that the space is locally flat in most dimensions but strongly distorted in others. In addition, because the FIM also determines the local shape of a loss landscape, the landscape is likewise expected to be locally flat in most directions while strongly distorted in a few.
Furthermore, to confirm the potential usage of the derived statistics, we present two exercises. The first concerns the Fisher-Rao norm [22] (Theorem 5). This norm was originally proposed to connect the flatness of a parameter space to a capacity measure of generalization ability. We evaluate the Fisher-Rao norm by using the mean of the eigenvalues obtained in Theorem 1. The second exercise is related to the more practical issue of determining the size of the learning rate necessary for steepest gradient descent to converge. We demonstrate that the maximum eigenvalue obtained in Theorem 4 enables us to roughly estimate learning rates that make the gradient method converge to global minima (Theorem 7). We expect that this will help alleviate the dependence of learning rates on heuristic settings.
1.2 Related works
Despite its importance in statistics and machine learning, the study of the FIM of neural networks has been limited so far. This is because layer-by-layer nonlinear maps and huge parameter dimensions make it difficult to take the analysis any further. Degeneracy of the eigenvalues of the FIM has been found in certain parameter regions [23]. To understand the loss landscape, Pennington and Bahri [5] utilized random matrix theory and obtained the spectra of the FIM and the Hessian under several assumptions, although their analysis is limited to special types of shallow networks. In contrast, this paper is the first attempt to apply the mean field approach, which overcomes the difficulties above and enables us to identify universal properties of the FIM in various types of DNNs.
LeCun et al. [24] investigated the Hessian of the loss, which coincides with the FIM at zero training error, and empirically reported that very large eigenvalues, i.e., "big killers", exist and affect the optimization (discussed in Section 4.2). The eigenvalue distribution peaks around zero while its tail is very long; this behavior has been known empirically for decades [25], but its theoretical evidence and evaluation have remained unsolved as far as we know. Our theory therefore provides novel theoretical evidence that this skewed eigenvalue distribution and its huge maximum appear universally in DNNs.
The theoretical tool we use here is known as the mean field theory of deep networks [3, 4, 10, 11, 12, 13, 14] as briefly overviewed in Section 2.4. This method has been successful in analyzing neural networks with random weights under a large width limit and in explaining the performance of the models. In particular, it quantitatively coincides with experimental results very well and can predict appropriate initial values of parameters for avoiding the vanishing or explosive gradient problems [4]. This analysis has been extended from fully connected deep networks to residual [11] and convolutional networks [14]. The evaluation of the FIM in this study is also expected to be extended to such cases.
2 Preliminaries
2.1 Fisher information matrix (FIM)
We focus on the Fisher information matrix (FIM) of neural network models, which previous works have developed and which is commonly used [26, 27, 28, 29, 30, 31]. It is defined by

F = E[ ∇_θ log p(x, y; θ) ∇_θ log p(x, y; θ)^⊤ ],   (1)

where the statistical model is given by p(x, y; θ) = p(y|x; θ) q(x). The output model is Gaussian, p(y|x; θ) ∝ exp(−‖y − f_θ(x)‖²/2), where f_θ(x) is the network output parameterized by θ and ‖·‖ is the Euclidean norm; q(x) is an input distribution. The expectation is taken over the input-output pairs (x, y) of the joint distribution p(x, y; θ). This FIM can be rewritten as F = Σ_k E[∇_θ f_k ∇_θ f_k^⊤], where f_k is the k-th entry of the output (k = 1, …, C). When T training samples are available, the expectation can be replaced by the empirical mean. This is known as the empirical FIM and often appears in practice [27, 28, 29, 30, 31]:

F = (1/T) Σ_{t=1}^{T} Σ_{k=1}^{C} ∇_θ f_k(x_t) ∇_θ f_k(x_t)^⊤.   (2)

This study investigates the above empirical FIM for arbitrary T; it converges to the expected FIM as T → ∞. Although the form of the FIM changes slightly for other statistical models (e.g., softmax outputs), these differences are essentially limited to a multiplicative factor involving the activations in the output layer [30]. Our framework can be straightforwardly applied to such cases.
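A minimal sketch of the empirical FIM (2) for a toy network may make the definition concrete. The tiny sizes and the finite-difference gradient below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
M0, M1 = 3, 4                       # illustrative input and hidden widths
theta = np.concatenate([
    rng.normal(0, 1/np.sqrt(M0), M1 * M0),   # hidden weights, flattened
    rng.normal(0, 1/np.sqrt(M1), M1),        # output weights
])

def f(theta, x):
    """Scalar output of a one-hidden-layer tanh network."""
    W1 = theta[:M1 * M0].reshape(M1, M0)
    w2 = theta[M1 * M0:]
    return w2 @ np.tanh(W1 @ x)

def grad(theta, x, eps=1e-6):
    """Central-difference gradient of the output w.r.t. theta."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (f(theta + e, x) - f(theta - e, x)) / (2 * eps)
    return g

X = rng.standard_normal((10, M0))            # T = 10 Gaussian inputs
G = np.stack([grad(theta, x) for x in X])    # T x P per-sample gradients
F = G.T @ G / len(X)                         # empirical FIM (2), P x P
```

By construction F is symmetric positive semidefinite, which is what makes the eigenvalue statistics studied below well defined.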
The FIM determines the asymptotic accuracy of estimated parameters, as is known from a fundamental theorem of statistics, namely, the Cramér-Rao bound. Below, we summarize a more intuitive understanding of the FIM from geometric viewpoints.
Information geometric view. Let us define an infinitesimal squared distance ds², which represents the Kullback-Leibler divergence between the statistical model p(x, y; θ) and its perturbation p(x, y; θ + dθ). It is given by

ds² = 2 KL( p(x, y; θ) ‖ p(x, y; θ + dθ) ) = dθ^⊤ F dθ.   (3)

This means that the parameter space of a statistical model forms a Riemannian manifold and that the FIM works as its Riemannian metric, as studied in information geometry [32]. This quadratic form is also equivalent to the robustness of a deep network's output against parameter perturbations. Insights from information geometry have led to the development of natural gradient algorithms [30, 29, 31] and, recently, a capacity measure based on the Fisher-Rao norm [22].
Loss landscape view. The empirical FIM (2) determines the local landscape of the loss function around the global minimum. Suppose we have the squared loss function E(θ) = (1/(2T)) Σ_t ‖y(t) − f_θ(x(t))‖². The FIM is related to the Hessian of the loss, H := ∇_θ∇_θ E, in the following way:

H = F − (1/T) Σ_t Σ_k ( y_k(t) − f_k(x(t)) ) ∇_θ∇_θ f_k(x(t)).   (4)

The Hessian coincides with the FIM when the parameter converges to the global minimum by learning, that is, to the true parameter from which the teacher signal is generated by y = f_θ*(x) or, more generally, with noise (i.e., y = f_θ*(x) + ε, where ε denotes zero-mean Gaussian noise) [27]. In the literature on deep learning, the eigenvectors whose eigenvalues are close to zero locally compose flat minima, which empirically lead to better generalization [19, 22]. Modifying the loss function with the FIM has also succeeded in overcoming catastrophic forgetting [33].
Note that the information geometric view tells us more than the loss landscape view: while the Hessian (4) assumes a special teacher signal, the FIM works as the Riemannian metric for arbitrary teacher signals.
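The coincidence of the Hessian and the FIM at zero training error can be checked numerically on a toy model, since the residual term in (4) vanishes when the teacher equals the student. A minimal sketch, assuming a hypothetical two-parameter model f(θ, x) = θ₁ tanh(θ₀ x) and finite-difference second derivatives (not the paper's setup):

```python
import numpy as np

X = np.array([0.5, -1.0, 2.0])                 # three illustrative inputs

def f(theta, x):
    return theta[1] * np.tanh(theta[0] * x)

theta_star = np.array([0.7, 1.3])
y = np.array([f(theta_star, x) for x in X])    # teacher = student: zero error

def loss(theta):
    return 0.5 * sum((f(theta, x) - yx) ** 2 for x, yx in zip(X, y))

def hessian(g, theta, eps=1e-5):
    """Finite-difference Hessian of a scalar function g at theta."""
    n = theta.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.eye(n)[i] * eps
            ej = np.eye(n)[j] * eps
            H[i, j] = (g(theta + ei + ej) - g(theta + ei - ej)
                       - g(theta - ei + ej) + g(theta - ei - ej)) / (4 * eps**2)
    return H

# FIM from analytic first derivatives of f (sum over samples, no 1/T here
# because the loss above also sums over samples)
grads = np.array([[theta_star[1] * x / np.cosh(theta_star[0] * x) ** 2,
                   np.tanh(theta_star[0] * x)] for x in X])
F = grads.T @ grads
H_num = hessian(loss, theta_star)   # should match F at zero training error
```

At this global minimum the numerically computed Hessian and the FIM agree to finite-difference accuracy, as Eq. (4) predicts.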
2.2 Network architecture
This study investigates a fully connected feedforward neural network. The network consists of one input layer with M₀ units, L hidden layers (l = 1, …, L) with M_l units per hidden layer, and one output layer with M_{L+1} = C units:

u_i^l = Σ_j W_ij^l h_j^{l−1} + b_i^l,   h_i^l = φ(u_i^l),   h^0 = x.   (5)

This study focuses on the case of linear outputs, that is, f_k = u_k^{L+1}. We assume that the activation function φ and its derivative φ′ are square-integrable functions on a Gaussian measure. A wide class of activation functions, including the sigmoid-like and (leaky) ReLU functions, satisfies these conditions. Different layers may have different activation functions. Regarding the network width, we set M_l = α_l M and consider the limiting case of large M with constant coefficients α_l. This study mainly focuses on the case where the number of output units is a constant, C = O(1). The higher-dimensional case of C = O(M) is argued in Section 3.4.
The FIM (2) of a deep network is computed by the chain rule in a manner similar to the backpropagation algorithm:

∂f_k/∂W_ij^l = δ_i^l h_j^{l−1},   ∂f_k/∂b_i^l = δ_i^l,   (6)
δ_i^l = φ′(u_i^l) Σ_j W_ji^{l+1} δ_j^{l+1},   (7)

where δ_i^l := ∂f_k/∂u_i^l for l = 1, …, L, and the chain is initialized at the linear output layer. To avoid complicated notation, we omit the index k of the output unit in δ_i^l in the following.
2.3 Random connectivity
The parameter set θ = {W_ij^l, b_i^l} is an ensemble generated by

W_ij^l ~ N(0, σ_w²/M_{l−1}),   b_i^l ~ N(0, σ_b²),   (8)

and then fixed, where N(0, σ²) denotes a Gaussian distribution with zero mean and variance σ². To avoid complicated notation, we set the variances uniformly as σ_w² and σ_b² for all layers, but they can easily be generalized. It is essential to normalize the variance of the weights by M_{l−1} in order to keep the activations O(1). This setting is similar to how parameters are initialized in practice [34]. We also assume that the input samples are generated in an i.i.d. manner from a standard Gaussian distribution. We focus here on the Gaussian case for simplicity, although we can easily generalize it to other distributions with finite variances.
Let us remark that the above random connectivity is a common setting widely supposed in theories. Analyzing such a network can be regarded as a typical-case evaluation [2, 3, 5]. It is also equivalent to analyzing a randomly initialized network [20, 4]. The random connectivity is also often assumed in the analysis of optimization as a true parameter of the network, that is, the global minimum [21, 35].
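The ensemble (8) can be sketched in a few lines; the values of σ_w, σ_b, and the widths below are illustrative choices, not the paper's settings:

```python
import numpy as np

def init_layers(widths, sigma_w=1.0, sigma_b=0.1, seed=0):
    """Draw weights W^l ~ N(0, sigma_w^2 / M_{l-1}) and biases b^l ~ N(0, sigma_b^2)."""
    rng = np.random.default_rng(seed)
    params = []
    for m_in, m_out in zip(widths[:-1], widths[1:]):
        W = rng.normal(0.0, sigma_w / np.sqrt(m_in), (m_out, m_in))
        b = rng.normal(0.0, sigma_b, m_out)
        params.append((W, b))
    return params

layers = init_layers([100, 100, 100, 1])
# With the 1/M_{l-1} normalization, pre-activations stay O(1) for O(1) inputs:
x = np.random.default_rng(1).standard_normal(100)
h = layers[0][0] @ x + layers[0][1]
```

The empirical variance of the pre-activations h stays of order one regardless of the width, which is exactly what the normalization in (8) is designed to achieve.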
2.4 Meanfield approach
For neural networks with random connectivity, taking a large width limit, we can analyze the asymptotic behaviors of the networks. Recently, this asymptotic analysis has been referred to as the mean field theory of deep networks, and we follow the previously reported notations and terminology [3, 4, 11, 12].
First, let us introduce the following variables for the feedforward signal propagation: q^l := E[(u_i^l)²], the variance of a pre-activation in the l-th layer, and q_{st}^l := E[u_i^l(s) u_i^l(t)], the correlation between the pre-activations for different input samples x(s) and x(t). In the context of deep learning, these variables have been utilized to explain the depth to which signals can sufficiently propagate. Under the large-M limit, these variables are given by integration over Gaussian distributions because the pre-activation u_i^l is a weighted sum of independent random parameters and the central limit theorem is applicable [2, 3, 4]:

q^l = σ_w² ∫ Dz φ(√(q^{l−1}) z)² + σ_b²,   (9)
q_{st}^l = σ_w² ∫ Dz Dz′ φ(u₁) φ(u₂) + σ_b²,   (10)

with q^0 = Σ_i x_i(s)²/M₀ and q_{st}^0 = Σ_i x_i(s) x_i(t)/M₀. The notation ∫ Dz means integration over the standard Gaussian density. Here, u₁ := √(q^{l−1}) z and u₂ := √(q^{l−1}) ( c^{l−1} z + √(1 − (c^{l−1})²) z′ ) with c^l := q_{st}^l / q^l. The variable q_{st}^l is linked to the compositional kernel and utilized as the kernel of a Gaussian process [36].
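The recurrence (9) can be iterated numerically. A minimal sketch for the tanh activation, using Gauss-Hermite quadrature for the Gaussian integral; the values σ_w² = 1 and σ_b² = 0.01 are illustrative, not taken from the paper:

```python
import numpy as np

# Probabilists' Gauss-Hermite nodes/weights approximate integrals over N(0,1).
z, w = np.polynomial.hermite_e.hermegauss(101)
w = w / w.sum()   # normalize so the weights sum to 1

def next_q(q, sigma_w2=1.0, sigma_b2=0.01):
    """One step of recurrence (9): q^l = sigma_w^2 E[phi(sqrt(q^{l-1}) z)^2] + sigma_b^2."""
    return sigma_w2 * np.sum(w * np.tanh(np.sqrt(q) * z) ** 2) + sigma_b2

q = 1.0
for _ in range(100):     # deep-network limit: q^l converges to a fixed point
    q = next_q(q)
```

After enough layers the length variable q^l settles at a fixed point, the quantity that the depth-propagation analyses of [3, 4] are built on.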
Next, let us introduce variables for the backpropagated signals: q̃^l := E[(δ_i^l)²] and q̃_{st}^l := E[δ_i^l(s) δ_i^l(t)], where q̃_{st}^l is the correlation of the backpropagated signals for different inputs. To compute these quantities, the previous studies assumed the following:
Assumption 1 (Schoenholz et al. [4]). In evaluating the backpropagated chain (7), the weights are taken to be an i.i.d. copy, drawn independently of the weights used in the feedforward propagation.
This assumption makes the dependence between the backpropagated signals δ_i^l and the activations, which share the same parameter set, very weak, so that one can regard them as independent. It enables us to apply the central limit theorem to the backpropagated chain (7). Thus, the previous studies [4, 11, 12, 7] derived the following recurrence relations (l = L, …, 1):

q̃^l = σ_w² q̃^{l+1} ∫ Dz φ′(√(q^l) z)²,   (11)
q̃_{st}^l = σ_w² q̃_{st}^{l+1} ∫ Dz Dz′ φ′(u₁) φ′(u₂),   (12)

where u₁ and u₂ are correlated Gaussian pre-activations determined by q^l and q_{st}^l, and with the initial condition at the top layer given by the linear outputs. The previous works confirmed excellent agreement between the above equations and experiments. In this study, we also adopt the above assumption and use these recurrence relations.
The variables q^l, q_{st}^l, q̃^l, and q̃_{st}^l depend only on the variance parameters σ_w² and σ_b², not on the unit indices. In that sense, they are referred to as macroscopic variables (a.k.a. order parameters in statistical physics). The recurrence relations for the macroscopic variables simply require iterations of one- and two-dimensional numerical integrals. Moreover, we can obtain their explicit forms for some activation functions (such as the error function, linear, and ReLU activations; see Supplementary Material B).
3 Fundamental FIM statistics
Here, we report mathematical findings that the mean, variance, and maximum of the eigenvalues of the FIM (2) are explicitly expressed by using the macroscopic variables. Our theorems are universal for networks ranging from shallow (L = 1) to arbitrarily deep ones with various activation functions.
3.1 Mean of eigenvalues
The FIM is a P × P matrix, where P represents the total number of parameters. First, we consider the arithmetic mean of the FIM's eigenvalues, m_λ := Σ_i λ_i / P = Tr(F)/P. We find a hidden relation between the macroscopic variables and this statistic of the FIM:
Theorem 1.
In the limit of M → ∞, the mean of the FIM's eigenvalues is given by

m_λ = κ₁ / M,   (13)

where κ₁ is an O(1) coefficient determined by the macroscopic variables q^l and q̃^l (its explicit form, including the dependence on the output dimension and the width ratios, is given in Supplementary Material A.1). The macroscopic variables can be computed recursively, and notably m_λ is O(1/M).
This is obtained from the relation m_λ = Tr(F)/P (detailed in Supplementary Material A.1). The coefficient κ₁ is a constant not depending on M, so m_λ is O(1/M). It is easily computed by iterating the layer-wise recurrence relations (9) and (11).
Because the FIM is a positive semidefinite matrix and its eigenvalues are nonnegative, this theorem means that most of the eigenvalues asymptotically approach zero when M is large. Recall that the FIM determines the local geometry of the parameter space. The theorem thus suggests that the network output remains almost unchanged against a perturbation of the parameters in many dimensions. It also suggests that the shape of the loss landscape is locally flat in most dimensions.
Furthermore, by using Markov’s inequality, we can prove that the number of larger eigenvalues is limited, as follows:
Corollary 2.
Let us denote the number of eigenvalues satisfying λ_i ≥ k by N(k). For a constant k > 0, N(k) = O(M) holds in the limit of M → ∞.
The proof is shown in Supplementary Material A.2. This corollary clarifies that the number of eigenvalues whose values are O(1) is at most O(M), and thus much smaller than the total number of parameters P = O(M²).
3.2 Variance of eigenvalues
Next, let us consider the second moment s_λ := Σ_i λ_i² / P. We now demonstrate that s_λ can be computed from the macroscopic variables:
Theorem 3.
In the limit of M → ∞, the second moment of the FIM's eigenvalues is given by an explicit expression in the macroscopic variables q^l, q̃^l, q_{st}^l, and q̃_{st}^l (Eqs. (14) and (15); see Supplementary Material A.3). The macroscopic variables can be computed recursively, and s_λ is O(1).
Let us remark that we have assumed σ_b² > 0 in the setting (8). If one considers the case of no bias terms (σ_b² = 0) with an odd activation function, the correlations between different inputs vanish and the leading term of s_λ becomes zero. In such exceptional cases, we need to evaluate the lower-order terms of the statistics (outside the scope of this study).
The proof is shown in Supplementary Material A.3. Briefly, one can express the FIM as F = (1/T) Σ_t ∇_θ f(x_t) ∇_θ f(x_t)^⊤ by definition. Here, let us consider the dual matrix of F, that is, F*_{st} := (1/T) ∇_θ f(x_s)^⊤ ∇_θ f(x_t). F and F* have the same nonzero eigenvalues. Because the sum of the squared eigenvalues is equal to Tr((F*)²), we have s_λ = Tr((F*)²)/P. A nondiagonal entry of F* corresponds to an inner product of the network activities for the different inputs x_s and x_t, while a diagonal entry is given by the squared norm ‖∇_θ f(x_s)‖²/T. Taking the summation over all s and t, we obtain the theorem. In particular, when T = 1 and C = 1, F* is equal to the squared norm of the derivative, ‖∇_θ f(x)‖², and one can directly check the claimed order in this case.
From Theorems 1 and 3, we can conclude that the variance of the eigenvalue distribution, s_λ − m_λ², is O(1). Because the mean m_λ is O(1/M) and most eigenvalues are close to zero, this result means that the eigenvalue distribution has a long tail.
3.3 Maximum eigenvalue
As we have seen so far, the mean of the eigenvalues is O(1/M), and the variance is O(1). Therefore, we can expect that at least one of the eigenvalues must be huge. Indeed, we can show that the maximum eigenvalue (that is, the spectral norm of the FIM) increases in the order of M, as follows.
Theorem 4.
In the limit of M → ∞, the maximum eigenvalue of the FIM satisfies

λ_max = O(M),   (16)

with an O(1) coefficient obtained from the macroscopic variables.
This growth is derived from the dual matrix F* (detailed in Supplementary Material A.4). If we take the limit T → ∞, we can characterize the coefficient directly by the maximum eigenvalue as λ_max/M. Note that this coefficient is independent of T. When C = O(M), it may depend on C, as shown in Section 3.4.
This theorem suggests that the network output changes dramatically under a perturbation of the parameters in certain dimensions and that the local shape of the loss landscape is strongly distorted in those directions. Note also that the coefficient of λ_max includes a summation over the layers. This means that, as the network becomes deeper, the parameter space becomes more strongly distorted.
We confirmed the agreement between our theory and numerical experiments, as shown in Fig. 1. Three types of deep networks with parameters generated by the random connectivity (8) were investigated: those with tanh, ReLU, and linear activations. The input samples were generated as i.i.d. Gaussian samples. When the number of parameters was large, we calculated the eigenvalues by using the dual matrix F* (defined in Supplementary Material A.3) because it is much smaller and its eigenvalues are easy to compute. The theoretical values of m_λ, s_λ, and λ_max agreed very well with the experimental values in the large-M limit, and we could predict λ_max even for small M. In addition, in Supplementary Material C.1, we show the results of experiments with the width fixed and the depth changing. The theoretical values coincided with the experimental values very well for any depth, as the theorems predict.
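The qualitative scaling in Theorems 1 and 4 can also be probed directly in a small simulation. A minimal sketch, assuming a one-hidden-layer tanh network with analytic gradients; the widths and sample count are illustrative, not the paper's settings:

```python
import numpy as np

def fim_eigs(M, T=50, seed=0):
    """Eigenvalues of the empirical FIM of a width-M one-hidden-layer tanh net,
    computed via the small T x T dual matrix (same nonzero spectrum)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 1/np.sqrt(M), (M, M))   # input width = M as well
    w2 = rng.normal(0, 1/np.sqrt(M), M)
    X = rng.standard_normal((T, M))
    grads = []
    for x in X:
        a = np.tanh(W1 @ x)
        gW1 = np.outer(w2 * (1 - a**2), x)     # d f / d W1 (chain rule)
        grads.append(np.concatenate([gW1.ravel(), a]))   # a = d f / d w2
    G = np.stack(grads)                        # T x P per-sample gradients
    return np.linalg.eigvalsh(G @ G.T / T)     # dual-matrix spectrum

small, large = fim_eigs(50), fim_eigs(400)
mean_small = small.sum() / (50**2 + 50)        # trace / P = mean eigenvalue
mean_large = large.sum() / (400**2 + 400)
```

As the width grows, the mean eigenvalue shrinks while the largest eigenvalue grows, in line with the O(1/M) and O(M) predictions.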
3.4 Multilabel classification with high dimensionality
This study has mainly focused on output dimensions of C = O(1). This is because the number of labels is much smaller than the number of hidden units in most practical cases. However, since classification problems with far more labels are sometimes examined in the context of machine learning [37], it would be helpful to remark on the case of C = O(M) here. Denote the mean of the FIM's eigenvalues in the case of C = O(M) by m_λ′, and so on. Straightforwardly, we can derive bounds on these statistics (Eqs. (17) and (18); the derivation is shown in Supplementary Material A.5). The mean of the eigenvalues has the same form as Eq. (13) obtained in the case of C = O(1). The second moment and the maximum eigenvalue can be evaluated in the form of inequalities. We found that the maximum eigenvalue is at least O(M) and at most O(M²). Therefore, the eigenvalue distribution is more widely spread than in the case of C = O(1).
4 Connections to learning strategies
Here, we show some applications that demonstrate how our universal theory on the FIM can potentially enrich deep learning theories. It enables us to quantitatively measure the behaviors of learning strategies as follows.
4.1 The Fisher-Rao norm
Recently, Liang et al. [22] proposed the Fisher-Rao norm as a capacity measure of generalization ability:

‖θ_w‖_fr² := θ_w^⊤ F θ_w,   (19)

where θ_w represents the weight parameters. They reported that this norm has several desirable properties for explaining the high generalization capability of DNNs. In deep linear networks, the generalization capacity (Rademacher complexity) is upper bounded by using the norm. In deep ReLU networks, the Fisher-Rao norm serves as a lower bound of the capacities induced by other norms, such as the path norm [38] and the spectral norm [39]. The Fisher-Rao norm is also motivated by information geometry and is invariant under node-wise linear rescaling in ReLU networks. This is a desirable property for connecting capacity measures with the flatness induced by such rescaling [40].
Here, to obtain a typical evaluation of the norm, we define the average over the possible parameters with fixed variances (σ_w², σ_b²) by ⟨·⟩, which leads to the following theorem:
Theorem 5.
In the limit of M → ∞, the average Fisher-Rao norm of DNNs satisfies a bound expressed by the macroscopic variables (Eq. (20)). Equality holds in a network with a uniform width, and then the average norm is O(1).
The proof is shown in Supplementary Material A.6. Although what we can evaluate is only the average of the norm, it can be quantified by the macroscopic variables. In the supplementary material, we also show an explicit expression for the norm in a network with uniform width, computed from the macroscopic variables. This guarantees that the norm is independent of the network width in the limit of M → ∞, which was empirically conjectured in [22].
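As a sanity check on definition (19), for a linear model f(x) = wᵀx with standard Gaussian inputs the FIM is approximately the identity, so the squared Fisher-Rao norm reduces to ‖w‖². A minimal sketch (the linear model is an illustrative special case, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(0)
T, P = 100000, 5
w = rng.standard_normal(P)           # weight parameters of the linear model
X = rng.standard_normal((T, P))      # standard Gaussian inputs

# For f(x) = w^T x, the per-sample gradient is x itself, so the empirical
# FIM (2) is the input covariance, which concentrates around the identity.
F = X.T @ X / T
fr2 = w @ F @ w                      # squared Fisher-Rao norm (19)
```

Up to sampling error, fr2 matches ‖w‖², illustrating how the norm couples the parameters to the metric defined by the FIM.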
Recently, Smith and Le [41] argued that a Bayesian factor composed of the Hessian of the loss function, a special case of which is the FIM, is related to generalization. Analysis similar to the above theorem may enable us to quantitatively understand the relation between the statistics of the FIM and indicators measuring generalization ability.
4.2 Learning rate for convergence
Consider the steepest gradient descent method in the batch regime. Its update rule is given by

θ_{t+1} = θ_t − η ∇_θ E(θ_t) + μ (θ_t − θ_{t−1}),   (21)

where η is a constant learning rate. We have added a momentum term with a coefficient μ because it is widely used in training deep networks. Assume that the squared loss function of Eq. (4) has a global minimum θ* achieving zero training error, E(θ*) = 0. Then, the FIM's maximum eigenvalue dominates the convergence of learning, as follows:
Lemma 6.
A learning rate satisfying η < 2(1 + μ)/λ_max is necessary for the steepest gradient method to converge to the global minimum θ*.
The proof is given by an expansion around the minimum (detailed in Supplementary Material A.7). This lemma is a generalization of LeCun et al. [24], which proved the case of μ = 0. Let us refer to η_c := 2(1 + μ)/λ_max as the critical learning rate. When η > η_c, the gradient method never converges to the global minimum. The previous work [24] also claimed that setting the learning rate to half of the critical value is the best choice for the fastest convergence around the minimum. Although we focus on the batch regime, the eigenvalues also determine the bound of the gradient norms and the convergence of learning in the online regime [42].
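The role of the critical learning rate can be illustrated on a quadratic loss, where gradient descent (momentum set to zero for simplicity, so η_c = 2/λ_max) diverges exactly when the learning rate exceeds the critical value; the sizes and seed below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 10))
H = A @ A.T                          # PSD Hessian, standing in for the FIM
lam_max = np.linalg.eigvalsh(H)[-1]

def run_gd(eta, steps=200):
    """Gradient descent on L = 0.5 theta^T H theta; returns final ||theta||."""
    theta = np.ones(10)
    for _ in range(steps):
        theta = theta - eta * H @ theta   # update factor per mode: 1 - eta*lam
    return np.linalg.norm(theta)

below = run_gd(0.9 * 2 / lam_max)    # below the critical rate: contracts
above = run_gd(1.1 * 2 / lam_max)    # above the critical rate: explodes
```

Every eigenmode is multiplied by 1 − ηλ per step, so the iterate contracts iff η < 2/λ_max, which is the μ = 0 case of Lemma 6.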
Theorem 7.
Suppose we have a global minimum θ* generated by Eq. (8) and satisfying E(θ*) = 0. In the limit of M → ∞, the gradient method never converges to θ* when

η > 2(1 + μ)/λ_max,   λ_max = O(M),   (22)

where λ_max is computed from the macroscopic variables via Theorem 4; that is, the critical learning rate decreases as O(1/M).
Theorem 7 quantitatively reveals that the wider the network becomes, the smaller the learning rate we need to set. In addition, the coefficient of λ_max includes a sum of positive terms over the layers, so a deeper network requires a finer setting of the learning rate, and this will make the optimization more difficult. In contrast, the expressive power of the network grows exponentially as the number of layers increases [3, 43]. We thus expect there to be a trade-off between trainability and expressive power.
To confirm the effectiveness of Theorem 7, we performed several experiments. As shown in Fig. 2, we exhaustively searched the training losses while changing the learning rate and the network settings, and found that the theoretical estimation coincides well with the experimental results. We trained deep networks, and the loss function was given by the squared error.
The left column of Fig. 2 shows the results of training on artificial data. We generated the training samples in the Gaussian manner and the teacher signals by a teacher network with a true parameter set satisfying Eq. (8). We used the gradient method (21) with momentum. The variances used to initialize the parameters were set to be the same as those of the global minimum. We found that the losses of the experiments were clearly divided into two areas: one where the gradient exploded (gray area) and the other where it converged (colored area). The red line is the critical learning rate theoretically calculated from the macroscopic variables of the initial parameters. Training in the regions above it exploded, just as Theorem 7 predicts, and the explosive region below the theoretical line shrank in the limit of large M.
We performed similar experiments on benchmark datasets and found that the theory can estimate appropriate learning rates. The results on MNIST are shown in the right column of Fig. 2. As shown in Supplementary Material C.2, the results of training on CIFAR-10 were almost the same as those on MNIST. We used stochastic gradient descent (SGD) with mini-batches and trained the DNNs for one epoch. Each training sample was normalized to zero mean and unit variance. The initial values of the parameters were set in the vicinity of the special parameter region, i.e., the critical line of the order-to-chaos transition, which the previous works [3, 4] recommended using for achieving high expressive power and trainability. Note that the variances may change from the initialization to the global minimum, so the conditions of the global minimum in Theorem 7 do not hold in general. Nevertheless, the learning rates estimated by Theorem 7 explained the experiments well. Therefore, the ideal conditions supposed in Theorem 7 seem to hold effectively. This may be explained by the conjecture that the change from the initialization to the global minima is small in the large-M limit [44].
Theoretical estimations of learning rates in deep networks have so far been limited; gradient methods such as AdaGrad and Adam also require heuristically determined hyperparameters for their learning rates. Extending our framework would be beneficial for choosing learning rates that prevent the gradient update from exploding.
5 Conclusion and discussion
The present work elucidated the asymptotic statistics of the Fisher information matrix (FIM) common among deep networks with any number of layers and various activation functions. The statistics of the FIM are characterized by the small mean of the eigenvalues and the huge maximum eigenvalue, which are computed from the recurrence relations of macroscopic variables. This suggests that the parameter space determined by the FIM is locally flat in many directions while highly distorted in certain others. As examples of how one can connect the derived statistics to learning strategies, we examined the Fisher-Rao norm and the learning rates of steepest gradient descent.
The derived statistics are also of potential importance to other learning strategies, for instance, natural gradient methods. When the loss landscape is non-uniformly distorted, naive gradient methods are likely to diverge or become trapped in plateau regions, whereas the natural gradient, which premultiplies the gradient by the inverse FIM, converges more efficiently [27, 30, 28, 29]. Several experiments have shown that the choice of the damping term ε, introduced into the inverse as (F + εI)⁻¹, is crucial to its performance in DNNs [31]. Since we found that the FIM has many eigenvalues close to zero, any naive inversion of it would be very unstable. Therefore, the development of more efficient gradient methods will require modifications such as damping.
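The need for damping can be sketched in a few lines: when the FIM is rank-deficient, as it is here with many near-zero eigenvalues, the undamped inverse does not exist, while the damped system remains well conditioned. The sizes and the value of ε below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((20, 100))          # 20 sample gradients, 100 parameters
F = G.T @ G / 20                            # rank-deficient FIM (rank 20 < 100)
grad = rng.standard_normal(100)             # a loss gradient to precondition

eps = 1e-2                                  # damping term, illustrative value
step = np.linalg.solve(F + eps * np.eye(100), grad)   # damped natural-gradient step
```

Without the εI term the linear system is singular; the damping floors the near-zero eigenvalues and keeps the natural-gradient direction finite.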
It would also be interesting to use our framework to quantitatively reveal the effects of normalization methods on the FIM. In particular, batch normalization may alleviate the large eigenvalues because it empirically allows larger learning rates for convergence [45]. It would also be fruitful to investigate the eigenvalues of the Hessian at a large error (4) and to theoretically quantify the negative eigenvalues that lead to the existence of saddle points, as well as loss landscapes without spurious local minima [46]. The global structure of the parameter space should also be explored. We can hypothesize that the parameters are globally connected through the locally flat dimensions and compose manifolds of flat minima.
Our framework for FIMs is readily applicable to other architectures, such as convolutional networks and residual networks, by using the corresponding mean field theories [11, 12]. To this end, it may be helpful to remark that the macroscopic variables in residual networks essentially diverge at extreme depths [11]. If one considers extremely deep residual networks, the statistics will require a careful examination of the order of the network width and the explosion of the macroscopic variables. We expect that further studies will establish a mathematical foundation of deep learning from the perspective of the large-width limit.
Acknowledgments
This work was supported by a Grant-in-Aid for Research Activity Start-up (17H07390) from the Japan Society for the Promotion of Science (JSPS).
References
 LeCun et al. [2015] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
 Amari [1974] Shun-ichi Amari. A method of statistical neurodynamics. Kybernetik, 14(4):201–215, 1974.
 Poole et al. [2016] Ben Poole, Subhaneil Lahiri, Maithreyi Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. In Advances in Neural Information Processing Systems (NIPS), pages 3360–3368, 2016.
 Schoenholz et al. [2016] Samuel S Schoenholz, Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. Deep information propagation. ICLR'2017, arXiv preprint arXiv:1611.01232, 2016.
 Pennington and Bahri [2017] Jeffrey Pennington and Yasaman Bahri. Geometry of neural network loss surfaces via random matrix theory. In International Conference on Machine Learning (ICML), pages 2798–2806, 2017.
 Pennington and Worah [2017] Jeffrey Pennington and Pratik Worah. Nonlinear random matrix theory for deep learning. In Advances in Neural Information Processing Systems (NIPS), pages 2634–2643, 2017.
 Pennington et al. [2018] Jeffrey Pennington, Samuel S Schoenholz, and Surya Ganguli. The emergence of spectral universality in deep networks. International Conference on Artificial Intelligence and Statistics (AISTATS), pages 1924–1932, 2018.
 Raghu et al. [2017] Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha Sohl-Dickstein. On the expressive power of deep neural networks. In International Conference on Machine Learning (ICML), pages 2847–2854, 2017.
 Daniely et al. [2016] Amit Daniely, Roy Frostig, and Yoram Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. In Advances In Neural Information Processing Systems (NIPS), pages 2253–2261, 2016.
 Li and Saad [2018] Bo Li and David Saad. Exploring the function space of deeplearning machines. Physical Review Letters, 120(24):248301, 2018.
 Yang and Schoenholz [2017] Ge Yang and Samuel Schoenholz. Mean field residual networks: On the edge of chaos. In Advances in Neural Information Processing Systems (NIPS), pages 2865–2873. 2017.

 Xiao et al. [2018] Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel S Schoenholz, and Jeffrey Pennington. Dynamical isometry and a mean field theory of CNNs: How to train 10,000-layer vanilla convolutional neural networks. In International Conference on Machine Learning (ICML), pages 5393–5402, 2018.
 Kadmon and Sompolinsky [2016] Jonathan Kadmon and Haim Sompolinsky. Optimal architectures in a solvable model of deep networks. In Advances in Neural Information Processing Systems (NIPS), pages 4781–4789, 2016.

 Chen et al. [2018] Minmin Chen, Jeffrey Pennington, and Samuel S Schoenholz. Dynamical isometry and a mean field theory of RNNs: Gating enables signal propagation in recurrent neural networks. In International Conference on Machine Learning (ICML), pages 873–882, 2018.
 Mei et al. [2016] Song Mei, Yu Bai, and Andrea Montanari. The landscape of empirical risk for non-convex losses. arXiv preprint arXiv:1607.06534, 2016.
 Soudry and Hoffer [2017] Daniel Soudry and Elad Hoffer. Exponentially vanishing suboptimal local minima in multilayer neural networks. arXiv preprint arXiv:1702.05777, 2017.
 Nguyen and Hein [2017] Quynh Nguyen and Matthias Hein. The loss surface of deep and wide neural networks. In International Conference on Machine Learning (ICML), pages 2603–2612, 2017.
 Hochreiter and Schmidhuber [1997] Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997.
 Keskar et al. [2016] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. ICLR’2017 arXiv:1609.04836, 2016.
 Safran and Shamir [2016] Itay Safran and Ohad Shamir. On the quality of the initial basin in overspecified neural networks. In International Conference on Machine Learning (ICML), pages 774–782, 2016.
 Tian [2017] Yuandong Tian. An analytical formula of population gradient for two-layered ReLU network and its applications in convergence and critical point analysis. In International Conference on Machine Learning (ICML), pages 3404–3413, 2017.
 Liang et al. [2017] Tengyuan Liang, Tomaso Poggio, Alexander Rakhlin, and James Stokes. FisherRao metric, geometry, and complexity of neural networks. arXiv preprint arXiv:1711.01530, 2017.

 Fukumizu [1996] Kenji Fukumizu. A regularity condition of the information matrix of a multilayer perceptron network. Neural Networks, 9(5):871–879, 1996.
 LeCun et al. [1998] Yann LeCun, Léon Bottou, Genevieve B Orr, and Klaus-Robert Müller. Efficient backprop. In Neural networks: Tricks of the trade, pages 9–50. Springer, 1998.
 Sagun et al. [2017] Levent Sagun, Utku Evci, V Ugur Guney, Yann Dauphin, and Leon Bottou. Empirical analysis of the hessian of overparametrized neural networks. arXiv preprint arXiv:1706.04454, 2017.
 Amari [1998] Shun-ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
 Amari et al. [2000] Shun-ichi Amari, Hyeyoung Park, and Kenji Fukumizu. Adaptive method of realizing natural gradient learning for multilayer perceptrons. Neural Computation, 12(6):1399–1409, 2000.
 Pascanu and Bengio [2013] Razvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. ICLR’2014 arXiv preprint arXiv:1301.3584, 2013.
 Ollivier [2015] Yann Ollivier. Riemannian metrics for neural networks I: feedforward networks. Information and Inference: A Journal of the IMA, 4(2):108–153, 2015.
 Park et al. [2000] Hyeyoung Park, Shun-ichi Amari, and Kenji Fukumizu. Adaptive natural gradient learning algorithms for various stochastic models. Neural Networks, 13(7):755–764, 2000.
 Martens and Grosse [2015] James Martens and Roger Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In International Conference on Machine Learning (ICML), pages 2408–2417, 2015.
 Amari [2016] Shun-ichi Amari. Information geometry and its applications. Springer, 2016.
 Kirkpatrick et al. [2017] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526, 2017.
 Glorot and Bengio [2010] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics (AISTATS), pages 249–256, 2010.
 Saad and Solla [1995] David Saad and Sara A Solla. Exact solution for online learning in multilayer neural networks. Physical Review Letters, 74(21):4337, 1995.
 Lee et al. [2017] Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. Deep neural networks as Gaussian processes. ICLR’2018 arXiv preprint arXiv:1711.00165, 2017.

 Deng et al. [2010] Jia Deng, Alexander C Berg, Kai Li, and Li Fei-Fei. What does classifying more than 10,000 image categories tell us? In European Conference on Computer Vision (ECCV), pages 71–84. Springer, 2010.
 Neyshabur et al. [2015] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control in neural networks. In Conference on Learning Theory (COLT), pages 1376–1401, 2015.
 Bartlett et al. [2017] Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky. Spectrally-normalized margin bounds for neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 6241–6250, 2017.
 Dinh et al. [2017] Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. In International Conference on Machine Learning (ICML), pages 1019–1028, 2017.
 Smith and Le [2017] Samuel L Smith and Quoc V Le. Understanding generalization and stochastic gradient descent. ICLR’2018 arXiv preprint arXiv:1710.06451, 2017.
 Bottou [1998] Léon Bottou. Online learning and stochastic approximations. Online learning in neural networks, 17(9):9–42, 1998.
 Montufar et al. [2014] Guido F Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear regions of deep neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 2924–2932, 2014.
 Neyshabur et al. [2018] Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro. Towards understanding the role of overparametrization in generalization of neural networks. arXiv preprint arXiv:1805.12076, 2018.
 Ioffe and Szegedy [2015] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pages 448–456, 2015.
 Dauphin et al. [2014] Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems (NIPS), pages 2933–2941, 2014.
Supplementary Materials
Appendix A Proofs
A.1 Theorem 1
(i) Case of a single output
To avoid complicating the notation, we first consider the case of a single output (). The general case is shown afterward. The network output is denoted by here. We denote the Fisher information matrix with full components as
(A.1 ) 
where we notice that
(A.2 ) 
In general, the sum over the eigenvalues is given by the matrix trace, . We also denote the average of the eigenvalues of the diagonal block as for , and for . Accordingly, we find
(A.3 ) 
The contribution of is negligible in the large limit as follows. The first term is
(A.4 )  
(A.5 ) 
Under Assumption 1, we can apply the central limit theorem to the summations over the units and independently, because we can assume that they do not share the same random weights and biases. By taking the limit of , we obtain , where and are computed by the recursive relations (9) and (11). Note that this transformation to the macroscopic variables holds regardless of the sample index . Therefore, we obtain
(A.6 ) 
where comes from , and comes from .
In contrast, the contributions of the bias entries are smaller than those of the weight entries in the limit of , as is easily confirmed:
(A.7 )  
(A.8 )  
(A.9 ) 
is while is . Hence, the mean is negligible and we obtain .
(ii) Case of multiple outputs
We can apply the above computation of to each network output ():
(A.10 ) 
Therefore, the mean of the eigenvalues becomes
(A.11 )  
(A.12 ) 
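The derivation above relies on the elementary fact that the mean of the eigenvalues equals the matrix trace divided by the number of parameters, so no eigendecomposition is needed. A minimal numerical sketch, in which the two-layer tanh network, its width, and the sample count are illustrative assumptions rather than the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting: a single-output two-layer tanh network with
# random Gaussian weights (standing in for the networks of Theorem 1).
M, T = 20, 30                       # width and number of input samples
W1 = rng.normal(0, 1 / np.sqrt(M), (M, M))
w2 = rng.normal(0, 1 / np.sqrt(M), M)
X = rng.normal(0, 1, (T, M))

# Per-sample gradients of f(x) = w2 . tanh(W1 x) with respect to the
# weight parameters (biases omitted for brevity).
grads = []
for x in X:
    h = np.tanh(W1 @ x)
    d_w2 = h                                 # df/dw2
    d_W1 = np.outer(w2 * (1 - h**2), x)      # df/dW1 via the chain rule
    grads.append(np.concatenate([d_W1.ravel(), d_w2]))
J = np.array(grads)                          # T x P gradient matrix

F = J.T @ J / T                              # empirical FIM (P x P)
P = F.shape[0]

# Mean eigenvalue equals trace(F)/P, with no eigendecomposition needed.
mean_eig = np.linalg.eigvalsh(F).mean()
assert np.isclose(mean_eig, np.trace(F) / P)
```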
A.2 Corollary 2
Because the FIM is a positive semidefinite matrix, its eigenvalues are nonnegative. For a constant , we obtain
(A.13 )  
(A.14 )  
(A.15 ) 
This is known as Markov’s inequality. When , combining this with Theorem 1 immediately yields Corollary 2:
(A.16 ) 
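Markov's inequality for a nonnegative spectrum can be illustrated directly: for any k > 0, the fraction of eigenvalues that are at least k times the mean is at most 1/k. A short sketch on the spectrum of a random positive semidefinite matrix (the sizes are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random positive semidefinite matrix, so all eigenvalues are >= 0.
A = rng.normal(size=(300, 80))
eigs = np.linalg.eigvalsh(A @ A.T / 80)
mean = eigs.mean()

# Markov's inequality: P(lambda >= k * mean) <= 1/k for any k > 0.
for k in [2.0, 5.0, 10.0]:
    frac = np.mean(eigs >= k * mean)
    assert frac <= 1.0 / k
```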
A.3 Theorem 3
(i) Case of a single output
Here, let us express the FIM as , where is a matrix whose columns are the gradients on each input sample, i.e., . We also introduce a dual matrix of , that is, :
(A.17 ) 
Note that is a matrix while is a matrix. We can easily confirm that these and have the same nonzero eigenvalues.
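The shared nonzero spectrum of the FIM and its dual is easy to confirm numerically; the matrix J below is a random stand-in for the gradient matrix, with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(2)

# J plays the role of the P x T matrix whose columns are per-sample
# gradients; P and T here are arbitrary illustrative sizes.
P, T = 500, 20
J = rng.normal(size=(P, T))
F = J @ J.T / T          # P x P matrix
F_dual = J.T @ J / T     # T x T dual matrix

# F has rank at most T; its T largest eigenvalues (the rest are ~0)
# coincide with the eigenvalues of the dual matrix.
eig_F = np.sort(np.linalg.eigvalsh(F))[-T:]
eig_dual = np.sort(np.linalg.eigvalsh(F_dual))
assert np.allclose(eig_F, eig_dual)
```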
The squared sum of the eigenvalues is given by . By using the Frobenius norm , this is . As with , the bias entries in are negligible because there are far fewer of them than weight entries. Therefore, we only need to consider the weight entries. The th entry of is given by
(A.18 )  
(A.19 ) 
where we defined
(A.20 ) 
Under Assumption 1, we can apply the central limit theorem to and independently because we can assume that they do not share the same random weights and biases. For , we have and in the limit of , where the macroscopic variables and satisfy the recurrence relations (10) and (12). and are constants of . Then, for all and ,
(A.21 )  
(A.22 ) 
Similarly, for , we have , and then .
Thus, in the limit of , the dual matrix is asymptotically given by
(A.23 ) 
Neglecting the lower-order term, we obtain
(A.24 )  
(A.25 ) 
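The identity underlying this step, that the squared sum of the FIM eigenvalues equals the squared Frobenius norm of the dual matrix, can be checked numerically with a random stand-in for the gradient matrix (illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(3)

# Random stand-in for the P x T gradient matrix.
P, T = 400, 15
J = rng.normal(size=(P, T))
F = J @ J.T / T
F_dual = J.T @ J / T

# sum_i lambda_i^2 = trace(F^2) = ||F_dual||_F^2, since F and F_dual
# share their nonzero eigenvalues.
sq_sum = (np.linalg.eigvalsh(F) ** 2).sum()
assert np.isclose(sq_sum, np.linalg.norm(F_dual, 'fro') ** 2)
```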
Note that, when , becomes zero and the lower-order term may be non-negligible. In this exceptional case, we have , where the second term comes from the term of Eq. (A.23). Therefore, the lower-order evaluation depends on the ratio, although it is outside the scope of this study. Intuitively, the origin of is related to the offset of the firing activities . The condition of is satisfied when the bias terms exist or when the activation is not an odd function. In such cases, the firing activities have the offset . Therefore, for any input samples and (), we have and then makes of .
(ii) Case of multiple outputs
Here, we introduce the following dual matrix :
(A.26 )  
(A.27 ) 
where is a matrix whose columns are the gradients on each input sample, i.e., , and is a matrix. The FIM is represented by . is a matrix and consists of block matrices,
(A.28 ) 
for .
The diagonal block is evaluated in the same way as the case of . It becomes as shown in Eq. (A.23). The nondiagonal block has the following th entries: