# Collective evolution of weights in wide neural networks

We derive a nonlinear integro-differential transport equation describing collective evolution of weights under gradient descent in large-width neural-network-like models. We characterize stationary points of the evolution and analyze several scenarios where the transport equation can be solved approximately. We test our general method in the special case of linear free-knot splines, and find good agreement between theory and experiment in observations of global optima, stability of stationary points, and convergence rates.


## 1 Introduction

Modern neural networks include millions of neurons, and one can expect that macroscopic properties of such big systems can be described by analytic models – similarly to how statistical mechanics or fluid dynamics describe macroscopic properties of big collections of physical particles. One specific approach in this general direction is to consider the limit of "large-width" neural networks, typically combined with the assumption of weight independence. There is a well-known connection between neural networks, Gaussian processes and kernel machines in this limit ([Neal, 2012, Williams, 1997]). Recently, this limit has been used, along with some complex mathematical tools, to understand the landscape of the loss surface of large networks. In particular, [Choromanska et al., 2015] establish a link to the theory of spin glasses and use this theory to explain the distribution of critical points on the loss surface; [Pennington and Bahri, 2017] analyze the loss surface using random matrix theory; [Poole et al., 2016, Schoenholz et al., 2016] analyze propagation of information in a deep network assuming Gaussian signal distributions in wide layers.

In the present paper we apply the large-width limit to describe the collective evolution of weights under the standard gradient descent used to train the network. Weight optimization is currently a topic of active theoretical research. While local convergence of gradient descent is generally well understood ([Nesterov, 2013]), its global and statistical properties are hard to analyze due to the complex nonconvex shape of the loss surface. There are some special scenarios where the absence of spurious local minima has been proved, in particular in deep linear networks ([Kawaguchi, 2016, Laurent and von Brecht, 2017]) or for pyramidal networks trained on a small training set ([Nguyen and Hein, 2017]). But in general, the loss surface is known to have many local minima or saddle points trapping or delaying optimization, see e.g. [Safran and Shamir, 2017, Dauphin et al., 2014]. Similarly, gradient descent is known to be analytically tractable in some cases, e.g. in linear networks ([Saxe et al., 2013, Bartlett et al., 2018]) or in ReLU networks under certain assumptions on the distribution of the inputs ([Tian, 2017]), but in general, current studies of gradient descent in large networks tend to postulate its replacement by an "effective" macroscopic model ([Shwartz-Ziv and Tishby, 2017, Chaudhari et al., 2018]).

The goal of the present paper is to derive a general macroscopic description of the gradient descent directly from the underlying finite model, without ad hoc assumptions. We do this in section 2.2, obtaining a nonlinear integro-differential transport equation. In sections 2.3 and 2.4 we analyze its general properties and characterize its global minima and stationary points. Interestingly, an important quantity for the transport equation is the quadratic mean-field version of the loss function: this loss does not increase under the evolution. However, the generator of the transport equation is not the formal gradient of this loss – in particular, this is why the macroscopic dynamics does have stationary points that are not global minima. Next, in sections 2.5–2.7 we analyze three scenarios where the transport equation can be solved perturbatively (near a global minimizer, for small output weights, and for localized Gaussian distributions). Finally, in section 3 we verify our general method on a simple one-dimensional ReLU network (essentially describing linear splines). We compare the theoretical predictions involving stationary points, their stability, and the convergence rate of the dynamics with experiment and find good agreement between the two.

This short paper is written at a “physics level of rigor”. We strive to expose the main ideas and do not attempt to fill in all mathematical details.

## 2 General theory

### 2.1 A “generalized shallow network” model

We consider the problem of approximating a "ground truth" map $f: X \to \mathbb{R}$. The nature of the set $X$ will not be important for the present exposition. We consider approximation by the "generalized shallow network" model:

$$\hat f(W,x)=\frac{1}{N}\sum_{n=1}^N \phi(w_n,x),\qquad w_n\in\mathbb{R}^d,\ x\in X, \tag{1}$$

where $\phi$ is some function of the weights and the input, $w_n$ is the weight vector for the $n$'th term, and $W=(w_n)_{n=1}^N$ is the full weight vector. The usual neural network with a single hidden layer is obtained when $\phi(w,x)=c\,\sigma(a\cdot x+b)$ and $w=(c,a,b)$ with some nonlinear activation function $\sigma$.¹

¹The standard network model also includes a constant term in the output layer, but we will omit it for simplicity of the exposition.

We consider the usual quadratic loss function

$$L(W)=\frac{1}{2}\int_X\big(\hat f(W,x)-f(x)\big)^2\,d\mu(x), \tag{2}$$

where $\mu$ is some measure on $X$. Again, we don't assume anything specific about $\mu$: for example, it can consist of finitely many atoms (a "finite training set" scenario) or have a density w.r.t. the Lebesgue measure (assuming $X$ is a subset of a Euclidean space; a "population average" scenario).

We will study the standard gradient descent dynamics of the weights:

$$\frac{d}{dt}W=G,\qquad G=-\alpha\nabla_W L, \tag{3}$$

where $\alpha$ is the learning rate and $\nabla_W$ is the gradient w.r.t. $W$.
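As a concrete illustration of the dynamics (3), the following minimal sketch (our own toy code, not from the paper) trains the model (1) with an assumed unit $\phi(w,x)=c\tanh(ax+b)$, $w=(c,a,b)$, on a toy target by explicit gradient descent and checks that the loss (2) decreases:

```python
# Toy simulation (ours) of gradient descent (3) on the model (1) with
# phi(w,x) = c*tanh(a*x + b) and quadratic loss (2) over a finite training set.
import numpy as np

rng = np.random.default_rng(0)
N = 100                          # width
xs = np.linspace(-1, 1, 40)      # training inputs (mu = uniform atoms)
f = np.sin(2 * xs)               # assumed toy ground truth

c = rng.normal(0, 1, N)
a = rng.normal(0, 1, N)
b = rng.normal(0, 1, N)

def predict(c, a, b):
    # f_hat(W,x) = (1/N) sum_n c_n tanh(a_n x + b_n)
    return np.tanh(np.outer(xs, a) + b) @ c / N

def loss(c, a, b):
    return 0.5 * np.mean((predict(c, a, b) - f) ** 2)

alpha = N          # learning rate rescaled as alpha = N (mean-field scaling)
dt = 1e-2          # time step of the discretized flow
L0 = loss(c, a, b)
for _ in range(300):
    T = np.tanh(np.outer(xs, a) + b)          # (n_samples, N)
    err = (T @ c / N - f)[:, None]            # residual f_hat - f
    sech2 = 1 - T**2
    gc = np.mean(err * T, axis=0) / N                    # dL/dc
    ga = np.mean(err * c * sech2 * xs[:, None], axis=0) / N  # dL/da
    gb = np.mean(err * c * sech2, axis=0) / N            # dL/db
    c -= alpha * dt * gc
    a -= alpha * dt * ga
    b -= alpha * dt * gb
print(L0, loss(c, a, b))  # the loss decreases under gradient descent
```

The $1/N$ in the gradients comes from the $1/N$ normalization in (1); the rescaling $\alpha=N$, used below in the mean-field limit, keeps the per-weight velocities of order one.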

### 2.2 Derivation of the transport equation

We are interested in the collective behavior of the weights under gradient descent in the "wide network" limit $N\to\infty$. It is convenient to view the weight vector $W$ as a stochastic object described by a distribution with a density function $P(W,t)$ such that $\int P(W,t)\,dW=1$. The common practice in network training (see e.g. [Glorot and Bengio, 2010]) is to initialize the weights randomly and independently with some density $p_0$, i.e. $P(W,0)=\prod_{n=1}^N p_0(w_n)$. Our immediate goal will be to derive, under suitable assumptions, the equation governing the evolution of the factors.

First we replace the system (3) by an equivalent statement about densities. We can view the vector field $G$ in Eq.(3) as a "probability flux". Then, the evolution of $P$ can be described by the continuity equation expressing local probability conservation:

$$\frac{1}{P}\frac{d}{dt}P=-\nabla_W\cdot G=-\nabla_W\cdot(-\alpha\nabla_W L)=\alpha\Delta_W L, \tag{4}$$

where $\Delta_W$ is the Laplacian and $\frac{d}{dt}$ denotes the material derivative:

$$\frac{d}{dt}P=\frac{\partial}{\partial t}P+\frac{dW}{dt}\cdot\nabla_W P$$

(see Appendix A.1 for details).

Expanding the continuity equation, we get

$$\frac{1}{P}\frac{dP}{dt}(W)=\alpha\Delta L(W)=\alpha\sum_{n=1}^N\Delta_{w_n}L(W)=\alpha\int_X\Big[\big(\hat f(W,x)-f(x)\big)\frac{1}{N}\sum_{n=1}^N\Delta_w\phi(w_n,x)+\frac{1}{N^2}\sum_{n=1}^N|\nabla_w\phi(w_n,x)|^2\Big]d\mu(x). \tag{5}$$

We look for factorized solutions $P(W,t)=\prod_{n=1}^N p(w_n,t)$ with some density $p$. However, it is clear that we cannot fulfill this factorization exactly because of the interaction of the different weights $w_n$ on the r.h.s. To decouple them, we perform the "mean field" (or "law of large numbers") approximation:

$$\hat f(W,x)=\frac{1}{N}\sum_{n=1}^N\phi(w_n,x)\approx\mathbb{E}_{w\sim p}\,\phi(w,x)=\int p(w,t)\phi(w,x)\,dw. \tag{6}$$

This approximation corresponds to the limit $N\to\infty$. To obtain a finite non-vanishing r.h.s. in Eq.(5), we rescale the learning rate by setting $\alpha=N$. We discard the second term on the r.h.s. of (5), since its coefficient $\frac{1}{N^2}$ makes it asymptotically small compared to the first term. After all this, we obtain the equation

$$\frac{1}{P}\frac{dP}{dt}(W)=\int_X\Big(\int p(w',t)\phi(w',x)\,dw'-f(x)\Big)\sum_{n=1}^N\Delta_w\phi(w_n,x)\,d\mu(x),$$

admitting a factorized solution. Namely, using the product form of $P$, unwrapping the material derivative and equating same-$n$ terms, we get

$$\frac{1}{p(w_n,t)}\Big(\frac{\partial}{\partial t}p(w_n,t)+\frac{dw_n}{dt}\cdot\nabla_w p(w_n,t)\Big)=\int_X\Big(\int p(w',t)\phi(w',x)\,dw'-f(x)\Big)\Delta_w\phi(w_n,x)\,d\mu(x). \tag{7}$$

Using the mean-field approximation (6), we can write

$$\frac{dw_n}{dt}=-N\nabla_{w_n}L=-\int_X\big(\hat f(W,x)-f(x)\big)\nabla_w\phi(w_n,x)\,d\mu(x).$$

Making this replacement for $\frac{dw_n}{dt}$ in Eq.(7), dropping the index $n$ and rearranging the terms, we obtain the desired transport equation for $p(w,t)$:

$$\frac{\partial}{\partial t}p(w,t)=\int_X\Big(\int p(w',t)\phi(w',x)\,dw'-f(x)\Big)\,\nabla_w\cdot\big(p(w,t)\nabla_w\phi(w,x)\big)\,d\mu(x). \tag{8}$$
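The mean-field replacement (6) used in this derivation is just the law of large numbers, which is easy to check numerically. In this sketch (an assumed toy example of ours, not from the paper) the weights are $w_n\sim\mathrm{Uniform}(0,1)$ and $\phi(w,x)=(x-w)_+$, so $\mathbb{E}_w\,\phi(w,x)=x^2/2$ for $x\in[0,1]$:

```python
# Law-of-large-numbers check of the mean-field approximation (6):
# the width average (1/N) sum_n phi(w_n,x) approaches E phi(w,x) = x^2/2.
import numpy as np

rng = np.random.default_rng(1)
x = 0.7
errs = []
for N in (100, 10_000, 1_000_000):
    w = rng.uniform(0, 1, N)                 # i.i.d. weights
    f_hat = np.mean(np.maximum(x - w, 0.0))  # (1/N) sum_n phi(w_n, x)
    errs.append(abs(f_hat - x**2 / 2))       # deviation from the integral in (6)
    print(N, errs[-1])                       # error shrinks roughly like N**-0.5
```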

### 2.3 Properties of the transport equation

The derived equation (8) is a nonlinear integro-differential equation in $p$, and we expect it to approximately describe the evolution of the distribution of the weights $w_n$. In agreement with this interpretation, the equation (8) preserves the total probability ($\int p(w,t)\,dw=1$) – this follows from the gradient form of the r.h.s. of Eq.(8) (assuming the function $p$ falls off sufficiently fast as $|w|\to\infty$, so that the boundary term can be omitted when integrating by parts). We also expect that, under suitable regularity assumptions, the equation preserves the nonnegativity $p\ge 0$ of the initial condition. In the sequel, our treatment of this equation will be rather heuristic; in particular, we will not distinguish between regular and weak (distributional) solutions.

It is helpful to consider Eq.(8) as a pair of an integral and a differential equation:

$$u(w,t)=\int_X\Big(\int p(w',t)\phi(w',x)\,dw'-f(x)\Big)\phi(w,x)\,d\mu(x), \tag{9}$$

$$\frac{\partial}{\partial t}p(w,t)=\nabla_w\cdot\big(p(w,t)\nabla_w u(w,t)\big). \tag{10}$$

The quantity $u(w,t)$ reflects the correlation of the current approximation error, as a function of $x$, with the function $\phi(w,\cdot)$. If $u$ is known for all $t$, then equation (10), being first-order in $p$, can be solved by the method of characteristics. Specifically, let $w(t)$ be some trajectory satisfying the equation $\frac{dw}{dt}=-\nabla_w u(w,t)$. On this trajectory, by Eq.(10),

$$\frac{d}{dt}p(w(t),t)=p(w(t),t)\,\Delta_w u(w(t),t),$$

which has the solution

$$p(w(t),t)=p(w(t_0),t_0)\,e^{\int_{t_0}^t\Delta_w u(w(\tau),\tau)\,d\tau}.$$

This shows that, geometrically, $\nabla_w u$ determines the transport direction in the $w$-space, while $\Delta_w u$ determines the infinitesimal change of the value of $p$.
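As a tiny worked example of the characteristics formula (our illustration, with an assumed frozen potential), take one dimension and a time-independent $u(w)=w^2/2$: then $\frac{dw}{dt}=-w$, $\Delta_w u=1$, and the exact solution of (10) is $p(w,t)=e^t p_0(we^t)$, which matches the value transported along a characteristic:

```python
# Method-of-characteristics check for Eq.(10) with frozen u(w) = w^2/2 (1D):
# characteristics satisfy dw/dt = -w, and p grows along them by exp(t).
import numpy as np

p0 = lambda w: np.exp(-w**2 / 2) / np.sqrt(2 * np.pi)  # initial density p(.,0)
t, w0 = 0.8, 1.3
w_t = w0 * np.exp(-t)                       # transported point of the characteristic
p_char = p0(w0) * np.exp(t)                 # value via exp(int Delta_w u dtau) = e^t
p_exact = np.exp(t) * p0(w_t * np.exp(t))   # exact solution p(w,t) = e^t p0(w e^t)
print(abs(p_char - p_exact))                # ~0
```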

### 2.4 The mean-field loss and stationary points

We define the mean-field loss function on distributions $p$ by plugging the approximation (6) into the original loss formula (2):

$$L_{\mathrm{mf}}(p)=\frac{1}{2}\int_X\Big(\int p(w)\phi(w,x)\,dw-f(x)\Big)^2 d\mu(x). \tag{11}$$

If $p(w,t)$ is a solution of the transport equation (8), then, by a computation involving integration by parts,

$$\frac{d}{dt}L_{\mathrm{mf}}(p(\cdot,t))=-\int p(w,t)\,|\nabla_w u(w,t)|^2\,dw, \tag{12}$$

where $u$ is defined by Eq.(9) (see Appendix A.2). In particular, the mean-field loss does not increase along any nonnegative solution $p$:

$$\frac{d}{dt}L_{\mathrm{mf}}(p(\cdot,t))\le 0. \tag{13}$$

This property is inherited from the original gradient descent (3). Note, however, that the transport equation (8) is not a (formal) gradient descent performed on $p$ w.r.t. $L_{\mathrm{mf}}$ – this latter would be given by $\frac{\partial}{\partial t}p=-\frac{\delta L_{\mathrm{mf}}}{\delta p}=-u$ and is quite different from Eq.(8). (In fact, in contrast to Eq.(8), the equation $\frac{\partial}{\partial t}p=-u$ does not seem to generally admit reasonable solutions – even in the simplest cases solutions may blow up in infinitesimal time.)

Accordingly, the intuition behind the connection of the transport equation (8) to the mean-field loss is different from the naïve intuition of gradient descent. Note that $L_{\mathrm{mf}}$ is a convex quadratic functional of $p$, and the set of all nonnegative distributions $p$ is convex. The intuition of finite-dimensional gradient descent then suggests that the solution should invariably converge to a global minimum of $L_{\mathrm{mf}}$. However, this is not the case with the dynamics (8), which can easily get trapped at stationary points. The meaning of the stationary points is clarified by the following proposition.
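The monotonicity (13) can be observed directly in a crude numerical solve of Eqs.(9)-(10). The discretization below is our own toy setup (a one-parameter family $\phi(h,x)=(x-h)_+$ with the output weight frozen at 1, and an assumed ground truth $f(x)=x^2$), not code from the paper:

```python
# Toy finite-difference solve (ours) of the transport pair (9)-(10) for
# phi(h,x) = (x-h)_+, mu = Lebesgue on [0,1]; we track the loss (11).
import numpy as np

nx, nw = 200, 101
xs = np.linspace(0, 1, nx)
dx = xs[1] - xs[0]
ws = np.linspace(-0.5, 1.5, nw)
dw = ws[1] - ws[0]
f = xs**2                                          # assumed ground truth
phi = np.maximum(xs[:, None] - ws[None, :], 0.0)   # phi(h, x) on the grid

p = np.exp(-((ws - 0.5) / 0.2) ** 2)
p /= p.sum() * dw                                  # initial density, mass 1

def loss(p):
    fmf = phi @ p * dw                             # mean-field prediction
    return 0.5 * np.sum((fmf - f) ** 2) * dx       # Eq.(11), discretized

dt = 5e-3
losses = [loss(p)]
for _ in range(200):
    fmf = phi @ p * dw
    u = ((fmf - f) @ phi) * dx                     # Eq.(9)
    flux = p * np.gradient(u, dw)                  # p * du/dw
    p = p + dt * np.gradient(flux, dw)             # Eq.(10), explicit Euler
    losses.append(loss(p))
print(losses[0], losses[-1])  # the loss decreases, in line with Eq.(13)
```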

###### Proposition 1.

Assuming $p(w,t)$ is a solution of Eq.(8), the following conditions are equivalent at any particular $t$:

1. $\frac{d}{dt}L_{\mathrm{mf}}(p(\cdot,t))=0$;

2. $\nabla_w u(w,t)=0$ at all $w$ where $p(w,t)>0$;

3. $\frac{\partial}{\partial t}p(w,t)=0$ for all $w$.

###### Proof.

The equivalence 1) $\Leftrightarrow$ 2) follows from Eq.(12). The implication 3) $\Rightarrow$ 1) is trivial. It remains to establish 2) $\Rightarrow$ 3). Since $p\ge 0$, condition 2) implies that $p(w,t)\nabla_w u(w,t)=0$ for all $w$. Then, by Eq.(10), $\frac{\partial}{\partial t}p(w,t)=0$ for all $w$. ∎

Thus, a stationary distribution $p$ can be characterized by any of the three conditions of this proposition. Given the monotonicity (13), the proposition suggests that, under reasonable regularity assumptions, any solution of the transport equation (8) eventually converges to such a stationary distribution.

If the family of maps $\{\phi(w,\cdot)\}_w$ is sufficiently rich, then we expect a stationary distribution either to be a global minimizer of $L_{\mathrm{mf}}$ or to be concentrated on some proper subset of the $w$-space $\mathbb{R}^d$. We state one particular proposition demonstrating this in the important case of models with linear output. We say that an approximation (1) is a model with linear output if $\phi(w,x)=w_0\tilde\phi(\tilde w,x)$, where $w=(w_0,\tilde w)$ and $\tilde\phi$ is some map. For example, the standard shallow neural network is a model with linear output. For the proposition below, we assume that the functions $\tilde\phi(\tilde w,\cdot)$ and the error function $\int p(w')\phi(w',\cdot)\,dw'-f$ are square-integrable w.r.t. the measure $\mu$. Recall that a set of vectors in a Hilbert space is called total if their finite linear combinations are dense in this space.

###### Proposition 2.

Let $p$ be a stationary distribution supported on a subset $A$ of the $w$-space. Suppose that the functions $\{\tilde\phi(\tilde w,\cdot):(w_0,\tilde w)\in A\}$ are total in $L^2(X,\mu)$. Then $L_{\mathrm{mf}}(p)=0$.

###### Proof.

By the second stationarity condition of proposition 1, we have $\nabla_w u(w)=0$ for all $w\in A$. In particular, taking the partial derivative w.r.t. $w_0$,

$$\int_X\Big(\int p(w')\phi(w',x)\,dw'-f(x)\Big)\tilde\phi(\tilde w,x)\,d\mu(x)=0$$

for all $(w_0,\tilde w)\in A$. Since the functions $\tilde\phi(\tilde w,\cdot)$ are total, $\int p(w')\phi(w',x)\,dw'-f(x)=0$ almost everywhere w.r.t. $\mu$, and hence $L_{\mathrm{mf}}(p)=0$. ∎

### 2.5 Linearization near a global minimizer

Suppose that the solution $p(\cdot,t)$ of the transport equation converges to a limiting distribution $p_\infty$ such that $L_{\mathrm{mf}}(p_\infty)=0$, i.e. $\int p_\infty(w)\phi(w,x)\,dw=f(x)$ ($\mu$-a.e.). Consider the positive semidefinite kernel

$$K(w,w')=\int_X\phi(w,x)\phi(w',x)\,d\mu(x)$$

and the corresponding integral operator

$$Kp(w)=\int K(w,w')p(w')\,dw'.$$

Note that the operator $K$ determines the quadratic part of the mean-field loss:

$$L_{\mathrm{mf}}(p_\infty+\delta p)=\frac{1}{2}\langle K\delta p,\delta p\rangle, \tag{14}$$

where $\langle a,b\rangle=\int a(w)b(w)\,dw$.

Let us represent the solution in the form $p(\cdot,t)=p_\infty+\delta p(\cdot,t)$ with a small $\delta p$. Plugging this into Eq.(8) and keeping only terms linear in $\delta p$, we obtain

$$\frac{\partial}{\partial t}\delta p(w,t)\approx\int_X\Big(\int\delta p(w',t)\phi(w',x)\,dw'\Big)\,\nabla_w\cdot\big(p_\infty(w)\nabla_w\phi(w,x)\big)\,d\mu(x)=\nabla\cdot(p_\infty\nabla)K\delta p(w,t)=R\,\delta p(w,t), \tag{15}$$

where $R$ is the integral operator with the kernel

$$R(w,w')=\nabla_w\cdot\big(p_\infty(w)\nabla_w\big)K(w,w'). \tag{16}$$

Under this linearization, the solution to the transport equation can be written as

$$\delta p(\cdot,t)\approx e^{tR}\delta p(\cdot,0). \tag{17}$$

The operator $R$ is symmetric and negative semi-definite w.r.t. the semi-definite scalar product $\langle K\cdot,\cdot\rangle$ associated with the kernel $K$:

$$\langle Kp,Rq\rangle=\langle KRp,q\rangle=-\langle p_\infty\nabla Kp,\nabla Kq\rangle.$$

This suggests that we can use the spectral theory of self-adjoint operators to analyze the large-$t$ evolution of $\delta p$. A caveat here is that the scalar product $\langle K\cdot,\cdot\rangle$ is not strictly positive definite, in general. This issue can be addressed by considering the quotient space $H/H_0$, where $H$ is the Hilbert space associated with the semi-definite scalar product $\langle K\cdot,\cdot\rangle$, and $H_0=\{p\in H:\langle Kp,p\rangle=0\}$. On this quotient space the scalar product is non-degenerate, and $R$ descends from $H$ to $H/H_0$ since the subspace $H_0$ lies in the null-space of $R$.

Now, the self-adjoint operator $R$ has an orthogonal spectral decomposition in $H/H_0$. In particular, suppose that $R$ has a discrete spectrum of eigenvalues $\lambda_k$ and $\langle K\cdot,\cdot\rangle$-orthogonal eigenvectors. For each $k$, the corresponding equivalence class has a unique representative $p_k$ such that $Rp_k=\lambda_k p_k$ in $H$. On the other hand, $R$ vanishes on $H_0$. We can view the space $H$ as a direct sum $H_0\oplus\mathrm{span}\{p_k\}$, so this gives us the full eigendecomposition of the space $H$. Recall that we have assumed that $p(\cdot,t)\to p_\infty$ as $t\to\infty$. By Eq.(17), this means that the eigendecomposition of the vector $\delta p(\cdot,0)$ can include only eigenvectors with strictly negative eigenvalues, and in particular it does not include an $H_0$-component. We can then write, by Eq.(17) and the eigenvector expansion,

$$\delta p(\cdot,t)\approx\sum_k\frac{\langle K\delta p(\cdot,0),p_k\rangle}{\langle Kp_k,p_k\rangle}\,e^{\lambda_k t}\,p_k(\cdot)$$

and, by Eq.(14),

$$L_{\mathrm{mf}}(p(\cdot,t))\approx\frac{1}{2}\sum_k\frac{\langle K\delta p(\cdot,0),p_k\rangle^2}{\langle Kp_k,p_k\rangle}\,e^{2\lambda_k t}. \tag{18}$$

We see, in particular, that the large-$t$ asymptotic of the loss is determined by the distribution of the eigenvalues of $R$ and, for a particular initial condition $\delta p(\cdot,0)$, by the coefficients of its eigendecomposition.
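The expansion (18) can be checked in a finite-dimensional caricature of the linearized dynamics. Everything below is an assumed discretization of ours: a grid in $w$, a ridged kernel matrix $K$ built from random smooth features, and a finite-difference version of $\nabla\cdot(p_\infty\nabla)$:

```python
# Finite-dimensional sketch (ours) of the linearization (15)-(18): we build
# K and a discrete div(p_inf grad) operator L_p, set R = L_p K, and check
# that the loss along delta_p(t) = e^{tR} delta_p(0) matches Eq.(18).
import numpy as np
from scipy.linalg import eigh, expm

rng = np.random.default_rng(2)
n, m = 25, 80                                    # grid points, feature count
w = np.linspace(-1, 1, n)
dx = w[1] - w[0]
Phi = np.cos(np.outer(w, rng.uniform(0, 6, m)))  # assumed features phi(w_i, x_j)
K = Phi @ Phi.T / m + 0.05 * np.eye(n)           # PD kernel matrix (small ridge)
p_inf = np.exp(-w**2)                            # assumed limiting density > 0

# discrete div(p_inf grad) with Dirichlet boundary: symmetric, neg. semidef.
ph = 0.5 * (p_inf[:-1] + p_inf[1:])              # p_inf at half-grid points
Lp = np.zeros((n, n))
for i in range(n):
    if i > 0:
        Lp[i, i - 1] = ph[i - 1] / dx**2
        Lp[i, i] -= ph[i - 1] / dx**2
    if i < n - 1:
        Lp[i, i + 1] = ph[i] / dx**2
        Lp[i, i] -= ph[i] / dx**2

R = Lp @ K                                       # generator in Eq.(15)
lam, V = eigh(K @ Lp @ K, K)                     # K-orthonormal eigenvectors of R
dp0 = rng.normal(0, 1, n)                        # initial perturbation
t = 0.05
dpt = expm(t * R) @ dp0                          # Eq.(17), computed directly
loss_direct = 0.5 * dpt @ K @ dpt                # Eq.(14)
coef = V.T @ K @ dp0                             # <K dp0, p_k>, with <K p_k,p_k>=1
loss_eigen = 0.5 * np.sum(coef**2 * np.exp(2 * lam * t))  # Eq.(18)
print(abs(loss_direct - loss_eigen))             # ~0; all lam are negative
```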

For overparametrized models, we expect the subspace $H_0$ to be nontrivial and possibly even highly degenerate. Assuming that for some ground truth $f$ the minimum loss $L_{\mathrm{mf}}=0$ can be achieved, we can ask how to find the actual limiting distribution $p_\infty$ in the space of all minimizers of $L_{\mathrm{mf}}$. In the linearized setting described above, this should be done using the already mentioned condition that the expansion of $\delta p(\cdot,0)$ over the eigenvectors of $R$ does not contain the $H_0$-component. In Appendix A.3, we illustrate this observation with a simple example involving a single-element set $X$.

### 2.6 Models with linear output

We describe now another solvable approximation that holds for models with linear output at small values of the linear parameter. Recall that we defined such models as those where $\phi(w,x)=w_0\tilde\phi(\tilde w,x)$ with some map $\tilde\phi$ and $w=(w_0,\tilde w)$. Observe that for such models, the gradient factor in the transport equation (8) can be written as a sum of two terms:

$$\nabla_w\cdot(p\nabla_w\phi)=\frac{\partial p}{\partial w_0}\tilde\phi+w_0\nabla_{\tilde w}\cdot(p\nabla_{\tilde w}\tilde\phi).$$

For small $w_0$ the second term is small and can be dropped. Then, the transport equation simplifies to

$$\frac{\partial}{\partial t}p(w_0,\tilde w,t)=\tilde u(\tilde w,t)\frac{\partial}{\partial w_0}p(w_0,\tilde w,t),$$

where

$$\tilde u(\tilde w,t)=\int_X\Big(\int\!\!\int p(w_0',\tilde w',t)\,w_0'\,\tilde\phi(\tilde w',x)\,dw_0'\,d\tilde w'-f(x)\Big)\tilde\phi(\tilde w,x)\,d\mu(x). \tag{19}$$

It follows that the distribution $p$ evolves by "shifting along $w_0$", separately at each $\tilde w$:

$$p(w_0,\tilde w,t)=p(w_0+s(\tilde w,t),\tilde w,t_0), \tag{20}$$

where the function $s$ satisfies the equation

$$\frac{\partial}{\partial t}s(\tilde w,t)=\tilde u(\tilde w,t) \tag{21}$$

with the initial condition $s(\tilde w,t_0)=0$. In particular, for each $\tilde w$, the integral $\int p(w_0,\tilde w,t)\,dw_0$ – the marginal of $p$ w.r.t. $w_0$ – does not depend on $t$. We will denote this marginal by $\hat p(\tilde w)$, and by $\hat p$ we will also denote the operator of multiplication by $\hat p(\tilde w)$, i.e. $(\hat p\,a)(\tilde w)=\hat p(\tilde w)a(\tilde w)$.

It is convenient to introduce the linear operators $\tilde\Phi,\tilde\Phi^*$:

$$\tilde\Phi a(x)=\int\tilde\phi(\tilde w,x)a(\tilde w)\,d\tilde w,\qquad \tilde\Phi^*b(\tilde w)=\int\tilde\phi(\tilde w,x)b(x)\,d\mu(x).$$

These operators are mutually adjoint w.r.t. the scalar products $\langle a_1,a_2\rangle=\int a_1(\tilde w)a_2(\tilde w)\,d\tilde w$ and $\langle b_1,b_2\rangle_\mu=\int b_1(x)b_2(x)\,d\mu(x)$, namely $\langle\tilde\Phi a,b\rangle_\mu=\langle a,\tilde\Phi^*b\rangle$.

Plugging Eq.(20) into Eq.(19) and performing a change of variables, we obtain

$$\tilde u(\tilde w,t)=\tilde u(\tilde w,0)-\big(\tilde\Phi^*\tilde\Phi\hat p\,s(\cdot,t)\big)(\tilde w). \tag{22}$$

Denote, for brevity, $\tilde u_0=\tilde u(\cdot,0)$ and $s(t)=s(\cdot,t)$. Combining Eqs.(22) and (21), we obtain a linear first-order equation for $s$:

$$\frac{d}{dt}s=\tilde u_0-\tilde\Phi^*\tilde\Phi\hat p\,s. \tag{23}$$

The operator $\tilde\Phi^*\tilde\Phi\hat p$ is self-adjoint and positive semidefinite with respect to the scalar product $\langle\hat p\,\cdot,\cdot\rangle$:

$$\langle\hat p a_1,\tilde\Phi^*\tilde\Phi\hat p a_2\rangle=\langle\tilde\Phi\hat p a_1,\tilde\Phi\hat p a_2\rangle_\mu=\langle\hat p\,\tilde\Phi^*\tilde\Phi\hat p a_1,a_2\rangle.$$

Assuming $\tilde u_0$ belongs to the Hilbert space associated with this scalar product, we can write the solution to Eq.(23) as

$$s(t)=tP_0\tilde u_0+\big(1-e^{-t\tilde\Phi^*\tilde\Phi\hat p}\big)\big(\tilde\Phi^*\tilde\Phi\hat p\big)^{-1}P_{>0}\tilde u_0, \tag{24}$$

where $P_0,P_{>0}$ are the complementary orthogonal projectors to the nullspace and to the strictly positive subspace of $\tilde\Phi^*\tilde\Phi\hat p$, respectively.
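The closed form (24) can be verified on a finite-dimensional toy version of Eq.(23), with $\tilde\Phi$ and $\hat p$ replaced by an assumed random matrix and a positive diagonal matrix; we symmetrize $A=\tilde\Phi^*\tilde\Phi\hat p$ by conjugating with $\hat p^{1/2}$ and check the ODE residual with a finite difference:

```python
# Toy check (ours) that the closed form (24) solves ds/dt = u0 - A s, Eq.(23),
# where A = Phi^T Phi diag(p^) is PSD w.r.t. the p^-weighted scalar product.
import numpy as np

rng = np.random.default_rng(3)
n, m = 8, 5
Phi = rng.normal(0, 1, (m, n))        # assumed stand-in for the operator Phi~
pd = rng.uniform(0.5, 1.5, n)         # assumed marginal p^ on a grid
A = Phi.T @ Phi @ np.diag(pd)         # rank <= 5, so the nullspace is nontrivial
u0 = rng.normal(0, 1, n)

sq = np.sqrt(pd)
B = (Phi * sq).T @ (Phi * sq)         # p^{1/2} Phi^T Phi p^{1/2}: symmetric PSD
lam, Q = np.linalg.eigh(B)            # A = diag(1/sq) B diag(sq) is similar to B

def s(t):
    # Eq.(24): s(t) = t P0 u0 + (1 - e^{-tA}) A^{-1} P_{>0} u0
    g = np.where(lam > 1e-10,
                 (1 - np.exp(-t * lam)) / np.where(lam > 1e-10, lam, 1.0),
                 t)
    return (Q / sq[:, None]) @ (g * (Q.T @ (sq * u0)))

t, eps = 2.0, 1e-5
resid = (s(t + eps) - s(t - eps)) / (2 * eps) - (u0 - A @ s(t))
print(np.max(np.abs(resid)))          # ~0: (24) satisfies Eq.(23)
```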

Let $\tilde u_0=\tilde\Phi^*z$, where

$$z(x)=\int\!\!\int p(w_0',\tilde w',0)\,w_0'\,\tilde\phi(\tilde w',x)\,dw_0'\,d\tilde w'-f(x).$$

Using the identity $\hat f_{\mathrm{mf}}(\cdot,t)-f=z-\tilde\Phi\hat p\,s(t)$ and Eq.(24), the loss function can then be written as

$$L_{\mathrm{mf}}(p(t))=\frac{1}{2}\big\|z-\tilde\Phi\hat p\,s(t)\big\|_\mu^2=\frac{1}{2}\big\|\big(1-\tilde\Phi\hat p(\tilde\Phi^*\tilde\Phi\hat p)^{-1}P_{>0}\tilde\Phi^*\big)z\big\|_\mu^2+\frac{1}{2}\big\langle\tilde\Phi^*z,\;\hat p\,e^{-2t\tilde\Phi^*\tilde\Phi\hat p}\big(\tilde\Phi^*\tilde\Phi\hat p\big)^{-1}P_{>0}\tilde\Phi^*z\big\rangle.$$

The first, constant term on the r.h.s. is half the squared norm of the component of $z$ orthogonal to the range of the operator $\tilde\Phi\hat p$. The second term converges to 0 as $t\to\infty$, so the limit of the loss function is given by the first term. Suppose that the first term vanishes, and suppose that the positive semidefinite operator $\tilde\Phi^*\tilde\Phi\hat p$ has a pure point spectrum with eigenvalues $\zeta_k$ and eigenvectors $s_k$. Then Eq.(24) can be expanded as

$$s(t)=tP_0\tilde\Phi^*z+\sum_{k:\zeta_k>0}\frac{1-e^{-t\zeta_k}}{\zeta_k}\frac{\langle\hat p s_k,\tilde\Phi^*z\rangle}{\langle\hat p s_k,s_k\rangle}\,s_k \tag{25}$$

and the loss function as

$$L_{\mathrm{mf}}(p(t))=\frac{1}{2}\sum_{k:\zeta_k>0}\frac{\langle\hat p s_k,\tilde\Phi^*z\rangle^2}{\langle\hat p s_k,s_k\rangle}\frac{e^{-2t\zeta_k}}{\zeta_k}. \tag{26}$$

Thus, the large-$t$ evolution of the loss is determined by the eigenvalues of $\tilde\Phi^*\tilde\Phi\hat p$ and, for a particular $z$, by the eigendecomposition of $\tilde\Phi^*z$.

### 2.7 Localized Gaussian approximation

Suppose that, for each $t$, the distribution $p(\cdot,t)$ is approximately Gaussian with a center $b(t)$ and a small covariance matrix $A(t)$:

$$p(w,t)\approx(2\pi)^{-d/2}(\det A)^{-1/2}\exp\Big(-\tfrac{1}{2}(w-b)^TA^{-1}(w-b)\Big). \tag{27}$$

Plugging this ansatz into the transport equation and keeping only the leading terms in $A$, we obtain (see Appendix A.4):

$$\frac{db}{dt}\approx-\int_X\big(\phi(b,x)-f(x)\big)\nabla_w\phi(b,x)\,d\mu(x) \tag{28}$$

and

$$\frac{dA}{dt}\approx HA+AH, \tag{29}$$

where

$$H=-\int_X\big(\phi(b,x)-f(x)\big)D_w\phi(b,x)\,d\mu(x)$$

and $D_w$ denotes the Hessian w.r.t. $w$. Not surprisingly, Eq.(28) coincides with the gradient descent equation for the original model (1) with a single ($N=1$) weight vector. Now consider Eq.(29). If the matrix $H$ is diagonalized, with eigenvalues $\lambda_k$, then Eq.(29) is also diagonalized in the basis of matrix elements $A_{km}$:

$$\frac{dA_{km}}{dt}=(\lambda_k+\lambda_m)A_{km}.$$

In particular, our "pointlike" Gaussian solution is unstable (expansive) iff $H$ has positive eigenvalues.
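For a frozen symmetric $H$, Eq.(29) is solved by $A(t)=e^{tH}A(0)e^{tH}$, which makes the expansion along positive eigendirections of $H$ explicit. A quick numerical check of this solution (our sketch, with a random $H$):

```python
# Check (ours) that A(t) = e^{tH} A(0) e^{tH} solves Eq.(29), dA/dt = HA + AH,
# for a frozen symmetric H; A then grows along positive eigendirections of H.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
H = rng.normal(0, 1, (3, 3)); H = (H + H.T) / 2    # symmetric H as in (29)
A0 = rng.normal(0, 1, (3, 3)); A0 = A0 @ A0.T      # initial covariance (PSD)
A = lambda t: expm(t * H) @ A0 @ expm(t * H)       # candidate solution
t, eps = 0.5, 1e-6
dA = (A(t + eps) - A(t - eps)) / (2 * eps)         # numerical dA/dt
print(np.max(np.abs(dA - (H @ A(t) + A(t) @ H))))  # ~0
```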

Consider now the special case of a model with linear output, $\phi(w,x)=w_0\tilde\phi(\tilde w,x)$, and a distribution close to a "pointlike stationary distribution" centered at $b=(b_0,\tilde b)$, so that $\nabla_w u(b)=0$. We write the Hessian $D_w\phi$ and accordingly $H$ in the block form:

$$D_w\phi=\begin{pmatrix}D_{w_0w_0}\phi&D_{w_0\tilde w}\phi\\D_{w_0\tilde w}^T\phi&D_{\tilde w\tilde w}\phi\end{pmatrix},\qquad H=\begin{pmatrix}H_{w_0w_0}&H_{w_0\tilde w}\\H_{w_0\tilde w}^T&H_{\tilde w\tilde w}\end{pmatrix}.$$

Observe that, by linearity of $\phi$ in $w_0$, $D_{w_0w_0}\phi=0$ and hence $H_{w_0w_0}=0$. Also, observe that $D_{w_0\tilde w}\phi=\nabla_{\tilde w}\tilde\phi$. It follows that $H_{w_0\tilde w}=-\nabla_{\tilde w}\tilde u(\tilde b)$, where we have denoted $\tilde u(\tilde w)=\int_X(\phi(b,x)-f(x))\tilde\phi(\tilde w,x)\,d\mu(x)$. By assumption, $\nabla_w u(b)=0$. If $b_0\ne 0$, this implies that $\nabla_{\tilde w}\tilde u(\tilde b)=0$, i.e. $H_{w_0\tilde w}=0$. We conclude that the matrix $H$ has the block form

$$H=\begin{pmatrix}0&0\\0&H_{\tilde w\tilde w}\end{pmatrix}. \tag{30}$$

This shows in particular that a distribution close to a pointlike stationary distribution evolves only in the $\tilde w$-component.

## 3 Application to free-knot linear splines

In this section we apply the developed general theory to the model of piecewise linear free-knot splines, which can be viewed as a simplified neural network acting on the one-dimensional input space. Specifically, let $\mu$ be the Lebesgue measure on $[0,1]$ and

$$\phi(w,x)=c(x-h)_+,\qquad w=(c,h)\in\mathbb{R}^2, \tag{31}$$

where $(x)_+=\max(x,0)$ is the ReLU activation function. We will perform numerical simulations for this model with finite $N$ in Eq.(1).

Note that the model (31) is highly degenerate in the sense that the same prediction can be obtained from multiple distributions $p$ on the parameter space $\mathbb{R}^2$:

$$\int_\mathbb{R}\int_\mathbb{R}p(c,h)\,c\,(x-h)_+\,dc\,dh=\hat f_{\mathrm{mf}}(x),\qquad\forall x\in[0,1]. \tag{32}$$

Using the identity $\frac{d^2}{dx^2}(x-h)_+=\delta(x-h)$ with the Dirac delta $\delta$, we get:

$$\int_\mathbb{R}p(c,x)\,c\,dc=\frac{d^2\hat f_{\mathrm{mf}}}{dx^2}(x),\qquad\forall x\in[0,1]. \tag{33}$$

The second derivative determines the prediction $\hat f_{\mathrm{mf}}$ up to two constants, for example $\hat f_{\mathrm{mf}}(0)$ and $\frac{d\hat f_{\mathrm{mf}}}{dx}(0)$, which can also be expressed in terms of the distribution $p$:

$$-\int_{-\infty}^0\Big(\int_\mathbb{R}p(c,h)\,c\,dc\Big)h\,dh=\hat f_{\mathrm{mf}}(0), \tag{34}$$

$$\int_{-\infty}^0\Big(\int_\mathbb{R}p(c,h)\,c\,dc\Big)dh=\frac{d\hat f_{\mathrm{mf}}}{dx}(0). \tag{35}$$

Conditions (33) on distributions $p$ are independent at different $x$ and leave infinitely many degrees of freedom for these distributions. This property is of course shared by all models with linear output.
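Identity (33) is easy to verify numerically. In the assumed toy example below (ours, not from the paper), $p$ factorizes so that $\int p(c,h)\,c\,dc=m_c q(h)$ with $q(h)=2h$ on $[0,1]$; then Eq.(32) gives $\hat f_{\mathrm{mf}}(x)=m_c x^3/3$, whose second derivative is indeed $m_c q(x)$:

```python
# Numeric check of identity (33) for an assumed factorized p with
# int p(c,h) c dc = m_c * q(h), q(h) = 2h on [0,1].
import numpy as np

m_c = 1.7
q = lambda h: 2.0 * h
hs = np.linspace(0, 1, 20001)
dh = hs[1] - hs[0]

def f_mf(x):
    # f_mf(x) = m_c * int q(h) (x-h)_+ dh, by Eq.(32)
    return m_c * np.sum(q(hs) * np.maximum(x - hs, 0.0)) * dh

x, eps = 0.6, 1e-3
d2 = (f_mf(x + eps) - 2 * f_mf(x) + f_mf(x - eps)) / eps**2
print(d2, m_c * q(x))  # both close to 2.04, as predicted by Eq.(33)
```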

### 3.1 Global optima and stationary distributions

For a given ground truth $f$, setting $\hat f_{\mathrm{mf}}=f$ in Eqs.(33)-(35) gives us a criterion for the distribution $p$ to be a global minimizer of the loss $L_{\mathrm{mf}}$.

More generally, we can ask what the stationary distributions of the dynamics are (in the sense of section 2.4 and proposition 1). Their complete general characterization is rather cumbersome, so we just consider a particular example: a ground truth function $f$ of constant positive curvature (this case is easier thanks to the constant convexity of $f$). One can then show the following properties (see Appendix A.5). First, any distribution supported on the subset $\{h\ge 1\}$, where $\phi(\cdot,x)$ vanishes identically on $[0,1]$, is stationary. Second, consider the marginal distribution $\hat p(h)=\int p(c,h)\,dc$ and its restriction to the segment $[0,1]$. Then, for a stationary $p$, either this restriction is not purely atomic and $p$ is a global minimizer, or it consists of a finite number of equidistantly spaced atoms in $[0,1]$ (and the approximation $\hat f_{\mathrm{mf}}$ is a piecewise linear spline). In particular, if it consists of a single atom at some $h\in[0,1]$, then $h$ is constrained to one of two particular values determined by $f$.

We examine in more detail the special case when a stationary measure is a single atom, i.e. $p=\delta_{(c,h)}$. There are two possibilities: either $h\ge 1$, and then $c$ is arbitrary, or $h<1$, and then $(c,h)$ must solve the stationarity conditions of proposition 1. In Fig.1 we show a series of simulations where the initial distribution is an atom $\delta_{(c_0,h_0)}$ with some $(c_0,h_0)$. We observe such distributions to converge to stationary atomic distributions of one of the above two kinds.

We consider now "pointlike" Gaussian initial distributions centered at some $(c_0,h_0)$, with a small standard deviation $\sigma$. Depending on $(c_0,h_0)$, we observe two different patterns of evolution of $p$. If the limit of the perfectly atomic distribution ($\sigma=0$) is a stationary point with $H_{hh}\le 0$, then the evolution is stable. In contrast, if the limit of the atomic distribution is a stationary point with $H_{hh}>0$, then the evolution is unstable: near this point the weights diverge rapidly along $h$, and then form a curve approaching a globally minimizing distribution spread relatively uniformly over $[0,1]$, see Fig.2. The existence of the two patterns is explained by the linear stability analysis of section 2.7: the pattern depends on the sign of $H_{\tilde w\tilde w}$ in Eq.(30). By Eq.(31) we have $D_{hh}\phi=c\,\delta(x-h)$ and hence, at a stationary point $(c,h)$ with $h\in(0,1)$, $H_{hh}=-c\big(\hat f_{\mathrm{mf}}(h)-f(h)\big)$. Thus, stationary points with $H_{hh}<0$ are stable and contractive in the $h$ direction, while a stationary point with $H_{hh}>0$ is unstable and expansive in the $h$ direction.

### 3.2 The small-c approximation

We can solve our model approximately in the region of small $c$, as described in section 2.6 (here $w_0=c$ and $\tilde w=h$). Suppose that the initial distribution is concentrated at small values of $c$. As discussed in section 2.6, for small $t$ we expect $p$ to evolve by approximately shifting along $c$, with some shift function $s(h,t)$. We indeed observe this in an experiment, see Fig.3(a). To further understand this dynamics, recall the approximate solutions (25),(26) (where $z=-f$, by our choice of the initial distribution). In our case, $\hat p$ is the operator of multiplication by the indicator function