# Generalisation dynamics of online learning in over-parameterised neural networks

Deep neural networks achieve stellar generalisation on a variety of problems, despite often being large enough to easily fit all their training data. Here we study the generalisation dynamics of two-layer neural networks in a teacher-student setup, where one network, the student, is trained using stochastic gradient descent (SGD) on data generated by another network, called the teacher. We show how for this problem, the dynamics of SGD are captured by a set of differential equations. In particular, we demonstrate analytically that the generalisation error of the student increases linearly with the network size, with other relevant parameters held constant. Our results indicate that achieving good generalisation in neural networks depends on the interplay of at least the algorithm, its learning rate, the model architecture, and the data set.


## 1 Introduction

One hallmark of the deep neural networks behind state-of-the-art results in image classification [1] or the games of Atari and Go [2, 3] is their size: their free parameters outnumber the samples in their training set by up to two orders of magnitude [4]. Statistical learning theory would suggest that such heavily over-parameterised networks should generalise poorly without further regularisation [5, 6, 7, 8, 9, 10, 11], yet empirical studies consistently find that increasing the size of networks to the point where they can fit their training data and beyond does not impede their ability to generalise well [12, 13, 14]. This paradox is arguably one of the biggest challenges in the theory of deep learning.

In practice, it is notoriously difficult to determine the point at which a statistical model becomes over-parameterised for a given data set. Instead, here we study the dynamics of learning in neural networks in the teacher-student setup. The student is a two-layer neural network with weights w that computes a scalar function ϕ(w, x) of its inputs x. It is trained with samples (x^μ, y^μ_B), where y^μ_B is the noisy output of another two-layer network with weights B, called the teacher, and ζ^μ is a Gaussian random variable with mean 0 and variance σ². Crucially, the student can have a number of hidden units K that is different from M, the number of hidden units of the teacher. Choosing K > M then gives us neural networks that are over-parameterised with respect to the generative model of their training data in a controlled way. The key quantity in our model is the generalisation error

 ϵ_g ≡ (1/2) ⟨[ϕ(w, x) − ϕ(B, x)]²⟩, (1)

where the average ⟨·⟩ is taken over the input distribution. Our main questions are twofold: how does ϵ_g evolve during training, and how does it depend on the number of hidden units K of the student?

Main contributions.

We derive a set of ordinary differential equations (ODEs) that track the typical generalisation error of an over-parameterised student trained using SGD. This description becomes exact in the limit of large input dimension N for data sets that are large enough that every sample is visited only once before training converges. Using this framework, we analytically calculate the generalisation error after convergence, ϵ∗_g. We find that with other relevant parameters held constant, the generalisation error increases at least linearly with the number of superfluous hidden units L = K − M. For small learning rates η in particular, we have

 ϵ∗_g ∼ η σ² L. (2)

Our model thus offers an interesting perspective on the implicit regularisation of SGD, which we will discuss in detail. The derivation of a set of ODEs for over-parameterised neural networks and their perturbative solution in the limit of small noise are an extension of earlier work by [15] and [16, 17].

The concepts and tools from statistical physics that we use in our analysis have a long and successful history of analysing average-case generalisation in learning and inference [18, 19, 20, 21, 22], and they have recently seen a surge of interest [23, 24, 25, 26, 27].

We begin our paper in Sec. 2 with a description of the teacher-student setup, the learning algorithm and the derivation of a set of ordinary differential equations that capture the dynamics of SGD in our model. Using this framework, we derive Eq. (2) in Sec. 3 and discuss networks with sigmoidal, linear and ReLU activation functions in detail. We discuss our results, and in particular the importance of the size of the training set, in Sec. 4, before concluding in Sec. 5.

## 2 Setup

### 2.1 The teacher generates test and training data

We study the learning of a supervised regression problem with inputs x ∈ ℝ^N and scalar outputs, with a generative model as follows. We take the components of the inputs x to be i.i.d. Gaussian random variables with zero mean and unit variance. (N.B. our results for large N are valid for any input distribution that has the same mean and variance, for example equiprobable binary inputs x_i = ±1.) The output is given by a neural network with a single hidden layer containing M hidden units and all-to-all connections, see Fig. 1. Its weights B from the inputs to the hidden units are drawn at random from some distribution and kept fixed. (It is also possible to extend our approach to time-dependent teacher weights.) Given an input x, the network's output is given by

 ϕ(B, x) = ∑_{m=1}^{M} g(B_m · x / √N), (3)

where B_m is the mth row of B, B_m · x is the dot product between the two vectors, and g is the activation function of the network. We focus on the case where both student and teacher have the same activation function. In particular, we study linear networks with g(x) = x, sigmoidal networks with g(x) = erf(x/√2), and rectified linear units, where g(x) = max(0, x). The training set consists of P tuples (x^μ, y^μ_B), μ = 1, …, P, where

 y^μ_B(x^μ) ≡ ϕ(B, x^μ) + ζ^μ (4)

is a noisy observation of the network's output, and the random variable ζ^μ is normally distributed with mean 0 and variance σ².
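To make the generative model concrete, the teacher's forward pass (3) and the noisy label generation (4) can be sketched in a few lines of Python. This is an illustrative sketch with our own function names (it is not taken from the paper's library), assuming the sigmoidal choice g(x) = erf(x/√2):

```python
import numpy as np
from math import erf

# sigmoidal activation g(x) = erf(x / sqrt(2)), vectorised over arrays
g = np.vectorize(lambda z: erf(z / 2**0.5))

def scm_output(B, X):
    """Soft committee machine, Eq. (3): sum_m g(B_m . x / sqrt(N))."""
    N = X.shape[1]
    return g(X @ B.T / np.sqrt(N)).sum(axis=1)

def make_dataset(B, P, sigma, seed=0):
    """Noisy teacher labels, Eq. (4): y = phi(B, x) + zeta, zeta ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((P, B.shape[1]))   # i.i.d. standard Gaussian inputs
    y = scm_output(B, X) + sigma * rng.standard_normal(P)
    return X, y
```

Setting sigma to zero recovers the noiseless teacher output, which is the reference used in the definition of the generalisation error below.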

In the statistical physics literature on learning, a data-generating neural network is called the teacher, and neural networks of the type (3) are called Soft Committee Machines [28]. They combine the apparent simplicity of their architecture, which allows for a detailed analytical description, with the power of a universal approximator: given a hidden layer of sufficient size, they can approximate any continuous function of their inputs to any desired accuracy [29, 30]. They have thus been at the centre of much recent research on the generalisation of neural networks [31, 32, 26, 33, 34].

### 2.2 The student aims to mimic the teacher’s function

Once a teacher network has been chosen at random, we train another neural network, called the student, on data generated by the teacher according to (4). The student has a single fully-connected hidden layer with K hidden units, and we explicitly allow for K ≥ M. The student's weights from the input to the hidden layer are denoted w, and its output ϕ(w, x) is defined in analogy to (3). We keep the weights from the hidden units to the output fixed at unity and only train the first-layer weights w. We consider both networks in the thermodynamic limit, where we let N → ∞ while keeping K and M of order 1.

The key quantity in our study is the generalisation error of the student network with respect to the teacher network, which we defined in Eq. (1) as the mean squared error between the outputs of the student and the noiseless output of the teacher, averaged over the distribution of inputs. Note that including the output noise of the teacher would only introduce a constant offset proportional to the variance of the output noise.
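As an illustration, the generalisation error (1) can be estimated by Monte Carlo sampling over Gaussian inputs. The following is a minimal sketch with our own naming, again assuming the sigmoidal activation g(x) = erf(x/√2) introduced above:

```python
import numpy as np
from math import erf

g = np.vectorize(lambda z: erf(z / 2**0.5))   # g(x) = erf(x / sqrt(2))

def phi(W, X):
    """Soft committee machine output for a batch of inputs X, Eq. (3)."""
    N = X.shape[1]
    return g(X @ W.T / np.sqrt(N)).sum(axis=1)

def generalisation_error(W, B, n_samples=50_000, seed=0):
    """Monte Carlo estimate of Eq. (1) over standard Gaussian inputs.
    Uses the noiseless teacher output, as in the definition in Sec. 2.2."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_samples, W.shape[1]))
    return 0.5 * np.mean((phi(W, X) - phi(B, X)) ** 2)
```

A student whose weights equal the teacher's gives zero error; any mismatched student gives a strictly positive estimate.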

### 2.3 The student is trained using online learning

Since we are training the student for a regression task, we choose a quadratic loss function. Given a training data set with P samples, the training error reads

 E(w) = (1/2) ∑_{μ=1}^{P} (ϕ(w, x^μ) − y^μ_B)². (5)

We perform stochastic gradient descent on the training error E(w) to optimise the weights of the student, using only a single sample x^μ to evaluate the gradient of E(w) at every step. To make the problem analytically tractable, we consider the limit where the training data set is large enough that each sample (x^μ, y^μ_B) is visited only once during the entire training, until the generalisation error converges to its final value. (In Sec. 4, we investigate the case of small P via simulations.) We can hence index the steps of the algorithm by μ and write the weight updates as

 w^{μ+1}_k = w^μ_k − (κ/N) w^μ_k − (η/√N) ∇_{w_k} E(w)|_{(x^μ, y^μ_B)}, (6)

where κ is the weight decay rate, η is the learning rate, and we have chosen their scaling with N such that all terms remain of order 1 in the thermodynamic limit N → ∞. Evaluating the derivative yields

 w^{μ+1}_k = w^μ_k − (κ/N) w^μ_k − (η/√N) x^μ r^μ_k, (7)

where

 r^μ_k ≡ g′(λ^μ_k) [ϕ(w, x^μ) − y^μ_B], (8)

and we have defined the local field λ^μ_k ≡ w_k · x^μ / √N.
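The update (7)-(8) can be written down directly. The following sketch (our own naming, sigmoidal activation) implements one online step and is equivalent to gradient descent on the single-sample quadratic loss:

```python
import numpy as np
from math import erf

g = np.vectorize(lambda z: erf(z / 2**0.5))              # g(x) = erf(x/sqrt(2))
g_prime = lambda z: np.sqrt(2 / np.pi) * np.exp(-z**2 / 2)  # derivative of g

def phi(W, x):
    """Student output for a single input x."""
    return g(W @ x / np.sqrt(len(x))).sum()

def sgd_step(W, x, y, eta, kappa=0.0):
    """One online SGD step on a single sample (x, y), Eqs. (7)-(8)."""
    N = len(x)
    lam = W @ x / np.sqrt(N)               # local fields lambda_k
    r = g_prime(lam) * (phi(W, x) - y)     # eq. (8)
    return W - (kappa / N) * W - (eta / np.sqrt(N)) * np.outer(r, x)
```

As a sanity check, the step direction agrees with a finite-difference gradient of the single-sample loss E = (1/2)(ϕ(w, x) − y)².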

Stochastic gradient descent with mini-batch size 1 in the limit of very large training sets is also known as online or one-shot learning. Kinzel [35] first realised that its dynamics could be described in terms of order parameters, initiating a number of works on the perceptron [36, 21]. Online learning in committee machines was first studied by Biehl and Schwarze [15], and Saad and Solla [17, 37] gave a detailed analytic description of its dynamics. Beyond its application to neural networks, its performance has been analysed for problems ranging from PCA [38, 39] to the training of generative adversarial networks [40].

### 2.4 The dynamics of online learning can be described in closed form

Our aim is to track the evolution of the generalisation error ϵ_g (1), which can be written more explicitly as

 ϵ_g = (1/2) ⟨[∑_{k=1}^{K} g(λ^μ_k) − ∑_{m=1}^{M} g(ν^μ_m)]²⟩, (9)

where ν^μ_m ≡ B_m · x^μ / √N. Since the inputs only appear as products with the weight vectors of the student and the teacher, we can replace the average ⟨·⟩ over the inputs x with an average over the local fields λ^μ_k and ν^μ_m. To determine the distribution of the latter, the assumption that every sample is used only once during training becomes crucial, because it guarantees that the inputs and the weights of the networks are uncorrelated. By the central limit theorem, the local fields are hence normally distributed with mean zero, since ⟨x_i⟩ = 0. Their covariance is also easily found: writing w_{ka} for a component of the kth weight vector, we have

 ⟨λ_k λ_l⟩ = (1/N) ∑_{a,b}^{N} w_{ka} w_{lb} ⟨x_a x_b⟩ = w_k · w_l / N ≡ Q_{kl}, (10)

since ⟨x_a x_b⟩ = δ_{ab}. Likewise, we define

 ⟨ν_n ν_m⟩ = B_n · B_m / N ≡ T_{nm},  ⟨λ_k ν_m⟩ = w_k · B_m / N ≡ R_{km}. (11)

The variables Q_{kl}, R_{km}, and T_{nm} are called order parameters in statistical physics; they measure the overlaps between the student and teacher weight vectors w_k and B_m and their self-overlaps, respectively. Crucially, from Eq. (9) we see that they are sufficient to determine the generalisation error ϵ_g.
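In code, the order parameters are simple matrix products. The helper below (our own naming) computes Q, R and T from the weight matrices, and the covariance claim (10) can be checked against the empirical covariance of the local fields:

```python
import numpy as np

def order_parameters(W, B):
    """Overlap matrices of Eqs. (10)-(11).
    W: (K, N) student weights, B: (M, N) teacher weights."""
    N = W.shape[1]
    Q = W @ W.T / N    # student self-overlap, Q_kl
    R = W @ B.T / N    # student-teacher overlap, R_km
    T = B @ B.T / N    # teacher self-overlap, T_nm
    return Q, R, T
```

For i.i.d. standard Gaussian inputs, the sample covariance of the local fields λ_k = w_k · x / √N converges to Q as the number of samples grows, which is exactly the statement of Eq. (10).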

We can obtain a closed set of differential equations for the time evolution of the order parameters R and Q by taking the inner product of the weight update (7) with B_n and by squaring it, respectively. (Since we keep the teacher fixed, T remains constant; however, our approach can be easily extended to a time-dependent teacher.) Then, an average over the inputs needs to be taken. The resulting equations of motion for R and Q can be written as

 dR_{in}/dα = −κ R_{in} − η ⟨r_i ν_n⟩, (12a)
 dQ_{ik}/dα = −2κ Q_{ik} − η ⟨r_i λ_k⟩ − η ⟨r_k λ_i⟩ + η² ⟨r_i r_k⟩ + η² σ² ⟨g′(λ_i) g′(λ_k)⟩, (12b)

where α ≡ μ/N becomes a continuous time-like variable in the limit N → ∞. The averages over inputs can again be reduced to averages over the normally distributed local fields λ and ν as above. If the averages can be evaluated analytically, the equations of motion (12) together with the generalisation error (9) form a closed set of equations which can be integrated numerically, providing an exact description of the generalisation dynamics of the network in the limit of large N and large training sets. Indeed, the integrals have an analytical solution for the choice g(x) = erf(x/√2) [15] and for linear networks. Eqns. (12) hold for any K and M, enabling us to study the learning of complex non-linear target functions, rather than data that is linearly separable or follows a Gaussian distribution [41, 42]. A detailed derivation and the explicit form of the equations of motion are given in Appendix A.

We plot ϵ_g(α), obtained by numerically integrating Eqns. (12), and the generalisation error observed during a single run of online learning (7) in Fig. 2. (We have packaged our simulations and our ODE integrator into a user-friendly Python library. To download, visit https://github.com/sgoldt/pyscm) The plots demonstrate a good quantitative agreement between simulations and theory and display some generic features of online learning in soft committee machines.
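For linear activation, g(x) = x, the averages in the equations of motion close explicitly in Q, R and T (for example ⟨r_i ν_n⟩ = ∑_k R_kn − ∑_m T_mn, independent of i), so the ODEs can be integrated with a few lines of Euler stepping. The closures below are our own elementary derivation from the definitions (7)-(8) for the linear case, with signs fixed by those definitions; they are a sketch, not the paper's general-purpose integrator:

```python
import numpy as np

def integrate_linear_odes(K, M, eta, sigma2, kappa=0.0,
                          alpha_max=200.0, dalpha=0.01, seed=0):
    """Euler integration of the order-parameter ODEs for g(x) = x.
    For linear g, r_i = sum_k lambda_k - sum_m nu_m (noiseless part),
    so all averages reduce to sums of entries of Q, R and T."""
    rng = np.random.default_rng(seed)
    T = np.eye(M)                            # isotropic teacher, T_nm = delta_nm
    Q = 0.5 * np.eye(K)                      # initial student self-overlap
    R = 1e-3 * rng.standard_normal((K, M))   # small initial teacher overlap
    for _ in range(int(alpha_max / dalpha)):
        r_nu = R.sum(axis=0) - T.sum(axis=0)     # <r_i nu_n>, independent of i
        r_lam = Q.sum(axis=0) - R.sum(axis=1)    # <r_i lambda_k>, indep. of i
        S = Q.sum() - 2 * R.sum() + T.sum()      # <r_i r_k>, noiseless part
        dR = -kappa * R - eta * r_nu[None, :]
        dQ = (-2 * kappa * Q - eta * (r_lam[None, :] + r_lam[:, None])
              + eta**2 * (S + sigma2))
        R = R + dalpha * dR
        Q = Q + dalpha * dQ
    # generalisation error for linear g: (sum Q - 2 sum R + sum T) / 2
    return 0.5 * (Q.sum() - 2 * R.sum() + T.sum())
```

As a sanity check, with σ = 0 and κ = 0 the integrated error decays to zero, and for small noise the fixed point of these equations reproduces the perturbative result for linear networks quoted in Sec. 3.2.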

One notable feature of all the plots in Fig. 2 is the existence of plateaus during training, where the generalisation error is almost stationary. During this time, the student "believes" that the data are linearly separable, and all of its hidden units have roughly the same overlap with all the hidden units of the teacher. Only after a longer time does the student pick up the additional structure of the teacher and "specialise": each of its hidden units ideally becomes strongly correlated with one and only one hidden unit of the teacher before the generalisation error decreases exponentially to its final value. This effect is well-known in the literature for both batch and online learning [28, 15, 16] and will be revisited in Sec. 3.

It is perhaps surprising that the generalisation dynamics seem unaffected by the difference in output noise (Fig. 2 a) until they leave the plateau. This is due to the fact that the noise appears in the equations of motion only in terms that are quadratic in the learning rate, whose effect takes longer to build up and become significant. The specialisation observed above is also due to terms that are quadratic in the learning rate. This goes to show that even in the limit of small learning rates, one cannot simplify the dynamics of the neural network by linearising Eqns. (12) in η without losing some key properties of the dynamics.

## 3 Asymptotic generalisation of over-parameterised students after online learning

In the absence of output noise (σ = 0) and without weight decay (κ = 0), online learning of a student with K ≥ M hidden units will yield a network that generalises perfectly with respect to the teacher. More precisely, at some point during training, the generalisation error will start an exponential decay towards zero (see Appendix B.1). On the level of the order parameters Q and R, a student that generalises perfectly with respect to the teacher corresponds to a stable fixed point of the equations of motion (12) with ϵ_g = 0.

This fixed point disappears for σ > 0, and the order parameters converge to a different fixed point. The values of the order parameters at that fixed point can be obtained perturbatively in the limit of small noise, i.e. small σ. To this end, we first make an ansatz for the matrices Q and R that involves eight order parameters for any K and M. We choose this number of order parameters for two reasons: it is the smallest number of variables for which we were able to self-consistently close the equations of motion (12), and the resulting ansatz agrees with numerical evidence obtained from integrating the full equations of motion (12).

We then derive equations of motion for this reduced set of order parameters and expand them to first order in σ around the fixed point with perfect generalisation. Throughout this section, we set κ = 0 and choose uncorrelated and isotropic weight vectors for the teacher, i.e. T_{nm} = δ_{nm}, which is equivalent to drawing the weights at random from a standard normal distribution.

### 3.1 Sigmoidal networks

We have performed this calculation for teacher and student networks with g(x) = erf(x/√2). We discuss the details of this tedious calculation in the appendix, and here we only state the asymptotic value of the generalisation error to first order in the variance of the noise for teacher and student with sigmoidal activation:

 ϵ∗_g = (σ² η / 2π) f(M, L, η) + O(σ³), (13)

where f is a lengthy rational function of its variables. The full expression spans more than two pages, so instead of writing it out here we plot it in Fig. 3, together with the result of a single simulation run, which is in excellent agreement.

#### 3.1.1 Discussion

One notable feature of Fig. 3 is that with all else being equal, the generalisation error increases monotonically with L. In other words, our result (13) implies that SGD alone fails to regularise student networks of increasing size in our setup, instead yielding students whose generalisation error increases at least linearly with L.

One might be tempted to mitigate this effect by simultaneously decreasing the learning rate η for larger students. However, this raises two problems: first of all, a lower learning rate means the model takes longer to train. More importantly, the resulting longer training time implies that more data is required until the final generalisation error is achieved. This is in agreement with statistical learning theory, where, given more and more data, models with more parameters and a higher complexity class, e.g. a higher VC dimension or Rademacher complexity [6], generalise just as well as smaller ones. In practice, however, more data might not be readily available. Furthermore, we show in Appendix B.2.1 that even when we rescale the learning rate with the size of the student, the generalisation error still increases with L before plateauing at a constant value.

We can gain some intuition for the result (13) by considering the final representations learnt by a sigmoidal network. On the left half of Fig. 4, we plot the overlap matrices Q and R for a teacher with M = 2 and various K. For K = 2, we see that each of the hidden units of the student has learnt the weights of one teacher hidden unit, yielding diagonal matrices Q and R (modulo the permutation symmetry of the hidden units). As we add a third hidden unit to the student, K = 3, the specialisation discussed in the previous section becomes apparent: two of the hidden units of the student each align almost perfectly with a different hidden unit of the teacher, while the weights of the third unit go to zero. As we add even more hidden units, the weight vectors of some units become exactly anti-correlated, effectively setting their weights to zero as far as the output of the network is concerned (since we set the weights of the second layer to unity). This behaviour is essentially a consequence of the sigmoidal form of the activation function, which makes it hard to express a sum of M hidden units with K > M hidden units, instead forcing 1-to-1 specialisation of the student's hidden units.

In the over-parameterised case, L units of the student are hence effectively specialising to teacher hidden units with zero weights. Although the weights of these superfluous units are unbiased estimators of zero weights, their variance is finite due to the noise in the teacher's output. Thus they always hurt generalisation.

This intuition can be confirmed analytically by going to the limit of small learning rates, which is the most relevant in practice. Expanding ϵ∗_g to first order in the learning rate yields a particularly revealing form,

 ϵ∗_g = (σ² η / 2π) (L + M/√3) + O(η²), (14)

with corrections that are quadratic in the learning rate. The linear term in η is the sum of two contributions: the asymptotic generalisation errors of M independent networks with one hidden unit, each learning from a teacher with a single hidden unit, and the errors of the L superfluous units, which each contribute the error of a continuous perceptron learning from a teacher with zero weights. Again, we relegate the detailed calculation to Appendix C.

### 3.2 Linear networks

One might suspect that part of the scaling of ϵ∗_g with L in sigmoidal networks is due to the specialisation of the hidden units, or to the fact that teacher and student networks can implement functions of different range if K ≠ M. It thus makes sense to calculate ϵ∗_g for linear neural networks, where g(x) = x [43]. These networks have no specialisation transition [26], and their output range is set by the magnitude of their weights rather than by their number of hidden units. Furthermore, linear networks are receiving increasing attention as tractable models for neural networks [44, 25, 45].

Following the same steps as for the sigmoidal networks, a perturbative calculation in the limit of small noise yields

 ϵ∗_g = η σ² (L + M) / (4 − 2η(L + M)) + O(σ³). (15)

In the limit of small learning rates, the above expression further simplifies to

 ϵ∗_g = (1/4) η σ² (L + M) + O(η²). (16)

Hence we see that in the limit of small learning rates, the asymptotic generalisation error of linear networks has the same scaling with L, η and σ as for sigmoidal networks. This result is again in good agreement with the results of simulations, shown in Fig. 3.

#### 3.2.1 Discussion

The linear scaling of ϵ∗_g with L, keeping all other things equal, might be surprising, given that every linear network implements a linear transformation of its inputs, irrespective of its number of hidden units. However, this is exactly the problem: adding hidden units to a linear network does not augment the class of functions it can implement, but it adds free parameters which will indiscriminately pick up fluctuations due to the output noise of the teacher. The optimal generalisation error is indeed realised with K = 1, irrespective of M, since a linear network with K = 1 has the lowest number of free parameters while having the same expressive power as a teacher with arbitrary M. Our formula (15), however, only applies to the case K ≥ M.

Linear networks thus show that the scaling of ϵ∗_g with L is not only a consequence of specialisation or of the mismatched range of the networks' output functions, as one might have suspected by looking only at sigmoidal networks.

Interestingly, if we rescale the learning rate with the size of the networks by choosing η ∝ 1/(L + M), we find that the generalisation error (15) becomes independent of L. Again, this rescaling of the learning rate comes at the cost of increased training time and hence, in this model, of increased training data. A quantitative exploration of the trade-off between learning rate and network size for a fixed data set is an interesting problem; however, it goes beyond the domain of online learning and is hence left for future work.
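The cancellation can be seen directly by plugging the rescaled learning rate into the expression (15); a short check, assuming Eq. (15) as quoted above (the function name is ours):

```python
def eps_star_linear(eta, sigma2, L, M):
    """Asymptotic generalisation error of a linear student, Eq. (15)."""
    return eta * sigma2 * (L + M) / (4 - 2 * eta * (L + M))

# rescaling eta = eta0 / (L + M) removes the dependence on network size:
# eta * (L + M) = eta0, so eps_star = eta0 * sigma2 / (4 - 2 * eta0)
errors = [eps_star_linear(0.5 / (L + 2), 1.0, L, M=2) for L in range(5)]
```

With η₀ = 0.5 and σ² = 1, every entry of `errors` equals 0.5/3, whatever the number of superfluous units.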

### 3.3 Numerical results suggest the same scalings for ReLU networks, too

The analytical calculation of ϵ∗_g described above poses some additional technical challenges for networks with ReLU activation function, so here we resort to simulations to illustrate the behaviour of ϵ∗_g in this case and leave the analytical description for future work. The results are shown in Fig. 5: we found numerically that the asymptotic generalisation error of a ReLU student learning from a ReLU teacher has the same scaling as the one we found analytically for networks with sigmoidal and with linear activation functions, ϵ∗_g ∼ η σ² L.

#### 3.3.1 Discussion

Looking at the final overlap matrices Q and R for ReLU networks in the right half of Fig. 4, we see a mechanism behind Eq. (2) for ReLU networks that is reminiscent of the linear case: instead of the one-to-one specialisation of sigmoidal networks, all the hidden units of the student have a finite overlap with all the hidden units of the teacher. This is a consequence of the fact that it is much simpler to re-express a sum of M ReLU units with K > M ReLU units. However, it also means that there are many redundant degrees of freedom, which nevertheless all pick up fluctuations from the output noise and degrade the generalisation error.

For ReLU networks, one might imagine that several ReLU units can specialise to one and only one teacher unit and thus act as an effective denoiser for that teacher unit. However, we have checked numerically that this configuration is not a stable fixed point of the SGD dynamics.

## 4 Discussion

The main result of the preceding section was a set of ODEs that describes the generalisation dynamics of over-parameterised two-layer neural networks. This framework allowed us to derive the scaling of the generalisation error of the student with the network size, the learning rate and the noise level in the teacher's outputs. This scaling is robust with respect to the choice of the activation function, as it holds true for linear, sigmoidal and ReLU networks. In this section, we discuss several possible changes to our setup and their impact on the scaling of ϵ∗_g.

### 4.1 Weight decay

A natural strategy to avoid overfitting is to explicitly regularise the weights, for example by using weight decay, which is introduced in our setup by choosing a finite κ in Eq. (7). In our simulations with κ > 0, we did not find a scenario where weight decay did not increase the final generalisation error compared to the case κ = 0. In particular, we did not find a scenario where weight decay improved the performance of a student with K > M. The corresponding plots can be found in Appendix D.

### 4.2 SGD with mini-batches

We also made sure that the phenomenology we observed persists if we move to stochastic gradient descent with mini-batches, where the gradient estimate in Eq. (7) is averaged over several samples, as is standard in practice. We observed that increasing the mini-batch size lowers the asymptotic generalisation error up to a certain mini-batch size, after which it stays roughly constant. Crucially, using mini-batches does not change the scaling of ϵ∗_g with L (see Appendix E for details).
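For concreteness, averaging the gradient estimate over a mini-batch looks as follows for a linear student (a sketch with our own naming; for linear g, the factor r_k in (8) is the same for every hidden unit):

```python
import numpy as np

def single_step(W, x, y, eta):
    """Online update (7) for a linear student on one sample."""
    N = len(x)
    r = (W @ x).sum() / np.sqrt(N) - y                # phi(w, x) - y
    return W - (eta / np.sqrt(N)) * r * np.tile(x, (W.shape[0], 1))

def minibatch_step(W, X, y, eta):
    """Same update with the gradient averaged over a mini-batch (X, y)."""
    N = X.shape[1]
    r = (X @ W.T).sum(axis=1) / np.sqrt(N) - y        # per-sample error
    grad = (r[:, None] * X).mean(axis=0)              # batch-averaged x^mu r^mu
    return W - (eta / np.sqrt(N)) * np.tile(grad, (W.shape[0], 1))
```

Since the loss is quadratic in the per-sample error, one mini-batch step is exactly the average of the single-sample steps taken from the same weights, which is the sense in which mini-batching only reduces the variance of the update.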

### 4.3 Structured input data

One idealised assumption in our setup is that we take our inputs as i.i.d. draws from a standard normal distribution (see Sec. 2). We therefore repeated our experiments using MNIST images as inputs x, while leaving all other aspects of our setup the same. In particular, we still trained the student on a regression task with labels generated by a random teacher. This setup allows us to trace any change in the generalisation behaviour of the student to the higher-order correlations of the input distribution. Switching to MNIST inputs reproduced the same curve of ϵ∗_g against L as Gaussian inputs, to within the experimental error; the interested reader is referred to Appendix F for a detailed description of these experiments.

### 4.4 The scaling of ϵ∗g with L also depends on the size of the training set

In practice, a single sample of the training data set will be visited several times during training. After a first pass through the training set, the online assumption that an incoming sample is uncorrelated with the weights of the network thus breaks down. A complete analytical treatment in this setting remains an open problem, so to study this practically relevant setup, we turn to simulations. We keep the setup described in Sec. 2, but simply reduce the number of samples P in the training data set. Our focus is again on the final generalisation error ϵ∗_g after convergence for linear, sigmoidal and ReLU networks, which we plot from left to right as a function of K in Fig. 6.

Linear networks show a similar behaviour to the setup with a very large training set discussed in Sec. 3.2: the bigger the network, the worse the performance, both with and without output noise. Again, the optimal network has K = 1 hidden units, irrespective of the size of the teacher. For non-linear networks, however, the picture is more varied. For large training sets, where the number of samples easily outnumbers the free parameters in the network (red curve; this corresponds roughly to learning a data set of the size of MNIST), the behaviour is qualitatively described by our theory from Sec. 3: the best generalisation is obtained by a network that matches the teacher size, K = M. However, as we reduce the size of the training set, this is no longer true: for the smallest training sets, the best generalisation is obtained with networks that have K > M. Thus the size of the training set relative to the network has an important influence on the scaling of ϵ∗_g with L. Note that the early-stopping generalisation error, which we define as the minimal generalisation error over the duration of training, shows qualitatively the same behaviour as ϵ∗_g (see Appendix G for additional information).

## 5 Concluding perspectives

We have studied the dynamics of online learning in two-layer neural networks within the teacher-student framework, where we train a student network using SGD on data generated by another network, the teacher. One advantage of this setup is that it allows us to investigate the behaviour of networks that are over-parameterised with respect to the generative model of their data in a controlled fashion. We derived a set of eight ODEs that describe the generalisation dynamics of over-parameterised students of any size. Within this framework, we analytically computed the final generalisation error of the student in the limit of online learning with small noise. One immediate consequence of this result is that SGD alone is not enough to regularise the over-parameterised student, instead yielding networks whose generalisation error scales linearly with the network size.

Furthermore, we demonstrated that adding explicit regularisation by introducing weight decay did not improve the performance of the networks and that the same phenomenology arises when using mini-batches or after substituting the Gaussian inputs used in the theoretical analysis with a more realistic data set. Nevertheless, we were able to find a scenario in our setup where the generalisation decreases with the student’s size, namely when training using a finite data set that contains roughly as many samples as there are free parameters in the network.

In the setting we analyse, our results clearly indicate that the regularisation of neural networks goes beyond the properties of SGD alone. Instead, a full understanding of the generalisation properties of deep networks requires taking into account the interplay of at least the algorithm, its learning rate, the model architecture, and the data set, setting up a formidable research programme for the future.

## Acknowledgements

SG and LZ acknowledge funding from the ERC under the European Union’s Horizon 2020 Research and Innovation Programme Grant Agreement 714608-SMiLe. MA thanks the Swartz Program in Theoretical Neuroscience at Harvard University for support. AS acknowledges funding by the European Research Council, grant 725937 NEUROABSTRACTION. FK acknowledges support from “Chaire de recherche sur les modèles et sciences des données”, Fondation CFM pour la Recherche-ENS, and from the French National Research Agency (ANR) grant PAIL.

APPENDICES

## Appendix A Derivation of the ODE description of the generalisation dynamics of online learning

We will now show how to derive the ODEs that describe the dynamics of online learning in two-layer neural networks, following the seminal work of Biehl and Schwarze [15] and Saad and Solla [16, 17]. We focus on the teacher-student setup introduced in the main paper, where a student network with weights w and output

$$\phi(w, x) = \sum_{k=1}^{K} g\!\left(\frac{w_k x}{\sqrt{N}}\right) \tag{17}$$

is trained on samples generated by another two-layer network with weights $B_m$, the teacher, according to

$$y_B^\mu(x^\mu) \equiv \phi(B, x^\mu) + \zeta^\mu. \tag{18}$$

Here, $\zeta^\mu$ is normally distributed with mean 0 and variance $\sigma^2$. We will make two technical assumptions, namely a large network ($N \to \infty$) and a data set that is large enough that we visit every sample only once before training converges.

### a.1 Expressing the generalisation error in terms of order parameters

To make this section self-contained, we briefly recapitulate how the assumptions stated above allow us to rewrite the generalisation error in terms of a number of order parameters. We have

$$\epsilon_g \equiv \frac{1}{2}\left\langle \left[\phi(w, x) - \phi(B, x)\right]^2 \right\rangle \tag{19}$$
$$= \frac{1}{2}\left\langle \left[\sum_{k=1}^{K} g(\lambda_k^\mu) - \sum_{m=1}^{M} g(\nu_m^\mu)\right]^2 \right\rangle, \tag{20}$$

where we have introduced the local fields

$$\lambda_k^\mu \equiv \frac{w_k x^\mu}{\sqrt{N}}, \tag{21}$$
$$\nu_m^\mu \equiv \frac{B_m x^\mu}{\sqrt{N}}. \tag{22}$$

Here and throughout this paper, we will use the indices $i, j, k, \ell$ to refer to hidden units of the student, and the indices $n, m$ to denote hidden units of the teacher. Since the input $x^\mu$ only appears via products with the weights of the teacher and the student, we can replace the high-dimensional average over the input distribution by an average over the local fields $\lambda_k$ and $\nu_m$. The assumption that the training set is large enough for us to visit every sample only once guarantees that the inputs and the weights of the networks are uncorrelated. Taking the limit $N \to \infty$ then ensures that the local fields are jointly normally distributed with mean zero ($\langle \lambda_k \rangle = \langle \nu_m \rangle = 0$). Their covariance is also easily found: writing $w_{ka}$ for the $a$th component of the $k$th weight vector, we have

$$\langle \lambda_k \lambda_l \rangle = \sum_{a,b}^{N} \frac{w_{ka} w_{lb} \langle x_a x_b \rangle}{N} = \frac{w_k w_l}{N} \equiv Q_{kl}, \tag{23}$$

since $\langle x_a x_b \rangle = \delta_{ab}$. Likewise, we define

$$\langle \nu_n \nu_m \rangle = \frac{B_n B_m}{N} \equiv T_{nm}, \qquad \langle \lambda_k \nu_m \rangle = \frac{w_k B_m}{N} \equiv R_{km}. \tag{24}$$

The variables $Q_{kl}$, $R_{km}$, and $T_{nm}$ are called order parameters in statistical physics; they measure the overlap between the weight vectors of the student and the teacher and their self-overlaps, respectively. Crucially, from Eq. (20) we see that they are sufficient to determine the generalisation error $\epsilon_g$. We can thus write the generalisation error as

$$\epsilon_g = \frac{1}{2}\sum_{i,k} I_2(i,k) + \frac{1}{2}\sum_{n,m} I_2(n,m) - \sum_{i,n} I_2(i,n), \tag{25}$$

where we have defined

$$I_2(i,k) \equiv \langle g(\lambda_i)\, g(\lambda_k) \rangle = \frac{2}{\pi} \arcsin \frac{Q_{ik}}{\sqrt{1+Q_{ii}}\sqrt{1+Q_{kk}}}. \tag{26}$$

The average in Eq. (26) is taken over a normal distribution for the local fields $\lambda_i$ and $\lambda_k$ with mean zero and covariance matrix

$$C_2 = \begin{pmatrix} Q_{ii} & Q_{ik} \\ Q_{ik} & Q_{kk} \end{pmatrix}. \tag{27}$$

Since we are using the indices $i, j, k, \ell$ for student units and $n, m$ for teacher hidden units, we have

$$I_2(i,n) = \langle g(\lambda_i)\, g(\nu_n) \rangle, \tag{28}$$

where the covariance matrix of the joint distribution of $\lambda_i$ and $\nu_n$ is given by

$$C_2 = \begin{pmatrix} Q_{ii} & R_{in} \\ R_{in} & T_{nn} \end{pmatrix}, \tag{29}$$

and likewise for $I_2(n,m)$. We will use this convention to denote integrals throughout this section. For the generalisation error, this means that it can be expressed in terms of the order parameters alone as

$$\epsilon_g = \frac{1}{\pi}\sum_{i,k} \arcsin\frac{Q_{ik}}{\sqrt{1+Q_{ii}}\sqrt{1+Q_{kk}}} + \frac{1}{\pi}\sum_{n,m} \arcsin\frac{T_{nm}}{\sqrt{1+T_{nn}}\sqrt{1+T_{mm}}} - \frac{2}{\pi}\sum_{i,n} \arcsin\frac{R_{in}}{\sqrt{1+Q_{ii}}\sqrt{1+T_{nn}}}. \tag{30}$$
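Eq. (30) is straightforward to evaluate numerically. Below is a minimal sketch (our own helper, not taken from the pyscm package) that computes $\epsilon_g$ from given overlap matrices:

```python
import numpy as np

def generalisation_error(Q, R, T):
    """Generalisation error of an erf student, Eq. (30).

    Q: (K, K) student-student overlaps, R: (K, M) student-teacher
    overlaps, T: (M, M) teacher-teacher overlaps.
    """
    qd = np.sqrt(1.0 + np.diag(Q))  # sqrt(1 + Q_ii)
    td = np.sqrt(1.0 + np.diag(T))  # sqrt(1 + T_nn)
    return (np.arcsin(Q / np.outer(qd, qd)).sum() / np.pi
            + np.arcsin(T / np.outer(td, td)).sum() / np.pi
            - 2.0 * np.arcsin(R / np.outer(qd, td)).sum() / np.pi)
```

For a student that matches its teacher perfectly ($Q = R = T$), the three terms cancel and the function returns zero.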

### a.2 ODEs for the evolution of the order parameters

Expressing the generalisation error in terms of the order parameters as in Eq. (30) is of course only useful if we can track the evolution of the order parameters over time. We can derive ODEs that allow us to do precisely that, starting from the SGD update of the weights:

$$w_k^{\mu+1} = w_k^\mu - \frac{\kappa}{N} w_k^\mu - \frac{\eta}{\sqrt{N}}\, x^\mu r_k^\mu, \tag{31}$$

where $\mu$ is a running index counting the weight updates or, equivalently, the samples used so far, and

$$r_k^\mu \equiv g'(\lambda_k^\mu)\left[\phi(w, x^\mu) - y_B^\mu\right]. \tag{32}$$

From this equation, we can obtain differential equations for the time evolution of the order parameters by squaring the weight update (31) and by taking its inner product with the teacher weights $B_n$, respectively, which yields Eqns. (12) of the main text, restated here for completeness:

$$\frac{\mathrm{d}R_{in}}{\mathrm{d}\alpha} = -\kappa R_{in} - \eta \langle r_i \nu_n \rangle, \tag{33a}$$
$$\frac{\mathrm{d}Q_{ik}}{\mathrm{d}\alpha} = -2\kappa Q_{ik} - \eta \langle r_i \lambda_k \rangle - \eta \langle r_k \lambda_i \rangle + \eta^2 \langle r_i r_k \rangle + \eta^2 \sigma^2 \langle g'(\lambda_i)\, g'(\lambda_k) \rangle, \tag{33b}$$

where $\alpha \equiv \mu/N$ becomes a continuous time-like variable in the limit $N \to \infty$. These equations are valid for any choice of the activation function $g(x)$. To make progress, however, i.e. to obtain a closed set of differential equations for $R_{in}$ and $Q_{ik}$, we need to evaluate the averages over the local fields. In particular, we have to compute three types of averages:

$$I_3 = \langle g'(a)\, b\, g(c) \rangle, \tag{34}$$

where $a$ is one of the local fields of the student, while $b$ and $c$ can be local fields of either the student or the teacher;

$$I_4 = \langle g'(a)\, g'(b)\, g(c)\, g(d) \rangle, \tag{35}$$

where $a$ and $b$ are local fields of the student, while $c$ and $d$ can be local fields of both; and finally

$$J_2 = \langle g'(a)\, g'(b) \rangle, \tag{36}$$

where $a$ and $b$ are local fields of the teacher. In each of these integrals, the average is taken with respect to a multivariate normal distribution for the local fields with zero mean and a covariance matrix whose entries are chosen in the same way as discussed for $I_2$.

We can re-write Eqns. (33) with these definitions in a more explicit form as [16, 17]

$$\frac{\mathrm{d}R_{in}}{\mathrm{d}\alpha} = -\kappa R_{in} + \eta\left(\sum_m I_3(i,n,m) - \sum_j I_3(i,n,j)\right), \tag{37}$$
$$\begin{aligned}
\frac{\mathrm{d}Q_{ik}}{\mathrm{d}\alpha} = -2\kappa Q_{ik} &+ \eta^2\sigma^2 J_2(i,k) + \eta\left(\sum_m I_3(i,k,m) - \sum_j I_3(i,k,j)\right) + \eta\left(\sum_m I_3(k,i,m) - \sum_j I_3(k,i,j)\right) \\
&+ \eta^2\left(\sum_{n,m} I_4(i,k,n,m) - 2\sum_{j,n} I_4(i,k,j,n) + \sum_{j,l} I_4(i,k,j,l)\right). 
\end{aligned} \tag{38}$$

The explicit form of the integrals $I_2$, $I_3$, $I_4$, and $J_2$ is given in Sec. H for the case $g(x) = \mathrm{erf}(x/\sqrt{2})$. Solving these equations numerically for $R_{in}$ and $Q_{ik}$ and substituting their values into the expression for the generalisation error (25) gives the full generalisation dynamics of the student. We show the resulting learning curves together with the result of a single simulation in Fig. 2 of the main text. We have bundled our simulation software and our ODE integrator as a user-friendly Python package (to download, visit https://github.com/sgoldt/pyscm). In Sec. B, we discuss how to extract information from these equations analytically.

## Appendix B Calculation of ϵg in the limit of small noise

Our aim is to understand the asymptotic value of the generalisation error

$$\epsilon_g^* \equiv \lim_{\alpha \to \infty} \epsilon_g(\alpha). \tag{39}$$

We focus on students that have at least as many hidden units as the teacher, $K \ge M$. These students are thus over-parameterised with respect to the generative model of the data, and we define

$$L \equiv K - M \tag{40}$$

as the number of additional hidden units in the student network. In this section, we focus on the sigmoidal activation function

$$g(x) = \mathrm{erf}\!\left(\frac{x}{\sqrt{2}}\right), \tag{41}$$

unless stated otherwise.

Eqns. (37) are a useful tool to analyse the generalisation dynamics, and they allowed Saad and Solla to gain plenty of analytical insight into the special case $K = M$ [16, 17]. However, they are also a bit unwieldy. In particular, the number of ODEs that we need to solve grows quadratically with $K$ and $M$. To gain some analytical insight, we make use of the symmetries in the problem, e.g. the permutation symmetry of the hidden units of the student, and re-parametrise the matrices $Q$ and $R$ in terms of eight order parameters that obey a set of self-consistent ODEs for any $K$ and $M$. We choose the following parameterisation with eight order parameters:

$$Q_{ij} = \begin{cases} Q & i = j \le M, \\ C & i \ne j; \; i, j \le M, \\ D & i > M, j \le M \;\text{or}\; i \le M, j > M, \\ E & i = j > M, \\ F & i \ne j; \; i, j > M, \end{cases} \tag{42}$$
$$R_{in} = \begin{cases} R & i = n, \\ S & i \ne n; \; i \le M, \\ U & i > M, \end{cases} \tag{43}$$

which in matrix form for the case $M = 3$ and $K = 5$ read:

$$R = \begin{pmatrix} R & S & S \\ S & R & S \\ S & S & R \\ U & U & U \\ U & U & U \end{pmatrix} \qquad\text{and}\qquad Q = \begin{pmatrix} Q & C & C & D & D \\ C & Q & C & D & D \\ C & C & Q & D & D \\ D & D & D & E & F \\ D & D & D & F & E \end{pmatrix}. \tag{44}$$

We choose this number of order parameters and this particular structure for the overlap matrices $Q$ and $R$ for two reasons: it is the smallest number of variables for which we were able to self-consistently close the equations of motion (37), and it agrees with numerical evidence obtained from integrating the full equations of motion (37).
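The block structure of Eq. (44) is easy to generate programmatically; the following sketch (our own helper) builds $Q$ and $R$ for arbitrary $M$ and $L$ from the eight order parameters:

```python
import numpy as np

def overlap_matrices(M, L, Q, C, D, E, F, R, S, U):
    """Build the (K, K) matrix Q_ij and the (K, M) matrix R_in of
    Eqs. (42)-(44) for a student with K = M + L hidden units."""
    K = M + L
    Qm = np.full((K, K), float(C))
    Qm[:M, M:] = D                 # blocks mixing the first M units
    Qm[M:, :M] = D                 # with the L additional units
    Qm[M:, M:] = F
    np.fill_diagonal(Qm[:M, :M], Q)
    np.fill_diagonal(Qm[M:, M:], E)
    Rm = np.full((K, M), float(S))
    np.fill_diagonal(Rm[:M, :], R)
    Rm[M:, :] = U                  # overlaps of the additional units
    return Qm, Rm
```

For $M = 3$, $L = 2$ this reproduces exactly the matrices printed in Eq. (44).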

By substituting this ansatz into the equations of motion (37), we find a set of eight ODEs for the order parameters. These equations are rather unwieldy and some of them do not even fit on one page, which is why we do not print them here in full; instead, we provide a Mathematica notebook where they can be found and interacted with (to download, visit https://github.com/sgoldt/pyscm). These equations allow for a detailed analysis of the effect of over-parameterisation on the asymptotic performance of the student, as we discuss now.

### b.1 Heavily over-parameterised students can learn perfectly from a noiseless teacher using online learning

For a teacher with $T_{nm} = \delta_{nm}$ and in the absence of noise in the teacher's outputs ($\sigma = 0$), there exists a fixed point of the ODEs with $R = Q = 1$, $S = U = C = D = E = F = 0$, and perfect generalisation $\epsilon_g = 0$. Online learning will find this fixed point, as is demonstrated in Fig. 7, where we plot the generalisation dynamics of an over-parameterised student learning from a smaller teacher for both Erf and ReLU activation functions. More precisely, after a plateau whose length depends on the size of the network for the sigmoidal network, the generalisation error eventually begins an exponential decay to the optimal solution with zero generalisation error. The learning rates are chosen such that learning converges, but are not otherwise optimised.

### b.2 Perturbative solution of the ODEs

We have calculated the asymptotic value of the generalisation error for a teacher with $T_{nm} = \delta_{nm}$ to first order in the variance $\sigma^2$ of the noise. To do so, we performed a perturbative expansion around the fixed point

$$R_0 = Q_0 = 1, \tag{45}$$
$$S_0 = U_0 = C_0 = D_0 = E_0 = F_0 = 0, \tag{46}$$

with the ansatz

$$X = X_0 + \sigma^2 X_1 \tag{47}$$

for all the order parameters. Writing the ODEs to first order and solving for their steady state, where $\mathrm{d}X/\mathrm{d}\alpha = 0$, yielded a fixed point with an asymptotic generalisation error

$$\epsilon_g^* = \frac{\sigma^2 \eta}{2\pi}\, f(M, L, \eta) + O(\sigma^3). \tag{48}$$

$f(M, L, \eta)$ is an unwieldy rational function of its variables. Due to its length, we do not print it here in full; instead, we give the full function in a Mathematica notebook (to download, visit https://github.com/sgoldt/pyscm). Here, we plot the results in various forms in Fig. 8. We note in particular the following points:

#### b.2.1 Discussion

**$\epsilon_g^*$ increases with $L$ and $M$.** The two plots on the left of Fig. 8 show that the generalisation error increases monotonically with both $L$ and $M$ while the other is kept fixed, for both teacher sizes shown (red and blue).

**Divergence at large $\eta$.** Our perturbative result diverges for large $\eta$, or equivalently, at a learning rate that depends on the number of hidden units. For the special case $L = 0$, the learning rate at which our perturbative result diverges is precisely the maximum learning rate for which exponential convergence to the optimal solution is still guaranteed [17],

$$\eta_{\max} = \frac{\sqrt{3}\,\pi}{M + 3/\sqrt{5} - 1}, \tag{49}$$

as we show in the right-most plot of Fig. 8.

**Expansion for small $\eta$.** In the limit of small learning rates, which is the most relevant in practice and which, as the plots in Fig. 8 show, dominates the behaviour of $\epsilon_g^*$ away from the divergence, the generalisation error is linear in the learning rate. Expanding $f(M, L, \eta)$ to first order in the learning rate yields the particularly simple form

$$\epsilon_g^* = \frac{\sigma^2 \eta}{2\pi}\left(L + \frac{M}{\sqrt{3}}\right) + O(\eta^2), \tag{50}$$

with second-order corrections that are quadratic in $\eta$. This is precisely the sum of the asymptotic generalisation errors of $M$ continuous perceptrons learning from a teacher with $T = 1$ and of $L$ continuous perceptrons learning from a teacher with $T = 0$, as we calculate in Sec. C. This neat result is a consequence of the specialisation that is typical of SCMs with sigmoidal activation functions, as we discussed in the main text.
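This decomposition is easy to verify numerically. The sketch below (our own check, not taken from the paper's notebook) compares Eq. (50) with the sum of the perceptron results of Eq. (58):

```python
import numpy as np

def eps_scm(M, L, eta, sigma2):
    """Eq. (50): asymptotic error of the full SCM, leading order in eta."""
    return sigma2 * eta / (2.0 * np.pi) * (L + M / np.sqrt(3.0))

def eps_perceptron(T, eta, sigma2):
    """Eq. (58): a single continuous perceptron, leading order in eta."""
    return eta * sigma2 / (2.0 * np.pi * np.sqrt(2.0 * T + 1.0))
```

For any $M$ and $L$, the identity `eps_scm(M, L, ...) == M * eps_perceptron(1, ...) + L * eps_perceptron(0, ...)` holds exactly.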

**Rescaling the learning rate by $L$.** The expression for the generalisation error in the limit of small learning rates might tempt one to rescale the learning rate by $L$ in order to mitigate the detrimental effect of the over-parameterisation. As we note in the main text, this leads to a longer training duration, which in our model implies that more data is required until the final generalisation error is achieved, both of which might not be feasible in practice. Moreover, we show in Fig. 9 that the asymptotic generalisation error (48) of a student trained using SGD with learning rate $\eta/L$ still increases with $L$ before plateauing at a constant value that is independent of $L$.

## Appendix C Asymptotic generalisation error of a noisy continuous perceptron

What is the asymptotic generalisation error of a continuous perceptron, i.e. a network with a single hidden unit, $K = 1$, in a teacher-student scenario where the teacher has additive Gaussian output noise? In this section, we repeat a calculation by Biehl and Schwarze [15], where the teacher's outputs are given by

$$y_B = g\!\left(\frac{B x}{\sqrt{N}}\right) + \zeta, \tag{51}$$

where $\zeta$ is again a Gaussian random variable with mean 0 and variance $\sigma^2$. We keep denoting the weights of the student by $w$ and the weights of the teacher by $B$. To analyse the generalisation dynamics, we introduce the order parameters

$$R \equiv \frac{wB}{N}, \qquad Q \equiv \frac{ww}{N} \qquad\text{and}\qquad T \equiv \frac{BB}{N}, \tag{52}$$

and we explicitly do not fix $T = 1$ for the moment. For $g(x) = \mathrm{erf}(x/\sqrt{2})$, they obey the following equations of motion:

$$\frac{\mathrm{d}R}{\mathrm{d}t} = \frac{2\eta}{\pi(Q(t)+1)}\left(\frac{TQ(t) - R(t)^2 + T}{\sqrt{(T+1)Q(t) - R(t)^2 + T + 1}} - \frac{R(t)}{\sqrt{2Q(t)+1}}\right), \tag{53}$$
$$\begin{aligned}
\frac{\mathrm{d}Q}{\mathrm{d}t} ={}& \frac{4\eta}{\pi(Q(t)+1)}\left(\frac{R(t)}{\sqrt{2(Q(t)+1) - R(t)^2}} - \frac{Q(t)}{\sqrt{2Q(t)+1}}\right) \\
&+ \frac{4\eta^2}{\pi^2\sqrt{2Q(t)+1}}\left[-2\arcsin\left(\frac{R(t)}{\sqrt{(6Q(t)+2)(2Q(t) - R(t)^2 + 1)}}\right) + \arcsin\left(\frac{2(Q(t) - R(t)^2) + 1}{2(2Q(t) - R(t)^2 + 1)}\right) + \arcsin\left(\frac{Q(t)}{3Q(t)+1}\right)\right] \\
&+ \frac{2\eta^2\sigma^2}{\pi\sqrt{2Q(t)+1}}. 
\end{aligned} \tag{54}$$
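These equations can be integrated with a simple Euler scheme. The sketch below is our own code (the $\eta^2$ terms of Eq. (54) are written for $T = 1$, so we fix $T = 1$ there):

```python
import numpy as np

def perceptron_odes(R, Q, eta, sigma2, T=1.0):
    """Right-hand sides of Eqs. (53)-(54); the eta^2 terms assume T = 1."""
    dR = 2.0 * eta / (np.pi * (Q + 1.0)) * (
        (T * Q - R**2 + T) / np.sqrt((T + 1.0) * Q - R**2 + T + 1.0)
        - R / np.sqrt(2.0 * Q + 1.0))
    dQ = 4.0 * eta / (np.pi * (Q + 1.0)) * (
        R / np.sqrt(2.0 * (Q + 1.0) - R**2) - Q / np.sqrt(2.0 * Q + 1.0))
    dQ += 4.0 * eta**2 / (np.pi**2 * np.sqrt(2.0 * Q + 1.0)) * (
        -2.0 * np.arcsin(R / np.sqrt((6.0 * Q + 2.0) * (2.0 * Q - R**2 + 1.0)))
        + np.arcsin((2.0 * (Q - R**2) + 1.0) / (2.0 * (2.0 * Q - R**2 + 1.0)))
        + np.arcsin(Q / (3.0 * Q + 1.0)))
    dQ += 2.0 * eta**2 * sigma2 / (np.pi * np.sqrt(2.0 * Q + 1.0))
    return dR, dQ

def integrate(R0=0.1, Q0=0.5, eta=0.5, sigma2=0.0, dt=0.01, t_max=200.0):
    """Euler integration of the order-parameter dynamics."""
    R, Q = R0, Q0
    for _ in range(int(t_max / dt)):
        dR, dQ = perceptron_odes(R, Q, eta, sigma2)
        R, Q = R + dt * dR, Q + dt * dQ
    return R, Q
```

Without output noise, $R$ and $Q$ converge to the perfect-generalisation fixed point $R = Q = T = 1$.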

The equations of motion have a fixed point at $Q = R = T$, which has perfect generalisation for $\sigma = 0$. We hence make a perturbative ansatz in $\sigma^2$,

$$Q(t) = T + \sigma^2 q(t), \tag{55}$$
$$R(t) = T + \sigma^2 r(t), \tag{56}$$

and find for the asymptotic generalisation error

$$\epsilon_g^* = \frac{\eta\sigma^2(4T+1)}{2\sqrt{2T+1}\left(-\eta\sqrt{8T^2 + 6T + 1} + 4\pi T + \pi\right)} + O(\sigma^3). \tag{57}$$

To first order in the learning rate, this reads

$$\epsilon_g^* = \frac{\eta\sigma^2}{2\pi\sqrt{2T+1}}, \tag{58}$$

which should be compared to the corresponding result for the full SCMs, Eq. (50).

## Appendix D Regularisation by weight decay does not help

A natural strategy to avoid the pitfalls of overfitting is to regularise the weights, for example by explicit weight decay, i.e. by choosing $\kappa > 0$ in the SGD update (31). We have not found a setup where adding weight decay improved the asymptotic generalisation error of a student compared to one trained without weight decay. In particular, weight decay completely fails to mitigate the increase of $\epsilon_g^*$ with $L$. We show the results of an illustrative experiment in Fig. 10.

## Appendix E SGD with mini-batches

One key characteristic of online learning is that we evaluate the gradient of the loss function using a single sample from the training set per step. In practice, it is more common to use a number of samples to estimate the gradient at every step. More precisely, the weight update equation for SGD with mini-batches of size $b$ reads:

$$w_k^{\mu+1} = w_k^\mu - \frac{\kappa}{N} w_k^\mu - \frac{\eta}{b\sqrt{N}} \sum_{\ell=1}^{b} x^{\mu,\ell}\, g'(\lambda_k^{\mu,\ell})\left[\phi(w, x^{\mu,\ell}) - y_B^{\mu,\ell}\right], \tag{59}$$

where $x^{\mu,\ell}$ is the $\ell$th input of the mini-batch used in the $\mu$th step of SGD, $\lambda_k^{\mu,\ell}$ is the local field of the $k$th student unit for the $\ell$th sample in the mini-batch, etc. Note that when we use every sample only once during training, using mini-batches of size $b$ increases the amount of data required by a factor of $b$ when keeping the number of steps constant.
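The update (59) can be sketched as follows (our own vectorised implementation, with a noiseless teacher for simplicity; for $b = 1$ it reduces to the online update (31)):

```python
import math
import numpy as np

erf = np.vectorize(math.erf)

def g(x):
    """Sigmoidal activation g(x) = erf(x / sqrt(2))."""
    return erf(x / math.sqrt(2))

def g_prime(x):
    return np.sqrt(2.0 / np.pi) * np.exp(-x ** 2 / 2.0)

def minibatch_sgd_step(w, B, X, eta, kappa=0.0):
    """One SGD step on a mini-batch X of shape (b, N), Eq. (59).

    w: (K, N) student weights, B: (M, N) noiseless teacher weights.
    """
    b, N = X.shape
    lam = w @ X.T / np.sqrt(N)                     # (K, b) student local fields
    nu = B @ X.T / np.sqrt(N)                      # (M, b) teacher local fields
    err = g(lam).sum(axis=0) - g(nu).sum(axis=0)   # (b,) residuals
    r = g_prime(lam) * err                         # per-sample error terms, Eq. (32)
    grad = (r @ X) / np.sqrt(N)                    # sum over the mini-batch
    return w - kappa / N * w - eta / b * grad
```

A student whose weights equal those of its noiseless teacher has zero residuals, so it is a fixed point of this update (for $\kappa = 0$).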

We show the asymptotic generalisation error of student networks of varying size trained using SGD with mini-batches on data from a fixed teacher in Fig. 11. Two trends are visible: first, increasing the size of the mini-batches decreases the asymptotic generalisation error up to a certain mini-batch size, after which the gains in generalisation error become minimal; and second, the shape of the curve is the same for all mini-batch sizes, with the minimal generalisation error attained by a network with $K = M$.

## Appendix F Using MNIST images for training and testing

In the derivation of the ODE description of online learning for the main text, we noted that only the first two moments of the input distribution matter for the learning dynamics and for the final generalisation error. The reason is that the inputs only appear in the equations of motion for the order parameters as a product with the weights of either the teacher or the student. Since the inputs are – by assumption – uncorrelated with those weights, this product is a sum of a large number of random variables and hence, by the central limit theorem, approximately Gaussian.

We have checked how our results change when this assumption breaks down in one example, where we train a network on a finite data set with non-trivial higher-order moments, namely the images of the MNIST data set. We studied the very same setup that we discuss throughout this work, namely the supervised learning of a regression task in the teacher-student scenario, only replacing the inputs, which would have been i.i.d. draws from the standard normal distribution, with the images of the MNIST data set. In particular, this means that we do not use the labels of the images. Figure 12 shows a plot of the resulting final generalisation error against the student size for both the MNIST data set and a data set of the same size comprised of i.i.d. draws from the standard normal distribution; the two are in good agreement.

## Appendix G Early-stopping generalisation error for finite training sets

A common way to prevent over-fitting of a neural network when training with a finite training set in practice is early stopping, where training is stopped before the training error has converged to its final value. The idea behind early stopping is thus to stop training before over-fitting sets in. For the purpose of our analysis of the generalisation of two-layer networks trained on a fixed finite data set in Sec. 4 of the main text, we define the early-stopping generalisation error as the minimum of $\epsilon_g(\alpha)$ during the whole training process. In Fig. 13, we reproduce Fig. 6 from the main text at the bottom and plot the early-stopping generalisation error obtained from the very same experiments at the top. While the ReLU networks showed very little to no over-training, the sigmoidal networks showed more significant over-training. However, the qualitative dependence of the generalisation errors on the network size was the same in this experiment. In particular, the early-stopping generalisation error also shows two different regimes, one where increasing the network hurts generalisation, and one where it improves generalisation or at least does not seem to affect it much.

## Appendix H Explicit form of the integrals appearing in the equations of motion of sigmoidal networks

To be as self-contained as possible, we collect here the explicit forms of the integrals $I_2$, $I_3$, $I_4$, and $J_2$ that appear in the equations of motion for the order parameters and in the generalisation error for networks with $g(x) = \mathrm{erf}(x/\sqrt{2})$, see Eq. (37). They were first given by [15, 16]. Each average is taken w.r.t. a multivariate normal distribution with mean 0 and covariance matrix $C$, whose components we denote with lowercase letters. The integration variables $u$ and $v$ always correspond to local fields of the student, while $w$ and $z$ can correspond to local fields of either the student or the teacher.

$$J_2 \equiv \langle g'(u)\, g'(v) \rangle = \frac{2}{\pi}\left(1 + c_{11} + c_{22} + c_{11}c_{22} - c_{12}^2\right)^{-1/2}, \tag{60}$$
$$I_2 \equiv \langle g(w)\, g(z) \rangle = \frac{2}{\pi} \arcsin\frac{c_{12}}{\sqrt{1+c_{11}}\sqrt{1+c_{22}}}, \tag{61}$$
$$I_3 \equiv \langle g'(u)\, w\, g(z) \rangle = \frac{2}{\pi} \frac{1}{\sqrt{\Lambda_3}}\, \frac{c_{23}(1+c_{11}) - c_{12}c_{13}}{1+c_{11}}, \tag{62}$$
$$I_4 \equiv \langle g'(u)\, g'(v)\, g(w)\, g(z) \rangle = \frac{4}{\pi^2} \frac{1}{\sqrt{\Lambda_4}} \arcsin\left(\frac{\Lambda_0}{\sqrt{\Lambda_1 \Lambda_2}}\right), \tag{63}$$

where
$$\Lambda_3 = (1+c_{11})(1+c_{33}) - c_{13}^2, \qquad \Lambda_4 = (1+c_{11})(1+c_{22}) - c_{12}^2, \tag{64}$$
and
$$\Lambda_0 = \Lambda_4 c_{34} - c_{23}c_{24}(1+c_{11}) - c_{13}c_{14}(1+c_{22}) + c_{12}c_{13}c_{24} + c_{12}c_{14}c_{23},$$
$$\Lambda_1 = \Lambda_4(1+c_{33}) - c_{23}^2(1+c_{11}) - c_{13}^2(1+c_{22}) + 2c_{12}c_{13}c_{23},$$
$$\Lambda_2 = \Lambda_4(1+c_{44}) - c_{24}^2(1+c_{11}) - c_{14}^2(1+c_{22}) + 2c_{12}c_{14}c_{24}.$$
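The closed forms for $J_2$ and $I_2$ are easy to check against Monte Carlo estimates; the sketch below (our own verification code) evaluates them for a chosen covariance of the two local fields:

```python
import math
import numpy as np

erf = np.vectorize(math.erf)

def g(x):
    """Sigmoidal activation g(x) = erf(x / sqrt(2))."""
    return erf(x / math.sqrt(2))

def g_prime(x):
    return np.sqrt(2.0 / np.pi) * np.exp(-x ** 2 / 2.0)

def j2_closed(c11, c12, c22):
    """Eq. (60)."""
    return 2.0 / np.pi / np.sqrt(1.0 + c11 + c22 + c11 * c22 - c12 ** 2)

def i2_closed(c11, c12, c22):
    """Eq. (61)."""
    return 2.0 / np.pi * np.arcsin(
        c12 / (np.sqrt(1.0 + c11) * np.sqrt(1.0 + c22)))
```

Sampling the two local fields from $\mathcal{N}(0, C)$ with $C = \begin{pmatrix} c_{11} & c_{12} \\ c_{12} & c_{22} \end{pmatrix}$ and averaging $g'(u)g'(v)$ or $g(u)g(v)$ reproduces these values.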