Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup

06/18/2019 · by Sebastian Goldt, et al.

Deep neural networks achieve stellar generalisation even when they have enough parameters to easily fit all their training data. We study the dynamics and the performance of two-layer neural networks in the teacher-student setup, where one network, the student, is trained on data generated by another network, called the teacher, using stochastic gradient descent (SGD). We show how the dynamics of SGD is captured by a set of differential equations and prove that this description is asymptotically exact in the limit of large inputs. Using this framework, we calculate the final generalisation error of student networks that have more parameters than their teachers. We find that the final generalisation error of the student increases with network size when training only the first layer, but stays constant or even decreases with size when training both layers. We show that these different behaviours have their root in the different solutions SGD finds for different activation functions. Our results indicate that achieving good generalisation in neural networks goes beyond the properties of SGD alone and depends on the interplay of at least the algorithm, the model architecture, and the data set.


1 Online learning in teacher-student neural networks

We consider a supervised regression problem with training set $\{(x^\mu, y^\mu)\}$. The components of the inputs $x^\mu \in \mathbb{R}^N$ are i.i.d. draws from the standard normal distribution $\mathcal{N}(0, 1)$. The scalar outputs $y^\mu$ are the output of a network with $M$ hidden units, a non-linear activation function $g(\cdot)$ and fixed weights $\theta^* = (v^*, w^*)$, with an additive output noise $\zeta^\mu$; this network is called the teacher (see also Fig. 1a):

(1)  $y^\mu = \phi(x^\mu, \theta^*) + \sigma \zeta^\mu = \sum_{m=1}^{M} v^*_m\, g\!\left(\frac{w^*_m \cdot x^\mu}{\sqrt{N}}\right) + \sigma \zeta^\mu,$

where $w^*_m$ is the $m$th row of $w^*$, and the local field of the $m$th teacher node is $\nu_m = w^*_m \cdot x / \sqrt{N}$. We will analyse three different network types: sigmoidal with $g(x) = \mathrm{erf}(x/\sqrt{2})$, ReLU with $g(x) = \max(0, x)$, and linear networks where $g(x) = x$.

A second two-layer network with $K$ hidden units and weights $\theta = (v, w)$, called the student, is then trained using SGD on the quadratic training loss $E(\theta) = \frac{1}{2}\left[\phi(x^\mu, \theta) - y^\mu\right]^2$. We emphasise that the student network may have a larger number of hidden units than the teacher ($K \ge M$) and thus be over-parameterised with respect to the generative model of its training data.

The SGD algorithm defines a Markov process with update rule given by the coupled SGD recursion relations

(2)  $w_k^{\mu+1} = w_k^\mu - \frac{\eta_w}{\sqrt{N}}\, v_k^\mu\, g'(\lambda_k^\mu)\, \Delta^\mu\, x^\mu,$
(3)  $v_k^{\mu+1} = v_k^\mu - \frac{\eta_v}{N}\, g(\lambda_k^\mu)\, \Delta^\mu.$

We can choose different learning rates $\eta_w$ and $\eta_v$ for the two layers; we denote by $g'(\lambda_k^\mu)$ the derivative of the activation function evaluated at the local field of the student's $k$th hidden unit, $\lambda_k^\mu = w_k^\mu \cdot x^\mu / \sqrt{N}$, and we defined the error term $\Delta^\mu = \phi(x^\mu, \theta^\mu) - y^\mu$. We will use the indices $i, j, k, \ell$ to refer to student nodes, and $n, m$ to denote teacher nodes. We take initial weights at random from a normal distribution for sigmoidal networks, while initial weights for ReLU and linear networks are drawn with a different variance.
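As a concrete illustration of this setup, here is a minimal NumPy sketch of the online training loop defined by Eqs. (1)–(3); the sizes, learning rates, noise level and number of steps are arbitrary illustrative choices, not the values used in the experiments below. For the soft committee machines of Sec. 2, the second-layer update is simply omitted and $v$ is kept fixed.

```python
import numpy as np
from scipy.special import erf

# Illustrative sizes and constants (not the values used in the paper's experiments)
N, M, K = 500, 2, 5          # input dimension, teacher and student hidden units
lr_w, lr_v = 0.2, 0.2        # learning rates for first and second layer
sigma = 0.01                 # std of the teacher's additive output noise

g = lambda x: erf(x / np.sqrt(2))                       # sigmoidal activation
dg = lambda x: np.sqrt(2 / np.pi) * np.exp(-x**2 / 2)   # its derivative

# Teacher: fixed first-layer weights (M x N) and second-layer weights (M,)
w_star = np.random.randn(M, N)
v_star = np.ones(M)

# Student: random initial weights
w = np.random.randn(K, N)
v = np.random.randn(K)

def phi(x, w, v):
    """Two-layer network output; local fields are w.x / sqrt(N)."""
    return v @ g(w @ x / np.sqrt(N))

for step in range(200_000):            # online learning: a fresh sample at every step
    x = np.random.randn(N)
    y = phi(x, w_star, v_star) + sigma * np.random.randn()

    lam = w @ x / np.sqrt(N)           # student local fields
    delta = phi(x, w, v) - y           # error term

    # SGD updates (2) and (3); note the 1/sqrt(N) and 1/N scalings
    w -= lr_w / np.sqrt(N) * delta * (v * dg(lam))[:, None] * x[None, :]
    v -= lr_v / N * delta * g(lam)
```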

The key quantity in our approach is the generalisation error of the student with respect to the teacher:

(4)  $\epsilon_g(\theta, \theta^*) \equiv \frac{1}{2} \left\langle \left[ \phi(x, \theta) - \phi(x, \theta^*) \right]^2 \right\rangle,$

where the angled brackets $\langle\cdot\rangle$ denote an average over the input distribution. We can make progress by realising that $\epsilon_g$ can be expressed as a function of a set of macroscopic variables, called order parameters in statistical physics [21, 41, 42],

(5)  $R_{kn} \equiv \frac{w_k \cdot w^*_n}{N}, \qquad Q_{k\ell} \equiv \frac{w_k \cdot w_\ell}{N},$

together with the second-layer weights $v$ and $v^*$. Intuitively, the teacher-student overlaps $R_{kn}$ measure the overlap or the similarity between the weights of the $k$th student node and the $n$th teacher node. The matrix $Q_{k\ell}$ quantifies the overlap of the weights of different student nodes with each other, and the corresponding overlaps of the teacher nodes are collected in the matrix $T_{nm} \equiv w^*_n \cdot w^*_m / N$. We will find it convenient to collect all order parameters in a single vector

(6)  $m \equiv (R, Q, v),$

and we write the full expression for $\epsilon_g(m)$ in Eq. (S30).
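In code, the order parameters are just rescaled dot products of the weight rows; a short sketch (assuming the same array layout as in the snippet above):

```python
import numpy as np

def order_parameters(w, w_star):
    """Compute the overlap matrices of Eq. (5) from student weights w (K x N)
    and teacher weights w_star (M x N)."""
    N = w.shape[1]
    R = w @ w_star.T / N       # teacher-student overlaps, K x M
    Q = w @ w.T / N            # student-student overlaps, K x K
    T = w_star @ w_star.T / N  # teacher-teacher overlaps,  M x M
    return R, Q, T
```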

In a series of classic papers, Biehl, Schwarze, Saad, Solla and Riegler [41, 42, 43, 44, 45] derived a closed set of ordinary differential equations for the time evolution of the order parameters (see SM Sec. B). Together with the expression for the generalisation error $\epsilon_g(m)$, these equations give a complete description of the generalisation dynamics of the student, which they analysed for the special case when only the first layer is trained [43, 45]. Our first contribution is to provide a rigorous foundation for these results by proving the following theorem.

Theorem 1.1.

Assume that (A1) both the sequences $x^\mu$ and $\zeta^\mu$, $\mu = 1, 2, \dots$, are i.i.d. random variables; $x^\mu$ is drawn from a normal distribution with mean 0 and covariance matrix $\mathbb{I}_N$, while $\zeta^\mu$ is a Gaussian random variable with mean zero and unit variance; (A2) the function $g(x)$ is bounded and its derivatives up to and including the second order exist and are bounded, too; (A3) the initial macroscopic state $m^0$ is deterministic and bounded by a constant; (A4) the constants $\eta_w$, $\eta_v$, $\sigma$, $M$ and $K$ are all finite. Define the scaled time $t \equiv \mu / N$.

Choose $T > 0$. Under assumptions (A1) – (A4), and for any $\mu \le NT$, the macroscopic state $m^\mu$ satisfies

(7)  $\max_{0 \le \mu \le NT} \mathbb{E}\, \big\| m^\mu - m(\mu / N) \big\| \le \frac{C(T)}{\sqrt{N}},$

where $C(T)$ is a constant depending on $T$, but not on $N$, and $m(t)$ is a deterministic function that is the unique solution of the ODE

(8)  $\frac{\mathrm{d}\, m(t)}{\mathrm{d} t} = f\big(m(t)\big)$

with initial condition $m(0) = m^0$. In particular, we have

(9a)  $\frac{\mathrm{d} R_{kn}}{\mathrm{d} t} = -\eta_w\, v_k\, \mathbb{E}\big[ \Delta\, g'(\lambda_k)\, \nu_n \big],$
(9b)  $\frac{\mathrm{d} Q_{k\ell}}{\mathrm{d} t} = -\eta_w\, v_k\, \mathbb{E}\big[ \Delta\, g'(\lambda_k)\, \lambda_\ell \big] - \eta_w\, v_\ell\, \mathbb{E}\big[ \Delta\, g'(\lambda_\ell)\, \lambda_k \big] + \eta_w^2\, v_k v_\ell\, \mathbb{E}\big[ \Delta^2\, g'(\lambda_k)\, g'(\lambda_\ell) \big],$
(9c)  $\frac{\mathrm{d} v_k}{\mathrm{d} t} = -\eta_v\, \mathbb{E}\big[ \Delta\, g(\lambda_k) \big],$

where the averages $\mathbb{E}[\cdot]$ are taken over the local fields $\{\lambda_k, \nu_n\}$ and the output noise $\zeta$.

We prove Theorem 1.1 using the theory of convergence of stochastic processes and a coupling trick introduced recently by Wang et al. [46]; the proof is given in Sec. A of the SM. The content of the theorem is illustrated in Fig. 1b, where we plot $\epsilon_g$ obtained by numerically integrating (9) (solid) and from a single run of SGD (2) (crosses) for sigmoidal students with varying $K$; the two are in very good agreement.

Figure 1: The analytical description of the generalisation dynamics of sigmoidal networks matches experiments. (a) We consider two-layer neural networks with a very large input layer. (b) We plot the learning dynamics $\epsilon_g(t)$ obtained by integration of the ODEs (9) (solid) and from a single run of SGD (2) (crosses) for students with different numbers of hidden units $K$. The insets show the values of the teacher-student overlaps $R_{kn}$ (5) for one of the students at the two times indicated by the arrows.

Given a set of non-linear, coupled ODEs such as Eqns. (9), finding the asymptotic fixed points analytically to compute the generalisation error is all but impossible. In the following, we therefore focus on analysing the asymptotic fixed points found by numerically integrating the equations of motion. First, the form of these fixed points will reveal that SGD finds solutions with drastically different performance for the different activation functions and setups we consider. Second, knowledge of these fixed points allows us to make analytical, quantitative predictions for the asymptotic performance of the networks which agree well with experiments. We also note that several recent theorems [29, 31, 30] about the global convergence of SGD do not apply in our setting because we have a finite number of hidden units.

2 Asymptotic generalisation error of Soft Committee Machines

We will first study networks where the second-layer weights are fixed at $v_k = v^*_m = 1$. These networks are called a Soft Committee Machine (SCM) in the statistical physics literature and are the case studied most commonly so far [41, 42, 43, 45, 18, 27]. One notable feature of the generalisation dynamics of SCMs is the existence of a long plateau with sub-optimal generalisation error during training. During this period, all student nodes have roughly the same overlap with all the teacher nodes (left inset in Fig. 1b). As training continues, the student nodes “specialise” and each of them becomes strongly correlated with a single teacher node (right inset), leading to a sharp decrease in $\epsilon_g$. This effect is well known for both batch and online learning [18] and will be key for our analysis.

Let us now use the equations of motion (9) to analyse the asymptotic generalisation error of neural networks, $\epsilon_g^*$, after training has converged, and in particular its scaling with $K$. Our first step is to reduce the remaining equations of motion to a set of eight coupled differential equations for any combination of $K$ and $M$ in Sec. C. This enables us to obtain a closed-form expression for $\epsilon_g^*$ as follows.

In the absence of output noise ($\sigma = 0$), the generalisation error of a student with $K \ge M$ will asymptotically tend to zero as $t \to \infty$. On the level of the order parameters, this corresponds to reaching a stable fixed point of (9) with $\epsilon_g = 0$. In the presence of small output noise $\sigma > 0$, this fixed point becomes unstable and the order parameters instead converge to another, nearby fixed point with $\epsilon_g^* > 0$. The values of the order parameters at that fixed point can be obtained by perturbing Eqns. (9) to first order in $\sigma^2$, and the corresponding generalisation error turns out to be in excellent agreement with the generalisation error obtained when training a neural network using (2) from random initial conditions, which we show in Fig. 2a.

Sigmoidal networks.

We have performed this calculation for teacher and student networks with $g(x) = \mathrm{erf}(x/\sqrt{2})$. We relegate the details to Sec. C.2, and content ourselves here with stating the asymptotic value of the generalisation error to first order in $\sigma^2$,

(10)

where the function appearing in Eq. (10) is a lengthy rational function of its variables. We plot our result in Fig. 2a together with the final generalisation error obtained in a single run of SGD (2) for a neural network with initial weights drawn i.i.d. from a normal distribution, and find excellent agreement, which we confirmed for a range of parameter values.

Figure 2: The asymptotic generalisation error of Soft Committee Machines increases with the network size. (a) Our theoretical predictions for $\epsilon_g^*$ for sigmoidal (solid) and linear (dashed) networks, Eqns. (10) and (12), agree perfectly with the results obtained from a single run of SGD (2) starting from random initial weights (crosses). (b) The final overlap matrices $Q$ and $R$ (5) at the end of an experiment. Networks with sigmoidal activation function (top) show clear signs of specialisation as described in Sec. 2. ReLU networks (bottom) instead converge to solutions where all of the student’s nodes have finite overlap with teacher nodes.

One notable feature of Fig. 2a is that, with all else being equal, SGD alone fails to regularise the student networks of increasing size in our setup, instead yielding students whose generalisation error increases linearly with $K$. One might be tempted to mitigate this effect by simultaneously decreasing the learning rate for larger students. However, lowering the learning rate incurs longer training times, which requires more data for online learning. This trade-off is also found in statistical learning theory, where models with more parameters (higher $K$) and thus a higher complexity class (e.g. VC dimension or Rademacher complexity [4]) generalise just as well as smaller ones when given more data. In practice, however, more data might not be readily available, and we show in Fig. S2 of the SM that even when choosing a learning rate that decreases with the student size, the generalisation error still increases with $K$ before plateauing at a constant value.

We can gain some intuition for the scaling of $\epsilon_g^*$ by considering the asymptotic overlap matrices $Q$ and $R$ shown in the left half of Fig. 2b. In the over-parameterised case, student nodes are effectively trying to specialise to teacher nodes which do not exist, or equivalently, have weights zero. These student nodes do not carry any information about the teacher's output, but they pick up fluctuations from the output noise and thus increase $\epsilon_g^*$. This intuition is borne out by an expansion of $\epsilon_g^*$ in the limit of small learning rate $\eta$, which yields

(11)

which is indeed the sum of the error of $M$ independent hidden units that are each specialised to a single teacher hidden unit, and of $K - M$ superfluous units, each contributing the error of a hidden unit that is “learning” from a teacher hidden unit with zero weights (see also Sec. D of the SM).

Linear networks.

Two possible explanations for the scaling of $\epsilon_g^*$ with $K$ in sigmoidal networks may be the specialisation of the hidden units or the fact that teacher and student network can implement functions of different range if $K > M$. To test these hypotheses, we calculated $\epsilon_g^*$ for linear neural networks [47, 48] with $g(x) = x$. Linear networks lack a specialisation transition [27] and their output range is set by the magnitude of their weights, rather than by their number of hidden units. Following the same steps as before, a perturbative calculation in the limit of small noise variance yields

(12)

This result is again in perfect agreement with experiments, as we demonstrate in Fig. 2a. In the limit of small learning rates, Eq. (12) simplifies to yield the same scaling with $K$ as for sigmoidal networks,

(13)

This shows that the scaling of $\epsilon_g^*$ with $K$ is not just a consequence of either specialisation or the mismatched range of the networks’ output functions. The optimal number of hidden units for linear networks is $K = 1$ for all $M$, because linear networks implement an effective linear transformation with an effective matrix given by the sum of their first-layer weight vectors, $\sum_k w_k$. Adding hidden units to a linear network hence does not augment the class of functions it can implement, but it adds redundant parameters which pick up fluctuations from the teacher's output noise, increasing $\epsilon_g^*$.
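This equivalence is easy to check numerically. The sketch below uses arbitrary illustrative sizes and a general second layer v; for the SCM of this section, v would simply be a vector of ones:

```python
import numpy as np

N, K = 100, 5
w = np.random.randn(K, N)          # first-layer weights
v = np.random.randn(K)             # second-layer weights
x = np.random.randn(N)

out_network = v @ (w @ x) / np.sqrt(N)   # two-layer linear network
w_eff = v @ w                            # effective single-layer weight vector
out_effective = w_eff @ x / np.sqrt(N)

assert np.allclose(out_network, out_effective)
```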

ReLU networks.

The analytical calculation of $\epsilon_g^*$ described above poses some additional technical challenges for ReLU networks, so we resort to experiments to investigate this case. We found that the asymptotic generalisation error of a ReLU student learning from a ReLU teacher has the same scaling as the one we found analytically for networks with sigmoidal and linear activation functions: it increases linearly with $K$ (see Fig. S3). Looking at the final overlap matrices $Q$ and $R$ for ReLU networks in the bottom half of Fig. 2b, we see that instead of the one-to-one specialisation of sigmoidal networks, all student nodes have a finite overlap with some teacher node. This is a consequence of the fact that it is much simpler to re-express the sum of $M$ ReLU units with $K > M$ ReLU units. However, there are still many redundant degrees of freedom in the student, which all pick up fluctuations from the teacher's output noise and increase $\epsilon_g^*$.

Discussion.

The key result of this section has been that the generalisation error of SCMs scales as

(14)  $\epsilon_g^* \sim \eta\, \sigma^2\, K.$

Before moving on to the full two-layer network, we discuss a number of experiments that we performed to check the robustness of this result (details can be found in Sec. G of the SM). A standard regularisation method is adding weight decay to the SGD updates (2). However, we did not find a scenario in our experiments where weight decay improved the performance of a student with $K > M$. We also made sure that our results persist when performing SGD with mini-batches. We investigated the impact of higher-order correlations in the inputs by replacing the Gaussian inputs with MNIST images, with all other aspects of our setup unchanged, and found the same increase of $\epsilon_g^*$ with $K$ as for Gaussian inputs. Finally, we analysed the impact of having a finite training set. The behaviour of linear networks and of non-linear networks with large but finite training sets did not change qualitatively. However, as we reduced the size of the training set, we found that the lowest asymptotic generalisation error was obtained with networks that have $K = M$.

3 Training both layers: Asymptotic generalisation error of a neural network

We now study the performance of two-layer neural networks when both layers are trained according to the SGD updates (2) and (3). We set all the teacher's second-layer weights equal to a constant value to ensure comparability between experiments. However, we train all second-layer weights of the student independently and do not rely on the fact that all second-layer teacher weights have the same value. Note that learning the second layer is not needed from the point of view of statistical learning: the networks from the previous section are already expressive enough to capture the teacher, and we are thus slightly increasing the over-parameterisation even further. Yet, we will see that the generalisation properties are significantly enhanced.

Figure 3: The performance of sigmoidal networks improves with network size when training both layers with SGD. (a) Generalisation dynamics observed experimentally for students with increasing $K$, with all other parameters being equal. (b) Overlap matrices $Q$, $R$, and second-layer weights $v$ of the student at the end of one of the runs shown in (a). (c) Theoretical prediction for $\epsilon_g^*$ (solid) against the value observed after integration of the ODEs (9) until convergence (crosses).

Sigmoidal networks.

We plot the generalisation dynamics of students with increasing $K$ trained on a teacher with $M = 2$ in Fig. 3a. Our first observation is that increasing the student size $K$ decreases the asymptotic generalisation error $\epsilon_g^*$, with all other parameters being equal, in stark contrast to the SCMs of the previous section.

A look at the order parameters after convergence in the experiments of Fig. 3a reveals the intriguing pattern of specialisation of the student’s hidden units behind this behaviour, shown for a student with $K = 5$ in Fig. 3b. First, note that all the hidden units of the student have non-negligible first-layer weights. Two student nodes have specialised to the first teacher node, i.e. their first-layer weights are very close to the weights of the first teacher node. The second-layer weights of these two nodes approximately sum to the second-layer weight of that teacher node. Summing the output of these two student hidden units is thus approximately equivalent to taking an empirical average of two estimates of the output of the teacher node. The remaining three student nodes all specialised to the second teacher node, and their outgoing weights approximately sum to the second-layer weight of that node. This pattern suggests that SGD has found a set of weights for both layers where the student’s output is a weighted average of several estimates of the outputs of the teacher’s nodes. We call this the denoising solution, and note that it resembles the solutions found in the mean-field limit of an infinite hidden layer [29, 31], where the neurons become redundant and follow a distribution dynamics (in our case, a simple one with few peaks, as e.g. in Fig. 1 of [31]).

We confirmed this intuition by using an ansatz for the order parameters that corresponds to a denoising solution to solve the equations of motion (9) perturbatively in the limit of small noise, and to calculate $\epsilon_g^*$ for sigmoidal networks after training both layers, similarly to the approach of Sec. 2. While this approach can be extended to any $K$ and $M$, we focused on a special case to obtain manageable expressions; see Sec. E of the SM for details of the derivation. While the final expression is again too long to be given here, we plot it with solid lines in Fig. 3c. The crosses in the same plot are the asymptotic generalisation error obtained by integrating the ODEs (9) starting from random initial conditions, and show very good agreement.

While our result holds more generally, we note from Fig. 3c that the different curves are qualitatively similar. We find a particularly simple result in the limit of small learning rates, where:

(15)

This result should be contrasted with the linear increase of $\epsilon_g^*$ with $K$ found for the SCM (14).

Experimentally, we robustly observed that training both layers of the network yields better performance than training only the first layer with the second-layer weights fixed to 1. However, convergence to the denoising solution can be difficult for large students, which might get stuck on a long plateau where their nodes are not evenly distributed among the teacher nodes. While it is easy to check that such a network has a higher value of $\epsilon_g^*$ than the denoising solution, the difference is small, and hence the driving force that pushes the student out of the corresponding plateaus is small, too. These observations demonstrate that in our setup, SGD does not always find the solution with the lowest generalisation error in finite time.

Figure 4: Asymptotic performance of linear two-layer networks. Error bars indicate one standard deviation over five runs.

ReLU and linear networks.

We found experimentally that $\epsilon_g^*$ remains constant with increasing $K$ in ReLU and in linear networks when training both layers. We plot an exemplary curve for linear networks in green in Fig. 4, but note that the entire figure looks qualitatively the same for ReLU networks (Fig. S4). This behaviour was also observed in linear networks trained by batch gradient descent, starting from small initial weights [49]. While this constant scaling is an improvement over the increase with $K$ for the SCM (blue curve), it is not the decay with $K$ that we observed for sigmoidal networks. A possible explanation is the lack of specialisation in linear and ReLU networks (see Sec. 2), without which the denoising solution found in sigmoidal networks is not possible. Indeed, in our experiments we always found that after convergence, every student node had a finite overlap with all the teacher nodes. We also considered normalised SCMs, where we train only the first layer and fix the second-layer weights of the student and the teacher at $1/K$ and $1/M$, respectively. The asymptotic error of normalised SCMs decreases with $K$ (orange curve in Fig. 4), because the second-layer weights effectively reduce the learning rate, as can be easily seen from the SGD update (2), and we know from our analysis of linear SCMs in Sec. 2 that $\epsilon_g^*$ decreases with the learning rate. In SM Sec. F we show analytically how an imbalance between the norms of the first- and second-layer weights can lead to a larger effective learning rate. Normalised SCMs also beat the performance of students where we trained both layers, starting from small initial weights in both cases. This is surprising, because we checked experimentally that the weights of a normalised SCM after training are a fixed point of the SGD dynamics when training both layers. However, we confirmed experimentally that SGD does not find this fixed point when starting from random initial weights.
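To spell out the effective-learning-rate argument in the notation of the update (2) (a short sketch; the general case of unbalanced layer norms is treated in SM Sec. F): fixing the student's second-layer weights at $v_k = 1/K$ turns the first-layer update into
\[
w_k^{\mu+1} = w_k^\mu - \frac{\eta_w}{\sqrt{N}}\, \frac{1}{K}\, g'(\lambda_k^\mu)\, \Delta^\mu\, x^\mu ,
\]
so a normalised SCM with $K$ hidden units effectively trains its first layer with learning rate $\eta_w / K$, which shrinks as the student grows.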

Discussion.

The qualitative difference between training both or only the first layer of neural networks is particularly striking for linear networks, where fixing one layer does not change the class of functions the model can implement, but makes a dramatic difference for their asymptotic performance. This observation highlights two important points: first, the performance of a network is not just determined by the number of additional parameters, but also by how the additional parameters are distributed in the model. Second, the non-linear dynamics of SGD means that changing which weights are trainable can alter the training dynamics in unexpected ways. We saw this for two-layer linear networks, where SGD did not find the optimal fixed point, and in the non-linear sigmoidal networks, where training the second layer allowed the student to decrease its final error with every additional hidden unit instead of increasing it like in the SCM.

Acknowledgements

SG and LZ acknowledge funding from the ERC under the European Union’s Horizon 2020 Research and Innovation Programme Grant Agreement 714608-SMiLe. MA thanks the Swartz Program in Theoretical Neuroscience at Harvard University for support. AS acknowledges funding by the European Research Council, grant 725937 NEUROABSTRACTION. FK acknowledges support from “Chaire de recherche sur les modèles et sciences des données”, Fondation CFM pour la Recherche-ENS, and from the French National Research Agency (ANR) grant PAIL.

References

Appendix A Proof of Theorem 1.1

A.1 Outline

We will prove Theorem 1.1 in two steps. First, we will show that the mean values of the increments of the order parameters $R$, $Q$ and $v$ are given by the expressions used in the equations of motion (Lemma A.1) and that they concentrate, i.e. that their variance is bounded by a term that vanishes as $N \to \infty$. This ensures that the leading order of the average increment is captured by the ODE of Theorem 1.1, and that the stochastic part of the increment of the order parameters can be ignored in the thermodynamic limit $N \to \infty$. In other words, the two bounds ensure that the stochastic Markov process converges to a deterministic process. To complete the proof, we use a form of the coupling trick as described by Wang et al. [46].

A.2 First moments of the increment

Lemma A.1.

Under the same setting as Theorem 1.1, for all steps $\mu$, we have

(S1)
Proof.

We first recall that $m$ contains all time-dependent order parameters $R$, $Q$, and $v$, so we will prove the Lemma for each of them in turn. In fact, in each case we can prove a slightly stronger result which encompasses the required bound.

For the teacher-student overlaps $R_{kn}$, we multiply the update (2) by $w^*_n / N$ on both sides and find that

(S2)  $R^{\mu+1}_{kn} - R^\mu_{kn} = -\frac{\eta_w}{N}\, v^\mu_k\, g'(\lambda^\mu_k)\, \Delta^\mu\, \nu^\mu_n.$

The local field of the teacher, $\nu^\mu_n = w^*_n \cdot x^\mu / \sqrt{N}$, is a Gaussian random variable with mean zero and variance $T_{nn}$. Taking the conditional expectation, we find

(S3)  $\mathbb{E}\left[ R^{\mu+1}_{kn} - R^\mu_{kn} \,\middle|\, m^\mu \right] = -\frac{\eta_w}{N}\, v^\mu_k\, \mathbb{E}\big[ g'(\lambda_k)\, \Delta\, \nu_n \big],$

as required.

For the student-student overlaps $Q_{k\ell}$, we multiply the update (2) by $w_\ell / N$ and find that

(S4)  $Q^{\mu+1}_{k\ell} - Q^\mu_{k\ell} = -\frac{\eta_w}{N}\, v^\mu_k\, g'(\lambda^\mu_k)\, \Delta^\mu\, \lambda^\mu_\ell - \frac{\eta_w}{N}\, v^\mu_\ell\, g'(\lambda^\mu_\ell)\, \Delta^\mu\, \lambda^\mu_k + \frac{\eta_w^2}{N}\, v^\mu_k v^\mu_\ell\, g'(\lambda^\mu_k)\, g'(\lambda^\mu_\ell)\, (\Delta^\mu)^2\, \frac{x^\mu \cdot x^\mu}{N}.$

Using assumption (A1), we see that the term $x^\mu \cdot x^\mu / N$ concentrates to yield 1 by the central limit theorem. Thus we find, after taking the conditional expectation of both sides and using this concentration, that

(S5)  $\mathbb{E}\left[ Q^{\mu+1}_{k\ell} - Q^\mu_{k\ell} \,\middle|\, m^\mu \right] = -\frac{\eta_w}{N}\left( v^\mu_k\, \mathbb{E}\big[ g'(\lambda_k)\, \Delta\, \lambda_\ell \big] + v^\mu_\ell\, \mathbb{E}\big[ g'(\lambda_\ell)\, \Delta\, \lambda_k \big] \right) + \frac{\eta_w^2}{N}\, v^\mu_k v^\mu_\ell\, \mathbb{E}\big[ g'(\lambda_k)\, g'(\lambda_\ell)\, \Delta^2 \big].$

Finally, it is easy to convince oneself that taking the conditional expectation of the update (3) for the second-layer weights yields

(S6)  $\mathbb{E}\left[ v^{\mu+1}_k - v^\mu_k \,\middle|\, m^\mu \right] = -\frac{\eta_v}{N}\, \mathbb{E}\big[ g(\lambda_k)\, \Delta \big],$

which completes the proof of Lemma A.1. ∎

A.3 Second moments of the increment

We now proceed to bound the second-order moments of the increments of the time-dependent order parameters. We collect these bounds in the following lemma:

Lemma A.2.

Under the assumptions of Theorem 1.1, for all steps $\mu$, we have that

(S7)

Before proceeding with the proof, we state a simple technical lemma that will be helpful in the following; we relegate its proof to Sec. A.5.

Lemma A.3.

Under the same assumptions as Theorem 1.1, we have for all steps $\mu \le NT$ that

(S8)

where $C$ is a constant independent of $N$.

Proof of Lemma A.2.

We first note that all order parameters obey update equations of the form

(S9)

where we have emphasised that the update function may depend on all order parameters at time $\mu$ and on the $\mu$th sample shown to the student. For the variance of an order parameter, a little algebra yields the recursion relation

(S10)

We will now use complete induction to show that for any step $\mu$, the update of the variance at every step is bounded by a term of the required order. In particular, this means showing that the cross term in (S10) scales in the required way.

For the induction start, we note that by Assumption (A3), the initial macroscopic state $m^0$ is deterministic and bounded. Hence the variance of any order parameter after a single step of SGD reads

(S11)
(S12)

In going from the first to the second line, we have used that all order parameters are uncorrelated at the initial step, since the weights are initially uncorrelated.

For the induction step, we assume that the variance after $\mu$ steps is of the required order. By using the existence and boundedness of the derivatives of the activation function, we can expand the terms in (S10) using a multivariate Taylor expansion in the deviation of the macroscopic state from its mean. We find that

(S13)

We are justified in truncating the expansion since we assumed that the variance after $\mu$ steps is small. If the update functions are bounded by a constant, this completes the induction and shows that the variance of the increment of the order parameters is bounded as required.

It is easy to check that the update functions of all three order parameters $R$, $Q$ and $v$ fulfil this condition because of the boundedness of $g$ and its derivatives (A2) and because of Lemma A.3, which completes the proof of Lemma A.2. ∎

A.4 Putting it all together

Having proved both Lemmas A.1 and A.2, we can proceed to prove Theorem 1.1 by using the coupling trick in the form given by Wang et al. [46] for another online learning problem, namely the training of generative adversarial networks. We paraphrase the coupling trick as given by Wang et al. in the following to make the proof self-contained and refer to the supplemental material of their paper for additional details.

Proof of Theorem 1.1.

We first define a stochastic process that is coupled with the Markov process as

(S14)

Wang et al. [46] showed that for such a process, when Lemma A.1 holds, we have that

(S15)

for all . We then define a deterministic process

(S16)

which is a standard first-order finite difference approximation of the equations of motion (9), for which the standard Euler argument gives

(S17)

Wang et al. [46] further showed that for such a process, using Lemma A.2, we have

(S18)

Finally, combining Eqs. (S15), (S18) and (S17), we have

(S19)

which completes the proof. ∎

A.5 Additional proof details

Proof of Lemma A.3.

The increment of the second-layer weight $v_k$ reads explicitly

(S20)

To bound the value of $v_k$ after $\mu$ steps, we consider each of the three terms in the sum in turn. We first note that the sum of the output noise variables is a simple sum over uncorrelated (sub-)Gaussian random variables rescaled by $\eta_v / N$, and is thus, by Hoeffding's inequality, almost surely smaller than a constant [50].

For the first two terms, we can use an argument similar to the one used to prove the bound on the variance of the increment of the order parameters. We first note that $g$ is a bounded function by Assumption (A2) and that the initial conditions of the second-layer weights are bounded by a constant by Assumption (A3). Hence, after a first step, the weight has increased by a term bounded by a constant times $1/N$. Indeed, at every step where the weight is bounded by a constant, its increase will be bounded by a constant times $1/N$. Hence the magnitude of $v^\mu_k$ remains bounded by a constant for all $\mu \le NT$, as required. ∎

Appendix B Derivation of the ODE description of the generalisation dynamics of online learning

Here we demonstrate how to evaluate the averages in the equations of motion for the order parameters (9), following the classic work by Biehl and Schwarze [41] and Saad and Solla [42, 43]. We repeat the two main technical assumptions of our work, namely having a large network ($N \to \infty$) and a data set that is large enough to allow us to visit every sample only once before training converges. Both will play a key role in the following computations.

B.1 Expressing the generalisation error in terms of order parameters

We first demonstrate how the assumptions stated above allow us to rewrite the generalisation error in terms of a number of order parameters. We have

(S21)  $\epsilon_g(\theta, \theta^*) = \frac{1}{2}\, \Big\langle \big[ \phi(x, \theta) - \phi(x, \theta^*) \big]^2 \Big\rangle$
(S22)  $= \frac{1}{2}\, \Big\langle \Big[ \sum_{k=1}^K v_k\, g(\lambda_k) - \sum_{n=1}^M v^*_n\, g(\nu_n) \Big]^2 \Big\rangle,$

where we have used the local fields $\lambda_k = w_k \cdot x / \sqrt{N}$ and $\nu_n = w^*_n \cdot x / \sqrt{N}$. Here and throughout this paper, we use the indices $i, j, k, \ell$ to refer to hidden units of the student, and indices $n, m$ to denote hidden units of the teacher. Since the input $x$ only appears in $\epsilon_g$ via products with the weights of the teacher and the student, we can replace the high-dimensional average over the input distribution by an average over the local fields $\lambda_k$ and $\nu_n$. The assumption that the training set is large enough that we visit every sample only once guarantees that the inputs and the weights of the networks are uncorrelated. Taking the limit $N \to \infty$ ensures that the local fields are jointly normally distributed with mean zero. Their covariance is also easily found: writing $w_{ki}$ for the $i$th component of the $k$th weight vector, we have

(S23)  $\langle \lambda_k \nu_n \rangle = \frac{\sum_i w_{ki}\, w^*_{ni}}{N} \equiv R_{kn},$

since $\langle x_i x_j \rangle = \delta_{ij}$. Likewise, we define

(S24)  $\langle \lambda_k \lambda_\ell \rangle = \frac{w_k \cdot w_\ell}{N} \equiv Q_{k\ell}, \qquad \langle \nu_n \nu_m \rangle = \frac{w^*_n \cdot w^*_m}{N} \equiv T_{nm}.$

The variables $R_{kn}$, $Q_{k\ell}$, and $T_{nm}$ are called order parameters in statistical physics and measure the overlap between student and teacher weight vectors and their self-overlaps, respectively. Crucially, from Eq. (S22) we see that they are sufficient to determine the generalisation error $\epsilon_g$. We can thus write the generalisation error as

(S25)  $\epsilon_g = \frac{1}{2} \Big[ \sum_{k, \ell} v_k v_\ell\, I_2(k, \ell) + \sum_{n, m} v^*_n v^*_m\, I_2(n, m) - 2 \sum_{k, n} v_k v^*_n\, I_2(k, n) \Big],$

where we have defined

(S26)  $I_2(a, b) \equiv \big\langle\, g(u_a)\, g(u_b)\, \big\rangle,$

with $u_a$ denoting the local field of unit $a$ (a student field $\lambda$ or a teacher field $\nu$).

The average in Eq. (S26) is taken over a normal distribution for the two local fields with mean zero and covariance matrix

(S27)

Since we are using the indices $i, j, k, \ell$ for student hidden units and $n, m$ for teacher hidden units, we have, for example,

(S28)  $I_2(k, n) = \big\langle\, g(\lambda_k)\, g(\nu_n)\, \big\rangle,$

where the covariance matrix of the joint distribution of $\lambda_k$ and $\nu_n$ is given by

(S29)  $\begin{pmatrix} Q_{kk} & R_{kn} \\ R_{kn} & T_{nn} \end{pmatrix},$

and likewise for the other combinations of indices. We will use this convention to denote integrals throughout this section. For the generalisation error, this means that it can be expressed in terms of the order parameters alone as

(S30)
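For the sigmoidal activation $g(x) = \mathrm{erf}(x/\sqrt{2})$, the Gaussian average of Eq. (S26) has the well-known closed form $\frac{2}{\pi}\arcsin\big(c_{12}/\sqrt{(1+c_{11})(1+c_{22})}\big)$, where $c_{ab}$ are the entries of the covariance matrix of the two local fields [41, 42]. The sketch below uses this closed form to evaluate the generalisation error from the order parameters alone; it assumes the matrices R, Q, T and second-layer weights v, v_star as defined above, and is meant as an illustration rather than the paper's own implementation (which is available at https://github.com/sgoldt/pyscm).

```python
import numpy as np

def I2(c11, c12, c22):
    """Gaussian average < g(u) g(v) > for g(x) = erf(x / sqrt(2)),
    where (u, v) are zero-mean with covariance [[c11, c12], [c12, c22]]."""
    return 2 / np.pi * np.arcsin(c12 / np.sqrt((1 + c11) * (1 + c22)))

def eps_g(R, Q, T, v, v_star):
    """Generalisation error from the order parameters alone, following Eq. (S25)."""
    err = 0.0
    # student-student terms
    for i in range(len(v)):
        for k in range(len(v)):
            err += v[i] * v[k] * I2(Q[i, i], Q[i, k], Q[k, k])
    # teacher-teacher terms
    for n in range(len(v_star)):
        for m in range(len(v_star)):
            err += v_star[n] * v_star[m] * I2(T[n, n], T[n, m], T[m, m])
    # student-teacher cross terms
    for i in range(len(v)):
        for n in range(len(v_star)):
            err -= 2 * v[i] * v_star[n] * I2(Q[i, i], R[i, n], T[n, n])
    return err / 2
```

For an SCM, calling eps_g(R, Q, T, np.ones(K), np.ones(M)) recovers the case of fixed second-layer weights.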

B.2 ODEs for the evolution of the order parameters

Expressing the generalisation error in terms of the order parameters as in Eq. (S30) is of course only useful if we can also track the evolution of the order parameters over time. We can derive ODEs that allow us to do precisely that by squaring the weight update (2) and by taking the inner product of (2) with the teacher weights, respectively, which yields the equations of motion (9) for $Q$ and $R$; the equation for $v$ follows directly from the update (3).

To make progress, however, i.e. to obtain a closed set of differential equations for $Q$, $R$ and $v$, we need to evaluate the averages over the local fields. In particular, we have to compute three types of averages:

(S31)

where the first field is a local field of the student, while the other two can be local fields of either the student or the teacher;

(S32)

where the first two fields are local fields of the student, while the other two can be local fields of both; and finally

(S33)

where both fields are local fields of the teacher. In each of these integrals, the average is taken with respect to a multivariate normal distribution for the local fields with zero mean and a covariance matrix whose entries are chosen in the same way as discussed above for the generalisation error.
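When closed forms are awkward to derive, such averages can also be estimated by plain Monte Carlo: draw the local fields jointly from a zero-mean Gaussian whose covariance is assembled from the order parameters, exactly as described above. A small sketch (the stacking order of student and teacher fields and the illustrative values of Q, R, T are assumptions for the example, not values from the paper):

```python
import numpy as np
from scipy.special import erf

def sample_local_fields(Q, R, T, n_samples=1_000_000, rng=None):
    """Draw (lambda, nu) jointly: lambda are the K student local fields,
    nu the M teacher local fields, with covariance [[Q, R], [R.T, T]]."""
    rng = np.random.default_rng() if rng is None else rng
    K, M = R.shape
    cov = np.block([[Q, R], [R.T, T]])
    fields = rng.multivariate_normal(np.zeros(K + M), cov, size=n_samples)
    return fields[:, :K], fields[:, K:]

# Example: Monte Carlo estimate of < g(lambda_1) g(nu_1) > for g(x) = erf(x/sqrt(2)),
# to be compared with the closed-form I2 of the previous sketch.
g = lambda x: erf(x / np.sqrt(2))

Q = np.eye(2); T = np.eye(2); R = 0.3 * np.eye(2)   # illustrative order parameters
lam, nu = sample_local_fields(Q, R, T)
print(np.mean(g(lam[:, 0]) * g(nu[:, 0])))           # approx (2/pi) * arcsin(0.3 / 2)
```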

We can re-write Eqns. (9) with these definitions in a more explicit form as [42, 43, 44]

(S34)
(S35)
(S36)

The explicit form of these integrals is given in Sec. H for the sigmoidal activation function $g(x) = \mathrm{erf}(x/\sqrt{2})$. Solving these equations numerically for $Q$, $R$ and $v$ and substituting their values into the expression for the generalisation error (S25) gives the full generalisation dynamics of the student. We show the resulting learning curves together with the result of a single simulation in Fig. 2 of the main text. We have bundled our simulation software and our ODE integrator as a user-friendly library with example programs at https://github.com/sgoldt/pyscm. In Sec. C, we discuss how to extract information from them in an analytical way.
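To make the structure of the right-hand sides explicit, one can also estimate the drift of Eqns. (9a)–(9c) directly by Monte Carlo instead of using the analytical integrals; the library above uses the closed-form integrals, so the crude sketch below (sigmoidal activation, illustrative sample size) is only an illustration of the structure, not the paper's implementation.

```python
import numpy as np
from scipy.special import erf

g  = lambda x: erf(x / np.sqrt(2))                      # sigmoidal activation
dg = lambda x: np.sqrt(2 / np.pi) * np.exp(-x**2 / 2)   # its derivative

def drift(R, Q, T, v, v_star, lr_w, lr_v, sigma, n_mc=200_000, rng=None):
    """Monte Carlo estimate of the right-hand sides of Eqns. (9a)-(9c)."""
    rng = np.random.default_rng() if rng is None else rng
    K, M = R.shape
    cov = np.block([[Q, R], [R.T, T]])                  # covariance of (lambda, nu)
    f = rng.multivariate_normal(np.zeros(K + M), cov, size=n_mc)
    lam, nu = f[:, :K], f[:, K:]
    zeta = rng.standard_normal(n_mc)
    delta = g(lam) @ v - g(nu) @ v_star - sigma * zeta  # error term Delta

    # dR/dt = -eta_w v_k E[Delta g'(lambda_k) nu_n]
    dR = -lr_w * v[:, None] * np.mean(
        delta[:, None, None] * dg(lam)[:, :, None] * nu[:, None, :], axis=0)

    # A[k, l] = E[Delta g'(lambda_k) lambda_l];  B[k, l] = E[Delta^2 g'(lambda_k) g'(lambda_l)]
    A = np.mean(delta[:, None, None] * dg(lam)[:, :, None] * lam[:, None, :], axis=0)
    B = np.mean((delta**2)[:, None, None] * dg(lam)[:, :, None] * dg(lam)[:, None, :], axis=0)
    dQ = -lr_w * (v[:, None] * A + v[None, :] * A.T) + lr_w**2 * np.outer(v, v) * B

    # dv/dt = -eta_v E[Delta g(lambda_k)]
    dv = -lr_v * np.mean(delta[:, None] * g(lam), axis=0)
    return dR, dQ, dv

# A crude Euler integration then reads, with T fixed and dt a small step in t = mu/N:
#   dR, dQ, dv = drift(R, Q, T, v, v_star, lr_w, lr_v, sigma)
#   R, Q, v = R + dt * dR, Q + dt * dQ, v + dt * dv
```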

Appendix C Calculation of $\epsilon_g^*$ in the limit of small noise for Soft Committee Machines

Our aim is to understand the asymptotic value of the generalisation error

(S37)  $\epsilon_g^* \equiv \lim_{t \to \infty} \epsilon_g\big(m(t)\big).$

We focus on students that have more hidden units than the teacher, $K \ge M$. These students are thus over-parameterised with respect to the generative model of the data, and we define

(S38)

as the number of additional hidden units in the student network. In this section, we focus on the sigmoidal activation function

(S39)  $g(x) = \mathrm{erf}\!\left(\frac{x}{\sqrt{2}}\right)$

unless stated otherwise.

Eqns. (S34 ff.) are a useful tool to analyse the generalisation dynamics, and they allowed Saad and Solla to gain plenty of analytical insight into several special cases [42, 43]. However, they are also a bit unwieldy. In particular, the number of ODEs that we need to solve grows with $K$ and $M$ as $\mathcal{O}(K^2 + KM)$. To gain some analytical insight, we make use of the symmetries of the problem, e.g. the permutation symmetry of the hidden units of the student, and re-parameterise the matrices $Q$ and $R$ in terms of eight order parameters that obey a set of self-consistent ODEs for any $K$ and $M$. We choose the following parameterisation with eight order parameters:

(S40)
(S41)

which, written in matrix form for a small example, read:

(S42)

We choose this number of order parameters and this particular setup for the overlap matrices $Q$ and $R$ for two reasons: it is the smallest number of variables for which we were able to self-consistently close the equations of motion (S34), and the resulting ansatz agrees with numerical evidence obtained from integrating the full equations of motion (S34).

By substituting this ansatz into the equations of motion (S34), we find a set of eight ODEs for the order parameters. These equations are rather unwieldy and some of them do not even fit on one page, which is why we do not print them here in full; instead, we provide a Mathematica notebook where they can be found and interacted with, together with the source, at http://www.github.com/sgoldt/pyscm. These equations allow for a detailed analysis of the effect of over-parameterisation on the asymptotic performance of the student, as we will discuss now.

C.1 Heavily over-parameterised students can learn perfectly from a noiseless teacher using online learning

For a teacher with $M$ hidden units and in the absence of noise in the teacher's outputs ($\sigma = 0$), there exists a fixed point of the ODEs at which the student recovers the teacher exactly and achieves perfect generalisation, $\epsilon_g = 0$. Online learning will find this fixed point [42, 43]. More precisely, for the sigmoidal network, after a plateau whose length depends on the size of the network, the generalisation error eventually begins an exponential decay towards the optimal solution with zero generalisation error. The learning rates are chosen such that learning converges, but are not optimised otherwise.

C.2 Perturbative solution of the ODEs

We have calculated the asymptotic value of the generalisation error for a teacher with $M$ hidden units to first order in the variance of the noise, $\sigma^2$. To do so, we performed a perturbative expansion around the fixed point

(S43)
(S44)

with the ansatz