1 Online learning in teacher-student neural networks
We consider a supervised regression problem with training set $D = \{(x_\mu, y_\mu)\}$. The components of the inputs $x_\mu \in \mathbb{R}^N$ are i.i.d. draws from the standard normal distribution $\mathcal{N}(0, 1)$. The scalar outputs $y_\mu$ are the output of a network with $M$ hidden units, a non-linear activation function $g(\cdot)$ and fixed weights $\theta^* = (v^*, w^*)$, with an additive output noise $\sigma \zeta_\mu$, called the teacher (see also Fig. 1a):
$$y_\mu = \sum_{m=1}^{M} v^*_m \, g(\nu^\mu_m) + \sigma \zeta_\mu,$$
where $w^*_m$ is the $m$th row of $w^*$, and the local field of the $m$th teacher node is $\nu_m \equiv w^*_m x / \sqrt{N}$. We will analyse three different network types: sigmoidal with $g(x) = \operatorname{erf}(x/\sqrt{2})$, ReLU with $g(x) = \max(0, x)$, and linear networks where $g(x) = x$.
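To make the setup concrete, here is a minimal numpy sketch of the teacher's forward pass under our reading of the notation above (the $1/\sqrt{N}$ scaling of the local fields follows the text; the function names are our own):

```python
import numpy as np
from math import erf

# the three activation functions considered in the text
g_erf    = np.vectorize(lambda x: erf(x / np.sqrt(2.0)))
g_relu   = lambda x: np.maximum(0.0, x)
g_linear = lambda x: x

def teacher_output(X, W_star, v_star, g, sigma=0.0, rng=None):
    """Noisy teacher: y = sum_m v*_m g(w*_m . x / sqrt(N)) + sigma * zeta,
    applied row-wise to the inputs X of shape (P, N)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    N = X.shape[1]
    nu = X @ W_star.T / np.sqrt(N)   # teacher local fields nu_m, shape (P, M)
    return g(nu) @ v_star + sigma * rng.standard_normal(X.shape[0])
```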
A second two-layer network with $K$ hidden units and weights $\theta = (v, w)$, called the student, is then trained using SGD on the quadratic training loss $E(\theta) = \frac{1}{2} \left[ \phi(x_\mu; \theta) - y_\mu \right]^2$. We emphasise that the student network may have a larger number of hidden units than the teacher ($K \ge M$) and thus be over-parameterised with respect to the generative model of its training data.
The SGD algorithm defines a Markov process with update rule given by the coupled SGD recursion relations
$$w_k^{\mu+1} = w_k^\mu - \frac{\eta_w}{\sqrt{N}} \, v_k^\mu \, \Delta_\mu \, g'(\lambda_k^\mu) \, x_\mu, \qquad v_k^{\mu+1} = v_k^\mu - \frac{\eta_v}{N} \, \Delta_\mu \, g(\lambda_k^\mu).$$
We can choose different learning rates $\eta_w$ and $\eta_v$ for the two layers; $g'(\lambda_k)$ denotes the derivative of the activation function evaluated at the local field of the student's $k$th hidden unit, $\lambda_k \equiv w_k x / \sqrt{N}$, and we defined the error term $\Delta_\mu \equiv \phi(x_\mu; \theta^\mu) - y_\mu$. We will use the indices $i, k, j, l, \dots$ to refer to student nodes, and $n, m, \dots$ to denote teacher nodes. We take initial weights at random from the standard normal distribution for sigmoidal networks, while initial weights have a smaller variance for ReLU and linear networks.
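The coupled updates can be sketched as follows; this is an illustrative implementation of one online step under the scalings stated above, not a verbatim transcription of Eqns. (2)–(3):

```python
import numpy as np

def sgd_step(w, v, x, y, g, gprime, eta_w=0.1, eta_v=0.1):
    """One online SGD step on the quadratic loss for a single example (x, y).
    w: (K, N) first-layer weights; v: (K,) second-layer weights."""
    N = x.shape[0]
    lam = w @ x / np.sqrt(N)                # student local fields lambda_k
    delta = v @ g(lam) - y                  # error term Delta
    # first layer: O(1/sqrt(N)) step; second layer: O(1/N) step, as in the text
    w_new = w - (eta_w / np.sqrt(N)) * np.outer(v * delta * gprime(lam), x)
    v_new = v - (eta_v / N) * delta * g(lam)
    return w_new, v_new
```

Repeatedly presenting the same example should drive the error on it down, which makes for an easy sanity check of the sign conventions.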
The key quantity in our approach is the generalisation error of the student with respect to the teacher:
$$\epsilon_g(\theta, \theta^*) \equiv \frac{1}{2} \left\langle \left[ \phi(x; \theta) - \phi(x; \theta^*) \right]^2 \right\rangle,$$
where the angled brackets denote an average over the input distribution. We can make progress by realising that $\epsilon_g$ can be expressed as a function of a set of macroscopic variables, called order parameters in statistical physics [21, 41, 42],
$$Q^{ik} \equiv \frac{w_i w_k}{N}, \qquad R^{in} \equiv \frac{w_i w^*_n}{N}, \qquad T^{nm} \equiv \frac{w^*_n w^*_m}{N},$$
together with the second-layer weights $v$ and $v^*$. Intuitively, the teacher-student overlaps $R^{in}$ measure the similarity between the weights of the $i$th student node and the $n$th teacher node. The matrix $Q$ quantifies the overlap of the weights of different student nodes with each other, and the corresponding overlaps of the teacher nodes are collected in the matrix $T$.
We will find it convenient to collect the time-dependent order parameters in a single vector $m$, and we write the full expression for $\epsilon_g(m)$ in Eq. (S30).
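As a concrete illustration, the order parameters are simple weight-space inner products and can be computed directly from the microscopic weights (a sketch under the conventions above):

```python
import numpy as np

def order_parameters(w, w_star):
    """Overlap matrices from first-layer weights: student w is (K, N),
    teacher w_star is (M, N)."""
    N = w.shape[1]
    Q = w @ w.T / N            # student-student overlaps, (K, K)
    R = w @ w_star.T / N       # teacher-student overlaps, (K, M)
    T = w_star @ w_star.T / N  # teacher-teacher overlaps, (M, M)
    return Q, R, T
```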
In a series of classic papers, Biehl, Schwarze, Saad, Solla and Riegler [41, 42, 43, 44, 45] derived a closed set of ordinary differential equations for the time evolution of the order parameters (see SM Sec. B). Together with the expression for the generalisation error , these equations give a complete description of the generalisation dynamics of the student, which they analysed for the special case when only the first layer is trained [43, 45]. Our first contribution is to provide a rigorous foundation for these results by proving the following theorem.
Assume that (A1) both the sequences $x_\mu$ and $\zeta_\mu$, $\mu = 1, 2, \dots$, are i.i.d. random variables; $x_\mu$ is drawn from a normal distribution with mean 0 and covariance matrix $\mathbb{I}_N$, while $\zeta_\mu$ is a Gaussian random variable with mean zero and unit variance; (A2) the function $g(x)$ is bounded and its derivatives up to and including the second order exist and are bounded, too; (A3) the initial macroscopic state $m_0$ is deterministic and bounded by a constant; (A4) the constants $\sigma$, $\eta_w$, $\eta_v$, $M$ and $K$ are all finite. Define the time $t \equiv \mu / N$.
Choose $T > 0$. Under assumptions (A1)–(A4), and for any $N$, the macroscopic state $m_\mu$ satisfies
$$\max_{0 \le \mu \le N T} \mathbb{E} \left\| m_\mu - m(\mu / N) \right\| \le \frac{C(T)}{\sqrt{N}},$$
where $C(T)$ is a constant depending on $T$, but not on $N$, and $m(t)$ is a deterministic function that is the unique solution of the ODE
$$\frac{\mathrm{d} m(t)}{\mathrm{d} t} = f\left( m(t) \right)$$
with initial condition $m(0) = m_0$. In particular, we have $\lim_{N \to \infty} m_{\lfloor N t \rfloor} = m(t)$ for all $0 \le t \le T$.
We prove Theorem 1.1 in Sec. A of the SM using the theory of convergence of stochastic processes and a coupling trick introduced recently by Wang et al. The content of the theorem is illustrated in Fig. 1b, where we plot $\epsilon_g(m)$ obtained by numerically integrating the ODEs (9) (solid lines) and from a single run of SGD (2) (crosses) for sigmoidal students with varying $K$; the two are in very good agreement.
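The solid curves of Fig. 1b come from integrating the ODEs numerically; the scheme is a plain first-order Euler loop like the following sketch (the true right-hand side is Eq. (9) of the SM, which we treat here as an abstract function `f`):

```python
import numpy as np

def euler_integrate(f, m0, t_max, dt=1e-2):
    """First-order Euler integration of dm/dt = f(m) from m(0) = m0.
    Returns the trajectory of the macroscopic state at each step."""
    m = np.array(m0, dtype=float)
    traj = [m.copy()]
    for _ in range(int(t_max / dt)):
        m = m + dt * f(m)
        traj.append(m.copy())
    return np.array(traj)
```

For illustration, integrating the toy system $\dot m = -m$ recovers exponential decay to within the $O(\mathrm{d}t)$ Euler error.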
Given a set of non-linear, coupled ODEs such as Eqns. (9), finding the asymptotic fixed points analytically in order to compute the generalisation error is all but impossible. In the following, we therefore focus on analysing the asymptotic fixed points found by numerically integrating the equations of motion. The form of these fixed points will reveal that SGD finds solutions with drastically different performance for the different activation functions and setups we consider. Second, knowledge of these fixed points allows us to make analytical, quantitative predictions for the asymptotic performance of the networks which agree well with experiments. We also note that several recent theorems [29, 30, 31] about the global convergence of SGD do not apply in our setting because we have a finite number of hidden units.
2 Asymptotic generalisation error of Soft Committee machines
We will first study networks where the second-layer weights of both teacher and student are fixed at unity, $v^*_m = v_k = 1$. These networks are called a Soft Committee Machine (SCM) in the statistical physics literature and are the case studied most commonly so far [41, 42, 43, 45, 18, 27]. One notable feature of $\epsilon_g$ in SCMs is the existence of a long plateau with sub-optimal generalisation error during training. During this period, all student nodes have roughly the same overlap with all the teacher nodes, $R^{in} \approx \mathrm{const.}$ (left inset in Fig. 1b). As training continues, the student nodes “specialise” and each of them becomes strongly correlated with a single teacher node (right inset), leading to a sharp decrease in $\epsilon_g$. This effect is well known for both batch and online learning and will be key for our analysis.
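The plateau and the specialised phase can be told apart directly from the teacher-student overlap matrix. The following diagnostic is our own illustrative construction, not a quantity defined in the text:

```python
import numpy as np

def specialisation(R):
    """Crude diagnostic on the overlap matrix R (K x M): for each student
    node, the fraction of its total absolute overlap carried by its single
    best-matching teacher node, averaged over students. Close to 1/M on the
    plateau (uniform overlaps), close to 1 after one-to-one specialisation."""
    A = np.abs(R)
    return (A.max(axis=1) / A.sum(axis=1)).mean()

R_plateau = np.full((2, 2), 0.5)                    # all overlaps equal
R_special = np.array([[0.99, 0.01], [0.02, 0.98]])  # one-to-one specialisation
```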
Let us now use the equations of motion (9) to analyse the asymptotic generalisation error $\epsilon_g^*$ of neural networks after training has converged, and in particular its scaling with $K$. Our first contribution is to reduce the remaining equations of motion to a set of eight coupled differential equations for any combination of $K$ and $M$ in Sec. C. This enables us to obtain a closed-form expression for $\epsilon_g^*$ as follows.
In the absence of output noise ($\sigma = 0$), the generalisation error of a student with $K \ge M$ will asymptotically tend to zero. On the level of the order parameters, this corresponds to reaching a stable fixed point of (9) with $\epsilon_g = 0$. In the presence of small output noise $\sigma > 0$, this fixed point becomes unstable and the order parameters instead converge to another, nearby fixed point with $\epsilon_g^* > 0$. The values of the order parameters at that fixed point can be obtained by perturbing Eqns. (9) to first order in $\sigma^2$, and the corresponding generalisation error turns out to be in excellent agreement with the generalisation error obtained when training a neural network using (2) from random initial conditions, as we show in Fig. 2a.
We have performed this calculation for sigmoidal teacher and student networks. We relegate the details to Sec. C.2 and content ourselves here with stating that the asymptotic value of the generalisation error, to first order in $\sigma^2$, is given by a lengthy rational function of $M$, $K$ and the learning rate $\eta$. We plot our result in Fig. 2a together with the final generalisation error obtained in a single run of SGD (2) for a neural network with initial weights drawn i.i.d. from the standard normal distribution, and find excellent agreement, which we confirmed for a range of values of $\eta$, $K$, and $M$.
One notable feature of Fig. 2a is that, all else being equal, SGD alone fails to regularise student networks of increasing size in our setup, instead yielding students whose generalisation error increases linearly with $K$. One might be tempted to mitigate this effect by simultaneously decreasing the learning rate for larger students. However, lowering the learning rate incurs longer training times, which requires more data for online learning. This trade-off is also found in statistical learning theory, where models with more parameters (higher $K$) and thus a higher complexity (e.g. VC dimension or Rademacher complexity) generalise just as well as smaller ones when given more data. In practice, however, more data might not be readily available, and we show in Fig. S2 of the SM that even when choosing $\eta \propto 1/K$, the generalisation error still increases with $K$ before plateauing at a constant value.
We can gain some intuition for the scaling of $\epsilon_g^*$ by considering the asymptotic overlap matrices $Q$ and $R$ shown in the left half of Fig. 2b. In the over-parameterised case, student nodes are effectively trying to specialise to teacher nodes which do not exist, or equivalently, which have zero weights. These student nodes do not carry any information about the teacher's output, but they pick up fluctuations from the output noise and thus increase $\epsilon_g^*$. This intuition is borne out by an expansion of $\epsilon_g^*$ in the limit of small learning rate $\eta$, which yields a sum of two contributions: the error of $M$ independent hidden units, each specialised to a single teacher hidden unit, plus the error of $K - M$ superfluous units, each “learning” from a teacher hidden unit with zero weights (see also Sec. D of the SM).
Two possible explanations for this scaling in sigmoidal networks may be the specialisation of the hidden units or the fact that teacher and student networks can implement functions of different range if $K > M$. To test these hypotheses, we calculated $\epsilon_g^*$ for linear neural networks [47, 48] with $g(x) = x$. Linear networks lack a specialisation transition, and their output range is set by the magnitude of their weights rather than by their number of hidden units. Following the same steps as before, a perturbative calculation in the limit of small noise variance $\sigma^2$ yields the closed-form expression given in Eq. (10).
This result is again in perfect agreement with experiments, as we demonstrate in Fig. 2a. In the limit of small learning rates $\eta \to 0$, Eq. (10) simplifies to yield the same linear scaling with $K$ as for sigmoidal networks. This shows that the scaling is not just a consequence of either specialisation or the mismatched range of the networks' output functions. The optimal number of hidden units for linear networks is $K = 1$ for any amount of noise, because a linear network implements an effective linear transformation with effective weights $w_{\mathrm{eff}} = \sum_k w_k$. Adding hidden units to a linear network hence does not augment the class of functions it can implement, but it adds redundant parameters which pick up fluctuations from the teacher's output noise, increasing $\epsilon_g^*$.
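The argument that extra hidden units add no expressivity to a linear network can be checked in two lines: with $g(x) = x$, the two-layer map collapses to a single effective weight vector (an illustrative check with generic second-layer weights; in the SCM the $v_k$ would all equal 1):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 20, 5
w = rng.standard_normal((K, N))
v = rng.standard_normal(K)

# For g(x) = x the two-layer network is a single effective linear map:
# phi(x) = sum_k v_k (w_k . x) / sqrt(N) = (w_eff . x) / sqrt(N)
w_eff = v @ w

x = rng.standard_normal(N)
two_layer = v @ (w @ x) / np.sqrt(N)
collapsed = w_eff @ x / np.sqrt(N)
```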
The analytical calculation of $\epsilon_g^*$ described above poses some additional technical challenges for ReLU networks, so we resort to experiments to investigate this case. We found that the asymptotic generalisation error of a ReLU student learning from a ReLU teacher has the same scaling as the one we found analytically for networks with sigmoidal and linear activation functions: $\epsilon_g^*$ increases linearly with $K$ (see Fig. S3). Looking at the final overlap matrices $Q$ and $R$ for ReLU networks in the bottom half of Fig. 2b, we see that instead of the one-to-one specialisation of sigmoidal networks, all student nodes have a finite overlap with some teacher node. This is a consequence of the fact that it is much simpler to re-express the sum of $M$ ReLU units with $K > M$ ReLU units. However, there are still many redundant degrees of freedom in the student, which all pick up fluctuations from the teacher's output noise and increase $\epsilon_g^*$.
The key result of this section has been that the generalisation error of SCMs grows linearly with the size of the student, $\epsilon_g^* \sim K$.
Before moving on to the full two-layer network, we discuss a number of experiments that we performed to check the robustness of this result (details can be found in Sec. G of the SM). A standard regularisation method is adding weight decay to the SGD updates (2). However, we did not find a scenario in our experiments where weight decay improved the performance of a student with $K > M$. We also made sure that our results persist when performing SGD with mini-batches. We investigated the impact of higher-order correlations in the inputs by replacing Gaussian inputs with MNIST images, with all other aspects of our setup unchanged, and found the same $\epsilon_g^*$–$K$ curve as for Gaussian inputs. Finally, we analysed the impact of having a finite training set. The behaviour of linear networks and of non-linear networks with large but finite training sets did not change qualitatively. However, as we reduced the size of the training set, we found that the lowest asymptotic generalisation error was obtained with networks that have $K > M$.
3 Training both layers: Asymptotic generalisation error of a neural network
We now study the performance of two-layer neural networks when both layers are trained according to the SGD updates (2) and (3). We set all the teacher's second-layer weights equal to a constant value, $v^*_m = 1$, to ensure comparability between experiments. However, we train all second-layer weights of the student independently and do not rely on the fact that all second-layer teacher weights have the same value. Note that learning the second layer is not needed from the point of view of statistical learning: the networks from the previous section are already expressive enough to realise the teacher's function, so training the second layer slightly increases the over-parameterisation even further. Yet we will see that the generalisation properties are significantly enhanced.
We plot the generalisation dynamics of students of increasing size $K$ trained on a teacher with $M = 2$ in Fig. 3a. Our first observation is that increasing the student size decreases the asymptotic generalisation error $\epsilon_g^*$, all other parameters being equal, in stark contrast to the SCMs of the previous section.
A look at the order parameters after convergence in the experiments of Fig. 3a reveals the intriguing pattern of specialisation of the student's hidden units behind this behaviour, shown for $K = 5$ in Fig. 3b. First, note that all the hidden units of the student have non-negligible second-layer weights $v_k$. Two student nodes have specialised to the first teacher node, i.e. their weights are very close to the weights of the first teacher node. The corresponding second-layer weights approximately fulfil $\sum_k v_k \approx v^*_1$, where the sum runs over the two specialised nodes. Summing the output of these two student hidden units is thus approximately equivalent to taking an empirical average of two estimates of the output of that teacher node. The remaining three student nodes all specialised to the second teacher node, and their outgoing weights approximately sum to $v^*_2$. This pattern suggests that SGD has found a set of weights for both layers where the student's output is a weighted average of several estimates of the outputs of the teacher's nodes. We call this the denoising solution, and note that it resembles the solutions found in the mean-field limit of an infinite hidden layer [29, 31], where the neurons become redundant and follow a distribution dynamics (in our case, a simple one with a few peaks).
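The denoising intuition is the familiar variance reduction of an empirical average: combining $n$ independent noisy estimates of the same teacher-node output with weights that sum to one reduces the noise variance by a factor $n$. A toy numerical check (the numbers here are illustrative, not from the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
true_output, sigma = 1.0, 0.5
n_estimates, n_trials = 4, 200_000

# n noisy estimates of the same teacher-node output, combined with
# second-layer weights that sum to 1 (here: a plain average)
estimates = true_output + sigma * rng.standard_normal((n_trials, n_estimates))
averaged = estimates.mean(axis=1)

single_var = estimates[:, 0].var()   # ~ sigma^2
avg_var = averaged.var()             # ~ sigma^2 / n_estimates
```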
We confirmed this intuition by using an ansatz for the order parameters that corresponds to a denoising solution to solve the equations of motion (9) perturbatively in the limit of small noise, and thus to calculate $\epsilon_g^*$ for sigmoidal networks after training both layers, similarly to the approach of Sec. 2. While this approach can be extended to any $K$ and $M$, we focused on a special case to obtain manageable expressions; see Sec. E of the SM for details of the derivation. While the final expression is again too long to be given here, we plot it with solid lines in Fig. 3c. The crosses in the same plot show the asymptotic generalisation error obtained by integrating the ODEs (9) from random initial conditions, and the two are in very good agreement.
While our result holds for any $K$ and $M$, we note from Fig. 3c that the curves for different $M$ are qualitatively similar. We find a particularly simple result in the limit of small learning rates, where the asymptotic error decreases with the size of the student. This behaviour should be contrasted with the linear increase of $\epsilon_g^*$ with $K$ found for SCMs.
Experimentally, we robustly observed that training both layers of the network yields better performance than training only the first layer with the second-layer weights fixed to unity. However, convergence to the denoising solution can be difficult for large students, which may get stuck on a long plateau where their nodes are not evenly distributed among the teacher nodes. While it is easy to check that such a network has a higher value of $\epsilon_g$ than the denoising solution, the difference is small, and hence the driving force that pushes the student out of the corresponding plateau is small, too. These observations demonstrate that in our setup, SGD does not always find the solution with the lowest generalisation error in finite time.
ReLU and linear networks.
We found experimentally that $\epsilon_g^*$ remains constant with increasing $K$ in ReLU and in linear networks when training both layers. We plot an exemplary learning curve for linear networks in green in Fig. 4, but note that the entire figure looks qualitatively the same for ReLU networks (Fig. S4). This behaviour was also observed in linear networks trained by batch gradient descent starting from small initial weights. While this constant scaling is an improvement over the linear increase with $K$ found for the SCM (blue curve), it is not the decrease with $K$ that we observed for sigmoidal networks. A possible explanation is the lack of specialisation in linear and ReLU networks (see Sec. 2), without which the denoising solution found in sigmoidal networks is not possible. Indeed, in our experiments we always found that after convergence, every student node had a finite overlap with all the teacher nodes. We also considered normalised SCMs, where we train only the first layer and fix the second-layer weights at $v_k = 1/K$ and $v^*_m = 1/M$. The asymptotic error of normalised SCMs decreases with $K$ (orange curve in Fig. 4), because the second-layer weights effectively reduce the learning rate, as can easily be seen from the SGD update (2), and we know from our analysis of linear SCMs in Sec. 2 that $\epsilon_g^*$ decreases with the learning rate. In SM Sec. F we show analytically how an imbalance between the norms of the first- and second-layer weights can lead to a larger effective learning rate. Normalised SCMs also beat the performance of students where we trained both layers, starting from small initial weights in both cases. This is surprising, because we checked experimentally that the weights of a normalised SCM after training are a fixed point of the SGD dynamics when training both layers. However, we confirmed experimentally that SGD does not find this fixed point when starting from random initial weights.
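The effective-learning-rate argument rests on the fact that for positively homogeneous activations (ReLU, linear), rescaling the two layers in opposite directions leaves the network's function unchanged, so the relative norms of the layers are fixed by the SGD dynamics rather than by the function being implemented. A quick illustrative check of this invariance:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 10, 3
w = rng.standard_normal((K, N))
v = rng.standard_normal(K)
x = rng.standard_normal(N)
relu = lambda z: np.maximum(0.0, z)

def net(w, v, x):
    return v @ relu(w @ x / np.sqrt(N))

# Scaling the first layer up by a > 0 and the second layer down by the same
# factor leaves the output unchanged: relu(a z) = a relu(z) for a > 0.
a = 7.3
same = net(a * w, v / a, x)
```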
The qualitative difference between training both layers or only the first layer is particularly striking for linear networks, where fixing one layer does not change the class of functions the model can implement, but makes a dramatic difference to its asymptotic performance. This observation highlights two important points: first, the performance of a network is not determined by the number of additional parameters alone, but also by how the additional parameters are distributed in the model. Second, the non-linear dynamics of SGD mean that changing which weights are trainable can alter the training dynamics in unexpected ways. We saw this for two-layer linear networks, where SGD did not find the optimal fixed point, and for non-linear sigmoidal networks, where training the second layer allowed the student to decrease its final error with every additional hidden unit, instead of increasing it as in the SCM.
SG and LZ acknowledge funding from the ERC under the European Union’s Horizon 2020 Research and Innovation Programme Grant Agreement 714608-SMiLe. MA thanks the Swartz Program in Theoretical Neuroscience at Harvard University for support. AS acknowledges funding by the European Research Council, grant 725937 NEUROABSTRACTION. FK acknowledges support from “Chaire de recherche sur les modèles et sciences des données”, Fondation CFM pour la Recherche-ENS, and from the French National Research Agency (ANR) grant PAIL.
-  Yann LeCun, Yoshua Bengio, and Geoffrey E. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
-  K. Simonyan and A. Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. In International Conference on Learning Representations, 2015.
-  Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(3):463–482, 2003.
-  Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. MIT Press, 2012.
-  Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-Based Capacity Control in Neural Networks. In Conference on Learning Theory, 2015.
-  Noah Golowich, Alexander Rakhlin, and Ohad Shamir. Size-Independent Sample Complexity of Neural Networks. arxiv:1712.06541, 2017.
-  Gintare Karolina Dziugaite and Daniel M. Roy. Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data. In Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence, 2017.
-  Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. arxiv:1802.05296, 2018.
-  Zeyuan Allen-Zhu, Yuanzhi Li, and Yingyu Liang. Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers. arXiv:1811.04918, 2018.
-  Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. In ICLR, 2015.
-  Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In ICLR, 2017.
-  Devansh Arpit, Stanisław Jastrzębski, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, and Yoshua Bengio. A Closer Look at Memorization in Deep Networks. In Proceedings of the 34th International Conference on Machine Learning, 2017.
-  Pratik Chaudhari and Stefano Soatto. On the inductive bias of stochastic gradient descent. In International Conference on Learning Representations, 2018.
-  Daniel Soudry, Elad Hoffer, and Nathan Srebro. The implicit bias of gradient descent on separable data. In International Conference on Learning Representations, 2018.
-  Suriya Gunasekar, Blake Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, and Nathan Srebro. Implicit Regularization in Matrix Factorization. In Advances in Neural Information Processing Systems 30, pages 6151–6159, 2017.
-  Yuanzhi Li, Tengyu Ma, and Hongyang Zhang. Algorithmic Regularization in Over-parameterized Matrix Sensing and Neural Networks with Quadratic Activations. In Conference on Learning Theory, pages 2–47, 2018.
-  H. Sebastian Seung, Haim Sompolinsky, and N. Tishby. Statistical mechanics of learning from examples. Physical Review A, 45(8):6056–6091, 1992.
-  Andreas Engel and Christian Van den Broeck. Statistical Mechanics of Learning. Cambridge University Press, 2001.
-  Vladimir Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
-  E. Gardner and B. Derrida. Three unfinished works on the optimal storage capacity of networks. Journal of Physics A: Mathematical and General, 22(12):1983–1994, 1989.
-  Wolfgang Kinzel and Pál Ruján. Improving a Network Generalization Ability by Selecting Examples. EPL (Europhysics Letters), 13(5):473–477, 1990.
-  Timothy L. H. Watkin, Albrecht Rau, and Michael Biehl. The statistical mechanics of learning a rule. Reviews of Modern Physics, 65(2):499–556, 1993.
-  Lenka Zdeborová and Florent Krzakala. Statistical physics of inference: thresholds and algorithms. Adv. Phys., 65(5):453–552, 2016.
-  Madhu S. Advani and Surya Ganguli. Statistical mechanics of optimal convex inference in high dimensions. Physical Review X, 6(3):1–16, 2016.
-  Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer Chayes, Levent Sagun, and Riccardo Zecchina. Entropy-SGD: Biasing Gradient Descent Into Wide Valleys. In ICLR, 2017.
-  Madhu Advani and Andrew M. Saxe. High-dimensional dynamics of generalization error in neural networks. arXiv:1710.03667, 2017.
-  Benjamin Aubin, Antoine Maillard, Jean Barbier, Florent Krzakala, Nicolas Macris, and Lenka Zdeborová. The committee machine: Computational to statistical gaps in learning a two-layers neural network. In Advances in Neural Information Processing Systems 31, pages 3227–3238, 2018.
-  Marco Baity-Jesi, Levent Sagun, Mario Geiger, Stefano Spigler, Gérard Ben Arous, Chiara Cammarota, Yann LeCun, Matthieu Wyart, and Giulio Biroli. Comparing Dynamics: Deep Neural Networks versus Glassy Systems. In Proceedings of the 35th International Conference on Machine Learning, 2018.
-  Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A mean field view of the landscape of two-layer neural networks. Proceedings of the National Academy of Sciences, 115(33):E7665–E7671, 2018.
-  Grant M. Rotskoff and Eric Vanden-Eijnden. Parameters as interacting particles: long time convergence and asymptotic error scaling of neural networks. In Advances in neural information processing systems 31, pages 7146–7155, 2018.
-  Lénaïc Chizat and Francis Bach. On the global convergence of gradient descent for over-parameterized models using optimal transport. In Advances in Neural Information Processing Systems 31, pages 3040–3050, 2018.
-  Justin Sirignano and Konstantinos Spiliopoulos. Mean Field Analysis of Neural Networks. arXiv:1805.01053, 2018.
-  Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in neural information processing systems, pages 8571–8580, 2018.
-  Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. arXiv preprint arXiv:1810.02054, 2018.
-  Zeyuan Allen-Zhu, Yuanzhi Li, and Yingyu Liang. Learning and generalization in overparameterized neural networks, going beyond two layers. arXiv preprint arXiv:1811.04918, 2018.
-  Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization. arXiv preprint arXiv:1811.03962, 2018.
-  Yuanzhi Li and Yingyu Liang. Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data. In Advances in Neural Information Processing Systems 31, 2018.
-  Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu. Stochastic gradient descent optimizes over-parameterized deep relu networks. arXiv preprint arXiv:1811.08888, 2018.
-  Lenaic Chizat and Francis Bach. A note on lazy training in supervised differentiable programming. arXiv preprint arXiv:1812.07956, 2018.
-  Song Mei, Theodor Misiakiewicz, and Andrea Montanari. Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit. arXiv preprint arXiv:1902.06015, 2019.
-  Michael Biehl and H. Schwarze. Learning by on-line gradient descent. J. Phys. A. Math. Gen., 28(3):643–656, 1995.
-  David Saad and Sara A. Solla. Exact Solution for On-Line Learning in Multilayer Neural Networks. Phys. Rev. Lett., 74(21):4337–4340, 1995.
-  David Saad and Sara A. Solla. On-line learning in soft committee machines. Phys. Rev. E, 52(4):4225–4243, 1995.
-  Peter Riegler and Michael Biehl. On-line backpropagation in two-layered neural networks. Journal of Physics A: Mathematical and General, 28(20), 1995.
-  David Saad and Sara A. Solla. Learning with Noise and Regularizers in Multilayer Neural Networks. In Advances in Neural Information Processing Systems 9, pages 260–266, 1997.
-  Chuang Wang, Hong Hu, and Yue M. Lu. A Solvable High-Dimensional Model of GAN. arXiv:1805.08349, 2018.
-  A. Krogh and J. A. Hertz. Generalization in a linear perceptron in the presence of noise. Journal of Physics A: Mathematical and General, 25(5):1135–1147, 1992.
-  Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In ICLR, 2014.
-  Andrew K. Lampinen and Surya Ganguli. An analytic theory of generalization dynamics and transfer learning in deep linear networks. In International Conference on Learning Representations, 2019.
-  Roman Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge University Press, 2018.
Appendix A Proof of Theorem 1.1
We will prove Theorem 1.1 in two steps. First, we will show that the mean values of the increments of the order parameters $R$, $Q$ and $v$ are given by the expressions used in the equations of motion (Lemma A.1), and that they concentrate, i.e. that the variance of their increments is bounded by a term of order $N^{-2}$ (Lemma A.2). This ensures that the leading order of the average increment is captured by the ODE of Theorem 1.1, and that the stochastic part of the increment of the order parameters can be ignored in the thermodynamic limit $N \to \infty$. In other words, the two bounds ensure that the stochastic Markov process converges to a deterministic process. To complete the proof, we use a form of the coupling trick as described by Wang et al.
A.2 First moments of the increment
Under the same setting as Theorem 1.1, for all $0 \le \mu \le NT$, we have
$$\mathbb{E} \left[ m_{\mu+1} - m_\mu \mid m_\mu \right] = \frac{1}{N} f(m_\mu) + O\!\left(N^{-2}\right).$$
We first recall that $m$ contains all time-dependent order parameters $R$, $Q$, and $v$, so we will prove the Lemma in turn for each of them. In fact, in each case we can prove a slightly stronger result which encompasses the required bound.
For the teacher-student overlaps $R^{kn}$, we multiply the update (2) with $w^*_n / N$ on both sides and find that
$$R^{kn}_{\mu+1} - R^{kn}_\mu = -\frac{\eta_w}{N} \, v^\mu_k \, \Delta_\mu \, g'(\lambda^\mu_k) \, \nu^\mu_n.$$
The local field of the teacher, $\nu_n = w^*_n x / \sqrt{N}$, is a Gaussian random variable with mean zero and variance $T^{nn}$. Taking the conditional expectation, we find
$$\mathbb{E} \left[ R^{kn}_{\mu+1} - R^{kn}_\mu \mid m_\mu \right] = -\frac{\eta_w}{N} \, v^\mu_k \, \mathbb{E} \left[ \Delta_\mu \, g'(\lambda^\mu_k) \, \nu^\mu_n \mid m_\mu \right].$$
For the student-student overlaps $Q^{kl}$, we multiply the update (2) by $w_l / N$ (and vice versa) and find that
$$Q^{kl}_{\mu+1} - Q^{kl}_\mu = -\frac{\eta_w}{N} \left[ v^\mu_k \, \Delta_\mu \, g'(\lambda^\mu_k) \, \lambda^\mu_l + v^\mu_l \, \Delta_\mu \, g'(\lambda^\mu_l) \, \lambda^\mu_k \right] + \frac{\eta_w^2}{N} \, v^\mu_k v^\mu_l \, \Delta_\mu^2 \, g'(\lambda^\mu_k) \, g'(\lambda^\mu_l) \, \frac{x_\mu x_\mu}{N}.$$
Using assumption (A1), we see that the term $x_\mu x_\mu / N$ concentrates around 1 by the law of large numbers. Thus we find, after taking the conditional expectation of both sides and using $x_\mu x_\mu / N \to 1$, that
$$\mathbb{E} \left[ Q^{kl}_{\mu+1} - Q^{kl}_\mu \mid m_\mu \right] = -\frac{\eta_w}{N} \, \mathbb{E} \left[ v^\mu_k \Delta_\mu g'(\lambda^\mu_k) \lambda^\mu_l + v^\mu_l \Delta_\mu g'(\lambda^\mu_l) \lambda^\mu_k \mid m_\mu \right] + \frac{\eta_w^2}{N} \, v^\mu_k v^\mu_l \, \mathbb{E} \left[ \Delta_\mu^2 \, g'(\lambda^\mu_k) \, g'(\lambda^\mu_l) \mid m_\mu \right].$$
Finally, it is easy to convince oneself that taking the conditional expectation of the update for the second-layer weights (3) yields
$$\mathbb{E} \left[ v^{\mu+1}_k - v^\mu_k \mid m_\mu \right] = -\frac{\eta_v}{N} \, \mathbb{E} \left[ \Delta_\mu \, g(\lambda^\mu_k) \mid m_\mu \right],$$
which completes the proof of Lemma A.1. ∎
A.3 Second moments of the increment
We now proceed to bound the second-order moments of the increments of the time-dependent order parameters. We collect these bounds in the following lemma:
Under the assumptions of Theorem 1.1, for all $0 \le \mu \le NT$, we have that
$$\mathbb{E} \left[ \left\| m_{\mu+1} - m_\mu \right\|^2 \mid m_\mu \right] \le \frac{C}{N^2},$$
where $C$ is a constant that does not depend on $N$.
Before proceeding with the proof, we state a simple technical lemma that will be helpful in the following; we relegate its proof to Sec. A.5.
Under the same assumptions as Theorem 1.1, we have for all $0 \le \mu \le NT$ that
$$\left| v^\mu_k \right| \le C,$$
where $C$ is a constant independent of $N$.
Proof of Lemma A.2.
We first note that all order parameters obey update equations of the form
$$q_{\mu+1} = q_\mu + \frac{1}{N} \, u\!\left(m_\mu, x_\mu, \zeta_\mu\right),$$
where we have emphasised that the update function $u$ may depend on all order parameters at time $\mu$ and on the $\mu$th sample shown to the student. For the variance of the order parameter $q$, a little algebra yields the recursion relation
$$\operatorname{Var}\left(q_{\mu+1}\right) = \operatorname{Var}\left(q_\mu\right) + \frac{2}{N} \operatorname{Cov}\left(q_\mu, u\right) + \frac{1}{N^2} \operatorname{Var}\left(u\right).$$
We will now use complete induction to show that for any $0 \le \mu \le NT$, the increase of the variance at every step is bounded by $C / N^2$, as required. In particular, this means showing that the covariance term proportional to $1/N$ actually scales as $1/N^2$.
For the induction start, we note that by Assumption (A3), we have $\operatorname{Var}(m_0) = 0$. Hence the variance of any order parameter after a single step of SGD reads
$$\operatorname{Var}\left(q_1\right) = \frac{1}{N^2} \operatorname{Var}\left(u(m_0, x_0, \zeta_0)\right),$$
where the covariance term vanishes because the initial macroscopic state is deterministic and the weights are initially uncorrelated.
For the induction step, we assume that the variance after $\mu$ steps is of order $1/N$. Using the existence and boundedness of the derivatives of the activation function, we can expand the terms proportional to $1/N$ in the recursion in a multivariate Taylor expansion around the mean of $m_\mu$. We are justified in truncating the expansion since we assumed that the fluctuations of $m_\mu$ are of order $1/\sqrt{N}$. Since the resulting functions are bounded by a constant, this completes the induction and shows that the variance of the increment of the order parameters is bounded by $C/N^2$, as required.
A.4 Putting it all together
Having proved both Lemmas A.1 and A.2, we can proceed to prove Theorem 1.1 by using the coupling trick in the form given by Wang et al.  for another online learning problem, namely the training of generative adversarial networks. We paraphrase the coupling trick as given by Wang et al. in the following to make the proof self-contained and refer to the supplemental material of their paper for additional details.
Proof of Theorem 1.1.
We first define a stochastic process $\tilde m_\mu$ that is coupled with the Markov process $m_\mu$ for all $0 \le \mu \le NT$. We then define a deterministic process $\bar m_\mu$, a standard first-order finite-difference approximation of the equations of motion (9), for which the standard Euler argument gives a discretisation error of order $1/N$. Combining the resulting bounds via the triangle inequality yields the estimate of Theorem 1.1,
which completes the proof. ∎
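For completeness, the "standard Euler argument" can be sketched as follows, under the illustrative assumption (ours) that the right-hand side $f$ of the limiting ODE is Lipschitz with constant $L$. The deterministic process is the Euler discretisation with step size $1/N$,
```latex
\tilde{q}^{\mu+1} = \tilde{q}^{\mu} + \frac{1}{N}\, f\!\left(\tilde{q}^{\mu}\right),
\qquad t = \mu / N .
```
A single step incurs a local discretisation error of order $1/N^2$, and the Lipschitz property propagates the accumulated error as
```latex
\left\lVert \tilde{q}^{\mu+1} - q(t_{\mu+1}) \right\rVert
\;\le\; \left(1 + \frac{L}{N}\right) \left\lVert \tilde{q}^{\mu} - q(t_\mu) \right\rVert
+ \frac{C}{N^2},
\qquad\text{so that}\qquad
\max_{\mu \le t N} \left\lVert \tilde{q}^{\mu} - q(\mu/N) \right\rVert
\;\le\; \frac{C}{N}\,\frac{e^{L t} - 1}{L} = \mathcal{O}(1/N).
```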
A.5 Additional proof details
Proof of Lemma A.3.
The increment of reads explicitly
To bound the value of after steps, we consider each of the three terms in the sum in turn. We first note that the sum of the output noise variables is a simple sum over uncorrelated (sub-)Gaussian random variables rescaled by , and is thus, by Hoeffding’s inequality, almost surely smaller than a constant .
For the first two terms, we can use an argument similar to the one used to prove the bound on the variance of the increment of the order parameters. We first note that is a bounded function by Assumption (A2) and that the initial second-layer weights are bounded by a constant by Assumption (A3). Hence, after the first step, the weight has increased by a term bounded by . Indeed, at every step where the weight is bounded by a constant, its increase is bounded by . Hence the magnitude of remains bounded for , as required. ∎
Appendix B Derivation of the ODE description of the generalisation dynamics of online learning
Here we demonstrate how to evaluate the averages that appear in the equations of motion for the order parameters (9), following the classic work of Biehl and Schwarze and Saad and Solla [42, 43]. We recall the two main technical assumptions of our work, namely a large network () and a data set large enough that we visit every sample only once before training converges. Both will play a key role in the following computations.
B.1 Expressing the generalisation error in terms of order parameters
We first demonstrate how the assumptions stated above allow us to rewrite the generalisation error in terms of a number of order parameters. We have
where we have used the local fields and . Here and throughout this paper, we will use the indices to refer to hidden units of the student, and indices to denote hidden units of the teacher. Since the input appears only via products with the weights of the teacher and the student, we can replace the high-dimensional average over the input distribution by an average over the local fields and . The assumption that the training set is large enough that we visit every sample only once guarantees that the inputs and the weights of the networks are uncorrelated. Taking the limit then ensures that the local fields are jointly normally distributed with mean zero (). Their covariance is also easily found: writing for the th component of the th weight vector, we have
since . Likewise, we define
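This identification of the local-field covariances with the order parameters is straightforward to verify numerically. The sketch below uses our own variable names (`w` for the student weights, `wstar` for the teacher weights) and compares the empirical covariances of the local fields against the overlap matrices computed directly from the weights:

```python
import numpy as np

N, K, M, P = 500, 3, 2, 20_000  # input dim, student/teacher widths, samples
rng = np.random.default_rng(0)
w = rng.standard_normal((K, N))       # student weights, one row per unit
wstar = rng.standard_normal((M, N))   # teacher weights, one row per unit

# Order parameters computed directly from the weights:
Q = w @ w.T / N          # student-student overlaps
R = w @ wstar.T / N      # teacher-student overlaps
T = wstar @ wstar.T / N  # teacher-teacher overlaps

# Empirical covariances of the local fields over fresh Gaussian inputs:
X = rng.standard_normal((P, N))
lam = X @ w.T / np.sqrt(N)     # student local fields, shape (P, K)
nu = X @ wstar.T / np.sqrt(N)  # teacher local fields, shape (P, M)

print(np.abs(lam.T @ lam / P - Q).max())  # small: Cov of student fields = Q
print(np.abs(lam.T @ nu / P - R).max())   # small: student-teacher cov = R
print(np.abs(nu.T @ nu / P - T).max())    # small: Cov of teacher fields = T
```

The residuals shrink as the number of samples P grows, as expected for Monte Carlo estimates of the covariances.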
The variables , , and are called order parameters in statistical physics; they measure the overlaps between student and teacher weight vectors and their self-overlaps, respectively. Crucially, from Eq. (S22) we see that they are sufficient to determine the generalisation error . We can thus write the generalisation error as
where we have defined
The average in Eq. (S26) is taken over a normal distribution for the local fields and with mean and covariance matrix
Since we are using the indices for student units and for teacher hidden units, we have
where the covariance matrix of the joint distribution of and is given by
and likewise for . We will use this convention to denote integrals throughout this section. For the generalisation error, this means that it can be expressed in terms of the order parameters alone as
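For the sigmoidal activation, the relevant Gaussian average has a closed form, so the generalisation error can be evaluated from the order parameters alone. The sketch below implements the classic Saad and Solla expression for a soft committee machine (second-layer weights fixed to one; the function name `eg_erf` is ours):

```python
import numpy as np

def eg_erf(Q, R, T):
    """Generalisation error of a soft committee machine with
    g(x) = erf(x / sqrt(2)), expressed through the order parameters
    Q, R, T alone (classic Saad & Solla closed form)."""
    K, M = R.shape
    eg = 0.0
    for i in range(K):        # student-student contributions
        for k in range(K):
            eg += np.arcsin(Q[i, k] / np.sqrt((1 + Q[i, i]) * (1 + Q[k, k])))
    for n in range(M):        # teacher-teacher contributions
        for m in range(M):
            eg += np.arcsin(T[n, m] / np.sqrt((1 + T[n, n]) * (1 + T[m, m])))
    for i in range(K):        # student-teacher cross terms
        for n in range(M):
            eg -= 2 * np.arcsin(R[i, n] / np.sqrt((1 + Q[i, i]) * (1 + T[n, n])))
    return eg / np.pi

# Sanity check: a student identical to its teacher generalises perfectly,
# while a student with zero overlap with the teacher does not.
I2x2 = np.eye(2)
print(eg_erf(I2x2, I2x2, I2x2))              # ~0 up to floating point
print(eg_erf(I2x2, np.zeros((2, 2)), I2x2))  # strictly positive
```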
B.2 ODEs for the evolution of the order parameters
Expressing the generalisation error in terms of the order parameters, as we have in Eq. (S30), is of course only useful if we can track the evolution of the order parameters over time. We can derive ODEs that allow us to do precisely that: the equations for the order parameters follow by squaring the weight update (2) and by taking the inner product of (2) with the teacher weights, respectively, which yields the equations of motion (9).
To make progress, however, i.e. to obtain a closed set of differential equations for and , we need to evaluate the averages over the local fields. In particular, we have to compute three types of averages:
where is one of the local fields of the student, while and can be local fields of either the student or the teacher;
where and are local fields of the student, while and can be local fields of either network; and finally
where and are local fields of the teacher. In each of these integrals, the average is taken with respect to a multivariate normal distribution for the local fields with zero mean and a covariance matrix whose entries are chosen in the same way as discussed for .
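Averages of this type are easy to check by Monte Carlo sampling. As an illustration (our own sketch, for the sigmoidal activation, where the two-field average has a well-known closed form), the following compares a sampled estimate against the exact expression:

```python
import math
import numpy as np

g = np.vectorize(lambda x: math.erf(x / math.sqrt(2)))  # sigmoidal activation

def I2_mc(C, n=200_000, seed=0):
    """Monte Carlo estimate of <g(u) g(v)> for (u, v) ~ N(0, C)."""
    rng = np.random.default_rng(seed)
    u, v = rng.multivariate_normal(np.zeros(2), C, size=n).T
    return float(np.mean(g(u) * g(v)))

def I2_exact(C):
    """Closed form of <g(u) g(v)> for the erf activation
    (see e.g. Saad & Solla)."""
    return (2 / np.pi) * np.arcsin(
        C[0, 1] / np.sqrt((1 + C[0, 0]) * (1 + C[1, 1]))
    )

C = np.array([[1.0, 0.3], [0.3, 1.2]])  # example covariance of the two fields
print(I2_mc(C), I2_exact(C))            # the two estimates agree closely
```

The same sampling strategy applies to the three- and four-field averages, with the covariance matrix built from the order parameters as described above.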
The explicit form of the integrals , , and is given in Sec. H for the case . Solving these equations numerically for and and substituting their values into the expression for the generalisation error (S25) gives the full generalisation dynamics of the student. We show the resulting learning curves together with the result of a single simulation in Fig. 2 of the main text. We have bundled our simulation software and our ODE integrator into a user-friendly library with example programs at https://github.com/sgoldt/pyscm. In Sec. C, we discuss how to extract information from them analytically.
Appendix C Calculation of in the limit of small noise for Soft Committee Machines
Our aim is to understand the asymptotic value of the generalisation error
We focus on students that have more hidden units than the teacher, . These students are thus over-parameterised with respect to the generative model of the data and we define
as the number of additional hidden units in the student network. In this section, we focus on the sigmoidal activation function
unless stated otherwise.
Eqns. (S34ff) are a useful tool for analysing the generalisation dynamics, and they allowed Saad and Solla to gain considerable analytical insight into the special case [42, 43]. However, they are also somewhat unwieldy. In particular, the number of ODEs that we need to solve grows with and as . To gain analytical insight, we make use of the symmetries of the problem, e.g. the permutation symmetry of the hidden units of the student, and re-parametrise the matrices and in terms of eight order parameters that obey a set of self-consistent ODEs for any . We choose the following parameterisation with eight order parameters:
which, in matrix form for the case and , read:
We choose this number of order parameters and this particular setup for the overlap matrices and for two reasons: it is the smallest number of variables for which we were able to self-consistently close the equations of motion (S34), and the resulting ansatz agrees with numerical evidence obtained from integrating the full equations of motion (S34).
By substituting this ansatz into the equations of motion (S34), we find a set of eight ODEs for the order parameters. These equations are rather unwieldy, and some of them do not even fit on a single page, which is why we do not print them here in full; instead, we provide them in a Mathematica notebook, together with the source code, at http://www.github.com/sgoldt/pyscm. These equations allow for a detailed analysis of the effect of over-parameterisation on the asymptotic performance of the student, as we discuss now.
C.1 Heavily over-parameterised students can learn perfectly from a noiseless teacher using online learning
For a teacher with and in the absence of noise in the teacher’s outputs (), there exists a fixed point of the ODEs with , , and perfect generalisation, . Online learning finds this fixed point [42, 43]. More precisely, after a plateau whose length depends on the size of the sigmoidal network, the generalisation error eventually begins an exponential decay towards the optimal solution with zero generalisation error. The learning rates are chosen such that learning converges, but are not optimised otherwise.
C.2 Perturbative solution of the ODEs
We have calculated the asymptotic value of the generalisation error for a teacher with to first order in the variance of the noise . To do so, we performed a perturbative expansion around the fixed point
with the ansatz