Neural Tangent Kernel: Convergence and Generalization in Neural Networks

06/20/2018 ∙ by Arthur Jacot, et al.

At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit, thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function f_θ (which maps input vectors to output vectors) follows the kernel gradient of the functional cost (which is convex, in contrast to the parameter cost) w.r.t. a new kernel: the Neural Tangent Kernel (NTK). This kernel is central to describe the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and it stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We prove the positive-definiteness of the limiting NTK when the data is supported on the sphere and the non-linearity is non-polynomial. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function f_θ follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping. Finally we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.


1 Introduction

Artificial neural networks (ANNs) have achieved impressive results in numerous areas of machine learning. While it has long been known that ANNs can approximate any function with sufficiently many hidden neurons (9; 12), it is not known what the optimization of ANNs converges to. Indeed the loss surface of neural network optimization problems is highly non-convex: it has a high number of saddle points which may slow down the convergence (5). A number of results (3; 15; 16) suggest that for wide enough networks, there are very few “bad” local minima, i.e. local minima with much higher cost than the global minimum. More recently, the investigation of the geometry of the loss landscape at initialization has been the subject of a precise study (10). The analysis of the dynamics of training in the large-width limit for shallow networks has seen recent progress as well (13). To the best of the authors’ knowledge, the dynamics of deep networks has however remained an open problem until the present paper: see the contributions section below.

A particularly mysterious feature of ANNs is their good generalization properties in spite of their usual over-parametrization (18). It seems paradoxical that a reasonably large neural network can fit random labels, while still obtaining good test accuracy when trained on real data (21). It can be noted that in this case, kernel methods have the same properties (1).

In the infinite-width limit, ANNs have a Gaussian distribution described by a kernel (14; 11). These kernels are used in Bayesian inference or Support Vector Machines, yielding results comparable to ANNs trained with gradient descent (11; 2). We will see that in the same limit, the behavior of ANNs during training is described by a related kernel, which we call the Neural Tangent Kernel (NTK).

1.1 Contribution

We study the network function f_θ of an ANN, which maps an input vector to an output vector, where θ is the vector of the parameters of the ANN. In the limit as the widths of the hidden layers tend to infinity, the network function f_θ at initialization converges to a Gaussian distribution (14; 11).

In this paper, we investigate fully connected networks in this infinite-width limit, and describe the dynamics of the network function during training:

  • During gradient descent, we show that the dynamics of f_θ follows that of the so-called kernel gradient descent in function space with respect to a limiting kernel, which only depends on the depth of the network, the choice of nonlinearity and the initialization variance.

  • The convergence properties of ANNs during training can then be related to the positive-definiteness of the infinite-width limit NTK. In the case when the dataset is supported on a sphere, we prove this positive-definiteness using recent results on dual activation functions (4). The values of the network function f_θ outside the training set are described by the NTK, which is crucial to understand how ANNs generalize.

  • For a least-squares regression loss, the network function f_θ follows a linear differential equation in the infinite-width limit, and the eigenfunctions of the Jacobian are the kernel principal components of the input data. This shows a direct connection to kernel methods and motivates the use of early stopping to reduce overfitting in the training of ANNs.

  • Finally we investigate these theoretical results numerically for an artificial dataset (of points on the unit circle) and for the MNIST dataset. In particular we observe that the behavior of wide ANNs is close to the theoretical limit.

2 Neural networks

In this article, we consider fully-connected ANNs with layers numbered from 0 (input) to L (output), each containing n_0, …, n_L neurons, and with a Lipschitz, twice differentiable nonlinearity function σ : ℝ → ℝ with bounded second derivative. (While these smoothness assumptions greatly simplify the proofs of our results, they do not seem to be strictly needed for the results to hold true.)

This paper focuses on the ANN realization function F^(L) : ℝ^P → F, mapping parameters θ to functions f_θ in a space F. The dimension of the parameter space is P = Σ_{ℓ=0}^{L−1} (n_ℓ + 1) n_{ℓ+1}: the parameters consist of the connection matrices W^(ℓ) ∈ ℝ^{n_{ℓ+1} × n_ℓ} and bias vectors b^(ℓ) ∈ ℝ^{n_{ℓ+1}} for ℓ = 0, …, L−1. In our setup, the parameters are initialized as iid Gaussians N(0, 1).

For a fixed distribution p^in on the input space ℝ^{n_0}, the function space F is defined as the space of functions f : ℝ^{n_0} → ℝ^{n_L}. On this space, we consider the seminorm ‖·‖_{p^in}, defined in terms of the bilinear form

⟨f, g⟩_{p^in} = E_{x ∼ p^in}[ f(x)^T g(x) ].

In this paper, we assume that the input distribution p^in is the empirical distribution on a finite dataset x_1, …, x_N, i.e. the sum of Dirac measures (1/N) Σ_{i=1}^N δ_{x_i}.

We define the network function by f_θ(x) := α̃^(L)(x; θ), where the functions α̃^(ℓ)(·; θ) (called preactivations) and α^(ℓ)(·; θ) (called activations) are defined from the 0-th to the L-th layer by:

α^(0)(x; θ) = x
α̃^(ℓ+1)(x; θ) = (1/√n_ℓ) W^(ℓ) α^(ℓ)(x; θ) + β b^(ℓ)
α^(ℓ)(x; θ) = σ(α̃^(ℓ)(x; θ)),

where the nonlinearity σ is applied entrywise. The scalar β > 0 is a parameter which allows us to tune the influence of the bias on the training.

Remark 1.

Our definition of the realization function slightly differs from the classical one. Usually, the factors 1/√n_ℓ and the parameter β are absent, and the parameters are initialized using what is sometimes called LeCun initialization, taking W^(ℓ)_{ij} ∼ N(0, 1/n_ℓ) and b^(ℓ)_j ∼ N(0, 1) (or sometimes b^(ℓ)_j = 0) to compensate. While the set of representable functions is the same for both parametrizations (with or without the factors 1/√n_ℓ and β), the derivatives of the realization function with respect to the connections W^(ℓ) and bias b^(ℓ) are scaled by 1/√n_ℓ and β respectively in comparison to the classical parametrization.

The factors 1/√n_ℓ are key to obtaining a consistent asymptotic behavior of neural networks as the widths of the hidden layers grow to infinity. However, a side effect of these factors is that they greatly reduce the influence of the connection weights during training when n_ℓ is large: the factor β is introduced to balance the influence of the bias and connection weights. In our numerical experiments, we take β = 0.1 and use a larger learning rate than usual, see Section 6. This gives a behaviour similar to that of a classical network of the same width trained with a correspondingly smaller learning rate.
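To make the parametrization above concrete, here is a minimal NumPy sketch of the realization function in the NTK parametrization: the 1/√n_ℓ and β factors are applied in the forward pass while all parameters are drawn iid from N(0, 1). It is an illustrative reading of the definitions of this section (the names init_params and forward are ours), not the code used in the experiments.

    import numpy as np

    def init_params(widths, rng):
        """All connection matrices and bias vectors are iid N(0, 1); the
        1/sqrt(n_l) and beta factors are applied in the forward pass."""
        return [(rng.standard_normal((n_out, n_in)), rng.standard_normal(n_out))
                for n_in, n_out in zip(widths[:-1], widths[1:])]

    def forward(params, x, beta=0.1, nonlin=lambda u: np.maximum(u, 0.0)):
        """Network function f_theta(x): preactivation of the last layer."""
        a = x
        for ell, (W, b) in enumerate(params):
            pre = W @ a / np.sqrt(a.shape[0]) + beta * b   # 1/sqrt(n_l) rescaling
            a = nonlin(pre) if ell < len(params) - 1 else pre
        return a

    # Example: two hidden layers of width 1000, inputs on the unit circle.
    rng = np.random.default_rng(0)
    params = init_params([2, 1000, 1000, 1], rng)
    print(forward(params, np.array([np.cos(0.3), np.sin(0.3)])))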

3 Kernel gradient

The training of an ANN consists in optimizing f_θ in the function space F with respect to a functional cost C : F → ℝ, such as a regression or cross-entropy cost. Even for a convex functional cost C, the composite cost C ∘ F^(L) : ℝ^P → ℝ is in general highly non-convex (3). We will show that during training, the network function f_θ follows a descent along the kernel gradient with respect to the Neural Tangent Kernel (NTK), which we introduce in Section 4. This makes it possible to study the training of ANNs in the function space F, on which the cost C is convex.

A multi-dimensional kernel K is a function ℝ^{n_0} × ℝ^{n_0} → ℝ^{n_L × n_L}, which maps any pair (x, x′) to an n_L × n_L matrix such that K(x, x′) = K(x′, x)^T (equivalently K is a symmetric tensor in F ⊗ F). Such a kernel defines a bilinear map on F, taking the expectation over independent x, x′ ∼ p^in:

⟨f, g⟩_K := E_{x, x′ ∼ p^in}[ f(x)^T K(x, x′) g(x′) ].

The kernel K is positive definite with respect to ‖·‖_{p^in} if ‖f‖_{p^in} > 0 ⟹ ‖f‖_K > 0.

We denote by F* the dual of F with respect to p^in, i.e. the set of linear forms μ : F → ℝ of the form μ = ⟨d, ·⟩_{p^in} for some d ∈ F. Two elements of F define the same linear form if and only if they are equal on the data. The constructions in the paper do not depend on the element d ∈ F chosen in order to represent μ as ⟨d, ·⟩_{p^in}. Using the fact that the partial application of the kernel K_{i,·}(x, ·) is a function in F, we can define a map Φ_K : F* → F mapping a dual element μ = ⟨d, ·⟩_{p^in} to the function f_μ = Φ_K(μ) with values:

f_{μ,i}(x) = μ( K_{i,·}(x, ·) ) = ⟨d, K_{i,·}(x, ·)⟩_{p^in}.

For our setup, which is that of a finite dataset x_1, …, x_N, the cost functional C only depends on the values of f ∈ F at the data points. As a result, the (functional) derivative of the cost C at a point f_0 ∈ F can be viewed as an element of F*, which we write ∂^in_f C|_{f_0}. We denote by d|_{f_0} ∈ F a corresponding dual element, such that ∂^in_f C|_{f_0} = ⟨d|_{f_0}, ·⟩_{p^in}.

The kernel gradient ∇_K C|_{f_0} ∈ F is defined as Φ_K( ∂^in_f C|_{f_0} ). In contrast to ∂^in_f C|_{f_0}, which is only defined on the dataset, the kernel gradient generalizes to values x outside the dataset thanks to the kernel K:

∇_K C|_{f_0}(x) = (1/N) Σ_{j=1}^N K(x, x_j) d|_{f_0}(x_j).

A time-dependent function f(t) follows the kernel gradient descent with respect to K if it satisfies the differential equation

∂_t f(t) = −∇_K C|_{f(t)}.

During kernel gradient descent, the cost C(f(t)) evolves as

∂_t C|_{f(t)} = −⟨d|_{f(t)}, ∇_K C|_{f(t)}⟩_{p^in} = −‖d|_{f(t)}‖²_K.

Convergence to a critical point of C is hence guaranteed if the kernel K is positive definite with respect to ‖·‖_{p^in}: the cost is then strictly decreasing except at points such that ‖d|_{f(t)}‖_{p^in} = 0. If the cost is convex and bounded from below, the function f(t) therefore converges to a global minimum as t → ∞.
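As an illustration of kernel gradient descent on a finite dataset, the following sketch discretizes ∂_t f = −∇_K C|_f for a least-squares cost, tracking the function both on the data points and at a held-out point (where the update is defined through the kernel). The RBF kernel, the dataset and the step size are arbitrary placeholder choices of ours.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 8
    angles = rng.uniform(0, 2 * np.pi, N)
    X = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # dataset on the unit circle
    y_star = np.sin(2 * angles)                              # goal values f*(x_i)

    def K(x, xp):
        # Arbitrary positive-definite scalar kernel standing in for the NTK.
        return np.exp(-0.5 * np.sum((x - xp) ** 2))

    gram = np.array([[K(xi, xj) for xj in X] for xi in X])
    x_test = np.array([1.0, 0.0])                            # a point outside the dataset
    k_test = np.array([K(x_test, xj) for xj in X])

    # For C(f) = (1/2N) sum_i |f(x_i) - f*(x_i)|^2 the dual element is
    # d(x_i) = f(x_i) - f*(x_i) and grad_K C|_f (x) = (1/N) sum_j K(x, x_j) d(x_j).
    f_data, f_test, eta = np.zeros(N), 0.0, 0.5
    for _ in range(2000):
        d = f_data - y_star
        f_data -= eta * gram @ d / N
        f_test -= eta * k_test @ d / N   # the kernel extends the descent beyond the data
    print("max train error:", np.max(np.abs(f_data - y_star)), " f(x_test):", f_test)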

3.1 Random functions approximation

As a starting point to understand the convergence of ANN gradient descent to kernel gradient descent in the infinite-width limit, we introduce a simple model, inspired by the approach of (17).

A kernel K can be approximated by a choice of P random functions f^(p) sampled independently from any distribution on F whose (non-centered) covariance is given by the kernel K:

E[ f^(p)_k(x) f^(p)_{k′}(x′) ] = K_{kk′}(x, x′).

These functions define a random linear parametrization F^lin : ℝ^P → F,

θ ↦ f^lin_θ = (1/√P) Σ_{p=1}^P θ_p f^(p).

The partial derivatives of the parametrization are given by

∂_{θ_p} F^lin(θ) = (1/√P) f^(p).

Optimizing the cost C ∘ F^lin through gradient descent, the parameters follow the ODE:

∂_t θ_p(t) = −∂_{θ_p} (C ∘ F^lin)(θ(t)) = −(1/√P) ⟨d|_{f^lin_{θ(t)}}, f^(p)⟩_{p^in}.

As a result, the function f^lin_{θ(t)} evolves according to

∂_t f^lin_{θ(t)} = (1/√P) Σ_{p=1}^P ∂_t θ_p(t) f^(p) = −(1/P) Σ_{p=1}^P ⟨d|_{f^lin_{θ(t)}}, f^(p)⟩_{p^in} f^(p),

where the right-hand side is equal to the kernel gradient −∇_{K̃} C with respect to the tangent kernel

K̃ = Σ_{p=1}^P ∂_{θ_p} F^lin(θ) ⊗ ∂_{θ_p} F^lin(θ).

This is a random n_L-dimensional kernel with values

K̃_{kk′}(x, x′) = (1/P) Σ_{p=1}^P f^(p)_k(x) f^(p)_{k′}(x′).

Performing gradient descent on the cost C ∘ F^lin is therefore equivalent to performing kernel gradient descent with the tangent kernel K̃ in the function space. In the limit as P → ∞, by the law of large numbers, the (random) tangent kernel K̃ tends to the fixed kernel K, which makes this method an approximation of kernel gradient descent with respect to the limiting kernel K.
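The sketch below illustrates this section numerically, using the random Fourier features of (17) as the random functions f^(p) for an RBF kernel K: the empirical tangent kernel (1/P) Σ_p f^(p)(x) f^(p)(x′) approaches K as P grows. The specific kernel and bandwidth are arbitrary choices of ours.

    import numpy as np

    rng = np.random.default_rng(2)
    n0, gamma = 2, 0.5
    X = rng.standard_normal((10, n0))                        # a few input points

    K_true = np.exp(-gamma * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))

    def sample_features(P):
        # f^(p)(x) = sqrt(2) cos(w.x + b), w ~ N(0, 2*gamma*I), b ~ U[0, 2*pi]:
        # the (non-centered) covariance of these random functions is the RBF kernel (17).
        W = rng.normal(0.0, np.sqrt(2 * gamma), size=(P, n0))
        b = rng.uniform(0, 2 * np.pi, size=P)
        return np.sqrt(2.0) * np.cos(X @ W.T + b)            # shape (num_points, P)

    for P in [10, 100, 10000]:
        F = sample_features(P)
        K_tangent = F @ F.T / P                              # empirical tangent kernel
        print(f"P = {P:6d}   max |K_tangent - K| = {np.max(np.abs(K_tangent - K_true)):.3f}")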

4 Neural tangent kernel

For ANNs trained using gradient descent on the composition C ∘ F^(L), the situation is very similar to that studied in Section 3.1. During training, the network function f_θ evolves along the (negative) kernel gradient

∂_t f_{θ(t)} = −∇_{Θ^(L)} C|_{f_{θ(t)}}

with respect to the neural tangent kernel (NTK)

Θ^(L)(θ) = Σ_{p=1}^P ∂_{θ_p} F^(L)(θ) ⊗ ∂_{θ_p} F^(L)(θ).

However, in contrast to F^lin, the realization function F^(L) of ANNs is not linear. As a consequence, the derivatives ∂_{θ_p} F^(L)(θ) and the neural tangent kernel depend on the parameters θ. The NTK is therefore random at initialization and varies during training, which makes the analysis of the convergence of f_θ more delicate.

In the next subsections, we show that, in the infinite-width limit, the NTK becomes deterministic at initialization and stays constant during training. Since f_θ at initialization is Gaussian in the limit, the asymptotic behavior of f_θ during training can be explicited in the function space F.
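At finite width, the NTK can be computed directly from its definition, Θ^(L)_{kk′}(x, x′) = Σ_p ∂_{θ_p} f_{θ,k}(x) ∂_{θ_p} f_{θ,k′}(x′). The sketch below does this for a single-hidden-layer, scalar-output network in the parametrization of Section 2, using a hand-written gradient; it is an illustration of the definition (all names are ours), not an efficient implementation.

    import numpy as np

    def grad_f(params, x, beta=0.1):
        """Gradient of f_theta(x) = (1/sqrt(n1)) w1 . relu((1/sqrt(n0)) W0 x + beta*b0) + beta*b1
        with respect to all parameters (W0, b0, w1, b1)."""
        W0, b0, w1, b1 = params
        n0, n1 = W0.shape[1], W0.shape[0]
        pre = W0 @ x / np.sqrt(n0) + beta * b0
        act, d_act = np.maximum(pre, 0.0), (pre > 0).astype(float)
        back = w1 * d_act / np.sqrt(n1)                       # df/d(pre)
        return np.concatenate([np.outer(back, x).ravel() / np.sqrt(n0),   # d/dW0
                               beta * back,                               # d/db0
                               act / np.sqrt(n1),                         # d/dw1
                               np.array([beta])])                         # d/db1

    def empirical_ntk(params, x, xp):
        return grad_f(params, x) @ grad_f(params, xp)

    rng = np.random.default_rng(3)
    n0, n1 = 2, 5000
    params = (rng.standard_normal((n1, n0)), rng.standard_normal(n1),
              rng.standard_normal(n1), rng.standard_normal(1))
    x, xp = np.array([1.0, 0.0]), np.array([np.cos(1.0), np.sin(1.0)])
    print(empirical_ntk(params, x, xp))   # random at finite width, concentrates as n1 grows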

4.1 Initialization

As observed in (14; 11), the output functions f_{θ,i} for i = 1, …, n_L tend to iid Gaussian processes in the infinite-width limit (a proof in our setup is given in the appendix):

Proposition 1.

For a network of depth L at initialization, with a Lipschitz nonlinearity σ, and in the limit as n_1, …, n_{L−1} → ∞, the output functions f_{θ,k}, for k = 1, …, n_L, tend (in law) to iid centered Gaussian processes of covariance Σ^(L), where Σ^(L) is defined recursively by:

Σ^(1)(x, x′) = (1/n_0) x^T x′ + β²
Σ^(L+1)(x, x′) = E_{f ∼ N(0, Σ^(L))}[ σ(f(x)) σ(f(x′)) ] + β²,

taking the expectation with respect to a centered Gaussian process f of covariance Σ^(L).

Remark 2.

Strictly speaking, the existence of a suitable Gaussian measure with covariance Σ^(L) is not needed: we only deal with the values of f at x, x′ (the joint measure on f(x), f(x′) is simply a Gaussian vector in 2D). For the same reasons, in the proofs of Proposition 1 and Theorem 1, we will freely speak of Gaussian processes without discussing their existence.

The first key result of our paper (proven in the appendix) is the following: in the same limit, the Neural Tangent Kernel (NTK) converges in probability to an explicit deterministic limit.

Theorem 1.

For a network of depth L at initialization, with a Lipschitz nonlinearity σ, and in the limit as the layer widths n_1, …, n_{L−1} → ∞, the NTK Θ^(L) converges in probability to a deterministic limiting kernel:

Θ^(L) → Θ^(L)_∞ ⊗ Id_{n_L}.

The scalar kernel Θ^(L)_∞ : ℝ^{n_0} × ℝ^{n_0} → ℝ is defined recursively by

Θ^(1)_∞(x, x′) = Σ^(1)(x, x′)
Θ^(L+1)_∞(x, x′) = Θ^(L)_∞(x, x′) Σ̇^(L+1)(x, x′) + Σ^(L+1)(x, x′),

where

Σ̇^(L+1)(x, x′) = E_{f ∼ N(0, Σ^(L))}[ σ̇(f(x)) σ̇(f(x′)) ],

taking the expectation with respect to a centered Gaussian process f of covariance Σ^(L), and where σ̇ denotes the derivative of σ.

Remark 3.

By Rademacher’s theorem, σ̇ is defined everywhere, except perhaps on a set of zero Lebesgue measure.

Note that the limiting Θ^(L)_∞ only depends on the choice of σ, the depth L of the network and the variance of the parameters at initialization (which is equal to 1 in our setting).
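As an illustration of the recursion in Theorem 1, the following sketch evaluates Σ^(L) and Θ^(L)_∞ for σ = ReLU (as used in Section 6), for which the Gaussian expectations have the closed forms of the arc-cosine kernels of (2). The β² bias terms follow our reading of the recursion in Proposition 1; the helper names are ours.

    import numpy as np

    def relu_expectations(var_x, cov, var_xp):
        """E[relu(u)relu(v)] and E[relu'(u)relu'(v)] for a centered Gaussian pair
        (u, v) with Var(u)=var_x, Var(v)=var_xp, Cov(u,v)=cov (arc-cosine kernels, cf. (2))."""
        std = np.sqrt(var_x * var_xp)
        theta = np.arccos(np.clip(cov / std, -1.0, 1.0))
        e_sig = std * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)
        e_dsig = (np.pi - theta) / (2 * np.pi)
        return e_sig, e_dsig

    def limit_ntk(x, xp, depth, beta=0.1):
        """Limiting NTK Theta^(depth)_infinity(x, x') via the recursion of Theorem 1."""
        n0 = len(x)
        s_xx, s_xp, s_pp = x @ x / n0 + beta**2, x @ xp / n0 + beta**2, xp @ xp / n0 + beta**2
        theta = s_xp                                   # Theta^(1) = Sigma^(1)
        for _ in range(depth - 1):
            e_xx, _ = relu_expectations(s_xx, s_xx, s_xx)
            e_pp, _ = relu_expectations(s_pp, s_pp, s_pp)
            e_xp, e_dxp = relu_expectations(s_xx, s_xp, s_pp)
            s_xx, s_xp, s_pp = e_xx + beta**2, e_xp + beta**2, e_pp + beta**2
            theta = theta * e_dxp + s_xp               # Theta^(L+1) = Theta^(L)*Sigma_dot + Sigma^(L+1)
        return theta

    x, xp = np.array([1.0, 0.0]), np.array([np.cos(0.5), np.sin(0.5)])
    print(limit_ntk(x, xp, depth=4))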

4.2 Training

Our second key result is that the NTK stays asymptotically constant during training. This applies for a slightly more general definition of training: the parameters are updated according to a training direction d_t ∈ F:

∂_t θ_p(t) = ⟨d_t, ∂_{θ_p} F^(L)(θ(t))⟩_{p^in}.

In the case of gradient descent, d_t = −d|_{f_{θ(t)}} (see Section 3), but the direction may depend on another network, as is the case for e.g. Generative Adversarial Networks (8). We only assume that the integral ∫_0^T ‖d_t‖_{p^in} dt stays stochastically bounded as the width tends to infinity, which is verified for e.g. least-squares regression, see Section 5.

Theorem 2.

Assume that σ is a Lipschitz, twice differentiable nonlinearity function, with bounded second derivative. For any T such that the integral ∫_0^T ‖d_t‖_{p^in} dt stays stochastically bounded, as n_1, …, n_{L−1} → ∞, we have, uniformly for t ∈ [0, T],

Θ^(L)(t) → Θ^(L)_∞ ⊗ Id_{n_L}.

As a consequence, in this limit, the dynamics of f_θ is described by the differential equation

∂_t f_{θ(t)} = Φ_{Θ^(L)_∞ ⊗ Id_{n_L}}( ⟨d_t, ·⟩_{p^in} ).

Remark 4.

As the proof of the theorem (in the appendix) shows, the variation during training of the individual activations in the hidden layers shrinks as their width grows. However their collective variation is significant, which allows the parameters of the lower layers to learn: in the formula of the limiting NTK in Theorem 1, the second summand represents the learning due to the last layer, while the first summand represents the learning performed by the lower layers.

As discussed in Section 3, the convergence of kernel gradient descent to a critical point of the cost C is guaranteed for positive definite kernels. The limiting NTK is positive definite if the span of the derivatives ∂_{θ_p} F^(L), p = 1, …, P, becomes dense in F w.r.t. the p^in-norm as the width grows to infinity. It seems natural to postulate that the span of the preactivations of the last layer (which themselves appear in the derivatives ∂_{θ_p} F^(L) corresponding to the connection weights of the last layer) becomes dense in F, for a large family of measures p^in and nonlinearities (see e.g. (9; 12) for classical theorems about ANNs and approximation). In the case when the dataset is supported on a sphere, the positive-definiteness of the limiting NTK can be shown using Gaussian integration techniques and existing positive-definiteness criteria, as given by the following proposition, proven in Appendix A.4:

Proposition 2.

For a non-polynomial Lipschitz nonlinearity σ, for any input dimension n_0, the restriction of the limiting NTK Θ^(L)_∞ to the unit sphere S^{n_0−1} = {x ∈ ℝ^{n_0} : x^T x = 1} is positive-definite if L ≥ 2.

5 Least-squares regression

Given a goal function f* and input distribution p^in, the least-squares regression cost is

C(f) = (1/2) ‖f − f*‖²_{p^in} = (1/2) E_{x ∼ p^in}[ ‖f(x) − f*(x)‖² ].

Theorems 1 and 2 apply to an ANN trained on such a cost. Indeed the norm of the training direction ‖d(f)‖_{p^in} = ‖f* − f‖_{p^in} is strictly decreasing during training, bounding the integral. We are therefore interested in the behavior of a function f_t during kernel gradient descent with a kernel K (we are of course especially interested in the case K = Θ^(L)_∞ ⊗ Id_{n_L}):

∂_t f_t = Φ_K( ⟨f* − f_t, ·⟩_{p^in} ).

The solution of this differential equation can be expressed in terms of the map Π : f ↦ Φ_K( ⟨f, ·⟩_{p^in} ):

f_t = f* + e^{−tΠ}(f_0 − f*),

where e^{−tΠ} = Σ_{k=0}^∞ ((−t)^k / k!) Π^k is the exponential of −tΠ. If Π can be diagonalized by eigenfunctions f^(i) with eigenvalues λ_i, the exponential e^{−tΠ} has the same eigenfunctions with eigenvalues e^{−tλ_i}.

For a finite dataset x_1, …, x_N of size N, the map Π takes the form

Π(f)_k(x) = (1/N) Σ_{i=1}^N Σ_{k′=1}^{n_L} f_{k′}(x_i) K_{kk′}(x_i, x).

The map Π has at most N n_L positive eigenfunctions, and they are the kernel principal components f^(1), …, f^(N n_L) of the data with respect to the kernel K (19; 20). The corresponding eigenvalues λ_i are the variance captured by the components.

Decomposing the difference f_0 − f* = Δ^0_f + Σ_i Δ^i_f along the eigenspaces of Π, the trajectory of the function f_t reads

f_t = f* + Δ^0_f + Σ_i e^{−tλ_i} Δ^i_f,

where Δ^0_f is in the kernel (null-space) of Π and Δ^i_f ∝ f^(i).

The above decomposition can be seen as a motivation for the use of early stopping. The convergence is indeed faster along the eigenspaces corresponding to larger eigenvalues λ_i. Early stopping hence focuses the convergence on the most relevant kernel principal components, while avoiding fitting the ones in eigenspaces with lower eigenvalues (such directions are typically the ‘noisier’ ones: for instance, in the case of the RBF kernel, lower eigenvalues correspond to high-frequency functions).
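On a finite dataset the decomposition above is a plain eigendecomposition: representing Π on the data by the matrix (1/N) K̃, the error along the i-th kernel principal component decays as e^{−tλ_i}, so stopping at a moderate t has essentially fit only the top components. The following sketch uses an arbitrary RBF kernel and target of ours to make this visible.

    import numpy as np

    rng = np.random.default_rng(4)
    N = 40
    angles = np.sort(rng.uniform(0, 2 * np.pi, N))
    X = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    f_star = np.sin(angles) + 0.3 * np.sin(7 * angles)        # smooth + high-frequency parts
    f0 = np.zeros(N)

    gram = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    lam, U = np.linalg.eigh(gram / N)                         # Pi restricted to the data
    lam, U = lam[::-1], U[:, ::-1]                            # decreasing eigenvalues

    coeff = U.T @ (f0 - f_star)                               # components of f_0 - f* in the eigenbasis
    for t in [1.0, 10.0, 100.0, 1000.0]:
        residual = np.exp(-t * lam) * coeff                   # f_t - f* = e^{-t Pi}(f_0 - f*), eigenbasis
        print(f"t = {t:7.1f}  |error| on comps 1-3: {np.abs(residual[:3]).round(4)}"
              f"  on comps 11-13: {np.abs(residual[10:13]).round(4)}")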

Note that by the linearity of the map e^{−tΠ}, if f_0 is initialized with a Gaussian distribution (as is the case for ANNs in the infinite-width limit), then f_t is Gaussian for all times t. Assuming that the kernel is positive definite on the data (implying that the Gram matrix K̃ = ( K_{kk′}(x_i, x_j) )_{ik,jk′} is invertible), in the limit t → ∞, we get that f_∞ takes the form

f_∞(x) = κ^T_x K̃^{−1} y* + ( f_0(x) − κ^T_x K̃^{−1} y_0 ),

with the N n_L-vectors κ_x, y* and y_0 given by

κ_x = ( K_{kk′}(x, x_i) )_{k,ik′},  y* = ( f*_k(x_i) )_{ik},  y_0 = ( f_{0,k}(x_i) )_{ik}.

The first term, the mean, has an important statistical interpretation: it is the maximum-a-posteriori (MAP) estimate given a Gaussian prior on functions and the conditions f(x_i) = f*(x_i). Equivalently, it is equal to the kernel ridge regression (20) as the regularization goes to zero (λ → 0). The second term is a centered Gaussian whose variance vanishes on the points of the dataset.
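A minimal sketch of the limiting distribution of f_∞ described above: the mean is the ridgeless kernel regression estimate κ_x^T K̃^{−1} y*, and the fluctuation term f_0(x) − κ_x^T K̃^{−1} y_0 is obtained by sampling f_0 from a Gaussian prior. Here a placeholder RBF kernel plays the role of both the NTK and the prior covariance, which is an assumption made purely for illustration.

    import numpy as np

    rng = np.random.default_rng(5)
    N = 20
    angles = rng.uniform(0, 2 * np.pi, N)
    X = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    y_star = np.sin(2 * angles)

    def kernel(A, B):
        # Placeholder scalar kernel standing in for Theta^(L)_infinity.
        return np.exp(-0.5 * np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1))

    K_data = kernel(X, X) + 1e-10 * np.eye(N)                 # Gram matrix, jittered for stability
    x_test = np.array([[np.cos(0.4), np.sin(0.4)]])
    k_x = kernel(x_test, X)[0]

    mean = k_x @ np.linalg.solve(K_data, y_star)              # MAP / ridgeless kernel regression mean

    # Fluctuation term: sample f_0 jointly on the data and the test point.
    cov = kernel(np.vstack([X, x_test]), np.vstack([X, x_test])) + 1e-10 * np.eye(N + 1)
    samples = rng.multivariate_normal(np.zeros(N + 1), cov, size=2000)
    fluct = samples[:, N] - samples[:, :N] @ np.linalg.solve(K_data, k_x)

    print("mean at x_test:", mean)
    print("std of fluctuation term:", fluct.std())            # would vanish at a dataset point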

6 Numerical experiments

In the following numerical experiments, fully connected ANNs of various widths are compared to the theoretical infinite-width limit. We choose the size of the hidden layers to all be equal to the same value n := n_1 = … = n_{L−1} and we take the ReLU nonlinearity σ(x) = max(0, x).

In the first two experiments, we consider the case n_0 = 2. Moreover, the input elements are taken on the unit circle. This can be motivated by the structure of high-dimensional data, where the centered data points often have roughly the same norm. (The classical example is for data following a Gaussian distribution N(0, Id_{n_0}): as the dimension n_0 grows, all data points have approximately the same norm √n_0.)

In all experiments, we took n_L = 1 (note that by our results, a network with n_L outputs behaves asymptotically like n_L networks with scalar outputs trained independently). Finally, the value of the parameter β is chosen as 0.1, see Remark 1.

Figure 1: Convergence of the NTK to a fixed limit for two widths n and two times t.
Figure 2: Network function f_θ near convergence for two widths n and the 10th, 50th and 90th percentiles of the asymptotic Gaussian distribution.

6.1 Convergence of the NTK

The first experiment illustrates the convergence of the NTK of a network of fixed depth for two different widths n. The function Θ^(L)(x_0, x) is plotted for a fixed x_0 and x varying on the unit circle in Figure 1. To observe the distribution of the NTK, independent initializations are performed for both widths. The kernels are plotted at initialization and then after a number of gradient descent steps (i.e. at a later time t > 0). We approximate a target function f* with a least-squares cost on random inputs.

For the wider network, the NTK shows less variance and is smoother. It is interesting to note that the expectation of the NTK is very close for both network widths. After training, we observe that the NTK tends to “inflate”. As expected, this effect is much less apparent for the wider network, where the NTK stays almost fixed, than for the smaller network.

6.2 Kernel regression

For a regression cost, the infinite-width limit network function f_{θ(t)} has a Gaussian distribution for all times t and in particular at convergence t → ∞ (see Section 5). We compared the theoretical Gaussian distribution at t → ∞ to the distribution of the network function f_{θ(T)} of a finite-width network for a large time T. For two different widths n and for a number of random initializations each, a network is trained on a least-squares cost on points of the unit circle and then plotted in Figure 2.

We also approximated the kernels Σ^(L) and Θ^(L)_∞ using a large-width network and used them to calculate and plot the 10th, 50th and 90th percentiles of the limiting Gaussian distribution.

The distributions of the network functions are very similar for both widths: their mean and variance appear to be close to those of the limiting distribution. Even for relatively small widths, the NTK gives a good indication of the distribution of f_{θ(T)} as T → ∞.

6.3 Convergence along a principal component

We now illustrate our result on the MNIST dataset of handwritten digits, made up of grayscale images of dimension 28 × 28, yielding an input dimension of n_0 = 784.

We computed the first 3 principal components of a batch of digits with respect to the NTK of a high-width network (giving an approximation of the limiting kernel) using a power iteration method. The kernel PCA is non-centered; the first component is therefore almost equal to the constant function, which explains the large gap between the first and second eigenvalues. (It can be observed numerically that if we choose a larger β instead of our recommended β = 0.1, the gap between the first and second principal components is about ten times bigger, which makes training more difficult.) The next two components are much more interesting, as can be seen in Figure 3(a), where the batch is plotted with x and y coordinates corresponding to the 2nd and 3rd components.
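A sketch of the non-centered kernel PCA by power iteration used above: the leading eigenvectors of the (NTK) Gram matrix of a batch give the coordinates of the batch along the kernel principal components. The deflation scheme and all names are ours; a random positive semi-definite matrix stands in for the NTK Gram matrix.

    import numpy as np

    def top_kernel_components(gram, k=3, iters=500, seed=0):
        """Leading eigenvalues/eigenvectors of a symmetric PSD Gram matrix by
        power iteration with deflation (non-centered kernel PCA)."""
        rng = np.random.default_rng(seed)
        G = gram.astype(float).copy()
        vals, vecs = [], []
        for _ in range(k):
            v = rng.standard_normal(G.shape[0])
            for _ in range(iters):
                v = G @ v
                v /= np.linalg.norm(v)
            lam = v @ G @ v
            vals.append(lam)
            vecs.append(v)
            G = G - lam * np.outer(v, v)      # deflate the component just found
        return np.array(vals), np.stack(vecs, axis=1)

    rng = np.random.default_rng(6)
    A = rng.standard_normal((100, 30))
    gram = A @ A.T                            # placeholder for the NTK Gram matrix of a batch
    vals, _ = top_kernel_components(gram, k=3)
    print("power iteration:", vals.round(2))
    print("numpy eigvalsh  :", np.sort(np.linalg.eigvalsh(gram))[::-1][:3].round(2))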

Figure 3: (a) The 2nd and 3rd principal components of MNIST. (b) Deviation of the network function f_θ from the straight line. (c) Convergence of f_θ along the 2nd principal component.

We have seen in Section 5 how the convergence of kernel gradient descent follows the kernel principal components. If the difference f* − f_0 at initialization is equal (or proportional) to one of the principal components f^(i), then the function f_t will converge along a straight line (in the function space) to f* at an exponential rate e^{−tλ_i}.

We tested whether ANNs of various widths behave in a similar manner. We set the goal of the regression cost to f* = f_{θ(0)} + f^(2) and let the network converge. At each time step t, we decomposed the difference f* − f_{θ(t)} into a component g_t proportional to f^(2) and another one h_t orthogonal to f^(2). In the infinite-width limit, the first component decays exponentially fast while the second is null (h_t = 0), as the function converges along a straight line.

As expected, we see in Figure 3(b) that the wider the network, the less it deviates from the straight line (for each width we performed two independent trials). As the width grows, the trajectory along the 2nd principal component (shown in Figure 3(c)) converges to the theoretical limit shown in blue.

A surprising observation is that smaller networks appear to converge faster than wider ones. This may be explained by the inflation of the NTK observed in our first experiment. Indeed, multiplying the NTK by a factor is equivalent to multiplying the learning rate by the same factor. However, note that since the NTK of a large-width network is more stable during training, larger learning rates can in principle be taken. One must hence be careful when comparing the convergence speed in terms of the number of steps (rather than in terms of the time t): both the inflation effect and the learning rate must be taken into account.

7 Conclusion

This paper introduces a new tool to study ANNs, the Neural Tangent Kernel (NTK), which describes the local dynamics of an ANN during gradient descent. This leads to a new connection between ANN training and kernel methods: in the infinite-width limit, an ANN can be described in the function space directly by the limit of the NTK, an explicit constant kernel Θ^(L)_∞, which only depends on its depth, nonlinearity and parameter initialization variance. More precisely, in this limit, ANN gradient descent is shown to be equivalent to a kernel gradient descent with respect to Θ^(L)_∞. The limit of the NTK is hence a powerful tool to understand the generalization properties of ANNs, and it allows one to study the influence of the depth and nonlinearity on the learning abilities of the network. The analysis of training using the NTK allows one to relate the convergence of ANN training with the positive-definiteness of the limiting NTK, and leads to a characterization of the directions favored by early stopping methods.

Acknowledgements

The authors thank K. Kytölä for many interesting discussions. The second author was supported by the ERC CG CRITICAL. The last author acknowledges support from the ERC SG Constamis, the NCCR SwissMAP, the Blavatnik Family Foundation and the Latsis Foundation.

References

  • (1) M. Belkin, S. Ma, and S. Mandal. To understand deep learning we need to understand kernel learning. arXiv preprint, Feb 2018.
  • (2) Y. Cho and L. K. Saul. Kernel methods for deep learning. In Advances in Neural Information Processing Systems 22, pages 342–350. Curran Associates, Inc., 2009.
  • (3) A. Choromanska, M. Henaff, M. Mathieu, G. B. Arous, and Y. LeCun. The Loss Surfaces of Multilayer Networks. Journal of Machine Learning Research, 38:192–204, nov 2015.
  • (4) A. Daniely, R. Frostig, and Y. Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 2253–2261. Curran Associates, Inc., 2016.
  • (5) Y. N. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli, and Y. Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS’14, pages 2933–2941, Cambridge, MA, USA, 2014. MIT Press.
  • (6) S. S. Dragomir. Some Gronwall Type Inequalities and Applications. Nova Science Publishers, 2003.
  • (7) T. Gneiting. Strictly and non-strictly positive definite functions on spheres. Bernoulli, 19(4):1327–1349, 2013.
  • (8) I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative Adversarial Networks. NIPS’14 Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, pages 2672–2680, jun 2014.
  • (9) K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359 – 366, 1989.
  • (10) R. Karakida, S. Akaho, and S.-i. Amari. Universal Statistics of Fisher Information in Deep Neural Networks: Mean Field Approach. jun 2018.
  • (11) J. H. Lee, Y. Bahri, R. Novak, S. S. Schoenholz, J. Pennington, and J. Sohl-Dickstein. Deep neural networks as gaussian processes. ICLR, 2018.
  • (12) M. Leshno, V. Lin, A. Pinkus, and S. Schocken. Multilayer feedforward networks with a non-polynomial activation function can approximate any function. Neural Networks, 6(6):861–867, 1993.
  • (13) S. Mei, A. Montanari, and P.-M. Nguyen. A mean field view of the landscape of two-layer neural networks. Proceedings of the National Academy of Sciences, 115(33):E7665–E7671, 2018.
  • (14) R. M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1996.
  • (15) R. Pascanu, Y. N. Dauphin, S. Ganguli, and Y. Bengio. On the saddle point problem for non-convex optimization. arXiv preprint, 2014.
  • (16) J. Pennington and Y. Bahri. Geometry of neural network loss surfaces via random matrix theory. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 2798–2806, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR.
  • (17) A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems 20, pages 1177–1184. Curran Associates, Inc., 2008.
  • (18) L. Sagun, U. Evci, V. U. Güney, Y. Dauphin, and L. Bottou. Empirical analysis of the hessian of over-parametrized neural networks. CoRR, abs/1706.04454, 2017.
  • (19) B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5):1299–1319, 1998.
  • (20) J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, New York, NY, USA, 2004.
  • (21) C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. ICLR 2017 proceedings, Feb 2017.

Appendix A Appendix

This appendix is dedicated to proving the key results of this paper, namely Proposition 1 and Theorems 1 and 2, which describe the asymptotics of neural networks at initialization and during training.

We study the limit of the NTK as n_1, …, n_{L−1} → ∞ sequentially, i.e. we first take n_1 → ∞, then n_2 → ∞, etc. This leads to much simpler proofs, but our results could in principle be strengthened to the more general setting when min(n_1, …, n_{L−1}) → ∞.

A natural choice of convergence to study the NTK is with respect to the operator norm on kernels:

‖K‖_op = max_{‖f‖_{p^in} ≤ 1} ‖f‖_K = max_{‖f‖_{p^in} ≤ 1} ( E_{x, x′ ∼ p^in}[ f(x)^T K(x, x′) f(x′) ] )^{1/2},

where the expectation is taken over two independent x, x′ ∼ p^in. This norm depends on the input distribution p^in. In our setting, p^in is taken to be the empirical measure of a finite dataset x_1, …, x_N of distinct samples. As a result, the operator norm of K is determined by the leading eigenvalue of the Gram matrix ( K_{kk′}(x_i, x_j) )_{ik,jk′}. In our setting, convergence in operator norm is hence equivalent to pointwise convergence of K on the dataset.
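In this finite setting the operator norm is a finite-dimensional computation; a minimal sketch with an arbitrary scalar kernel and n_L = 1:

    import numpy as np

    rng = np.random.default_rng(7)
    X = rng.standard_normal((15, 3))                                 # dataset x_1, ..., x_N
    gram = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))

    # Leading eigenvalue of the Gram matrix, which controls the operator norm of K
    # on the empirical measure (up to the 1/N normalization of the measure).
    print("leading eigenvalue:", np.max(np.linalg.eigvalsh(gram / len(X))))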

A.1 Asymptotics at Initialization

It has already been observed (14; 11) that the output functions f_{θ,i} for i = 1, …, n_L tend to iid Gaussian processes in the infinite-width limit.

Proposition 1.

For a network of depth L at initialization, with a Lipschitz nonlinearity σ, and in the limit as n_1, …, n_{L−1} → ∞ sequentially, the output functions f_{θ,k}, for k = 1, …, n_L, tend (in law) to iid centered Gaussian processes of covariance Σ^(L), where Σ^(L) is defined recursively by:

Σ^(1)(x, x′) = (1/n_0) x^T x′ + β²
Σ^(L+1)(x, x′) = E_{f ∼ N(0, Σ^(L))}[ σ(f(x)) σ(f(x′)) ] + β²,

taking the expectation with respect to a centered Gaussian process f of covariance Σ^(L).

Proof.

We prove the result by induction. When L = 1, there are no hidden layers and f_θ is a random affine function of the form:

f_θ(x) = (1/√n_0) W^(0) x + β b^(0).

All output functions f_{θ,k} are hence independent and have covariance Σ^(1) as needed.

The key to the induction step is to consider an (L+1)-network as the following composition: an L-network mapping the input x to the preactivations α̃^(L)(x), followed by an elementwise application of the nonlinearity σ and then a random affine map ℝ^{n_L} → ℝ^{n_{L+1}}. The induction hypothesis gives that in the limit as n_1, …, n_{L−1} → ∞ sequentially, the preactivations α̃^(L)_i tend to iid Gaussian processes with covariance Σ^(L). The outputs

f_{θ,k}(x) = (1/√n_L) Σ_{i=1}^{n_L} W^(L)_{ki} σ( α̃^(L)_i(x) ) + β b^(L)_k,

conditioned on the values of α̃^(L), are iid centered Gaussians with covariance

Σ̃^(L+1)(x, x′) = (1/n_L) Σ_{i=1}^{n_L} σ( α̃^(L)_i(x) ) σ( α̃^(L)_i(x′) ) + β².

By the law of large numbers, as n_L → ∞, this covariance tends in probability to the expectation

Σ̃^(L+1)(x, x′) → E_{f ∼ N(0, Σ^(L))}[ σ(f(x)) σ(f(x′)) ] + β² = Σ^(L+1)(x, x′).

In particular the covariance is deterministic and hence independent of α̃^(L). As a consequence, the conditioned and unconditioned distributions of f_{θ,k} are equal in the limit: they are iid centered Gaussians of covariance Σ^(L+1). ∎
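The law-of-large-numbers step of this proof is easy to check numerically: drawing iid Gaussian preactivation pairs with a prescribed 2×2 covariance (standing in for the values of Σ^(L) on {x, x′}), the conditioned covariance concentrates around its expectation as the width grows. The covariance entries and β below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(8)
    beta = 0.1
    Lambda = np.array([[1.0, 0.6],            # Sigma^(L)(x, x), Sigma^(L)(x, x')
                       [0.6, 0.8]])           # Sigma^(L)(x', x), Sigma^(L)(x', x')
    relu = lambda u: np.maximum(u, 0.0)

    # Reference value of E[sigma(f(x)) sigma(f(x'))] + beta^2, estimated once.
    u, v = rng.multivariate_normal(np.zeros(2), Lambda, size=2_000_000).T
    target = np.mean(relu(u) * relu(v)) + beta**2

    for n in [10, 100, 1000, 100000]:
        u, v = rng.multivariate_normal(np.zeros(2), Lambda, size=n).T
        cond_cov = np.mean(relu(u) * relu(v)) + beta**2   # conditioned covariance at width n
        print(f"n = {n:6d}   conditioned covariance = {cond_cov:.4f}   (limit ≈ {target:.4f})")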

In the infinite-width limit, the neural tangent kernel, which is random at initialization, converges in probability to a deterministic limit.

Theorem 1.

For a network of depth L at initialization, with a Lipschitz nonlinearity σ, and in the limit as the layer widths n_1, …, n_{L−1} → ∞ sequentially, the NTK converges in probability to a deterministic limiting kernel:

Θ^(L) → Θ^(L)_∞ ⊗ Id_{n_L}.

The scalar kernel Θ^(L)_∞ : ℝ^{n_0} × ℝ^{n_0} → ℝ is defined recursively by

Θ^(1)_∞(x, x′) = Σ^(1)(x, x′)
Θ^(L+1)_∞(x, x′) = Θ^(L)_∞(x, x′) Σ̇^(L+1)(x, x′) + Σ^(L+1)(x, x′),

where

Σ̇^(L+1)(x, x′) = E_{f ∼ N(0, Σ^(L))}[ σ̇(f(x)) σ̇(f(x′)) ],

taking the expectation with respect to a centered Gaussian process f of covariance Σ^(L), and where σ̇ denotes the derivative of σ.

Proof.

The proof is again by induction. When L = 1, there is no hidden layer and therefore no limit to be taken. The neural tangent kernel is a sum over the entries of W^(0) and those of b^(0):

Θ^(1)_{kk′}(x, x′) = ( (1/n_0) x^T x′ + β² ) δ_{kk′} = Σ^(1)(x, x′) δ_{kk′}.

Here again, the key to proving the induction step is the observation that a network of depth L+1 is an L-network, mapping the inputs x to the preactivations α̃^(L)(x) of the L-th layer, followed by a nonlinearity and a random affine function. For a network of depth L+1, let us therefore split the parameters into the parameters θ̃ of the first L layers and those of the last layer, (W^(L), b^(L)).

By Proposition 1 and the induction hypothesis, as n_1, …, n_{L−1} → ∞ sequentially, the preactivations α̃^(L)_i are iid centered Gaussian with covariance Σ^(L) and the neural tangent kernel Θ^(L) of the smaller network converges to a deterministic limit:

Θ^(L)_{ii′}(x, x′) → Θ^(L)_∞(x, x′) δ_{ii′}.

We can split the neural tangent kernel Θ^(L+1) into a sum over the parameters θ̃ of the first L layers and the remaining parameters W^(L) and b^(L).

For the first sum, let us observe that by the chain rule:

∂_{θ̃_p} f_{θ,k}(x) = (1/√n_L) Σ_{i=1}^{n_L} ∂_{θ̃_p} α̃^(L)_i(x) σ̇( α̃^(L)_i(x) ) W^(L)_{ki}.

By the induction hypothesis, the contribution of the parameters θ̃ to the neural tangent kernel Θ^(L+1) therefore converges as n_1, …, n_{L−1} → ∞: