
Characterizing the Spectrum of the NTK via a Power Series Expansion

by Michael Murray, et al.

Under mild conditions on the network initialization, we derive a power series expansion for the Neural Tangent Kernel (NTK) of arbitrarily deep feedforward networks in the infinite-width limit. We provide expressions for the coefficients of this power series, which depend on both the Hermite coefficients of the activation function and the depth of the network. We observe that faster decay of the Hermite coefficients leads to faster decay of the NTK coefficients. Using this series, we first relate the effective rank of the NTK to the effective rank of the input-data Gram matrix. Second, for data drawn uniformly on the sphere we derive an explicit formula for the eigenvalues of the NTK, which shows that faster decay of the NTK coefficients implies faster decay of its spectrum. From this we recover existing results on eigenvalue asymptotics for ReLU networks and comment on how the activation function influences the RKHS. Finally, for generic data and activation functions with sufficiently fast Hermite coefficient decay, we derive an asymptotic upper bound on the spectrum of the NTK.
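Since the NTK power series coefficients are built from the Hermite coefficients of the activation, the decay behaviour discussed in the abstract can be illustrated numerically. The sketch below (an illustration only, not the paper's construction; the function name and numerical-integration scheme are our own choices) estimates the normalized Hermite coefficients μ_k = E[φ(Z)·h_k(Z)] of an activation φ, where Z ~ N(0,1) and h_k = He_k/√(k!) is the normalized probabilist's Hermite polynomial, and shows their decay for ReLU.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval


def normalized_hermite_coeffs(phi, kmax, zmax=10.0, npts=20001):
    """Estimate mu_k = E[phi(Z) h_k(Z)] for Z ~ N(0,1), where h_k is the
    normalized probabilist's Hermite polynomial He_k / sqrt(k!).
    (Illustrative helper; the quadrature is a plain Riemann sum.)"""
    z = np.linspace(-zmax, zmax, npts)
    dz = z[1] - z[0]
    gauss = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)   # standard normal density
    mu = np.empty(kmax + 1)
    for k in range(kmax + 1):
        c = np.zeros(k + 1)
        c[k] = 1.0                                   # select He_k in the HermiteE basis
        hk = hermeval(z, c) / np.sqrt(factorial(k))  # normalized He_k(z)
        mu[k] = np.sum(phi(z) * hk * gauss) * dz     # Riemann-sum expectation
    return mu


relu = lambda z: np.maximum(z, 0.0)
mu = normalized_hermite_coeffs(relu, 8)
# For ReLU: mu[0] = 1/sqrt(2*pi), mu[1] = 1/2, odd coefficients k >= 3 vanish,
# and the even coefficients decay only polynomially in k.
```

The slow (polynomial) decay of ReLU's even Hermite coefficients is what produces the comparatively slow NTK eigenvalue decay the paper recovers; a smooth activation with rapidly decaying Hermite coefficients would, by the abstract's observation, yield a faster-decaying NTK spectrum.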



