A Revision of Neural Tangent Kernel-based Approaches for Neural Networks

by Kyung-Su Kim, et al.

Recent theoretical works based on the neural tangent kernel (NTK) have shed light on the optimization and generalization of over-parameterized networks, partially bridging the gap between their practical success and classical learning theory. In particular, the NTK-based approach has yielded the following three representative results: (1) a training error bound showing that networks can fit any finite training sample perfectly, with a tighter characterization of training speed that depends on the data complexity; (2) a generalization error bound, independent of network size, derived from a data-dependent complexity measure (CMD), from which it follows that networks can generalize arbitrary smooth functions; and (3) a simple, analytic kernel function shown to be equivalent to a fully-trained network, which outperforms both its corresponding network and the existing gold standard, Random Forests, in few-shot learning. For all of these results to hold, the network scaling factor κ must decrease with the sample size n. In this regime of decreasing κ, however, we prove that the aforementioned results are, surprisingly, erroneous: the output of the trained network decreases to zero as κ decreases with n. To resolve this problem, we tighten the key bounds by essentially removing the κ-affected values. Our tighter analysis resolves the scaling problem and restores the validity of the original NTK-based results.
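The "simple and analytic kernel function" in point (3) has, for an infinitely wide two-layer ReLU network, a standard closed form in the NTK literature. The sketch below is an illustration under that assumption, not this paper's exact construction: it evaluates the common closed-form two-layer ReLU NTK on unit-norm inputs, builds the Gram matrix H, and forms the kernel predictor f(x) = k(x)ᵀ H⁻¹ y that plays the role of the fully-trained network (function and variable names are hypothetical).

```python
import numpy as np

def relu_ntk(x, z):
    # Closed-form NTK of an infinitely wide two-layer ReLU network,
    # valid for unit-norm inputs x, z (standard form from the NTK
    # literature; a sketch, not this paper's exact kernel).
    u = np.clip(np.dot(x, z), -1.0, 1.0)
    theta = np.arccos(u)
    # Derivative term: (x . z) * E[sigma'(w . x) sigma'(w . z)]
    dot_term = u * (np.pi - theta) / (2 * np.pi)
    # Activation term: E[sigma(w . x) sigma(w . z)]
    act_term = (u * (np.pi - theta) + np.sqrt(max(0.0, 1.0 - u * u))) / (2 * np.pi)
    return dot_term + act_term

# Kernel regression with the network-equivalent kernel:
# f(x) = k(x)^T H^{-1} y, with H the NTK Gram matrix on training data.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm inputs
y = np.sign(X[:, 0])                            # toy binary labels
H = np.array([[relu_ntk(a, b) for b in X] for a in X])
alpha = np.linalg.solve(H + 1e-8 * np.eye(len(X)), y)  # tiny ridge for stability
pred = H @ alpha                                # predictions at training points
```

On unit-norm inputs the kernel satisfies relu_ntk(x, x) = 1, and since the Gram matrix is (numerically) positive definite, the predictor interpolates the training labels, mirroring the abstract's claim that such networks fit any finite training sample perfectly.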



