
Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks
Recent works have cast some light on the mystery of why deep nets fit an...

Understanding Generalization of Deep Neural Networks Trained with Noisy Labels
Overparameterized deep neural networks trained by simple first-order me...

On the Generalization Power of Overfitted Two-Layer Neural Tangent Kernel Models
In this paper, we study the generalization performance of min ℓ_2-norm o...

Generalization bound of globally optimal non-convex neural network training: Transportation map estimation by infinite dimensional Langevin dynamics
We introduce a new theoretical framework to analyze deep learning optimi...

Interpolation and Learning with Scale Dependent Kernels
We study the learning properties of nonparametric ridgeless least-squar...

A Note on Lazy Training in Supervised Differentiable Programming
In a series of recent theoretical works, it has been shown that strongly...

Of Kernels and Queues: when network calculus meets analytic combinatorics
Stochastic network calculus is a tool for computing error bounds on the ...
A Revision of Neural Tangent Kernel-based Approaches for Neural Networks
Recent theoretical works based on the neural tangent kernel (NTK) have shed light on the optimization and generalization of overparameterized networks, and partially bridge the gap between their practical success and classical learning theory. In particular, the NTK-based approach has yielded the following three representative results: (1) a training error bound showing that networks can fit any finite training sample perfectly, with a tighter characterization of training speed that depends on the data complexity; (2) a generalization error bound, invariant to network size, derived via a data-dependent complexity measure (CMD), from which it follows that networks can generalize arbitrary smooth functions; and (3) a simple, analytic kernel function that is equivalent to a fully-trained network. This kernel outperforms both its corresponding network and the existing gold standard, Random Forests, in few-shot learning. For all of these results to hold, the network scaling factor κ must decrease with the sample size n. In this regime of decreasing κ, however, we prove that the aforementioned results are surprisingly erroneous: the output of the trained network decreases to zero as κ decreases with n. To resolve this problem, we tighten the key bounds by essentially removing the κ-affected terms. Our tighter analysis resolves the scaling problem and enables the validation of the original NTK-based results.
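For intuition about result (3), the analytic kernel for a two-layer ReLU network has a well-known closed form, K(x, x') = x·x' (π − θ)/(2π) with θ the angle between x and x'. The sketch below (a minimal illustration under the assumption of unit-norm inputs; the function names and the small ridge term are our own, not from the paper) shows how such a kernel can stand in for a fully-trained network via kernel regression:

```python
import numpy as np

def two_layer_relu_ntk(X1, X2):
    """Analytic kernel for a two-layer ReLU network (illustrative sketch):
    K(x, x') = (x . x') * (pi - theta) / (2 * pi),
    where theta is the angle between x and x'.
    Assumes each row of X1 and X2 is a unit-norm input."""
    G = X1 @ X2.T                       # pairwise inner products
    G = np.clip(G, -1.0, 1.0)           # guard arccos against round-off
    theta = np.arccos(G)
    return G * (np.pi - theta) / (2.0 * np.pi)

def ntk_regression(X_train, y_train, X_test, reg=1e-8):
    """Near-ridgeless kernel regression with the analytic kernel,
    mimicking the infinite-width, fully-trained network predictor.
    The tiny 'reg' term is only for numerical stability."""
    K = two_layer_relu_ntk(X_train, X_train)
    alpha = np.linalg.solve(K + reg * np.eye(len(K)), y_train)
    return two_layer_relu_ntk(X_test, X_train) @ alpha
```

Because the kernel is analytic, no network training is required: fitting reduces to one linear solve against the Gram matrix, and the predictor interpolates the training labels almost exactly when the ridge term is negligible.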