
On Uniform Convergence and LowNorm Interpolation Learning
We consider an underdetermined noisy linear regression model where the minimum-norm interpolating predictor is known to be consistent, and ask: can uniform convergence in a norm ball, or at least (following Nagarajan and Kolter) the subset of a norm ball that the algorithm selects on a typical input set, explain this success? We show that uniformly bounding the difference between empirical and population errors cannot show any learning in the norm ball, and cannot show consistency for any set, even one depending on the exact algorithm and distribution. But we argue we can explain the consistency of the minimal-norm interpolator with a slightly weaker, yet standard, notion: uniform convergence of zero-error predictors. We use this to bound the generalization error of low (but not minimal) norm interpolating predictors.
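The setup in the abstract can be made concrete with a small sketch. The snippet below (a minimal illustration, not the paper's experiments; the dimensions, noise level, and ground-truth signal are assumptions chosen for illustration) constructs an underdetermined noisy linear regression problem with more features than samples, and computes the minimum-norm interpolating predictor via the Moore–Penrose pseudoinverse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined regime: more features (d) than samples (n).
n, d = 20, 100
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[0] = 1.0                      # illustrative ground-truth signal (an assumption)
y = X @ w_true + 0.1 * rng.standard_normal(n)   # noisy labels

# Minimum-norm interpolator: among all w satisfying X w = y (a whole
# affine subspace, since the system is underdetermined), pick the one
# with the smallest Euclidean norm. The pseudoinverse returns exactly this.
w_hat = np.linalg.pinv(X) @ y

# The predictor interpolates: zero empirical error despite label noise.
train_residual = np.linalg.norm(X @ w_hat - y)
```

Any other interpolating solution `w_hat + v` with `v` in the null space of `X` also achieves zero training error but has strictly larger norm, which is why norm-ball arguments are the natural (and, per the abstract, insufficient) candidate for explaining the success of this predictor.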