Asymptotic learning curves of kernel methods: empirical data vs. Teacher-Student paradigm

05/26/2019
by Stefano Spigler, et al.

How much training data is needed to learn a supervised task? It is often observed that the generalization error decreases as n^-β, where n is the number of training examples and β is an exponent that depends on both the data and the algorithm. In this work we measure β when applying kernel methods to real datasets. For MNIST we find β ≈ 0.4 and for CIFAR10 β ≈ 0.1. Remarkably, β is the same for regression and classification tasks, and for Gaussian or Laplace kernels. To rationalize the existence of non-trivial exponents that can be independent of the specific kernel used, we introduce the Teacher-Student framework for kernels. In this scheme, a Teacher generates data according to a Gaussian random field, and a Student learns them via kernel regression. Under a simplifying assumption, namely that the data are sampled from a regular lattice, we derive β analytically for translation-invariant kernels, using previous results from the kriging literature. Provided that the Student is not too sensitive to high frequencies, β depends only on the training data and their dimension. We confirm numerically that these predictions hold when the training points are sampled at random on a hypersphere. Overall, our results quantify how smooth Gaussian data should be to avoid the curse of dimensionality, and indicate that for kernel learning the relevant dimension of the data should be defined in terms of how the distance between nearest data points depends on n. With this definition, one obtains reasonable estimates of the effective smoothness for MNIST and CIFAR10.
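As a concrete illustration of the Teacher-Student setup described in the abstract, the sketch below (not the authors' code; the dimension, kernel widths, sample sizes, and the small ridge are illustrative assumptions) draws a Teacher function from a Gaussian random field with a Laplace covariance on a hypersphere, fits it by kernel regression with a Laplace Student kernel, and estimates β from a log-log fit of the test error against n.

```python
# Hedged sketch: estimate the learning-curve exponent beta for kernel
# regression in a Teacher-Student setting.  All numerical choices below
# (d, kernel widths, n-grid, ridge) are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
d = 3                              # ambient dimension of the inputs
sigma_T, sigma_S = 1.0, 1.0        # Teacher / Student kernel widths

def laplace_kernel(X, Y, sigma):
    """Laplace kernel k(x, y) = exp(-||x - y|| / sigma)."""
    dists = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return np.exp(-dists / sigma)

def sphere_points(n, d):
    """n points sampled uniformly on the unit sphere in R^d."""
    X = rng.standard_normal((n, d))
    return X / np.linalg.norm(X, axis=1, keepdims=True)

ns, errors, n_test = [64, 128, 256, 512, 1024], [], 512
for n in ns:
    X = sphere_points(n + n_test, d)
    # Teacher: one draw of a centered Gaussian random field with Laplace covariance,
    # evaluated jointly on the train and test points.
    K_T = laplace_kernel(X, X, sigma_T) + 1e-10 * np.eye(n + n_test)
    y = np.linalg.cholesky(K_T) @ rng.standard_normal(n + n_test)
    X_tr, y_tr, X_te, y_te = X[:n], y[:n], X[n:], y[n:]
    # Student: kernel regression with a (numerically) vanishing ridge.
    K = laplace_kernel(X_tr, X_tr, sigma_S) + 1e-8 * np.eye(n)
    alpha = np.linalg.solve(K, y_tr)
    y_pred = laplace_kernel(X_te, X_tr, sigma_S) @ alpha
    errors.append(np.mean((y_te - y_pred) ** 2))

# Fit test_error ~ n^{-beta} on a log-log scale.
beta = -np.polyfit(np.log(ns), np.log(errors), 1)[0]
print(f"estimated beta ≈ {beta:.2f}")
```

Averaging the test error over several Teacher draws for each n would reduce the noise in the fitted exponent; a single draw is kept here to keep the sketch short.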

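The effective-dimension idea at the end of the abstract, that the typical nearest-neighbour distance δ(n) among n points scales as n^(-1/d_eff), can be estimated in the same spirit. The sketch below uses uniform points on a hypersphere as an illustrative stand-in for a real dataset such as MNIST or CIFAR10; the sample sizes are arbitrary.

```python
# Hedged sketch: read off an effective dimension d_eff from how the mean
# nearest-neighbour distance shrinks with the number of points n.
import numpy as np

rng = np.random.default_rng(1)

def mean_nn_distance(X):
    """Average distance from each point to its nearest neighbour."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    return dists.min(axis=1).mean()

d = 5
ns, deltas = [200, 400, 800, 1600], []
for n in ns:
    X = rng.standard_normal((n, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # points on the unit sphere
    deltas.append(mean_nn_distance(X))

# delta(n) ~ n^{-1/d_eff}  =>  d_eff = -1 / slope of log(delta) vs log(n)
slope = np.polyfit(np.log(ns), np.log(deltas), 1)[0]
print(f"effective dimension ≈ {-1.0 / slope:.1f}   (intrinsic d - 1 = {d - 1})")
```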
