The L^∞ Learnability of Reproducing Kernel Hilbert Spaces

06/05/2023
by Hongrui Chen, et al.

In this work, we analyze the learnability of reproducing kernel Hilbert spaces (RKHS) under the L^∞ norm, which is critical for understanding the performance of kernel methods and random feature models in safety- and security-critical applications. Specifically, we relate the L^∞ learnability of an RKHS to the spectral decay of the associated kernel and establish both lower and upper bounds on the sample complexity. In particular, for dot-product kernels on the sphere, we identify conditions under which L^∞ learning can be achieved with polynomially many samples. Let d denote the input dimension and assume the kernel spectrum roughly decays as λ_k ∼ k^{-(1+β)} with β > 0. We prove that if β is independent of the input dimension d, then functions in the RKHS can be learned efficiently under the L^∞ norm, i.e., the sample complexity depends polynomially on d. In contrast, if β = 1/poly(d), then L^∞ learning requires exponentially many samples.
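As a rough, self-contained illustration of the spectral-decay quantity referenced above, the sketch below estimates the eigenvalues of a dot-product kernel on the sphere from an empirical Gram matrix and compares their decay with a k^{-(1+β)} reference curve. The kernel choice k(x, y) = exp(⟨x, y⟩), the dimension, the sample size, and the exponent β are illustrative assumptions, not the settings used in the paper.

```python
# Minimal sketch (not from the paper): estimate the spectrum of a dot-product
# kernel on the sphere S^{d-1} by eigendecomposing an empirical Gram matrix,
# then compare the eigenvalue decay with a k^{-(1+beta)} reference.
import numpy as np

rng = np.random.default_rng(0)

d, n = 10, 1000                      # illustrative input dimension and sample size
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # points on the unit sphere S^{d-1}

K = np.exp(X @ X.T)                  # a dot-product kernel k(x, y) = exp(<x, y>)
eigvals = np.linalg.eigvalsh(K / n)  # empirical estimates of the kernel eigenvalues
eigvals = np.sort(eigvals)[::-1]     # sort in decreasing order

beta = 1.0                           # hypothetical decay exponent
k = np.arange(1, 51)
reference = eigvals[0] * k ** (-(1 + beta))

for i in (1, 5, 10, 25, 50):
    print(f"lambda_{i:>2d} = {eigvals[i - 1]:.3e}   "
          f"k^(-(1+beta)) reference = {reference[i - 1]:.3e}")
```

The printed eigenvalues give a crude picture of how fast the kernel spectrum decays; in the regime the abstract describes, a decay exponent β that stays bounded away from zero as d grows is what makes polynomial-sample L^∞ learning possible.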
