On the Complexity of Learning with Kernels

11/05/2014
by Nicolò Cesa-Bianchi, et al.

A well-recognized limitation of kernel learning is the need to handle a kernel matrix whose size is quadratic in the number of training examples. Many methods have been proposed to reduce this computational cost, mostly by using a subset of the kernel matrix entries, some form of low-rank matrix approximation, or a random projection method. In this paper, we study lower bounds on the error attainable by such methods as a function of the number of kernel matrix entries observed or the rank of an approximate kernel matrix. We show that there are kernel learning problems where no such method yields non-trivial computational savings. Our results quantify how the difficulty of the problem depends on parameters such as the nature of the loss function, the regularization parameter, the norm of the desired predictor, and the kernel matrix rank. They also suggest cases where more efficient kernel learning might be possible.
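To make the cost trade-off concrete, here is a minimal NumPy sketch of the Nyström method, one of the low-rank approximation schemes the abstract refers to: instead of the full n × n kernel matrix, it observes only the n × m block of kernel entries against m sampled landmark points. The function names, the RBF kernel choice, and all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Y."""
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * np.maximum(sq, 0.0))  # clamp tiny negative round-off

def nystrom(X, m, gamma=0.5, seed=0):
    """Rank-m Nystrom factors: K is approximated by C @ pinv(W) @ C.T,
    where C holds m sampled columns of K and W the matching m x m block.
    Only n*m kernel entries are ever evaluated, instead of all n^2."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)   # n x m: sampled columns of K
    W = C[idx]                         # m x m landmark block
    return C, np.linalg.pinv(W)

# Toy data: 500 points in 2-d; the factors are only 500 x 50.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 2))
C, W_pinv = nystrom(X, m=50)

# The full matrices are formed here only to measure the approximation error.
K_approx = C @ W_pinv @ C.T
K_exact = rbf_kernel(X, X)
rel_err = np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact)
```

The lower bounds in the paper concern exactly this regime: how small the error of such a scheme can be as a function of the number of observed entries (here n·m) or the rank of the approximation (here at most m).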


Related research

- 11/05/2017 · Is Input Sparsity Time Possible for Kernel Low-Rank Approximation?
  Low-rank approximation is a common tool used to accelerate kernel method...
- 08/02/2016 · Hierarchically Compositional Kernels for Scalable Nonparametric Learning
  We propose a novel class of kernels to alleviate the high computational ...
- 04/11/2023 · An Adaptive Factorized Nyström Preconditioner for Regularized Kernel Matrices
  The spectrum of a kernel matrix significantly depends on the parameter v...
- 01/21/2020 · Face Verification via learning the kernel matrix
  The kernel function is introduced to solve the nonlinear pattern recogni...
- 07/07/2023 · Point spread function approximation of high rank Hessians with locally supported non-negative integral kernels
  We present an efficient matrix-free point spread function (PSF) method f...
- 06/06/2017 · Understanding and Eliminating the Large-kernel Effect in Blind Deconvolution
  Blind deconvolution consists of recovering a clear version of an observe...
- 10/04/2009 · Regularization Techniques for Learning with Matrices
  There is a growing body of learning problems for which it is natural to ...
