New Generalization Bounds for Learning Kernels

12/17/2009
by Corinna Cortes, et al.

This paper presents several novel generalization bounds for the problem of learning kernels, based on an analysis of the Rademacher complexity of the corresponding hypothesis sets. Our bound for learning kernels with a convex combination of p base kernels has only a log(p) dependency on the number of kernels p, which is considerably more favorable than the previous best bound for the same problem. We also give a novel bound for learning with a linear combination of p base kernels under L_2 regularization, whose dependency on p is only p^{1/4}.
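For intuition, a high-probability margin bound of the kind the abstract describes typically takes the shape sketched below. This is only a generic sketch under standard assumptions (sample size m, margin rho, kernel values bounded by R^2, confidence 1 - delta), not the paper's exact statement: the constants, lower-order terms, and the precise placement of the log p factor are in the full text.

    % Hedged sketch of a margin-based bound for learning kernels with a
    % convex combination of p base kernels; exact constants are in the paper.
    % m: sample size, \rho: margin, R^2: bound on K(x,x), \delta: confidence.
    \[
      R(h) \;\le\; \widehat{R}_\rho(h)
        \;+\; O\!\left(\sqrt{\frac{R^2 \log p}{\rho^2 m}}\right)
        \;+\; O\!\left(\sqrt{\frac{\log(1/\delta)}{m}}\right)
    \]

The key point stated in the abstract is how the number of base kernels enters the complexity term: only through log p for the convex-combination (L_1-regularized) family, and through p^{1/4} for the L_2-regularized linear family.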


Related research

- Generalization Bounds on Multi-Kernel Learning with Mixed Datasets (05/15/2022): This paper presents novel generalization bounds for the multi-kernel lea...
- L2 Regularization for Learning Kernels (05/09/2012): The choice of the kernel is critical to the success of many learning alg...
- Ensembles of Kernel Predictors (02/14/2012): This paper examines the problem of learning with a finite and possibly l...
- Finding Optimal Combination of Kernels using Genetic Programming (04/08/2016): In Computer Vision, problem of identifying or classifying the objects pr...
- GPURepair: Automated Repair of GPU Kernels (11/17/2020): This paper presents a tool for repairing errors in GPU kernels written i...
- Algorithms for Learning Kernels Based on Centered Alignment (03/02/2012): This paper presents new and effective algorithms for learning kernels. I...
- Provably Efficient Kernelized Q-Learning (04/21/2022): We propose and analyze a kernelized version of Q-learning. Although a ke...
