Multi-task and Lifelong Learning of Kernels

02/21/2016, by Anastasia Pentina, et al.

We consider the problem of learning kernels for use in SVM classification in the multi-task and lifelong scenarios, and we provide generalization bounds on the error of a large-margin classifier. Our results show that, under mild conditions on the family of kernels used for learning, solving several related tasks simultaneously is beneficial over single-task learning. In particular, assume the considered family contains one kernel that yields low approximation error on all tasks. Then, as the number of observed tasks grows, the overhead associated with learning such a kernel vanishes, and the complexity converges to that of learning when this good kernel is given to the learner in advance.
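The paper's contribution is the generalization bounds themselves, not a specific algorithm. Still, the core idea (one kernel from a family can serve all related tasks, and observing more tasks makes selecting it cheaper) can be illustrated with a small, hedged sketch. The code below is not the paper's method: it uses a kernel perceptron as a lightweight stand-in for an SVM, a family of RBF kernels indexed by bandwidth as the kernel family, and synthetic two-blob tasks that share geometry, so that a single bandwidth should work well across all of them. The kernel with the lowest average held-out error across tasks is selected as the shared kernel.

```python
import math
import random

def rbf(gamma):
    # RBF kernel k(x, y) = exp(-gamma * ||x - y||^2) on 2-D points.
    def k(x, y):
        d2 = (x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2
        return math.exp(-gamma * d2)
    return k

def kernel_perceptron(X, y, k, epochs=5):
    # Dual-form perceptron: alpha[i] counts mistakes made on example i.
    # Used here as a simple stand-in for a large-margin kernel classifier.
    alpha = [0] * len(X)
    for _ in range(epochs):
        for i, (xi, yi) in enumerate(zip(X, y)):
            s = sum(a * yj * k(xj, xi) for a, xj, yj in zip(alpha, X, y))
            if yi * s <= 0:
                alpha[i] += 1
    return alpha

def error_rate(Xte, yte, alpha, Xtr, ytr, k):
    # Fraction of held-out points the dual classifier misclassifies.
    errs = 0
    for xi, yi in zip(Xte, yte):
        s = sum(a * yj * k(xj, xi) for a, xj, yj in zip(alpha, Xtr, ytr))
        if yi * s <= 0:
            errs += 1
    return errs / len(Xte)

random.seed(0)

def make_task(shift):
    # Synthetic task: two Gaussian blobs separated along the x-axis.
    # Tasks differ only by a small shift, so one kernel suits them all.
    X, y = [], []
    for _ in range(40):
        lab = random.choice([-1, 1])
        X.append((lab * 1.5 + shift + random.gauss(0, 0.5),
                  random.gauss(0, 0.5)))
        y.append(lab)
    return X, y

tasks = [make_task(s) for s in (0.0, 0.3, -0.3)]
gammas = [0.01, 0.5, 10.0]  # the (hypothetical) kernel family

avg_err = {}
for g in gammas:
    k = rbf(g)
    errs = []
    for X, y in tasks:
        Xtr, ytr, Xte, yte = X[:30], y[:30], X[30:], y[30:]
        alpha = kernel_perceptron(Xtr, ytr, k)
        errs.append(error_rate(Xte, yte, alpha, Xtr, ytr, k))
    avg_err[g] = sum(errs) / len(errs)

best = min(avg_err, key=avg_err.get)
print("average multi-task error per gamma:", avg_err)
print("selected bandwidth:", best)
```

Averaging the validation error over all observed tasks, rather than over a single task, is what makes the selection more reliable as the number of tasks grows, mirroring the vanishing-overhead phenomenon the bounds formalize.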


Related research:

- Stability of Multi-Task Kernel Regression Algorithms (06/17/2013): We study the stability properties of nonlinear multi-task regression in ...
- Multi-Task Learning Using Neighborhood Kernels (07/11/2017): This paper introduces a new and effective algorithm for learning kernels...
- Learning Rates for Multi-task Regularization Networks (04/01/2021): Multi-task learning is an important trend of machine learning in facing ...
- Efficient Output Kernel Learning for Multiple Tasks (11/18/2015): The paradigm of multi-task learning is that one can achieve better gener...
- Generalization Bounds on Multi-Kernel Learning with Mixed Datasets (05/15/2022): This paper presents novel generalization bounds for the multi-kernel lea...
- L2 Regularization for Learning Kernels (05/09/2012): The choice of the kernel is critical to the success of many learning alg...
- Multi-Task Multiple Kernel Relationship Learning (11/10/2016): This paper presents a novel multitask multiple kernel learning framework...
