Infinite-Task Learning with Vector-Valued RKHSs

05/22/2018
by Romain Brault, et al.

Machine learning has witnessed tremendous success in solving tasks that depend on a hyperparameter. While multi-task learning is celebrated for its capacity to jointly solve a finite number of tasks, learning a continuum of tasks for various loss functions remains a challenge. A promising approach, called Parametric Task Learning, has paved the way in the case of piecewise-linear loss functions. We propose a generic approach, called Infinite-Task Learning, that jointly solves a continuum of tasks via vector-valued RKHSs. We provide generalization guarantees for the suggested scheme and illustrate its efficiency on cost-sensitive classification, quantile regression, and density level set estimation.
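To make the idea concrete, the sketch below illustrates the flavor of Infinite-Task Learning on joint quantile regression, where the hyperparameter is the quantile level. It is not the paper's algorithm: it is a simplified hand-rolled example, assuming a decomposable kernel k((x, t), (x', t')) = k_X(x, x') k_T(t, t'), a grid of anchor quantile levels standing in for the continuum, and plain subgradient descent on the averaged pinball loss. All names and constants (gamma, lam, lr, the anchor grid) are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: sinusoidal trend with heteroscedastic noise.
n = 80
X = rng.uniform(-1.0, 1.0, size=(n, 1))
y = np.sin(3.0 * X[:, 0]) + 0.3 * (1.0 + X[:, 0]) * rng.normal(size=n)

def gaussian_kernel(a, b, gamma):
    """Gaussian kernel matrix between the rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

# Continuum of tasks (quantile levels) discretised on anchor points;
# decomposable kernel: one kernel on inputs, one on quantile levels.
m = 20
thetas = np.linspace(0.05, 0.95, m)
Kx = gaussian_kernel(X, X, gamma=5.0)
Kt = gaussian_kernel(thetas[:, None], thetas[:, None], gamma=10.0)

# Model h(x, t) = sum_{i,j} A[i, j] k_X(x, X[i]) k_T(t, thetas[j]).
# Minimise the pinball loss averaged over samples and quantile levels,
# plus a ridge penalty, by (sub)gradient descent on A.
A = np.zeros((n, m))
lam, lr = 1e-3, 0.05
for _ in range(500):
    H = Kx @ A @ Kt                  # h at all pairs (X[i], thetas[j])
    R = y[:, None] - H               # residuals per (sample, level)
    # Pinball subgradient w.r.t. the residual: t if r > 0, else t - 1.
    G = np.where(R > 0, thetas[None, :], thetas[None, :] - 1.0)
    grad = -(Kx @ G @ Kt) / (n * m) + lam * (Kx @ A @ Kt)
    A -= lr * grad

H = Kx @ A @ Kt
# Predictions at higher quantile levels should sit above those at lower
# levels, since a single function of (x, t) is fit across all levels.
```

A single coefficient matrix is shared across all quantile levels, so the task kernel Kt transfers information between nearby levels; this is the mechanism that lets one model cover a continuum of tasks rather than fitting each level independently.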


Related research

06/05/2016  Bounds for Vector-Valued Function Estimation
We present a framework to derive risk bounds for vector-valued learning ...

04/01/2021  Learning Rates for Multi-task Regularization Networks
Multi-task learning is an important trend of machine learning in facing ...

05/19/2017  Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics
Numerous deep learning applications benefit from multi-task learning wit...

02/09/2021  Emotion Transfer Using Vector-Valued Infinite Task Learning
Style transfer is a significant problem of machine learning with numerou...

06/06/2017  Classifying Documents within Multiple Hierarchical Datasets using Multi-Task Learning
Multi-task learning (MTL) is a supervised learning paradigm in which the...

08/13/2018  Multi-Task Learning for Sequence Tagging: An Empirical Study
We study three general multi-task learning (MTL) approaches on 11 sequen...

03/26/2014  Beyond L2-Loss Functions for Learning Sparse Models
Incorporating sparsity priors in learning tasks can give rise to simple,...
