Bounds for Vector-Valued Function Estimation

06/05/2016
by Andreas Maurer, et al.

We present a framework to derive risk bounds for vector-valued learning with a broad class of feature maps and loss functions. Multi-task learning and one-vs-all multi-category learning are treated as examples. We discuss in detail vector-valued functions with one hidden layer, and demonstrate that the conditions under which shared representations are beneficial for multi-task learning are equally applicable to multi-category learning.
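To make the abstract's central object concrete, the sketch below shows a vector-valued function with one hidden layer whose hidden weights act as a representation shared across all outputs, read either as multi-task learning (one output per task) or as one-vs-all multi-category learning (one output per class). This is a minimal illustration under assumed shapes and names (d, k, m, W, V are our notation, not the paper's), not the paper's construction or analysis.

```python
# Illustrative sketch (assumed notation): f(x) = V @ relu(W @ x).
# W is the shared representation (hidden layer); each row of V is a
# task-specific (multi-task) or class-specific (one-vs-all) predictor.
import numpy as np

rng = np.random.default_rng(0)

d, k, m = 20, 10, 5   # input dim, hidden width, number of outputs (tasks/classes)
n = 200               # sample size

W = rng.normal(size=(k, d)) / np.sqrt(d)   # shared hidden-layer weights
V = rng.normal(size=(m, k)) / np.sqrt(k)   # per-output linear weights

def relu(z):
    return np.maximum(z, 0.0)

def f(X):
    """Vector-valued predictor: returns an (n, m) matrix of scores."""
    H = relu(X @ W.T)   # shared features, shape (n, k)
    return H @ V.T      # column t is the score of task t / class t

X = rng.normal(size=(n, d))
scores = f(X)

# One-vs-all multi-category reading: predict the coordinate with the largest score.
labels = scores.argmax(axis=1)

# Multi-task reading of the same object: column t of `scores` is task t's
# prediction; only V is task-specific, while W is estimated jointly.
print(scores.shape, labels[:10])
```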


Related research

05/22/2018 - Infinite-Task Learning with Vector-Valued RKHSs
Machine learning has witnessed the tremendous success of solving tasks d...

11/04/2011 - Vector-valued Reproducing Kernel Banach Spaces with Applications to Multi-task Learning
Motivated by multi-task machine learning with Banach spaces, we propose ...

09/11/2019 - Learning Vector-valued Functions with Local Rademacher Complexity
We consider a general family of problems of which the output space admit...

04/29/2021 - Fine-grained Generalization Analysis of Vector-valued Learning
Many fundamental machine learning tasks can be formulated as a problem o...

07/12/2021 - CatVRNN: Generating Category Texts via Multi-task Learning
Controlling the model to generate texts of different categories is a cha...

05/25/2023 - Vector-Valued Variation Spaces and Width Bounds for DNNs: Insights on Weight Decay Regularization
Deep neural networks (DNNs) trained to minimize a loss term plus the sum...

05/01/2016 - A vector-contraction inequality for Rademacher complexities
The contraction inequality for Rademacher averages is extended to Lipsch...
