Excess risk bounds for multitask learning with trace norm regularization

12/06/2012
by Andreas Maurer et al.

Trace norm regularization is a popular method of multitask learning. We give excess risk bounds with explicit dependence on the number of tasks, the number of examples per task, and properties of the data distribution. The bounds are independent of the dimension of the input space, which may be infinite, as in the case of reproducing kernel Hilbert spaces. As a byproduct of the proof, we obtain bounds on the expected norm of sums of random positive semidefinite matrices with subexponential moments.
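
For context, here is a minimal sketch of the trace norm regularized multitask objective that such excess risk bounds concern; the notation (T tasks, m examples per task, loss ℓ, weight matrix W) is our own illustration and is not quoted from the paper:

% Sketch only: notation (T tasks, m examples per task, loss ℓ, matrix W) is assumed, not taken from the paper.
\[
  \hat{W} \in \operatorname*{arg\,min}_{W}\;
  \frac{1}{mT} \sum_{t=1}^{T} \sum_{i=1}^{m}
    \ell\bigl(\langle w_t, x_{ti} \rangle,\, y_{ti}\bigr)
  \;+\; \lambda\, \|W\|_{1},
  \qquad
  \|W\|_{1} = \operatorname{tr}\sqrt{W^{\top} W},
\]

where the t-th column w_t of W is the predictor for task t, and the trace norm ‖W‖₁ (the sum of the singular values of W) couples the tasks by favoring solutions whose predictors lie near a common low-dimensional subspace. The excess risk is then the gap between the expected loss of the regularized estimator and that of the best competitor of equal trace norm.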


Related research

08/27/2019 · On the Risk of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels
We study the risk of minimum-norm interpolants of data in a Reproducing ...

06/05/2023 · The L^∞ Learnability of Reproducing Kernel Hilbert Spaces
In this work, we analyze the learnability of reproducing kernel Hilbert ...

03/02/2019 · Leveraging Low-Rank Relations Between Surrogate Tasks in Structured Prediction
We study the interplay between surrogate methods for structured predicti...

04/16/2019 · Risk Bounds for Learning Multiple Components with Permutation-Invariant Losses
This paper proposes a simple approach to derive efficient error bounds f...

05/29/2019 · Improved Generalisation Bounds for Deep Learning Through L^∞ Covering Numbers
Using proof techniques involving L^∞ covering numbers, we show generalis...

09/22/2020 · Risk upper bounds for RKHS ridge group sparse estimator in the regression model with non-Gaussian and non-bounded error
We consider the problem of estimating a meta-model of an unknown regress...

04/23/2015 · Regularization-free estimation in trace regression with symmetric positive semidefinite matrices
Over the past few years, trace regression models have received considera...
