On aggregation for heavy-tailed classes

02/25/2015
by Shahar Mendelson, et al.

We introduce an alternative to the notion of 'fast rate' in learning theory, which coincides with the optimal error rate when the given class happens to be convex and regular in some sense. While it is well known that such a rate cannot always be attained by a learning procedure (i.e., a procedure that selects a function in the given class), we introduce an aggregation procedure that attains that rate under rather minimal assumptions -- for example, that the L_q and L_2 norms are equivalent on the linear span of the class for some q > 2, and that the target random variable is square-integrable.
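Stated concretely, the two assumptions named in the abstract take the following form (a minimal formalization; the equivalence constant L and the symbol Y for the target are our notation, not taken from the paper):

```latex
% Norm equivalence on the span of the class F, for some q > 2:
\[
\exists\, q > 2,\ L \ge 1 \ \text{such that}\quad
\|f\|_{L_q} \le L\,\|f\|_{L_2}
\quad \text{for every } f \in \mathrm{span}(F),
\]
% together with square integrability of the target Y:
\[
\mathbb{E}\, Y^2 < \infty.
\]
```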
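The abstract does not spell out the aggregation procedure itself, so the sketch below is only a generic illustration of the kind of two-stage, sample-splitting aggregation scheme common in this literature, with a median-of-means risk estimate as the heavy-tail-robust ingredient. All names are hypothetical; this is not the paper's algorithm.

```python
import numpy as np

def median_of_means(values, k=8):
    """Median-of-means estimate of a mean: split into k blocks,
    average each block, return the median of the block means.
    Robust to heavy-tailed summands, unlike the plain average."""
    blocks = np.array_split(np.asarray(values), k)
    return float(np.median([b.mean() for b in blocks]))

def split_and_select(candidate_fits, X, y, seed=0):
    """Generic two-stage aggregation by sample splitting (illustrative
    sketch only).  Stage 1 fits one candidate per routine on half the
    data; stage 2 selects among them by a robust empirical risk on the
    held-out half.  Each element of `candidate_fits` is a training
    routine fit(X, y) -> predictor f, with f(X) returning predictions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    train, hold = idx[: len(y) // 2], idx[len(y) // 2:]

    # Stage 1: fit every candidate on the first half of the sample.
    candidates = [fit(X[train], y[train]) for fit in candidate_fits]

    def robust_risk(f):
        # Squared residuals on the held-out half, aggregated by
        # median-of-means rather than a plain mean, so that a few
        # heavy-tailed residuals cannot dominate the comparison.
        return median_of_means((f(X[hold]) - y[hold]) ** 2)

    # Stage 2: return the candidate with the smallest robust risk.
    return min(candidates, key=robust_risk)
```

The sample split keeps the selection step independent of the fitting step, and the median-of-means estimate keeps the risk comparison stable even when the squared residuals have heavy tails, which is the regime the paper is concerned with.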

Related research

04/16/2018
Structured Recovery with Heavy-tailed Measurements: A Thresholding Procedure and Optimal Rates
This paper introduces a general regularized thresholded least-square pro...

07/17/2017
An optimal unrestricted learning procedure
We study learning problems in the general setup, for arbitrary classes o...

10/13/2014
Learning without Concentration for General Loss Functions
We study prediction and estimation problems using empirical risk minimiz...

02/25/2021
Distribution-Free Robust Linear Regression
We study random design linear regression with no assumptions on the dist...

09/29/2016
Fast learning rates with heavy-tailed losses
We study fast learning rates when the losses are not necessarily bounded...

04/09/2015
'local' vs. 'global' parameters -- breaking the gaussian complexity barrier
We show that if F is a convex class of functions that is L-subgaussian, ...

07/02/2018
Well-Scaling Procedure for Deciding Gammoid Class-Membership of Matroids
We introduce a procedure that solves the decision problem whether a give...
