Learning without Concentration

01/01/2014
by Shahar Mendelson, et al.

We obtain sharp bounds on the performance of Empirical Risk Minimization performed in a convex class and with respect to the squared loss, without assuming that class members and the target are bounded functions or have rapidly decaying tails. Rather than resorting to a concentration-based argument, the method used here relies on a "small-ball" assumption and thus holds for classes consisting of heavy-tailed functions and for heavy-tailed targets. The resulting estimates scale correctly with the "noise level" of the problem, and when applied to the classical, bounded scenario, always improve the known bounds.
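For context, here is a minimal sketch of the two objects the abstract refers to, written in the standard formulation used in this literature; the exact constants and quantifiers in the paper may differ. Given i.i.d. samples (X_i, Y_i), ERM over the convex class F with the squared loss returns

\[
\hat{f} \in \operatorname*{argmin}_{f \in F} \; \frac{1}{N} \sum_{i=1}^{N} \bigl( f(X_i) - Y_i \bigr)^2 ,
\]

and a small-ball condition asks for constants \kappa > 0 and 0 < \varepsilon \le 1 such that

\[
\Pr\bigl( |f(X) - h(X)| \ge \kappa \, \lVert f - h \rVert_{L_2} \bigr) \ge \varepsilon
\quad \text{for all } f, h \in F .
\]

Unlike boundedness or sub-Gaussian tail assumptions, such a condition only requires that differences of class members do not place too much mass near zero, which is why heavy-tailed functions and targets remain admissible.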


Related research

10/13/2014 · Learning without Concentration for General Loss Functions
We study prediction and estimation problems using empirical risk minimiz...

01/04/2021 · Benign overfitting without concentration
We obtain a sufficient condition for benign overfitting of linear regres...

02/04/2020 · Learning bounded subsets of L_p
We study learning problems in which the underlying class is a bounded su...

09/04/2017 · Extending the small-ball method
The small-ball method was introduced as a way of obtaining a high probab...

07/17/2017 · An optimal unrestricted learning procedure
We study learning problems in the general setup, for arbitrary classes o...

03/05/2020 · II. High Dimensional Estimation under Weak Moment Assumptions: Structured Recovery and Matrix Estimation
The purpose of this thesis is to develop new theories on high-dimensiona...

08/27/2019 · Singletons for Simpletons: Revisiting Windowed Backoff using Chernoff Bounds
For the well-known problem of balls dropped uniformly at random into bin...
