An optimal unrestricted learning procedure

07/17/2017
by Shahar Mendelson et al.

We study learning problems in the general setup, for arbitrary classes of functions F, distributions X and targets Y. Because proper learning procedures, i.e., procedures that are only allowed to select functions in F, tend to perform poorly unless the problem satisfies some additional structural property (e.g., that F is convex), we consider unrestricted learning procedures, that is, procedures that are free to choose functions outside the given class F. We present a new unrestricted procedure that is optimal in a very strong sense: it attains the best possible accuracy/confidence tradeoff for (almost) any triplet (F,X,Y), including in heavy-tailed problems. Moreover, the tradeoff the procedure attains coincides with what one would expect if F were convex, even when F is not; and when F happens to be convex, the procedure is proper; thus, the unrestricted procedure is actually optimal in both realms, for convex classes as a proper procedure and for arbitrary classes as an unrestricted procedure. The notion of optimality we consider is problem specific: our procedure performs with the best accuracy/confidence tradeoff one can hope to achieve for each individual problem. As such, it is a significantly stronger property than the standard "worst-case" notion, in which one considers optimality as the best uniform estimate that holds for a relatively large family of problems. Thanks to the sharp and problem-specific estimates we obtain, classical, worst-case bounds are immediate outcomes of our main result.
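A standard building block behind robust procedures with good accuracy/confidence tradeoffs in heavy-tailed problems is the median-of-means estimator. The sketch below is not the paper's procedure, only a toy illustration of why such estimators tolerate heavy tails where the plain empirical mean does not: split the sample into blocks, average each block, and take the median of the block means, so a few extreme observations can corrupt only a minority of blocks.

```python
import random
import statistics

def median_of_means(samples, n_blocks):
    """Split the sample into n_blocks equal blocks, average each block,
    and return the median of the block means. A handful of outliers can
    corrupt only a few blocks, so the median stays near the true mean."""
    k = len(samples) // n_blocks
    block_means = [
        statistics.mean(samples[i * k:(i + 1) * k]) for i in range(n_blocks)
    ]
    return statistics.median(block_means)

random.seed(0)
# Mostly standard normal data (true mean 0), plus a few huge outliers
# standing in for a heavy-tailed sample.
data = [random.gauss(0, 1) for _ in range(995)] + [1000.0] * 5
random.shuffle(data)

plain = statistics.mean(data)            # dragged far from 0 by the outliers
robust = median_of_means(data, 11)       # at most 5 of 11 blocks corrupted,
                                         # so the median block mean is clean
print(f"empirical mean:  {plain:.2f}")
print(f"median of means: {robust:.2f}")
```

With 5 outliers and 11 blocks, fewer than half the blocks can be contaminated, so the median of the block means is the mean of an uncorrupted block and concentrates around 0, while the empirical mean is shifted by roughly 5.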

