Supervised Learning with General Risk Functionals

06/27/2022
by Liu Leqi, et al.

Standard uniform convergence results bound the generalization gap of the expected loss over a hypothesis class. The emergence of risk-sensitive learning requires generalization guarantees for functionals of the loss distribution beyond the expectation. While prior works specialize in uniform convergence of particular functionals, our work provides uniform convergence for a general class of Hölder risk functionals, for which closeness in the Cumulative Distribution Function (CDF) entails closeness in risk. We establish the first uniform convergence results for estimating the CDF of the loss distribution, yielding guarantees that hold simultaneously over all Hölder risk functionals and all hypotheses. Thus licensed to perform empirical risk minimization, we develop practical gradient-based methods for minimizing distortion risks (a widely studied subset of Hölder risks that subsumes the spectral risks, including the mean, conditional value at risk, cumulative prospect theory risks, and others) and provide convergence guarantees. In experiments, we demonstrate the efficacy of our learning procedure, both in settings where uniform convergence results hold and in high-dimensional settings with deep networks.
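
To make the gradient-based procedure concrete, below is a minimal sketch of distortion-risk minimization via a plug-in estimate of a spectral risk over sorted per-example losses. This is an illustration under stated assumptions, not the authors' implementation: the spectral_risk and cvar_spectrum helpers, the linear model, the synthetic data, and all hyperparameters are hypothetical choices made for clarity.

```python
# Minimal sketch (assumed, not from the paper): empirical distortion-risk
# minimization. A spectral risk is estimated by weighting the sorted losses
# with a non-decreasing spectrum that sums to one, then minimized by SGD.
import torch

def spectral_risk(losses, spectrum):
    """Plug-in spectral risk: weighted sum of the sorted empirical losses."""
    sorted_losses, _ = torch.sort(losses)       # ascending order
    return torch.sum(spectrum * sorted_losses)  # weights act on empirical quantiles

def cvar_spectrum(n, alpha):
    """Spectrum recovering CVaR_alpha: uniform weight on the worst (1 - alpha) fraction."""
    k = max(1, int(n * (1 - alpha)))
    w = torch.zeros(n)
    w[-k:] = 1.0 / k
    return w

# Illustrative setup: linear regression on synthetic data (assumption).
torch.manual_seed(0)
X = torch.randn(256, 10)
y = X @ torch.randn(10) + 0.1 * torch.randn(256)
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
spectrum = cvar_spectrum(n=256, alpha=0.95)

for step in range(200):
    losses = (model(X).squeeze(-1) - y) ** 2    # per-example squared loss
    risk = spectral_risk(losses, spectrum)      # CVaR of the loss distribution
    opt.zero_grad()
    risk.backward()                             # gradients flow through the sort
    opt.step()
```

With the CVaR spectrum shown, this amounts to minimizing the average of the worst 5% of squared errors; other distortion risks follow by swapping in a different non-decreasing, nonnegative spectrum that sums to one.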

Related research

Learning Bounds for Risk-sensitive Learning (06/15/2020)
In risk-sensitive learning, one aims to find a hypothesis that minimizes...

Classification by estimating the cumulative distribution function for small data (10/12/2022)
In this paper, we study the classification problem by estimating the con...

Computing the Shattering Coefficient of Supervised Learning Algorithms (05/07/2018)
The Statistical Learning Theory (SLT) provides the theoretical guarantee...

Risk regularization through bidirectional dispersion (03/28/2022)
Many alternative notions of "risk" (e.g., CVaR, entropic risk, DRO risk)...

Class-Weighted Classification: Trade-offs and Robust Approaches (05/26/2020)
We address imbalanced classification, the problem in which a label may h...

Conditional Risk Minimization for Stochastic Processes (10/09/2015)
We study the task of learning from non-i.i.d. data. In particular, we ai...

Uniform Convergence Bounds for Codec Selection (12/18/2018)
We frame the problem of selecting an optimal audio encoding scheme as a ...
