Spectral risk-based learning using unbounded losses

05/11/2021
by   Matthew J. Holland, et al.

In this work, we consider the setting of learning problems under a wide class of spectral risk (or "L-risk") functions, where a Lipschitz-continuous spectral density is used to flexibly assign weight to extreme loss values. We obtain excess risk guarantees for a derivative-free learning procedure under unbounded heavy-tailed loss distributions, and propose a computationally efficient implementation which empirically outperforms traditional risk minimizers in terms of balancing spectral risk and misclassification error.
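To make the notion of a spectral risk concrete: given losses sorted in increasing order, a spectral (or "L-") risk weights each empirical quantile by a non-negative, non-decreasing spectral density that integrates to one, so heavier weight falls on extreme losses. The sketch below is an illustrative plug-in estimator, not the paper's proposed procedure; the function name `spectral_risk` and the CVaR-style density are our own choices for the example.

```python
import numpy as np

def spectral_risk(losses, phi):
    """Empirical spectral risk: sorted losses weighted by a spectral density.

    phi maps u in (0, 1) to a non-negative weight; for a coherent spectral
    risk it should be non-decreasing and integrate to 1 over [0, 1].
    """
    losses = np.sort(np.asarray(losses, dtype=float))
    n = losses.size
    # Evaluate phi at the midpoints of the n quantile bins, then normalize
    # so the weights sum to 1 (a simple Riemann-sum discretization).
    u = (np.arange(n) + 0.5) / n
    w = phi(u)
    w = w / w.sum()
    return float(np.dot(w, losses))

# Example spectral density: CVaR at level alpha puts uniform weight on the
# worst (1 - alpha) fraction of losses and zero weight elsewhere.
alpha = 0.9
cvar_phi = lambda u: (u >= alpha).astype(float)

rng = np.random.default_rng(0)
losses = rng.standard_t(df=3, size=10_000)  # heavy-tailed losses
print(spectral_risk(losses, cvar_phi))                    # tail-focused risk
print(spectral_risk(losses, lambda u: np.ones_like(u)))   # uniform density: the mean
```

With the constant density the estimator reduces to the ordinary empirical mean, while a density concentrated near u = 1 recovers tail-sensitive criteria such as CVaR; the Lipschitz-continuity assumption in the abstract restricts how sharply phi may vary.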


Related research

- 09/14/2020 · Risk Bounds for Robust Deep Learning — It has been observed that certain loss functions can render deep-learnin...
- 06/03/2019 · Distribution oblivious, risk-aware algorithms for multi-armed bandits with unbounded rewards — Classical multi-armed bandit problems use the expected value of an arm a...
- 12/04/2020 · Non-monotone risk functions for learning — In this paper we consider generalized classes of potentially non-monoton...
- 05/01/2016 · Fast Rates for General Unbounded Loss Functions: from ERM to Generalized Bayes — We present new excess risk bounds for general unbounded loss functions i...
- 01/27/2023 · Robust variance-regularized risk minimization with concomitant scaling — Under losses which are potentially heavy-tailed, we consider the task of...
- 06/07/2022 · Risk Measures and Upper Probabilities: Coherence and Stratification — Machine learning typically presupposes classical probability theory whic...
- 05/07/2012 · Risk estimation for matrix recovery with spectral regularization — In this paper, we develop an approach to recursively estimate the quadra...
