Risk regularization through bidirectional dispersion

03/28/2022
by Matthew J. Holland, et al.

Many alternative notions of "risk" (e.g., CVaR, entropic risk, DRO risk) have been proposed and studied, but these risks are all at least as sensitive as the mean to loss tails on the upside, and tend to ignore deviations on the downside. In this work, we study a complementary new risk class that penalizes loss deviations in a bidirectional manner, while having more flexibility in terms of tail sensitivity than is offered by classical mean-variance, without sacrificing computational or analytical tractability.
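To make the idea concrete, here is a minimal illustrative sketch (not the paper's exact formulation) of a bidirectional dispersion penalty. The function `rho(x) = sqrt(1 + x^2) - 1` is one hypothetical choice of dispersion function: it penalizes loss deviations symmetrically on both sides of the mean, but grows only linearly in the tails, so it is less tail-sensitive than the quadratic penalty used by classical mean-variance. The weights `eta` and `sigma` are illustrative knobs for penalty strength and deviation scale.

```python
import numpy as np

def bidirectional_dispersion_risk(losses, eta=1.0, sigma=1.0):
    """Sketch of a dispersion-regularized risk: mean loss plus a
    symmetric, sub-quadratic penalty on deviations from the mean.

    rho(x) = sqrt(1 + x**2) - 1 penalizes both upward and downward
    deviations (bidirectional), yet grows linearly rather than
    quadratically in the tails, unlike the variance penalty.
    """
    losses = np.asarray(losses, dtype=float)
    dev = (losses - losses.mean()) / sigma        # centered, scaled deviations
    rho = np.sqrt(1.0 + dev**2) - 1.0             # symmetric dispersion penalty
    return losses.mean() + eta * sigma * rho.mean()
```

Because the penalty depends only on centered deviations, shifting all losses by a constant shifts the risk by the same constant; and for widely dispersed losses the penalty stays well below the variance, reflecting the milder tail sensitivity the abstract describes.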

