Tailoring to the Tails: Risk Measures for Fine-Grained Tail Sensitivity

08/05/2022
by Christian Fröhlich, et al.

Expected risk minimization (ERM) is at the core of machine learning systems. This means that the risk inherent in a loss distribution is summarized by a single number: its average. In this paper, we propose a general approach for constructing risk measures that exhibit a desired tail sensitivity and may replace the expectation operator in ERM. Our method relies on the specification of a reference distribution with a desired tail behaviour, which is in one-to-one correspondence with a coherent upper probability. Any risk measure that is compatible with this upper probability displays a tail sensitivity finely tuned to the reference distribution. As a concrete example, we focus on divergence risk measures based on f-divergence ambiguity sets, which are a widespread tool used to foster distributional robustness of machine learning systems. For instance, we show how ambiguity sets based on the Kullback-Leibler divergence are intricately tied to the class of subexponential random variables. We elaborate on the connection between divergence risk measures and rearrangement-invariant Banach norms.
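To make the kind of divergence risk measure mentioned in the abstract concrete, the sketch below numerically evaluates the worst-case expected loss over a Kullback-Leibler ambiguity ball around the empirical distribution, using the standard convex dual of that problem, inf_{lam>0} { lam * log E[exp(loss/lam)] + lam * radius }. This is a minimal illustration under assumptions made here, not the paper's implementation: the function name `kl_divergence_risk`, the choice of a light-tailed (exponential) example sample, and the use of SciPy's scalar minimizer are all hypothetical choices for concreteness.

```python
import numpy as np
from scipy.optimize import minimize_scalar


def kl_divergence_risk(losses, radius):
    """Illustrative sketch (not the paper's implementation).

    Worst-case expected loss over the KL ambiguity set
    {Q : D_KL(Q || P_hat) <= radius}, with P_hat the empirical
    distribution of `losses`, computed via the convex dual
        inf_{lam > 0}  lam * log E[exp(loss / lam)] + lam * radius.
    """
    losses = np.asarray(losses, dtype=float)

    def dual(log_lam):
        lam = np.exp(log_lam)  # parametrize lam > 0 through its logarithm
        z = losses / lam
        # numerically stable log-mean-exp
        log_mean_exp = np.log(np.mean(np.exp(z - z.max()))) + z.max()
        return lam * log_mean_exp + lam * radius

    # the dual is convex in lam, hence unimodal in log_lam
    res = minimize_scalar(dual)
    return dual(res.x)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = rng.exponential(scale=1.0, size=100_000)  # light (exponential) tail
    print("mean loss      :", sample.mean())
    print("KL risk, r=0.01:", kl_divergence_risk(sample, radius=0.01))
    print("KL risk, r=0.10:", kl_divergence_risk(sample, radius=0.10))
```

Replacing the plain sample mean with such a functional inside a training loop is the sense in which a risk measure "may replace the expectation operator in ERM": as the radius shrinks to zero the value collapses to the ordinary average, while larger radii weight the right tail of the loss distribution more heavily.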

