
Improved Concentration Bounds for Conditional Value-at-Risk and Cumulative Prospect Theory using Wasserstein distance

02/27/2019
by Sanjay P. Bhat, et al.

Known finite-sample concentration bounds for the Wasserstein distance between the empirical and true distributions of a random variable are used to derive a two-sided concentration bound on the error between the true conditional value-at-risk (CVaR) of a (possibly unbounded) random variable and a standard estimate of its CVaR computed from an i.i.d. sample. The bound applies under fairly general assumptions on the random variable and improves upon previous bounds, which were either one-sided or applied only to bounded random variables. Specializations of the bound to sub-Gaussian and sub-exponential random variables are also derived. A similar procedure is followed to derive concentration bounds on the error between the true and estimated Cumulative Prospect Theory (CPT) value of a random variable, in cases where the random variable is bounded or sub-Gaussian. These bounds are shown to match a known bound in the bounded case and to improve upon the known bound in the sub-Gaussian case. The usefulness of the bounds is illustrated through an algorithm, and a corresponding regret bound, for a stochastic bandit problem in which the underlying risk measure to be optimized is CVaR.
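To make the abstract's "standard estimate" concrete, the following is a minimal sketch (not taken from the paper) of the usual empirical CVaR estimator for losses, built on the Rockafellar-Uryasev representation CVaR_alpha(X) = min_v { v + E[(X - v)^+] / (1 - alpha) } with the empirical alpha-quantile plugged in for v; the level alpha = 0.95 and all identifiers are illustrative assumptions.

    import numpy as np

    def empirical_cvar(samples, alpha=0.95):
        """Standard empirical CVaR estimate from an i.i.d. sample of losses.

        Plugs the empirical alpha-quantile (the empirical VaR) into the
        Rockafellar-Uryasev representation
            CVaR_alpha(X) = min_v { v + E[(X - v)^+] / (1 - alpha) }.
        """
        x = np.asarray(samples, dtype=float)
        var_hat = np.quantile(x, alpha)            # empirical VaR_alpha
        tail_excess = np.maximum(x - var_hat, 0.0) # losses beyond VaR
        return var_hat + tail_excess.mean() / (1.0 - alpha)

    # Sanity check on a sub-Gaussian example: for X ~ N(0, 1),
    # CVaR_0.95(X) = phi(Phi^{-1}(0.95)) / 0.05 ~= 2.0627.
    rng = np.random.default_rng(0)
    print(empirical_cvar(rng.standard_normal(100_000)))

The route the abstract describes rests on CVaR being Lipschitz in the 1-Wasserstein distance, |CVaR_alpha(F) - CVaR_alpha(G)| <= W_1(F, G) / (1 - alpha), so a finite-sample concentration bound on W_1 between the empirical and true distributions transfers directly to a two-sided bound on the error of the estimator above.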

