
Improved Concentration Bounds for Conditional Value-at-Risk and Cumulative Prospect Theory using Wasserstein distance

by Sanjay P. Bhat, et al.

Known finite-sample concentration bounds for the Wasserstein distance between the empirical and true distribution of a random variable are used to derive a two-sided concentration bound on the error between the true conditional value-at-risk (CVaR) of a (possibly unbounded) random variable and a standard estimate of its CVaR computed from an i.i.d. sample. The bound applies under fairly general assumptions on the random variable, and improves upon previous bounds, which were either one-sided or applied only to bounded random variables. Specializations of the bound to sub-Gaussian and sub-exponential random variables are also derived. A similar procedure is followed to derive concentration bounds on the error between the true and estimated Cumulative Prospect Theory (CPT) value of a random variable, in cases where the random variable is bounded or sub-Gaussian. These bounds are shown to match a known bound in the bounded case, and to improve upon the known bound in the sub-Gaussian case. The usefulness of the bounds is illustrated through an algorithm, and a corresponding regret bound, for a stochastic bandit problem in which the underlying risk measure to be optimized is CVaR.
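As a concrete illustration (not taken from the paper), the following is a minimal sketch of a standard empirical CVaR estimator of the kind the abstract refers to, assuming the loss convention CVaR_α(X) = E[X | X ≥ VaR_α(X)] and using the Rockafellar–Uryasev representation CVaR_α(X) = min_c { c + E[(X − c)⁺] / (1 − α) } evaluated at the empirical quantile. The function name and quantile convention are illustrative choices, not the paper's.

```python
import numpy as np

def empirical_cvar(sample, alpha):
    """Estimate CVaR at level alpha from an i.i.d. sample (losses).

    Evaluates the Rockafellar-Uryasev objective
        c + E[(X - c)^+] / (1 - alpha)
    at c equal to the empirical VaR (a sample quantile), which is a
    standard plug-in estimator of CVaR.
    """
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    # Empirical VaR: the ceil(n * alpha)-th order statistic.
    k = int(np.ceil(n * alpha))
    var = x[k - 1]
    # Average the excess over VaR, scaled by the tail probability.
    return var + np.maximum(x - var, 0.0).mean() / (1.0 - alpha)
```

For example, on the sample 1, 2, …, 100 with α = 0.9, the estimate is the average of the top ten values, 95.5. Concentration bounds such as those in the paper quantify how fast this sample estimate approaches the true CVaR as n grows.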



Concentration bounds for empirical conditional value-at-risk: The unbounded case

In several real-world applications involving decision making under uncer...

Wasserstein Conditional Independence Testing

We introduce a test for the conditional independence of random variables...

Sharper Sub-Weibull Concentrations: Non-asymptotic Bai-Yin Theorem

Arising in high-dimensional probability, non-asymptotic concentration in...

Estimation of Spectral Risk Measures

We consider the problem of estimating a spectral risk measure (SRM) from...

The Restricted Isometry of ReLU Networks: Generalization through Norm Concentration

While regression tasks aim at interpolating a relation on the entire inp...

Minimum Description Length Principle in Supervised Learning with Application to Lasso

The minimum description length (MDL) principle in supervised learning is...

Convergence and Concentration of Empirical Measures under Wasserstein Distance in Unbounded Functional Spaces

We provide upper bounds of the expected Wasserstein distance between a p...