Finite-Sample Guarantees for Wasserstein Distributionally Robust Optimization: Breaking the Curse of Dimensionality

09/09/2020
by Rui Gao, et al.

Wasserstein distributionally robust optimization (DRO) aims to find robust and generalizable solutions by hedging against data perturbations in Wasserstein distance. Despite its recent empirical success in operations research and machine learning, existing performance guarantees for generic loss functions are either overly conservative due to the curse of dimensionality, or plausible only in the large-sample asymptotics. In this paper, we develop a non-asymptotic framework for analyzing the out-of-sample performance of Wasserstein robust learning and the generalization bound of its related Lipschitz and gradient regularization problems. To the best of our knowledge, this gives the first finite-sample guarantee for generic Wasserstein DRO problems that does not suffer from the curse of dimensionality. Our results highlight the bias-variation trade-off intrinsic to Wasserstein DRO, which automatically balances the empirical mean of the loss against its variation, measured by the Lipschitz norm or the gradient norm of the loss. Our analysis rests on two novel methodological developments that are of independent interest: 1) a new concentration inequality characterizing the decay rate of large-deviation probabilities in terms of the variation of the loss, and 2) a localized Rademacher complexity theory based on the variation of the loss.
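
As a concrete illustration (a sketch using generic notation not taken from the abstract: loss \ell, parameter \theta, ambiguity radius \rho, and the order-1 Wasserstein distance W_1), the Wasserstein DRO problem over the empirical distribution \hat{P}_n reads

    \min_{\theta \in \Theta} \; \sup_{Q:\, W_1(Q, \hat{P}_n) \le \rho} \; \mathbb{E}_{\xi \sim Q}\big[\ell(\theta; \xi)\big],

and, under suitable conditions on \ell, the worst-case objective is well approximated by the regularized empirical risk

    \mathbb{E}_{\hat{P}_n}\big[\ell(\theta; \xi)\big] \;+\; \rho \cdot \mathrm{Lip}\big(\ell(\theta; \cdot)\big),

with the Lipschitz norm replaced by an empirical gradient norm in the gradient-regularization variant. This is the sense in which the radius \rho governs the trade-off between the empirical mean of the loss and its variation.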


Related research

10/22/2020
Two-sample Test using Projected Wasserstein Distance: Breaking the Curse of Dimensionality
We develop a projected Wasserstein distance for the two-sample test, a f...

12/12/2022
On Generalization and Regularization via Wasserstein Distributionally Robust Optimization
Wasserstein distributionally robust optimization (DRO) has found success...

09/29/2021
Exact Statistical Inference for the Wasserstein Distance by Selective Inference
In this paper, we study statistical inference for the Wasserstein distan...

12/17/2017
Wasserstein Distributional Robustness and Regularization in Statistical Learning
A central question in statistical learning is to design algorithms that ...

05/26/2023
Exact Generalization Guarantees for (Regularized) Wasserstein Distributionally Robust Models
Wasserstein distributionally robust estimators have emerged as powerful ...

06/27/2022
The Performance of Wasserstein Distributionally Robust M-Estimators in High Dimensions
Wasserstein distributionally robust optimization has recently emerged as...

08/20/2021
Distributionally Robust Learning
This monograph develops a comprehensive statistical learning framework t...
