Smoothed f-Divergence Distributionally Robust Optimization: Exponential Rate Efficiency and Complexity-Free Calibration

06/24/2023

by Zhenyuan Liu et al.

In data-driven optimization, sample average approximation is known to suffer from the so-called optimizer's curse, which causes optimistic bias in evaluating solution performance. This can be tackled by adding a "margin" to the estimated objective value, or via distributionally robust optimization (DRO), a fast-growing approach based on worst-case analysis that gives a protective bound on the attained objective value. However, in all these existing approaches, a statistically guaranteed bound on the true solution performance either requires restrictive conditions and knowledge of the objective function complexity, or otherwise exhibits an over-conservative rate that depends on the distribution dimension. We argue that a special type of DRO offers strong theoretical advantages in regard to these challenges: It attains a statistical bound on the true solution performance that is the tightest possible in terms of exponential decay rate, for a wide class of objective functions that notably does not hinge on function complexity. Correspondingly, its calibration also does not require any complexity information. This DRO uses an ambiguity set based on a KL divergence smoothed by the Wasserstein or Lévy-Prokhorov distance via a suitable distance optimization. Computationally, we also show that such a DRO, and its generalized version using smoothed f-divergence, is not much harder than standard DRO problems using the f-divergence or Wasserstein distance, thus supporting the strengths of such DRO as both statistically optimal and computationally viable.
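To make the worst-case-bound idea concrete: for a plain (unsmoothed) KL-divergence ambiguity set, the worst-case expected loss over all distributions within KL radius delta of the empirical distribution admits the classical one-dimensional dual, min over alpha > 0 of alpha * log E[exp(loss/alpha)] + alpha * delta. The sketch below is a minimal illustration of that standard dual on a loss sample; it is not the paper's smoothed f-divergence formulation, and the function name and radius are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl_dro_bound(losses, delta):
    """Worst-case expected loss over {Q : KL(Q || P_n) <= delta},
    where P_n is the empirical distribution of `losses`, via the
    classical dual:  min_{alpha>0} alpha*log E[exp(loss/alpha)] + alpha*delta.
    """
    losses = np.asarray(losses, dtype=float)
    m = losses.max()

    def dual(alpha):
        # log-sum-exp shift by the max for numerical stability:
        # alpha*log E[exp(l/alpha)] = m + alpha*log E[exp((l-m)/alpha)]
        return m + alpha * np.log(np.mean(np.exp((losses - m) / alpha))) \
                 + alpha * delta

    res = minimize_scalar(dual, bounds=(1e-6, 1e3), method="bounded")
    return res.fun

# Illustrative usage: the bound sits between the empirical mean (delta -> 0)
# and the worst single observation (delta large).
sample = np.array([0.0, 1.0])
bound = kl_dro_bound(sample, delta=0.1)
```

The dual makes the calibration question in the abstract tangible: the entire statistical guarantee hinges on how the radius delta is chosen, which for plain f-divergence or Wasserstein sets typically drags in complexity or dimension terms — the issue the smoothed divergence is designed to avoid.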
