Calibration of Distributionally Robust Empirical Optimization Models

11/17/2017
by Jun-Ya Gotoh, et al.

In this paper, we study the out-of-sample properties of robust empirical optimization and develop a theory for data-driven calibration of the robustness parameter for worst-case maximization problems with concave reward functions. Building on the intuition that robust optimization reduces the sensitivity of the expected reward to errors in the model by controlling the spread of the reward distribution, we show that the first-order benefit of a little bit of robustness is a significant reduction in the variance of the out-of-sample reward, while the corresponding impact on the mean is almost an order of magnitude smaller. One implication is that a substantial reduction in the variance of the out-of-sample reward (i.e., the sensitivity of the expected reward to model misspecification) is possible at little cost if the robustness parameter is properly calibrated. To this end, we introduce the notion of a robust mean-variance frontier to select the robustness parameter and show that it can be approximated using resampling methods like the bootstrap. Our examples also show that open-loop calibration methods (e.g., selecting a 90% confidence level regardless of the data and objective function) can lead to solutions that are very conservative out-of-sample.
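
The calibration idea sketched above can be illustrated numerically. The Python snippet below is our own minimal sketch, not code from the paper: it replaces the worst-case objective with a simple variance-penalized surrogate (empirical mean minus delta times empirical variance) for a toy newsvendor-style concave reward, and uses a bootstrap loop to trace an approximate robust mean-variance frontier over a grid of robustness parameters. All names (reward, solve_robust, bootstrap_frontier), the demand model, and the surrogate objective are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def reward(a, x, price=2.0, cost=1.0):
    # Toy concave reward: newsvendor profit with order quantity a and demand x.
    return price * np.minimum(a, x) - cost * a

def solve_robust(data, delta, grid=np.linspace(0.0, 20.0, 201)):
    # Variance-penalized stand-in for the robust empirical problem:
    #   max_a  mean[r(a, X)] - delta * Var[r(a, X)]
    # (grid search used in place of a convex solver).
    best_a, best_val = None, -np.inf
    for a in grid:
        r = reward(a, data)
        val = r.mean() - delta * r.var()
        if val > best_val:
            best_a, best_val = a, val
    return best_a

def bootstrap_frontier(data, deltas, n_boot=200):
    # Approximate the robust mean-variance frontier: for each delta,
    # resample the data, re-solve, and record the mean and variance of
    # the resulting reward as proxies for out-of-sample performance.
    frontier = []
    for delta in deltas:
        means, variances = [], []
        for _ in range(n_boot):
            idx = rng.integers(0, len(data), len(data))
            a_hat = solve_robust(data[idx], delta)
            r_out = reward(a_hat, data)
            means.append(r_out.mean())
            variances.append(r_out.var())
        frontier.append((delta, np.mean(means), np.mean(variances)))
    return frontier

if __name__ == "__main__":
    demand = rng.exponential(scale=10.0, size=200)
    for delta, m, v in bootstrap_frontier(demand, deltas=[0.0, 0.05, 0.1, 0.2]):
        print(f"delta={delta:.2f}  mean={m:.3f}  variance={v:.3f}")

In this sketch, one would pick the robustness parameter at the point on the printed frontier where further increases in delta buy little additional variance reduction relative to the loss in mean reward, which is the data-driven alternative to fixing a confidence level in advance.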


