A data-driven approach to beating SAA out-of-sample

05/26/2021
by Jun-Ya Gotoh, et al.

While solutions of Distributionally Robust Optimization (DRO) problems can sometimes achieve a higher out-of-sample expected reward than those of the Sample Average Approximation (SAA), there is no guarantee. In this paper, we introduce the class of Distributionally Optimistic Optimization (DOO) models and show that it is always possible to "beat" SAA out-of-sample if we consider not just worst-case (DRO) models but also best-case (DOO) ones. We also show, however, that this comes at a cost: optimistic solutions are more sensitive to model error than either worst-case or SAA optimizers, and hence are less robust.
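To make the distinction concrete, here is a minimal sketch (not the paper's implementation) contrasting the three objectives on a toy newsvendor problem. It uses a simple mean-variance tilt of the empirical reward as a stand-in for optimizing over an ambiguity set, in the spirit of the variance-penalty approximation the authors develop in their earlier calibration work; the tilt size `delta`, the reward model, and all parameter values are illustrative assumptions, not taken from the paper.

```python
# Sketch: SAA vs. worst-case (DRO-like) vs. best-case (DOO-like) empirical
# objectives for a newsvendor. The variance tilt `delta` is a hypothetical
# stand-in for the size of the ambiguity set; all values are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
demand = rng.lognormal(mean=3.0, sigma=0.5, size=200)  # training sample

def reward(q, d, price=2.0, cost=1.0):
    # Newsvendor reward: revenue on units sold minus the cost of units ordered.
    return price * np.minimum(q, d) - cost * q

def tilted_objective(q, delta):
    # Empirical mean reward with a variance tilt:
    #   delta = 0  -> SAA (plain sample average),
    #   delta > 0  -> variance penalty (worst-case / DRO-like),
    #   delta < 0  -> variance bonus (best-case / DOO-like).
    r = reward(q, demand)
    return r.mean() - delta * r.var()

for label, delta in [("SAA", 0.0), ("DRO (worst-case)", 0.05), ("DOO (best-case)", -0.05)]:
    res = minimize_scalar(lambda q: -tilted_objective(q, delta),
                          bounds=(0.0, demand.max()), method="bounded")
    print(f"{label:>18}: q* = {res.x:6.2f}")
```

The DOO solution chases the sample's upside (larger order quantities here), which is exactly why, as the abstract notes, it can beat SAA out-of-sample yet is more sensitive to model error than the worst-case or SAA solutions.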
