Risk Guarantees for End-to-End Prediction and Optimization Processes

12/30/2020
by Nam Ho-Nguyen, et al.

Prediction models are often employed to estimate parameters of optimization models. Although, in an end-to-end view, the real goal is to achieve good optimization performance, prediction performance is typically measured on its own. While it is commonly believed that good prediction performance in estimating the parameters will yield good subsequent optimization performance, formal theoretical guarantees for this are notably lacking. In this paper, we explore conditions that allow us to explicitly describe how the prediction performance governs the optimization performance. Our weaker condition yields an asymptotic convergence result, while our stronger condition allows for exact quantification of the optimization performance in terms of the prediction performance. In general, verifying these conditions is a non-trivial task. Nevertheless, we show that our weaker condition is equivalent to the well-known Fisher consistency concept from the learning theory literature, which allows us to easily check it for several loss functions. We also establish that the squared error loss function satisfies our stronger condition. Consequently, we derive the exact theoretical relationship between prediction performance, measured with the squared loss as well as a class of symmetric loss functions, and the subsequent optimization performance. In a computational study on portfolio optimization, fractional knapsack, and multiclass classification problems, we compare the optimization performance of several prediction loss functions (some Fisher consistent and some not) and demonstrate that lack of consistency of the loss function can indeed have a detrimental effect on performance.
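To make the end-to-end (predict-then-optimize) setting described above concrete, the following minimal Python sketch, not taken from the paper, builds a random fractional knapsack instance, solves it once with the true item values and once with noisily predicted values, and reports the squared-error prediction loss alongside the resulting optimization regret, i.e. the loss in true objective value incurred by optimizing with predicted parameters. All names and parameters here are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of the predict-then-optimize pipeline on a fractional
# knapsack instance: item values c are unknown and must be predicted; the
# optimization regret is the true objective value lost by optimizing with the
# predicted values instead of the true ones.

rng = np.random.default_rng(0)

n_items, capacity = 20, 5.0
weights = rng.uniform(0.5, 2.0, n_items)
c_true = rng.uniform(0.0, 1.0, n_items)          # true item values
c_pred = c_true + rng.normal(0.0, 0.2, n_items)  # noisy stand-in for a learned predictor

def fractional_knapsack(values, weights, capacity):
    """Greedy optimal solution of the fractional knapsack problem."""
    order = np.argsort(-values / weights)         # descending value-per-weight ratio
    x = np.zeros_like(values)
    remaining = capacity
    for i in order:
        take = min(1.0, remaining / weights[i])   # fraction of item i to take
        x[i] = max(take, 0.0)
        remaining -= x[i] * weights[i]
        if remaining <= 0:
            break
    return x

x_star = fractional_knapsack(c_true, weights, capacity)  # decision under true values
x_hat = fractional_knapsack(c_pred, weights, capacity)   # decision under predicted values

prediction_loss = np.mean((c_pred - c_true) ** 2)        # squared-error prediction loss
optimization_regret = c_true @ x_star - c_true @ x_hat   # loss in true objective value

print(f"squared prediction loss: {prediction_loss:.4f}")
print(f"optimization regret:     {optimization_regret:.4f}")
```

The noisy predictor here stands in for any learned model; the paper's guarantees concern how bounds on the first quantity (prediction loss) translate into bounds on the second (optimization performance).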
