Risk Guarantees for End-to-End Prediction and Optimization Processes

12/30/2020
by Nam Ho-Nguyen, et al.

Prediction models are often employed to estimate parameters of optimization models. Although, from an end-to-end perspective, the real goal is to achieve good optimization performance, prediction performance is typically measured on its own. While it is commonly believed that good prediction performance in estimating the parameters will result in good subsequent optimization performance, formal theoretical guarantees on this are notably lacking. In this paper, we explore conditions that allow us to explicitly describe how the prediction performance governs the optimization performance. Our weaker condition yields an asymptotic convergence result, while our stronger condition allows for exact quantification of the optimization performance in terms of the prediction performance. In general, verifying these conditions is a non-trivial task. Nevertheless, we show that our weaker condition is equivalent to the well-known concept of Fisher consistency from the learning theory literature. This allows us to easily check our weaker condition for several loss functions. We also establish that the squared error loss function satisfies our stronger condition. Consequently, we derive the exact theoretical relationship between prediction performance, measured with the squared loss as well as a class of symmetric loss functions, and the subsequent optimization performance. In a computational study on portfolio optimization, fractional knapsack, and multiclass classification problems, we compare the optimization performance of several prediction loss functions (some that are Fisher consistent and some that are not) and demonstrate that lack of consistency of the loss function can indeed have a detrimental effect on performance.
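To make the end-to-end setting concrete, the sketch below illustrates a generic predict-then-optimize pipeline on a toy fractional-knapsack-style problem: item values are estimated with a squared error loss, the predictions are plugged into the downstream optimizer, and the resulting optimization regret is compared against the plain prediction error. This is only an illustrative sketch of the setting, not the paper's method; all data, names, and problem sizes here are hypothetical.

```python
# Illustrative sketch (hypothetical data and setup): predict item values with a
# squared error (least-squares) fit, then measure downstream optimization regret
# on a toy fractional knapsack problem.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: features X, true item values c_true = X @ w_true + noise.
n_items, n_features = 50, 5
w_true = rng.normal(size=n_features)
X = rng.normal(size=(n_items, n_features))
c_true = X @ w_true + 0.1 * rng.normal(size=n_items)
weights = rng.uniform(0.5, 1.5, size=n_items)
capacity = 0.3 * weights.sum()

def fractional_knapsack(values, weights, capacity):
    """Greedy (optimal) solution of the fractional knapsack for given item values."""
    order = np.argsort(-values / weights)  # highest value-to-weight ratio first
    x = np.zeros(len(values))
    remaining = capacity
    for i in order:
        if values[i] <= 0:
            break  # remaining items have non-positive value; taking them cannot help
        take = min(1.0, remaining / weights[i])
        x[i] = take
        remaining -= take * weights[i]
        if remaining <= 0:
            break
    return x

# Prediction step: least-squares fit of item values (squared error loss).
w_hat, *_ = np.linalg.lstsq(X, c_true, rcond=None)
c_pred = X @ w_hat

# Optimization step: solve the knapsack with predicted and with true values.
x_pred = fractional_knapsack(c_pred, weights, capacity)
x_star = fractional_knapsack(c_true, weights, capacity)

# End-to-end (optimization) regret versus plain prediction error.
regret = c_true @ x_star - c_true @ x_pred
mse = np.mean((c_pred - c_true) ** 2)
print(f"prediction MSE: {mse:.4f}, optimization regret: {regret:.4f}")
```

The paper's guarantees concern exactly this gap: how the prediction error (here, the MSE) controls the downstream optimization regret.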
