The Perils of Learning Before Optimizing

06/18/2021
by Chris Cameron, et al.

Formulating real-world optimization problems often begins with making predictions from historical data (e.g., an optimizer that aims to recommend fast routes relies upon travel-time predictions). Typically, learning the prediction model used to generate the optimization problem and solving that problem are performed in two separate stages. Recent work has shown how such prediction models can be learned end-to-end by differentiating through the optimization task. Such methods often yield empirical improvements, which are typically attributed to the end-to-end approach making better error tradeoffs than the standard loss function used in a two-stage solution. We refine this explanation and more precisely characterize when end-to-end can improve performance. When prediction targets are stochastic, a two-stage solution must make an a priori choice about which statistics of the target distribution to model (we consider expectations over prediction targets), while an end-to-end solution can make this choice adaptively. We show that the performance gap between a two-stage and an end-to-end approach is closely related to the price of correlation (POC) concept in stochastic optimization, and we show the implications of some existing POC results for our predict-then-optimize problem. We then consider a novel and particularly practical setting in which coefficients in the objective function depend on multiple prediction targets. We give explicit constructions where (1) two-stage performs unboundedly worse than end-to-end and (2) two-stage is optimal. We identify a large set of real-world applications whose objective functions rely on multiple prediction targets but which nevertheless deploy two-stage solutions. We also use simulations to experimentally quantify performance gaps.
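The abstract's central observation, that a two-stage solution modeling only per-target expectations can fail when an objective coefficient depends on multiple correlated prediction targets, can be illustrated with a toy simulation. The sketch below is not from the paper; the payoff values and correlation level are invented for illustration. It contrasts a two-stage estimate, which multiplies the two targets' expectations, with the true expected objective, which also contains their covariance.

# Hypothetical illustration (assumed setup, not the authors' code):
# a binary decision between action A, whose payoff is the product of
# two stochastic prediction targets y1 * y2, and action B with a
# fixed payoff. Because E[y1 * y2] = E[y1]E[y2] + Cov(y1, y2), a
# two-stage solution that models only the expectations E[y1], E[y2]
# can choose the wrong action whenever the targets are correlated.
import numpy as np

rng = np.random.default_rng(0)

# Two positively correlated targets, each with mean 1.0.
cov = [[1.0, 0.9], [0.9, 1.0]]
samples = rng.multivariate_normal(mean=[1.0, 1.0], cov=cov, size=100_000)
y1, y2 = samples[:, 0], samples[:, 1]

fixed_payoff = 1.5  # payoff of action B

# Two-stage: predict each target's expectation, then optimize.
# It scores action A as E[y1] * E[y2] = 1.0 and so prefers B.
two_stage_score = y1.mean() * y2.mean()

# Decision-aware: score action A by the expected objective itself,
# E[y1 * y2] = 1.0 + 0.9 = 1.9, so A is in fact the better action.
true_score = (y1 * y2).mean()

print(f"two-stage score for A: {two_stage_score:.2f} -> picks B ({fixed_payoff})")
print(f"true expected payoff of A: {true_score:.2f} -> A is optimal")

The gap between the two scores is exactly the covariance term, which is the intuition behind the paper's connection to the price of correlation: the stronger the dependence among targets entering the same coefficient, the more a two-stage solution can lose.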


Related research

- PyEPO: A PyTorch-based End-to-End Predict-then-Optimize Library for Linear and Integer Programming (06/28/2022)
- End to end learning and optimization on graphs (05/31/2019)
- End-to-End Learning for Stochastic Optimization: A Bayesian Perspective (06/07/2023)
- A General Data Renewal Model for Prediction Algorithms in Industrial Data Analytics (08/22/2019)
- A Note on Task-Aware Loss via Reweighing Prediction Loss by Decision-Regret (11/09/2022)
- End-to-End Stochastic Optimization with Energy-Based Model (11/25/2022)
- Distributionally Robust End-to-End Portfolio Construction (06/10/2022)
