Plug-in Performative Optimization
When predictions are performative, the choice of which predictor to deploy influences the distribution of future observations. The overarching goal in learning under performativity is to find a predictor that has low performative risk, that is, good performance on its induced distribution. One family of solutions for optimizing the performative risk, including bandits and other derivative-free methods, is agnostic to any structure in the performative feedback, leading to exceedingly slow convergence rates. A complementary family of solutions makes use of explicit models for the feedback, such as best-response models in strategic classification, enabling significantly faster rates. However, these rates critically rely on the feedback model being well-specified. In this work we initiate a study of the use of possibly misspecified models in performative prediction. We study a general protocol for making use of models, called plug-in performative optimization, and prove bounds on its excess risk. We show that plug-in performative optimization can be far more efficient than model-agnostic strategies, as long as the misspecification is not too extreme. Altogether, our results support the hypothesis that models, even if misspecified, can indeed help with learning in performative settings.
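To make the plug-in idea concrete, the sketch below illustrates one possible instantiation of the protocol, not the paper's exact algorithm: the learner performs a few exploratory deployments, fits a simple (possibly misspecified) model of how the deployed predictor shifts the data distribution, and then minimizes the performative risk predicted by that fitted model. The Gaussian location-shift feedback, the squared loss, and the helper names (`deploy_and_sample`, `plugin_risk`) are hypothetical choices made only for this example.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# --- Hypothetical ground-truth performative feedback (unknown to the learner) ---
# Deploying a predictor theta shifts the outcome distribution: z ~ N(mu0 + a*theta, 1).
MU0_TRUE, A_TRUE = 1.0, 0.5

def deploy_and_sample(theta, n=200):
    """Deploy theta and observe samples from the induced distribution D(theta)."""
    return MU0_TRUE + A_TRUE * theta + rng.standard_normal(n)

def loss(z, theta):
    """Prediction loss of theta on outcome z (squared error, for illustration)."""
    return (z - theta) ** 2

# --- Step 1: exploratory deployments to fit a (possibly misspecified) feedback model ---
# Model class: D_hat_{mu,beta}(theta) = N(mu + beta * theta, 1), fit by least squares.
thetas_explored = np.array([-1.0, 0.0, 1.0, 2.0])
X_rows, Z_obs = [], []
for th in thetas_explored:
    z = deploy_and_sample(th)
    X_rows.append(np.column_stack([np.ones_like(z), np.full_like(z, th)]))
    Z_obs.append(z)
X, Z = np.vstack(X_rows), np.concatenate(Z_obs)
mu_hat, beta_hat = np.linalg.lstsq(X, Z, rcond=None)[0]

# --- Step 2: plug-in performative optimization ---
# Minimize the modeled performative risk E_{z ~ D_hat(theta)}[loss(z, theta)]
# via Monte Carlo under the fitted model, instead of deploying every candidate theta.
Z_MC = rng.standard_normal(5000)  # common random numbers keep the objective deterministic

def plugin_risk(theta):
    z_model = mu_hat + beta_hat * theta + Z_MC
    return loss(z_model, theta).mean()

theta_plugin = minimize_scalar(plugin_risk, bounds=(-5, 5), method="bounded").x

# Evaluate the true performative risk of the returned predictor by deploying it once more.
z_eval = deploy_and_sample(theta_plugin, n=100_000)
print(f"fitted feedback model: mu={mu_hat:.2f}, beta={beta_hat:.2f}")
print(f"plug-in solution theta={theta_plugin:.2f}, true performative risk={loss(z_eval, theta_plugin).mean():.3f}")
```

In this toy setup the linear feedback model happens to be well-specified; the paper's point is that the same plug-in recipe can still outperform model-agnostic (e.g., derivative-free) optimization even when the fitted model class does not contain the true distribution map, provided the misspecification is not too severe.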