Performative Prediction with Neural Networks

04/14/2023
by Mehrnaz Mofakhami, et al.

Performative prediction is a framework for learning models that influence the data they intend to predict. We focus on finding classifiers that are performatively stable, i.e., optimal for the data distribution they induce. Standard convergence results for finding a performatively stable classifier with the method of repeated risk minimization assume that the data distribution is Lipschitz continuous with respect to the model's parameters. Under this assumption, the loss must be strongly convex and smooth in these parameters; otherwise, the method will diverge for some problems. In this work, we instead assume that the data distribution is Lipschitz continuous with respect to the model's predictions, a more natural assumption for performative systems. As a result, we are able to significantly relax the assumptions on the loss function. In particular, we do not need to assume convexity with respect to the model's parameters. As an illustration, we introduce a resampling procedure that models realistic distribution shifts and show that it satisfies our assumptions. We support our theory by showing that one can learn performatively stable classifiers with neural networks making predictions about real data that shift according to our proposed procedure.
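The repeated risk minimization procedure referenced in the abstract can be sketched as a simple loop: deploy the current model, sample data from the distribution it induces, and retrain to optimality on that data. The sketch below is a hypothetical one-dimensional toy setup (the shift mechanism, learning rate, and mixing coefficient `eps` are illustrative assumptions, not the paper's actual resampling procedure); note that the induced distribution here depends on the model only through its *predictions*, mirroring the paper's key assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def induced_data(theta, n=2000, eps=0.3):
    """Hypothetical performative distribution: labels drift toward the
    deployed model's predictions, so the shift is a function of the
    model's outputs rather than its raw parameters."""
    X = rng.normal(size=n)
    base = (X > 0).astype(float)                 # "true" labels before the shift
    p_pred = 1.0 / (1.0 + np.exp(-theta * X))    # deployed model's predicted P(y=1)
    flip = rng.random(n) < eps * np.abs(p_pred - base)
    y = np.where(flip, 1.0 - base, base)
    return X, y

def risk_minimize(X, y, steps=200, lr=0.5):
    """Minimize logistic loss over a scalar parameter by gradient descent."""
    theta = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-theta * X))
        theta -= lr * np.mean((p - y) * X)       # gradient of the logistic loss
    return theta

# Repeated risk minimization: retrain on the distribution the last model induced.
theta = 0.0
for t in range(10):
    X, y = induced_data(theta)
    theta = risk_minimize(X, y)
```

A performatively stable point is a fixed point of this loop: a classifier that is optimal for the very distribution its own deployment creates.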


