Stochastic approximation with decision-dependent distributions: asymptotic normality and optimality

07/09/2022
by Joshua Cutler, et al.

We analyze a stochastic approximation algorithm for decision-dependent problems, wherein the data distribution used by the algorithm evolves along the iterate sequence. The primary examples of such problems appear in performative prediction and its multiplayer extensions. We show that under mild assumptions, the deviation between the average iterate of the algorithm and the solution is asymptotically normal, with a covariance that nicely decouples the effects of the gradient noise and the distributional shift. Moreover, building on the work of Hájek and Le Cam, we show that the asymptotic performance of the algorithm is locally minimax optimal.
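As a gloss on the algorithm (not code from the paper), the following Python sketch runs the averaged stochastic gradient method on a toy performative least-squares problem. The distribution map D(x), the quadratic loss, the step-size schedule, and all constants below are illustrative assumptions, not taken from the paper.

# Minimal sketch: stochastic approximation with a decision-dependent
# distribution, on an assumed toy performative least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
d = 5          # dimension of the decision variable (assumed)
eps = 0.1      # strength of the distributional shift (assumed)
T = 100_000    # number of iterations

def sample(x):
    """Draw one data point from D(x): a location-scale family whose mean
    shifts linearly with the current decision x (an assumed toy model)."""
    return eps * x + rng.normal(size=d)

def grad(x, z):
    """Stochastic gradient of the loss f(x, z) = 0.5 * ||x - z||^2."""
    return x - z

x = np.zeros(d)
x_bar = np.zeros(d)  # running (Polyak-Ruppert) average of the iterates
for t in range(1, T + 1):
    z = sample(x)                 # data drawn from the decision-dependent distribution
    x = x - grad(x, z) / t**0.75  # polynomial step size t^(-3/4), decaying slower than 1/t
    x_bar += (x - x_bar) / t      # incremental average of the iterates

# For this toy model the expected gradient under D(x) is (1 - eps) * x,
# so the solution is x* = 0; x_bar should concentrate around it.
print("averaged iterate:", x_bar[:3], " (target x* = 0)")

The averaged iterate x_bar is the object of the paper's central limit theorem: for this toy model, sqrt(T) times the deviation of x_bar from the solution x* = 0 is approximately normal.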

Related research

01/16/2023
Asymptotic normality and optimality in nonsmooth stochastic approximation
In their seminal work, Polyak and Juditsky showed that stochastic approx...

12/28/2019
A Note on the Asymptotic Optimality of Work-Conserving Disciplines in Completion Time Minimization
In this paper, we prove that under mild stochastic assumptions, work-con...

03/16/2020
Is Temporal Difference Learning Optimal? An Instance-Dependent Analysis
We address the problem of policy evaluation in discounted Markov decisio...

05/04/2021
On the stability of the stochastic gradient Langevin algorithm with dependent data stream
We prove, under mild conditions, that the stochastic gradient Langevin d...

08/04/2015
Asynchronous stochastic convex optimization
We show that asymptotically, completely asynchronous stochastic gradient...

01/07/2022
Stochastic Saddle Point Problems with Decision-Dependent Distributions
This paper focuses on stochastic saddle point problems with decision-dep...

12/20/2018
Generalization error for decision problems
In this entry we review the generalization error for classification and ...
