Predictor-corrector algorithms for stochastic optimization under gradual distribution shift

05/26/2022
by Subha Maity, et al.

Time-varying stochastic optimization problems frequently arise in machine learning practice (e.g. gradual domain shift, object tracking, strategic classification). Although most such problems are solved in discrete time, the underlying process is often continuous in nature. We exploit this underlying continuity by developing predictor-corrector algorithms for time-varying stochastic optimization problems. We provide error bounds for the iterates under both exact and noisy access to the relevant derivatives of the loss function. Furthermore, we show, both theoretically and empirically on several examples, that our method outperforms non-predictor-corrector methods that do not exploit the underlying continuous-time process.
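To make the idea concrete, below is a minimal sketch of a generic predictor-corrector update for a smoothly drifting loss f(x, t), in the spirit the abstract describes; it is not the paper's exact algorithm. The predictor extrapolates the moving minimizer via the implicit-function-theorem direction, and the corrector runs a few gradient steps on the new loss. The oracle names (`grad`, `hess`, `cross_grad`) and the function `predictor_corrector_step` are illustrative assumptions, not interfaces from the paper.

```python
import numpy as np

def predictor_corrector_step(x, t, h, grad, hess, cross_grad, lr=0.1, n_corr=5):
    """One predictor-corrector step for min_x f(x, t) as t advances by h.

    Hypothetical oracles (illustrative names, not from the paper):
      grad(x, t)       -> gradient  grad_x f(x, t)
      hess(x, t)       -> Hessian   grad_xx f(x, t), a (d, d) matrix
      cross_grad(x, t) -> mixed derivative grad_tx f(x, t), a (d,) vector
    """
    # Predictor: differentiating the optimality condition
    # grad_x f(x*(t), t) = 0 in t gives the drift of the minimizer,
    #   dx*/dt = -[grad_xx f]^{-1} grad_tx f,
    # so take one Euler step of size h along that direction.
    x_pred = x - h * np.linalg.solve(hess(x, t), cross_grad(x, t))

    # Corrector: a few gradient steps on the new loss f(., t + h)
    # pull the extrapolated point back toward the exact minimizer.
    x_new = x_pred
    for _ in range(n_corr):
        x_new = x_new - lr * grad(x_new, t + h)
    return x_new

# Toy usage: f(x, t) = 0.5 * ||x - m(t)||^2 with a drifting minimizer m(t).
m = lambda t: np.array([np.sin(t), np.cos(t)])
grad = lambda x, t: x - m(t)
hess = lambda x, t: np.eye(2)
cross_grad = lambda x, t: -np.array([np.cos(t), -np.sin(t)])  # d/dt of grad_x f

x, t, h = np.zeros(2), 0.0, 0.1
for _ in range(50):
    x = predictor_corrector_step(x, t, h, grad, hess, cross_grad)
    t += h
```

A plain corrector-only method would start each corrector phase from the stale iterate; the predictor step is what exploits the continuity of the drift, which is the contrast the abstract draws with non-predictor-corrector baselines.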


Related research

10/05/2018 · Continuous-time Models for Stochastic Optimization Algorithms
We propose a new continuous-time formulation for first-order stochastic ...

06/26/2015 · ASOC: An Adaptive Parameter-free Stochastic Optimization Technique for Continuous Variables
Stochastic optimization is an important task in many optimization proble...

06/22/2022 · Projection-free Constrained Stochastic Nonconvex Optimization with State-dependent Markov Data
We study a projection-free conditional gradient-type algorithm for const...

11/01/2022 · Optimal Complexity in Non-Convex Decentralized Learning over Time-Varying Networks
Decentralized optimization with time-varying networks is an emerging par...

06/12/2020 · Stochastic Optimization for Performative Prediction
In performative prediction, the choice of a model influences the distrib...

04/14/2022 · Towards URLLC with Proactive HARQ Adaptation
In this work, we propose a dynamic decision maker algorithm to improve t...
