Tracking Performance of Online Stochastic Learners

by Stefan Vlaski et al.

Online stochastic algorithms are popular in large-scale learning settings because they compute updates on the fly, without the need to store and process data in large batches. When a constant step-size is used, these algorithms can also adapt to drifts in problem parameters, such as data or model properties, and track the optimal solution with reasonable accuracy. Building on analogies with the study of adaptive filters, we establish a link between the steady-state performance derived under stationarity assumptions and the tracking performance of online learners under random-walk models. This link allows us to infer the tracking performance from steady-state expressions directly and almost by inspection.
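The setting described in the abstract can be illustrated with a minimal simulation sketch (not the paper's analysis): a constant step-size LMS learner tracking a linear model whose optimal parameter vector drifts according to a random walk. All parameter values below (dimension, step-size, noise levels) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5          # parameter dimension (illustrative)
mu = 0.05      # constant step-size
sigma_v = 0.1  # observation-noise standard deviation
sigma_q = 0.01 # per-entry random-walk drift standard deviation
T = 20000      # number of streaming samples

w_star = rng.standard_normal(d)  # drifting optimum w*_t
w = np.zeros(d)                  # learner iterate
msd = []                         # squared deviation ||w_t - w*_t||^2

for t in range(T):
    # optimum follows a random-walk model: w*_t = w*_{t-1} + q_t
    w_star = w_star + sigma_q * rng.standard_normal(d)
    # one streaming observation: y_t = x_t^T w*_t + v_t
    x = rng.standard_normal(d)
    y = x @ w_star + sigma_v * rng.standard_normal()
    # constant step-size LMS update (online stochastic gradient)
    w = w + mu * x * (y - x @ w)
    msd.append(np.sum((w - w_star) ** 2))

# average over the second half as a steady-state tracking estimate
steady = float(np.mean(msd[T // 2:]))
print(f"steady-state tracking MSD ~ {steady:.4f}")
```

Because the step-size is constant rather than decaying, the learner does not freeze at a stale estimate: the tracking error settles at a small, bounded steady-state level that balances gradient noise against the drift, which is the regime the paper's steady-state-to-tracking link concerns.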


