Ergodic Mirror Descent

05/24/2011
by John C. Duchi, et al.

We generalize stochastic subgradient descent methods to situations in which we do not receive independent samples from the distribution over which we optimize, but instead receive samples that are coupled over time. We show that as long as the source of randomness is suitably ergodic---it converges quickly enough to a stationary distribution---the method enjoys strong convergence guarantees, both in expectation and with high probability. This result has implications for stochastic optimization in high-dimensional spaces, peer-to-peer distributed optimization schemes, decision problems with dependent data, and stochastic optimization problems over combinatorial spaces.
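To make the setting concrete, here is a minimal sketch (an illustrative construction, not an experiment from the paper): a one-dimensional stochastic optimization problem in which the samples come from an ergodic two-state Markov chain rather than i.i.d. draws, solved by mirror descent with the Euclidean mirror map (which reduces to projected gradient descent). The chain, loss, targets, and step sizes below are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: minimize the stationary-average loss E_pi[f(x; s)]
# where the state s_t follows an ergodic two-state Markov chain, so
# consecutive samples are coupled over time rather than independent.
c = np.array([0.0, 2.0])            # state-dependent targets, f(x; s) = 0.5*(x - c_s)**2
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])          # slowly mixing chain; stationary distribution (1/2, 1/2)

def grad(x, s):
    """Gradient in x of f(x; s) = 0.5 * (x - c[s])**2."""
    return x - c[s]

# Mirror descent with the Euclidean mirror map (i.e. projected gradient
# descent) on [0, 2], step sizes eta_t = 0.5 / sqrt(t), with iterate averaging.
x, s = 1.5, 0
avg, T = 0.0, 20000
for t in range(1, T + 1):
    x = np.clip(x - 0.5 / np.sqrt(t) * grad(x, s), 0.0, 2.0)
    avg += (x - avg) / t            # running average of the iterates
    s = rng.choice(2, p=P[s])       # next sample drawn from the Markov chain

# The stationary-average loss is minimized at x = 1 (midpoint of the targets);
# despite the dependent samples, the averaged iterate settles near it.
print(avg)
```

The point of the sketch is that the only change from the i.i.d. case is how `s` is generated; the ergodicity of the chain (here, fast enough mixing) is what lets the averaged iterate still concentrate near the minimizer of the stationary-average objective.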


Related research:

- 02/13/2023: Near-Optimal High-Probability Convergence for Non-Convex Stochastic Optimization with Variance Reduction
- 04/06/2013: Logical Stochastic Optimization
- 07/06/2021: Distributed stochastic optimization with large delays
- 08/29/2019: Stochastic Successive Convex Approximation for General Stochastic Optimization Problems
- 06/08/2019: Optimal Convergence for Stochastic Optimization with Multiple Expectation Constraints
- 02/10/2020: Stochastic Online Optimization using Kalman Recursion
- 12/14/2017: Stochastic Particle Gradient Descent for Infinite Ensembles
