Asynchronous stochastic convex optimization

08/04/2015
by John C. Duchi, et al.

We show that asymptotically, completely asynchronous stochastic gradient procedures achieve optimal (even to constant factors) convergence rates for the solution of convex optimization problems, under nearly the same conditions required for asymptotic optimality of standard stochastic gradient procedures. Roughly, the noise inherent to the stochastic approximation scheme dominates any noise from asynchrony. We also give empirical evidence of the strong performance of asynchronous, parallel stochastic optimization schemes, showing that the robustness inherent to stochastic approximation problems allows substantially faster parallel and asynchronous solution methods.
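To make the setup concrete, the following is a minimal sketch, not the authors' implementation, of a lock-free asynchronous SGD loop: several threads update a shared parameter vector without synchronization, so each stochastic gradient may be computed from a slightly stale iterate. The least-squares objective, step size, thread count, and iteration counts are illustrative assumptions.

```python
import threading
import numpy as np

# Synthetic least-squares problem (illustrative assumption, not from the paper).
rng = np.random.default_rng(0)
d, n = 10, 5000
x_true = rng.normal(size=d)
A = rng.normal(size=(n, d))
b = A @ x_true + 0.1 * rng.normal(size=n)

x = np.zeros(d)      # shared iterate, read and written without any locking
step = 1e-3          # fixed step size, chosen for illustration only

def worker(seed, num_steps):
    local_rng = np.random.default_rng(seed)
    for _ in range(num_steps):
        i = local_rng.integers(n)        # sample one data point uniformly
        g = (A[i] @ x - b[i]) * A[i]     # stochastic gradient; x may be stale
        x[:] = x - step * g              # unsynchronized in-place update

threads = [threading.Thread(target=worker, args=(s, 20000)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("distance to x_true:", np.linalg.norm(x - x_true))
```

Because of CPython's global interpreter lock, these threads interleave rather than run truly in parallel, but the updates remain unsynchronized, which is enough to illustrate the stale-gradient effect the paper analyzes; practical deployments use shared-memory workers or runtimes without such a lock.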

Related research

- Randomized Smoothing for Stochastic Optimization (03/22/2011): We analyze convergence rates of stochastic optimization procedures for n...
- Accelerating Asynchronous Algorithms for Convex Optimization by Momentum Compensation (02/27/2018): Asynchronous algorithms have attracted much attention recently due to th...
- Stochastic Gradient Langevin with Delayed Gradients (06/12/2020): Stochastic Gradient Langevin Dynamics (SGLD) ensures strong guarantees w...
- Necessary and Sufficient Conditions for Adaptive, Mirror, and Standard Gradient Methods (09/23/2019): We study the impact of the constraint set and gradient geometry on the c...
- Asynch-SGBDT: Asynchronous Parallel Stochastic Gradient Boosting Decision Tree based on Parameters Server (04/12/2018): Gradient Boosting Decision Tree, i.e. GBDT, becomes one of the most impo...
- Stochastic approximation with decision-dependent distributions: asymptotic normality and optimality (07/09/2022): We analyze a stochastic approximation algorithm for decision-dependent p...
