Conditional Accelerated Lazy Stochastic Gradient Descent

03/16/2017
by Guanghui Lan, et al.

In this work we introduce a conditional accelerated lazy stochastic gradient descent algorithm with an optimal number of calls to a stochastic first-order oracle and convergence rate O(1/ε^2), improving over the projection-free, Online Frank-Wolfe-based stochastic gradient descent of Hazan and Kale [2012], which converges at rate O(1/ε^4).
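For context, the method is projection-free: as in Frank-Wolfe, each iteration calls a linear minimization oracle over the feasible set instead of computing a projection. The sketch below is a minimal, generic stochastic Frank-Wolfe (conditional gradient) loop over the probability simplex; the feasible set, objective, and step-size schedule are assumptions chosen for illustration, and this is not the authors' CALSGD algorithm, which additionally employs acceleration and lazy (approximate) linear minimization.

```python
# Minimal sketch of a generic stochastic Frank-Wolfe (conditional gradient) loop.
# Illustrative only: the feasible set (probability simplex), the objective, and the
# step-size schedule are assumptions, not the CALSGD method of Lan et al.
import numpy as np

def stochastic_gradient(x, rng, noise=0.1):
    # Hypothetical stochastic first-order oracle: gradient of a quadratic plus noise.
    target = np.full_like(x, 1.0 / x.size)
    return (x - target) + noise * rng.standard_normal(x.size)

def linear_minimization_simplex(g):
    # Linear minimization oracle over the simplex: the vertex minimizing <g, v>.
    v = np.zeros_like(g)
    v[np.argmin(g)] = 1.0
    return v

def stochastic_frank_wolfe(dim=10, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    x = np.full(dim, 1.0 / dim)              # start at the simplex barycenter
    for t in range(1, iters + 1):
        g = stochastic_gradient(x, rng)      # one call to the stochastic oracle
        v = linear_minimization_simplex(g)   # projection-free: solve a linear subproblem
        gamma = 2.0 / (t + 2)                # classic Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * v      # convex combination stays feasible
    return x

if __name__ == "__main__":
    print(stochastic_frank_wolfe())
```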


Related research

A Sharp Convergence Rate for the Asynchronous Stochastic Gradient Descent (01/24/2020)
We give a sharp convergence rate for the asynchronous stochastic gradien...

Stochastic gradient descent algorithms for strongly convex functions at O(1/T) convergence rates (05/09/2013)
With a weighting scheme proportional to t, a traditional stochastic grad...

Asynchronous decentralized accelerated stochastic gradient descent (09/24/2018)
In this work, we introduce an asynchronous decentralized accelerated sto...

Optimal Adaptive and Accelerated Stochastic Gradient Descent (10/01/2018)
Stochastic gradient descent (SGD) methods are the most powerful optimiza...

Asymptotically efficient one-step stochastic gradient descent (06/09/2023)
A generic, fast and asymptotically efficient method for parametric estim...

Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime (06/22/2020)
We analyze the convergence of the averaged stochastic gradient descent f...

Stochastic Gradient Coding for Straggler Mitigation in Distributed Learning (05/14/2019)
We consider distributed gradient descent in the presence of stragglers. ...
