Parallel Stochastic Asynchronous Coordinate Descent: Tight Bounds on the Possible Parallelism

11/13/2018
by Yun Kuen Cheung, et al.

Several works have shown that linear speedup is achieved by an asynchronous parallel implementation of stochastic coordinate descent, so long as there is not too much parallelism. More specifically, it is known that if all updates are of similar duration, then linear speedup is possible with up to Θ(√(n)/L_res) processors, where L_res is a suitable Lipschitz parameter. This paper shows that the bound is tight for essentially all possible values of L_res.
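As background, the asynchronous model the abstract refers to can be illustrated with a minimal Python sketch: each worker repeatedly picks a random coordinate, reads a possibly stale snapshot of the shared iterate, and applies a coordinate gradient step without any locking. The function name, the toy separable objective, and all parameters below are illustrative assumptions, not the paper's construction, and a real implementation would use atomic low-level updates rather than Python threads under the GIL.

```python
import random
import threading

def async_scd(grad_coord, x, n_iters, n_threads, step):
    """Toy asynchronous stochastic coordinate descent.

    grad_coord(snapshot, j) returns the j-th coordinate of the
    gradient evaluated at a (possibly stale) snapshot of x.
    Workers update the shared list x with no synchronization,
    which is the asynchrony the speedup bounds are about.
    """
    n = len(x)

    def worker():
        for _ in range(n_iters):
            j = random.randrange(n)
            snapshot = list(x)  # possibly inconsistent read of the iterate
            x[j] -= step * grad_coord(snapshot, j)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x

if __name__ == "__main__":
    # Separable quadratic f(x) = 0.5 * sum_j (x_j - j)^2, minimized at x_j = j.
    # A conservative step size keeps the iteration contracting even when
    # several stale updates to the same coordinate overlap.
    sol = async_scd(lambda v, j: v[j] - j, [0.0] * 8, 2000, 4, 0.2)
    print(max(abs(sol[j] - j) for j in range(8)))
```

With a small enough step size the stale reads only slow the per-coordinate contraction rather than break it, which is the intuition behind allowing more processors when the interaction between coordinates (measured by L_res) is weak.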


Related research

11/08/2018
(Near) Optimal Parallelism Bound for Fully Asynchronous Coordinate Descent with Linear Speedup
When solving massive optimization problems in areas such as machine lear...

08/15/2018
An Analysis of Asynchronous Stochastic Accelerated Coordinate Descent
Gradient descent, and coordinate descent in particular, are core tools i...

11/18/2019
SySCD: A System-Aware Parallel Coordinate Descent Algorithm
In this paper we propose a novel parallel stochastic coordinate descent ...

10/07/2013
Parallel coordinate descent for the Adaboost problem
We design a randomised parallel version of Adaboost based on previous st...

06/27/2018
Amortized Analysis of Asynchronous Price Dynamics
We extend a recently developed framework for analyzing asynchronous coor...

12/31/2020
Asynchronous Advantage Actor Critic: Non-asymptotic Analysis and Linear Speedup
Asynchronous and parallel implementation of standard reinforcement learn...

06/15/2016
A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning
We consider learning problems over training sets in which both, the numb...
