(Near) Optimal Parallelism Bound for Fully Asynchronous Coordinate Descent with Linear Speedup

11/08/2018
by Yun Kuen Cheung, et al.

When solving massive optimization problems in areas such as machine learning, it is common practice to seek speedup via massive parallelism. However, especially in an asynchronous environment, there are limits on the possible parallelism. Accordingly, we seek tight bounds on the viable parallelism in asynchronous implementations of coordinate descent. We focus on asynchronous coordinate descent (ACD) algorithms for convex functions F: R^n → R of the form F(x) = f(x) + ∑_{k=1}^n Ψ_k(x_k), where f: R^n → R is a smooth convex function and each Ψ_k: R → R is a univariate, possibly non-smooth, convex function. Our approach is to quantify the shortfall in progress relative to standard sequential stochastic gradient descent. This yields a truly simple yet optimal analysis of standard stochastic ACD in a partially asynchronous environment, which already generalizes and improves on the bounds in prior work. We also give a considerably more involved analysis for general asynchronous environments, in which the only constraint is that each update can overlap with at most q others, where q is at most the number of processors times the ratio of the lengths of the longest and shortest updates. The main technical challenge is to demonstrate linear speedup in the latter environment; the difficulty stems from the subtle interplay of asynchrony and randomization. Our result improves Liu and Wright's (SIOPT '15) lower bound on the maximum degree of parallelism almost quadratically, and we show that our new bound is almost optimal.
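To make the problem setup concrete, below is a minimal Python sketch of lock-free asynchronous stochastic coordinate descent on a Lasso instance, where f(x) = ½‖Ax − b‖² is the smooth part and Ψ_k(x_k) = λ|x_k| is the non-smooth univariate part. The names (`async_cd`, `soft_threshold`), the choice of objective, and the thread-based setup are illustrative assumptions, not the authors' implementation; because of Python's GIL this only emulates the asynchronous update pattern (stale reads, unsynchronized coordinate writes), not true parallel speedup.

```python
import threading
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*|.|; handles the non-smooth term Psi_k(x_k) = lam*|x_k|."""
    return np.sign(v) * max(abs(v) - t, 0.0)

def async_cd(A, b, lam, n_workers=4, iters_per_worker=5000, seed=0):
    """Illustrative lock-free asynchronous stochastic coordinate descent for
    F(x) = 0.5*||Ax - b||^2 + lam*||x||_1.

    Each worker repeatedly picks a random coordinate k, reads the shared
    iterate x (possibly stale), computes the partial gradient grad_k f(x),
    and applies a proximal coordinate update in place without any locking.
    """
    m, n = A.shape
    x = np.zeros(n)                      # shared iterate, updated without locks
    col_sq = (A ** 2).sum(axis=0)        # coordinate-wise Lipschitz constants ||A_k||^2
    seeds = np.random.default_rng(seed).integers(0, 2**31, size=n_workers)

    def worker(s):
        rng = np.random.default_rng(s)
        for _ in range(iters_per_worker):
            k = rng.integers(n)                  # sample a coordinate uniformly at random
            grad_k = A[:, k] @ (A @ x - b)       # partial gradient at a (possibly stale) read of x
            gamma = 1.0 / col_sq[k]              # step size 1/L_k
            x[k] = soft_threshold(x[k] - gamma * grad_k, gamma * lam)

    threads = [threading.Thread(target=worker, args=(s,)) for s in seeds]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    x_true = np.zeros(50); x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    x_hat = async_cd(A, b, lam=0.1)
    print("objective:", 0.5 * np.sum((A @ x_hat - b) ** 2) + 0.1 * np.abs(x_hat).sum())
```

In this sketch the parameter q from the abstract corresponds to how many other coordinate updates can be in flight between a worker's read of x and its write to x[k]; the paper's question is how large that overlap can be while still retaining linear speedup.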


