An Accelerated Decentralized Stochastic Proximal Algorithm for Finite Sums

05/27/2019
by Hadrien Hendrikx, et al.

Modern large-scale finite-sum optimization relies on two key aspects: distribution and stochastic updates. For smooth and strongly convex problems, existing decentralized algorithms are slower than modern accelerated variance-reduced stochastic algorithms when run on a single machine, and are therefore not efficient. Centralized algorithms are fast, but their scaling is limited by global aggregation steps that result in communication bottlenecks. In this work, we propose an efficient Accelerated Decentralized stochastic algorithm for Finite Sums named ADFS, which uses local stochastic proximal updates and randomized pairwise communications between nodes. On n machines, ADFS learns from nm samples in the same time it takes optimal algorithms to learn from m samples on one machine. This scaling holds until a critical network size is reached, which depends on communication delays, on the number of samples m, and on the network topology. We provide a theoretical analysis based on a novel augmented graph approach combined with a precise evaluation of synchronization times and an extension of the accelerated proximal coordinate gradient algorithm to arbitrary sampling. We illustrate the improvement of ADFS over state-of-the-art decentralized approaches with experiments.
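
The sketch below is a minimal, hypothetical illustration of the two building blocks named in the abstract: local stochastic proximal updates on randomly chosen samples, and randomized pairwise communications between neighboring nodes. It is not ADFS itself; the acceleration, the augmented-graph formulation, and the sampling probabilities analyzed in the paper are omitted, and all names and parameters here (n_nodes, m_samples, reg, step, prox_l2, the ring topology) are assumptions made purely for the example.

# Illustrative sketch only: NOT the actual ADFS update rules.
# Shows the two primitives the abstract describes, on a toy
# decentralized least-squares problem with L2 regularization.
import numpy as np

rng = np.random.default_rng(0)

n_nodes, m_samples, dim = 4, 50, 10   # assumed toy sizes
reg = 0.1                             # assumed L2 regularization strength
step = 0.05                           # assumed (non-accelerated) step size

# Each node i holds its own local samples (A[i], b[i]).
A = [rng.standard_normal((m_samples, dim)) for _ in range(n_nodes)]
b = [rng.standard_normal(m_samples) for _ in range(n_nodes)]
x = [np.zeros(dim) for _ in range(n_nodes)]   # local parameter copies

# Assumed ring topology: communication edges between neighboring nodes.
edges = [(i, (i + 1) % n_nodes) for i in range(n_nodes)]

def prox_l2(v, lam):
    # Proximal operator of (lam/2)*||x||^2, a simple shrinkage.
    return v / (1.0 + lam)

for it in range(2000):
    if rng.random() < 0.5:
        # Local stochastic proximal update: one node, one random sample.
        i = rng.integers(n_nodes)
        j = rng.integers(m_samples)
        grad_j = (A[i][j] @ x[i] - b[i][j]) * A[i][j]   # stochastic gradient
        x[i] = prox_l2(x[i] - step * grad_j, step * reg)
    else:
        # Randomized pairwise communication: one edge averages its endpoints.
        i, k = edges[rng.integers(len(edges))]
        avg = 0.5 * (x[i] + x[k])
        x[i], x[k] = avg, avg.copy()

In the paper's algorithm these two operation types are not selected by a uniform coin flip; their relative frequencies and step sizes follow the arbitrary-sampling analysis of the accelerated proximal coordinate gradient method mentioned in the abstract. The coin flip above only illustrates that computation and communication steps interleave randomly.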


