Push-SAGA: A decentralized stochastic algorithm with variance reduction over directed graphs

08/13/2020
by Muhammad I. Qureshi, et al.

In this paper, we propose Push-SAGA, a decentralized stochastic first-order method for finite-sum minimization over a directed network of nodes. Push-SAGA combines node-level variance reduction to remove the uncertainty caused by stochastic gradients, network-level gradient tracking to address the distributed nature of the data, and push-sum consensus to tackle the challenge of directed communication links. We show that Push-SAGA achieves linear convergence to the exact solution for smooth and strongly convex problems and is thus the first linearly convergent stochastic algorithm over arbitrary strongly connected directed graphs. We also characterize the regimes in which Push-SAGA achieves a linear speed-up over its centralized counterpart and a network-independent convergence rate. We illustrate the behavior and convergence properties of Push-SAGA through numerical experiments on strongly convex and non-convex problems.
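The abstract describes Push-SAGA as the composition of three ingredients: a SAGA-style gradient table at each node, gradient tracking across the network, and push-sum mixing with column-stochastic weights to handle directed links. The sketch below is a minimal NumPy simulation of one plausible form of that iteration on a synthetic least-squares problem; it is not the authors' reference implementation, and the directed-ring topology, weight matrix, step size, iteration count, and variable names (x, y, z, w, g) are illustrative assumptions rather than the paper's exact notation.

```python
# Hedged sketch of a Push-SAGA-style iteration (illustrative, not the authors' code).
# Assumptions: n nodes on a directed ring, column-stochastic weights B,
# each node holds m local least-squares components, fixed hand-picked step size.
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 5, 20, 10          # nodes, samples per node, dimension
alpha = 0.01                 # step size (assumed, not tuned as in the paper)

# Synthetic strongly convex problem: f_{i,j}(x) = 0.5 * (a_{ij}^T x - b_{ij})^2
A = rng.normal(size=(n, m, p))
x_true = rng.normal(size=p)
b = A @ x_true + 0.01 * rng.normal(size=(n, m))

def grad(i, j, x):
    """Gradient of one local component at node i."""
    a = A[i, j]
    return (a @ x - b[i, j]) * a

# Column-stochastic weights for a directed ring: node i pushes to itself and i+1.
B = np.zeros((n, n))
for i in range(n):
    B[i, i] = 0.5
    B[(i + 1) % n, i] = 0.5

x = np.zeros((n, p))          # push-sum numerators
y = np.ones(n)                # push-sum weights (Perron-vector estimates)
z = x.copy()                  # de-biased estimates z_i = x_i / y_i
table = np.stack([[grad(i, j, z[i]) for j in range(m)] for i in range(n)])
g_prev = table.mean(axis=1)   # initial SAGA gradient estimates
w = g_prev.copy()             # gradient trackers

for k in range(3000):
    x = B @ (x - alpha * w)   # push-sum mixing of the local descent steps
    y = B @ y
    z = x / y[:, None]        # remove the bias of column-stochastic mixing
    g = np.empty_like(g_prev)
    for i in range(n):        # node-level SAGA variance reduction
        j = rng.integers(m)
        new = grad(i, j, z[i])
        g[i] = new - table[i, j] + table[i].mean(axis=0)
        table[i, j] = new
    w = B @ w + g - g_prev    # network-level gradient tracking
    g_prev = g

# Compare against the centralized least-squares minimizer.
x_star = np.linalg.lstsq(A.reshape(n * m, p), b.reshape(n * m), rcond=None)[0]
print("max node error:", np.abs(z - x_star).max())
```

With doubly stochastic weights on an undirected graph this update would reduce to a GT-SAGA-style iteration; the auxiliary vector y and the division z = x / y are what compensate for the one-sided (column-stochastic) mixing imposed by directed links.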

Related research

02/07/2022 · Variance reduced stochastic optimization over directed graphs with row and column stochastic weights
This paper proposes AB-SAGA, a first-order distributed stochastic optimi...

02/23/2020 · Quantized Push-sum for Gossip and Decentralized Optimization over Directed Graphs
We consider a decentralized stochastic learning problem where data point...

05/15/2020 · S-ADDOPT: Decentralized stochastic first-order optimization over directed graphs
In this report, we study decentralized stochastic optimization to minimi...

02/17/2023 · On the convergence result of the gradient-push algorithm on directed graphs with constant stepsize
Gradient-push algorithm has been widely used for decentralized optimizat...

02/13/2020 · Gradient tracking and variance reduction for decentralized optimization and machine learning
Decentralized methods to solve finite-sum minimization problems are impo...

11/07/2020 · A fast randomized incremental gradient method for decentralized non-convex optimization
We study decentralized non-convex finite-sum minimization problems descr...

02/11/2022 · Distributed saddle point problems for strongly concave-convex functions
In this paper, we propose GT-GDA, a distributed optimization method to s...
