A near-optimal stochastic gradient method for decentralized non-convex finite-sum optimization

08/17/2020
by Ran Xin et al.

This paper describes a near-optimal stochastic first-order method for decentralized finite-sum minimization of smooth non-convex functions. Specifically, we propose GT-SARAH, which employs local SARAH-type variance reduction and global gradient tracking to address the stochastic and decentralized nature of the problem. For a total of N cost functions, equally divided over a directed network of n nodes, we show that GT-SARAH finds an ϵ-accurate first-order stationary point in 𝒪(N^1/2 ϵ^-1) gradient computations across all nodes, independent of the network topology, when n ≤ 𝒪(N^1/2 (1-λ)^3), where (1-λ) is the spectral gap of the network weight matrix. In this regime, GT-SARAH is thus, to the best of our knowledge, the first decentralized method that matches the algorithmic lower bound for this class of problems. Moreover, GT-SARAH achieves a non-asymptotic linear speedup: the total number of gradient computations at each node is reduced by a factor of 1/n compared with near-optimal algorithms for this problem class that process all data at a single node. We also establish the convergence rate of GT-SARAH in other regimes, characterized by the relative sizes of the number of nodes n, the total number of functions N, and the network spectral gap (1-λ). Over an infinite time horizon, we further establish the almost sure and mean-squared convergence of GT-SARAH to a first-order stationary point.
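For intuition, the following is a schematic of how the two named ingredients typically combine, written per node i in standard gradient-tracking notation. The doubly stochastic mixing weights w_ij, the constant step size α, the uniformly sampled local component index τ_i^t, and the exact loop structure are assumptions for illustration, not the paper's pseudocode; as in SARAH, v_i is periodically reset to the full local gradient at the start of each inner loop.

```latex
% Schematic per-node recursions (assumed form; indexing may differ from GT-SARAH's pseudocode).
% x_i^t : local iterate          v_i^t : local SARAH-type gradient estimator
% y_i^t : gradient tracker       w_{ij}: doubly stochastic mixing weights, \alpha: step size
% \tau_i^t : local component index sampled uniformly at random at node i
% Reset step (start of each inner loop): v_i^t = \nabla f_i(x_i^t), the full local gradient.
\begin{align*}
  v_i^{t}   &= \nabla f_{i,\tau_i^{t}}\big(x_i^{t}\big)
             - \nabla f_{i,\tau_i^{t}}\big(x_i^{t-1}\big) + v_i^{t-1}
             && \text{(local SARAH-type variance reduction)} \\
  y_i^{t}   &= \textstyle\sum_{j=1}^{n} w_{ij}\, y_j^{t-1} + v_i^{t} - v_i^{t-1}
             && \text{(global gradient tracking)} \\
  x_i^{t+1} &= \textstyle\sum_{j=1}^{n} w_{ij}\, x_j^{t} - \alpha\, y_i^{t}
             && \text{(consensus step plus descent)}
\end{align*}
```

The mixing sums propagate information across neighboring nodes, while the tracker y_i^t follows the network-wide average of the local variance-reduced estimators; this coupling is what allows the gradient-computation complexity to become independent of the topology in the regime n ≤ 𝒪(N^1/2 (1-λ)^3) stated above.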


Related research

- A fast randomized incremental gradient method for decentralized non-convex optimization (11/07/2020)
- Variance-Reduced Decentralized Stochastic Optimization with Gradient Tracking – Part II: GT-SVRG (10/08/2019)
- A hybrid variance-reduced method for decentralized stochastic non-convex optimization (02/12/2021)
- An improved convergence analysis for decentralized online stochastic non-convex optimization (08/10/2020)
- Improving the Sample and Communication Complexity for Decentralized Non-Convex Optimization: A Joint Gradient Estimation and Tracking Approach (10/13/2019)
- SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator (07/04/2018)
- Orthogonal Directions Constrained Gradient Method: from non-linear equality constraints to Stiefel manifold (03/16/2023)
