Improving the Sample and Communication Complexity for Decentralized Non-Convex Optimization: A Joint Gradient Estimation and Tracking Approach

10/13/2019
by Haoran Sun, et al.

Many modern large-scale machine learning problems benefit from decentralized and stochastic optimization. Recent works have shown that utilizing both decentralized computing and local stochastic gradient estimates can outperform state-of-the-art centralized algorithms in applications involving highly non-convex problems, such as training deep neural networks. In this work, we propose a decentralized stochastic algorithm for a class of smooth non-convex problems in which the system has m nodes and each node holds a large number of local samples (denoted by n). Unlike the majority of existing decentralized learning algorithms for either stochastic or finite-sum problems, our focus is on simultaneously reducing the total number of communication rounds among the nodes and the number of local data samples accessed. In particular, we propose an algorithm named D-GET (decentralized gradient estimation and tracking), which jointly performs decentralized gradient estimation (estimating the local gradient using a subset of local samples) and gradient tracking (tracking the global full gradient using local estimates). We show that, to reach an ϵ-stationary solution of the deterministic finite-sum problem, the proposed algorithm achieves an O(m n^{1/2} ϵ^{-1}) sample complexity and an O(ϵ^{-1}) communication complexity; the sample complexity significantly improves upon the best existing bound of O(m n ϵ^{-1}), while the communication complexity matches the best existing bound of O(ϵ^{-1}). Similarly, for online problems, the proposed method achieves an O(m ϵ^{-3/2}) sample complexity and an O(ϵ^{-1}) communication complexity, whereas the best existing bounds are O(m ϵ^{-2}) and O(ϵ^{-2}), respectively.
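To make the two ingredients named in the abstract concrete, below is a minimal NumPy sketch of one plausible instantiation: each node keeps a variance-reduced local gradient estimate (refreshed from a small minibatch, with a periodic full pass over its n samples) and a gradient-tracking variable that is mixed with its neighbors so that it follows the network-wide average gradient. The toy problem, ring topology, mixing matrix, step size, and refresh period are illustrative assumptions for exposition only; this is not the paper's exact D-GET recursion or parameter setting.

```python
# Minimal sketch of joint gradient estimation + gradient tracking on a ring of m nodes.
# Problem, step size, batch size, and mixing matrix are illustrative choices, not the
# paper's exact configuration.
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 8, 200, 10             # nodes, samples per node, problem dimension
alpha, q, batch = 0.05, 20, 10   # step size, full-gradient period, minibatch size

# Each node holds its own data (A[i], b[i]); loss phi(r) = r^2 / (1 + r^2) is smooth and non-convex.
A = rng.normal(size=(m, n, d))
b = rng.normal(size=(m, n))

def local_grad(i, x, idx):
    """Stochastic gradient of node i's local loss on the sample subset idx."""
    r = A[i, idx] @ x - b[i, idx]
    w = 2 * r / (1 + r ** 2) ** 2            # phi'(r)
    return A[i, idx].T @ w / len(idx)

# Doubly stochastic mixing matrix for a ring topology (each node averages with its two neighbors).
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 0.5
    W[i, (i - 1) % m] = W[i, (i + 1) % m] = 0.25

x = np.tile(rng.normal(size=d), (m, 1))      # local iterates x_i, identical start
v = np.stack([local_grad(i, x[i], np.arange(n)) for i in range(m)])  # local gradient estimates
y = v.copy()                                 # tracking variables: follow the network-average gradient

for t in range(1, 201):
    # (1) Communication + descent: mix iterates with neighbors, step along the tracked gradient.
    x_new = W @ x - alpha * y
    # (2) Local gradient estimation: periodic full gradient, otherwise a recursive
    #     variance-reduced update from a small minibatch.
    v_new = np.empty_like(v)
    for i in range(m):
        if t % q == 0:
            v_new[i] = local_grad(i, x_new[i], np.arange(n))
        else:
            idx = rng.choice(n, size=batch, replace=False)
            v_new[i] = v[i] + local_grad(i, x_new[i], idx) - local_grad(i, x[i], idx)
    # (3) Gradient tracking: mix y with neighbors and correct by the change in local estimates.
    y = W @ y + v_new - v
    x, v = x_new, v_new

print("stationarity proxy ||avg tracked grad|| =", np.linalg.norm(y.mean(axis=0)))
```

The design choice being illustrated is the split that the abstract describes: the minibatch recursion in step (2) controls how many local samples are touched per iteration, while the mixing steps (1) and (3) are the only operations that require communication, so sample and communication costs can be tuned separately.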


