A hybrid variance-reduced method for decentralized stochastic non-convex optimization

02/12/2021
by Ran Xin, et al.

This paper considers decentralized stochastic optimization over a network of n nodes, where each node possesses a smooth non-convex local cost function and the goal of the networked nodes is to find an ϵ-accurate first-order stationary point of the sum of the local costs. We focus on an online setting, where each node accesses its local cost only by means of a stochastic first-order oracle that returns a noisy version of the exact gradient. In this context, we propose GT-HSGD, a novel single-loop decentralized hybrid variance-reduced stochastic gradient method that outperforms existing approaches in terms of both oracle complexity and practical implementation. The algorithm implements specialized local hybrid stochastic gradient estimators that are fused over the network to track the global gradient. Remarkably, GT-HSGD achieves a network-independent oracle complexity of O(n^{-1}ϵ^{-3}) when the required error tolerance ϵ is small enough, leading to a linear speedup over centralized optimal online variance-reduced approaches that operate on a single node. Numerical experiments are provided to illustrate our main technical results.
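To make the abstract's description concrete, the sketch below shows how a hybrid (STORM-style) variance-reduced local estimator can be fused with gradient tracking over a mixing matrix. This is a minimal illustration under stated assumptions, not the authors' reference implementation: the quadratic local costs, the ring mixing matrix W, and the parameter values alpha (step size) and beta (estimator weight) are all hypothetical choices made for the demo, and the exact update ordering in GT-HSGD may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, m = 4, 10, 20              # nodes, dimension, local samples per node (assumed)
T, alpha, beta = 500, 0.05, 0.2  # iterations, step size, estimator weight (assumed)

# Illustrative smooth local costs: f_i(x) = (1/2m) * sum_k (a_ik @ x - b_ik)^2
A = rng.normal(size=(n, m, d))
b = rng.normal(size=(n, m))

def stoch_grad(i, k, x):
    """Stochastic first-order oracle: gradient of node i's k-th sample loss at x."""
    return A[i, k] * (A[i, k] @ x - b[i, k])

# Doubly stochastic mixing matrix for an undirected ring of n nodes
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

x = np.zeros((n, d))                                        # local iterates
v = np.array([stoch_grad(i, rng.integers(m), x[i]) for i in range(n)])
y = v.copy()                                                # gradient trackers

for t in range(T):
    x_next = W @ x - alpha * y           # consensus step plus descent along the tracker
    v_next = np.empty_like(v)
    for i in range(n):
        k = rng.integers(m)              # one fresh sample, evaluated at both iterates
        # Hybrid estimator: stochastic gradient plus recursive momentum correction
        v_next[i] = stoch_grad(i, k, x_next[i]) \
                    + (1 - beta) * (v[i] - stoch_grad(i, k, x[i]))
    y = W @ (y + v_next - v)             # track the network-average gradient
    x, v = x_next, v_next

x_bar = x.mean(axis=0)
g = np.mean([A[i].T @ (A[i] @ x_bar - b[i]) / m for i in range(n)], axis=0)
print("||grad f(x_bar)|| =", np.linalg.norm(g))
```

The ring matrix W can be swapped for any doubly stochastic matrix matching the actual communication topology; the update rules are unchanged, and only the mixing rate of the network affects the transient behavior.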


Related research

11/07/2020 · A fast randomized incremental gradient method for decentralized non-convex optimization
We study decentralized non-convex finite-sum minimization problems descr...

05/15/2020 · S-ADDOPT: Decentralized stochastic first-order optimization over directed graphs
In this report, we study decentralized stochastic optimization to minimi...

10/13/2019 · Improving the Sample and Communication Complexity for Decentralized Non-Convex Optimization: A Joint Gradient Estimation and Tracking Approach
Many modern large-scale machine learning problems benefit from decentral...

12/08/2020 · A Primal-Dual Framework for Decentralized Stochastic Optimization
We consider the decentralized convex optimization problem, where multipl...

08/17/2020 · A near-optimal stochastic gradient method for decentralized non-convex finite-sum optimization
This paper describes a near-optimal stochastic first-order gradient meth...

09/02/2023 · Switch and Conquer: Efficient Algorithms By Switching Stochastic Gradient Oracles For Decentralized Saddle Point Problems
We consider a class of non-smooth strongly convex-strongly concave saddl...

09/06/2019 · Decentralized Stochastic Gradient Tracking for Non-convex Empirical Risk Minimization
This paper studies a decentralized stochastic gradient tracking (DSGT) a...
