A fast randomized incremental gradient method for decentralized non-convex optimization

11/07/2020
by Ran Xin et al.

We study decentralized non-convex finite-sum minimization problems defined over a network of nodes, where each node possesses a local batch of data samples. We propose a single-timescale first-order randomized incremental gradient method, termed GT-SAGA. GT-SAGA is computationally efficient since it evaluates only one component gradient per node per iteration, and it achieves provably fast and robust performance by leveraging node-level variance reduction and network-level gradient tracking. For general smooth non-convex problems, we show almost-sure and mean-squared convergence to a first-order stationary point and describe regimes of practical significance in which GT-SAGA achieves a network-independent convergence rate and outperforms existing approaches. When the global cost function further satisfies the Polyak-Łojasiewicz condition, we show that GT-SAGA exhibits global linear convergence to an optimal solution in expectation and describe regimes of practical interest in which its performance is network-independent and improves upon existing work. Numerical experiments on real-world datasets highlight the behavior and convergence properties of the proposed method.
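
The abstract describes the method's two building blocks: a SAGA-type variance-reduced estimator built from one randomly sampled component gradient per node per iteration, and a network-level gradient-tracking recursion layered over a consensus (mixing) step. The Python sketch below illustrates how such an update could be organized; the function name gt_saga_sketch, the mixing matrix W, the step size alpha, and the grads callables are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def gt_saga_sketch(grads, W, x0, alpha, iters, seed=None):
        """Minimal sketch of a GT-SAGA-style update (illustrative only).
        Assumes:
          - grads[i][s](x) returns the gradient of the s-th component
            function at node i (n nodes, m components per node),
          - W is an n-by-n doubly stochastic mixing matrix for the network,
          - x0 is a common initial point of dimension d.
        """
        rng = np.random.default_rng(seed)
        n, m = len(grads), len(grads[0])
        x = np.tile(x0, (n, 1))                 # local iterates, shape (n, d)
        table = np.array([[g(x0) for g in row] for row in grads])
                                                # stored component gradients, (n, m, d)
        v = table.mean(axis=1)                  # SAGA estimators, shape (n, d)
        y = v.copy()                            # gradient trackers, shape (n, d)
        for _ in range(iters):
            x = W @ x - alpha * y               # mixing step plus tracked-gradient descent
            v_new = np.empty_like(v)
            for i in range(n):
                s = rng.integers(m)             # one component gradient per node per iteration
                g_new = grads[i][s](x[i])
                v_new[i] = g_new - table[i, s] + table[i].mean(axis=0)
                table[i, s] = g_new             # refresh the stored gradient
            y = W @ y + v_new - v               # gradient-tracking recursion
            v = v_new
        return x

With a connected network and a doubly stochastic W, the tracker y propagates the variance-reduced estimators across nodes while each node touches only a single data sample per iteration, which is the source of the per-iteration computational efficiency claimed above.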

Related research

08/17/2020
A near-optimal stochastic gradient method for decentralized non-convex finite-sum optimization
This paper describes a near-optimal stochastic first-order gradient meth...

02/12/2021
A hybrid variance-reduced method for decentralized stochastic non-convex optimization
This paper considers decentralized stochastic optimization over a networ...

08/13/2020
Push-SAGA: A decentralized stochastic algorithm with variance reduction over directed graphs
In this paper, we propose Push-SAGA, a decentralized stochastic first-or...

05/15/2020
S-ADDOPT: Decentralized stochastic first-order optimization over directed graphs
In this report, we study decentralized stochastic optimization to minimi...

10/08/2019
Variance-Reduced Decentralized Stochastic Optimization with Gradient Tracking – Part II: GT-SVRG
Decentralized stochastic optimization has recently benefited from gradie...

01/03/2020
A Proximal Linearization-based Decentralized Method for Nonconvex Problems with Nonlinear Constraints
Decentralized optimization for non-convex problems is now demanded by ...

07/05/2019
A-priori error analysis of local incremental minimization schemes for rate-independent evolutions
This paper is concerned with a priori error estimates for the local incr...
