Distributed stochastic gradient tracking algorithm with variance reduction for non-convex optimization

06/28/2021
by Xia Jiang et al.

This paper proposes a distributed stochastic algorithm with variance reduction for general smooth non-convex finite-sum optimization, which has wide applications in the signal processing and machine learning communities. In the distributed setting, a large number of samples are allocated to multiple agents in a network. Each agent computes a local stochastic gradient and communicates with its neighbors to seek the global optimum. In this paper, we develop a modified variance-reduction technique to deal with the variance introduced by stochastic gradients. Combining gradient tracking and variance reduction, this paper proposes a distributed stochastic algorithm, GT-VR, to solve large-scale non-convex finite-sum optimization over multi-agent networks. A complete and rigorous proof shows that GT-VR converges to first-order stationary points with an O(1/k) convergence rate. In addition, we provide a complexity analysis of the proposed algorithm: compared with some existing first-order methods, GT-VR attains a lower O(PMϵ^-1) gradient complexity under mild conditions. Experimental simulations comparing GT-VR with state-of-the-art algorithms verify the efficiency of the proposed algorithm.
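The abstract describes GT-VR as a combination of two ingredients: gradient tracking, in which each agent maintains a consensus-based estimate of the global gradient, and an SVRG-style variance-reduction correction of the local stochastic gradients. The sketch below illustrates how those two pieces fit together on a toy problem; it is not the paper's exact recursion. The network size P, per-agent sample count M, step size alpha, ring mixing matrix W, least-squares local losses, and snapshot schedule are all illustrative assumptions.

```python
# Minimal sketch of a gradient-tracking + variance-reduction loop
# (illustrative; the exact GT-VR update rules are defined in the paper).
import numpy as np

rng = np.random.default_rng(0)
P, M, d = 5, 20, 3           # agents, samples per agent, dimension (assumed)
alpha = 0.05                 # step size (assumed)

# Local finite-sum losses: f_i(x) = (1/M) * sum_j 0.5 * (a_ij^T x - b_ij)^2
A = rng.normal(size=(P, M, d))
b = rng.normal(size=(P, M))

def sgrad(i, j, x):
    """Stochastic gradient of agent i's j-th component function at x."""
    return (A[i, j] @ x - b[i, j]) * A[i, j]

def full_grad(i, x):
    """Full local gradient of agent i, used at SVRG-style snapshots."""
    return A[i].T @ (A[i] @ x - b[i]) / M

# Doubly stochastic mixing matrix for a ring of P agents (assumed topology).
W = np.zeros((P, P))
for i in range(P):
    W[i, i] = 0.5
    W[i, (i - 1) % P] = W[i, (i + 1) % P] = 0.25

x = rng.normal(size=(P, d))                                # local iterates
snap = x.copy()                                            # SVRG snapshots
mu = np.array([full_grad(i, snap[i]) for i in range(P)])   # snapshot gradients
v = mu.copy()             # variance-reduced local gradient estimates
y = v.copy()              # gradient trackers, initialized y_0 = v_0

for k in range(200):
    # Consensus step plus descent along the tracked global-gradient estimate.
    x_new = W @ x - alpha * y
    # SVRG-style correction: sampled gradient, re-centered at the snapshot.
    v_new = np.empty_like(v)
    for i in range(P):
        j = rng.integers(M)
        v_new[i] = sgrad(i, j, x_new[i]) - sgrad(i, j, snap[i]) + mu[i]
    # Gradient tracking: y_{k+1} = W y_k + v_{k+1} - v_k.
    y = W @ y + v_new - v
    x, v = x_new, v_new
    if (k + 1) % 50 == 0:   # periodic snapshot refresh (assumed schedule)
        snap = x.copy()
        mu = np.array([full_grad(i, snap[i]) for i in range(P)])
```

The tracking recursion y_{k+1} = W y_k + v_{k+1} - v_k preserves the invariant that the average of the trackers equals the average of the local variance-reduced gradients, which is what allows each agent to descend along an estimate of the global gradient while the SVRG-style correction shrinks the stochastic-gradient variance.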

Related research

04/22/2022 - Distributed stochastic projection-free solver for constrained optimization
  This paper proposes a distributed stochastic projection-free algorithm f...

07/26/2021 - Asynchronous Distributed Reinforcement Learning for LQR Control via Zeroth-Order Block Coordinate Descent
  Recently introduced distributed zeroth-order optimization (ZOO) algorith...

03/15/2017 - Riemannian stochastic quasi-Newton algorithm with variance reduction and its convergence analysis
  Stochastic variance reduction algorithms have recently become popular fo...

02/22/2023 - Stochastic Approximation Beyond Gradient for Signal Processing and Machine Learning
  Stochastic approximation (SA) is a classical algorithm that has had sinc...

06/20/2020 - On the Divergence of Decentralized Non-Convex Optimization
  We study a generic class of decentralized algorithms in which N agents j...

05/30/2023 - Blockwise Stochastic Variance-Reduced Methods with Parallel Speedup for Multi-Block Bilevel Optimization
  In this paper, we consider non-convex multi-block bilevel optimization (...

01/31/2022 - A framework for bilevel optimization that enables stochastic and global variance reduction algorithms
  Bilevel optimization, the problem of minimizing a value function which i...
