Distributed Stochastic Gradient Tracking Methods

05/25/2018
by   Shi Pu, et al.

In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex. The global objective is to find a common solution that minimizes the average of all cost functions. Assuming agents only have access to unbiased estimates of the gradients of their local cost functions, we consider a distributed stochastic gradient tracking method (DSGT) and a gossip-like stochastic gradient tracking method (GSGT). We show that, in expectation, the iterates generated by each agent are attracted to a neighborhood of the optimal solution, where they accumulate exponentially fast (under a constant stepsize choice). Under DSGT, the limiting (expected) error bounds on the distance of the iterates from the optimal solution decrease with the network size n, which is comparable to the performance of a centralized stochastic gradient algorithm. Moreover, we show that when the network is well-connected, GSGT incurs lower communication cost than DSGT while maintaining a similar computational cost. A numerical example further demonstrates the effectiveness of the proposed methods.
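To illustrate the kind of update the abstract refers to, below is a minimal sketch of a DSGT-style iteration in Python. It assumes the standard gradient tracking form x_{k+1} = W(x_k - alpha*y_k), y_{k+1} = W y_k + g_{k+1} - g_k with a doubly stochastic mixing matrix W; the quadratic local costs, the ring topology, and all parameter values are illustrative choices, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, d, alpha, noise_std = 8, 5, 0.05, 0.1   # agents, dimension, stepsize, gradient noise

# Smooth, strongly convex local costs f_i(x) = 0.5 * ||A_i x - b_i||^2 (illustrative)
A = np.eye(d) + 0.3 * rng.standard_normal((n, d, d))
b = rng.standard_normal((n, d))

def stoch_grad(i, x):
    """Unbiased estimate of grad f_i(x): exact gradient plus zero-mean noise."""
    return A[i].T @ (A[i] @ x - b[i]) + noise_std * rng.standard_normal(d)

# Doubly stochastic mixing matrix for a ring graph (lazy Metropolis-style weights)
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = rng.standard_normal((n, d))                    # local iterates, one row per agent
g = np.array([stoch_grad(i, x[i]) for i in range(n)])
y = g.copy()                                       # gradient trackers, initialized at g_0

for _ in range(2000):
    x = W @ (x - alpha * y)                        # consensus step on corrected iterates
    g_new = np.array([stoch_grad(i, x[i]) for i in range(n)])
    y = W @ y + g_new - g                          # track the average stochastic gradient
    g = g_new

# Compare the average iterate with the minimizer of (1/n) * sum_i f_i
H = sum(A[i].T @ A[i] for i in range(n))
x_star = np.linalg.solve(H, sum(A[i].T @ b[i] for i in range(n)))
print("distance to optimum:", np.linalg.norm(x.mean(axis=0) - x_star))

With a constant stepsize and persistent gradient noise, the printed distance settles in a small neighborhood of zero rather than vanishing, which matches the neighborhood-convergence behavior described in the abstract.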


