Push-Pull Gradient Methods for Distributed Optimization in Networks

10/15/2018
by Shi Pu, et al.

In this paper, we focus on solving a distributed convex optimization problem in a network, where each agent has its own convex cost function and the goal is to minimize the sum of the agents' cost functions while obeying the network connectivity structure. To minimize this sum, we consider new distributed gradient-based methods in which each node maintains two estimates: an estimate of the optimal decision variable and an estimate of the gradient of the average of the agents' objective functions. From the viewpoint of an agent, the information about the decision variable is pushed to its neighbors, while the information about the gradients is pulled from its neighbors, hence the name "push-pull gradient methods". The name also reflects the implementation: the push communication protocol and the pull communication protocol are employed, respectively, to implement certain steps of the numerical schemes. The methods use two different graphs for the information exchange among agents and thereby unify algorithms across distributed architectures, including decentralized (peer-to-peer), centralized (master-slave), and semi-centralized (leader-follower) architectures. We show that the proposed algorithms and their many variants converge linearly for strongly convex and smooth objective functions over a network (possibly with unidirectional data links) in both synchronous and asynchronous random-gossip settings. We numerically evaluate the proposed algorithms on both static and time-varying graphs and find that they are competitive with other linearly convergent schemes.
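The two-estimate scheme described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation: the quadratic costs, the directed ring topology, the weight matrices `R` (row-stochastic, used to pull decision variables) and `C` (column-stochastic, used to push gradient trackers), and the step size `alpha` are all assumptions chosen for a runnable toy example.

```python
import numpy as np

# Toy problem: agent i holds f_i(x) = 0.5 * (x - b_i)^2, so the
# minimizer of the sum is mean(b). All quantities are stacked over agents.
n = 4
b = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda x: x - b  # stacked local gradients

# Directed ring with self-loops (agent i also hears from agent i+1 mod n).
A = np.eye(n) + np.roll(np.eye(n), 1, axis=1)
R = A / A.sum(axis=1, keepdims=True)  # row-stochastic: pull decision variables
C = A / A.sum(axis=0, keepdims=True)  # column-stochastic: push gradient trackers

alpha = 0.1           # illustrative step size
x = np.zeros(n)       # decision-variable estimates
y = grad(x)           # gradient trackers, initialized at local gradients
for _ in range(500):
    # Pull step: mix neighbors' (x - alpha * y) using row-stochastic R.
    x_new = R @ (x - alpha * y)
    # Push step: mix trackers with column-stochastic C, then correct
    # with the local gradient change (gradient tracking).
    y = C @ y + grad(x_new) - grad(x)
    x = x_new

print(x)  # each agent's estimate approaches mean(b) = 2.5
```

Because `C` is column-stochastic, the sum of the trackers stays equal to the sum of the current local gradients, which is what lets every agent track the average gradient and drive all estimates to the common minimizer.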


