Stability of Decentralized Gradient Descent in Open Multi-Agent Systems

09/11/2020
by Julien M. Hendrickx, et al.

The aim of decentralized gradient descent (DGD) is to minimize a sum of n functions held by interconnected agents. We study the stability of DGD in open contexts where agents can join or leave the system, each event adding their function to, or removing it from, the global objective. Assuming all functions are smooth and strongly convex, with minimizers lying in a given ball, we characterize the sensitivity of the global minimizer of the sum to the addition or removal of a function, and provide bounds in O(min(κ^{1/2}, κ/n^{1/2}, κ^{3/2}/n)), where κ is the condition number. We also show that the states of all agents can be eventually bounded independently of the sequence of arrivals and departures. The magnitude of the bound scales with the strength of the interconnection, which also determines the accuracy of the final solution in the absence of arrivals and departures, thus exposing a potential trade-off between accuracy and sensitivity. Our analysis relies on the formulation of DGD as gradient descent on an auxiliary function. The tightness of our results is analyzed using the PESTO Toolbox.
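To make the setting concrete, the standard DGD iteration combines a consensus step through a mixing matrix W with a local gradient step. Below is a minimal sketch, not the paper's implementation: it assumes scalar quadratics f_i(x) = (x - b_i)^2 / 2 held by n agents on a ring graph, with a hypothetical doubly stochastic W and a fixed step size alpha.

```python
import numpy as np

# Minimal DGD sketch (illustrative assumptions, not the paper's setup):
# each agent i holds f_i(x) = (x - b_i)^2 / 2, whose minimizer is b_i.
# The global objective sum_i f_i is minimized at mean(b).
n = 5
b = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # local minimizers
x = np.zeros(n)                          # agent states

# Doubly stochastic mixing matrix for a ring: self-weight plus
# equal weights on the two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha = 0.05  # fixed step size
for _ in range(2000):
    grad = x - b          # f_i'(x_i) = x_i - b_i
    x = W @ x - alpha * grad  # consensus step + local gradient step

print(np.round(x, 3))
```

With a fixed step size, the agents converge to a neighborhood of the global minimizer mean(b) = 3 rather than to it exactly; the residual disagreement shrinks with alpha and with stronger interconnection, which is the accuracy/interconnection trade-off the abstract refers to.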


