Distributed gradient-based optimization in the presence of dependent aperiodic communication

01/27/2022
by Adrian Redder, et al.

Iterative distributed optimization algorithms involve multiple agents that communicate with each other over time in order to optimize a global objective. In the presence of unreliable communication networks, the Age-of-Information (AoI), which measures the freshness of the received data, may be large and hence hinder algorithmic convergence. In this paper, we study the convergence of general distributed gradient-based optimization algorithms under communication that is neither periodic nor stochastically independent across time. We show that convergence is guaranteed provided the random variables associated with the AoI processes are stochastically dominated by a random variable with finite first moment. This improves on previous results, which required the boundedness of moments higher than the first. We then introduce stochastically strongly connected (SSC) networks, a new stochastic form of strong connectedness for time-varying networks. We show: if, for any p ≥ 0, the processes that describe the success of communication between agents in an SSC network are α-mixing with n^{p-1} α(n) summable, then the associated AoI processes are stochastically dominated by a random variable with finite p-th moment. Combined with our first contribution, this implies that distributed stochastic gradient descent converges in the presence of AoI whenever α(n) is summable (the case p = 1).
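The following is a minimal, self-contained sketch (Python/NumPy; not the paper's implementation) of the setting the abstract describes: agents run gradient descent on a global quadratic objective using stale peer gradients, while the success of each link follows a two-state Markov chain, a simple example of dependent, aperiodic communication whose α-mixing coefficients decay geometrically and are therefore summable. All names and parameters (p_up, p_down, step, the quadratic objectives) are illustrative assumptions.

# Sketch: distributed gradient descent on f(x) = sum_j 0.5*(x - c_j)^2,
# where each agent only holds a possibly stale copy of its peers' gradients.
# Link states follow a two-state Markov chain, so communication is
# dependent across time and aperiodic, yet geometrically mixing.
import numpy as np

rng = np.random.default_rng(0)
N, T, step = 4, 2000, 0.05
c = rng.normal(size=N)               # local minimizers; global optimum is c.mean()
x = np.zeros(N)                      # each agent's copy of the decision variable
last_grad = np.zeros((N, N))         # last_grad[i, j]: agent i's stale copy of grad_j
aoi = np.zeros((N, N), dtype=int)    # current age of information per link
max_aoi = 0
link_up = np.ones((N, N), dtype=bool)
p_up, p_down = 0.3, 0.2              # Markov transition probabilities per link

for t in range(T):
    # Markov-modulated link states: an up link stays up w.p. 1 - p_down,
    # a down link recovers w.p. p_up (dependent, non-i.i.d. communication).
    flip = rng.random((N, N))
    link_up = np.where(link_up, flip > p_down, flip < p_up)
    grads = x - c                    # local gradients g_j = x_j - c_j
    for i in range(N):
        for j in range(N):
            if i == j or link_up[i, j]:
                last_grad[i, j] = grads[j]   # fresh gradient received
                aoi[i, j] = 0
            else:
                aoi[i, j] += 1               # information ages on failed links
    max_aoi = max(max_aoi, int(aoi.max()))
    # Gradient step using the possibly stale global gradient estimate.
    x -= step * last_grad.mean(axis=1)

print("iterates:", np.round(x, 3), " optimum:", round(c.mean(), 3))
print("largest AoI observed:", max_aoi)

Because the two-state chain mixes geometrically, the recorded AoI values have light tails; in the paper's terms, they are stochastically dominated by a random variable with finite first moment, so the convergence condition in the abstract is satisfied in this toy setting.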
