DSPG: Decentralized Simultaneous Perturbations Gradient Descent Scheme

03/17/2019
by Arunselvan Ramaswamy, et al.

In this paper, we present an easy-to-implement asynchronous approximate gradient method called DSPG (Decentralized Simultaneous Perturbation Stochastic Approximations with Constant Sensitivity Parameters). It is obtained by modifying SPSA (Simultaneous Perturbation Stochastic Approximations) to allow for decentralized optimization in multi-agent learning and distributed control scenarios. SPSA is a popular approximate gradient method, developed by Spall, that is widely used in robotics and learning. In the multi-agent setup considered herein, the agents are asynchronous (each abides by its local clock) and communicate over a wireless medium that is prone to losses and delays. We analyze the gradient estimation bias that arises from fixing the sensitivity parameters at a single constant value, as well as the bias introduced by communication losses and delays. Specifically, we show that these biases can be countered through better and more frequent communication and/or by choosing a small fixed value for the sensitivity parameters. We also discuss the variance of the gradient estimator and its effect on the rate of convergence. Finally, we present numerical results supporting DSPG and the aforementioned analyses.
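To make the construction concrete, below is a minimal sketch (not the paper's implementation) of the classical two-sided SPSA gradient estimator with a constant sensitivity parameter c, the building block that DSPG decentralizes. The function and parameter names are illustrative assumptions, and the toy objective is chosen only for demonstration.

```python
import numpy as np

def spsa_gradient(f, x, c=0.05, rng=None):
    """One two-sided SPSA gradient estimate of f at point x.

    c is the sensitivity (perturbation) parameter. In the DSPG setting
    described in the abstract, c is held at a small constant value
    rather than decayed; the resulting estimation bias shrinks as c
    is chosen smaller.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Rademacher (+/-1) perturbation direction; one draw perturbs all
    # coordinates at once, so only two evaluations of f are needed
    # regardless of the dimension of x.
    delta = rng.choice([-1.0, 1.0], size=x.shape)
    # Two-sided finite difference along the random direction.
    diff = f(x + c * delta) - f(x - c * delta)
    # Since each entry of delta is +/-1, dividing by delta equals
    # multiplying by delta.
    return (diff / (2.0 * c)) * delta

# Illustrative usage on a toy quadratic.
if __name__ == "__main__":
    f = lambda x: float(np.sum(x ** 2))
    x = np.array([1.0, -2.0, 0.5])
    g = spsa_gradient(f, x)
    print(g)  # noisy estimate of the true gradient 2*x
```

In the decentralized setting of the paper, each agent would form such an estimate asynchronously on its local clock and exchange information over the lossy, delay-prone wireless medium; that coordination machinery is specific to the paper and omitted from this sketch.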


Related research

09/30/2022 · Online Multi-Agent Decentralized Byzantine-robust Gradient Estimation
In this paper, we propose an iterative scheme for distributed Byzantine-r...

02/22/2018 · Asynchronous stochastic approximations with asymptotically biased errors and deep multi-agent learning
Asynchronous stochastic approximations are an important class of model-f...

09/12/2019 · Communication-Efficient Distributed Optimization in Networks with Gradient Tracking
There is a growing interest in large-scale machine learning and optimiza...

06/07/2021 · Asynchronous Distributed Optimization with Redundancy in Cost Functions
This paper considers the problem of asynchronous distributed multi-agent...

02/02/2022 · Asynchronous Decentralized Learning over Unreliable Wireless Networks
Decentralized learning enables edge users to collaboratively train model...

02/15/2021 · Distributed Online Learning for Joint Regret with Communication Constraints
In this paper we consider a distributed online learning setting for jo...

09/11/2020 · Stability of Decentralized Gradient Descent in Open Multi-Agent Systems
The aim of decentralized gradient descent (DGD) is to minimize a sum of ...
