Asynchronous Distributed Optimization with Redundancy in Cost Functions

06/07/2021
by Shuo Liu, et al.

This paper considers the problem of asynchronous distributed multi-agent optimization on a server-based system architecture. In this problem, each agent has a local cost function, and the goal of the agents is to collectively find a minimum of their aggregate cost. A standard algorithm for this problem is the iterative distributed gradient-descent (DGD) method, implemented collaboratively by the server and the agents. In the synchronous setting, the algorithm proceeds from one iteration to the next only after all the agents complete their expected communication with the server. However, such synchrony can be expensive and even infeasible in real-world applications. We show that waiting for all the agents is unnecessary in many applications of distributed optimization, including distributed machine learning, due to redundancy in the cost functions (or data). Specifically, we consider a generic notion of redundancy, named (r,ϵ)-redundancy, which implies that the original multi-agent optimization problem remains solvable with ϵ accuracy despite the removal of up to r (out of n total) agents from the system. We present an asynchronous DGD algorithm in which, at each iteration, the server waits for only (any) n-r agents instead of all n agents. Assuming (r,ϵ)-redundancy, we show that our asynchronous algorithm converges to an approximate solution with error that is linear in ϵ and r. We also present a generalization of our algorithm that tolerates some Byzantine faulty agents in the system. Finally, we demonstrate the improved communication efficiency of our algorithm through experiments on the MNIST and Fashion-MNIST datasets using the benchmark neural network LeNet.
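
Informally, (r,ϵ)-redundancy says that minimizing the aggregate cost of any n-r of the agents yields a point within ϵ of a minimum of the full aggregate cost, and this is the property the asynchronous algorithm exploits when it ignores the r slowest responses in each iteration. The snippet below is a minimal sketch of that server-side rule, not the authors' implementation: the quadratic local costs, simulated delays, thread-based agents, and step size are all illustrative assumptions.

```python
# Sketch of asynchronous DGD: per iteration, the server applies a gradient
# step as soon as any n - r agents respond, ignoring the r slowest ones.
# Local costs, delays, and hyperparameters below are illustrative only.
import random
import time
import numpy as np
from concurrent.futures import ThreadPoolExecutor, as_completed

n, r = 10, 3            # total agents; up to r stragglers ignored per iteration
dim, eta, iters = 2, 0.1, 100
rng = np.random.default_rng(0)

# Agent i holds the quadratic cost 0.5 * ||x - a_i||^2; the aggregate minimum
# is the mean of the a_i. Similar a_i model redundancy across the agents.
targets = rng.normal(loc=1.0, scale=0.05, size=(n, dim))

def local_gradient(i, x):
    time.sleep(random.uniform(0.0, 0.02))   # heterogeneous agent response times
    return x - targets[i]

x = np.zeros(dim)
with ThreadPoolExecutor(max_workers=n) as pool:
    for _ in range(iters):
        futures = [pool.submit(local_gradient, i, x.copy()) for i in range(n)]
        grads = []
        for fut in as_completed(futures):    # collect responses as they arrive
            grads.append(fut.result())
            if len(grads) == n - r:          # proceed with any n - r responses
                break
        x -= eta * np.mean(grads, axis=0)

print("estimate:", x)
print("minimum of the full aggregate cost:", targets.mean(axis=0))
```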

Related research

10/21/2021  Utilizing Redundancy in Cost Functions for Resilience in Distributed Optimization and Learning
11/16/2022  Impact of Redundancy on Resilience in Distributed Optimization and Learning
09/24/2018  A Canonical Form for First-Order Distributed Optimization Algorithms
11/16/2022  Asynchronous Bayesian Learning over a Network
09/22/2020  Asynchronous Distributed Optimization with Randomized Delays
03/17/2019  DSPG: Decentralized Simultaneous Perturbations Gradient Descent Scheme
12/05/2022  Distributed Stochastic Gradient Descent with Cost-Sensitive and Strategic Agents
