Utilizing Redundancy in Cost Functions for Resilience in Distributed Optimization and Learning

10/21/2021
by Shuo Liu, et al.

This paper considers the problem of resilient distributed optimization and stochastic machine learning in a server-based architecture. The system comprises a server and multiple agents, where each agent has a local cost function. The agents collaborate with the server to find a minimum of the aggregate of their cost functions. We consider the case when some of the agents may be asynchronous and/or Byzantine faulty, which renders the classical distributed gradient descent (DGD) algorithm ineffective. Our goal is to design techniques that improve the efficacy of DGD under asynchrony and Byzantine failures. To do so, we first propose modeling the agents' cost functions through the generic notion of (f, r; ϵ)-redundancy, where f and r parameterize the numbers of Byzantine faulty and asynchronous agents, respectively, and ϵ characterizes the closeness between the agents' cost functions. This notion quantifies the level of redundancy present among the agents' cost functions for any given distributed optimization problem. We demonstrate, both theoretically and empirically, that the proposed redundancy model improves the robustness of DGD against asynchronous and Byzantine agents, and that our techniques extend to distributed stochastic gradient descent (D-SGD) for robust distributed machine learning in the presence of asynchronous and Byzantine agents.
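To give a sense of the redundancy condition: roughly, and paraphrasing rather than quoting the paper's formal statement, (f, r; ϵ)-redundancy among n agents with cost functions Q_1, ..., Q_n asks that removing up to r slow agents and up to f faulty agents moves the solution by at most ϵ, i.e.,

    dist( argmin_x Σ_{i ∈ Ŝ} Q_i(x), argmin_x Σ_{i ∈ S} Q_i(x) ) ≤ ϵ

for every pair of subsets Ŝ ⊆ S ⊆ {1, ..., n} with |S| = n − f and |Ŝ| ≥ n − r − f.

One server-side gradient filter studied in this line of work is comparative gradient elimination (CGE): in each round, the server sorts the received gradients by Euclidean norm, drops the f largest, and applies the rest. The following is a minimal illustrative Python sketch, not the authors' code; the toy quadratic costs, the name cge_aggregate, and all parameter values are our own assumptions.

    import numpy as np

    def cge_aggregate(grads, f):
        # CGE-style filter: sort gradients by Euclidean norm,
        # drop the f largest-norm ones, and sum the rest.
        norms = np.linalg.norm(grads, axis=1)
        keep = np.argsort(norms)[: len(grads) - f]
        return grads[keep].sum(axis=0)

    rng = np.random.default_rng(0)
    n, f, dim = 10, 2, 5                       # n agents, at most f Byzantine
    targets = 0.1 * rng.normal(size=(n, dim))  # clustered a_i: highly redundant costs

    x = np.zeros(dim)
    eta = 0.05
    for _ in range(300):
        # Honest agent i reports the gradient of Q_i(x) = ||x - a_i||^2 / 2.
        grads = x - targets                              # row i is x - a_i
        grads[:f] = 100.0 * rng.normal(size=(f, dim))    # faulty reports
        x = x - eta * cge_aggregate(grads, f)

    print("estimate:        ", np.round(x, 3))
    print("honest minimizer:", np.round(targets[f:].mean(axis=0), 3))

In this toy run the honest costs nearly agree (small ϵ), so the faulty agents' large-norm reports are filtered out and the iterate converges to the honest agents' minimizer; with less redundancy (larger ϵ), one would expect such a filter to guarantee convergence only to a neighborhood of the true minimum.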


Related research:

06/07/2021 · Asynchronous Distributed Optimization with Redundancy in Cost Functions
This paper considers the problem of asynchronous distributed multi-agent...

11/16/2022 · Impact of Redundancy on Resilience in Distributed Optimization and Learning
This report considers the problem of resilient distributed optimization ...

07/29/2019 · DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation
To improve the resilience of distributed training to worst-case, or Byza...

03/20/2019 · Byzantine Fault Tolerant Distributed Linear Regression
This paper considers the problem of Byzantine fault tolerant distributed...

12/05/2022 · Distributed Stochastic Gradient Descent with Cost-Sensitive and Strategic Agents
This study considers a federated learning setup where cost-sensitive and...

05/18/2023 · On the Geometric Convergence of Byzantine-Resilient Distributed Optimization Algorithms
The problem of designing distributed optimization algorithms that are re...

03/19/2020 · Byzantine-Resilient Distributed Optimization of Multi-Dimensional Functions
The problem of distributed optimization requires a group of agents to re...
