Robust and Efficient Aggregation for Distributed Learning

04/01/2022
by   Stefan Vlaski, et al.
Distributed learning paradigms, such as federated and decentralized learning, allow a collection of agents to coordinate their models without exchanging raw data. Instead, agents compute model updates locally based on their available data and subsequently share the updated model with a parameter server or their peers. This is followed by an aggregation step, which traditionally takes the form of a (weighted) average. Distributed learning schemes based on averaging are known to be susceptible to outliers: a single malicious agent can drive an averaging-based distributed learning algorithm to an arbitrarily poor model. This has motivated the development of robust aggregation schemes based on variations of the median and trimmed mean. While such procedures ensure robustness to outliers and malicious behavior, they come at the cost of significantly reduced sample efficiency: in non-contaminated settings, current robust aggregation schemes require significantly higher agent participation rates than their mean-based counterparts to achieve a given level of performance. In this work we remedy this drawback by developing statistically efficient and robust aggregation schemes for distributed learning.
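The contrast the abstract draws between averaging and robust aggregation can be illustrated with a small numerical sketch. The scenario below is hypothetical (agent count, dimensions, and the `trimmed_mean` helper are illustrative, not the paper's method): four honest agents send updates near the true value, one malicious agent sends an arbitrarily large update, and we compare the coordinate-wise mean, median, and trimmed mean.

```python
import numpy as np

# Hypothetical setup: 4 honest agents send 3-dimensional model updates
# near the true value 1.0; 1 malicious agent sends a huge update.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(4, 3))
malicious = np.full((1, 3), 1e6)
updates = np.vstack([honest, malicious])

# Plain averaging: the single outlier drags the aggregate far from 1.0.
mean_agg = updates.mean(axis=0)

# Coordinate-wise median: unaffected by the single outlier.
median_agg = np.median(updates, axis=0)

def trimmed_mean(x, trim=1):
    """Coordinate-wise trimmed mean: drop the `trim` smallest and
    `trim` largest values in each coordinate, then average the rest."""
    s = np.sort(x, axis=0)
    return s[trim:-trim].mean(axis=0)

# Trimmed mean: also stays near 1.0, discarding one extreme per side.
trimmed_agg = trimmed_mean(updates, trim=1)
```

With averaging, each coordinate of `mean_agg` is roughly (4·1.0 + 10⁶)/5 ≈ 2·10⁵, while the median and trimmed mean remain close to 1.0. The sample-efficiency cost the abstract mentions is also visible in this sketch: the median and trimmed mean discard information from the honest agents, which is what the paper's proposed schemes aim to avoid.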
