BROADCAST: Reducing Both Stochastic and Compression Noise to Robustify Communication-Efficient Federated Learning

04/14/2021
by Heng Zhu, et al.

Communication between workers and the master node to collect local stochastic gradients is a key bottleneck in large-scale federated learning systems. Various recent works have proposed compressing the local stochastic gradients to mitigate this communication overhead. However, robustness to malicious attacks is rarely considered in such a setting. In this work, we investigate the problem of Byzantine-robust federated learning with compression, where the attacks from Byzantine workers can be arbitrarily malicious. We point out that a vanilla combination of compressed stochastic gradient descent (SGD) and geometric median-based robust aggregation suffers from both stochastic and compression noise in the presence of Byzantine attacks. In light of this observation, we propose to jointly reduce the stochastic and compression noise so as to improve Byzantine-robustness. For the stochastic noise, we adopt the stochastic average gradient algorithm (SAGA) to gradually eliminate the inner variations of the regular workers. For the compression noise, we apply gradient difference compression and achieve compression for free. We theoretically prove that the proposed algorithm reaches a neighborhood of the optimal solution at a linear convergence rate, and that the asymptotic learning error is of the same order as that of the state-of-the-art uncompressed method. Finally, numerical experiments demonstrate the effectiveness of the proposed method.
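The abstract names two noise-reduction components: SAGA-style variance reduction at the regular workers and compression of gradient differences, combined with geometric median aggregation at the master. The sketch below is a toy illustration of these ideas on a synthetic least-squares problem, not the authors' BROADCAST implementation; the top-k compressor, the Weiszfeld-based geometric median, the Gaussian attack, and all variable names are illustrative assumptions.

```python
import numpy as np

def top_k_compress(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest
    (a simple stand-in for a generic contractive compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def geometric_median(points, iters=50, eps=1e-8):
    """Approximate geometric median of row vectors via Weiszfeld iterations."""
    z = points.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - z, axis=1), eps)
        w = 1.0 / d
        z = (w[:, None] * points).sum(axis=0) / w.sum()
    return z

rng = np.random.default_rng(0)
n_workers, n_byz, dim, n_local = 10, 2, 20, 50   # 2 of 10 workers are Byzantine
A = [rng.normal(size=(n_local, dim)) for _ in range(n_workers)]
b = [Ai @ rng.normal(size=dim) + 0.1 * rng.normal(size=n_local) for Ai in A]

x = np.zeros(dim)                                # model kept at the master
lr, k = 0.05, 5                                  # step size, top-k budget
grad_table = [np.zeros((n_local, dim)) for _ in range(n_workers)]  # SAGA tables
ref_msg = np.zeros((n_workers, dim))             # last reconstructed message per worker

for t in range(200):
    msgs = np.zeros((n_workers, dim))
    for i in range(n_workers):
        if i < n_byz:
            # Byzantine workers may send arbitrary messages (Gaussian attack here).
            msgs[i] = ref_msg[i] + rng.normal(scale=10.0, size=dim)
        else:
            j = rng.integers(n_local)
            g_j = (A[i][j] @ x - b[i][j]) * A[i][j]   # fresh sample gradient
            # SAGA correction: replace stored gradient of sample j, keep the average.
            saga_grad = g_j - grad_table[i][j] + grad_table[i].mean(axis=0)
            grad_table[i][j] = g_j
            # Transmit only a compressed *difference* from the last message;
            # master and worker reconstruct the full message from the shared reference.
            delta = top_k_compress(saga_grad - ref_msg[i], k)
            msgs[i] = ref_msg[i] + delta
        ref_msg[i] = msgs[i]
    x -= lr * geometric_median(msgs)             # robust aggregation at the master
```

Intuitively, as the SAGA-corrected gradients of the regular workers stabilize, the successive differences they transmit shrink, so the error introduced by compressing those differences also vanishes; this is the sense in which the abstract's "compression for free" claim can be read, though the precise guarantees are in the paper itself.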


Related research

09/17/2020 · Byzantine-Robust Variance-Reduced Federated Learning over Distributed Non-i.i.d. Data
We propose a Byzantine-robust variance-reduced stochastic gradient desce...

06/01/2022 · Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top
Byzantine-robustness has been gaining a lot of attention due to the grow...

07/26/2021 · LEGATO: A LayerwisE Gradient AggregaTiOn Algorithm for Mitigating Byzantine Attacks in Federated Learning
Federated learning has arisen as a mechanism to allow multiple participa...

06/17/2020 · Communication-Efficient Robust Federated Learning Over Heterogeneous Datasets
This work investigates fault-resilient federated learning when the data ...

06/04/2019 · Distributed Training with Heterogeneous Data: Bridging Median and Mean Based Algorithms
Recently, there is a growing interest in the study of median-based algor...

06/13/2021 · Stochastic Alternating Direction Method of Multipliers for Byzantine-Robust Distributed Learning
This paper aims to solve a distributed learning problem under Byzantine ...

10/18/2021 · BEV-SGD: Best Effort Voting SGD for Analog Aggregation Based Federated Learning against Byzantine Attackers
As a promising distributed learning technology, analog aggregation based...
