BASGD: Buffered Asynchronous SGD for Byzantine Learning

03/02/2020
by   Yi-Rui Yang, et al.

Distributed learning has become a hot research topic due to its wide application in cluster-based large-scale learning, federated learning, edge computing and so on. Most distributed learning methods assume that workers are free of error and attack. However, many unexpected cases, such as communication errors and even malicious attacks, may happen in real applications. Hence, Byzantine learning (BL), which refers to distributed learning in the presence of attack or error, has recently attracted much attention. Most existing BL methods are synchronous, which results in slow convergence when workers are heterogeneous. Furthermore, in some applications like federated learning and edge computing, synchronization cannot even be performed most of the time because workers (clients or edge servers) are not always online. Hence, asynchronous BL (ABL) is more general and practical than synchronous BL (SBL). To the best of our knowledge, only two ABL methods exist. One of them cannot resist malicious attack. The other needs to store some training instances on the server, which raises a privacy leakage problem. In this paper, we propose a novel method, called buffered asynchronous stochastic gradient descent (BASGD), for BL. BASGD is an asynchronous method. Furthermore, BASGD does not need to store any training instances on the server, and hence can preserve privacy in ABL. BASGD is theoretically proved to be able to resist error and malicious attack. Moreover, BASGD has a theoretical convergence rate similar to that of vanilla asynchronous SGD (ASGD), with an extra constant variance term. Empirical results show that BASGD significantly outperforms vanilla ASGD and other ABL baselines when there is error or attack on workers.
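To make the buffered-aggregation idea in the abstract concrete, below is a minimal server-side sketch in Python. It is not the authors' implementation: the class and parameter names (BufferedServer, num_buffers, lr), the worker-to-buffer assignment by worker id modulo the number of buffers, and the use of a coordinate-wise median as the robust aggregation rule are all illustrative assumptions. The sketch only shows how gradients arriving asynchronously can be grouped into buffers and robustly aggregated without the server storing any training instances.

```python
# Hypothetical sketch of a buffered asynchronous SGD server.
# Assumptions (not from the paper text): buffer assignment worker_id % num_buffers,
# coordinate-wise median as the robust aggregation rule, and an update triggered
# once every buffer has received at least one gradient.
import numpy as np

class BufferedServer:
    def __init__(self, init_params, num_buffers, lr=0.1):
        self.params = np.asarray(init_params, dtype=float)
        self.num_buffers = num_buffers
        self.lr = lr
        # Each buffer keeps a running sum and count of the gradients assigned to it.
        self.buffer_sums = [np.zeros_like(self.params) for _ in range(num_buffers)]
        self.buffer_counts = [0] * num_buffers

    def receive_gradient(self, worker_id, grad):
        """Called asynchronously whenever any worker sends a gradient."""
        b = worker_id % self.num_buffers            # fixed worker-to-buffer mapping
        self.buffer_sums[b] += np.asarray(grad, dtype=float)
        self.buffer_counts[b] += 1
        if all(c > 0 for c in self.buffer_counts):  # every buffer is non-empty
            self._sgd_step()

    def _sgd_step(self):
        buffer_means = np.stack(
            [s / c for s, c in zip(self.buffer_sums, self.buffer_counts)]
        )
        # Robust aggregation across buffer means (median shown as one example);
        # a single Byzantine gradient can corrupt at most its own buffer's mean.
        agg = np.median(buffer_means, axis=0)
        self.params -= self.lr * agg
        # Clear the buffers after each parameter update.
        self.buffer_sums = [np.zeros_like(self.params) for _ in range(self.num_buffers)]
        self.buffer_counts = [0] * self.num_buffers
```

In this sketch, the server never sees raw training instances, only gradients, which matches the privacy-preservation claim in the abstract; the choice of aggregation rule across buffers is the component that would determine the actual Byzantine resilience.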


research
03/17/2019

Zeno++: robust asynchronous SGD with arbitrary number of Byzantine workers

We propose Zeno++, a new robust asynchronous Stochastic Grad...
research
06/16/2022

Sharper Convergence Guarantees for Asynchronous SGD for Distributed and Federated Learning

We study the asynchronous stochastic gradient descent algorithm for dist...
research
10/17/2019

Communication-Efficient Asynchronous Stochastic Frank-Wolfe over Nuclear-norm Balls

Large-scale machine learning training suffers from two prior challenges,...
research
05/31/2022

Asynchronous Hierarchical Federated Learning

Federated Learning is a rapidly growing area of research and with variou...
research
10/18/2021

BEV-SGD: Best Effort Voting SGD for Analog Aggregation Based Federated Learning against Byzantine Attackers

As a promising distributed learning technology, analog aggregation based...
research
09/17/2020

Byzantine-Robust Variance-Reduced Federated Learning over Distributed Non-i.i.d. Data

We propose a Byzantine-robust variance-reduced stochastic gradient desce...
research
02/17/2022

An Equivalence Between Data Poisoning and Byzantine Gradient Attacks

To study the resilience of distributed learning, the "Byzantine" literat...
