On the Optimal Batch Size for Byzantine-Robust Distributed Learning

05/23/2023
by   Yi-Rui Yang, et al.

Byzantine-robust distributed learning (BRDL), in which computing devices are likely to behave abnormally due to accidental failures or malicious attacks, has recently become a hot research topic. However, even in the independent and identically distributed (i.i.d.) case, existing BRDL methods suffer from a significant drop in model accuracy due to the large variance of stochastic gradients. Increasing the batch size is a simple yet effective way to reduce this variance. However, when the total number of gradient computations is fixed, an overly large batch size leads to too few iterations (parameter updates), which may also degrade model accuracy. In view of this challenge, we mainly study the optimal batch size when the total number of gradient computations is fixed. In particular, we theoretically and empirically show that, under a fixed budget of gradient computations, the optimal batch size in BRDL increases with the fraction of Byzantine workers. Therefore, compared to the case without attacks, the batch size should be set larger under Byzantine attacks. However, for existing BRDL methods, large batch sizes lead to a drop in model accuracy even when there is no Byzantine attack. To deal with this problem, we propose a novel BRDL method, called Byzantine-robust stochastic gradient descent with normalized momentum (ByzSGDnm), which can alleviate the drop in model accuracy in large-batch cases. Moreover, we theoretically prove the convergence of ByzSGDnm for general non-convex cases under Byzantine attacks. Empirical results show that ByzSGDnm performs comparably to existing BRDL methods under bit-flipping failures, but outperforms them under deliberately crafted attacks.
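To make the idea concrete, below is a minimal sketch of a Byzantine-robust SGD step with normalized momentum in the spirit described above. It is an illustration under assumptions, not the paper's exact ByzSGDnm specification: the coordinate-wise median aggregator, the momentum constant, the toy quadratic objective, and the scaled sign-flipping attack are all stand-ins chosen for brevity.

```python
# Hedged sketch: robustly aggregated, normalized-momentum SGD step.
# Assumptions (not from the paper): coordinate-wise median as the robust
# aggregator, a toy quadratic loss ||w||^2 / 2, and a sign-flipping attack.
import numpy as np

rng = np.random.default_rng(0)
dim, n_workers, n_byz = 10, 8, 2          # model size, workers, Byzantine workers
batch_size = 64                            # larger batches reduce gradient variance
lr, beta = 0.1, 0.9                        # learning rate and momentum constant

w = rng.normal(size=dim)                   # model parameters
momenta = np.zeros((n_workers, dim))       # per-worker local momentum buffers

def local_gradient(w, batch_size):
    """Stochastic gradient of the toy loss; noise shrinks as 1/sqrt(batch_size)."""
    noise = rng.normal(scale=1.0 / np.sqrt(batch_size), size=w.shape)
    return w + noise

def robust_aggregate(vectors):
    """Stand-in robust aggregator: coordinate-wise median of worker momenta."""
    return np.median(vectors, axis=0)

for step in range(100):
    for k in range(n_workers):
        g = local_gradient(w, batch_size)
        if k < n_byz:                      # Byzantine workers send arbitrary vectors
            g = -10.0 * g                  # e.g. a scaled sign-flipping attack
        momenta[k] = beta * momenta[k] + (1 - beta) * g
    agg = robust_aggregate(momenta)
    w = w - lr * agg / (np.linalg.norm(agg) + 1e-12)   # normalized update direction

print("final loss:", 0.5 * np.dot(w, w))
```

Raising batch_size in this sketch shrinks the honest workers' gradient noise, which is exactly the variance-reduction effect the abstract refers to; under a fixed total number of gradient computations, a larger batch_size correspondingly reduces the number of update steps.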


