Robust Distributed Learning Against Both Distributional Shifts and Byzantine Attacks

10/29/2022
by Guanqiang Zhou, et al.

In distributed learning systems, robustness issues may arise from two sources. On one hand, due to distributional shifts between training data and test data, the trained model could exhibit poor out-of-sample performance. On the other hand, a portion of worker nodes might be subject to Byzantine attacks that could invalidate the learning result. Existing works mostly deal with these two issues separately. In this paper, we propose a new algorithm that equips distributed learning with robustness measures against both distributional shifts and Byzantine attacks. Our algorithm is built on recent advances in distributionally robust optimization as well as norm-based screening (NBS), a robust aggregation scheme against Byzantine attacks. We provide convergence proofs for the proposed algorithm in three cases of the learning model being nonconvex, convex, and strongly convex, shedding light on its convergence behavior and its resilience to Byzantine attacks. In particular, we deduce that any algorithm employing NBS (including ours) cannot converge when the fraction of Byzantine nodes is 1/3 or higher, rather than 1/2, which is the common belief in the current literature. The experimental results demonstrate the effectiveness of our algorithm against both robustness issues. To the best of our knowledge, this is the first work to address distributional shifts and Byzantine attacks simultaneously.
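To make the aggregation step concrete, below is a minimal sketch of norm-based screening as commonly described in the Byzantine-robust learning literature: the server sorts the received worker gradients by Euclidean norm, discards the fraction with the largest norms, and averages the rest. The function name, the `byzantine_fraction` parameter, and the toy data are illustrative assumptions; they are not taken from the paper, whose exact algorithm also incorporates the distributionally robust objective.

```python
import numpy as np

def norm_based_screening(gradients, byzantine_fraction):
    """Aggregate worker gradients by discarding those with the largest norms.

    Illustrative sketch of norm-based screening (NBS): gradients whose
    Euclidean norms fall in the top `byzantine_fraction` are screened out,
    and the surviving gradients are averaged.
    """
    gradients = np.asarray(gradients)           # shape: (num_workers, dim)
    num_workers = gradients.shape[0]
    num_discard = int(np.floor(byzantine_fraction * num_workers))
    norms = np.linalg.norm(gradients, axis=1)   # one norm per worker
    keep_idx = np.argsort(norms)[: num_workers - num_discard]
    return gradients[keep_idx].mean(axis=0)     # average of surviving gradients

# Toy example: 10 workers, 3 of them Byzantine (sending scaled-up gradients).
rng = np.random.default_rng(0)
honest = rng.normal(size=(7, 5))
byzantine = 100.0 * rng.normal(size=(3, 5))
aggregate = norm_based_screening(np.vstack([honest, byzantine]), byzantine_fraction=0.3)
print(aggregate)
```

In this sketch the screening threshold is tied to the assumed fraction of Byzantine nodes; the paper's 1/3 bound indicates that no NBS-based scheme can tolerate that fraction reaching 1/3 or more.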
