Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates

03/05/2018
by Dong Yin, et al.

In large-scale distributed learning, security issues have become increasingly important. Particularly in a decentralized environment, some computing units may behave abnormally, or even exhibit Byzantine failures---arbitrary and potentially adversarial behavior. In this paper, we develop distributed learning algorithms that are provably robust against such failures, with a focus on achieving optimal statistical performance. A main result of this work is a sharp analysis of two robust distributed gradient descent algorithms based on median and trimmed mean operations, respectively. We prove statistical error rates for three kinds of population loss functions: strongly convex, non-strongly convex, and smooth non-convex. In particular, these algorithms are shown to achieve order-optimal statistical error rates for strongly convex losses. To achieve better communication efficiency, we further propose a median-based distributed algorithm that is provably robust, and uses only one communication round. For strongly convex quadratic loss, we show that this algorithm achieves the same optimal error rate as the robust distributed gradient descent algorithms.
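
The two aggregation rules at the heart of the analysis are simple to state: stack the workers' gradient vectors and take either the coordinate-wise median or the coordinate-wise trimmed mean as the descent direction. The following is a minimal sketch in Python, assuming a master that collects one gradient per worker each round; the function names, the trimming parameter `beta`, and the `workers` callables are illustrative choices, not the authors' implementation.

```python
import numpy as np

def coordinate_median(grads: np.ndarray) -> np.ndarray:
    """Coordinate-wise median of stacked worker gradients (shape m x d)."""
    return np.median(grads, axis=0)

def coordinate_trimmed_mean(grads: np.ndarray, beta: float) -> np.ndarray:
    """Coordinate-wise beta-trimmed mean: in each coordinate, discard the
    largest and smallest beta-fraction of values, then average the rest."""
    m = grads.shape[0]
    k = int(beta * m)  # number of values trimmed from each end
    assert 0 <= k < m / 2, "trimming must leave at least one value to average"
    return np.sort(grads, axis=0)[k:m - k].mean(axis=0)

def robust_gradient_descent(workers, w0, step_size, num_iters, beta=0.1):
    """Master loop: each round, collect one gradient per worker (Byzantine
    workers may return arbitrary vectors) and descend along the trimmed
    mean; swap in coordinate_median for the median-based variant."""
    w = w0
    for _ in range(num_iters):
        grads = np.stack([worker(w) for worker in workers])
        w = w - step_size * coordinate_trimmed_mean(grads, beta)
    return w

def one_round_median(local_models: np.ndarray) -> np.ndarray:
    """One-shot variant: each worker trains locally and communicates once;
    the master returns the coordinate-wise median of the local models."""
    return np.median(local_models, axis=0)
```

Both aggregators sort per coordinate, so a round costs O(m log m) per dimension at the master. The one-round variant replaces the per-iteration communication of gradient descent with a single exchange of locally trained models, which is the regime in which the paper proves the matching optimal rate for strongly convex quadratic losses.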


