Communication-efficient Byzantine-robust distributed learning with statistical guarantee

02/28/2021
by Xingcai Zhou, et al.

Communication efficiency and robustness are two major issues in modern distributed learning frameworks, since in practice some computing nodes may have limited communication capacity or may behave adversarially. To address both issues simultaneously, this paper develops two communication-efficient and robust distributed learning algorithms for convex problems. Our approach builds on the surrogate likelihood framework combined with coordinate-wise median and trimmed-mean operations. In particular, the proposed algorithms are provably robust against Byzantine failures and achieve optimal statistical rates for strongly convex losses and convex (non-smooth) penalties. For typical statistical models such as generalized linear models, our results show that statistical errors dominate optimization errors after finitely many iterations. Simulated and real-data experiments demonstrate the numerical performance of our algorithms.
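The robust aggregation step at the heart of such algorithms is easy to illustrate. Below is a minimal sketch, not the authors' implementation: coordinate-wise median and trimmed-mean aggregation of worker gradients, with hypothetical names such as coordinate_median, trimmed_mean, and the trimming fraction beta. In the Byzantine setting, these rules replace the plain average so that a bounded fraction of corrupted workers cannot arbitrarily bias the update.

```python
import numpy as np

def coordinate_median(grads):
    """Coordinate-wise median of worker gradients (m x d array)."""
    return np.median(grads, axis=0)

def trimmed_mean(grads, beta):
    """Coordinate-wise beta-trimmed mean: in each coordinate, drop the
    beta-fraction largest and smallest values, then average the rest."""
    m = grads.shape[0]
    k = int(np.floor(beta * m))            # number trimmed at each end
    sorted_grads = np.sort(grads, axis=0)  # sort each coordinate independently
    return sorted_grads[k:m - k].mean(axis=0)

# Toy round: 10 workers, 3 of them Byzantine, 5-dimensional gradients.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(7, 5))
byzantine = rng.normal(loc=100.0, scale=1.0, size=(3, 5))  # adversarial reports
grads = np.vstack([honest, byzantine])

print(np.mean(grads, axis=0))     # plain average: badly corrupted
print(coordinate_median(grads))   # robust: close to the honest mean
print(trimmed_mean(grads, 0.3))   # robust when beta >= Byzantine fraction
```

As the last line suggests, the trimming fraction must be at least as large as the fraction of Byzantine workers for the trimmed mean to be robust, which matches the standard assumption in this line of work.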


Related research

03/05/2018 · Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates
In large-scale distributed learning, security issues have become increas...

03/04/2021 · Variance Reduced Median-of-Means Estimator for Byzantine-Robust Distributed Inference
This paper develops an efficient distributed inference algorithm, which ...

11/21/2019 · Communication-Efficient and Byzantine-Robust Distributed Learning
We develop a communication-efficient distributed learning algorithm that...

07/19/2022 · ReBoot: Distributed statistical learning via refitting Bootstrap samples
In this paper, we study a one-shot distributed learning algorithm via re...

02/12/2023 · Flag Aggregator: Scalable Distributed Training under Failures and Augmented Losses using Convex Optimization
Modern ML applications increasingly rely on complex deep learning models...

06/28/2021 · Robust Distributed Optimization With Randomly Corrupted Gradients
In this paper, we propose a first-order distributed optimization algorit...

10/04/2021 · Solon: Communication-efficient Byzantine-resilient Distributed Training via Redundant Gradients
There has been a growing need to provide Byzantine-resilience in distrib...
