Solon: Communication-efficient Byzantine-resilient Distributed Training via Redundant Gradients

10/04/2021
by Lingjiao Chen, et al.

There has been a growing need to provide Byzantine resilience in distributed model training. Existing robust distributed learning algorithms focus on developing sophisticated robust aggregators at the parameter server, but pay less attention to balancing communication cost against robustness. In this paper, we propose Solon, an algorithmic framework that exploits gradient redundancy to provide communication efficiency and Byzantine robustness simultaneously. Our theoretical analysis shows a fundamental trade-off among computational load, communication cost, and Byzantine robustness. We also develop a concrete algorithm that achieves the optimal trade-off, borrowing ideas from coding theory and sparse recovery. Experiments on various datasets demonstrate that Solon provides significant speedups over existing methods to reach the same accuracy, over 10 times faster than Bulyan and 80 times faster than Draco. We also show that carefully designed Byzantine attacks break Signum and Bulyan, but do not affect the successful convergence of Solon.
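To make the idea of gradient redundancy concrete, the sketch below is a minimal, hypothetical illustration (not Solon's actual encoding or decoder): each data shard is assigned to r workers, and the server recovers each shard's gradient with a coordinate-wise median, which tolerates a minority of Byzantine reports per shard. All names and parameters here are assumptions for illustration only.

```python
import numpy as np

def assign_shards(num_shards, r, num_workers):
    """Cyclic repetition assignment: shard s is computed by r distinct workers."""
    return {s: [(s + j) % num_workers for j in range(r)]
            for s in range(num_shards)}

def decode_majority(reports):
    """Robust decoder for one shard: coordinate-wise median of worker reports."""
    return np.median(np.stack(reports), axis=0)

# Toy run: 4 workers, 4 shards, redundancy 3, worker 0 is Byzantine.
rng = np.random.default_rng(0)
true_grads = {s: rng.normal(size=5) for s in range(4)}
assignment = assign_shards(num_shards=4, r=3, num_workers=4)

recovered = {}
for s, workers in assignment.items():
    reports = []
    for w in workers:
        if w == 0:
            reports.append(np.full(5, 1e6))   # Byzantine worker sends garbage
        else:
            reports.append(true_grads[s])     # honest workers send the true gradient
    recovered[s] = decode_majority(reports)

# With redundancy 3 and at most one adversary per shard, two of the three
# reports agree, so the median recovers the honest gradient exactly.
for s in range(4):
    assert np.allclose(recovered[s], true_grads[s])
```

Note the trade-off the paper formalizes: redundancy r multiplies each worker's computational load, and naive repetition also multiplies communication, which is why Solon additionally draws on coding theory and sparse recovery to keep communication low.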


Related research

07/29/2019 · DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation
To improve the resilience of distributed training to worst-case, or Byza...

03/18/2023 · Byzantine-Resilient Federated Learning at Edge
Both Byzantine resilience and communication efficiency have attracted tr...

10/14/2019 · Election Coding for Distributed Learning: Protecting SignSGD against Byzantine Attacks
Recent advances in large-scale distributed learning algorithms have enab...

12/17/2019 · PIRATE: A Blockchain-based Secure Framework of Distributed Machine Learning in 5G Networks
In the fifth-generation (5G) networks and the beyond, communication late...

05/18/2023 · On the Geometric Convergence of Byzantine-Resilient Distributed Optimization Algorithms
The problem of designing distributed optimization algorithms that are re...

02/28/2021 · Communication-efficient Byzantine-robust distributed learning with statistical guarantee
Communication efficiency and robustness are two major issues in modern d...

10/29/2022 · Robust Distributed Learning Against Both Distributional Shifts and Byzantine Attacks
In distributed learning systems, robustness issues may arise from two so...
