Echo-CGC: A Communication-Efficient Byzantine-tolerant Distributed Machine Learning Algorithm in Single-Hop Radio Network

11/15/2020
by   Qinzi Zhang, et al.

In this paper, we focus on a popular DML framework – the parameter server computation paradigm and iterative learning algorithms that proceed in rounds. We aim to reduce the communication complexity of Byzantine-tolerant DML algorithms in the single-hop radio network. Inspired by the CGC filter developed by Gupta and Vaidya (PODC 2020), we propose a gradient descent-based algorithm, Echo-CGC. Our main novelty is a mechanism that utilizes the broadcast property of the radio network to avoid transmitting raw gradients (full d-dimensional vectors). In the radio network, each worker is able to overhear previous gradients that were transmitted to the parameter server. Roughly speaking, in Echo-CGC, if a worker "agrees" with a combination of prior gradients, it broadcasts an "echo message" instead of its raw local gradient. The echo message contains a vector of coefficients (of size at most n) and the ratio of the magnitudes of two gradients (a single float). In comparison, traditional approaches need to send n local gradients in each round, where each gradient is typically a vector in an ultra-high-dimensional space (d ≫ n). The improvement in communication complexity of our algorithm depends on multiple factors, including the number of nodes, the number of faulty workers in an execution, and the cost function. We analyze the improvement numerically and show that, with a large number of nodes, Echo-CGC reduces communication by about 80% under standard assumptions.
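To make the echo mechanism concrete, here is a minimal sketch of a worker's send decision. It is an illustration only and simplifies the paper's actual agreement condition: instead of checking a combination of prior gradients, this version checks whether the local gradient is nearly parallel to a single overheard gradient, in which case an O(1)-size echo (index plus magnitude ratio) replaces the d-dimensional vector. The function name, threshold, and message format are hypothetical, not taken from the paper.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def make_message(local_grad, overheard, cos_threshold=0.99):
    """Simplified echo decision (illustrative, not the paper's exact rule).

    If the local gradient is nearly parallel to some overheard gradient
    g_i, send a short 'echo' message (index i and a magnitude ratio)
    instead of the full d-dimensional raw gradient.
    """
    for i, g in enumerate(overheard):
        denom = norm(g) * norm(local_grad)
        # Cosine similarity test: "agree" when the angle is near zero.
        if denom > 0 and dot(local_grad, g) / denom >= cos_threshold:
            return ("echo", i, norm(local_grad) / norm(g))
    # Otherwise fall back to broadcasting the raw gradient.
    return ("raw", local_grad)
```

For example, a worker whose local gradient is `[2.0, 4.0]` and which overheard `[1.0, 2.0]` would send `("echo", 0, 2.0)`: the parameter server can reconstruct the gradient by scaling the overheard vector, so only two numbers cross the channel instead of d.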


