Phocas: dimensional Byzantine-resilient stochastic gradient descent

05/23/2018
by Cong Xie, et al.

We propose a novel robust aggregation rule for distributed synchronous Stochastic Gradient Descent (SGD) under a general Byzantine failure model, in which attackers can arbitrarily manipulate the data transferred between the servers and the workers in the parameter server (PS) architecture. We prove the Byzantine resilience of the proposed aggregation rule, and empirical analysis shows that it outperforms current approaches in realistic use cases and Byzantine attack scenarios.
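
For context, below is a minimal NumPy sketch of a Phocas-style coordinate-wise aggregation rule, assuming the two-step construction described in the full text: a coordinate-wise b-trimmed mean, followed by averaging, per coordinate, the m - b worker values closest to that trimmed mean. The function names, array shapes, and the example at the end are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def trimmed_mean(grads, b):
    """Coordinate-wise b-trimmed mean: in each dimension, drop the b largest
    and b smallest worker values, then average the remaining m - 2b."""
    m = grads.shape[0]
    sorted_grads = np.sort(grads, axis=0)        # sort worker values per coordinate
    return sorted_grads[b:m - b].mean(axis=0)

def phocas(grads, b):
    """Phocas-style rule (sketch): per coordinate, average the m - b worker
    values closest to the coordinate-wise trimmed mean."""
    m = grads.shape[0]
    tm = trimmed_mean(grads, b)
    dist = np.abs(grads - tm)                    # distance of each value to the trimmed mean
    nearest = np.argsort(dist, axis=0)[:m - b]   # indices of the m - b closest values per coordinate
    return np.take_along_axis(grads, nearest, axis=0).mean(axis=0)

# Hypothetical example: 10 workers with 5-dimensional gradients, 2 of them Byzantine.
rng = np.random.default_rng(0)
grads = rng.normal(size=(10, 5))                 # rows = workers, columns = coordinates
grads[:2] = 1e3                                  # Byzantine workers send arbitrary values
print(phocas(grads, b=2))                        # stays close to the honest mean
```

The rule operates dimension by dimension, which is what makes it resilient to attacks that corrupt only a few coordinates of otherwise plausible gradients.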


Related research

02/27/2018 · Generalized Byzantine-tolerant SGD
We propose three new robust aggregation rules for distributed synchronou...

02/28/2020 · Distributed Momentum for Byzantine-resilient Learning
Momentum is a variant of gradient descent that has been proposed for its...

08/25/2022 · A simplified convergence theory for Byzantine resilient stochastic gradient descent
In distributed learning, a central server trains a model according to up...

02/22/2018 · The Hidden Vulnerability of Distributed Learning in Byzantium
While machine learning is going through an era of celebrated success, co...

09/10/2019 · Byzantine-Resilient Stochastic Gradient Descent for Distributed Learning: A Lipschitz-Inspired Coordinate-wise Median Approach
In this work, we consider the resilience of distributed algorithms based...

03/08/2017 · Byzantine-Tolerant Machine Learning
The growth of data, the need for scalability and the complexity of model...

10/18/2021 · BEV-SGD: Best Effort Voting SGD for Analog Aggregation Based Federated Learning against Byzantine Attackers
As a promising distributed learning technology, analog aggregation based...
