Fast and Secure Distributed Learning in High Dimension

05/05/2019
by   El Mahdi El Mhamdi, et al.

Modern machine learning is distributed: the work of several machines is typically aggregated by averaging, which is the optimal rule in terms of speed, offering a speedup of n (with respect to using a single machine) when n processes learn together. Distributing data and models, however, poses fundamental vulnerabilities, be they to software bugs, asynchrony, or, worse, to malicious attackers controlling some machines or injecting misleading data into the network. Such behavior is best modeled as Byzantine failures, and averaging does not tolerate even a single Byzantine worker. Krum, the first provably Byzantine-resilient aggregation rule for SGD, uses only one worker's gradient per step, which hampers its speed of convergence, especially in the best case, when none of the workers is actually Byzantine. The idea of using m different workers per step, coined multi-Krum, was mentioned, but without any proof of either its Byzantine resilience or its slowdown. More recently, it was shown that in high-dimensional machine learning, guaranteeing convergence is not a sufficient condition for strong Byzantine resilience. An improvement on Krum, coined Bulyan, was proposed and proven to guarantee stronger resilience. However, Bulyan suffers from the same weakness as Krum: it uses only one worker per step. This adds to the aforementioned open problem and leaves the crucial need for both fast and strong Byzantine resilience unfulfilled. The present paper proposes using Bulyan over multi-Krum (we call the combination multi-Bulyan), for which we provide proofs of strong Byzantine resilience as well as of an m/n slowdown compared to averaging, the fastest (but non-Byzantine-resilient) rule for distributed machine learning. Finally, we prove that multi-Bulyan inherits the O(d) complexity merits (cost linear in the dimension d) of both multi-Krum and Bulyan.
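For concreteness, below is a minimal NumPy sketch of the aggregation rules the abstract discusses, assuming n workers of which at most f are Byzantine. The function names, the parameter choices, and the exact way Bulyan is composed with multi-Krum are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def krum_scores(grads, f):
    # Krum score of worker i: sum of squared distances from grads[i]
    # to its n - f - 2 closest other gradients (smaller is better).
    n = len(grads)
    dists = np.array([[np.sum((g - h) ** 2) for h in grads] for g in grads])
    scores = []
    for i in range(n):
        d = np.sort(np.delete(dists[i], i))   # distances to the other workers
        scores.append(d[: n - f - 2].sum())   # keep the closest n - f - 2
    return np.array(scores)

def multi_krum(grads, f, m):
    # multi-Krum: keep the m gradients with the lowest Krum scores.
    best = np.argsort(krum_scores(grads, f))[:m]
    return [grads[i] for i in best]

def bulyan(grads, f):
    # Bulyan: iteratively apply Krum to select theta = n - 2f gradients,
    # then average, per coordinate, the beta = theta - 2f values closest
    # to the coordinate-wise median (requires n >= 4f + 3).
    grads = list(grads)
    theta = len(grads) - 2 * f
    selected = []
    for _ in range(theta):
        i = int(np.argmin(krum_scores(grads, f)))
        selected.append(grads.pop(i))
    S = np.stack(selected)                    # shape (theta, d)
    beta = theta - 2 * f
    med = np.median(S, axis=0)
    closest = np.argsort(np.abs(S - med), axis=0)[:beta]
    return np.take_along_axis(S, closest, axis=0).mean(axis=0)

def multi_bulyan(grads, f, m):
    # One possible reading of "Bulyan over multi-Krum": pre-select m
    # gradients with multi-Krum, then aggregate them with Bulyan.
    return bulyan(multi_krum(grads, f, m), f)

# Toy usage: 13 honest gradients near 0, 2 Byzantine outliers near 100.
rng = np.random.default_rng(0)
n, f, m, d = 15, 2, 11, 10
grads = [rng.normal(size=d) for _ in range(n - f)]
grads += [rng.normal(loc=100.0, size=d) for _ in range(f)]
print(multi_bulyan(grads, f, m))   # close to the zero vector
```

Note that every step is coordinate-wise or distance-based, so the cost is linear in the dimension d, which is the O(d) property the abstract refers to; the parameters above satisfy the usual constraints (m <= n - f - 2 for multi-Krum, m >= 4f + 3 for the inner Bulyan).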

