MixTailor: Mixed Gradient Aggregation for Robust Learning Against Tailored Attacks

07/16/2022
by   Ali Ramezani-Kebrya, et al.

Implementations of SGD on distributed and multi-GPU systems create new vulnerabilities, which can be identified and exploited by one or more adversarial agents. Recently, it has been shown that well-known Byzantine-resilient gradient aggregation schemes are indeed vulnerable to informed attackers that can tailor their attacks (Fang et al., 2020; Xie et al., 2020b). We introduce MixTailor, a scheme based on randomization of the aggregation strategy that makes it impossible for the attacker to be fully informed. Deterministic schemes can be integrated into MixTailor on the fly without introducing any additional hyperparameters. Randomization decreases the capability of a powerful adversary to tailor its attacks, while the resulting randomized aggregation scheme remains competitive in terms of performance. For both iid and non-iid settings, we establish almost sure convergence guarantees that are both stronger and more general than those available in the literature. Our empirical studies across various datasets, attacks, and settings validate our hypothesis and show that MixTailor successfully defends when well-known Byzantine-tolerant schemes fail.
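The core idea of the abstract, drawing one aggregation rule at random each step so an informed attacker cannot tailor its malicious gradients to a single fixed rule, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pool of aggregators (mean, coordinate-wise median, coordinate-wise trimmed mean) and the function names are chosen here for exposition.

```python
import numpy as np

def mean_agg(grads):
    # Plain average of worker gradients (not Byzantine-robust on its own).
    return np.mean(grads, axis=0)

def median_agg(grads):
    # Coordinate-wise median: robust to a minority of outlier workers.
    return np.median(grads, axis=0)

def trimmed_mean_agg(grads, trim=1):
    # Coordinate-wise trimmed mean: per coordinate, drop the `trim`
    # smallest and largest values, then average the rest.
    s = np.sort(grads, axis=0)
    return np.mean(s[trim:len(grads) - trim], axis=0)

def mixtailor_step(grads, rng):
    # Draw one aggregator uniformly at random for this iteration.
    # An attacker who tailors its update to, say, the mean cannot
    # know in advance which rule will actually be applied.
    aggregators = [median_agg, trimmed_mean_agg, mean_agg]  # illustrative pool
    agg = aggregators[rng.integers(len(aggregators))]
    return agg(np.asarray(grads))
```

New deterministic rules can be appended to the pool on the fly, matching the abstract's claim that no additional hyperparameters are introduced beyond the (optionally uniform) sampling distribution over aggregators.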


Related research

10/18/2021: BEV-SGD: Best Effort Voting SGD for Analog Aggregation Based Federated Learning against Byzantine Attackers
As a promising distributed learning technology, analog aggregation based...

02/27/2018: Generalized Byzantine-tolerant SGD
We propose three new robust aggregation rules for distributed synchronou...

02/22/2018: The Hidden Vulnerability of Distributed Learning in Byzantium
While machine learning is going through an era of celebrated success, co...

12/18/2020: Learning from History for Byzantine Robust Optimization
Byzantine robustness has received significant attention recently given i...

03/10/2019: Fall of Empires: Breaking Byzantine-tolerant SGD by Inner Product Manipulation
Recently, new defense techniques have been developed to tolerate Byzanti...

03/13/2021: Simeon – Secure Federated Machine Learning Through Iterative Filtering
Federated learning enables a global machine learning model to be trained...

02/28/2020: Distributed Momentum for Byzantine-resilient Learning
Momentum is a variant of gradient descent that has been proposed for its...
