Mitigating Sybil Attacks on Differential Privacy based Federated Learning

10/20/2020
by Yupeng Jiang, et al.

In federated learning, machine learning and deep learning models are trained globally across distributed devices. The state-of-the-art privacy-preserving technique in this context is user-level differential privacy. However, such a mechanism is vulnerable to certain model poisoning attacks, notably Sybil attacks, in which a malicious adversary creates multiple fake clients or colludes with compromised devices to directly manipulate model updates. Recent defenses against model poisoning attacks struggle to detect Sybil attacks when differential privacy is used, because it masks clients' model updates with perturbation. In this work, we implement the first Sybil attacks on differential-privacy-based federated learning architectures and show their impact on model convergence. We randomly compromise some clients and manipulate the noise level, reflected by the local privacy budget epsilon of differential privacy, applied to the local model updates of these Sybil clients, so that the global model's convergence rate decreases or the model even diverges. We apply our attacks to two recent aggregation defense mechanisms, Krum and Trimmed Mean. Our evaluation on the MNIST and CIFAR-10 datasets shows that our attacks effectively slow down the convergence of the global models. We then propose a defense that monitors the average loss of all participants in each round to detect convergence anomalies and mitigates our Sybil attacks based on the prediction cost reported by each client. Our empirical study demonstrates that this defense effectively reduces the impact of our Sybil attacks on model convergence.
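To make the attack and defense ideas concrete, here is a minimal illustrative Python sketch, not the authors' implementation. It shows how a Sybil client could report an update perturbed with a much smaller local privacy budget epsilon than honest clients (using a standard Gaussian-mechanism calibration), and a simple server-side check in the spirit of the loss-monitoring defense. All function names, constants (e.g. HONEST_EPSILON, SYBIL_EPSILON), and the anomaly criterion are assumptions for illustration only.

```python
import numpy as np

def dp_perturb(update, clip_norm, epsilon, delta=1e-5):
    """Clip a local update and add Gaussian noise calibrated to (epsilon, delta)-DP.
    Uses the standard Gaussian-mechanism bound; the paper's exact calibration may differ."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + np.random.normal(0.0, sigma, size=update.shape)

# Honest clients use the agreed-upon privacy budget; a Sybil client secretly
# shrinks epsilon, which inflates the injected noise and slows global convergence.
HONEST_EPSILON = 8.0   # hypothetical honest budget
SYBIL_EPSILON = 0.1    # hypothetical adversarial budget

def client_update(local_gradient, is_sybil, clip_norm=1.0):
    eps = SYBIL_EPSILON if is_sybil else HONEST_EPSILON
    return dp_perturb(local_gradient, clip_norm, eps)

# Server-side sketch of the defense described above: track the average loss
# reported by clients each round and flag rounds where it stops improving,
# a possible sign of Sybil-induced convergence slowdown or divergence.
def convergence_anomaly(avg_losses, window=5, tolerance=1e-3):
    if len(avg_losses) < window + 1:
        return False
    return (avg_losses[-1] - avg_losses[-window - 1]) > -tolerance
```

In this sketch, a smaller epsilon directly increases the noise standard deviation sigma, so a few Sybil clients can dominate the aggregate perturbation even when robust aggregation such as Krum or Trimmed Mean is in place; the loss-monitoring check then relies on client-reported prediction costs rather than on the noisy updates themselves.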

