FLGUARD: Secure and Private Federated Learning

01/06/2021
by Thien Duc Nguyen, et al.

Recently, a number of backdoor attacks against Federated Learning (FL) have been proposed. In such attacks, an adversary injects poisoned model updates into the federated model aggregation process with the goal of manipulating the aggregated model to provide false predictions on specific adversary-chosen inputs. Several defenses have been proposed, but none of them can effectively protect the FL process against so-called multi-backdoor attacks, in which the adversary injects multiple different backdoors simultaneously, without severely impacting the benign performance of the aggregated model. To overcome this challenge, we introduce FLGUARD, a poisoning defense framework that defends FL against state-of-the-art backdoor attacks while maintaining the benign performance of the aggregated model. Moreover, FL is also vulnerable to inference attacks, in which a malicious aggregator can infer information about clients' training data from their model updates. To thwart such attacks, we augment FLGUARD with state-of-the-art secure computation techniques that evaluate the FLGUARD algorithm securely. We provide a formal argument for the effectiveness of FLGUARD and extensively evaluate it against known backdoor attacks on several datasets and applications (including image classification, word prediction, and IoT intrusion detection), demonstrating that FLGUARD can entirely remove backdoors with a negligible effect on accuracy. We also show that private FLGUARD achieves practical runtimes.
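As a rough illustration of how a poisoning-aware aggregator differs from plain averaging, the sketch below shows a generic robust-aggregation heuristic: it rejects client updates whose direction disagrees with the majority and clips the remainder before averaging. This is not FLGUARD's actual algorithm (which the full text specifies); the function name `robust_aggregate` and the `clip_norm` parameter are illustrative assumptions.

```python
import math
import statistics

def robust_aggregate(updates, clip_norm=1.0):
    """Illustrative filter-then-clip aggregation (NOT FLGUARD's algorithm).

    updates: list of equal-length lists of floats (client model updates).
    Returns the average of the accepted, norm-clipped updates.
    """
    dim = len(updates[0])

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def norm(a):
        return math.sqrt(dot(a, a))

    # Reference direction: coordinate-wise median of all updates,
    # which a minority of poisoned updates cannot easily shift.
    ref = [statistics.median(u[i] for u in updates) for i in range(dim)]
    ref_n = norm(ref) or 1e-12

    # Cosine similarity of each update to the reference direction.
    sims = [dot(u, ref) / ((norm(u) or 1e-12) * ref_n) for u in updates]

    # Keep only updates at least as aligned as the median client;
    # strongly deviating (potentially poisoned) updates are dropped.
    cut = statistics.median(sims)
    accepted = [u for u, s in zip(updates, sims) if s >= cut]

    # Clip each accepted update to a common L2 bound so no single
    # client dominates the average.
    clipped = []
    for u in accepted:
        n = norm(u)
        scale = min(1.0, clip_norm / n) if n > 0 else 1.0
        clipped.append([x * scale for x in u])

    return [sum(c[i] for c in clipped) / len(clipped) for i in range(dim)]
```

For example, with three benign updates pointing roughly along the first axis and one poisoned update pointing the opposite way, the poisoned update's low cosine similarity excludes it, so the aggregate stays close to the benign direction.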

Related research

01/03/2022
DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection
Federated Learning (FL) allows multiple clients to collaboratively train...

06/06/2023
Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations
Federated Learning (FL) trains machine learning models on data distribut...

08/01/2023
FLAIRS: FPGA-Accelerated Inference-Resistant Secure Federated Learning
Federated Learning (FL) has become very popular since it enables clients...

01/23/2023
BayBFed: Bayesian Backdoor Defense for Federated Learning
Federated learning (FL) allows participants to jointly train a machine l...

02/10/2021
Meta Federated Learning
Due to its distributed methodology alongside its privacy-preserving feat...

11/04/2020
BaFFLe: Backdoor detection via Feedback-based Federated Learning
Recent studies have shown that federated learning (FL) is vulnerable to ...

04/20/2023
Get Rid Of Your Trail: Remotely Erasing Backdoors in Federated Learning
Federated Learning (FL) enables collaborative deep learning training acr...
