
FLGUARD: Secure and Private Federated Learning

01/06/2021
by Thien Duc Nguyen, et al.

Recently, a number of backdoor attacks against Federated Learning (FL) have been proposed. In such attacks, an adversary injects poisoned model updates into the federated model aggregation process with the goal of manipulating the aggregated model to produce false predictions on specific adversary-chosen inputs. Several defenses have been proposed, but none of them can effectively protect the FL process against so-called multi-backdoor attacks, in which the adversary injects multiple different backdoors simultaneously, without severely impacting the benign performance of the aggregated model. To overcome this challenge, we introduce FLGUARD, a poisoning defense framework that defends FL against state-of-the-art backdoor attacks while maintaining the benign performance of the aggregated model. Moreover, FL is also vulnerable to inference attacks, in which a malicious aggregator can infer information about clients' training data from their model updates. To thwart such attacks, we augment FLGUARD with state-of-the-art secure computation techniques that evaluate the FLGUARD algorithm securely. We provide formal arguments for the effectiveness of FLGUARD and extensively evaluate it against known backdoor attacks on several datasets and applications (including image classification, word prediction, and IoT intrusion detection), demonstrating that FLGUARD can entirely remove backdoors with a negligible effect on accuracy. We also show that private FLGUARD achieves practical runtimes.
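To make the setting concrete, the sketch below illustrates the general idea of robust federated aggregation: the aggregator averages client model updates but first discards updates whose direction deviates strongly from the majority. This is only a minimal, hypothetical illustration of the attack surface described in the abstract; it is not the FLGUARD algorithm from the paper, and the function and parameter names (filter_and_aggregate, cos_threshold) are assumptions for illustration.

    # Minimal sketch of defended federated aggregation (NOT the FLGUARD algorithm).
    # A poisoned update that points away from the benign majority is filtered out
    # before averaging. Names and threshold are hypothetical.
    import numpy as np

    def filter_and_aggregate(global_model, client_models, cos_threshold=0.0):
        """Average client updates, dropping those whose direction deviates
        strongly from the coordinate-wise median update."""
        updates = np.stack([m - global_model for m in client_models])
        median_update = np.median(updates, axis=0)

        def cos(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        # Keep only updates roughly aligned with the median direction.
        kept = [u for u in updates if cos(u, median_update) > cos_threshold]
        if not kept:  # fall back to the unmodified global model
            return global_model
        return global_model + np.mean(kept, axis=0)

    # Toy usage: 9 benign clients plus one poisoned update pointing the other way.
    rng = np.random.default_rng(0)
    g = np.zeros(5)
    benign = [rng.normal(0.1, 0.01, 5) for _ in range(9)]
    poisoned = [-np.ones(5)]
    print(filter_and_aggregate(g, benign + poisoned))

In a private variant such as the one the abstract describes, this filtering and averaging would be evaluated under secure computation so that the aggregator never sees individual client updates in the clear.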

01/03/2022

DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection

Federated Learning (FL) allows multiple clients to collaboratively train...
02/10/2021

Meta Federated Learning

Due to its distributed methodology alongside its privacy-preserving feat...
01/23/2023

BayBFed: Bayesian Backdoor Defense for Federated Learning

Federated learning (FL) allows participants to jointly train a machine l...
11/04/2020

BaFFLe: Backdoor detection via Feedback-based Federated Learning

Recent studies have shown that federated learning (FL) is vulnerable to ...
12/04/2022

Security Analysis of SplitFed Learning

Split Learning (SL) and Federated Learning (FL) are two prominent distri...
05/26/2022

PerDoor: Persistent Non-Uniform Backdoors in Federated Learning using Adversarial Perturbations

Federated Learning (FL) enables numerous participants to train deep lear...
08/23/2021

Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Federated Learning

While recent works have indicated that federated learning (FL) is vulner...