Provably Secure Federated Learning against Malicious Clients

02/03/2021
by Xiaoyu Cao, et al.

Federated learning enables clients to collaboratively learn a shared global model without sharing their local training data with a cloud server. However, malicious clients can corrupt the global model so that it predicts incorrect labels for testing examples. Existing defenses against malicious clients leverage Byzantine-robust federated learning methods. However, these methods cannot provably guarantee that the predicted label for a testing example is not affected by malicious clients. We bridge this gap via ensemble federated learning. In particular, given any base federated learning algorithm, we use the algorithm to learn multiple global models, each of which is learnt using a randomly selected subset of clients. When predicting the label of a testing example, we take a majority vote among the global models. We show that our ensemble federated learning with any base federated learning algorithm is provably secure against malicious clients. Specifically, the label predicted by our ensemble global model for a testing example is provably not affected by a bounded number of malicious clients. Moreover, we show that our derived bound is tight. We evaluate our method on MNIST and Human Activity Recognition datasets. For instance, our method can achieve a certified accuracy of 88% on MNIST when 20 out of 1,000 clients are malicious.
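The following is a minimal sketch of the ensemble idea described above, not the authors' implementation. It assumes a placeholder base algorithm (here a simple pooled logistic regression standing in for any federated learning method such as FedAvg), clients represented as (X, y) arrays, and non-negative integer labels; all function and variable names are illustrative.

```python
# Sketch of ensemble federated learning with majority-vote prediction.
# Assumptions: `fed_avg` is a stand-in for any base federated learning
# algorithm; clients are (features, labels) tuples; labels are small
# non-negative integers.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def fed_avg(client_subset):
    """Placeholder for the base federated learning algorithm.

    Here we simply pool the selected clients' data and fit a logistic
    regression, standing in for a global model learned federatedly.
    """
    X = np.vstack([X_c for X_c, _ in client_subset])
    y = np.concatenate([y_c for _, y_c in client_subset])
    return LogisticRegression(max_iter=200).fit(X, y)

def train_ensemble(clients, num_models, clients_per_model):
    """Learn `num_models` global models, each from a random client subset."""
    models = []
    for _ in range(num_models):
        idx = rng.choice(len(clients), size=clients_per_model, replace=False)
        models.append(fed_avg([clients[i] for i in idx]))
    return models

def predict_majority(models, X_test):
    """Predict each test example's label by majority vote over the models."""
    votes = np.stack([m.predict(X_test) for m in models]).astype(int)
    return np.array([np.bincount(votes[:, j]).argmax()
                     for j in range(votes.shape[1])])

# Illustrative usage with synthetic per-client data.
clients = []
for _ in range(1000):
    X_c = rng.normal(size=(20, 5))
    y_c = (X_c[:, 0] > 0).astype(int)
    clients.append((X_c, y_c))

models = train_ensemble(clients, num_models=10, clients_per_model=50)
print(predict_majority(models, rng.normal(size=(5, 5))))
```

The intuition behind the provable guarantee is that a bounded number of malicious clients can only appear in, and thus corrupt, a limited fraction of the randomly sampled subsets; when the vote margin of the majority label exceeds what those corrupted models could flip, the prediction is certifiably unaffected.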


