FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information

10/20/2022
by Xiaoyu Cao, et al.

Federated learning is vulnerable to poisoning attacks, in which malicious clients poison the global model by sending malicious model updates to the server. Existing defenses focus on preventing a small number of malicious clients from poisoning the global model via robust federated learning methods, and on detecting malicious clients when there are a large number of them. However, how to recover the global model from a poisoning attack after the malicious clients are detected remains an open challenge. A naive solution is to remove the detected malicious clients and train a new global model from scratch, which incurs a large cost that may be intolerable for resource-constrained clients such as smartphones and IoT devices. In this work, we propose FedRecover, which can recover an accurate global model from poisoning attacks at a small cost to the clients. Our key idea is that the server estimates the clients' model updates instead of asking the clients to compute and communicate them during the recovery process. In particular, while training the poisoned global model, the server stores the global models and the clients' model updates in each round. During the recovery process, the server estimates a client's model update in each round from this stored historical information. Moreover, we further optimize FedRecover to recover a more accurate global model using warm-up, periodic correction, abnormality fixing, and final tuning strategies, in which the server asks the clients to compute and communicate their exact model updates. Theoretically, we show that the global model recovered by FedRecover is close to, or the same as, the one recovered by training from scratch under some assumptions. Empirically, our evaluation on four datasets, three federated learning methods, and both untargeted and targeted poisoning attacks (e.g., backdoor attacks) shows that FedRecover is both accurate and efficient.
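To make the recovery loop concrete, below is a minimal NumPy sketch of the control flow the abstract describes: warm-up, estimated rounds, periodic correction, abnormality fixing, and final tuning. It is an illustration under stated assumptions, not the authors' implementation: the interfaces (history, exact_update) and hyperparameter names (Tw, Tc, Tf, tau) are invented for the sketch, a dense BFGS Hessian approximation stands in for the memory-limited L-BFGS variant the paper uses, and the placement of final tuning is simplified. The estimator rests on the first-order relation g(w_hat) ~= g(w) + H (w_hat - w), built from the stored original global model w and the stored update g(w).

    import numpy as np

    def bfgs_update(B, s, y):
        # One dense BFGS update of the Hessian approximation B so that
        # B @ s ~= y, where s is a model difference and y the matching
        # update difference. (The paper uses memory-limited L-BFGS; a
        # dense B keeps this sketch short.)
        Bs = B @ s
        sBs = float(s @ Bs)
        ys = float(y @ s)
        if sBs > 1e-12:
            B = B - np.outer(Bs, Bs) / sBs
        if ys > 1e-12:
            B = B + np.outer(y, y) / ys
        return B

    def recover(w0, history, exact_update, benign, dim,
                Tw=2, Tc=5, Tf=2, tau=10.0, lr=1.0):
        # Hypothetical interfaces, for illustration only:
        # w0           -- initial global model, shared with the poisoned run
        # history      -- per round: (stored global model, {client: stored update})
        # exact_update -- callback asking a client for its exact update at a model
        # benign       -- ids of the remaining, non-detected clients
        # Tw, Tc, Tf   -- warm-up rounds, correction period, final-tuning rounds
        # tau          -- abnormality threshold on estimated update components
        w_hat = w0.copy()
        B = {i: np.eye(dim) for i in benign}  # per-client Hessian approximations
        n_rounds = len(history)
        for t, (w_t, stored) in enumerate(history):
            exact_round = t < Tw or (t + 1) % Tc == 0 or t >= n_rounds - Tf
            updates = []
            for i in benign:
                if exact_round:
                    # Warm-up / periodic correction / final tuning: the client
                    # computes and communicates its exact model update.
                    g = exact_update(i, w_hat)
                    B[i] = bfgs_update(B[i], w_hat - w_t, g - stored[i])
                else:
                    # Estimated round: stored update plus a Hessian-vector
                    # correction for the drift of the recovered model.
                    g = stored[i] + B[i] @ (w_hat - w_t)
                    if np.abs(g).max() > tau:
                        # Abnormality fixing: a suspiciously large component
                        # triggers a fall-back to the exact update.
                        g = exact_update(i, w_hat)
                        B[i] = bfgs_update(B[i], w_hat - w_t, g - stored[i])
                updates.append(g)
            w_hat -= lr * np.mean(updates, axis=0)  # FedAvg-style aggregation
        return w_hat

The client-side saving comes from the estimated rounds: only the warm-up, periodic-correction, abnormality, and final-tuning rounds ask clients to recompute updates; every other round is served from the history the server already stores.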


