SPFL: A Self-purified Federated Learning Method Against Poisoning Attacks

09/19/2023
by Zizhen Liu, et al.

While federated learning (FL) is attractive for pooling distributed training data in a privacy-preserving manner, the unverifiable credibility of participating clients and the non-inspectability of their local data pose new security threats. Among these, poisoning attacks are particularly rampant and hard to defend against without compromising the privacy, performance, or other desirable properties of FL. To tackle this problem, we propose a self-purified FL (SPFL) method that enables benign clients to exploit trusted historical features of the locally purified model to supervise the training of the aggregated model in each iteration. The purification is performed by an attention-guided self-knowledge distillation, in which the teacher and student models are optimized locally for the task loss, distillation loss, and attention-based loss simultaneously. SPFL imposes no restriction on the communication protocol or the aggregator at the server, so it can work in tandem with any existing secure aggregation algorithm or protocol for augmented security and privacy guarantees. We experimentally demonstrate that SPFL outperforms state-of-the-art FL defenses against various poisoning attacks. The attack success rate of an SPFL-trained model is at most 3% above that of a clean model, even when the poisoning attack is launched in every iteration and all but one of the clients in the system are malicious. Meanwhile, SPFL improves model quality on normal inputs compared to FedAvg, both under attack and in the absence of an attack.
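As a rough illustration of the local purification objective described above, the sketch below combines the three named losses (task, distillation, attention-based) in PyTorch. The attention-map definition, temperature T, and weighting coefficients alpha and beta are illustrative assumptions, not values taken from the paper; here the teacher plays the role of the trusted local model and the student the freshly aggregated global model.

```python
# Minimal sketch of an attention-guided self-distillation objective.
# Hyperparameters (alpha, beta, T) and the attention-map form are
# assumptions for illustration, not the paper's exact settings.
import torch
import torch.nn.functional as F

def attention_map(feat: torch.Tensor) -> torch.Tensor:
    """Collapse a conv feature map (N, C, H, W) into a normalized
    spatial attention map (N, H*W) by summing squared activations
    over the channel dimension."""
    att = feat.pow(2).sum(dim=1).flatten(1)  # (N, H*W)
    return F.normalize(att, p=2, dim=1)

def spfl_local_loss(student_logits, teacher_logits,
                    student_feats, teacher_feats,
                    targets, alpha=0.5, beta=1e3, T=4.0):
    """Combined local objective: task + distillation + attention loss."""
    # 1. Task loss on the ground-truth labels.
    task = F.cross_entropy(student_logits, targets)

    # 2. Distillation loss: soften both outputs with temperature T and
    #    pull the student toward the trusted teacher's predictions.
    distill = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    # 3. Attention-based loss: align intermediate spatial attention maps
    #    between corresponding layers of student and teacher.
    att = sum(
        F.mse_loss(attention_map(s), attention_map(t))
        for s, t in zip(student_feats, teacher_feats)
    )

    return task + alpha * distill + beta * att
```

In this reading, the trusted teacher's attention maps act as the "historical features" that supervise (purify) the aggregated student model each round.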

