Secure Federated Learning against Model Poisoning Attacks via Client Filtering

03/31/2023
by Duygu Nur Yaldiz, et al.

Given the distributed nature of federated learning (FL), detecting and defending against backdoor attacks in FL systems is challenging. In this paper, we observe that the cosine similarity between the last layer's weights of the global model and of each local update can serve as an effective indicator of malicious model updates. We therefore propose CosDefense, a cosine-similarity-based attacker detection algorithm. Under CosDefense, the server computes the cosine similarity score between the last layer's weights of the global model and each client update, labels as malicious those clients whose scores are much higher than the average, and filters them out of the model aggregation in each round. Compared to existing defense schemes, CosDefense requires no information beyond the received model updates and is compatible with client sampling. Experiments on three real-world datasets demonstrate that CosDefense provides robust performance under state-of-the-art FL poisoning attacks.
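The filtering step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `cos_defense`, the z-score-style threshold `z`, and the choice of standard deviation as the "much higher than average" cutoff are all assumptions made for the sake of the example.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two flattened weight vectors;
    # the small epsilon guards against division by zero.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def cos_defense(global_last_layer, client_last_layers, z=1.0):
    """Illustrative CosDefense-style filter (hypothetical parameters).

    Scores each client's last-layer update against the global model's
    last-layer weights, then drops clients whose score sits far above
    the round average (here: more than z standard deviations).
    Returns the indices of clients kept for aggregation and all scores.
    """
    scores = np.array([cosine(global_last_layer, w)
                       for w in client_last_layers])
    cutoff = scores.mean() + z * scores.std()
    benign = [i for i, s in enumerate(scores) if s <= cutoff]
    return benign, scores
```

A client that submits an update suspiciously aligned with the global model's last layer would receive a score near 1.0 and, if the benign clients cluster around a lower score, be excluded from that round's aggregation.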


Related research:

- 07/19/2022: FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients
- 05/22/2022: Robust Quantity-Aware Aggregation for Federated Learning
- 01/23/2023: BayBFed: Bayesian Backdoor Defense for Federated Learning
- 03/29/2023: A Byzantine-Resilient Aggregation Scheme for Federated Learning via Matrix Autoregression on Client Updates
- 06/24/2022: Data Leakage in Federated Averaging
- 11/27/2022: Navigation as the Attacker Wishes? Towards Building Byzantine-Robust Embodied Agents under Federated Learning
- 10/31/2021: Efficient passive membership inference attack in federated learning
