Robust Quantity-Aware Aggregation for Federated Learning

05/22/2022
by   Jingwei Yi, et al.

Federated learning (FL) enables multiple clients to collaboratively train models without sharing their local data, and has become an important privacy-preserving machine learning framework. However, classical FL faces serious security and robustness problems, e.g., malicious clients can poison model updates and at the same time claim large data quantities to amplify the impact of their model updates in the model aggregation. Existing defense methods for FL, while all handling malicious model updates, either treat all claimed quantities as benign or simply ignore/truncate the quantities of all clients. The former is vulnerable to quantity-enhanced attacks, while the latter leads to sub-optimal performance, since the local datasets on different clients usually differ significantly in size. In this paper, we propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, which performs aggregation with awareness of local data quantities while defending against quantity-enhanced attacks. More specifically, we propose a method that filters malicious clients by jointly considering the uploaded model updates and the claimed data quantities of different clients, and then performs quantity-aware weighted averaging on the model updates from the remaining clients. Moreover, since the number of malicious clients participating in the federated learning may change dynamically across rounds, we also propose a malicious client number estimator that predicts how many suspicious clients should be filtered in each round. Experiments on four public datasets demonstrate the effectiveness of our FedRA method in defending FL against quantity-enhanced attacks.
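To make the idea of quantity-aware robust aggregation concrete, here is a minimal sketch in Python. It is not the paper's exact FedRA algorithm: the suspicion score (distance from the coordinate-wise median, scaled by the claimed quantity) and the fixed `num_malicious` filter count are illustrative assumptions standing in for the paper's joint filtering rule and its malicious client number estimator.

```python
import numpy as np

def quantity_aware_aggregate(updates, quantities, num_malicious):
    """Sketch of quantity-aware robust aggregation (illustrative, not the
    exact FedRA rule): score each client by how far its update lies from
    the coordinate-wise median, amplified by its claimed data quantity,
    drop the most suspicious clients, and average the rest weighted by
    their data quantities."""
    updates = np.asarray(updates, dtype=float)        # shape: (n_clients, dim)
    quantities = np.asarray(quantities, dtype=float)  # claimed data quantities

    # Suspicion score: deviation from the robust center, scaled by quantity,
    # so a poisoned update backed by an inflated quantity scores highest.
    median = np.median(updates, axis=0)
    scores = np.linalg.norm(updates - median, axis=1) * quantities

    # Filter the num_malicious highest-scoring clients.
    keep = np.argsort(scores)[: len(updates) - num_malicious]

    # Quantity-aware weighted average over the remaining clients.
    weights = quantities[keep] / quantities[keep].sum()
    return (updates[keep] * weights[:, None]).sum(axis=0)
```

For example, three benign clients near the update `[1, 1]` and one attacker at `[10, 10]` claiming a quantity of 100 would, under naive weighted averaging, be dominated by the attacker; the sketch above filters the attacker and returns an aggregate close to `[1, 1]`.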


Related research

- Shielding Federated Learning: Robust Aggregation with Adaptive Client Selection (04/28/2022)
- A Byzantine-Resilient Aggregation Scheme for Federated Learning via Matrix Autoregression on Client Updates (03/29/2023)
- LoMar: A Local Defense Against Poisoning Attack on Federated Learning (01/08/2022)
- Byzantine-robust Federated Learning through Spatial-temporal Analysis of Local Model Updates (07/03/2021)
- Secure Federated Learning against Model Poisoning Attacks via Client Filtering (03/31/2023)
- UA-FedRec: Untargeted Attack on Federated News Recommendation (02/14/2022)
- XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning (12/28/2022)
