Recursive Euclidean Distance Based Robust Aggregation Technique For Federated Learning

03/20/2023
by Charuka Herath, et al.

Federated learning has gained popularity as a solution to the data-availability and privacy challenges of machine learning. However, the aggregation of local model updates into a global model is susceptible to malicious attacks such as backdoor poisoning, label flipping, and membership inference: malicious users sabotage the collaborative learning process by training their local models on poisoned data. In this paper, we propose a novel robust aggregation approach based on recursive Euclidean distance calculation. Our approach measures each local model's distance from the previous global model and assigns aggregation weights accordingly: local models far from the global model receive smaller weights, minimizing the effect of data poisoning during aggregation. Our experiments demonstrate that the proposed algorithm outperforms state-of-the-art algorithms by at least 5% in accuracy while reducing time complexity by as much as 55%. This contribution is significant because it addresses the critical issue of malicious attacks in federated learning while improving the accuracy of the global model.
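The abstract describes weighting each client update by its Euclidean distance from the previous global model. The paper's exact recursive weighting scheme is not given here, so the sketch below uses a simple inverse-distance weighting as a stand-in assumption; the function name `robust_aggregate` and all parameters are illustrative, not from the paper.

```python
import numpy as np

def robust_aggregate(local_models, prev_global, eps=1e-12):
    """Distance-weighted aggregation sketch (NOT the paper's exact rule).

    local_models: list of 1-D client parameter vectors
    prev_global:  previous round's global parameter vector
    Returns a weighted average in which clients far from the previous
    global model (e.g. poisoned updates) contribute less.
    """
    # Euclidean distance of each local model from the previous global model.
    dists = np.array([np.linalg.norm(m - prev_global) for m in local_models])
    # Inverse-distance weighting: larger distance -> smaller weight.
    inv = 1.0 / (dists + eps)
    weights = inv / inv.sum()
    # Weighted average of the local models forms the new global model.
    return sum(w * m for w, m in zip(weights, local_models))
```

For example, with two benign updates near the previous global model and one large outlier, the outlier's weight is driven close to zero, so the aggregate stays near the benign updates, whereas a plain mean would be pulled far away.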

Related research

- Secure Aggregation Is Not All You Need: Mitigating Privacy Attacks with Noise Tolerance in Federated Learning (11/10/2022)
- Securing Federated Sensitive Topic Classification against Poisoning Attacks (01/31/2022)
- An Exploratory Analysis on Users' Contributions in Federated Learning (11/13/2020)
- FedSA: Accelerating Intrusion Detection in Collaborative Environments with Federated Simulated Annealing (05/23/2022)
- DISBELIEVE: Distance Between Client Models is Very Essential for Effective Local Model Poisoning Attacks (08/14/2023)
- Dim-Krum: Backdoor-Resistant Federated Learning for NLP with Dimension-wise Krum-Based Aggregation (10/13/2022)
- Attacks on Robust Distributed Learning Schemes via Sensitivity Curve Maximization (04/27/2023)
