FedCC: Robust Federated Learning against Model Poisoning Attacks

12/05/2022
by Hyejun Jeong, et al.

Federated Learning has emerged to address rising concerns about privacy breaches when using Machine or Deep Learning models. This paradigm enables training deep learning models in a distributed manner, enhancing privacy preservation. However, the server's blindness to local datasets makes it vulnerable to model poisoning attacks and data heterogeneity, which degrade global model performance. Numerous works have proposed robust aggregation algorithms and defensive mechanisms, but each approach typically addresses an individual attack or issue in isolation. FedCC, the proposed method, provides robust aggregation by comparing the Centered Kernel Alignment of penultimate-layer representations. Experimental results demonstrate that FedCC mitigates both untargeted model poisoning and targeted backdoor attacks, and that it remains effective in non-Independently and Identically Distributed (non-IID) data environments. Against untargeted attacks, FedCC recovers global model accuracy the most; against targeted backdoor attacks, it nullifies attack confidence while preserving test accuracy. In most experiments, FedCC outperforms the baseline methods.
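As a point of reference for the core measure the abstract names, below is a minimal sketch of linear Centered Kernel Alignment (CKA) between two representation matrices, as commonly defined in the literature. This is an illustration of the similarity metric only, not the paper's full aggregation algorithm; the function name and matrix shapes are assumptions for the example.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices.

    X, Y: arrays of shape (n_examples, n_features), e.g. penultimate-layer
    activations of two client models on the same batch of inputs.
    Returns a similarity score in [0, 1].
    """
    # Center each feature dimension (removes translation effects)
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return hsic / (norm_x * norm_y)
```

Linear CKA is invariant to orthogonal transformations and isotropic scaling of the representations, which makes it a natural candidate for comparing layer activations across independently trained client models: `linear_cka(X, X)` is 1.0, and unrelated representations score close to 0.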


Related research:

07/18/2021 · RobustFed: A Truth Inference Approach for Robust Federated Learning
Federated learning is a prominent framework that enables clients (e.g., ...

04/01/2022 · Federated Learning Framework Coping with Hierarchical Heterogeneity in Cooperative ITS
In this paper, we introduce a federated learning framework coping with H...

05/02/2022 · Performance Weighting for Robust Federated Learning Against Corrupted Sources
Federated Learning has emerged as a dominant computational paradigm for ...

10/28/2022 · Local Model Reconstruction Attacks in Federated Learning and their Uses
In this paper, we initiate the study of local model reconstruction attac...

11/21/2022 · SPIN: Simulated Poisoning and Inversion Network for Federated Learning-Based 6G Vehicular Networks
The applications concerning vehicular networks benefit from the vision o...

06/28/2022 · Secure Forward Aggregation for Vertical Federated Neural Networks
Vertical federated learning (VFL) is attracting much attention because i...

01/17/2023 · Surgical Aggregation: A Federated Learning Framework for Harmonizing Distributed Datasets with Diverse Tasks
AI-assisted characterization of chest x-rays (CXR) has the potential to ...
