Anomaly Localization in Model Gradients Under Backdoor Attacks Against Federated Learning

11/29/2021
by Zeki Bilgin, et al.

Inserting a backdoor into the joint model in federated learning (FL) is a recent threat that has raised serious concerns. Existing studies mostly focus on developing effective countermeasures against this threat, assuming that backdoored local models, if present, reveal themselves through anomalies in their gradients. However, this assumption needs refinement: it must be specified which gradients are likely to indicate an anomaly, to what extent, and under which conditions. This is an important question given that neural network models usually have a huge parameter space consisting of a large number of weights. In this study, we perform a deep gradient-level analysis of the expected variations in model gradients under several backdoor attack scenarios against FL. Our main novel finding is that backdoor-induced anomalies in local model updates (weights or gradients) appear in the final-layer bias weights of the malicious local models. We support and validate our findings through both theoretical and experimental analysis in various FL settings. We also investigate how the number of malicious clients, the learning rate, and the malicious data rate affect the observed anomaly. Our implementation is publicly available at https://github.com/ArcelikAcikKaynak/Federated_Learning.git.
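The paper's central claim, that backdoor-induced anomalies concentrate in the final-layer bias updates, suggests a simple screening heuristic. The following is a minimal sketch rather than the authors' implementation: it assumes each client update arrives as a dict of weight deltas (local minus global model), that the final-layer bias is stored under an illustrative key "fc_out.bias", and it flags clients whose bias delta deviates from the population by a robust z-score.

```python
import numpy as np

def final_bias_anomaly_scores(client_updates, final_bias_key="fc_out.bias"):
    """client_updates: list of dicts mapping parameter name -> np.ndarray
    of weight deltas (local model minus global model), one dict per client."""
    # Stack every client's final-layer bias delta: shape (num_clients, num_classes).
    deltas = np.stack([u[final_bias_key] for u in client_updates])
    # Robust per-class center and spread (median / MAD) so a handful of
    # malicious clients cannot shift the statistics they are judged against.
    med = np.median(deltas, axis=0)
    mad = np.median(np.abs(deltas - med), axis=0) + 1e-12
    # Score each client by its largest robust z-score across bias entries:
    # a backdoored client tends to over-push the bias of its target class.
    return np.max(np.abs(deltas - med) / (1.4826 * mad), axis=1)

# Toy usage: five benign clients plus one that inflates class 3's bias.
rng = np.random.default_rng(0)
updates = [{"fc_out.bias": rng.normal(0, 0.01, 10)} for _ in range(5)]
bad = {"fc_out.bias": rng.normal(0, 0.01, 10)}
bad["fc_out.bias"][3] += 0.5  # simulated backdoor pushing the target class
updates.append(bad)
print(final_bias_anomaly_scores(updates))  # the last score stands far above the rest
```

Under the paper's finding, restricting the check to the final-layer bias keeps the inspected dimension small (one value per class) even for models with millions of weights, which is what makes gradient-level screening tractable.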

