Close the Gate: Detecting Backdoored Models in Federated Learning based on Client-Side Deep Layer Output Analysis

10/14/2022
by Phillip Rieger, et al.

Federated Learning (FL) is a scheme for collaboratively training Deep Neural Networks (DNNs) on data from multiple clients. Instead of sharing their data, the clients train the model locally, which improves privacy. However, so-called targeted poisoning attacks have recently been proposed that allow individual clients to inject a backdoor into the trained model. Existing defenses against these backdoor attacks either rely on techniques like Differential Privacy to mitigate the backdoor, or analyze the weights of the individual models and apply outlier detection methods, which restricts these defenses to certain data distributions. However, adding noise to the models' parameters or excluding benign outliers can also reduce the accuracy of the collaboratively trained model. Additionally, allowing the server to inspect the clients' models creates a privacy risk, given existing knowledge-extraction methods. We propose CrowdGuard, a model-filtering defense that mitigates backdoor attacks by leveraging the clients' data to analyze the individual models before aggregation. To prevent data leaks, the server sends the individual models to secure enclaves running in client-located Trusted Execution Environments (TEEs). To effectively distinguish benign from poisoned models, even when the data of different clients are not independently and identically distributed (non-IID), we introduce a novel metric called HLBIM that analyzes the outputs of the DNN's hidden layers. We show that the applied significance-based detection algorithm can effectively detect poisoned models, even in non-IID scenarios.
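For context, the aggregation step that such a filtering defense precedes is typically a weighted parameter average in the style of FedAvg. The following is a minimal sketch of that averaging step only, assuming flattened NumPy weight vectors; the function name and the toy numbers are illustrative and not taken from the paper.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style sketch).

    client_weights: list of 1-D NumPy arrays, one flattened model per client.
    client_sizes:   number of training samples each client holds.
    """
    sizes = np.asarray(client_sizes, dtype=np.float64)
    coeffs = sizes / sizes.sum()           # weight clients by data volume
    stacked = np.stack(client_weights)     # shape: (num_clients, num_params)
    return (coeffs[:, None] * stacked).sum(axis=0)

# Toy usage: three clients, four parameters each; the third update is an
# exaggerated outlier of the kind a poisoned model might contribute.
updates = [np.array([1.0, 0.0, 2.0, 1.0]),
           np.array([0.8, 0.2, 1.9, 1.1]),
           np.array([5.0, 4.0, 9.0, 7.0])]
print(fedavg(updates, [100, 120, 110]))
```

Because the average blends every submitted model, a single poisoned update influences the global model, which is why CrowdGuard filters models before this step rather than after it.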
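The abstract does not spell out the HLBIM formula, so the sketch below only illustrates the underlying mechanism it builds on: capturing a hidden (deep) layer's outputs on a client's own data and comparing a received model against a local reference. It uses PyTorch forward hooks; the plain L2 distance, the model architectures, and the layer choice are illustrative stand-ins, not the paper's metric.

```python
import torch
import torch.nn as nn

def deep_layer_outputs(model: nn.Module, layer: nn.Module,
                       x: torch.Tensor) -> torch.Tensor:
    """Capture the output of one hidden layer during a forward pass."""
    captured = []
    handle = layer.register_forward_hook(
        lambda mod, inp, out: captured.append(out.detach()))
    with torch.no_grad():
        model(x)
    handle.remove()
    return captured[0]

# Toy models and local client data (all dimensions are illustrative).
torch.manual_seed(0)
local_model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
received_model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
x = torch.randn(32, 8)  # a batch of the client's own (private) data

# Compare the deep-layer activations of a received model against the
# client's local reference; a large deviation would flag a suspect model.
# (The L2 distance here is a placeholder, not the paper's HLBIM metric.)
a = deep_layer_outputs(local_model, local_model[1], x)
b = deep_layer_outputs(received_model, received_model[1], x)
print(torch.linalg.norm(a - b).item())
```

Running this comparison on the client side, inside a TEE, is what lets the defense use private data for detection without exposing that data, or the inspected models, to the server.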

Related research

07/02/2023
FedDefender: Backdoor Attack Defense in Federated Learning
Federated Learning (FL) is a privacy-preserving distributed machine lear...

08/03/2022
A New Implementation of Federated Learning for Privacy and Security Enhancement
Motivated by the ever-increasing concerns on personal data privacy and t...

10/20/2020
Mitigating Sybil Attacks on Differential Privacy based Federated Learning
In federated learning, machine learning and deep learning models are tra...

10/29/2019
Shielding Collaborative Learning: Mitigating Poisoning Attacks through Client-Side Detection
Collaborative learning allows multiple clients to train a joint model wi...

05/03/2022
Privacy Amplification via Random Participation in Federated Learning
Running a randomized algorithm on a subsampled dataset instead of the en...

11/28/2019
Free-riders in Federated Learning: Attacks and Defenses
Federated learning is a recently proposed paradigm that enables multiple...

10/12/2021
Privacy-Preserving Phishing Email Detection Based on Federated Learning and LSTM
Phishing emails that appear legitimate lure people into clicking on the ...
