Backdoor Defense in Federated Learning Using Differential Testing and Outlier Detection

02/21/2022
by Yein Kim, et al.

The goal of federated learning (FL) is to train one global model by aggregating model parameters updated independently on edge devices, without accessing users' private data. However, FL is susceptible to backdoor attacks, in which a small fraction of malicious agents inject targeted misclassification behavior into the global model by uploading polluted model updates to the server. In this work, we propose DifFense, an automated defense framework that protects an FL system from backdoor attacks by leveraging differential testing and two-step MAD outlier detection, without requiring any prior knowledge of attack scenarios or direct access to local model parameters. We empirically show that our detection method defends against varying numbers of potential attackers while consistently achieving convergence of the global model comparable to that trained under federated averaging (FedAvg). We further corroborate the effectiveness and generalizability of our method by comparing it against prior defense techniques, such as Multi-Krum and coordinate-wise median aggregation. Our detection method reduces the average backdoor accuracy of the global model to below 4%.
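To illustrate the MAD-based outlier detection the abstract refers to, the sketch below flags anomalous per-agent scores using the median absolute deviation, applied in two passes (a hypothetical reading of "two-step": filter once, then re-test the survivors). The function names, the divergence scores, and the threshold of 3.0 on the modified z-score are illustrative assumptions, not details from the paper.

```python
import numpy as np

def mad_outliers(scores, threshold=3.0):
    """Flag values whose modified z-score, based on the median absolute
    deviation (MAD), exceeds `threshold`. The 0.6745 factor rescales the
    MAD to a consistent estimate of the standard deviation for normal data."""
    scores = np.asarray(scores, dtype=float)
    median = np.median(scores)
    mad = np.median(np.abs(scores - median))
    if mad == 0:
        # All scores essentially identical: nothing to flag.
        return np.zeros(len(scores), dtype=bool)
    modified_z = 0.6745 * (scores - median) / mad
    return np.abs(modified_z) > threshold

def two_step_mad_filter(scores, threshold=3.0):
    """Hypothetical two-step variant: remove first-pass outliers, then
    re-run MAD detection on the remaining scores to catch anomalies
    that the first-pass outliers may have masked."""
    scores = np.asarray(scores, dtype=float)
    first = mad_outliers(scores, threshold)
    keep = ~first
    second = np.zeros(len(scores), dtype=bool)
    second[keep] = mad_outliers(scores[keep], threshold)
    return first | second

# Illustrative per-agent divergence scores; the last agent is a clear outlier.
scores = [0.9, 1.1, 1.0, 0.95, 1.05, 5.0]
print(two_step_mad_filter(scores))
# → [False False False False False  True]
```

A MAD-based score is preferred over a mean/standard-deviation rule here because the median and MAD are themselves robust to the very outliers being detected, so a few malicious updates cannot inflate the detection threshold.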
