DeSMP: Differential Privacy-exploited Stealthy Model Poisoning Attacks in Federated Learning

09/21/2021
by Md Tamjid Hossain, et al.

Federated learning (FL) has recently emerged as a prominent machine learning technique owing to its efficacy in safeguarding clients' confidential information. Nevertheless, despite its inherent and supplementary privacy-preserving mechanisms (e.g., differential privacy and secure multi-party computation), FL models remain vulnerable to various privacy-violating and security-compromising attacks (e.g., data or model poisoning) because of their numerous attack vectors, which in turn render the models ineffective or sub-optimal. Existing adversarial models for untargeted model poisoning cannot be simultaneously stealthy and persistent, since the two goals conflict (large-scale attacks are easier to detect, and vice versa); devising an attack that achieves both therefore remains an open problem in this adversarial learning paradigm. Motivated by this, in this paper we analyze the adversarial learning process in an FL setting and show that a stealthy and persistent model poisoning attack can be conducted by exploiting the differential noise. More specifically, we develop a DP-exploited stealthy model poisoning (DeSMP) attack on FL models. Our empirical analysis of both classification and regression tasks on two popular datasets demonstrates the effectiveness of the proposed DeSMP attack. Moreover, we develop a novel reinforcement learning (RL)-based defense strategy against such model poisoning attacks, which intelligently and dynamically selects the privacy level of the FL model so as to minimize the DeSMP attack surface and facilitate attack detection.
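To make the attack idea concrete, here is a minimal, hypothetical sketch of how a DP-exploited perturbation could work under a Gaussian-mechanism client pipeline. Every name, parameter, and design choice below (the clipping/noising setup, the poisoning direction, the norm-matching heuristic) is an illustrative assumption based on the abstract alone, not the paper's actual construction.

```python
# Sketch (assumptions only): a malicious client spends the DP noise budget
# on a crafted poisoning direction instead of random noise, so the poisoned
# update has the same scale as a legitimately noised one.
import numpy as np

def clip_update(update, clip_norm):
    """L2-clip a model update, as in standard DP-SGD-style pipelines."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / max(norm, 1e-12))

def honest_client_update(update, clip_norm, sigma, rng):
    """Benign client: clip, then add calibrated Gaussian noise."""
    clipped = clip_update(update, clip_norm)
    noise = rng.normal(0.0, sigma * clip_norm, size=update.shape)
    return clipped + noise

def desmp_client_update(update, poison_direction, clip_norm, sigma):
    """Malicious client (sketch): replace the random noise with a poisoning
    perturbation whose norm matches the expected norm of legitimate DP
    noise, so norm-based anomaly detectors see nothing unusual."""
    clipped = clip_update(update, clip_norm)
    d = update.size
    # Expected L2 norm of d-dimensional Gaussian noise ~ sigma * clip_norm * sqrt(d)
    target_norm = sigma * clip_norm * np.sqrt(d)
    poison = poison_direction / np.linalg.norm(poison_direction) * target_norm
    return clipped + poison

rng = np.random.default_rng(0)
true_update = rng.normal(size=100)
poison_dir = -true_update  # e.g., push the global model away from the optimum
benign = honest_client_update(true_update, clip_norm=1.0, sigma=0.5, rng=rng)
poisoned = desmp_client_update(true_update, poison_dir, clip_norm=1.0, sigma=0.5)
print(np.linalg.norm(benign), np.linalg.norm(poisoned))  # comparable magnitudes
```

Because the poisoned submission is on the same scale as calibrated DP noise, magnitude-based anomaly detection alone cannot separate it from benign updates; this is the stealth/persistence conflict the abstract highlights, and the motivation for the proposed RL-based defense, which tunes the privacy level (and hence the exploitable noise budget) dynamically rather than fixing it in advance.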


Related research:

- 04/06/2022: Adversarial Analysis of the Differentially-Private Federated Learning in Cyber-Physical Critical Infrastructures. "Differential privacy (DP) is considered to be an effective privacy-prese..."
- 06/18/2022: Measuring Lower Bounds of Local Differential Privacy via Adversary Instantiations in Federated Learning. "Local differential privacy (LDP) gives a strong privacy guarantee to be ..."
- 06/13/2021: Understanding the Interplay between Privacy and Robustness in Federated Learning. "Federated Learning (FL) is emerging as a promising paradigm of privacy-p..."
- 10/22/2021: PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy. "Federated Learning (FL) allows multiple participating clients to train m..."
- 11/15/2020: Dynamic Backdoor Attacks Against Federated Learning. "Federated Learning (FL) is a new machine learning framework, which enabl..."
- 05/26/2022: PerDoor: Persistent Non-Uniform Backdoors in Federated Learning using Adversarial Perturbations. "Federated Learning (FL) enables numerous participants to train deep lear..."
