Uncovering the Connection Between Differential Privacy and Certified Robustness of Federated Learning against Poisoning Attacks

09/08/2022
by Chulin Xie, et al.

Federated learning (FL) provides an efficient paradigm to jointly train a global model leveraging data from distributed users. As the local training data come from different users who may not be trustworthy, several studies have shown that FL is vulnerable to poisoning attacks. Meanwhile, to protect the privacy of local users, FL is often trained in a differentially private way (DPFL). Thus, in this paper, we ask: Can we leverage the innate privacy property of DPFL to provide certified robustness against poisoning attacks? Can we further improve the privacy of FL to improve such certification? We first investigate both user-level and instance-level privacy of FL and propose novel mechanisms to achieve improved instance-level privacy. We then provide two robustness certification criteria for DPFL on both levels: certified prediction and certified attack cost. Theoretically, we prove the certified robustness of DPFL under a bounded number of adversarial users or instances. Empirically, we conduct extensive experiments to verify our theories under a range of attacks on different datasets. We show that DPFL with a tighter privacy guarantee always provides stronger robustness certification in terms of certified attack cost, but the optimal certified prediction is achieved under a proper balance between privacy protection and utility loss.
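To illustrate the kind of mechanism the abstract refers to, the following is a minimal sketch of one round of user-level differentially private federated averaging (DP-FedAvg-style): each user's model update is clipped to a norm bound, the clipped updates are averaged, and Gaussian noise calibrated to the clipping bound is added. The function name and parameters are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def dp_fedavg_round(updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One user-level DP aggregation round (illustrative sketch).

    Each user's update is clipped to L2 norm `clip_norm`, the clipped
    updates are averaged, and Gaussian noise proportional to
    `noise_multiplier * clip_norm / n_users` is added, so no single
    user's contribution dominates the aggregate.
    """
    rng = np.random.default_rng(rng)
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        scale = min(1.0, clip_norm / (norm + 1e-12))  # clip, never amplify
        clipped.append(u * scale)
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

Because each (clipped) user update has bounded influence on the noisy average, a poisoned user can shift the aggregate only by a bounded amount, which is the intuition behind turning the DP guarantee into a robustness certificate.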


