Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective

12/08/2020 · by Jingwei Sun, et al.

Federated learning (FL) is a popular distributed learning framework that reduces privacy risks by not sharing private data explicitly. However, recent work has demonstrated that sharing model updates makes FL vulnerable to inference attacks. In this work, we make the key observation that data representation leakage from gradients is the essential cause of privacy leakage in FL, and we provide an analysis of this observation to explain how data representations are leaked. Based on this observation, we propose a defense against model inversion attacks in FL. The key idea of our defense is learning to perturb data representations such that the quality of the reconstructed data is severely degraded while FL performance is maintained. In addition, we derive a certified robustness guarantee for FL and a convergence guarantee for FedAvg after applying our defense. To evaluate our defense, we conduct experiments on MNIST and CIFAR10, defending against the DLG attack and the GS attack. Without sacrificing accuracy, the results demonstrate that our defense can increase the mean squared error between the reconstructed data and the raw data by more than 160x for both attacks, compared with baseline defense methods. The privacy of the FL system is significantly improved.
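Gradient-inversion attacks such as DLG reconstruct private data by optimizing a dummy input whose gradient matches the gradient a client shared. A toy sketch of this gradient-matching idea for a single linear-regression datum (pure NumPy/SciPy; the model, weights `w`, and squared loss are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d = 8
w = rng.normal(size=d)        # shared model weights (known to the attacker)
x_true = rng.normal(size=d)   # private client datum
y_true = 1.0                  # private label

def model_grad(w, x, y):
    """Gradient of the squared loss (w.x - y)^2 with respect to w."""
    return 2.0 * (w @ x - y) * x

g_shared = model_grad(w, x_true, y_true)  # the update the server observes

def match_loss(z):
    """Gradient-matching objective ||grad(dummy) - g_shared||^2 and its gradient."""
    x, y = z[:d], z[d]
    s = w @ x - y
    r = 2.0 * s * x - g_shared                # residual vs. the shared gradient
    grad_x = 4.0 * (r @ x) * w + 4.0 * s * r  # d/dx of the matching objective
    grad_y = -4.0 * (r @ x)                   # d/dy of the matching objective
    return r @ r, np.concatenate([grad_x, [grad_y]])

# The attacker optimizes a dummy (x, y) until its gradient matches the shared one.
res = minimize(match_loss, rng.normal(size=d + 1), jac=True, method="L-BFGS-B")
x_hat = res.x[:d]

# In this toy model only the direction of x is identifiable (its scale is
# ambiguous), but that is already enough to leak the datum's representation.
cos = abs(x_hat @ x_true) / (np.linalg.norm(x_hat) * np.linalg.norm(x_true))
```

Perturbing the data representation before the gradient is computed, as the proposed defense does, breaks exactly this matching step, which is why the reconstruction MSE rises sharply while model utility can be preserved.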


Related research

10/26/2021 · FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective
09/13/2022 · Defense against Privacy Leakage in Federated Learning
02/14/2022 · Do Gradient Inversion Attacks Make Federated Learning Unsafe?
10/26/2021 · CAFE: Catastrophic Data Leakage in Vertical Federated Learning
07/15/2022 · PASS: Parameters Audit-based Secure and Fair Federated Learning Scheme against Free Rider
07/25/2022 · Technical Report: Assisting Backdoor Federated Learning with Whole Population Knowledge Alignment
07/07/2020 · Defending Against Backdoors in Federated Learning with Robust Learning Rate
