Defense Against Gradient Leakage Attacks via Learning to Obscure Data

06/01/2022
by Yuxuan Wan, et al.

Federated learning is considered an effective privacy-preserving learning mechanism that separates the client's data from the model training process. However, federated learning remains at risk of privacy leakage because attackers can deliberately mount gradient leakage attacks to reconstruct client data. Recently, popular defense strategies such as gradient perturbation and input encryption have been proposed to counter gradient leakage attacks. Nevertheless, these defenses either greatly sacrifice model performance or can be evaded by more advanced attacks. In this paper, we propose a new defense method that protects the privacy of clients' data by learning to obscure it. Our method generates synthetic samples that are entirely distinct from the original samples yet maximally preserve their predictive features, thereby maintaining model performance. Furthermore, our defense strategy makes it extremely difficult for the gradient leakage attack and its variants to reconstruct the client data. Through extensive experiments, we show that our proposed defense obtains better privacy protection while preserving high accuracy compared with state-of-the-art methods.
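To make the threat model concrete, the following sketch shows a DLG-style gradient leakage attack in the spirit of Zhu et al.'s "Deep Leakage from Gradients": the attacker initializes random dummy data and labels, then optimizes them until the gradients they induce on the shared model match the gradients the client uploaded. This is a minimal illustration under assumed details; the model (SmallNet) and helper (reconstruct) are hypothetical names, not code from the paper.

import torch
import torch.nn as nn

# Hypothetical toy model standing in for the shared federated model.
class SmallNet(nn.Module):
    def __init__(self, in_dim=784, n_classes=10):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_dim, 256), nn.Sigmoid(),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):
        return self.fc(x)

def reconstruct(model, true_grads, in_dim=784, n_classes=10, steps=30):
    """DLG-style attack: optimize dummy (data, label) to match leaked gradients."""
    dummy_x = torch.randn(1, in_dim, requires_grad=True)
    dummy_y = torch.randn(1, n_classes, requires_grad=True)
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        pred = model(dummy_x)
        # Soft-label cross-entropy so the dummy label is differentiable.
        loss = torch.sum(-torch.softmax(dummy_y, -1) * torch.log_softmax(pred, -1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Gradient-matching objective: squared distance to the leaked gradients.
        grad_diff = sum(((g - tg) ** 2).sum() for g, tg in zip(grads, true_grads))
        grad_diff.backward()
        return grad_diff

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), dummy_y.detach()

# The gradients an honest client would share in one round of federated learning:
model = SmallNet()
x, y = torch.randn(1, 784), torch.tensor([3])
loss = nn.CrossEntropyLoss()(model(x), y)
true_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

x_hat, y_hat = reconstruct(model, true_grads)
print("reconstruction error:", (x_hat - x).norm().item())

The defense described in the abstract targets exactly this attack surface: if the client computes its update on an obscured synthetic sample rather than on the raw data, even a perfect gradient match can only recover the synthetic sample. A heavily hedged sketch of that idea follows; the objective below is an assumption for illustration, and the paper's actual algorithm may differ.

def obscure(model, x, y, dist_weight=0.1, steps=200, lr=0.1):
    """Learn a synthetic sample far from x in input space that the model
    still classifies as y; the client then trains on it instead of x."""
    x_syn = torch.randn_like(x, requires_grad=True)
    opt = torch.optim.Adam([x_syn], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        task_loss = nn.CrossEntropyLoss()(model(x_syn), y)  # preserve predictive features
        closeness = -((x_syn - x) ** 2).mean()              # penalize staying near x
        (task_loss + dist_weight * closeness).backward()
        opt.step()
    return x_syn.detach()

In practice the distance term would need to be bounded or normalized (as written, it rewards drifting arbitrarily far from x), but the sketch conveys the trade-off the abstract describes: samples distinct from the originals whose gradients still carry the information needed to train the model.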

Related research

08/12/2022 · Dropout is NOT All You Need to Prevent Gradient Leakage
Gradient inversion attacks on federated learning systems reconstruct cli...

04/29/2021 · Privacy-Preserving Federated Learning on Partitioned Attributes
Real-world data is usually segmented by attributes and distributed acros...

01/12/2022 · Get your Foes Fooled: Proximal Gradient Split Learning for Defense against Model Inversion Attacks on IoMT data
The past decade has seen a rapid adoption of Artificial Intelligence (AI...

06/08/2022 · Gradient Obfuscation Gives a False Sense of Security in Federated Learning
Federated learning has been proposed as a privacy-preserving machine lea...

08/18/2023 · Defending Label Inference Attacks in Split Learning under Regression Setting
As a privacy-preserving method for implementing Vertical Federated Learn...

12/05/2022 · Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning
Federated Learning (FL) is pervasive in privacy-focused IoT environments...

05/25/2022 · Breaking the Chain of Gradient Leakage in Vision Transformers
User privacy is of great concern in Federated Learning, while Vision Tra...
