Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning

12/05/2022
by   Mingyuan Fan, et al.
Federated Learning (FL) is pervasive in privacy-focused IoT environments because it mitigates privacy leakage by training models on shared gradients rather than raw data. Recent work shows that uploaded gradients can nevertheless be exploited to reconstruct the underlying data (so-called gradient leakage attacks), and several defenses tweak the gradients to reduce this risk. However, these defenses show weak resilience against strong attacks, since their effectiveness rests on the unrealistic assumption that deep neural networks can be simplified as linear models. In this paper, we dispense with such assumptions and present a novel defense, called Refiner. Instead of perturbing gradients, Refiner refines the ground-truth data into robust data that retain sufficient utility while carrying the least possible private information; the gradients of the robust data are then uploaded. To craft robust data, Refiner encourages the gradients of critical parameters computed on the robust data to approximate those of the ground-truth data, while leaving the gradients of trivial parameters free to safeguard privacy. Moreover, to exploit the gradients of trivial parameters, Refiner employs a well-designed evaluation network that steers the robust data away from the ground-truth data, further reducing the risk of privacy leakage. Extensive experiments on multiple benchmark datasets demonstrate Refiner's superior effectiveness at defending against state-of-the-art attacks.
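The core idea (matching the true gradient only on "critical" parameters while a distance penalty pushes the crafted data away from the ground truth) can be illustrated on a toy linear model. This is a hypothetical sketch, not the paper's implementation: the choice of critical parameters, the penalty weight `lam`, and the finite-difference optimizer are all illustrative assumptions, and the paper's evaluation network is replaced here by a simple Euclidean-distance reward.

```python
import numpy as np

# Toy model: y = w @ x, squared loss, so the per-parameter gradient
# with respect to w is g(x) = (w @ x - y) * x.
rng = np.random.default_rng(0)
d = 8
w = rng.normal(size=d)          # fixed model parameters
x = rng.normal(size=d)          # ground-truth datum (to be protected)
y = 1.0                         # its label

def grad(x_):
    """Analytic gradient of the loss w.r.t. w at datum x_."""
    return (w @ x_ - y) * x_

critical = np.arange(d // 2)    # assumption: first half of params is "critical"
g_true = grad(x)[critical]      # true gradient on the critical parameters

def objective(x_, lam=0.05):
    # Match the true gradient on critical params only...
    match = np.sum((grad(x_)[critical] - g_true) ** 2)
    # ...while rewarding distance from the ground-truth datum (privacy).
    privacy = -lam * np.sum((x_ - x) ** 2)
    return match + privacy

# Craft robust data x_r by simple finite-difference gradient descent.
x_r = x + 0.1 * rng.normal(size=d)
obj_start = objective(x_r)
eps, lr = 1e-5, 0.05
for _ in range(500):
    g = np.array([(objective(x_r + eps * e) - objective(x_r - eps * e)) / (2 * eps)
                  for e in np.eye(d)])
    x_r = x_r - lr * g

match_err = float(np.linalg.norm(grad(x_r)[critical] - g_true))
dist = float(np.linalg.norm(x_r - x))
print(match_err, dist)
```

In this sketch the server still receives a useful gradient on the critical parameters, while the uploaded datum's gradient elsewhere corresponds to a point that has drifted away from the ground truth, which is the intuition behind refining data instead of perturbing gradients.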


Related research:

- Defense against Privacy Leakage in Federated Learning (09/13/2022)
- Bayesian Framework for Gradient Leakage (11/08/2021)
- Defense Against Gradient Leakage Attacks via Learning to Obscure Data (06/01/2022)
- APRIL: Finding the Achilles' Heel on Privacy for Vision Transformers (12/28/2021)
- Enhancing Privacy against Inversion Attacks in Federated Learning by using Mixing Gradients Strategies (04/26/2022)
- TAG: Transformer Attack from Gradient (03/11/2021)
- FedDef: Robust Federated Learning-based Network Intrusion Detection Systems Against Gradient Leakage (10/08/2022)
