
GRNN: Generative Regression Neural Network – A Data Leakage Attack for Federated Learning

by Hanchi Ren, et al.

Data privacy has become an increasingly important issue in machine learning. Many approaches have been developed to tackle it, e.g., cryptography (Homomorphic Encryption, Differential Privacy, etc.) and collaborative training (Secure Multi-Party Computation, Distributed Learning and Federated Learning). These techniques focus on data encryption or secure local computation: intermediate information is transferred to a third party, which computes the final result. Gradient exchange is commonly considered a secure way of training a robust model collaboratively in deep learning. However, recent research has demonstrated that sensitive information can be recovered from the shared gradient. Generative Adversarial Networks (GAN), in particular, have been shown to be effective in recovering such information. However, GAN-based techniques require additional information, such as class labels, which are generally unavailable in privacy-preserving learning. In this paper, we show that, in a Federated Learning (FL) system, image-based private data can be fully recovered from the shared gradient alone via our proposed Generative Regression Neural Network (GRNN). We formulate the attack as a regression problem and optimise two branches of the generative model by minimising the distance between gradients. We evaluate our method on several image classification tasks. The results show that the proposed GRNN outperforms state-of-the-art methods with better stability, stronger robustness and higher accuracy. It also imposes no convergence requirement on the global FL model. Moreover, we demonstrate information leakage using face re-identification. Some defence strategies are also discussed in this work.
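The core idea behind such gradient-leakage attacks, of which GRNN is an instance, can be sketched in miniature: the attacker knows the model weights and observes the shared gradient, then optimises dummy data until the dummy gradient matches the observed one. The sketch below assumes a toy linear model with squared loss and plain gradient descent on the matching distance; the model, the variable names, and the optimisation loop are illustrative assumptions, not the paper's actual GRNN architecture.

```python
import random

def model_gradient(w, x, y):
    """Gradient of the loss 0.5 * (w.x - y)^2 with respect to w."""
    e = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [e * xi for xi in x]

def matching_loss(w, x, y, g_true):
    """Squared distance between the dummy gradient and the observed one."""
    g = model_gradient(w, x, y)
    return sum((a - b) ** 2 for a, b in zip(g, g_true))

def recover(w, g_true, steps=4000, lr=0.05):
    """Optimise dummy data (x, y) so its gradient matches g_true."""
    x = [random.uniform(-1.0, 1.0) for _ in w]
    y = random.uniform(-1.0, 1.0)
    loss = matching_loss(w, x, y, g_true)
    for _ in range(steps):
        e = sum(wi * xi for wi, xi in zip(w, x)) - y
        r = [gi - ti for gi, ti in zip(model_gradient(w, x, y), g_true)]
        rx = sum(ri * xi for ri, xi in zip(r, x))
        gx = [2.0 * (wj * rx + e * rj) for wj, rj in zip(w, r)]  # d loss / d x
        gy = -2.0 * rx                                           # d loss / d y
        # backtracking step: shrink the step until the matching loss decreases
        step = lr
        while step > 1e-12:
            x_new = [xi - step * gi for xi, gi in zip(x, gx)]
            y_new = y - step * gy
            new_loss = matching_loss(w, x_new, y_new, g_true)
            if new_loss < loss:
                x, y, loss = x_new, y_new, new_loss
                break
            step *= 0.5
    return x, y, loss

random.seed(0)
w = [0.3, -0.7, 0.5]                    # shared model weights, known to the attacker
x_true, y_true = [1.0, 2.0, -1.0], 0.5  # victim's private sample, never transmitted
g_observed = model_gradient(w, x_true, y_true)

# several random restarts; keep the dummy data whose gradient matches best
x_rec, y_rec, mismatch = min(
    (recover(w, g_observed) for _ in range(5)), key=lambda t: t[2]
)
```

In the paper's setting the dummy data are produced by the two branches of a generative model rather than optimised directly, which is what removes the need for the auxiliary labels that GAN-based attacks require; the matching objective, however, is the same distance-between-gradients regression shown here.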



