Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage

03/29/2022
by   Zhuohang Li, et al.

The Federated Learning (FL) framework brings privacy benefits to distributed learning systems by allowing multiple clients to participate in a learning task under the coordination of a central server without exchanging their private data. However, recent studies have revealed that private information can still be leaked through shared gradient information. To further protect users' privacy, several defense mechanisms have been proposed that prevent privacy leakage by degrading the gradient information, for example by adding noise or compressing the gradients before sharing them with the server. In this work, we validate that private training data can still be leaked under certain defense settings with a new type of leakage, i.e., Generative Gradient Leakage (GGL). Unlike existing methods that rely only on gradient information to reconstruct data, our method leverages the latent space of a generative adversarial network (GAN) learned from public image datasets as a prior to compensate for the information loss during gradient degradation. To address the nonlinearity introduced by the gradient operator and the GAN model, we explore various gradient-free optimization methods (e.g., evolution strategies and Bayesian optimization) and empirically show their superiority over gradient-based optimizers in reconstructing high-quality images from gradients. We hope the proposed method can serve as a tool for empirically measuring the amount of privacy leakage and thereby facilitate the design of more robust defense mechanisms.
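The core idea described above — treating reconstruction as a gradient-matching problem solved over a GAN's latent space with a gradient-free optimizer — can be sketched in miniature. The example below is illustrative only: the "generator" is a fixed random linear map standing in for a pretrained GAN, the client model uses a squared loss so its weight gradient has a closed form, and a simple (μ, λ) evolution strategy stands in for the CMA-ES and Bayesian optimization methods explored in the paper. All dimensions, seeds, and the noise level simulating a gradient-degradation defense are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT, IMG, CLS = 4, 8, 3

# Stand-in "pretrained generator": fixed random linear map + tanh.
Wg = rng.standard_normal((IMG, LATENT))
def generate(z):
    return np.tanh(Wg @ z)

# Shared client model with squared loss L = 0.5 * ||W x - y||^2,
# whose gradient w.r.t. the weights is (W x - y) x^T.
W = rng.standard_normal((CLS, IMG)) * 0.1
y = np.eye(CLS)[0]  # one-hot label, assumed known to the attacker
def grad_wrt_weights(x):
    return np.outer(W @ x - y, x)

# Client side: private image and its noise-degraded shared gradient
# (additive Gaussian noise simulates a gradient-degradation defense).
z_true = rng.standard_normal(LATENT)
x_true = generate(z_true)
g_shared = grad_wrt_weights(x_true) + 0.01 * rng.standard_normal((CLS, IMG))

# Attack side: gradient-matching loss, optimized over the latent z only.
def loss(z):
    return float(np.sum((grad_wrt_weights(generate(z)) - g_shared) ** 2))

# Simple (mu, lambda) evolution strategy -- entirely gradient-free.
mean, sigma = np.zeros(LATENT), 1.0
initial = loss(mean)
for _ in range(300):
    pop = mean + sigma * rng.standard_normal((20, LATENT))
    scores = np.array([loss(z) for z in pop])
    elites = pop[np.argsort(scores)[:5]]  # keep the 5 best candidates
    mean = elites.mean(axis=0)
    sigma *= 0.97  # anneal the search radius
final = loss(mean)
x_rec = generate(mean)  # reconstruction constrained to the generator's range
```

Because the search is restricted to the generator's output manifold, every candidate reconstruction is a plausible image, which is what compensates for the information destroyed by the defense; the gradient-free optimizer sidesteps differentiating through both the generator and the gradient operator.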


