Quantifying Information Leakage from Gradients

05/28/2021
by Fan Mo, et al.

Sharing deep neural networks' gradients instead of training data could facilitate data privacy in collaborative learning. In practice, however, gradients can disclose both private latent attributes and original data. Mathematical metrics are needed to quantify both original and latent information leakages from gradients computed over the training data. In this work, we first use an adaptation of the empirical 𝒱-information to present an information-theoretic justification for the attack success rates in a layer-wise manner. We then move towards a deeper understanding of gradient leakages and propose more general and efficient metrics, using sensitivity and subspace distance to quantify the gradient changes w.r.t. original and latent information, respectively. Our empirical results, on six datasets and four models, reveal that gradients of the first layers contain the highest amount of original information, while the classifier/fully-connected layers placed after the feature extractor contain the highest latent information. Further, we show how training hyperparameters such as gradient aggregation can decrease information leakages. Our characterization provides a new understanding of gradient-based information leakages using the gradients' sensitivity w.r.t. changes in private information, and suggests possible defenses such as layer-based protection or strong aggregation.
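To make the sensitivity idea more concrete, below is a minimal, hypothetical PyTorch sketch that measures how much each layer's gradient changes when the input batch is perturbed. The toy model, the random perturbation standing in for a change of private information, and the relative-norm metric are illustrative assumptions, not the paper's exact formulation.

# Minimal sketch (assumption): per-layer gradient sensitivity to an input change.
# The model, names, and metric are placeholders, not the authors' implementation.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def layer_gradients(model, x, y):
    """Return {parameter_name: gradient} for one (x, y) batch."""
    loss = nn.functional.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return {name: g.detach() for (name, _), g in zip(model.named_parameters(), grads)}

def sensitivity_per_layer(model, x, x_perturbed, y):
    """Relative change of each layer's gradient when the input changes:
    ||g(x') - g(x)|| / ||g(x)|| -- a simple proxy for gradient sensitivity."""
    g0 = layer_gradients(model, x, y)
    g1 = layer_gradients(model, x_perturbed, y)
    return {name: (torch.norm(g1[name] - g0[name]) / (torch.norm(g0[name]) + 1e-12)).item()
            for name in g0}

if __name__ == "__main__":
    torch.manual_seed(0)
    model = SmallCNN()
    x = torch.randn(8, 3, 32, 32)
    y = torch.randint(0, 10, (8,))
    # Perturb the inputs (a stand-in for changing the private information they carry).
    x_pert = x + 0.1 * torch.randn_like(x)
    for name, s in sensitivity_per_layer(model, x, x_pert, y).items():
        print(f"{name:30s} relative gradient change = {s:.4f}")

Consistent with the findings summarized in the abstract, such a probe would be expected to show the early convolutional layers reacting most strongly to changes in the raw input, and the fully-connected classifier reacting most strongly to changes in latent attributes.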


