See through Gradients: Image Batch Recovery via GradInversion

04/15/2021
by Hongxu Yin, et al.

Training deep neural networks requires estimating gradients from data batches to update parameters. Because per-parameter gradients are averaged over a set of data points, this averaging has been presumed safe for privacy-preserving training in joint, collaborative, and federated learning applications. Prior work demonstrated input recovery from gradients only under very restrictive conditions: a single input point, a network with no non-linearities, or a small batch of 32×32 px inputs. Averaging gradients over larger batches was therefore thought to be safe. In this work, we introduce GradInversion, with which input images from a larger batch (8–48 images) can be recovered even for large networks such as ResNets (50 layers) on complex datasets such as ImageNet (1,000 classes, 224×224 px). We formulate an optimization task that converts random noise into natural images by matching gradients while regularizing image fidelity. We also propose an algorithm that recovers the target class labels from the gradients alone. We further propose a group consistency regularization framework, in which multiple optimization agents starting from different random seeds work together toward an enhanced reconstruction of the original data batch. We show that gradients encode a surprisingly large amount of information: all individual images can be recovered with high fidelity via GradInversion, even for complex datasets, deep networks, and large batch sizes.
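To make the recipe concrete, below is a minimal PyTorch sketch of the core gradient-matching loop described in the abstract. It is not the paper's implementation: the label-recovery heuristic (scoring classes by the most negative entries of the final fully-connected layer's weight gradient), the total-variation fidelity prior, and all hyperparameters (steps, lr, tv_weight) are illustrative assumptions, and the paper's BatchNorm-statistics prior and multi-seed group consistency regularization are omitted for brevity.

```python
import torch
import torch.nn.functional as F

def recover_labels(fc_weight_grad, batch_size):
    # With cross-entropy loss, rows of the final FC layer's weight gradient
    # that correspond to classes present in the batch tend to carry negative
    # values. Score each class by its row-wise minimum and take the
    # `batch_size` most negative scores as the label-set estimate.
    scores = fc_weight_grad.min(dim=1).values   # one score per class
    return torch.argsort(scores)[:batch_size]   # most negative first

def total_variation(x):
    # Generic image-fidelity prior: penalize high-frequency noise.
    dh = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()
    dw = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean()
    return dh + dw

def grad_inversion(model, target_grads, batch_size,
                   img_shape=(3, 224, 224), steps=2000, tv_weight=1e-4):
    # `target_grads`: the observed batch-averaged gradients, one tensor per
    # parameter, in the same order as model.parameters().
    model.eval()
    params = [p for p in model.parameters() if p.requires_grad]
    # Assumption: the FC weight gradient is the second-to-last tensor
    # (true for torchvision ResNets, where fc.bias comes last).
    labels = recover_labels(target_grads[-2], batch_size)

    x = torch.randn(batch_size, *img_shape, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=0.1)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), labels)
        # create_graph=True lets us backpropagate through the gradients.
        grads = torch.autograd.grad(loss, params, create_graph=True)
        grad_match = sum(((g - t) ** 2).sum()
                         for g, t in zip(grads, target_grads))
        (grad_match + tv_weight * total_variation(x)).backward()
        optimizer.step()
    return x.detach(), labels
```

In a typical use of this sketch, the attacker observes one averaged batch gradient from a victim (e.g., the tensors returned by torch.autograd.grad on the victim's training loss) and calls grad_inversion(torchvision.models.resnet50(), observed_grads, batch_size=8).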

Related research

03/31/2020
Inverting Gradients – How easy is it to break privacy in federated learning?
The idea of federated learning is to collaboratively train a neural netw...

05/17/2022
Recovering Private Text in Federated Learning of Language Models
Federated learning allows distributed users to collaboratively train a m...

03/22/2022
GradViT: Gradient Inversion of Vision Transformers
In this work we demonstrate the vulnerability of vision transformers (Vi...

02/25/2021
An introduction to distributed training of deep neural networks for segmentation tasks with large seismic datasets
Deep learning applications are drastically progressing in seismic proces...

05/22/2020
Arbitrary-sized Image Training and Residual Kernel Learning: Towards Image Fraud Identification
Preserving original noise residuals in images is critical to image frau...

02/17/2022
LAMP: Extracting Text from Gradients with Language Model Priors
Recent work shows that sensitive user data can be reconstructed from gra...

03/19/2023
Experimenting with Normalization Layers in Federated Learning on non-IID scenarios
Training Deep Learning (DL) models requires large, high-quality datasets,...
