GradViT: Gradient Inversion of Vision Transformers

03/22/2022
by Ali Hatamizadeh, et al.

In this work we demonstrate the vulnerability of vision transformers (ViTs) to gradient-based inversion attacks. In this attack, the original data batch is reconstructed given the model weights and the corresponding gradients. We introduce a method, named GradViT, that optimizes random noise into natural-looking images via an iterative process. The optimization objective consists of (i) a loss on matching the gradients, (ii) an image prior in the form of a distance to the batch-normalization statistics of a pretrained CNN model, and (iii) a total variation regularization on patches to guide correct recovery locations. We propose a unique loss scheduling function to overcome local minima during optimization. We evaluate GradViT on the ImageNet1K and MS-Celeb-1M datasets, and observe unprecedentedly high fidelity and closeness to the original (hidden) data. During the analysis we find that vision transformers are significantly more vulnerable than previously studied CNNs due to the presence of the attention mechanism. Our method demonstrates new state-of-the-art results for gradient inversion in both qualitative and quantitative metrics. Project page at https://gradvit.github.io/.
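The core of a gradient inversion attack of this kind can be illustrated with a minimal sketch: dummy inputs are optimized so that the gradients they induce match the observed target gradients, with a total variation term as an image prior. This is not the authors' GradViT implementation (which adds the CNN batch-norm prior and loss scheduling); the toy model, weights, and hyperparameters below are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def total_variation(x):
    # Penalize differences between neighboring pixels (image smoothness prior).
    dh = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()
    dw = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean()
    return dh + dw

def gradient_matching_loss(model, criterion, dummy_x, dummy_y, target_grads):
    # L2 distance between gradients induced by the dummy batch and the
    # observed target gradients; create_graph=True allows backprop through it.
    loss = criterion(model(dummy_x), dummy_y)
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    return sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))

# Toy victim model and a "private" batch (stand-ins, not a real ViT).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
criterion = nn.CrossEntropyLoss()
real_x = torch.randn(2, 3, 8, 8)
real_y = torch.tensor([1, 7])

# Gradients the attacker observes (e.g. a shared update in federated learning).
target_grads = [g.detach() for g in torch.autograd.grad(
    criterion(model(real_x), real_y), model.parameters())]

# Attack: optimize random noise to reproduce those gradients.
# Labels are assumed known here; in practice they are often recovered first.
dummy_x = torch.randn_like(real_x, requires_grad=True)
opt = torch.optim.Adam([dummy_x], lr=0.1)
history = []
for step in range(100):
    opt.zero_grad()
    obj = gradient_matching_loss(model, criterion, dummy_x, real_y, target_grads)
    obj = obj + 1e-4 * total_variation(dummy_x)
    obj.backward()
    opt.step()
    history.append(obj.item())
```

After optimization, `dummy_x` approximates the private batch; `history` should show the combined objective shrinking as the induced gradients converge toward the observed ones.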


research
04/26/2022

Enhancing Privacy against Inversion Attacks in Federated Learning by using Mixing Gradients Strategies

Federated learning reduces the risk of information leakage, but remains ...
research
04/15/2021

See through Gradients: Image Batch Recovery via GradInversion

Training deep neural networks requires gradient estimation from data bat...
research
11/20/2021

Are Vision Transformers Robust to Patch Perturbations?

The recent advances in Vision Transformer (ViT) have demonstrated its im...
research
06/13/2023

Temporal Gradient Inversion Attacks with Robust Optimization

Federated Learning (FL) has emerged as a promising approach for collabor...
research
06/16/2022

Backdoor Attacks on Vision Transformers

Vision Transformers (ViT) have recently demonstrated exemplary performan...
research
01/31/2022

Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations

Existing techniques for model inversion typically rely on hard-to-tune r...
research
05/25/2022

Breaking the Chain of Gradient Leakage in Vision Transformers

User privacy is of great concern in Federated Learning, while Vision Tra...
