Breaking the Chain of Gradient Leakage in Vision Transformers

05/25/2022
by   Yahui Liu, et al.

User privacy is a central concern in Federated Learning, and Vision Transformers (ViTs) have been shown to be vulnerable to gradient-based inversion attacks. We show that the learned low-dimensional spatial prior in position embeddings (PEs) accelerates the training of ViTs. As a side effect, it makes ViTs position-sensitive and at high risk of privacy leakage. We observe that enhancing the position-insensitivity of a ViT model is a promising way to protect data privacy against these gradient attacks. However, simply removing the PEs may not only harm the convergence and accuracy of ViTs but also place the model at even greater privacy risk. To resolve this contradiction, we propose a simple yet efficient Masked Jigsaw Puzzle (MJP) method to break the chain of gradient leakage in ViTs. MJP can be easily plugged into existing ViTs and their derived variants. Extensive experiments demonstrate that the proposed MJP method not only boosts performance on large-scale datasets (i.e., ImageNet-1K), but also improves privacy preservation against typical gradient attacks by a large margin. Our code is available at: https://github.com/yhlleo/MJP.
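To illustrate the idea described in the abstract, the following is a minimal NumPy sketch of a masked-jigsaw-style transform: a random subset of patches is shuffled (the jigsaw puzzle), and the position embeddings of the shuffled patches are replaced with a shared mask value so the model cannot rely on their exact spatial order. The zero mask embedding, the masking ratio, and the function name are illustrative assumptions, not the authors' actual implementation; see the linked repository for the real method.

```python
import numpy as np

def masked_jigsaw_puzzle(patches, pos_emb, mask_ratio=0.25, rng=None):
    """Illustrative sketch (not the authors' code) of a masked jigsaw
    puzzle on ViT inputs.

    patches: (N, D) array of patch embeddings.
    pos_emb: (N, D) array of position embeddings.
    mask_ratio: fraction of patches to shuffle and mask.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = patches.shape[0]
    k = max(1, int(n * mask_ratio))

    # Pick a random subset of patch indices and permute them (jigsaw).
    idx = rng.choice(n, size=k, replace=False)
    perm = rng.permutation(idx)
    shuffled = patches.copy()
    shuffled[idx] = patches[perm]

    # Replace the shuffled patches' PEs with a shared mask value (zero
    # here, as an assumption) so their positions are not distinguishable.
    masked_pe = pos_emb.copy()
    masked_pe[idx] = 0.0

    # Return the transformer input: patch embeddings plus (masked) PEs.
    return shuffled + masked_pe
```

In practice such a transform would sit in the data/embedding pipeline before the transformer blocks, so the rest of the ViT architecture is untouched, which matches the abstract's claim that MJP plugs into existing ViTs and their variants.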


