APRIL: Finding the Achilles' Heel on Privacy for Vision Transformers

12/28/2021
by Jiahao Lu, et al.

Federated learning frameworks typically require collaborators to share gradient updates of a common model, rather than the training data itself, in order to preserve privacy. However, prior work on Gradient Leakage Attacks has shown that private training data can be reconstructed from these gradients. So far, almost all such attacks target fully-connected or convolutional neural networks. Given the rapidly growing adoption of Transformers for a wide range of vision tasks, it is highly valuable to investigate the privacy risk of vision transformers. In this paper, we analyse the gradient leakage risk of the self-attention mechanism both theoretically and empirically. In particular, we propose APRIL - Attention PRIvacy Leakage - which poses a strong threat to self-attention based models such as ViT. By showing how vision Transformers risk leaking private data through their gradients, we underscore the importance of designing privacy-safer Transformer models and defence schemes.
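To make the attack family concrete, here is a minimal sketch of a gradient-matching reconstruction in the style of earlier Gradient Leakage Attacks (e.g. Deep Leakage from Gradients), the line of work that APRIL extends to self-attention. The toy linear model, optimiser choice, and iteration count below are illustrative assumptions, not the paper's actual APRIL procedure, which exploits the structure of attention layers analytically.

```python
import torch
import torch.nn.functional as F
from torch import nn

# Victim: a toy model and one private sample whose gradients get "shared".
# (Hypothetical stand-in; APRIL itself targets self-attention layers in ViT.)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x_private = torch.randn(1, 3, 32, 32)
y_private = torch.tensor([4])

loss = F.cross_entropy(model(x_private), y_private)
true_grads = torch.autograd.grad(loss, model.parameters())

# Attacker: optimise a dummy input and soft label so that their gradients
# match the shared ones; the dummy then drifts toward the private sample.
x_dummy = torch.randn(1, 3, 32, 32, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    # Cross-entropy with a learnable soft label, as in the original DLG setup.
    dummy_loss = -(y_dummy.softmax(-1)
                   * F.log_softmax(model(x_dummy), -1)).sum()
    grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                create_graph=True)
    # L2 distance between the attacker's gradients and the shared ones.
    grad_diff = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
    grad_diff.backward()  # gradients flow back to x_dummy / y_dummy
    return grad_diff

for _ in range(100):
    opt.step(closure)
```

After optimisation, x_dummy approximates the private input. APRIL's contribution is to show that for self-attention based models this matching problem can be far easier than such iterative search suggests, making the leakage risk correspondingly more severe.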


Related research

05/25/2022 · Breaking the Chain of Gradient Leakage in Vision Transformers
User privacy is of great concern in Federated Learning, while Vision Tra...

03/11/2021 · TAG: Transformer Attack from Gradient
Although federated learning has increasingly gained attention in terms o...

12/05/2022 · Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning
Federated Learning (FL) is pervasive in privacy-focused IoT environments...

10/17/2020 · Layer-wise Characterization of Latent Information Leakage in Federated Learning
Training a deep neural network (DNN) via federated learning allows parti...

05/28/2021 · Quantifying Information Leakage from Gradients
Sharing deep neural networks' gradients instead of training data could f...

12/09/2022 · Mitigation of Spatial Nonstationarity with Vision Transformers
Spatial nonstationarity, the location variance of features' statistical ...

12/30/2021 · Stochastic Layers in Vision Transformers
We introduce fully stochastic layers in vision transformers, without cau...
