TAG: Transformer Attack from Gradient

03/11/2021
by Jieren Deng et al.

Although federated learning has gained increasing attention as a way to exploit local devices while enhancing data privacy, recent studies in computer vision show that gradients shared publicly during training can reveal private training images to a third party (gradient leakage). However, there is no systematic understanding of the gradient leakage mechanism in Transformer-based language models. In this paper, as a first attempt, we formulate the gradient attack problem on Transformer-based language models and propose a gradient attack algorithm, TAG, to reconstruct the local training data. We develop a set of metrics to quantitatively evaluate the effectiveness of the proposed attack algorithm. Experimental results on Transformer, TinyBERT_4, TinyBERT_6, BERT_BASE, and BERT_LARGE using the GLUE benchmark show that TAG reconstructs training data under a wider range of weight distributions and achieves a 1.5× recovery rate and a 2.5× ROUGE-2 score over prior methods, without requiring ground-truth labels. TAG can recover up to 90% of the data by attacking gradients on the CoLA dataset. In addition, TAG is a stronger adversary against larger models, smaller dictionary sizes, and shorter input lengths. We hope the proposed TAG will shed some light on the privacy leakage problem in Transformer-based NLP models.
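The core idea behind optimization-based gradient attacks of this family is gradient matching: the attacker initializes dummy data and a dummy label, computes the gradient they would produce on the shared model, and optimizes them until that gradient matches the one intercepted from the victim. The sketch below illustrates this on a toy scalar linear model in NumPy; the model, the Adam-style update, and all names here are illustrative assumptions, not the paper's actual implementation, which operates on Transformer language-model gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Victim side: toy scalar linear model standing in for a real network ---
d = 4
w = rng.standard_normal(d)        # shared model weights (known to the attacker)
x_true = rng.standard_normal(d)   # private training input
y_true = rng.standard_normal()    # private label

def model_grad(x, y):
    # Gradient of the squared loss (w.x - y)^2 with respect to w.
    return 2.0 * (w @ x - y) * x

g_obs = model_grad(x_true, y_true)  # gradient the attacker intercepts

# --- Attacker side: match the observed gradient with dummy (x', y') ---
def loss_and_grads(xp, yp):
    r = w @ xp - yp
    gp = 2.0 * r * xp              # gradient the dummy data would produce
    e = gp - g_obs
    s = e @ xp
    loss = e @ e                   # squared gradient-matching distance
    # Analytic derivatives of the matching loss w.r.t. x' and y'.
    return loss, 4.0 * (s * w + r * e), -4.0 * s

xp = 0.1 * rng.standard_normal(d)  # random dummy input
yp = 0.0                           # dummy label (no ground truth needed)
m = np.zeros(d + 1)
v = np.zeros(d + 1)
b1, b2, eps = 0.9, 0.999, 1e-8
loss0 = loss_and_grads(xp, yp)[0]
for t in range(1, 3001):
    loss, gx, gy = loss_and_grads(xp, yp)
    grad = np.append(gx, gy)
    # Adam-style moment estimates with a decayed step size.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    lr = 0.05 * 0.999 ** t
    step = lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
    xp, yp = xp - step[:d], yp - step[d]

final_loss = loss_and_grads(xp, yp)[0]
print(f"matching loss: {loss0:.3e} -> {final_loss:.3e}")
```

In this toy model the dummy data converges to a gradient-equivalent reconstruction of the private input (here, up to a scaling ambiguity); the attack in the paper applies the same matching principle to the much higher-dimensional gradients of Transformer-based language models.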


Related research

12/28/2021 · APRIL: Finding the Achilles' Heel on Privacy for Vision Transformers
Federated learning frameworks typically require collaborators to share t...

06/21/2019 · Deep Leakage from Gradients
Exchanging gradients is a widely used method in modern multi-node machin...

06/15/2023 · Your Room is not Private: Gradient Inversion Attack for Deep Q-Learning
The prominence of embodied Artificial Intelligence (AI), which empowers ...

09/14/2020 · SAPAG: A Self-Adaptive Privacy Attack From Gradients
Distributed learning such as federated learning or collaborative learnin...

12/05/2022 · Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning
Federated Learning (FL) is pervasive in privacy-focused IoT environments...

11/08/2021 · Bayesian Framework for Gradient Leakage
Federated learning is an established method for training machine learnin...

05/28/2021 · Quantifying Information Leakage from Gradients
Sharing deep neural networks' gradients instead of training data could f...
