Revealing and Protecting Labels in Distributed Training

10/31/2021
by Trung Dang, et al.

Distributed learning paradigms such as federated learning often involve the transmission of model updates, or gradients, over a network, thereby avoiding transmission of private data. However, it is possible for sensitive information about the training data to be revealed from such gradients. Prior works have demonstrated that labels can be revealed analytically from the last layer of certain models (e.g., ResNet), or that they can be reconstructed jointly with model inputs using Gradient Matching [Zhu et al., 2019] given additional knowledge about the current state of the model. In this work, we propose a method to discover the set of labels of training samples from only the gradient of the last layer and the ID-to-label mapping. Our method is applicable to a wide variety of model architectures across multiple domains. We demonstrate the effectiveness of our method for model training in two domains: image classification and automatic speech recognition. Furthermore, we show that existing reconstruction techniques improve their efficacy when used in conjunction with our method. Conversely, we demonstrate that gradient quantization and sparsification can significantly reduce the success of the attack.
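To make the last-layer leakage concrete, here is a minimal sketch of the classic observation the abstract builds on (this is an illustration of the general principle, not the paper's exact method): for a softmax/cross-entropy classifier, the gradient of the last layer's bias for class c is the mean over the batch of (softmax probability of c minus the one-hot indicator), so classes present in the batch tend to produce negative bias-gradient entries. The function and variable names below (`infer_labels_from_bias_grad`, `id_to_label`) are hypothetical.

```python
# Hypothetical sketch: inferring which labels occur in a training batch
# from the last-layer bias gradient of a softmax/cross-entropy model.
# For class c, dL/db_c = mean_i (softmax(z_i)_c - 1[y_i == c]); the -1
# term for samples with label c typically outweighs the small softmax
# probabilities contributed by other samples, making the entry negative.
import numpy as np

def infer_labels_from_bias_grad(bias_grad, id_to_label):
    """Return the set of labels whose bias-gradient entry is negative."""
    present_ids = np.flatnonzero(bias_grad < 0)
    return {id_to_label[i] for i in present_ids}

# Toy demonstration: batch of 4 samples, 5 classes, uniform logits
# (so every softmax probability is exactly 0.2, keeping the example
# deterministic).
num_classes = 5
labels = [1, 3, 3, 4]
logits = np.zeros((len(labels), num_classes))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
onehot = np.eye(num_classes)[labels]
bias_grad = (probs - onehot).mean(axis=0)

id_to_label = {i: f"class_{i}" for i in range(num_classes)}
print(infer_labels_from_bias_grad(bias_grad, id_to_label))
```

Each entry of `bias_grad` here is `0.2 - count(c)/4`, so exactly the classes appearing in the batch come out negative. The quantization and sparsification defenses mentioned above work by perturbing or zeroing these small signed entries, which destroys the sign information this inference relies on.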


