Towards General Deep Leakage in Federated Learning

10/18/2021
by   Jiahui Geng, et al.
University of Cologne
University of Stavanger

Unlike traditional centralized training, federated learning (FL) improves the global model by sharing and aggregating local models rather than local data, thereby protecting users' privacy. Although this training approach appears secure, research has demonstrated that an attacker can still recover private training data from the shared gradient information. Such on-the-fly reconstruction attacks deserve in-depth study because they can occur at any stage of training, whether at the beginning or the end; no auxiliary dataset is required and no additional models need to be trained. We relax several unrealistic assumptions and limitations so that this reconstruction attack applies in a broader range of scenarios. We propose methods that reconstruct training data from shared gradients or weights, corresponding to the FedSGD and FedAvg usage scenarios, respectively. We propose a zero-shot approach that restores labels even when the batch contains duplicate labels, and we study the relationship between label and image restoration. We find that image restoration fails if even one label in the batch is inferred incorrectly, and that when several images in a batch share the same label, the corresponding reconstructions emerge as a fusion of images from that class. Our approaches are evaluated on classic image benchmarks, including CIFAR-10 and ImageNet. Our approach exceeds GradInversion, the current state of the art, in supported batch size, image quality, and adaptability to the label distribution.
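
To make the attack surface concrete, below is a minimal sketch of a gradient-matching reconstruction in the spirit of this line of work: dummy inputs are optimized so that the gradients they induce match the shared ones, with a sign-based label-inference heuristic on the final layer's gradient. This is not the authors' exact algorithm; `model`, `true_grads`, `img_shape`, and the function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def infer_labels(fc_weight_grad, batch_size):
    # Heuristic label inference from the final fully connected layer's
    # weight gradient (shape [num_classes, feature_dim]): under softmax
    # cross-entropy, the row belonging to a ground-truth class tends to
    # be negative, so the most negative row sums flag the labels present.
    # This heuristic cannot count duplicate labels in a batch, which is
    # exactly the limitation the paper's zero-shot approach targets.
    row_sums = fc_weight_grad.sum(dim=1)
    return torch.argsort(row_sums)[:batch_size]

def reconstruct(model, true_grads, labels, batch_size, img_shape, steps=300):
    # Optimize randomly initialized dummy images so that the gradients
    # they induce match the gradients shared by the victim (FedSGD case).
    dummy = torch.randn(batch_size, *img_shape, requires_grad=True)
    opt = torch.optim.Adam([dummy], lr=0.1)
    params = tuple(model.parameters())
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(dummy), labels)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # L2 gradient-matching objective; cosine distance is another
        # common choice in the literature.
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
        match.backward()
        opt.step()
    return dummy.detach()
```

In the FedAvg setting, where clients share updated weights rather than gradients, one common adaptation is to treat the weight difference divided by the learning rate as a surrogate gradient; the paper addresses this weight-sharing scenario explicitly.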

Related research

05/19/2021: User Label Leakage from Gradients in Federated Learning
Federated learning enables multiple users to build a joint model by shar...

10/26/2021: CAFE: Catastrophic Data Leakage in Vertical Federated Learning
Recent studies show that private training data can be leaked through the...

07/02/2018: How To Backdoor Federated Learning
Federated learning enables multiple participants to jointly construct a ...

12/07/2021: Location Leakage in Federated Signal Maps
We consider the problem of predicting cellular network performance (sign...

12/06/2021: When the Curious Abandon Honesty: Federated Learning Is Not Private
In federated learning (FL), data does not leave personal devices when th...

09/14/2020: SAPAG: A Self-Adaptive Privacy Attack From Gradients
Distributed learning such as federated learning or collaborative learnin...

01/18/2023: Label Inference Attack against Split Learning under Regression Setting
As a crucial building block in vertical Federated Learning (vFL), Split ...