Fidel: Reconstructing Private Training Samples from Weight Updates in Federated Learning

by David Enthoven, et al.

With the increasing number of data collectors such as smartphones, immense amounts of data are available. Federated learning was developed to allow distributed learning at massive scale while still protecting each user's privacy. This privacy claim rests on the notion that the centralized server has no access to a client's data, only to the client's model updates. In this paper, we evaluate a novel attack method within regular federated learning, which we name the First Dense Layer Attack (Fidel). We discuss the methodology of this attack and, as a proof of viability, show that it can be used to great effect against both densely connected networks and convolutional neural networks. We evaluate several key design decisions and show that the use of ReLU and Dropout is detrimental to the privacy of a client's local dataset. We show how to recover, on average, twenty out of thirty private data samples from a client's model update of a fully connected neural network, with very little computational resources required. Similarly, we show that over thirteen out of twenty samples can be recovered from a convolutional neural network update.
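The core leverage of a first-dense-layer attack can be illustrated with a minimal sketch (an assumption for illustration, not the paper's actual code): for a dense layer z = Wx + b, the weight gradient factors as dL/dW = (dL/dz) xᵀ and the bias gradient is dL/db = dL/dz, so dividing any nonzero row of the weight gradient by its matching bias-gradient entry reproduces the private input x.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(8)                 # private input sample held by the client
W = rng.standard_normal((4, 8))   # first dense layer weights
b = np.zeros(4)                   # first dense layer biases

z = W @ x + b                     # forward pass (pre-activation)
dL_dz = z - rng.random(4)         # some upstream gradient, e.g. from an MSE loss
dL_dW = np.outer(dL_dz, x)        # weight gradient, as shared in a model update
dL_db = dL_dz                     # bias gradient, also shared

# Reconstruction by the server: pick a row with a nonzero bias gradient
# and divide out the scalar factor dL/dz_i.
i = int(np.argmax(np.abs(dL_db)))
x_recovered = dL_dW[i] / dL_db[i]

print(np.allclose(x_recovered, x))  # True: the input is recovered exactly
```

Note that with a single sample per batch the recovery is exact; batching averages the per-sample outer products, which is what makes the general attack setting harder.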

