Data Leakage in Federated Averaging

06/24/2022
by Dimitar I. Dimitrov, et al.

Recent attacks have shown that user data can be recovered from FedSGD updates, thus breaking privacy. However, these attacks are of limited practical relevance as federated learning typically uses the FedAvg algorithm. Compared to FedSGD, recovering data from FedAvg updates is much harder as: (i) the updates are computed at unobserved intermediate network weights, (ii) a large number of batches are used, and (iii) labels and network weights vary simultaneously across client steps. In this work, we propose a new optimization-based attack which successfully attacks FedAvg by addressing the above challenges. First, we solve the optimization problem using automatic differentiation through a simulation of the client's update: the simulation generates the unobserved intermediate weights for the recovered labels and inputs, which are optimized so that the simulated update matches the received client update. Second, we address the large number of batches by relating images from different epochs with a permutation invariant prior. Third, we recover the labels by estimating the parameters of existing FedSGD attacks at every FedAvg step. On the popular FEMNIST dataset, we demonstrate that on average we successfully recover >45% of the client's images from realistic FedAvg updates computed on 10 local epochs of 10 batches each with 5 images, compared to only <10% using the baseline. Our findings indicate that many real-world federated learning implementations based on FedAvg are vulnerable.
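To make the first step concrete, below is a minimal PyTorch sketch (PyTorch >= 2.0) of the simulation idea; it is not the authors' implementation. Dummy inputs are optimized so that a differentiable replay of the client's local training reproduces the observed FedAvg update. The model, data shapes, learning rates, number of epochs and batches, and the assumption of known labels are all illustrative; the full attack additionally uses the permutation-invariant image prior and the per-step label recovery described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in classifier (FEMNIST-like shapes: 28x28 grayscale, 62 classes).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 62))


def simulate_client(params, x_batches, y_batches, epochs, lr):
    """Differentiably replay the client's local SGD epochs on candidate data,
    producing the otherwise unobserved intermediate and final weights."""
    for _ in range(epochs):
        for x, y in zip(x_batches, y_batches):
            logits = torch.func.functional_call(model, params, (x,))
            loss = F.cross_entropy(logits, y)
            grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
            params = {k: p - lr * g for (k, p), g in zip(params.items(), grads)}
    return params


torch.manual_seed(0)
init_params = {k: v.detach().clone().requires_grad_(True)
               for k, v in model.named_parameters()}

# Toy "observed" FedAvg update: the client trains locally on its secret images.
secret_x = [torch.randn(5, 1, 28, 28) for _ in range(2)]   # 2 local batches of 5 images
secret_y = [torch.randint(0, 62, (5,)) for _ in range(2)]
observed = {k: v.detach() for k, v in
            simulate_client(init_params, secret_x, secret_y, epochs=2, lr=0.01).items()}

# Attack: optimize dummy images so the simulated update matches the observed one.
# Labels are treated as known here; the paper recovers them separately (step three).
dummy_x = [torch.randn(5, 1, 28, 28, requires_grad=True) for _ in range(2)]
opt = torch.optim.Adam(dummy_x, lr=0.1)
for step in range(200):
    opt.zero_grad()
    sim = simulate_client(init_params, dummy_x, secret_y, epochs=2, lr=0.01)
    rec_loss = sum(((sim[k] - observed[k]) ** 2).sum() for k in sim)
    rec_loss.backward()
    opt.step()
```

In this toy setting the reconstruction loss is a plain squared distance between simulated and received weights; the paper's attack combines such a matching term with image priors to make recovery work at realistic scales (10 epochs, 10 batches, 5 images per batch).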


Related research

04/22/2020 - A Framework for Evaluating Gradient Leakage Attacks in Federated Learning
Federated learning (FL) is an emerging distributed machine learning fram...

01/01/2021 - Fidel: Reconstructing Private Training Samples from Weight Updates in Federated Learning
With the increasing number of data collectors such as smartphones, immen...

10/20/2022 - FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information
Federated learning is vulnerable to poisoning attacks in which malicious...

03/31/2023 - Secure Federated Learning against Model Poisoning Attacks via Client Filtering
Given the distributed nature, detecting and defending against the backdo...

06/05/2023 - Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning
Malicious server (MS) attacks have enabled the scaling of data stealing ...

10/17/2022 - Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning
Federated learning is particularly susceptible to model poisoning and ba...
