Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning

06/05/2023
by Kostadin Garov, et al.

Malicious server (MS) attacks have enabled the scaling of data stealing in federated learning to large batch sizes and secure aggregation, settings previously considered private. However, many concerns have been raised regarding the client-side detectability of MS attacks, questioning their practicality once they are publicly known. In this work, we thoroughly study the problem of client-side detectability for the first time. We demonstrate that most prior MS attacks, which fundamentally rely on one of two key principles, are detectable by principled client-side checks. Further, we formulate desiderata for practical MS attacks and propose SEER, a novel attack framework that satisfies all of them while stealing user data from the gradients of realistic networks, even for large batch sizes (up to 512 in our experiments) and under secure aggregation. The key insight of SEER is its use of a secret decoder, which is jointly trained with the shared model. Our work represents a promising first step towards a more principled treatment of MS attacks, paving the way for realistic data stealing that can compromise user privacy in real-world deployments.
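To make the key insight concrete, here is a minimal PyTorch sketch of the general idea: the server jointly trains the shared model together with a secret, server-side decoder, so that the gradients clients send back can be decoded into training samples. Everything below is an illustrative assumption rather than the paper's actual method: the module names (SharedModel, SecretDecoder, observed_gradient), the tiny architectures, the random stand-in auxiliary data, and the toy reconstruction objective are simplified placeholders. In particular, SEER's real objective disaggregates a single target sample even under secure aggregation, which this sketch does not reproduce.

```python
# Hypothetical sketch of a "secret decoder" jointly trained with the shared
# model, in the spirit of the abstract's key insight. Not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG = 8  # toy 8x8 grayscale images keep the example small and fast


class SharedModel(nn.Module):
    """The model the server distributes to clients; its weights are public."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(IMG * IMG, 32), nn.ReLU(),
            nn.Linear(32, 10),
        )

    def forward(self, x):
        return self.net(x)


class SecretDecoder(nn.Module):
    """Kept on the server and never shared; maps an observed gradient
    vector back to an image."""
    def __init__(self, grad_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(grad_dim, 256), nn.ReLU(),
            nn.Linear(256, IMG * IMG),
        )

    def forward(self, g):
        return self.net(g).view(1, IMG, IMG)


def observed_gradient(model, x, y):
    """Simulate what the server sees in FedSGD: the gradient of the client's
    loss on its private batch (already summed over the batch)."""
    loss = F.cross_entropy(model(x), y)
    # create_graph=True lets the decoder loss backprop through this gradient
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    return torch.cat([g.flatten() for g in grads])


model = SharedModel()
grad_dim = sum(p.numel() for p in model.parameters())
decoder = SecretDecoder(grad_dim)
opt = torch.optim.Adam(
    list(model.parameters()) + list(decoder.parameters()), lr=1e-3
)

for step in range(200):  # server-side joint training on auxiliary data
    x = torch.rand(16, 1, IMG, IMG)      # stand-in for an auxiliary dataset
    y = torch.randint(0, 10, (16,))
    g = observed_gradient(model, x, y)   # simulate the client's update
    recon = decoder(g)                   # decode a sample from the gradient
    # Toy objective: reconstruct the first sample of the batch. SEER's actual
    # objective selects its target via a secret property; not shown here.
    loss = F.mse_loss(recon, x[0])
    opt.zero_grad()
    loss.backward()  # second-order backprop through the gradient computation
    opt.step()
```

Note the design property this sketch illustrates: the client only ever receives SharedModel, which looks like an ordinary network, while the decoder never leaves the server. This is plausibly what makes such an attack hard to flag with client-side checks, consistent with the abstract's detectability claims.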
