Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning

05/31/2023
by Junyi Zhu, et al.

In Federated Learning (FL) and many other distributed training frameworks, collaborators can hold their private data locally and share only the network weights trained on that data over multiple local iterations. Gradient inversion is a family of privacy attacks that recovers training data from the gradients it generates. Seemingly, FL can provide a degree of protection against gradient inversion attacks on weight updates, since the gradient of a single step is concealed by the accumulation of gradients over multiple local iterations. In this work, we propose a principled way to extend gradient inversion attacks to weight updates in FL, thereby better exposing weaknesses in the presumed privacy protection inherent in FL. In particular, we propose a surrogate model method based on the characteristic of two-dimensional gradient flow and the low-rank property of local updates. Our method substantially boosts the ability of gradient inversion attacks on weight updates spanning many local iterations and achieves state-of-the-art (SOTA) performance. Additionally, our method runs up to 100× faster than the SOTA baseline in the common FL scenario. Our work re-evaluates and highlights the privacy risk of sharing network weights. Our code is available at https://github.com/JunyiZhu-AI/surrogate_model_extension.
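The gist of such an attack can be sketched in a few lines. What follows is a minimal, hypothetical PyTorch illustration, not the authors' implementation (see the repository above for that): it assumes an image classifier and a known label, fixes the interpolation coefficient alpha that places the surrogate model on the line segment between the initial and final weights, and optimizes dummy data so that its gradient at the surrogate aligns, in cosine similarity, with the observed weight update. The function name sme_attack and all defaults here are illustrative.

import torch
import torch.nn.functional as F

def sme_attack(model, w0, wT, label, steps=1000, lr=0.1, alpha=0.5):
    # w0, wT: lists of parameter tensors observed by the server before
    # and after the client's local training. Their difference is the
    # multi-step weight update whose direction we try to match.
    # (Illustrative sketch only; the paper also treats the surrogate's
    # position as a quantity to tune rather than a fixed constant.)
    delta = [b - a for a, b in zip(w0, wT)]

    # Surrogate weights: a convex combination of the two endpoints.
    with torch.no_grad():
        for p, a, b in zip(model.parameters(), w0, wT):
            p.copy_(alpha * a + (1 - alpha) * b)

    dummy = torch.randn(1, 3, 32, 32, requires_grad=True)
    opt = torch.optim.Adam([dummy], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(dummy), label)
        grads = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)
        # Cosine distance between the dummy gradient at the surrogate
        # and the observed weight update; minimizing it aligns the two.
        num = sum((g * d).sum() for g, d in zip(grads, delta))
        den = (sum(g.pow(2).sum() for g in grads).sqrt()
               * sum(d.pow(2).sum() for d in delta).sqrt())
        (1 - num / den).backward()
        opt.step()

    return dummy.detach()

Given parameter snapshots w0 and wT from a client round and a guessed label such as torch.tensor([0]), sme_attack(model, w0, wT, label) returns a reconstruction of the client's input. The cosine objective mirrors the one commonly used in single-step gradient inversion; the change here is that the matching gradient is taken at a surrogate model between the endpoints rather than at the starting weights.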
