Deep Leakage from Model in Federated Learning

06/10/2022
by   Zihao Zhao, et al.

Distributed machine learning has been widely adopted in recent years to handle large and complex datasets. Accordingly, the security of distributed learning has drawn increasing attention from both academia and industry. In this context, federated learning (FL) was developed as a "secure" form of distributed learning: private training data is kept locally, and only model gradients are communicated between parties. To date, however, a variety of gradient leakage attacks have been proposed against this procedure, demonstrating that it is insecure. These attacks share a common drawback: they require considerable auxiliary information, such as model weights, optimizer states, and hyperparameters (e.g., the learning rate), which is difficult to obtain in real situations. Moreover, many existing FL algorithms, such as FedAvg, avoid transmitting model gradients and instead send model weights, yet few works consider the security breach this entails. In this paper, we present two novel frameworks, DLM and DLM+, which demonstrate that transmitting model weights under the FL scenario is also likely to leak clients' private local data. In addition, we perform a number of experiments to illustrate the effectiveness and generality of our attack frameworks. Finally, we introduce two defenses against the proposed attacks and evaluate their protection effects. With appropriate customization, the proposed attack and defense schemes can also be applied to the general distributed learning scenario.
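For readers unfamiliar with the weight-sharing procedure the paper targets, the following is a minimal sketch of FedAvg-style aggregation, where clients send model weights (not gradients) and the server computes a weighted average by local dataset size. This is an illustrative reconstruction, not code from the paper; the function and variable names (`fedavg`, `client_weights`, `client_sizes`) are our own.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model weights into a global model.

    client_weights: list of per-client weight lists (one numpy array per layer).
    client_sizes:   number of local training samples per client, used as
                    the averaging weight in FedAvg.
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    averaged = []
    for layer in range(num_layers):
        # Weighted sum of this layer's weights across all clients.
        layer_avg = sum(
            (n / total) * w[layer]
            for w, n in zip(client_weights, client_sizes)
        )
        averaged.append(layer_avg)
    return averaged

# Example: two clients, a single one-layer model each.
w1 = [np.array([1.0, 2.0])]  # client 1's weights (1 sample)
w2 = [np.array([3.0, 4.0])]  # client 2's weights (3 samples)
global_w = fedavg([w1, w2], [1, 3])
# → [array([2.5, 3.5])]
```

Because each round's uploaded weights are a deterministic function of the client's local data and the previous global model, an attacker observing these weights can attempt the kind of data reconstruction the paper's DLM/DLM+ frameworks describe.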


