Dropout is NOT All You Need to Prevent Gradient Leakage

08/12/2022
by Daniel Scheliga, et al.

Gradient inversion attacks on federated learning systems reconstruct client training data from exchanged gradient information. To defend against such attacks, a variety of defense mechanisms have been proposed. However, they usually lead to an unacceptable trade-off between privacy and model utility. Recent observations suggest that dropout, if added to neural networks, could mitigate gradient leakage and improve model utility. Unfortunately, this phenomenon has not been systematically studied yet. In this work, we thoroughly analyze the effect of dropout on iterative gradient inversion attacks. We find that state-of-the-art attacks are not able to reconstruct the client data due to the stochasticity induced by dropout during model training. Nonetheless, we argue that dropout does not offer reliable protection if the dropout-induced stochasticity is adequately modeled during attack optimization. Consequently, we propose a novel Dropout Inversion Attack (DIA) that jointly optimizes for client data and dropout masks to approximate the stochastic client model. We conduct an extensive systematic evaluation of our attack on four seminal model architectures and three image classification datasets of increasing complexity. We find that our proposed attack bypasses the protection seemingly induced by dropout and reconstructs client data with high fidelity. Our work demonstrates that privacy-inducing changes to model architectures alone cannot be assumed to reliably protect against gradient leakage and should therefore be combined with complementary defense mechanisms.
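
The abstract does not include implementation details, but the core idea of DIA, jointly optimizing dummy inputs and dropout masks so that the resulting gradients match the gradients shared by the client, can be illustrated with a small sketch. The PyTorch example below is a hypothetical illustration, not the authors' code: the toy model (ToyNet), the FixedMaskDropout layer with a relaxed, optimizable mask, and all hyperparameters are assumptions made for demonstration, and the attacker is assumed to know the true label, as is common in gradient inversion work.

```python
# Hypothetical sketch of a DIA-style gradient inversion attack:
# jointly optimize a dummy input and a relaxed dropout mask so that the
# resulting gradients match the gradients observed from the client.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FixedMaskDropout(nn.Module):
    """Dropout whose mask is an optimizable parameter instead of random noise."""
    def __init__(self, num_features, p=0.5):
        super().__init__()
        self.p = p
        # Relaxed (continuous) mask; real dropout would sample a Bernoulli mask.
        self.mask = nn.Parameter(torch.ones(num_features))

    def forward(self, x):
        # Inverted-dropout scaling so magnitudes match training-time behavior.
        return x * self.mask / (1.0 - self.p)

class ToyNet(nn.Module):
    """Small illustrative MLP with a single dropout layer (an assumption)."""
    def __init__(self, in_dim=32, hidden=64, classes=10, p=0.5):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.drop = FixedMaskDropout(hidden, p)
        self.fc2 = nn.Linear(hidden, classes)

    def forward(self, x):
        return self.fc2(self.drop(F.relu(self.fc1(x))))

def client_gradients(model, x, y):
    """Gradients the client would share for one step, under its dropout realization."""
    loss = F.cross_entropy(model(x), y)
    weights = [p for n, p in model.named_parameters() if "mask" not in n]
    return torch.autograd.grad(loss, weights)

def dia_attack(model, target_grads, y, in_dim=32, steps=2000, lr=0.1):
    """Jointly optimize dummy data and the dropout mask to match observed gradients."""
    x_dummy = torch.randn(1, in_dim, requires_grad=True)
    opt = torch.optim.Adam([x_dummy, model.drop.mask], lr=lr)
    weights = [p for n, p in model.named_parameters() if "mask" not in n]
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x_dummy), y)
        grads = torch.autograd.grad(loss, weights, create_graph=True)
        # Gradient-matching objective: squared error between dummy and observed gradients.
        rec_loss = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        rec_loss.backward()
        opt.step()
    return x_dummy.detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    victim = ToyNet()
    x_true = torch.randn(1, 32)
    y_true = torch.tensor([3])  # assumed known to the attacker
    # Simulate the client's dropout realization with a random binary mask.
    with torch.no_grad():
        victim.drop.mask.copy_((torch.rand(64) > victim.drop.p).float())
    target_grads = [g.detach() for g in client_gradients(victim, x_true, y_true)]
    # The attacker starts from a fresh all-ones mask and reconstructs the input.
    with torch.no_grad():
        victim.drop.mask.fill_(1.0)
    x_rec = dia_attack(victim, target_grads, y_true)
    print("reconstruction MSE:", F.mse_loss(x_rec, x_true).item())
```

The key design choice in this sketch is relaxing the binary dropout mask to a continuous parameter so that it can be optimized by gradient descent alongside the dummy input; this mirrors the paper's argument that dropout-induced stochasticity can be explicitly modeled during attack optimization rather than treated as irreducible noise.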

Related research

06/01/2022 · Defense Against Gradient Leakage Attacks via Learning to Obscure Data
Federated learning is considered as an effective privacy-preserving lear...

08/09/2022 · Combining Variational Modeling with Partial Gradient Perturbation to Prevent Deep Gradient Leakage
Exploiting gradient leakage to reconstruct supposedly private training d...

08/10/2021 · PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage
Collaborative training of neural networks leverages distributed data by ...

06/08/2022 · Gradient Obfuscation Gives a False Sense of Security in Federated Learning
Federated learning has been proposed as a privacy-preserving machine lea...

03/29/2022 · Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage
Federated Learning (FL) framework brings privacy benefits to distributed...

06/11/2022 · Bilateral Dependency Optimization: Defending Against Model-inversion Attacks
Through using only a well-trained classifier, model-inversion (MI) attac...

09/11/2020 · Improving Robustness to Model Inversion Attacks via Mutual Information Regularization
This paper studies defense mechanisms against model inversion (MI) attac...
