Gradient Leakage Attack Resilient Deep Learning

by Wenqi Wei, et al.
Georgia Institute of Technology

Gradient leakage attacks are among the most severe privacy threats in deep learning: attackers covertly observe gradient updates during iterative training without degrading model training quality, yet can reconstruct sensitive training data from the leaked gradients with a high attack success rate. Although deep learning with differential privacy is the de facto standard for publishing deep learning models with a differential privacy guarantee, we show that differentially private algorithms with fixed privacy parameters are vulnerable to gradient leakage attacks. This paper investigates alternative approaches to gradient leakage resilient deep learning with differential privacy (DP). First, we analyze existing implementations of deep learning with differential privacy, which use a fixed noise variance to inject constant noise into the gradients of all layers under fixed privacy parameters. Despite the DP guarantee provided, such methods suffer from low accuracy and remain vulnerable to gradient leakage attacks. Second, we present a gradient leakage resilient deep learning approach with a differential privacy guarantee that uses dynamic privacy parameters. Unlike fixed-parameter strategies, which result in constant noise variance, dynamic parameter strategies introduce adaptive noise variance and adaptive noise injection closely aligned with the trend of gradient updates during differentially private model training. Finally, we describe four complementary metrics to evaluate and compare the alternative approaches.
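To make the contrast concrete, the following is a minimal sketch of DP-SGD-style gradient perturbation with a fixed noise schedule versus one hypothetical dynamic schedule in which noise variance decays over training. The function names, the linear decay rule, and all parameter values are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def clip_and_noise(grad, clip_norm=1.0, sigma=1.0, rng=None):
    # DP-SGD-style perturbation: clip the gradient to clip_norm in L2 norm,
    # then add Gaussian noise scaled by sigma * clip_norm.
    rng = rng if rng is not None else np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad / max(1.0, norm / clip_norm)
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

def fixed_sigma(t, total_steps, sigma0=1.0):
    # Fixed-parameter baseline: constant noise variance at every iteration.
    return sigma0

def decaying_sigma(t, total_steps, sigma0=1.0, sigma_min=0.3):
    # Hypothetical dynamic schedule: noise decays linearly from sigma0 to
    # sigma_min, loosely tracking the shrinking gradient magnitudes over
    # training (an assumption for illustration only).
    return sigma_min + (sigma0 - sigma_min) * (1.0 - t / total_steps)
```

With `sigma = 0` the function reduces to plain gradient clipping, which makes the clipping bound easy to verify; the dynamic schedule simply swaps in `decaying_sigma(t, total_steps)` at each step in place of the constant `fixed_sigma` value.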




