Gradient Leakage Attack Resilient Deep Learning

12/25/2021
by Wenqi Wei, et al.

Gradient leakage attacks are considered among the most damaging privacy threats in deep learning: attackers covertly eavesdrop on gradient updates during iterative training without degrading model training quality, yet secretly reconstruct sensitive training data from the leaked gradients with a high attack success rate. Although deep learning with differential privacy is the de facto standard for publishing deep learning models with a differential privacy guarantee, we show that differentially private algorithms with fixed privacy parameters are vulnerable to gradient leakage attacks. This paper investigates alternative approaches to gradient leakage resilient deep learning with differential privacy (DP). First, we analyze existing implementations of deep learning with differential privacy, which use a fixed noise variance to inject constant noise into the gradients of all layers using fixed privacy parameters. Despite the DP guarantee provided, this method suffers from low accuracy and remains vulnerable to gradient leakage attacks. Second, we present a gradient leakage resilient deep learning approach with a differential privacy guarantee that uses dynamic privacy parameters. Unlike fixed-parameter strategies, which result in constant noise variance, dynamic parameter strategies offer alternative ways to introduce adaptive noise variance and adaptive noise injection that are closely aligned with the trend of gradient updates during differentially private model training. Finally, we describe four complementary metrics to evaluate and compare the alternative approaches.
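To ground the contrast between fixed and dynamic privacy parameters, the following NumPy sketch shows a single DP-SGD step (per-example gradient clipping followed by Gaussian noise, in the style of Abadi et al.) together with a simple linearly decaying noise schedule. The abstract does not specify the paper's actual dynamic-parameter strategies, so `decaying_sigma`, its parameters, and the toy usage at the end are illustrative assumptions only, not the authors' method.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm, sigma, rng):
    """One DP-SGD step: clip each per-example gradient to L2 norm
    clip_norm, average, then add Gaussian noise whose standard
    deviation is sigma * clip_norm scaled by the batch size."""
    clipped = [
        g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        for g in per_example_grads
    ]
    avg_grad = np.mean(clipped, axis=0)
    noise = rng.normal(
        0.0, sigma * clip_norm / len(per_example_grads), size=avg_grad.shape
    )
    return params - lr * (avg_grad + noise)

def decaying_sigma(sigma0, step, total_steps, sigma_min=0.5):
    """Hypothetical dynamic-parameter schedule: shrink the noise scale
    linearly as training progresses, loosely tracking the tendency of
    gradient magnitudes to decrease. A stand-in, not the paper's method."""
    frac = step / max(1, total_steps)
    return sigma_min + (sigma0 - sigma_min) * (1.0 - frac)

# Usage: fixed sigma everywhere vs. a per-step decaying sigma.
rng = np.random.default_rng(0)
params = np.zeros(10)
grads = [rng.normal(size=10) for _ in range(32)]  # toy per-example grads
params_fixed = dp_sgd_step(params, grads, lr=0.1, clip_norm=1.0,
                           sigma=1.1, rng=rng)
params_adaptive = dp_sgd_step(params, grads, lr=0.1, clip_norm=1.0,
                              sigma=decaying_sigma(1.1, step=500,
                                                   total_steps=1000),
                              rng=rng)
```

With a fixed sigma, every layer and every step receives the same noise variance; the decaying schedule above is one simple way adaptive noise variance could track the shrinking gradient updates the abstract describes, at the cost of a more involved privacy accounting.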


