Directional Privacy for Deep Learning

11/09/2022
by Pedro Faustini, et al.

Differentially Private Stochastic Gradient Descent (DP-SGD) is a key method for applying privacy in the training of deep learning models. DP-SGD applies isotropic Gaussian noise to gradients during training; this noise can perturb a gradient in any direction, damaging utility. Metric DP, however, can provide alternative mechanisms, based on arbitrary distance metrics, that may be better suited to the task. In this paper we apply directional privacy, via a mechanism based on the von Mises-Fisher (VMF) distribution, to perturb gradients in terms of angular distance, so that gradient direction is broadly preserved. We show that this provides ϵd-privacy for deep learning training, rather than the (ϵ, δ)-privacy of the Gaussian mechanism, and that, experimentally on key datasets, the VMF mechanism can outperform the Gaussian in the utility-privacy trade-off.
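To make the contrast concrete: DP-SGD adds isotropic Gaussian noise to each clipped gradient, whereas the mechanism described above resamples only the gradient's direction. Below is a minimal illustrative sketch of this idea, not the authors' implementation. It assumes SciPy ≥ 1.11 (for scipy.stats.vonmises_fisher), and it leaves the calibration between the concentration parameter kappa and the privacy budget ϵ abstract, since that calibration is specified in the paper itself.

```python
import numpy as np
from scipy.stats import vonmises_fisher  # requires SciPy >= 1.11

def vmf_perturb(grad, kappa, rng=None):
    """Perturb the direction of a (flattened) gradient vector with
    von Mises-Fisher noise while preserving its magnitude.
    Larger kappa concentrates samples around the true direction
    (less noise); smaller kappa spreads them out (more noise)."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return grad  # zero gradient: no direction to perturb
    mu = grad / norm  # unit mean direction on the sphere
    sample = vonmises_fisher(mu=mu, kappa=kappa).rvs(size=1, random_state=rng)
    noisy_dir = np.squeeze(sample)  # shape (d,)
    return norm * noisy_dir  # restore the original magnitude

# Toy usage: perturb a 3-dimensional gradient.
rng = np.random.default_rng(0)
g = np.array([1.0, 2.0, -0.5])
print(vmf_perturb(g, kappa=100.0, rng=rng))
```

As kappa grows, the sampled direction stays close to the true gradient (higher utility, weaker privacy); as kappa approaches 0, the VMF distribution approaches the uniform distribution on the sphere.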


