SA-DPSGD: Differentially Private Stochastic Gradient Descent based on Simulated Annealing

11/14/2022
by Jie Fu, et al.

Differential privacy (DP) provides a formal privacy guarantee that prevents adversaries with access to machine learning models from extracting information about individual training points. Differentially private stochastic gradient descent (DPSGD) is the most popular method for training image recognition models with differential privacy. However, existing DPSGD schemes lead to significant performance degradation, which hinders the adoption of differential privacy. In this paper, we propose a simulated annealing-based differentially private stochastic gradient descent scheme (SA-DPSGD), which accepts a candidate update with a probability that depends both on the quality of the update and on the number of iterations. Through this random update screening, the differentially private gradient descent proceeds in the right direction at each iteration, ultimately yielding a more accurate model. In our experiments, under the same hyperparameters, our scheme achieves test accuracies of 98.35% and 87.41%, compared to the state-of-the-art result of 98.12%. With freely adjusted hyperparameters, our scheme achieves even higher accuracies, up to 98.89%. We believe this is a meaningful contribution toward closing the accuracy gap between private and non-private image classification.
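To make the acceptance idea concrete, below is a minimal NumPy sketch of DP-SGD with a simulated-annealing acceptance test. The abstract does not specify the acceptance function, so the Metropolis-style rule exp(-delta/T), the 1/t temperature schedule, the logistic-regression model, and the use of the (non-privatized) training loss as the "update quality" are all illustrative assumptions, not the paper's exact method.

```python
# Hypothetical sketch: DP-SGD candidate updates screened by a simulated-annealing
# acceptance rule. The loss used to score candidates is computed in the clear here
# for simplicity; a real DP pipeline would have to account for (or privatize) it.
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grads(w, X, y):
    """Mean logistic loss and per-example gradients."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grads = (p - y)[:, None] * X                      # shape: (batch, dim)
    return losses.mean(), grads

def dp_noisy_update(w, X, y, lr=0.5, clip=1.0, noise_mult=1.0):
    """One DP-SGD candidate step: clip per-example gradients, add Gaussian noise."""
    _, grads = loss_and_grads(w, X, y)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip)     # per-example clipping
    noise = rng.normal(0.0, noise_mult * clip, size=w.shape)
    g = (grads.sum(axis=0) + noise) / len(X)          # noisy average gradient
    return w - lr * g

def sa_dpsgd(X, y, steps=300, batch=64, T0=1.0):
    w = np.zeros(X.shape[1])
    cur_loss, _ = loss_and_grads(w, X, y)
    for t in range(1, steps + 1):
        idx = rng.choice(len(X), size=batch, replace=False)
        w_cand = dp_noisy_update(w, X[idx], y[idx])
        cand_loss, _ = loss_and_grads(w_cand, X, y)
        delta = cand_loss - cur_loss                  # candidate "quality"
        temperature = T0 / t                          # cools as iterations grow
        # Always accept improvements; accept worse updates with shrinking probability.
        if delta <= 0 or rng.random() < np.exp(-delta / temperature):
            w, cur_loss = w_cand, cand_loss
    return w

# Toy usage on synthetic data.
X = rng.normal(size=(1000, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(float)
w = sa_dpsgd(X, y)
print("final training loss:", loss_and_grads(w, X, y)[0])
```

Because the temperature shrinks with the iteration count, poor noisy updates are tolerated early (helping escape bad regions) but rejected later, which matches the abstract's description of an acceptance probability that depends on both update quality and iteration number.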


research · 04/28/2022
Unlocking High-Accuracy Differentially Private Image Classification through Scale
Differential Privacy (DP) provides a formal privacy guarantee preventing...

research · 07/19/2023
The importance of feature preprocessing for differentially private linear optimization
Training machine learning models with differential privacy (DP) has rece...

research · 08/28/2018
Concentrated Differentially Private Gradient Descent with Adaptive per-Iteration Privacy Budget
Iterative algorithms, like gradient descent, are common tools for solvin...

research · 05/09/2019
Differentially Private Learning with Adaptive Clipping
We introduce a new adaptive clipping technique for training learning mod...

research · 06/12/2020
Differentially Private Stochastic Coordinate Descent
In this paper we tackle the challenge of making the stochastic coordinat...

research · 06/15/2022
Disparate Impact in Differential Privacy from Gradient Misalignment
As machine learning becomes more widespread throughout society, aspects ...

research · 06/14/2023
Augment then Smooth: Reconciling Differential Privacy with Certified Robustness
Machine learning models are susceptible to a variety of attacks that can...
