Differentially Private Sharpness-Aware Training

06/09/2023
by Jinseong Park, et al.

Training deep learning models with differential privacy (DP) degrades their performance. The training dynamics of models under DP differ markedly from those of standard training, yet the geometric properties of private learning remain largely unexplored. In this paper, we investigate sharpness, a key factor in achieving better generalization, in private learning. We show that flat minima can help reduce the negative effects of per-example gradient clipping and the addition of Gaussian noise. We then verify the effectiveness of Sharpness-Aware Minimization (SAM) for seeking flat minima in private learning. However, we also find that SAM is detrimental to the privacy budget and computational time because of its two-step optimization. We therefore propose a new sharpness-aware training method that mitigates this privacy-optimization trade-off. Our experimental results demonstrate that the proposed method improves the performance of deep learning models with DP, both when training from scratch and when fine-tuning. Code is available at https://github.com/jinseongP/DPSAT.
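To make the mechanics behind the abstract concrete, below is a minimal PyTorch-style sketch of the two ingredients it refers to: a DP-SGD step with per-example gradient clipping and Gaussian noise, and a naive SAM wrapper whose extra ascent pass is what doubles the gradient computations per iteration (and, in a fully private variant, the privacy queries). The function names and hyperparameters (`dp_sgd_step`, `sam_dp_step`, `clip_norm`, `noise_mult`, `rho`) are illustrative assumptions, not the authors' DPSAT implementation.

```python
import torch

def dp_sgd_step(model, loss_fn, xs, ys, lr=0.1, clip_norm=1.0, noise_mult=1.0):
    """One DP-SGD step: clip each per-example gradient, sum, add Gaussian noise."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in zip(xs, ys):  # microbatch loop to obtain per-example gradients
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.norm(torch.stack([g.norm() for g in grads]))
        scale = torch.clamp(clip_norm / (total_norm + 1e-6), max=1.0)  # per-example clipping
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    with torch.no_grad():
        for p, s in zip(params, summed):
            noisy = s + noise_mult * clip_norm * torch.randn_like(s)  # Gaussian mechanism
            p.add_(-lr * noisy / len(xs))

def sam_dp_step(model, loss_fn, xs, ys, rho=0.05, **dp_kwargs):
    """Naive SAM on top of DP-SGD: an ascent pass, then a descent pass.
    The ascent gradient below touches the data too, so a truly private variant
    would have to clip and noise it as well -- consuming extra privacy budget
    and doubling compute, which is the trade-off the paper targets."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(xs), ys)
    grads = torch.autograd.grad(loss, params)
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12
    eps = [rho * g / grad_norm for g in grads]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)                      # ascend to the nearby "sharp" point
    dp_sgd_step(model, loss_fn, xs, ys, **dp_kwargs)  # private descent from there
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)                      # undo the perturbation
```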
