On Dynamic Noise Influence in Differentially Private Learning

01/19/2021
by Junyuan Hong, et al.

Protecting privacy while maintaining model performance has become increasingly critical in applications that involve sensitive data. Private Gradient Descent (PGD) is a widely used private learning framework that perturbs gradients with noise calibrated to satisfy differential privacy. Recent studies show that dynamic privacy schedules with decreasing noise magnitudes can improve the loss at the final iteration, yet theoretical understanding of why such schedules are effective and how they interact with optimization algorithms remains limited. In this paper, we provide a comprehensive analysis of noise influence in dynamic privacy schedules to answer these questions. We first derive a dynamic noise schedule that minimizes the utility upper bound of PGD, and show how the noise influence from each optimization step collectively determines the utility of the final model. Our analysis also reveals how these per-step influences change when momentum is used. We empirically show that the connection holds for general non-convex losses, and that the influence depends strongly on the loss curvature.
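To make the setup concrete, the following is a minimal sketch of PGD with per-step gradient clipping, a decreasing Gaussian noise schedule, and optional heavy-ball momentum. The polynomial decay sigma_t = sigma0 / (t + 1)^decay and the toy quadratic loss are illustrative assumptions only; the paper derives its schedule by minimizing a utility upper bound, so the exact form may differ.

```python
import numpy as np

def private_gradient_descent(grad_fn, w0, T=200, lr=0.1, clip=1.0,
                             sigma0=1.0, decay=0.55, momentum=0.0,
                             rng=None):
    """PGD sketch with a hypothetical decreasing noise schedule.

    grad_fn  : callable returning the gradient at w.
    clip     : L2 clipping bound, so each step's sensitivity is bounded.
    sigma0,
    decay    : assumed schedule sigma_t = sigma0 / (t + 1)**decay.
    momentum : heavy-ball coefficient (0 disables momentum).
    """
    rng = rng or np.random.default_rng(0)
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for t in range(T):
        g = grad_fn(w)
        # Clip the gradient to bound its sensitivity by `clip`.
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
        # Decreasing noise magnitude: later steps, whose noise has more
        # influence on the final iterate, receive smaller perturbations.
        sigma = sigma0 / (t + 1) ** decay
        g_noisy = g + rng.normal(0.0, sigma * clip, size=g.shape)
        v = momentum * v + g_noisy
        w = w - lr * v
    return w

# Toy usage: minimize the quadratic loss 0.5 * ||w - w_star||^2.
w_star = np.array([1.0, -2.0])
w_hat = private_gradient_descent(lambda w: w - w_star, w0=np.zeros(2))
```

With momentum > 0, noise injected at early steps is carried forward through the velocity term, which is one way to see why, as the paper observes, momentum changes how each step's noise influences the final model.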



Related research

08/21/2020 · Low Influence, Utility, and Independence in Differential Privacy: A Curious Case of $\binom{3}{2}$
We study the relationship between randomized low influence functions and...

02/02/2023 · Convergence of Gradient Descent with Linearly Correlated Noise and Applications to Differentially Private Learning
We study stochastic optimization with linearly correlated noise. Our stu...

11/26/2019 · Gradient Perturbation is Underrated for Differentially Private Convex Optimization
Gradient perturbation, widely used for differentially private optimizati...

04/03/2022 · A Differentially Private Framework for Deep Learning with Convexified Loss Functions
Differential privacy (DP) has been applied in deep learning for preservi...

08/28/2018 · Concentrated Differentially Private Gradient Descent with Adaptive per-Iteration Privacy Budget
Iterative algorithms, like gradient descent, are common tools for solvin...

02/12/2022 · Private Adaptive Optimization with Side Information
Adaptive optimization methods have become the default solvers for many m...
