Stability Enhanced Privacy and Applications in Private Stochastic Gradient Descent

06/25/2020
by Lauren Watson, et al.

Private machine learning involves adding noise during training, which reduces accuracy. Intuitively, greater stability can imply greater privacy and thereby improve this privacy-utility tradeoff. We study the role of stability in private empirical risk minimization (ERM), where differential privacy is achieved by output perturbation, and establish a corresponding theoretical result: for strongly convex loss functions, an algorithm with uniform stability β admits a bound of O(√β) on the scale of noise required for differential privacy. The result applies both to explicit regularization and to implicitly stabilized ERM, such as adaptations of stochastic gradient descent (SGD) that are known to be stable. It thus generalizes recent results that improve privacy through modifications of SGD, and establishes stability as the unifying perspective. It also yields new privacy guarantees for optimization methods that have uniform stability guarantees but for which no corresponding differential privacy guarantee was previously known. Experimental results validate the utility of stability-enhanced privacy on several problems, including elastic nets and feature selection.
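
To make the noise calibration concrete, below is a minimal sketch of stability-based output perturbation for L2-regularized logistic regression. It is an illustration under stated assumptions, not the authors' construction: it uses the classical uniform-stability bound β = L²/(λn) for λ-strongly-convex ERM with an L-Lipschitz loss, and an L2-sensitivity bound √(2β/λ) derived from strong convexity, as one way to instantiate an O(√β) Gaussian noise scale. All function names and constants are illustrative.

```python
# Sketch of stability-based output perturbation (illustrative, not the paper's exact construction).
# Assumptions: L2-regularized logistic loss (lambda-strongly convex), rows normalized so ||x|| <= 1
# (hence Lipschitz constant L = 1), uniform-stability bound beta = L^2 / (lam * n), and
# L2 sensitivity sqrt(2 * beta / lam) from strong convexity -- an O(sqrt(beta)) noise scale.
import numpy as np

def train_sgd(X, y, lam, epochs=20, lr=0.1):
    """Plain SGD on the L2-regularized logistic loss; returns the learned weights."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in np.random.permutation(n):
            margin = y[i] * X[i].dot(w)
            # Gradient of log(1 + exp(-y w.x)) plus the L2 regularizer.
            grad = -y[i] * X[i] / (1.0 + np.exp(margin)) + lam * w
            w -= lr * grad
    return w

def private_release(w, beta, lam, eps, delta, rng):
    """Output perturbation: Gaussian noise calibrated to a stability-derived sensitivity."""
    sensitivity = np.sqrt(2.0 * beta / lam)          # illustrative O(sqrt(beta)) bound
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return w + rng.normal(0.0, sigma, size=w.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, lam = 500, 5, 0.1
    X = rng.normal(size=(n, d))
    X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)  # enforce ||x|| <= 1
    y = np.sign(X @ rng.normal(size=d) + 0.1 * rng.normal(size=n))
    w = train_sgd(X, y, lam)
    L = 1.0                    # Lipschitz constant of logistic loss when ||x|| <= 1
    beta = L**2 / (lam * n)    # classical uniform-stability bound for strongly convex ERM
    w_private = private_release(w, beta, lam, eps=1.0, delta=1e-5, rng=rng)
    print("noise-free weights:", np.round(w, 3))
    print("private weights:   ", np.round(w_private, 3))
```

Note the tradeoff the sketch exposes: stronger regularization (larger λ) or more data (larger n) shrinks β and therefore the noise scale, which is exactly the stability-privacy connection the paper formalizes.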

Related research

01/16/2020
A Better Bound Gives a Hundred Rounds: Enhanced Privacy Guarantees via f-Divergences
We derive the optimal differential privacy (DP) parameters of a mechanis...

02/23/2015
Learning with Differential Privacy: Stability, Learnability and the Sufficiency and Necessity of ERM Principle
While machine learning has proven to be a powerful data-driven solution ...

03/07/2022
Continual and Sliding Window Release for Private Empirical Risk Minimization
It is difficult to continually update private machine learning models wi...

02/01/2023
Privacy Risk for anisotropic Langevin dynamics using relative entropy bounds
The privacy preserving properties of Langevin dynamics with additive iso...

08/20/2018
Privacy Amplification by Iteration
Many commonly used learning algorithms work by iteratively updating an i...

12/05/2019
On the Intrinsic Privacy of Stochastic Gradient Descent
Private learning algorithms have been proposed that ensure strong differ...

02/08/2023
DIFF2: Differential Private Optimization via Gradient Differences for Nonconvex Distributed Learning
Differential private optimization for nonconvex smooth objective is cons...
