Langevin Diffusion: An Almost Universal Algorithm for Private Euclidean (Convex) Optimization

04/04/2022
by Arun Ganesh, et al.

In this paper we revisit the problems of differentially private empirical risk minimization (DP-ERM) and differentially private stochastic convex optimization (DP-SCO). We show that a well-studied continuous-time algorithm from statistical physics, called Langevin diffusion (LD), simultaneously provides optimal privacy/utility trade-offs for both DP-ERM and DP-SCO, under both ϵ-DP and (ϵ,δ)-DP. Using the uniform stability properties of LD, we provide the optimal excess population risk guarantee for ℓ_2-Lipschitz convex losses under ϵ-DP (even up to log n factors), thus improving on Asi et al.

Along the way we provide various technical tools that can be of independent interest: i) a new Rényi divergence bound for LD when run on loss functions over two neighboring data sets; ii) excess empirical risk bounds for last-iterate LD, analogous to those of Shamir and Zhang for noisy stochastic gradient descent (SGD); and iii) a two-phase excess risk analysis of LD, where in the first phase the diffusion has not converged in any reasonable sense to a stationary distribution, and in the second phase it has converged to a variant of the Gibbs distribution.

Our universality results crucially rely on the dynamics of LD. When it has converged to a stationary distribution, we obtain the optimal bounds under ϵ-DP. When it is run for only a very short time ∝ 1/p, we obtain the optimal bounds under (ϵ,δ)-DP. Here, p is the dimensionality of the model space.

Our work initiates a systematic study of DP continuous-time optimization. We believe this may have ramifications for the design of discrete-time DP optimization algorithms, analogous to the non-private setting, where continuous-time dynamical viewpoints have helped in designing new algorithms, including the celebrated mirror descent and Polyak's momentum method.
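For intuition about the dynamics the abstract refers to: the standard Langevin diffusion over a loss 𝓛(·; D) follows the stochastic differential equation dθ_t = −∇𝓛(θ_t; D) dt + √(2/β) dW_t, whose stationary distribution is the Gibbs measure ∝ exp(−β 𝓛(θ; D)). Below is a minimal Python sketch of the standard Euler-Maruyama discretization of this diffusion (a noisy gradient descent with projection). The quadratic loss, step size, inverse temperature β, and projection radius are illustrative choices for exposition, not the paper's calibrated parameters or its privacy accounting.

    import numpy as np

    def langevin_step(x, grad, eta, beta, rng):
        # One Euler-Maruyama step of dX_t = -grad L(X_t) dt + sqrt(2/beta) dW_t:
        # gradient step of size eta plus Gaussian noise of std sqrt(2*eta/beta).
        noise = rng.normal(size=x.shape) * np.sqrt(2.0 * eta / beta)
        return x - eta * grad(x) + noise

    def run_langevin(grad, x0, eta, beta, n_steps, radius=None, seed=0):
        # Run the discretized diffusion; optionally project each iterate onto
        # an l2 ball of the given radius (a bounded convex constraint set).
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        for _ in range(n_steps):
            x = langevin_step(x, grad, eta, beta, rng)
            if radius is not None:
                norm = np.linalg.norm(x)
                if norm > radius:
                    x *= radius / norm
        return x

    # Toy example: for the loss L(x) = ||x - c||^2 / 2, the Gibbs measure
    # exp(-beta * L(x)) is a Gaussian centered at c, so iterates should
    # settle near c once the chain has mixed.
    c = np.array([1.0, -2.0])
    grad = lambda x: x - c  # gradient of the quadratic loss
    x_final = run_langevin(grad, x0=np.zeros(2), eta=1e-2, beta=50.0,
                           n_steps=20_000, radius=5.0)
    print(x_final)  # lands near c = (1, -2), up to noise of scale 1/sqrt(beta)

In this discretization, running many steps so the chain approaches the Gibbs measure mirrors the paper's converged (ϵ-DP) regime, while stopping after very few steps mirrors its short-time ((ϵ,δ)-DP) regime.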

Related research:
- Private Stochastic Convex Optimization with Optimal Rates (08/27/2019)
- Faster Differentially Private Convex Optimization via Second-Order Methods (05/22/2023)
- Non-Euclidean Differentially Private Stochastic Convex Optimization (03/01/2021)
- On Private Online Convex Optimization: Optimal Algorithms in ℓ_p-Geometry and High Dimensional Contextual Bandits (06/16/2022)
- Concentration of the Langevin Algorithm's Stationary Distribution (12/24/2022)
- Efficient Private ERM for Smooth Objectives (03/29/2017)
- Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds (06/24/2020)
