Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds

06/24/2020
by Yingxue Zhou, et al.

We study differentially private (DP) algorithms for stochastic non-convex optimization. In this problem, the goal is to minimize the population loss over a p-dimensional space given n i.i.d. samples drawn from a distribution. We improve upon the population gradient bound of √(p)/√(n) from prior work and obtain a sharper rate of ∜(p)/√(n). We obtain this rate by providing the first analyses of a collection of private gradient-based methods, including the adaptive algorithms DP RMSProp and DP Adam. Our proof technique leverages the connection between differential privacy and adaptive data analysis to bound the gradient estimation error at every iterate, which circumvents the weaker generalization bound that would follow from the standard uniform convergence argument. Finally, we evaluate the proposed algorithms on two popular deep learning tasks and demonstrate the empirical advantages of DP adaptive gradient methods over standard DP SGD.
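As a rough illustration of the kind of private adaptive update the abstract refers to, the sketch below shows a generic DP Adam-style step: per-sample gradient clipping to bound sensitivity, Gaussian noise added to the averaged gradient, then standard Adam moment updates on the privatized gradient. The function name dp_adam_step, the hyperparameter defaults, and the noise scale are illustrative assumptions, not the paper's exact algorithm; calibrating the noise multiplier to a target (ε, δ) privacy budget (e.g., via a moments accountant) is omitted.

```python
import numpy as np

def dp_adam_step(per_sample_grads, m, v, t, lr=1e-3, clip_norm=1.0,
                 noise_multiplier=1.0, beta1=0.9, beta2=0.999,
                 adam_eps=1e-8, rng=None):
    """One hypothetical DP Adam-style step: clip per-sample gradients,
    average, add Gaussian noise, then apply standard Adam moments."""
    rng = rng or np.random.default_rng()
    batch_size, dim = per_sample_grads.shape
    # Per-sample l2 clipping bounds each sample's contribution (sensitivity).
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads / np.maximum(1.0, norms / clip_norm)
    # Gaussian noise with std proportional to clip_norm privatizes the mean;
    # choosing noise_multiplier for a given (epsilon, delta) is not shown.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch_size, size=dim)
    g_priv = clipped.mean(axis=0) + noise
    # Standard Adam moment updates applied to the privatized gradient.
    m = beta1 * m + (1 - beta1) * g_priv
    v = beta2 * v + (1 - beta2) * g_priv ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    step = lr * m_hat / (np.sqrt(v_hat) + adam_eps)
    return step, m, v
```

Under the same assumptions, dropping the first-moment term (beta1 = 0) gives a DP RMSProp-style variant, while applying g_priv directly with a fixed learning rate recovers plain DP SGD.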


