Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings

07/12/2021
by Raef Bassily, et al.

We study differentially private stochastic optimization in convex and non-convex settings. For the convex case, we focus on the family of non-smooth generalized linear losses (GLLs). Our algorithm for the ℓ_2 setting achieves optimal excess population risk in near-linear time, while the best known differentially private algorithms for general convex losses run in super-linear time. Our algorithm for the ℓ_1 setting has nearly-optimal excess population risk Õ(√(log(d)/n)) and circumvents the dimension-dependent lower bound of [AFKT21] for general non-smooth convex losses. In the differentially private non-convex setting, we provide several new algorithms for approximating stationary points of the population risk. For the ℓ_1-case with smooth losses and a polyhedral constraint, we provide the first nearly dimension-independent rate, Õ(log^{2/3}(d)/n^{1/3}), in linear time. For the constrained ℓ_2-case with smooth losses, we obtain a linear-time algorithm with rate Õ(1/(n^{3/10}d^{1/10}) + (d/n^2)^{1/5}). Finally, for the ℓ_2-case, we provide the first method for non-smooth weakly convex stochastic optimization with rate Õ(1/n^{1/4} + (d/n^2)^{1/6}), which matches the best existing non-private algorithm when d = O(√n). We also extend all of our results above for the non-convex ℓ_2 setting to the ℓ_p setting, where 1 < p ≤ 2, with only polylogarithmic (in the dimension) overhead in the rates.
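The abstract reports rates rather than algorithms, but results in this line of work are built on noisy first-order methods. The sketch below is a generic single-pass noisy projected SGD for the ℓ_2-constrained setting with per-example gradient clipping, a minimal illustration of that standard primitive under a Gaussian-mechanism noise calibration; it is not the authors' near-linear-time method, and dp_sgd_l2, grad_fn, and the parameter names are hypothetical.

    import numpy as np

    def dp_sgd_l2(data, grad_fn, dim, radius, clip, epsilon, delta, lr, rng=None):
        """Single-pass noisy projected SGD over the l2 ball of radius `radius`.

        Each per-example gradient is clipped to l2 norm `clip`, Gaussian noise
        calibrated to that sensitivity is added, and the iterate is projected
        back onto the constraint set.
        """
        rng = np.random.default_rng() if rng is None else rng
        # Gaussian-mechanism scale for a query of l2 sensitivity `clip`;
        # a tight accounting (e.g., RDP/moments) would refine this in practice.
        sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
        w = np.zeros(dim)
        for x in data:  # one pass: each example is touched exactly once
            g = grad_fn(w, x)
            g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # enforce sensitivity
            w -= lr * (g + rng.normal(scale=sigma, size=dim))  # noisy gradient step
            norm = np.linalg.norm(w)
            if norm > radius:                                  # project to l2 ball
                w *= radius / norm
        return w

    # Illustrative usage on a quadratic objective f(w; x) = 0.5 * ||w - x||^2:
    data = np.random.default_rng(0).normal(size=(1000, 5))
    w_hat = dp_sgd_l2(data, lambda w, x: w - x, dim=5, radius=2.0,
                      clip=1.0, epsilon=1.0, delta=1e-5, lr=0.05)

Because the pass is single-epoch, each example influences exactly one noisy step, which keeps the privacy accounting simple; multi-pass variants require composition over epochs, which is where the running-time versus risk trade-offs discussed in the abstract arise.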
