Private Stochastic Convex Optimization: Optimal Rates in ℓ_1 Geometry

03/02/2021
by   Hilal Asi, et al.

Stochastic convex optimization over an ℓ_1-bounded domain is ubiquitous in machine learning applications such as LASSO, but it remains poorly understood when learning with differential privacy. We show that, up to logarithmic factors, the optimal excess population loss of any (ε,δ)-differentially private optimizer is √(log(d)/n) + √d/(εn). The upper bound is based on a new algorithm that combines the iterative localization approach of <cit.> with a new analysis of private regularized mirror descent. It applies to ℓ_p-bounded domains for p ∈ [1,2] and queries at most n^(3/2) gradients, improving over the best previously known algorithm for the ℓ_2 case, which needs n^2 gradients. Further, we show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by √(log(d)/n) + (log(d)/(εn))^(2/3). This bound is achieved by a new variance-reduced version of the Frank-Wolfe algorithm that requires just a single pass over the data. We also show that the lower bound in this case is the minimum of the two rates above.
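
To make the ℓ_1 geometry concrete, here is a minimal sketch of noisy mirror descent over the ℓ_1 ball, the basic primitive behind the non-smooth upper bound. This is not the paper's localization algorithm: the least-squares loss, the single-sample gradients, and the noise scale sigma are illustrative assumptions, and a real (ε,δ)-DP guarantee would additionally require gradient clipping and a calibrated privacy accountant.

```python
import numpy as np

def noisy_eg_step(w, grad, lr, sigma, rng):
    """One exponentiated-gradient step (mirror descent with the entropy
    mirror map) on the 2d-dimensional simplex lifting of the l1 ball,
    with Gaussian noise added to the gradient for privacy."""
    noisy = grad + rng.normal(scale=sigma, size=grad.shape)
    # Lift: a point x in the l1 ball is x = u - v with (u, v) on the
    # simplex, so the lifted gradient is (g, -g).
    lifted = np.concatenate([noisy, -noisy])
    w = w * np.exp(-lr * lifted)
    return w / w.sum()

def private_mirror_descent(X, y, steps, lr, sigma, seed=0):
    """Minimize a least-squares loss over the unit l1 ball with noisy
    exponentiated-gradient steps (noise scale sigma is illustrative)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.full(2 * d, 1.0 / (2 * d))   # uniform start on the simplex
    for _ in range(steps):
        x = w[:d] - w[d:]               # map back to the l1 ball
        i = rng.integers(n)             # single-sample stochastic gradient
        g = (X[i] @ x - y[i]) * X[i]
        w = noisy_eg_step(w, g, lr, sigma, rng)
    return w[:d] - w[d:]
```

The entropy mirror map (multiplicative updates on the lifted simplex) is what makes the non-private term scale with √(log(d)/n) rather than the √(d/n) that Euclidean noisy SGD would pay over the same domain.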
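For the smooth case, the sketch below shows the shape of a one-pass, variance-reduced private Frank-Wolfe update: a recursive gradient estimate combined with a noisy choice of ℓ_1-ball vertex in place of the exact linear-minimization oracle. Again hedged: the loss, the batch schedule, and the Laplace noise scale are illustrative stand-ins, not the paper's calibrated parameters.

```python
import numpy as np

def vr_private_frank_wolfe(X, y, epsilon, radius=1.0, seed=0):
    """One-pass Frank-Wolfe over the l1 ball of radius `radius`, with a
    recursive (SPIDER-style) variance-reduced gradient estimate and a
    noisy argmax standing in for the exact linear-minimization oracle."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    batches = np.array_split(rng.permutation(n), max(1, int(np.sqrt(n))))
    x, x_prev, v = np.zeros(d), None, None
    for t, idx in enumerate(batches):    # each sample is touched once
        Xb, yb = X[idx], y[idx]
        grad = lambda z: Xb.T @ (Xb @ z - yb) / len(idx)  # least-squares gradient
        # Variance reduction: correct the running estimate by the
        # gradient difference measured on the fresh batch.
        v = grad(x) if v is None else v + grad(x) - grad(x_prev)
        scores = np.abs(v) + rng.laplace(scale=1.0 / (epsilon * len(idx)), size=d)
        j = int(np.argmax(scores))       # noisy choice of l1-ball vertex
        vertex = np.zeros(d)
        vertex[j] = -radius * np.sign(v[j])
        eta = 2.0 / (t + 2)              # standard Frank-Wolfe step size
        x_prev, x = x, (1 - eta) * x + eta * vertex
    return x
```

Frank-Wolfe suits the ℓ_1 geometry because its iterates are sparse convex combinations of vertices, and privatizing it only requires a noisy argmax over d gradient coordinates rather than perturbing a full d-dimensional gradient.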


Related research

06/02/2021 · Improved Rates for Differentially Private Stochastic Convex Optimization with Heavy-Tailed Data
07/31/2021 · Faster Rates of Differentially Private Stochastic Convex Optimization
07/18/2022 · Private Convex Optimization in General Norms
05/10/2020 · Private Stochastic Convex Optimization: Optimal Rates in Linear Time
02/22/2020 · Private Stochastic Convex Optimization: Efficient Algorithms for Non-smooth Objectives
05/06/2022 · Differentially Private Generalized Linear Models Revisited
06/17/2021 · Shuffle Private Stochastic Convex Optimization
