ReSQueing Parallel and Private Stochastic Convex Optimization

01/01/2023
by   Yair Carmon, et al.

We introduce a new tool for stochastic convex optimization (SCO): a Reweighted Stochastic Query (ReSQue) estimator for the gradient of a function convolved with a (Gaussian) probability density. Combining ReSQue with recent advances in ball oracle acceleration [CJJJLST20, ACJJS21], we develop algorithms achieving state-of-the-art complexities for SCO in parallel and private settings. For an SCO objective constrained to the unit ball in ℝ^d, we obtain the following results (up to polylogarithmic factors).

We give a parallel algorithm obtaining optimization error ϵ_opt with d^1/3 ϵ_opt^-2/3 gradient oracle query depth and d^1/3 ϵ_opt^-2/3 + ϵ_opt^-2 gradient queries in total, assuming access to a bounded-variance stochastic gradient estimator. For ϵ_opt ∈ [d^-1, d^-1/4], our algorithm matches the state-of-the-art oracle depth of [BJLLS19] while maintaining the optimal total work of stochastic gradient descent.

We give an (ϵ_dp, δ)-differentially private algorithm which, given n samples of Lipschitz loss functions, obtains near-optimal optimization error and makes min(n, n^2 ϵ_dp^2 d^-1) + min(n^4/3 ϵ_dp^1/3, (nd)^2/3 ϵ_dp^-1) queries to the gradients of these functions. In the regime d ≤ n ϵ_dp^2, where privacy comes at no cost in terms of the optimal loss up to constants, our algorithm uses n + (nd)^2/3 ϵ_dp^-1 queries and improves upon recent advances of [KLL21, AFKT21]. In the moderately low-dimensional setting d ≤ √n ϵ_dp^3/2, our query complexity is near-linear.
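To make the ReSQue idea concrete, below is a minimal numerical sketch (our illustration, not the paper's implementation). For Gaussian smoothing f_ρ(x) = E_{ζ ~ N(0, ρ^2 I)} f(x + ζ), one can estimate ∇f_ρ(x) by drawing the query point around a fixed reference point x̄ and reweighting the stochastic gradient by the ratio of Gaussian densities centered at x and at x̄; this lets many nearby query points reuse the same gradient queries, which is what makes such an estimator compatible with ball-accelerated methods. The function names, the test objective ‖x‖_1, and all constants here are assumptions made for illustration, and the importance weights become high-variance unless ‖x - x̄‖ stays small relative to ρ.

```python
import numpy as np

def resque_gradient_estimate(grad_f, x, x_bar, rho, rng):
    """One-sample reweighted estimate of the gradient of the Gaussian-smoothed
    function f_rho(x) = E_{zeta ~ N(0, rho^2 I)}[f(x + zeta)].

    The sample z is drawn around the *reference* point x_bar; the importance
    weight (a ratio of Gaussian densities) corrects for querying at x != x_bar.
    """
    zeta = rho * rng.standard_normal(x_bar.shape)
    z = x_bar + zeta
    # log of N(x, rho^2 I)(z) / N(x_bar, rho^2 I)(z)
    log_w = (zeta @ zeta - (z - x) @ (z - x)) / (2.0 * rho**2)
    return np.exp(log_w) * grad_f(z)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, rho, m = 5, 0.3, 200_000
    grad_f = np.sign                  # a.e. gradient of the Lipschitz test objective f(x) = ||x||_1
    x_bar = np.zeros(d)               # reference point (ball center)
    x = x_bar + 0.05 * rng.standard_normal(d)  # query point close to x_bar (a fraction of rho away)

    # ReSQue estimate: samples centered at x_bar, reweighted toward x.
    resque = np.mean(
        [resque_gradient_estimate(grad_f, x, x_bar, rho, rng) for _ in range(m)], axis=0
    )
    # Direct estimate of grad f_rho(x): samples centered at x itself.
    direct = np.mean([grad_f(x + rho * rng.standard_normal(d)) for _ in range(m)], axis=0)
    print("ReSQue:", np.round(resque, 3))
    print("Direct:", np.round(direct, 3))  # the two estimates should approximately agree
```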


Related research

02/22/2020  Private Stochastic Convex Optimization: Efficient Algorithms for Non-smooth Objectives
03/29/2021  Private Non-smooth Empirical Risk Minimization and Stochastic Convex Optimization in Subquadratic Steps
03/24/2022  Distributionally Robust Optimization via Ball Oracle Acceleration
06/17/2021  Stochastic Bias-Reduced Gradient Methods
05/04/2021  Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss
06/17/2021  Shuffle Private Stochastic Convex Optimization
03/29/2022  Efficient Convex Optimization Requires Superlinear Memory
