Stochastic Bias-Reduced Gradient Methods

06/17/2021
by Hilal Asi, et al.

We develop a new primitive for stochastic optimization: a low-bias, low-cost estimator of the minimizer x_⋆ of any Lipschitz strongly-convex function. In particular, we use a multilevel Monte Carlo approach due to Blanchet and Glynn to turn any optimal stochastic gradient method into an estimator of x_⋆ with bias δ, variance O(log(1/δ)), and an expected sampling cost of O(log(1/δ)) stochastic gradient evaluations. As an immediate consequence, we obtain cheap and nearly unbiased gradient estimators for the Moreau-Yoshida envelope of any Lipschitz convex function, allowing us to perform dimension-free randomized smoothing. We demonstrate the potential of our estimator through four applications. First, we develop a method for minimizing the maximum of N functions, improving on recent results and matching a lower bound up to logarithmic factors. Second and third, we recover state-of-the-art rates for projection-efficient and gradient-efficient optimization using simple algorithms with a transparent analysis. Finally, we show that an improved version of our estimator would yield a nearly linear-time, optimal-utility, differentially-private non-smooth stochastic optimization method.
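To make the construction concrete, here is a minimal Python sketch of a Blanchet-Glynn-style multilevel Monte Carlo (MLMC) minimizer estimator in the spirit of the abstract. The inner SGD routine, its step-size schedule, the geometric truncation level, and all constants are illustrative assumptions rather than the paper's exact algorithm; `grad_oracle`, `sgd_estimate`, and `mlmc_minimizer_estimate` are hypothetical names introduced only for this sketch.

```python
import numpy as np


def sgd_estimate(grad_oracle, x0, num_steps, mu):
    """Run plain SGD for num_steps steps on a mu-strongly-convex objective
    and return the running average of the iterates; its squared distance to
    x_* decays roughly like 1/num_steps (up to logarithmic factors)."""
    x = x0.copy()
    avg = np.zeros_like(x0)
    for t in range(1, num_steps + 1):
        g = grad_oracle(x)            # unbiased stochastic gradient at x
        x = x - g / (mu * t)          # standard 1/(mu*t) step size
        avg += (x - avg) / t          # running average of iterates
    return avg


def mlmc_minimizer_estimate(grad_oracle, x0, mu, delta, rng):
    """Low-bias estimate of x_*: bias O(delta), with variance and expected
    gradient cost both O(log(1/delta)) up to problem-dependent constants."""
    j_max = int(np.ceil(2 * np.log2(1.0 / delta)))   # truncation level
    j = min(int(rng.geometric(p=0.5)), j_max)        # J ~ Geom(1/2), truncated
    # P(J = j) = 2^{-j} for j < j_max, and P(J = j_max) = 2^{-(j_max - 1)}.
    p_j = 2.0 ** (-j) if j < j_max else 2.0 ** (-(j_max - 1))
    x_base = sgd_estimate(grad_oracle, x0, 1, mu)               # level-0 run
    x_coarse = sgd_estimate(grad_oracle, x0, 2 ** (j - 1), mu)  # 2^{J-1} steps
    x_fine = sgd_estimate(grad_oracle, x0, 2 ** j, mu)          # 2^J steps
    # Telescoping: E[estimate] equals the mean output of an SGD run with
    # 2^{j_max} steps, which is within O(delta) of x_* under strong convexity.
    return x_base + (x_fine - x_coarse) / p_j
```

Averaging several independent copies of this estimate reduces its O(log(1/δ)) variance while the bias remains O(δ), which is the regime in which such a low-bias, low-cost estimator is typically deployed.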
