Private (Stochastic) Non-Convex Optimization Revisited: Second-Order Stationary Points and Excess Risks

02/20/2023
by Arun Ganesh, et al.

We consider the problem of minimizing a non-convex objective while preserving the privacy of the examples in the training data. Building on the variance-reduced algorithm SpiderBoost, we introduce a new framework that uses two kinds of gradient oracles: the first estimates the gradient at a single point, while the second, less precise but cheaper, estimates the difference between the gradients at two points. SpiderBoost invokes the first kind periodically, once every few steps; our framework instead invokes it whenever the accumulated drift of the iterates becomes large, relying on the second kind otherwise. This keeps the gradient estimates accurate throughout, yielding improved rates for finding second-order stationary points. We also address the more challenging task of finding the global minimum of a non-convex objective using the exponential mechanism. We show that the regularized exponential mechanism can closely match previous empirical and population risk bounds without requiring smoothness assumptions, for algorithms with polynomial running time. Furthermore, disregarding running time, we show that the exponential mechanism achieves a good population risk bound, and we provide a nearly matching lower bound.
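
To make the drift-triggered switching rule concrete, here is a minimal Python sketch. All names and interfaces (drift_switched_sgd, grad_oracle, grad_diff_oracle, drift_budget) are hypothetical illustrations of the idea described in the abstract, not the paper's actual algorithm; in particular, the step size and the calibrated privacy noise of the real method are omitted.

import numpy as np

def drift_switched_sgd(x0, lr, grad_oracle, grad_diff_oracle,
                       drift_budget, num_steps):
    """Sketch of a variance-reduced loop in the spirit of SpiderBoost:
    maintain a running gradient estimate, refresh it with the precise
    single-point oracle once the accumulated squared drift of the iterates
    exceeds `drift_budget`, and otherwise update it with the cheaper
    gradient-difference oracle."""
    x = np.asarray(x0, dtype=float).copy()
    g = grad_oracle(x)          # precise (more expensive) estimate at the anchor
    drift = 0.0
    for _ in range(num_steps):
        x_new = x - lr * g      # a private variant would add calibrated noise here
        drift += float(np.linalg.norm(x_new - x) ** 2)
        if drift > drift_budget:
            g = grad_oracle(x_new)              # drift too large: re-estimate from scratch
            drift = 0.0
        else:
            g = g + grad_diff_oracle(x_new, x)  # cheap update via estimated difference
        x = x_new
    return x

Compared with SpiderBoost's fixed refresh schedule, this rule spends the expensive oracle calls only when the iterates have actually moved far from the last anchor, which is what keeps the estimate accurate at all times.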

Related research:

06/24/2020 · Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations
We design an algorithm which finds an ϵ-approximate stationary point (wi...

05/01/2019 · Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization
Variance reduction techniques like SVRG provide simple and fast algorith...

07/04/2018 · SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator
In this paper, we propose a new technique named Stochastic Path-Integrat...

02/13/2019 · The Complexity of Making the Gradient Small in Stochastic Convex Optimization
We give nearly matching upper and lower bounds on the oracle complexity ...

03/25/2018 · Minimizing Nonconvex Population Risk from Rough Empirical Risk
Population risk---the expectation of the loss over the sampling mechanis...

06/21/2023 · Optimal Algorithms for Stochastic Bilevel Optimization under Relaxed Smoothness Conditions
Stochastic Bilevel optimization usually involves minimizing an upper-lev...

10/30/2019 · Linear Speedup in Saddle-Point Escape for Decentralized Non-Convex Optimization
Under appropriate cooperation protocols and parameter choices, fully dec...
