Simple and Optimal Stochastic Gradient Methods for Nonsmooth Nonconvex Optimization

08/22/2022
by Zhize Li, et al.

We propose and analyze several stochastic gradient algorithms for finding stationary points or local minima in nonconvex finite-sum and online optimization problems, possibly with a nonsmooth regularizer. First, we propose a simple proximal stochastic gradient algorithm based on variance reduction, called ProxSVRG+. We provide a clean and tight analysis of ProxSVRG+ showing that it outperforms deterministic proximal gradient descent (ProxGD) for a wide range of minibatch sizes, thereby solving an open problem posed in Reddi et al. (2016b). Moreover, ProxSVRG+ uses far fewer proximal oracle calls than ProxSVRG (Reddi et al., 2016b) and extends to the online setting by avoiding full gradient computations. We then propose an optimal algorithm, called SSRGD, based on SARAH (Nguyen et al., 2017), and show that SSRGD further improves the gradient complexity of ProxSVRG+ and achieves the optimal upper bound, matching the known lower bound of Fang et al. (2018) and Li et al. (2021). Moreover, we show that both ProxSVRG+ and SSRGD automatically adapt to local structure of the objective function, such as the Polyak-Łojasiewicz (PL) condition for nonconvex functions in the finite-sum case: we prove that both algorithms automatically switch to faster global linear convergence, without the restarts required in the prior work ProxSVRG (Reddi et al., 2016b). Finally, we focus on the more challenging problem of finding an (ϵ, δ)-local minimum rather than just an ϵ-approximate (first-order) stationary point, which may be a bad, unstable saddle point. We show that SSRGD can find an (ϵ, δ)-local minimum by simply adding random perturbations. The resulting algorithm is almost as simple as its counterpart for finding stationary points and achieves similar optimal rates.
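For intuition about the variance-reduction template behind ProxSVRG+, the following is a minimal Python sketch of a proximal SVRG-type loop, not the authors' implementation: an outer loop records a gradient snapshot, and an inner loop combines a minibatch variance-reduced gradient estimator with a proximal step on the regularizer. All names here (prox_svrg_plus, grad_i, prox) and the hyperparameters are illustrative assumptions.

    import numpy as np

    def prox_svrg_plus(grad_i, prox, x0, n, n_epochs=20, batch=32, lr=0.05, rng=None):
        # Sketch of a proximal SVRG-style loop (illustrative, not the paper's exact method).
        # grad_i(x, idx): averaged stochastic gradient of the smooth part over indices idx
        # prox(x, step):  proximal operator of the nonsmooth regularizer with step size `step`
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x0, dtype=float)
        x_snap = x.copy()
        inner_len = max(n // batch, 1)
        for _ in range(n_epochs):
            g_snap = grad_i(x_snap, np.arange(n))                  # full gradient at the snapshot
            for _ in range(inner_len):
                idx = rng.integers(0, n, size=batch)               # sample a minibatch
                v = grad_i(x, idx) - grad_i(x_snap, idx) + g_snap  # variance-reduced estimator
                x = prox(x - lr * v, lr)                           # proximal gradient step
            x_snap = x.copy()                                      # refresh the snapshot
        return x

As a toy usage sketch, take f_i(x) = (a_i^T x - b_i)^2 / 2 with an ℓ1 regularizer λ||x||_1, whose proximal operator is soft-thresholding:

    n, d, lam = 200, 10, 0.1
    rng = np.random.default_rng(0)
    A, b = rng.normal(size=(n, d)), rng.normal(size=n)

    def grad_i(x, idx):
        return A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)

    def prox(x, step):
        return np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

    x_hat = prox_svrg_plus(grad_i, prox, np.zeros(d), n)

In the online setting discussed above, the full-gradient snapshot would be replaced by a large-minibatch estimate, which is what allows avoiding full gradient computations.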

Related research

02/13/2018 · A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization
We analyze stochastic gradient algorithms for optimizing nonconvex, nons...

04/19/2019 · SSRGD: Simple Stochastic Recursive Gradient Descent for Escaping Saddle Points
We analyze stochastic gradient algorithms for optimizing nonconvex probl...

01/22/2019 · Optimal Finite-Sum Smooth Non-Convex Optimization with SARAH
The total complexity (measured as the total number of gradient computati...

06/22/2018 · Finding Local Minima via Stochastic Nested Variance Reduction
We propose two algorithms that can find local minima faster than the sta...

03/21/2021 · ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method
We propose a novel accelerated variance-reduced gradient method called A...

09/04/2023 · On Penalty Methods for Nonconvex Bilevel Optimization and First-Order Stochastic Approximation
In this work, we study first-order algorithms for solving Bilevel Optimi...

06/04/2021 · An Even More Optimal Stochastic Optimization Algorithm: Minibatching and Interpolation Learning
We present and analyze an algorithm for optimizing smooth and convex or ...
