Linear Convergence of Accelerated Stochastic Gradient Descent for Nonconvex Nonsmooth Optimization

04/26/2017
by Feihu Huang, et al.

In this paper, we study the stochastic gradient descent (SGD) method for nonconvex nonsmooth optimization and propose an accelerated SGD method that combines a variance reduction technique with Nesterov's extrapolation technique. Based on the local error bound condition, we establish the linear convergence of our method to a stationary point of the nonconvex problem. In particular, we prove that not only does the generated sequence converge linearly to a stationary point of the problem, but the corresponding sequence of objective values also converges linearly. Finally, numerical experiments demonstrate the effectiveness of our method. To the best of our knowledge, this is the first proof that an accelerated SGD method converges linearly to a local minimum of a nonconvex optimization problem.
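To make the algorithmic idea concrete, below is a minimal sketch of one common way to combine SVRG-style variance reduction with Nesterov-style extrapolation and a proximal step for a composite objective f(x) + g(x), where f is a smooth (possibly nonconvex) finite sum and g is nonsmooth. This is a sketch under stated assumptions, not the paper's exact algorithm: the function names, the parameters (eta, beta, lam, epochs, m), and the choice of an l1 regularizer are illustrative.

```python
# Sketch (assumed setup, not the paper's exact method): proximal SVRG-style
# variance reduction combined with Nesterov-style extrapolation for
#   min_x  f(x) + g(x),  f(x) = (1/n) sum_i f_i(x) smooth (possibly nonconvex),
#   g(x)  = lam * ||x||_1 nonsmooth.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def accelerated_prox_svrg(grad_i, n, x0, eta=0.1, beta=0.5, lam=0.01,
                          epochs=20, m=None, seed=None):
    """grad_i(x, i) returns the gradient of the i-th component f_i at x."""
    rng = np.random.default_rng(seed)
    m = m if m is not None else n              # inner-loop length
    x_prev = x = x0.copy()
    for _ in range(epochs):
        snapshot = x.copy()
        full_grad = np.mean([grad_i(snapshot, i) for i in range(n)], axis=0)
        for _ in range(m):
            # Nesterov-style extrapolation step
            y = x + beta * (x - x_prev)
            # SVRG variance-reduced stochastic gradient at the extrapolated point
            i = rng.integers(n)
            v = grad_i(y, i) - grad_i(snapshot, i) + full_grad
            # Proximal gradient step handles the nonsmooth term g
            x_prev, x = x, soft_threshold(y - eta * v, eta * lam)
    return x

# Toy usage on an l1-regularized least-squares finite sum (illustrative only).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((100, 20)), rng.standard_normal(100)
x_hat = accelerated_prox_svrg(lambda x, i: (A[i] @ x - b[i]) * A[i], n=100,
                              x0=np.zeros(20))
```

Setting beta = 0 recovers plain proximal SVRG; the extrapolation term (x - x_prev) is what supplies the Nesterov-style acceleration discussed in the abstract.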

Related research

03/19/2016 · Stochastic Variance Reduction for Nonconvex Optimization
We study nonconvex finite-sum problems and analyze stochastic variance r...

12/31/2015 · Linear Convergence of Proximal Gradient Algorithm with Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems
In this paper, we study the proximal gradient algorithm with extrapolati...

05/22/2017 · Batch Size Matters: A Diffusion Approximation Framework on Nonconvex Stochastic Gradient Descent
In this paper, we study the stochastic gradient descent method in analyz...

08/25/2020 · PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
In this paper, we propose a novel stochastic gradient estimator—ProbAbil...

09/12/2017 · A convergence frame for inexact nonconvex and nonsmooth algorithms and its applications to several iterations
In this paper, we consider the convergence of an abstract inexact noncon...

03/19/2016 · Fast Incremental Method for Nonconvex Optimization
We analyze a fast incremental aggregated gradient method for optimizing ...

08/29/2018 · Online ICA: Understanding Global Dynamics of Nonconvex Optimization via Diffusion Processes
Solving statistical learning problems often involves nonconvex optimizat...
