Stochastic Recursive Gradient Algorithm for Nonconvex Optimization

05/20/2017
by Lam M. Nguyen, et al.

In this paper, we study and analyze the mini-batch version of the StochAstic Recursive grAdient algoritHm (SARAH), a method employing the stochastic recursive gradient, for solving empirical loss minimization in the case of nonconvex losses. We provide a sublinear convergence rate (to stationary points) for general nonconvex functions and a linear convergence rate for gradient dominated functions, both of which have some advantages compared to other modern stochastic gradient algorithms for nonconvex losses.
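To make the recursive gradient idea concrete, the sketch below shows a mini-batch SARAH-style loop: each outer iteration starts from a full gradient, and each inner step corrects the previous estimate with a mini-batch gradient difference. This is an illustrative sketch only, not the paper's exact algorithm statement; the function and parameter names (grad_fn, eta, batch_size, n_outer, n_inner) and the step-size/loop-length values are assumptions made here, not taken from the paper.

```python
import numpy as np

def sarah_minibatch(grad_fn, n, w0, eta=0.05, batch_size=8,
                    n_outer=10, n_inner=50, seed=0):
    """Sketch of a mini-batch SARAH-style loop (illustrative only).

    grad_fn(w, idx) is assumed to return the average gradient of the
    losses indexed by `idx`, evaluated at the point `w`.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(n_outer):
        # Outer iteration: initialize the estimate with a full gradient.
        v = grad_fn(w, np.arange(n))
        w_prev, w = w, w - eta * v
        for _ in range(n_inner):
            idx = rng.choice(n, size=batch_size, replace=False)
            # Recursive gradient: previous estimate plus the mini-batch
            # gradient difference between current and previous iterates.
            v = grad_fn(w, idx) - grad_fn(w_prev, idx) + v
            w_prev, w = w, w - eta * v
    return w

if __name__ == "__main__":
    # Tiny least-squares demo just to exercise the loop:
    # f_i(w) = 0.5 * (a_i^T w - b_i)^2.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 5))
    b = A @ rng.standard_normal(5)
    grad_fn = lambda w, idx: A[idx].T @ (A[idx] @ w - b[idx]) / len(idx)
    w_hat = sarah_minibatch(grad_fn, n=len(b), w0=np.zeros(5))
    print("final gradient norm:",
          np.linalg.norm(A.T @ (A @ w_hat - b) / len(b)))
```

Unlike plain SGD, the recursive estimate v reuses past gradient information rather than being recomputed from scratch each step, which is the variance-reduction mechanism behind the convergence rates stated in the abstract.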
