Stochastic gradient-free descents

12/31/2019
by Xiaopeng Luo, et al.

In this paper we propose stochastic gradient-free methods and gradient-free methods with momentum for solving stochastic optimization problems. Our fundamental idea is not to evaluate and apply gradients directly but to learn gradient information indirectly from stochastic directions and the corresponding output feedback of the objective function, which may further broaden the scope of applications. Without using gradients, these methods still maintain the sublinear convergence rate O(1/k) with a decaying stepsize α_k = O(1/k) for strongly convex objectives with Lipschitz gradients, and they converge to a solution with a zero expected gradient norm when the objective function is nonconvex, twice differentiable, and bounded below. In addition, we provide a theoretical analysis of the inclusion of momentum in stochastic settings, which shows that the momentum term introduces extra bias but reduces variance for stochastic directions.
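
To make the idea concrete, the following is a minimal Python sketch of one way such a gradient-free scheme could look: a two-point random-direction update that learns directional gradient information purely from function-value feedback, with a decaying stepsize α_k = O(1/k) and an optional momentum buffer. The smoothing radius `mu`, momentum weight `beta`, and the exact update rule here are illustrative assumptions, not the paper's precise algorithm.

    # Illustrative sketch only: a generic two-point random-direction method with
    # a decaying stepsize alpha_k = O(1/k) and a momentum term. Parameters such
    # as `mu` and `beta` are assumptions for illustration, not the paper's values.
    import numpy as np

    def gradient_free_descent(f, x0, num_iters=1000, mu=1e-4, alpha0=0.5, beta=0.9):
        """Minimize f using only function evaluations along random directions."""
        x = np.asarray(x0, dtype=float)
        m = np.zeros_like(x)                      # momentum buffer
        for k in range(1, num_iters + 1):
            u = np.random.randn(*x.shape)         # stochastic direction
            u /= np.linalg.norm(u)
            # two-point feedback approximates the directional derivative f'(x; u)
            d = (f(x + mu * u) - f(x - mu * u)) / (2 * mu)
            g = d * u                             # gradient information learned from feedback
            m = beta * m + (1 - beta) * g         # momentum: extra bias, lower variance
            alpha = alpha0 / k                    # decaying stepsize O(1/k)
            x -= alpha * m
        return x

    # usage: minimize a simple strongly convex quadratic
    x_star = gradient_free_descent(lambda x: np.sum((x - 1.0) ** 2), np.zeros(5))
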


Related research:

- 12/01/2020: Convergence of Gradient Algorithms for Nonconvex C^1+α Cost Functions. "This paper is concerned with convergence of stochastic gradient algorith..."
- 05/30/2022: Last-iterate convergence analysis of stochastic momentum methods for neural networks. "The stochastic momentum method is a commonly used acceleration technique..."
- 02/13/2020: Convergence of a Stochastic Gradient Method with Momentum for Nonsmooth Nonconvex Optimization. "Stochastic gradient methods with momentum are widely used in application..."
- 11/26/2021: Random-reshuffled SARAH does not need a full gradient computations. "The StochAstic Recursive grAdient algoritHm (SARAH) algorithm is a varia..."
- 02/25/2020: Can speed up the convergence rate of stochastic gradient methods to O(1/k^2) by a gradient averaging strategy? "In this paper we consider the question of whether it is possible to appl..."
- 04/28/2023: A Stochastic-Gradient-based Interior-Point Algorithm for Solving Smooth Bound-Constrained Optimization Problems. "A stochastic-gradient-based interior-point algorithm for minimizing a co..."
- 01/31/2019: Improving SGD convergence by tracing multiple promising directions and estimating distance to minimum. "Deep neural networks are usually trained with stochastic gradient descen..."
