A theoretical and empirical study of new adaptive algorithms with additional momentum steps and shifted updates for stochastic non-convex optimization

10/16/2021
by   Cristian Daniel Alecsa, et al.

In this paper, we introduce new adaptive algorithms endowed with momentum terms for stochastic non-convex optimization problems. We investigate almost sure convergence to stationary points, carry out a finite-time horizon analysis with respect to a chosen final iteration, and examine the worst-case iteration complexity. We also give an estimate for the expectation of the squared Euclidean norm of the gradient, and the theoretical analysis is supported by computational simulations on neural network training.
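The abstract describes adaptive methods that combine momentum terms with shifted updates. As a rough illustration only, here is a minimal Python sketch of one such scheme, assuming an Adam-style adaptive step, a Nesterov-type shift y = x + mu * (x - x_prev), and a toy noisy gradient oracle; the function name sgd_adaptive_momentum, its parameters, and the update rule are illustrative assumptions, not the paper's actual algorithms.

```python
import numpy as np

def sgd_adaptive_momentum(grad_fn, x0, n_steps=1000, lr=1e-3,
                          beta1=0.9, beta2=0.999, mu=0.5, eps=1e-8):
    """Hypothetical adaptive method with an additional momentum step and
    a shifted update. Illustrative sketch only; it does not reproduce
    the exact update rules analyzed in the paper."""
    x = np.asarray(x0, dtype=float).copy()
    x_prev = x.copy()
    m = np.zeros_like(x)   # first-moment (momentum) estimate
    v = np.zeros_like(x)   # second-moment (adaptive scaling) estimate
    for t in range(1, n_steps + 1):
        # Shifted update: query the stochastic gradient at an
        # extrapolated point rather than at the current iterate.
        y = x + mu * (x - x_prev)
        g = grad_fn(y)
        # Adam-style exponential moving averages with bias correction.
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        # Adaptive step taken from the shifted point.
        x_prev, x = x, y - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Toy usage: noisy gradients of the non-convex f(x) = sum(x**4 - x**2).
rng = np.random.default_rng(0)
grad_fn = lambda x: 4 * x**3 - 2 * x + 0.1 * rng.standard_normal(x.shape)
x_final = sgd_adaptive_momentum(grad_fn, x0=np.full(10, 2.0))
```

In the paper's setting one would track quantities such as the expected squared gradient norm along the iterates; the sketch only shows the mechanical shape of a momentum-plus-shift adaptive update.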

Related research

06/09/2019 · Stochastic In-Face Frank-Wolfe Methods for Non-Convex Optimization and Sparse Neural Network Training
The Frank-Wolfe method and its extensions are well-suited for delivering...

06/06/2021 · Minibatch and Momentum Model-based Methods for Stochastic Non-smooth Non-convex Optimization
Stochastic model-based methods have received increasing attention lately...

08/11/2020 · Riemannian stochastic recursive momentum method for non-convex optimization
We propose a stochastic recursive momentum method for Riemannian non-con...

08/30/2018 · A Unified Analysis of Stochastic Momentum Methods for Deep Learning
Stochastic momentum methods have been widely adopted in training deep ne...

08/20/2015 · AdaDelay: Delay Adaptive Distributed Stochastic Convex Optimization
We study distributed stochastic convex optimization under the delayed gr...

02/15/2021 · A Momentum-Assisted Single-Timescale Stochastic Approximation Algorithm for Bilevel Optimization
This paper proposes a new algorithm – the Momentum-assisted Single-times...
