Complexity Analysis of a Stochastic Cubic Regularisation Method under Inexact Gradient Evaluations and Dynamic Hessian Accuracy

01/29/2020
by Stefania Bellavia, et al.

Here we adapt to the stochastic optimisation setting an extended version of the adaptive cubic regularisation method with dynamic inexact Hessian information for nonconvex optimisation proposed in [2]. While function evaluations remain exact, this variant inherits the adaptive accuracy requirements for Hessian approximations introduced in [2] and additionally allows inexact computations of the gradient. Without restrictions on the variance of the errors, we assume that these approximations are available with a sufficiently large, but fixed, probability and, in the spirit of [13], we extend the deterministic analysis of the framework to its stochastic counterpart, showing that the expected number of iterations needed to reach a first-order stationary point matches the well-known worst-case optimal complexity of O(epsilon^(-3/2)) with respect to the first-order tolerance epsilon.
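To make the setting concrete, the following is a minimal, self-contained sketch of the kind of adaptive cubic regularisation (ARC) iteration the abstract describes: the step minimises a cubic-regularised model built from noisy gradient and Hessian estimates, while the acceptance test uses exact function values. The inner solver, the toy objective, the noise model, and all parameter values are illustrative assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def solve_cubic_subproblem(g, H, sigma):
    """Global minimiser of m(s) = g.s + 0.5 s'Hs + (sigma/3)||s||^3.

    The minimiser satisfies (H + lam*I) s = -g with lam = sigma*||s||
    and H + lam*I positive semidefinite; lam is located by bisection.
    (The degenerate "hard case" is ignored in this sketch.)
    """
    n = len(g)
    lo = max(0.0, -float(np.linalg.eigvalsh(H)[0])) + 1e-12
    step = lambda lam: np.linalg.solve(H + lam * np.eye(n), -g)
    hi = lo + 1.0
    while np.linalg.norm(step(hi)) > hi / sigma:   # expand bracket upward
        hi *= 2.0
    for _ in range(100):                           # bisection on lam
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(step(mid)) > mid / sigma:
            lo = mid
        else:
            hi = mid
    return step(hi)

def arc_step(f, grad_est, hess_est, x, sigma, eta=0.1):
    """One ARC iteration: inexact g and H, exact f in the acceptance test."""
    g, H = grad_est(x), hess_est(x)
    s = solve_cubic_subproblem(g, H, sigma)
    model_decrease = -(g @ s + 0.5 * s @ H @ s
                       + sigma / 3.0 * np.linalg.norm(s) ** 3)
    rho = (f(x) - f(x + s)) / max(model_decrease, 1e-12)
    if rho >= eta:                    # successful: accept step, relax sigma
        return x + s, max(0.5 * sigma, 1e-3)
    return x, 2.0 * sigma             # unsuccessful: reject, inflate sigma

# Illustrative run on a nonconvex toy problem with noisy derivative estimates.
rng = np.random.default_rng(0)
f = lambda x: 0.25 * np.sum(x**4) - np.sum(x**2)
grad = lambda x: x**3 - 2.0 * x + 0.01 * rng.standard_normal(x.shape)
def hess(x):
    noise = 0.01 * rng.standard_normal((x.size, x.size))
    return np.diag(3.0 * x**2 - 2.0) + 0.5 * (noise + noise.T)  # symmetric
x, sigma = np.array([3.0, 3.0]), 1.0
for _ in range(100):
    x, sigma = arc_step(f, grad, hess, x, sigma)
# x settles near a first-order stationary point, up to the gradient noise level
```

Because the acceptance ratio rho compares the exact decrease in f against the model decrease, inexactness in g and H only affects the quality of the trial step, not the monotone decrease of f on successful iterations; this is the mechanism that lets the probabilistic analysis recover the deterministic O(epsilon^(-3/2)) bound in expectation.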
