LSOS: Line-search Second-Order Stochastic optimization methods

07/31/2020
by Daniela di Serafino, et al.

We develop a line-search second-order algorithmic framework for optimization problems in noisy environments, i.e., assuming that only noisy values are available for the objective function and its gradient and Hessian. In the general noisy case, almost sure convergence of the methods fitting into the framework is proved when line searches and suitably decaying step lengths are combined. When the objective function is a finite sum, such as in machine learning applications, our framework is specialized as a stochastic L-BFGS method with line search only, with almost sure convergence to the solution. In this case, a linear convergence rate of the expected function error is also proved, along with a worst-case 𝒪(log(ε^{-1})) complexity bound. Numerical experiments, including comparisons with state-of-the-art first- and second-order stochastic optimization methods, show the efficiency of our approach.
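To make the finite-sum specialization concrete, below is a minimal Python sketch of a stochastic L-BFGS iteration whose step length is chosen by an Armijo backtracking line search on the current mini-batch, in the spirit of the framework above. The function names (f_batch, grad_batch), the memory size, the Armijo constants, and the same-batch curvature-pair update are illustrative assumptions and do not reproduce the exact LSOS algorithm or its safeguards.

import numpy as np

def two_loop_recursion(grad, s_list, y_list):
    # L-BFGS two-loop recursion: returns an approximation of H_k @ grad
    # built from the stored (s, y) curvature pairs.
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for (s, y), rho in zip(reversed(list(zip(s_list, y_list))), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    s, y = s_list[-1], y_list[-1]
    q *= (s @ y) / (y @ y)                    # initial scaling H_0 = gamma * I
    for (s, y), rho, a in zip(zip(s_list, y_list), rhos, reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return q

def stochastic_lbfgs(f_batch, grad_batch, x0, n_samples, batch=64, mem=10,
                     iters=200, c1=1e-4, tau=0.5, seed=0):
    # Each iteration: draw a mini-batch, form a quasi-Newton direction,
    # and choose the step by Armijo backtracking on the same mini-batch estimate.
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    s_list, y_list = [], []
    for _ in range(iters):
        idx = rng.choice(n_samples, size=batch, replace=False)
        g = grad_batch(x, idx)
        d = -two_loop_recursion(g, s_list, y_list) if s_list else -g
        t, fx, gd = 1.0, f_batch(x, idx), g @ d
        while f_batch(x + t * d, idx) > fx + c1 * t * gd and t > 1e-10:
            t *= tau                          # backtracking line search
        x_new = x + t * d
        s = x_new - x
        y = grad_batch(x_new, idx) - g        # gradient difference on the same batch
        if y @ s > 1e-10:                     # keep the pair only if curvature is positive
            s_list.append(s); y_list.append(y)
            if len(s_list) > mem:
                s_list.pop(0); y_list.pop(0)
        x = x_new
    return x

Here f_batch(x, idx) and grad_batch(x, idx) would return the sample-average loss and gradient over the sampled indices idx, e.g., for a regularized logistic regression finite sum; the same-batch gradient difference keeps the curvature pair consistent despite the stochastic estimates.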
