Fast Linear Convergence of Randomized BFGS

02/26/2020
by Dmitry Kovalev et al.

Since the late 1950s, when quasi-Newton methods first appeared, they have become one of the most widely used and efficient algorithmic paradigms for unconstrained optimization. Despite their immense practical success, there is little theory explaining why these methods are so efficient. We provide a semi-local rate of convergence for the randomized BFGS method that can be significantly better than that of gradient descent, finally giving theoretical evidence for the method's superior empirical performance.
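
For readers unfamiliar with the method family, the sketch below runs the classical (deterministic) BFGS inverse-Hessian update on a toy quadratic. It is background intuition only, not the randomized BFGS variant analyzed in the paper, which instead refreshes its Hessian estimate through a random sketching step; the function name bfgs_quadratic and the toy problem are illustrative assumptions.

import numpy as np

def bfgs_quadratic(A, b, x0, iters=50):
    # Minimize f(x) = 0.5 x^T A x - b^T x with the classical BFGS update.
    # Illustrative sketch only; not the randomized variant from the paper.
    x = x0.copy()
    n = x.size
    H = np.eye(n)                      # running estimate of the inverse Hessian
    g = A @ x - b                      # gradient of the quadratic
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-10:  # stop once the gradient is essentially zero
            break
        d = -H @ g                     # quasi-Newton search direction
        t = -(g @ d) / (d @ A @ d)     # exact line search (closed form for quadratics)
        x_new = x + t * d
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g    # displacement and gradient difference
        rho = 1.0 / (y @ s)
        I = np.eye(n)
        # Standard BFGS formula: fold the curvature information in (s, y)
        # into the inverse-Hessian estimate H.
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Toy usage: a random 5x5 positive-definite quadratic.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)
b = rng.standard_normal(5)
x_star = bfgs_quadratic(A, b, np.zeros(5))
print(np.linalg.norm(A @ x_star - b))  # gradient norm at the result; should be near zero

On a strictly convex quadratic with exact line search, this update recovers the true inverse Hessian in at most n steps, which is the intuition behind the fast local rates that quasi-Newton analyses, including the semi-local rate in this paper, aim to formalize.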

Related research

11/02/2020 · Asynchronous Parallel Stochastic Quasi-Newton Methods
Although first-order stochastic algorithms, such as stochastic gradient ...

05/18/2023 · Modified Gauss-Newton Algorithms under Noise
Gauss-Newton methods and their stochastic version have been widely used ...

07/29/2013 · A new approach in dynamic traveling salesman problem: a hybrid of ant colony optimization and descending gradient
Nowadays swarm intelligence-based algorithms are being used widely to op...

05/19/2014 · On the saddle point problem for non-convex optimization
A central challenge to many fields of science and engineering involves m...

05/27/2022 · Incorporating the Barzilai-Borwein Adaptive Step Size into Subgradient Methods for Deep Network Training
In this paper, we incorporate the Barzilai-Borwein step size into gradie...

07/05/2021 · The q-Levenberg-Marquardt method for unconstrained nonlinear optimization
A q-Levenberg-Marquardt method is an iterative procedure that blends a q...

07/06/2023 · Convergence Properties of Newton's Method for Globally Optimal Free Flight Trajectory Optimization
The algorithmic efficiency of Newton-based methods for Free Flight Traje...
