The q-Levenberg-Marquardt method for unconstrained nonlinear optimization

07/05/2021
by Danijela Protic, et al.

The q-Levenberg-Marquardt method is an iterative procedure that blends the q-steepest descent and q-Gauss-Newton methods. When the current solution is far from the correct one, the algorithm behaves like the q-steepest descent method; otherwise it behaves like the q-Gauss-Newton method. A damping parameter interpolates between these two regimes. The q-parameter is used to escape from local minima and to speed up the search near the optimal solution.
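To make the idea concrete, the following is a minimal Python sketch of a Levenberg-Marquardt iteration in which the Jacobian of the residuals is approximated with Jackson q-derivatives, D_q f(x) = (f(qx) - f(x)) / ((q - 1)x). It is not the authors' implementation: the function names, the fixed value of q, the halving/doubling schedule for the damping parameter mu, and the Rosenbrock test residuals are illustrative assumptions (the paper additionally exploits the q-parameter itself to escape local minima).

import numpy as np

def q_jacobian(r, x, q=0.99):
    # Approximate the Jacobian of the residual vector r(x) column by column
    # with the Jackson q-derivative: D_q f(x) = (f(qx) - f(x)) / ((q - 1) x).
    x = np.asarray(x, dtype=float)
    r0 = np.asarray(r(x), dtype=float)
    J = np.empty((r0.size, x.size))
    for j in range(x.size):
        xq = x.copy()
        xq[j] = q * x[j] if x[j] != 0.0 else 1e-8  # fall back to a tiny step at x_j = 0
        J[:, j] = (np.asarray(r(xq)) - r0) / (xq[j] - x[j])
    return J, r0

def q_levenberg_marquardt(r, x0, q=0.99, mu=1e-2, tol=1e-8, max_iter=200):
    # Minimize 0.5 * ||r(x)||^2.  Small mu gives a q-Gauss-Newton step,
    # large mu gives a short q-steepest-descent step.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        J, r0 = q_jacobian(r, x, q)
        g = J.T @ r0                              # gradient of 0.5 * ||r||^2
        if np.linalg.norm(g) < tol:
            break
        A = J.T @ J + mu * np.eye(x.size)         # damped normal equations
        dx = np.linalg.solve(A, -g)
        if np.linalg.norm(r(x + dx)) < np.linalg.norm(r0):
            x, mu = x + dx, mu * 0.5              # step accepted: lean toward Gauss-Newton
        else:
            mu *= 2.0                             # step rejected: lean toward steepest descent
    return x

# Illustrative use: Rosenbrock residuals r(x) = [10 (x2 - x1^2), 1 - x1], minimum at (1, 1).
rosen = lambda x: np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])
print(q_levenberg_marquardt(rosen, [-1.2, 1.0]))

The damping update shown here is the classic heuristic: a successful step shrinks mu so the next step is closer to a pure q-Gauss-Newton step, while a failed step grows mu so the iteration falls back toward a damped q-steepest-descent step.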


