The q-Levenberg-Marquardt method for unconstrained nonlinear optimization

07/05/2021
by Danijela Protic, et al.

The q-Levenberg-Marquardt method is an iterative procedure that blends the q-steepest descent and q-Gauss-Newton methods. When the current solution is far from the correct one, the algorithm behaves like the q-steepest descent method; otherwise, it behaves like the q-Gauss-Newton method. A damping parameter is used to interpolate between these two regimes. The q-parameter is used to escape local minima and to speed up the search near the optimal solution.
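The interpolation mechanism described in the abstract can be sketched in code. This is a minimal illustration, not the authors' implementation: the residual/Jacobian interface, the halving/doubling rule for the damping parameter, and the coordinate-wise Jackson q-derivative shown here are all assumptions made for the sake of the example.

```python
import numpy as np

def q_gradient(f, x, q=0.9):
    """Coordinate-wise Jackson q-derivative (assumed formulation):
    D_q f(x) = (f(qx) - f(x)) / ((q - 1) x), with a small-step
    fallback when a coordinate is near zero."""
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        xq = x.astype(float).copy()
        if abs(x[i]) > 1e-12:
            xq[i] = q * x[i]
            g[i] = (f(xq) - f(x)) / ((q - 1.0) * x[i])
        else:
            xq[i] = x[i] + 1e-8
            g[i] = (f(xq) - f(x)) / 1e-8
    return g

def q_levenberg_marquardt(residual, jac, x0, lam=1e-2,
                          tol=1e-8, max_iter=100):
    """Sketch of a Levenberg-Marquardt loop for min ||r(x)||^2.
    A large lam makes the step (q-)steepest-descent-like; a small
    lam makes it (q-)Gauss-Newton-like. The Jacobian could itself
    be assembled from q-derivatives, as in q_gradient above."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jac(x)
        g = J.T @ r                       # gradient of 0.5*||r||^2
        H = J.T @ J                       # Gauss-Newton Hessian approximation
        step = np.linalg.solve(H + lam * np.eye(len(x)), -g)
        if np.linalg.norm(step) < tol:
            break
        if np.sum(residual(x + step) ** 2) < np.sum(r ** 2):
            x = x + step
            lam *= 0.5                    # success: move toward Gauss-Newton
        else:
            lam *= 2.0                    # failure: move toward steepest descent
    return x
```

On a simple linear least-squares problem the loop reduces to near-Gauss-Newton behavior and converges in a handful of iterations; the damping rule only matters on ill-conditioned or strongly nonlinear residuals.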

