Robust and Efficient Optimization Using a Marquardt-Levenberg Algorithm with R Package marqLevAlg

09/08/2020
by Viviane Philipps, et al.

Optimization is an essential task in many computational problems. In statistical modelling, for instance, maximum likelihood estimators are often obtained with iterative optimization algorithms when no analytical solution exists. The R software already includes a variety of optimizers, from general-purpose algorithms to more specific ones. Among Newton-like methods, which have good convergence properties, the Marquardt-Levenberg algorithm (MLA) is particularly robust. Newton-like methods nevertheless suffer from two major limitations: (i) convergence criteria that are somewhat too loose and do not guarantee convergence towards a maximum, and (ii) computation times that are often too long, making them impractical for complex problems. The marqLevAlg package proposes an efficient and general implementation of a modified MLA that combines strict convergence criteria with parallel computation. Convergence to saddle points is avoided by using the relative distance to minimum/maximum (RDM) criterion in addition to the stability of the parameters and of the objective function; the RDM exploits the first and second derivatives to measure the distance to a true local optimum. The multiple independent evaluations of the objective function required at each iteration to compute the first and second derivatives are run in parallel, allowing a theoretical speed-up of up to the square of the number of parameters. We show, through the estimation of 7 relatively complex statistical models, how the parallel implementation can largely reduce computation time. We also show, through the estimation of the same model with 3 different algorithms (the BFGS method of the optim routine, an EM algorithm, and MLA), the superior ability of MLA to correctly and consistently reach the maximum.
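To make the algorithm concrete, below is a minimal sketch in R of a Marquardt-Levenberg loop together with the three stopping criteria mentioned above (stability of the parameters, stability of the objective function, and RDM). It is an illustration under stated assumptions, not the marqLevAlg source: the names mla_sketch, epsa, epsb, epsd and lambda are ours, the diagonal inflation is the textbook Marquardt modification rather than the package's refined version, derivatives are delegated to the numDeriv package, and fn is assumed to be minimized (e.g. a negative log-likelihood).

## Minimal sketch of a Marquardt-Levenberg loop (illustrative, not the
## package source). fn is minimized; pass a negative log-likelihood to
## maximize a likelihood. Requires the numDeriv package for derivatives.
mla_sketch <- function(fn, b0, maxiter = 500,
                       epsa = 1e-4,   # stability of the parameters
                       epsb = 1e-4,   # stability of the objective function
                       epsd = 1e-4) { # relative distance to minimum (RDM)
  b <- b0
  f <- fn(b)
  m <- length(b)
  lambda <- 0.01                      # Marquardt damping factor
  for (iter in seq_len(maxiter)) {
    g <- numDeriv::grad(fn, b)        # first derivatives
    H <- numDeriv::hessian(fn, b)     # second derivatives
    repeat {
      ## Marquardt's modification: inflate the diagonal of H until the
      ## damped Newton step actually improves the objective
      Hmod <- H + lambda * diag(diag(H), m)
      step <- tryCatch(solve(Hmod, g), error = function(e) NULL)
      if (!is.null(step)) {
        b_new <- b - step
        f_new <- fn(b_new)
        if (is.finite(f_new) && f_new <= f) break
      }
      lambda <- lambda * 10           # step rejected: increase damping
    }
    lambda <- lambda / 10             # step accepted: relax damping
    ## RDM combines first and second derivatives into a relative
    ## distance to a true optimum; it stays large near saddle points
    rdm <- sum(g * solve(H, g)) / m
    converged <- sum((b_new - b)^2) < epsa &&
                 abs(f_new - f)     < epsb &&
                 abs(rdm)           < epsd
    b <- b_new
    f <- f_new
    if (converged) return(list(b = b, fn.value = f, ni = iter))
  }
  warning("no convergence within maxiter iterations")
  list(b = b, fn.value = f, ni = maxiter)
}

## Usage: minimize a simple quadratic with minimum at (1, 2)
mla_sketch(function(b) (b[1] - 1)^2 + 10 * (b[2] - 2)^2, b0 = c(0, 0))

Requiring all three criteria, rather than only the parameter and function stability used by most general-purpose optimizers, is what rules out stopping on a saddle point or in a flat region of the objective.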
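The parallel gains come from the fact that the finite-difference evaluations of the objective function are mutually independent: a central-difference gradient needs 2m evaluations for m parameters, and the second derivatives of the order of m^2, all of which can be dispatched to separate workers. A minimal sketch of this idea for the gradient, using base R's parallel package (par_grad is an illustrative name, not a function of marqLevAlg):

## Each perturbed parameter vector is an independent task; mclapply
## fans them out over ncores workers (note: mc.cores > 1 requires a
## Unix-alike; on Windows a cluster with parLapply would be used).
library(parallel)

par_grad <- function(fn, b, h = 1e-6, ncores = 2) {
  m <- length(b)
  ## build the 2*m perturbed vectors: b + h*e_j and b - h*e_j
  tasks <- unlist(lapply(seq_len(m), function(j) {
    lapply(c(h, -h), function(s) { bj <- b; bj[j] <- bj[j] + s; bj })
  }), recursive = FALSE)
  vals <- unlist(mclapply(tasks, fn, mc.cores = ncores))
  ## central differences: (f(b + h e_j) - f(b - h e_j)) / (2h)
  (vals[seq(1, 2 * m, by = 2)] - vals[seq(2, 2 * m, by = 2)]) / (2 * h)
}

In the package itself, the number of cores is exposed through a user argument, and the same fan-out applies to the second derivatives, which dominate the cost and explain the theoretical speed-up of up to the square of the number of parameters.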


