A Levenberg-Marquardt Method for Nonsmooth Regularized Least Squares

01/06/2023
by Aleksandr Y. Aravkin, et al.

We develop a Levenberg-Marquardt method for minimizing the sum of a smooth nonlinear least-squares term f(x) = ½‖F(x)‖₂² and a nonsmooth term h. Both f and h may be nonconvex. Steps are computed by minimizing the sum of a regularized linear least-squares model and a model of h using a first-order method such as the proximal gradient method. We establish global convergence to a first-order stationary point of both a trust-region and a regularization variant of the Levenberg-Marquardt method under the assumptions that F and its Jacobian are Lipschitz continuous and h is proper and lower semi-continuous. In the worst case, both methods perform O(ϵ⁻²) iterations to bring a measure of stationarity below ϵ ∈ (0, 1). We report numerical results on three examples: a group-lasso basis-pursuit denoise example, a nonlinear support vector machine, and parameter estimation in neuron firing. To make those examples implementable, we describe in detail how to evaluate proximal operators for separable h and for the group lasso with a trust-region constraint. In all cases, the Levenberg-Marquardt methods perform fewer outer iterations than a proximal-gradient method with adaptive step length and a quasi-Newton trust-region method, neither of which exploits the least-squares structure of the problem. Our results also highlight the need for more sophisticated subproblem solvers than simple first-order methods.
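As a rough illustration of the step computation described above, here is a minimal Python sketch of the regularization variant: each step approximately minimizes the model m(s) = ½‖F(x) + J(x)s‖² + (σ/2)‖s‖² + h(x + s) with the proximal gradient method, specialized here to h = ‖·‖₁ via soft-thresholding. The names (lm_reg_step, prox_l1), the fixed regularization parameter σ, and the fixed inner-iteration budget are illustrative assumptions, not the authors' implementation, which also includes the trust-region variant, step acceptance tests, and adaptive updates of σ.

import numpy as np

def prox_l1(v, t):
    # Proximal operator of t*||.||_1 (soft-thresholding). A stand-in for a
    # generic prox of h; the paper also treats separable h and the group
    # lasso with a trust-region constraint.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lm_reg_step(F, J, x, sigma, prox_h, inner_iters=100):
    # One step of a regularization-variant Levenberg-Marquardt method:
    # approximately minimize over s
    #   m(s) = 1/2 ||F(x) + J(x) s||^2 + sigma/2 ||s||^2 + h(x + s)
    # using the proximal gradient method.
    Fx, Jx = F(x), J(x)
    L = np.linalg.norm(Jx, 2) ** 2 + sigma  # Lipschitz constant of the smooth part
    t = 1.0 / L                             # proximal-gradient step length
    s = np.zeros_like(x)
    for _ in range(inner_iters):
        grad = Jx.T @ (Fx + Jx @ s) + sigma * s
        # the prox of s -> h(x + s) is the prox of h shifted by x
        s = prox_h(x + s - t * grad, t) - x
    return x + s

# Toy usage: a 2x2 nonlinear system with an l1 regularizer and fixed sigma.
F = lambda x: np.array([x[0] ** 2 + x[1] - 1.0, x[0] - x[1] ** 2])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, -2.0 * x[1]]])
x = np.array([0.5, 0.5])
for _ in range(20):
    x = lm_reg_step(F, J, x, sigma=1.0, prox_h=prox_l1)

With σ held fixed, this reduces to a proximal Gauss-Newton-style iteration; the convergence guarantees stated in the abstract rely on increasing σ (or shrinking the trust region) when a step fails a sufficient-decrease test.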
