The convergence of the Generalized Lanczos Trust-Region Method for the Trust-Region Subproblem

by Zhongxiao Jia, et al.

Solving the trust-region subproblem (TRS) plays a key role in numerical optimization and many other applications. The generalized Lanczos trust-region (GLTR) method is a well-known Lanczos-type approach for solving a large-scale TRS. The method projects the original large-scale TRS onto a k-dimensional Krylov subspace, whose orthonormal basis is generated by the symmetric Lanczos process, and computes an approximate solution from this subspace. Some a priori error bounds for the optimal solution and the optimal objective value exist in the literature, but no a priori result is available on the convergence of the Lagrange multipliers involved in the projected TRS's or on the residual norm of the approximate solution. In this paper, a general convergence theory of the GLTR method is established, and a priori bounds are derived for the errors of the optimal Lagrange multiplier, the optimal solution, and the optimal objective value, as well as for the residual norm of the approximate solution. Numerical experiments demonstrate that our bounds are realistic and accurately predict the convergence rates of the three errors and of the residual norms.
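The abstract's two-step structure (a Lanczos projection followed by a small projected TRS) can be sketched in code. The sketch below is an illustration, not the authors' implementation: the `solve_small_trs` routine stands in for the careful root-finding used in production GLTR codes, it uses a basic safeguarded Newton iteration on the Lagrange multiplier, and it does not handle the so-called hard case.

```python
# Minimal GLTR-style sketch (illustrative, not the paper's code): run k steps of
# the symmetric Lanczos process on (A, g) to get Q_k and tridiagonal T_k, solve
#   min  beta0 * e1^T y + 0.5 * y^T T_k y   s.t.  ||y|| <= Delta,
# and map the result back as x_k = Q_k y.
import numpy as np

def lanczos(A, g, k):
    """k steps of symmetric Lanczos: A Q_k = Q_k T_k + beta_k q_{k+1} e_k^T."""
    n = g.size
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k)
    beta0 = np.linalg.norm(g)
    q, q_prev, b = g / beta0, np.zeros(n), 0.0
    for j in range(k):
        Q[:, j] = q
        w = A @ q - b * q_prev
        alpha[j] = q @ w
        w -= alpha[j] * q
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # full reorthogonalization
        b = np.linalg.norm(w)
        beta[j] = b
        if b < 1e-14:                             # invariant subspace found
            k = j + 1
            break
        q_prev, q = q, w / b
    T = np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    return Q[:, :k], T, beta0

def solve_small_trs(T, beta0, Delta, tol=1e-12):
    """Solve min beta0*y[0] + 0.5 y^T T y, ||y|| <= Delta (easy case only)."""
    rhs = np.zeros(T.shape[0]); rhs[0] = -beta0
    lam_min = np.linalg.eigvalsh(T)[0]
    if lam_min > 0:                     # try the unconstrained Newton step
        y = np.linalg.solve(T, rhs)
        if np.linalg.norm(y) <= Delta:
            return y, 0.0
    lam = max(0.0, -lam_min) + 1e-12
    for _ in range(100):                # Newton on the secular equation
        M = T + lam * np.eye(T.shape[0])
        L = np.linalg.cholesky(M)
        y = np.linalg.solve(M, rhs)
        ny = np.linalg.norm(y)
        if abs(ny - Delta) <= tol * Delta:
            break
        w = np.linalg.solve(L, y)
        lam += (ny / np.linalg.norm(w)) ** 2 * (ny - Delta) / Delta
        lam = max(lam, -lam_min + 1e-14)
    return y, lam

def gltr(A, g, Delta, k):
    Q, T, beta0 = lanczos(A, g, k)
    y, lam = solve_small_trs(T, beta0, Delta)
    return Q @ y, lam
```

Because Q_k has orthonormal columns and its first column is g/||g||, the projected objective g^T Q_k y + 0.5 y^T (Q_k^T A Q_k) y reduces exactly to the small tridiagonal form solved above, which is what makes the per-iteration cost of the method modest.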






