The behaviour of the Gauss-Radau upper bound of the error norm in CG
Consider the problem of solving a system of linear algebraic equations Ax = b with a real symmetric positive definite matrix A using the conjugate gradient (CG) method. To stop the algorithm at the appropriate moment, it is important to monitor the quality of the approximate solution x_k. One of the most relevant quantities for measuring the quality of x_k is the A-norm of the error. This quantity cannot be easily evaluated; it can, however, be estimated. In this paper we discuss and analyze the behaviour of the Gauss-Radau upper bound on the A-norm of the error, based on viewing CG as a procedure for approximating a certain Riemann-Stieltjes integral. This upper bound depends on a prescribed underestimate μ of the smallest eigenvalue of A. We concentrate on explaining a phenomenon observed in computations: in later CG iterations, the upper bound loses its accuracy and becomes almost independent of μ. We construct a model problem that is used to demonstrate and study the behaviour of the upper bound as a function of μ, and we develop formulas that are helpful in understanding this behaviour. We show that the above-mentioned phenomenon is closely related to the convergence of the smallest Ritz value to the smallest eigenvalue of A. It occurs when the smallest Ritz value is a better approximation to the smallest eigenvalue than the prescribed underestimate μ. We also suggest an adaptive strategy for improving the accuracy of the Gauss-Radau upper bound such that the resulting estimate approximates the quantity of interest with a prescribed relative accuracy.
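To make the quadrature view concrete, the following minimal NumPy sketch (an illustration under stated assumptions, not the algorithm developed in the paper) reconstructs the Jacobi (Lanczos) matrix T_k from the CG coefficients, evaluates the k-point Gauss rule ||r_0||^2 (T_k^{-1})_{11}, and forms the Gauss-Radau rule by Golub's modification of the last diagonal entry so that μ is an eigenvalue of the extended tridiagonal matrix; the difference of the two rules is the Gauss-Radau upper bound on ||x - x_k||_A^2. The model matrix, its spectrum, and all parameter values below are illustrative choices, not the model problem constructed in the paper.

```python
import numpy as np

# Illustrative SPD model matrix with a prescribed spectrum (assumption,
# not the paper's model problem); lambda_min = 0.1.
rng = np.random.default_rng(0)
n = 100
eigenvalues = np.linspace(0.1, 100.0, n)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(eigenvalues) @ Q.T
b = rng.standard_normal(n)
mu = 0.09                                  # prescribed underestimate of lambda_min
m = 40                                     # number of CG iterations

# Plain CG, storing gamma_k and beta_k = ||r_{k+1}||^2 / ||r_k||^2.
x_true = np.linalg.solve(A, b)             # demo only, to measure the true error
x = np.zeros(n)
r = b.copy()
p = r.copy()
rr = r @ r
r0_sq = rr
g, bt, iterates = [], [], []
for _ in range(m):
    Ap = A @ p
    gamma = rr / (p @ Ap)
    x = x + gamma * p
    r = r - gamma * Ap
    rr_new = r @ r
    beta = rr_new / rr
    p = r + beta * p
    g.append(gamma); bt.append(beta); iterates.append(x.copy())
    rr = rr_new

# Jacobi (Lanczos) matrix T_m reconstructed from the CG coefficients.
alpha = [1.0 / g[0]] + [1.0 / g[j] + bt[j - 1] / g[j - 1] for j in range(1, m)]
eta = [np.sqrt(bt[j - 1]) / g[j - 1] for j in range(1, m)]
T = np.diag(alpha) + np.diag(eta, 1) + np.diag(eta, -1)

for k in range(1, m):
    Tk = T[:k, :k]
    ek = np.zeros(k); ek[-1] = 1.0
    # Golub's modification: choose omega so that mu becomes an eigenvalue
    # of the extended tridiagonal matrix T_hat (Radau rule with node at mu).
    d = np.linalg.solve(Tk - mu * np.eye(k), (T[k - 1, k] ** 2) * ek)
    omega = mu + d[-1]
    T_hat = np.zeros((k + 1, k + 1))
    T_hat[:k, :k] = Tk
    T_hat[k, k - 1] = T_hat[k - 1, k] = T[k - 1, k]
    T_hat[k, k] = omega
    gauss = np.linalg.solve(Tk, np.eye(k)[:, 0])[0]           # (T_k^{-1})_{11}
    radau = np.linalg.solve(T_hat, np.eye(k + 1)[:, 0])[0]    # (T_hat^{-1})_{11}
    upper = r0_sq * (radau - gauss)        # Gauss-Radau upper bound
    e = x_true - iterates[k - 1]
    err_sq = e @ A @ e                     # true ||x - x_k||_A^2
    print(f"k={k:2d}  error^2={err_sq:.3e}  Gauss-Radau bound={upper:.3e}")
```

Rerunning the sketch with μ closer to or farther from the smallest eigenvalue illustrates the phenomenon described above: once the smallest Ritz value of T_k approximates the smallest eigenvalue of A better than μ does, the accuracy of the bound stalls and the printed values become nearly insensitive to the choice of μ.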