Mixed Precision s-step Lanczos and Conjugate Gradient Algorithms

03/16/2021
by Erin Carson, et al.

Compared to the classical Lanczos algorithm, the s-step Lanczos variant has the potential to improve performance by asymptotically decreasing the synchronization cost per iteration. However, this comes at a cost. Despite being mathematically equivalent, the s-step variant is known to behave quite differently in finite precision, with potential for greater loss of accuracy and a decrease in the convergence rate relative to the classical algorithm. It has previously been shown that the errors that occur in the s-step version follow the same structure as the errors in the classical algorithm, but with the addition of an amplification factor that depends on the square of the condition number of the O(s)-dimensional Krylov bases computed in each outer loop. As the condition number of these s-step bases grows (in some cases very quickly) with s, this limits the parameter s that can be chosen and thus limits the performance that can be achieved. In this work we show that if a select few computations in s-step Lanczos are performed in double the working precision, the error terms then depend only linearly on the conditioning of the s-step bases. This has the potential for drastically improving the numerical behavior of the algorithm with little impact on per-iteration performance. Our numerical experiments demonstrate the improved numerical behavior possible with the mixed precision approach, and also show that this improved behavior extends to the s-step CG algorithm in mixed precision.
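The key idea above — computing a select few quantities in double the working precision — can be illustrated with a minimal sketch. This is not the paper's algorithm; it only shows, under assumed precisions (float32 as the working precision, float64 as double the working precision), how the Gram matrix of an s-step Krylov basis can be accumulated in higher precision while the basis itself stays in working precision. The function names and the monomial-basis choice are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only (not the paper's algorithm).
# Working precision: float32; "double the working precision": float64.

def monomial_basis(A, v, s):
    """Monomial s-step Krylov basis [v, Av, ..., A^s v], kept in working precision."""
    V = np.zeros((A.shape[0], s + 1), dtype=np.float32)
    V[:, 0] = v
    for j in range(s):
        V[:, j + 1] = A @ V[:, j]  # matvec in working precision
    return V

def gram_matrix(V, high_precision=True):
    """Block inner products V^T V, optionally accumulated in float64."""
    if high_precision:
        Vd = V.astype(np.float64)
        return Vd.T @ Vd           # computed in double the working precision
    return V.T @ V                  # computed entirely in working precision

rng = np.random.default_rng(0)
n, s = 200, 4
A = rng.standard_normal((n, n)).astype(np.float32)
A = (A + A.T) / 2 + n * np.eye(n, dtype=np.float32)  # symmetric, diagonally dominant
v = rng.standard_normal(n).astype(np.float32)
v /= np.linalg.norm(v)

V = monomial_basis(A, v, s)
G_hi = gram_matrix(V, high_precision=True)
G_lo = gram_matrix(V, high_precision=False)

# Relative discrepancy between the two accumulations; grows with the
# conditioning of the s-step basis, which is what motivates the mixed
# precision approach.
print(np.linalg.norm(G_hi - G_lo.astype(np.float64)) / np.linalg.norm(G_hi))
```

In the actual s-step Lanczos and CG algorithms, such Gram-matrix and inner-product computations are a small fraction of the per-iteration work, which is why performing them in doubled precision has little impact on performance.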


