
Two-Stage Gauss–Seidel Preconditioners and Smoothers for Krylov Solvers on a GPU cluster

by   Luc Berger-Vergiat, et al.

Gauss–Seidel (GS) relaxation is often employed as a preconditioner for a Krylov solver or as a smoother for Algebraic Multigrid (AMG). However, the requisite sparse triangular solve is difficult to parallelize on many-core architectures such as graphics processing units (GPUs). In the present study, the performance of the traditional GS relaxation based on a triangular solve is compared with two-stage variants, which replace the direct triangular solve with a fixed number of inner Jacobi–Richardson (JR) iterations. When a small number of inner iterations is sufficient to maintain the Krylov convergence rate, the two-stage GS (GS2) often outperforms the traditional algorithm on many-core architectures. We also compare GS2 with JR. When they perform the same number of flops for SpMV (e.g. three JR sweeps compared to two GS sweeps with one inner JR sweep), the GS2 iterations, and the Krylov solver preconditioned with GS2, may converge faster than the JR iterations. Moreover, for some problems (e.g. elasticity), it was found that JR may diverge with a damping factor of one, whereas two-stage GS may improve the convergence with more inner iterations. Finally, to study the performance of the two-stage smoother and preconditioner for a practical problem, these were applied to incompressible fluid flow simulations on GPUs.
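To illustrate the idea, the following is a minimal sketch (not the authors' implementation) of one forward GS2 sweep: the exact sparse triangular solve of classical Gauss–Seidel is replaced by a fixed number of inner Jacobi–Richardson iterations, so the sweep only requires sparse matrix–vector products and diagonal scalings. The function name `gs2_sweep` and the use of SciPy are assumptions made for this sketch.

```python
import numpy as np
import scipy.sparse as sp

def gs2_sweep(A, b, x, num_inner):
    """One forward sweep of two-stage Gauss-Seidel (GS2).

    Classical GS solves (D + L) e = r exactly with a sparse
    triangular solve; GS2 instead applies `num_inner` inner
    Jacobi-Richardson iterations to that lower-triangular
    system, which need only SpMVs and diagonal scalings and
    are therefore easy to parallelize on a GPU.
    """
    A = sp.csr_matrix(A)
    D = A.diagonal()                    # diagonal of A
    L = sp.tril(A, k=-1, format="csr")  # strictly lower part
    r = b - A @ x                       # current residual
    # Inner JR iterations for (D + L) e = r, starting from the
    # plain Jacobi correction e = D^{-1} r (num_inner = 0 gives
    # an undamped Jacobi sweep; large num_inner recovers GS).
    e = r / D
    for _ in range(num_inner):
        e = (r - L @ e) / D
    return x + e
```

Because D^{-1}L is strictly lower triangular (hence nilpotent), the inner iteration converges to the exact triangular solve in at most n steps; the point of GS2 is that a small fixed `num_inner` is often enough to preserve the outer Krylov convergence rate.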
