Most numerical simulation problems today involve the solution of large sparse linear systems of equations obtained from the discretization of partial differential equations on structured or unstructured meshes. The combination of a Krylov subspace method with algebraic multigrid (AMG) as a preconditioner is considered to be one of the most effective choices for the solution of such systems brandt1985algebraic; ruge1987algebraic; Trottenberg2001. However, the construction of an AMG preconditioner may take a considerable fraction of the total compute time. Depending on the convergence rate of the iterative solution, more than 50% of the compute time could be spent on the setup. This cost is unavoidable when a steady state problem is being solved, but for non-steady state problems it could be amortized by reusing the work that went into the construction of the preconditioner.
The paper introduces the partial reuse AMG setup cost amortization strategy for the solution of non-steady state problems. The AMG hierarchy is partially updated on each of the time steps: the transfer operators (restriction and interpolation) are kept from the previous time steps, while the system matrices and the smoother operators on each level of the hierarchy are updated on every time step. Since the computation of the transfer operators may take up to 50% of the setup, this significantly reduces the setup time, while keeping the preconditioner up to date with the latest system matrix changes. The approach has recently been implemented in the open-source AMGCL library Demidov2019; Demidov2020.
The partial reuse strategy is compared to the full reuse one described in Demidov2012, where the AMG preconditioner is used completely unaltered for as many time steps as the solution is able to converge within a reasonable number of iterations. Reusing the preconditioner this way works when the system matrix changes slowly over time, since a preconditioner constructed for a specific time step then has a strong chance of acting as a reasonably good preconditioner for a couple of subsequent time steps. However, the convergence may deteriorate enough to negate any time savings from reusing the preconditioner. It was shown in Demidov2012 that this strategy is mostly beneficial when the solution phase is comparable to or cheaper than the setup phase, which may be the case when the solution phase is accelerated with a GPU, or when just a couple of iterations are required for the solution to converge. Here it is shown, on the example of solving a two-fluid dam break scenario, that the partial reuse strategy consistently reduces both the setup cost and the overall compute time, even when the full reuse strategy is counterproductive.
The rest of the paper is structured as follows. Section II provides an overview of the AMG method. Section III describes the AMG setup cost amortization strategies considered in the paper. Section IV describes the numerical experiments used to test the efficiency of the amortization strategies and presents their results.
II Algebraic multigrid
Consider a system of linear algebraic equations $Au = f$, where $A$ is a square matrix. Multigrid methods are based on recursive application of the two-grid scheme, which combines relaxation and coarse grid correction. Relaxation, or smoothing iteration $S$, is a simple iterative method, such as damped Jacobi or Gauss–Seidel iteration barrett1994templates. Coarse grid correction solves the residual equation on the coarser grid, and improves the fine-grid approximation with the interpolated coarse-grid solution. Transfer between the grids is described with the transfer operators $P$ (prolongation or interpolation) and $R$ (restriction).
In geometric multigrid methods the grid hierarchy, the system matrices, and the operators $P$, $R$, and $S$ on each level of the hierarchy are supplied by the user based on the problem geometry. In algebraic multigrid methods the grid hierarchy and the transfer operators are in general constructed automatically, based only on the algebraic properties of the matrix $A$. Note that this step may be both computationally intensive and hard to parallelize. Algorithm 1 describes the setup phase of a generic AMG method. Here, the transfer operators $P$ and $R$, as well as the smoother $S$, are constructed from the system matrix on each level of the AMG grid hierarchy (a common choice for the restriction operator is the transpose of the prolongation operator, $R = P^T$). The next coarser level of the AMG hierarchy is fully defined by the transfer operators $P$ and $R$: its system matrix is computed as the Galerkin operator $A_c = R A P$. Usually the most time consuming steps of the setup are the construction of the transfer operators and the evaluation of the Galerkin operator.
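The setup steps above can be sketched with scipy.sparse. This is an illustrative toy using naive pairwise aggregates for a non-smoothed aggregation setup; a real implementation such as AMGCL forms aggregates with strength-of-connection heuristics instead.

```python
import numpy as np
import scipy.sparse as sp

def setup_level(A, group=2):
    """One setup step: build P, R and the Galerkin operator R*A*P."""
    n = A.shape[0]
    nc = (n + group - 1) // group
    # piecewise-constant prolongation: unknown i joins aggregate i // group
    P = sp.csr_matrix((np.ones(n), (np.arange(n), np.arange(n) // group)),
                      shape=(n, nc))
    R = P.T.tocsr()           # common choice: restriction is the transpose
    Ac = (R @ A @ P).tocsr()  # system matrix for the next coarser level
    return P, R, Ac

# toy fine-level system: 1D Poisson matrix
n = 8
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
P, R, Ac = setup_level(A)
```

For this 1D Poisson matrix the Galerkin operator again has the tridiagonal $(-1, 2, -1)$ pattern on half as many unknowns.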
After the AMG hierarchy has been constructed, it is used to solve the system using a simple V-cycle shown in Algorithm 2. Usually AMG is not used standalone, but as a preconditioner with an iterative Krylov subspace method. In this case a single V-cycle is used as a preconditioning step.
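A minimal recursive V-cycle in the same toy setting (damped Jacobi smoothing, direct solve on the coarsest level; an illustrative sketch, not the AMGCL implementation):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def jacobi(A, x, b, omega=0.72, sweeps=2):
    # damped Jacobi relaxation
    D = A.diagonal()
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / D
    return x

def v_cycle(levels, b, lvl=0):
    A, P, R = levels[lvl]
    if P is None:                                   # coarsest level:
        return spla.spsolve(A.tocsc(), b)           # direct solve
    x = jacobi(A, np.zeros_like(b), b)              # pre-smoothing
    e = v_cycle(levels, R @ (b - A @ x), lvl + 1)   # coarse-grid correction
    return jacobi(A, x + P @ e, b)                  # correct and post-smooth

# toy two-level hierarchy: 1D Poisson with pairwise aggregation
n = 16
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
P = sp.csr_matrix((np.ones(n), (np.arange(n), np.arange(n) // 2)),
                  shape=(n, n // 2))
R = P.T.tocsr()
levels = [(A, P, R), ((R @ A @ P).tocsr(), None, None)]

b = np.ones(n)
x = np.zeros(n)
for _ in range(100):                 # V-cycle as a stationary iteration
    x = x + v_cycle(levels, b - A @ x)
```

Here the cycle is repeated as a stationary iteration to drive the residual down; used as a preconditioner, a single call per Krylov iteration would replace the loop.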
III AMG setup cost amortization strategies
This section describes possible AMG setup cost amortization strategies for the solution of non-steady state problems. Generally speaking, a preconditioner for the matrix $A$ should be an approximation of its inverse $A^{-1}$. However, the approximation does not have to be perfect for the iterative Krylov solver to work. During the solution of a non-steady state problem, the system matrices at adjacent time steps often have a similar structure, both in terms of the non-zero pattern and the specific non-zero values. With this in mind, it should be possible to reuse at least some of the work that goes into the construction of an AMG preconditioner.
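As a small illustration of this point, even a crude diagonal (Jacobi) approximation of $A^{-1}$ already helps a Krylov solver on a badly scaled matrix. This is a scipy sketch with an arbitrary toy matrix, unrelated to the AMGCL implementation:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
d = np.logspace(1, 4, n)                 # widely varying diagonal
A = (sp.diags(d) + sp.diags([-1.0, -1.0], [-1, 1], shape=(n, n))).tocsr()
b = np.ones(n)
# crude approximation of the inverse: M v = D^{-1} v
M = spla.LinearOperator((n, n), matvec=lambda v: v / d)

def bicgstab_iters(M=None):
    count = [0]
    x, info = spla.bicgstab(A, b, M=M, maxiter=5000,
                            callback=lambda xk: count.__setitem__(0, count[0] + 1))
    assert info == 0                     # converged
    return count[0]
```

`bicgstab_iters(M)` typically needs far fewer iterations than the unpreconditioned `bicgstab_iters()`, even though $M$ is a very rough approximation of $A^{-1}$.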
The full reuse strategy, introduced in Demidov2012, assumes that a preconditioner constructed for a time step may be applicable for a number of subsequent time steps. The decision on whether to reuse the current preconditioner or construct a new one is based on the performance of the preconditioner on the previous time step. When the solution on the last time step was able to converge within the iteration limit specified by the user, the preconditioner is deemed good enough and is reused. Otherwise, a new preconditioner is constructed for the current system matrix. The strategy is outlined in Algorithm 3. It was shown in Demidov2012 that it works best when the solution step is accelerated with a GPU, which significantly increases the relative setup cost. Without the solution acceleration, the full reuse strategy is often counterproductive, because any savings in the setup cost are negated by the increase in the solution time due to the growing number of iterations. Also, the outcome of the strategy depends strongly on the iteration threshold parameter specified by the user, which requires some experimentation for each problem.
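The decision loop can be sketched as follows. Here `build` and `solve` are hypothetical stand-ins for the AMG setup and for the preconditioned Krylov solve (returning the solution and the iteration count); they are not part of any real solver API.

```python
def full_reuse(systems, build, solve, max_iters=100):
    """Full reuse sketch (Algorithm 3): keep the preconditioner until a
    solve hits the user-specified iteration limit, then rebuild."""
    prec = None
    rebuilds = 0
    for A, b in systems:
        if prec is None:
            prec = build(A)             # construct a fresh AMG hierarchy
            rebuilds += 1
        x, iters = solve(A, b, prec)    # preconditioned Krylov solve
        if iters >= max_iters:          # convergence degraded too much:
            prec = None                 # force a rebuild on the next step
    return rebuilds
```
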
As shown in Demidov2012 and in the next section, the main assumption behind the full reuse strategy does not always hold: the system matrix may change enough between adjacent time steps for the unaltered preconditioner reuse to be counterproductive. The partial reuse strategy proposed in this work tries to reuse as much as possible of the existing AMG hierarchy while still incorporating the data from the new system matrix. To achieve this, the transfer operators on each level of the existing hierarchy are left untouched, and the system matrices on all levels are updated with the Galerkin operator $A_{l+1} = R_l A_l P_l$. This approach is especially well suited for non-smoothed aggregation AMG, since the transfer operators there mostly depend on the non-zero pattern of the system matrix, and not on its actual values, and the non-zero pattern of matrices obtained by discretization of partial differential equations on structured or unstructured grids rarely changes between time steps. The partial reuse strategy is outlined in Algorithm 4.
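The partial update can be sketched with scipy.sparse. The hierarchy layout (a list of per-level dicts) is a hypothetical illustration, not AMGCL's internal representation:

```python
import numpy as np
import scipy.sparse as sp

def partial_update(levels, A_new):
    """Refresh level matrices and smoothers, keeping all P and R.

    `levels` is a list of dicts with keys 'A', 'P', 'R'
    ('P' and 'R' are absent on the coarsest level).
    """
    A = A_new.tocsr()
    for lvl in levels:
        lvl['A'] = A                # refresh the level system matrix
        lvl['D'] = A.diagonal()     # refresh the (Jacobi) smoother data
        if 'P' in lvl:
            A = (lvl['R'] @ A @ lvl['P']).tocsr()   # Galerkin product
    return levels

# example: two-level hierarchy for a 1D Poisson matrix, then an update
n = 8
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
P = sp.csr_matrix((np.ones(n), (np.arange(n), np.arange(n) // 2)),
                  shape=(n, n // 2))
R = P.T.tocsr()
levels = [{'A': A, 'P': P, 'R': R}, {'A': (R @ A @ P).tocsr()}]
partial_update(levels, (2.0 * A).tocsr())
```

The expensive construction of `P` and `R` is skipped entirely; only the Galerkin products and the smoother data are recomputed.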
The partial reuse strategy may still require a full rebuild occasionally, for example when the matrix size changes between time steps as a result of adaptive remeshing. Also, the quality of the updated preconditioner may deteriorate as the system matrix diverges from its initial state, so it may be beneficial to perform preventive full rebuilds from time to time. This could be done either regularly (every $n$-th time step) or based on some convergence quality criterion, similar to the full reuse strategy.
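A periodic preventive rebuild schedule on top of the partial updates could look like this (a sketch; `full_setup` and `partial_update` are hypothetical stand-ins, not a real solver API):

```python
def choose_setup(step, hierarchy, A, full_setup, partial_update, n=10):
    """Full rebuild on the first and every n-th step, partial update otherwise."""
    if hierarchy is None or step % n == 0:
        return full_setup(A)            # periodic (or initial) full rebuild
    return partial_update(hierarchy, A) # cheap partial update
```

A convergence-based criterion would replace the `step % n` test with a check of the iteration count from the previous solve, as in the full reuse strategy.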
IV Numerical experiments
In order to test the cost amortization strategies described in the previous section, the two-fluid dam break scenario was modelled using the Kratos Multi-Physics framework Dadvand2010; Dadvand2013. The source code for the example is available in dambreak. In the scenario, a water-filled cuboid is positioned in one part of the domain. It is released at the start time, and the water spreads across the domain driven by gravity, hits an obstacle, and splashes develop. More details on the boundary conditions and problem settings can be found in Larese2008; Coppola2011. There are two basic substeps needed to evolve the solution from time step $t_n$ to time step $t_{n+1}$, each requiring the solution of a linear system of equations:
1. Find the motion in both phases as the solution of the two-fluid Navier–Stokes equations;
2. Determine the position of the interface by solving a convection equation for the level-set function.
The velocity and pressure fields of two incompressible fluids moving in the domain $\Omega$ can be described by the incompressible two-fluid Navier–Stokes equations
$$\rho \frac{\partial \mathbf{u}}{\partial t} + \rho (\mathbf{u} \cdot \nabla) \mathbf{u} - \nabla \cdot (2 \mu \nabla^s \mathbf{u}) + \nabla p = \mathbf{f}, \qquad \nabla \cdot \mathbf{u} = 0,$$
where $\rho$ is the density, $\mathbf{u}$ is the velocity field, $\mu$ is the dynamic viscosity, $p$ is the pressure, $\nabla^s$ is the symmetric gradient operator, and $\mathbf{f}$ is the external body force vector, which includes the gravity and buoyancy forces, if required. The density, velocity, dynamic viscosity, and pressure are defined piecewise,
$$(\rho, \mathbf{u}, \mu, p) = \begin{cases} (\rho_1, \mathbf{u}_1, \mu_1, p_1) & \text{in } \Omega_1, \\ (\rho_2, \mathbf{u}_2, \mu_2, p_2) & \text{in } \Omega_2, \end{cases}$$
where $\Omega_1$ and $\Omega_2$ indicate the parts of $\Omega$ occupied by fluids number 1 and 2 correspondingly.
Regarding the second step, the evolution of the fluid interface is updated using the so-called level set method, which has been widely used to track free surfaces in mould filling and other metal forming processes. The basic idea of the level set method is to define a smooth scalar function $\varphi(\mathbf{x}, t)$ over the computational domain $\Omega$ that determines the extent of the subdomains $\Omega_1$ and $\Omega_2$. For instance, positive values may be assigned to the points belonging to $\Omega_1$, and negative values to the points belonging to $\Omega_2$. The position of the fluid front is then defined by the iso-value contour $\varphi = 0$. The evolution of the front in any control volume that is moving with a divergence-free velocity field $\mathbf{u}$ is described by the equation
$$\frac{\partial \varphi}{\partial t} + \mathbf{u} \cdot \nabla \varphi = 0.$$
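As a toy illustration of the level set transport (a hypothetical 1D setup with constant velocity and a first-order upwind scheme, unrelated to the Kratos discretization): the signed function $\varphi$ is advected, and the interface sits wherever $\varphi$ crosses zero.

```python
import numpy as np

def advect(phi, u, dx, dt, steps):
    """Advance d(phi)/dt + u d(phi)/dx = 0 for constant u > 0 (upwind)."""
    phi = phi.copy()
    for _ in range(steps):
        phi[1:] -= u * dt / dx * (phi[1:] - phi[:-1])
    return phi

x = np.linspace(0.0, 1.0, 101)
phi0 = x - 0.3                 # interface (phi == 0) initially at x = 0.3
phi = advect(phi0, u=1.0, dx=x[1] - x[0], dt=0.005, steps=40)
# after t = 0.2 the zero level has moved to x ~ 0.5
```
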
Both the Navier–Stokes and the level set equations are discretized using a stabilized finite element method with linear P1 elements for all the unknowns. The resulting level set equation has 104,401 unknowns and 1,331,279 non-zero elements in the system matrix. The Navier–Stokes equation has 417,604 unknowns and 21,300,464 non-zero elements. In total, 49 time steps are simulated in the model example. The level set equations and the Navier–Stokes equations across the time steps are treated as independent non-steady state problems and are used to test the efficiency of the AMG setup cost amortization strategies considered here.
Both sets of problems are solved using the BiCGStab iterative solver barrett1994templates preconditioned with non-smoothed aggregation AMG. The damped Jacobi iteration is used as a smoother on each level of the AMG hierarchy. Using non-smoothed aggregation is beneficial in the case of the partial reuse strategy, since the transfer operators then have minimal dependence on the system matrix values and mostly depend on the matrix non-zero structure. The implementation uses the AMGCL library Demidov2019; Demidov2020. The level set problem is solved using the scalar (double precision) AMGCL backend, and the Navier–Stokes problem is solved as a monolithic system using a block-valued backend with statically sized matrices as the matrix values Demidov2021. The solution in both cases was parallelized with the OpenMP and CUDA technologies. All tests were conducted on a machine with a 3.40 GHz 4-core Intel Core i5-3570K CPU and an NVIDIA GeForce GTX 1050 Ti GPU.
Table 1. Relative cost of the AMG setup steps for the level set and Navier–Stokes problems.

| Step | Level set (OpenMP) | Level set (CUDA) | Navier–Stokes (OpenMP) | Navier–Stokes (CUDA) |
| Transfer operators $P$ and $R$ | 56% | 23% | 39% | 15% |
| Direct solver for the coarsest system | 2% | 0% | 0% | 0% |
| Move hierarchy to the backend | 0% | 56% | 0% | 55% |
Table 1 shows the cost of each of the steps from Algorithm 1 for the level set and Navier–Stokes problems relative to the complete setup time. The construction of the transfer operators takes a significant portion of the setup. The partial reuse strategy essentially removes this step from the setup, which saves 40 to 60% of the setup cost. When the CUDA backend is used to accelerate the solution, an additional step of moving the constructed hierarchy to the GPU memory appears in the setup and consumes more than 50% of the setup time. The relative cost of the transfer operators construction is proportionally smaller in this case, which yields lower savings in the setup time.
Table 2. Comparison of the setup cost amortization strategies.

| Strategy | Setup (s) | Solve (s) | Rebuilds | Average iterations | Total speedup | Setup speedup |
| Level set, OpenMP |
| Level set, CUDA |
Table 2 compares the achieved cost savings for the no-reuse case used as a baseline (the AMG is completely rebuilt from scratch on each of the time steps), full reuse, and partial reuse strategies on the example of the level set and the Navier–Stokes equations, solved with OpenMP and CUDA backends. The columns show the setup and solve times in seconds (accumulated across all of the time steps; the total compute time may be obtained as a sum of these two columns), the number of full rebuilds of the AMG hierarchy, the average number of iterations on a single time step, and the speedup observed for the total compute time and the AMG setup time when using one of the considered amortization strategies.
The results show that the full reuse strategy, as expected, may be hit or miss depending on the problem being solved. Specifically, it appears that the AMG preconditioner constructed for the level set problem at the first time step is fully applicable to the rest of the time steps. The setup only had to be performed once, which yields a total speedup of 31% for the OpenMP backend and 220% for the CUDA-accelerated backend. However, for the Navier–Stokes problem a preconditioner constructed for one time step shows bad convergence on the next time step, as is evident from the jump in the average number of iterations (about 17 iterations in the no-reuse case vs 46 iterations in the full reuse case). In fact, the solution here was not able to fully converge within the maximum number of 100 iterations on every other step, and the strategy shows a negative total speedup.
On the contrary, the partial reuse strategy shows consistently positive results for both problems. The average number of iterations practically does not grow, and the setup speedup ranges from 39% to 192%. The total speedup in the case of the level set problem is 28% and 75% for the OpenMP and CUDA backends respectively. In the case of the Navier–Stokes problem, where the setup takes only a small fraction of the total compute time, the observed total speedup is 1% and 9%.
For both strategies, reusing the AMG setup makes more sense for the CUDA backend, where the solution is about 3.5 times faster than on the OpenMP backend. This makes the amortization strategy more important, since the relative setup cost grows accordingly.
The partial reuse AMG setup cost amortization strategy for the solution of non-steady state problems introduced in this work reduces the setup cost of an AMG preconditioner, with observed setup speedups of about 40% to 200%. It is shown on the example of modelling the two-fluid dam break scenario that the strategy robustly decreases the total compute time, as opposed to the full reuse strategy, which may be counterproductive. The overall efficiency depends on the relative cost of the setup with respect to the total compute time. Naturally, the approach outlined here makes more sense when the setup takes a considerable fraction of the compute time, which is usually the case when the solution step is accelerated with a GPU, or when the solution demonstrates a fast convergence rate.
The AMG setup cost amortization strategy proposed here eliminates one significant step in the AMG setup algorithm, namely the construction of the transfer operators, which may be both expensive and serial in nature. Another possibility to reduce the setup cost is to reuse the knowledge of the system matrix non-zero pattern during the computation of the Galerkin product $R A P$. This would remove the symbolic step in the matrix-matrix product algorithm and could potentially halve the cost of the partial setup. However, this remains outside the scope of the current work.
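The symbolic/numeric split mentioned above can be illustrated with a toy CSR matrix product: the pattern of $C = AB$ is computed once, and only the values are refreshed when $A$ changes but keeps its pattern. This is a plain-Python Gustavson-style sketch written for clarity, not performance, and not AMGCL's implementation.

```python
import numpy as np
import scipy.sparse as sp

def symbolic(A, B):
    """Symbolic pass: compute C = A @ B once to fix its non-zero pattern."""
    return (A @ B).tocsr()

def numeric(A, B, C):
    """Numeric pass: refresh C.data in place, reusing the pattern of C.

    Assumes the non-zero patterns of A and B have not changed since the
    symbolic pass.
    """
    for i in range(C.shape[0]):
        # accumulator over the known pattern of row i of C
        row = {j: 0.0 for j in C.indices[C.indptr[i]:C.indptr[i + 1]]}
        for t in range(A.indptr[i], A.indptr[i + 1]):
            k, a = A.indices[t], A.data[t]
            for s in range(B.indptr[k], B.indptr[k + 1]):
                row[B.indices[s]] += a * B.data[s]
        C.data[C.indptr[i]:C.indptr[i + 1]] = \
            [row[j] for j in C.indices[C.indptr[i]:C.indptr[i + 1]]]
    return C

A = sp.csr_matrix(np.array([[1.0, 0, 2], [0, 3, 0], [4, 0, 5]]))
B = sp.csr_matrix(np.array([[1.0, 1, 0], [0, 2, 0], [3, 0, 1]]))
C = symbolic(A, B)               # done once
A.data[:] = [2.0, 4, 6, 8, 10]   # new values, same non-zero pattern
numeric(A, B, C)                 # cheap numeric-only recompute
```
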
The work was carried out at the JSCC RAS as part of the government assignment. The author would like to thank Dr. Riccardo Rossi for the helpful discussions.
- (1) R. Barrett, M. Berry, T. F. Chan, J. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo, C. Romine, and H. Van der Vorst, Templates for the solution of linear systems: building blocks for iterative methods. SIAM, 1994.
- (2) A. Brandt, S. McCormick, and J. Ruge, Algebraic multigrid (AMG) for sparse matrix equations. Sparsity and its Applications 257 (1985).
- (3) H. Coppola-Owen and R. Codina, A free surface finite element model for low Froude number mould filling problems on fixed meshes. International J. for Numerical Methods in Fluids 66(7), 833–851 (2011).
- (4) P. Dadvand, R. Rossi, M. Gil, X. Martorell, J. Cotela, E. Juanpere, S. R. Idelsohn, and E. Oñate, Migration of a generic multi-physics framework to HPC environments. Computers and Fluids 80(1), 301–309 (2013). DOI: 10.1016/j.compfluid.2012.02.004.
- (5) P. Dadvand, R. Rossi, and E. Oñate, An object-oriented environment for developing finite element codes for multi-disciplinary applications. Archives of Computational Methods in Engineering 17(3), 253–297 (2010). DOI: 10.1007/s11831-010-9045-2.
- (6) D. Demidov, AMGCL: An efficient, flexible, and extensible algebraic multigrid implementation. Lobachevskii J. of Mathematics 40(5), 535–546 (2019). DOI: 10.1134/S1995080219050056.
- (7) D. Demidov, AMGCL – a C++ library for efficient solution of large sparse linear systems. Software Impacts 6, 100037 (2020). DOI: 10.1016/j.simpa.2020.100037.
- (8) D. Demidov, L. Mu, and B. Wang, Accelerating linear solvers for Stokes problems with C++ metaprogramming. J. of Computational Science 49, 101285 (2021). DOI: 10.1016/j.jocs.2020.101285. URL: https://www.sciencedirect.com/science/article/pii/S1877750320305809.
- (9) D. Demidov and D. Shevchenko, Modification of algebraic multigrid for effective GPGPU-based solution of nonstationary hydrodynamics problems. J. of Computational Science 3(6), 460–462 (2012).
- (10) A. Larese, R. Rossi, E. Oñate, and S. Idelsohn, Validation of the particle finite element method (PFEM) for simulation of free surface flows. Engineering Computations 25(4), 385–425 (2008).
- (11) J. W. Ruge and K. Stüben, Algebraic multigrid. In: Multigrid methods, pp. 73–130. SIAM (1987).
- (12) K. Stüben, Algebraic multigrid (AMG): an introduction with applications. GMD Report 70, GMD, Sankt Augustin, Germany (1999).
- (13) U. Trottenberg, C. Oosterlee, and A. Schüller, Multigrid. Academic Press, London, 2001.
- (14) S. von Wenczowski, Two-fluids dam break scenario. URL: https://github.com/KratosMultiphysics/Examples/tree/master/fluid_dynamics/validation/two_fluid_dam_break. Accessed 15.05.2021.