Efficient p-multigrid spectral element model for water waves and marine offshore structures

Allan P. Engsig-Karup, et al. (DTU), 08/24/2020

In marine offshore engineering, cost-efficient simulation of unsteady water waves and their nonlinear interaction with bodies is important to address a broad range of engineering applications at increasing fidelity and scale. We consider a fully nonlinear potential flow (FNPF) model discretized using a Galerkin spectral element method to serve as a basis for handling both wave propagation and wave-body interaction with high computational efficiency within a single modelling approach. We design and propose an efficient O(n)-scalable computational procedure based on geometric p-multigrid for solving the Laplace problem in the numerical scheme. The fluid volume and the geometric features of complex bodies are represented accurately using high-order polynomial basis functions and unstructured meshes with curvilinear prism elements. The new p-multigrid spectral element model can take advantage of the high-order polynomial basis and thereby avoid generating a hierarchy of geometric meshes with a changing number of elements, as required in geometric h-multigrid approaches. We provide numerical benchmarks for the algorithmic and numerical efficiency of the iterative geometric p-multigrid solver. Results of numerical experiments are presented for wave propagation and for wave-body interaction in an advanced case of focusing design waves interacting with an FPSO. Our study shows that the use of iterative geometric p-multigrid methods for the Laplace problem can significantly improve the run-time efficiency of FNPF simulators.


1 Introduction

The description of water waves and their nonlinear interaction with bodies is of practical importance in marine offshore hydrodynamics. Few existing phase-resolving simulators are capable of handling both the wave propagation and the wave-body interaction problem in a computationally efficient and scalable way within the same simulator. Adding such capability is of relevance in many applications within offshore engineering and, increasingly, in the renewable sectors, for example for designing offshore wind turbine foundations [BREDMOSE2016379] and for improving wave energy device simulations [DavidsonCostello2020]. Research in the theoretical foundations of wave-body modelling [John1949, lannes2016dynamics], improved uncertainty analysis approaches [BEE2016, ehi2019activesubspace] and practical applications [BOSI2019222] illuminates the opportunities and demonstrates significant potential for improving the fidelity of conventional practical engineering tools for engineering analysis using nonlinear wave models. In water wave simulators, describing the fluid volume while taking bodies or structures into account can be handled using unstructured meshes combined with high-order methods to improve the cost-efficiency of simulations and reach larger scales, e.g. to better estimate the influence of sea states in coastal areas. For water wave simulations, there is still progress to be made in improving dispersive and nonlinear wave propagation models, e.g. towards unstructured numerical methods; see the review [Brocchini2013]. Also, recent research shows progress in the development of high-order accurate methods for nonlinear and dispersive large-scale ocean simulations [engsigkarupetal2012, engsigkarup2017stabilised, BrusEtAl2019, GlimbergEtAl2019].

For unsteady water wave modelling, the fully nonlinear potential flow (FNPF) model represents one of the two most widely used classes of nonlinear models for water waves and is particularly important due to its ability to solve the water wave propagation problem while accounting for dispersion and nonlinear transformation of waves. This can be done with much higher efficiency than with the higher-fidelity CFD solvers based on Navier-Stokes type models when the rotational and viscous effects of the fluid motion are negligible [RansleyEtAl2018]. The most cost-efficient numerical schemes for time-dependent problems with long integration times are those of high-order accuracy [KO72]. Thus, numerical schemes that have fast convergence rates and scale linearly with problem size (measured in terms of degrees of freedom in the discretization) have significant potential to reduce the cost of CFD and enable simulations of increasing fidelity in practical times [DavidsonCostello2020]. A properly designed multigrid scheme for the linear solver in CFD tools is a key component for scalability and high efficiency [HuismanStillerJochen2018]. Furthermore, the majority of practical simulation scenarios require adaptive unstructured grids. Another high-order numerical method that addresses the wave propagation problem and the wave-body problem in a single solver strategy is the high-order boundary element method (HOBEM) [Ferrant1996, HagueSwan2009, HarrisEtAl2014], which is particularly strong in handling the geometry using unstructured grids but is limited in terms of numerical efficiency due to inefficient asymptotic scaling of the work effort, a result of the high computational complexity of the discrete solution of the resulting system of equations. The SWENSE technique alleviates the efficiency problem of CFD tools through a decomposition approach [DucrozetEtAl2014, ZhaobinEtAl2018]. However, it is less suitable for wave propagation covering long durations or large domains, due to the lack of incident wave solutions for the wave propagation problem over varying bathymetry and because numerical dispersion errors grow over time. Furthermore, the use of high-order numerical methods for dispersive and nonlinear free surface hydrodynamics is still surprisingly scarce, especially when structural bodies are taken into account. Two widely used FNPF solver types for nonlinear wave propagation are the high-order finite difference method (FDM), which is both efficient and scalable [ENGSIGKARUP20092100, EngsigKarup2014, GlimbergEtAl2019], and the high-order spectral method (HOS) [WestEtAl1987, DOMYOU1987]. FDM discretizations can be tailored to use adaptive and body-conforming meshes, e.g. via multiblock discretization and curvilinear grids [GlimbergEtAl2019], whereas methods such as HOS that are based on approximations via global basis functions do not have this flexibility.

In this work, we seek to address the computational bottleneck of FNPF solvers. We propose a geometric p-multigrid (p-GMG) accelerated iterative Laplace problem solver that is a fundamentally important extension of a recently developed spectral element solver that has been successfully validated against experimental measurements [EngsigKarupEskilsson2018, RansleyEtAl2019, EngsigKarupEskilsson2019]. The solver strategy is based on the spectral element method attributed to Patera [PAT84], suitable for efficient and scalable simulations of elliptic problems [PavarinoWarburton2000]. A recent review of state-of-the-art spectral element methods is given in [XuiEtAl2018]. The single-phase FNPF model is based on explicit tracking of the water surface and avoids the difficulties associated with interface-capturing solvers such as the volume of fluid (VOF) [Ubbink1997, Jasak07openfoam:a, Refresco2009] and level set (LS) [GroossHesthaven2005, BIHS2016191, KarakusEtAl2018] methods. VOF methods are among the most widely used general-purpose solvers for engineering applications, while LS methods [OSHER198812, BIHS2016191] see increasing maturity for free surface flows. These types of solvers come with the limitation that it is difficult to devise high-order accurate numerical schemes that capture the complex interfaces, which is needed to make them accurate and cost-efficient. This makes them prone to errors in conserved properties such as mass and energy, and to significant numerical damping in wave propagation problems, which historically has limited these solvers to applications in relatively small domains and relatively short integration times for accurate prediction of wave propagation and kinematics. Also, due to the low-order accuracy of the methods, a significant number of cells is needed per wave length of the highest harmonics that need to be resolved to deliver converged results for engineering purposes. Still, these CFD solvers are widely accepted to be favourable for the wave-body problem due to the possibility of describing bodies of arbitrary geometric complexity and of accounting for violent effects such as breaking waves, slamming, aeration effects (multi-phase), green water on ship decks, etc., which are needed in typical industrial applications for the estimation of wave-induced loads.

The spectral element method (e.g. see [PAT84, KS99, deville_fischer_mund_2002, XuiEtAl2018]) is a class of finite element methods with support for adaptive meshes that is based on piece-wise continuous, arbitrary-order polynomial approximations for the spatial discretization. These methods are attractive for a broad range of applications due to their exponential convergence rates and geometric flexibility. In the last decade, spectral element methods have seen increasing maturity and adoption in real-world applications. However, the main limitations still lie in the mesh generation [TURNER2016340] and in the work effort associated with sparse operators that have dense blocks for each element, which calls for the use of matrix-free implementations and high-performance computing [MarkallEtAl2013]. We remark that multigrid methods can be accelerated via mixed-precision strategies on modern many-core architectures [GlimbergEtAl2011, engsigkarupetal2012, abdelfattah2020survey] to gain additional performance in parallel implementations.

In the context of FNPF solvers, recent work has demonstrated for practical applications that the methodology can be stabilised and is accurate [EEB2016, EngsigKarupEskilsson2018]. A remaining scientific challenge for enabling large-scale 3D simulations with FNPF solvers is the development of an iterative solver strategy that is efficient for wave-body problems. A properly designed multigrid solver strategy can achieve the ideal scaling of work effort, cf. [BRANDT77, Brandt1998, Trottenberg01], and the most efficient multigrid solvers achieve "textbook multigrid efficiency" (TME), which is defined by Brandt [Brandt1998] as solving a discrete PDE problem in a computational work that is only a small (less than 10) multiple of the operation count of the discretized system of equations itself. This is usually measured in terms of residual calculations for the discretized system. The work effort needed to achieve a satisfactory approximate numerical solution depends on both the numerical efficiency of the multigrid scheme and the tolerance chosen. In solving linear systems of equations using multigrid methods, to maximize the attainable accuracy (cf. [EngsigKarup2014]) one needs to reduce the algebraic errors below the truncation errors, since by the triangle inequality the error between the true solution u and the approximate iterative solution \tilde{u}_h of the discrete system satisfies

\| u - \tilde{u}_h \| \le \| u - u_h \| + \| u_h - \tilde{u}_h \|, \qquad (1)

where u_h denotes the exact solution of the discrete system. Hence, with iterative solvers the attainable accuracy is limited by the level of the truncation errors \| u - u_h \|, with the total error also influenced by the algebraic errors \| u_h - \tilde{u}_h \|. Thus, the scaling properties and TME set a clear optimality target that defines to what extent a given solver strategy is efficient. The main objective of this work has been to design a geometric p-multigrid accelerated FNPF solver strategy and to clarify to what extent TME can be met.

There are two classes of multi-level methods that form the basis for multigrid schemes, namely the algebraic multigrid (AMG) and geometric multigrid (GMG) methods. The AMG methods are among the most widely used in industrial applications, as they handle the solution of the system of equations as a black-box iterative solver strategy [BrandtEtAl1984, DENDY198657, RugeStuben1987, OlSc2018] with the fine grid discretization of the model problem as input. AMG then proceeds by constructing coarser representations algebraically according to certain rules. However, when dealing with unsteady numerical solvers where grids may change, and as a result the system of equations or its coefficients is subject to change, AMG falls short as an efficient time-domain solver strategy due to the need for updates of the operators, which happens in a setup step before the solve procedure. In contrast, the GMG method starts from a coarse grid problem and then refines the mesh resolution through a mesh hierarchy according to certain rules. GMG can be designed to accommodate updates of local element matrices in a more straightforward way and to utilize matrix-free operator implementations, and as a result it achieves a low computational cost for unsteady problems that require frequent operator updates.

Multigrid methods can be used to accelerate the convergence rate of iterative schemes by exploiting the smoothing property, whereby oscillatory modes of the error are damped fast and smooth modes are damped slowly. This property can be taken advantage of in iterative schemes by using a nested hierarchy consisting of fine and coarser grids. An iterative scheme having the smoothing property is called a smoother. Any multigrid algorithm involves components for smoothing, restriction, interpolation, solution on the different grids, and the choice of cycle type. The framework for multigrid methods is general; however, the components of a multigrid method usually have to be tailored to the specific problem [Brandt1984]. Multigrid algorithms are attractive as they provide the basis for solving large systems of equations with O(n) complexity and make it possible to achieve scalable solution effort, i.e. effort which increases only linearly with the number of degrees of freedom n in the discretization.

The paper is organised as follows. The governing equations for fully nonlinear potential flow are described in Section 2, and Section 3 describes the spectral element discretization. Section 4 introduces the iterative solver strategy, and Section 5 describes the key ideas behind the proposed geometric p-multigrid method. Section 6 provides numerical benchmarks of the new iterative p-GMG solver in 2D and 3D using two different FNPF model formulations, with results given for the assessment of the performance in terms of the efficiency and scalability achieved for different p-GMG solver strategies in practical benchmarks. The conclusions follow in Section 7.

2 Governing Equations

Irrotational, inviscid fluid flow and non-breaking water waves are governed by the Fully Nonlinear Potential Flow (FNPF) equations defined on a fluid domain Ω. The domain boundaries may be constituted of a varying bathymetry, the free surface, horizontal boundaries, e.g. in the form of the walls of a numerical wave tank, and internal boundaries in the form of the faces of structural bodies or fixed structures. The non-dimensional FNPF model equations, subject to the kinematic and dynamic free surface conditions and the continuity equation in Ω, can be written in an Eulerian formulation [EGNL13] as

\partial_t \eta = -\nabla\eta \cdot \nabla\tilde{\phi} + \tilde{w}\,(1 + \nabla\eta \cdot \nabla\eta), \qquad (2a)
\partial_t \tilde{\phi} = -g\eta - \tfrac{1}{2}\left( \nabla\tilde{\phi} \cdot \nabla\tilde{\phi} - \tilde{w}^2 (1 + \nabla\eta \cdot \nabla\eta) \right), \qquad (2b)

where the tilde symbol is used for variables evaluated at the free surface z = \eta, \nabla is the horizontal gradient operator, and \tilde{w} = \partial_z\phi|_{z=\eta} is the vertical velocity at the free surface. Closure for the free surface equations is obtained by solving the Laplace problem

\phi = \tilde{\phi}, \quad z = \eta, \qquad (3a)
\nabla^2\phi + \partial_{zz}\phi = 0, \quad -h \le z < \eta, \qquad (3b)
\partial_z\phi + \nabla h \cdot \nabla\phi = 0, \quad z = -h. \qquad (3c)

In our numerical benchmarks we consider both the fully nonlinear free surface formulation (2) and the small-amplitude (linearised) version of this free surface problem, \partial_t\eta = \tilde{w}, \partial_t\tilde{\phi} = -g\eta, valid when the wave amplitude is small relative to the wave length. The gravitational acceleration is assumed to be g = 9.81 m/s^2.

3 Numerical method

3.1 Spectral Element Method

For the space discretization, we use the implementation of a spectral element method detailed in 2D for a σ-transformed version [EEB2016] and in 3D for a non-σ-transformed version [EngsigKarupEskilsson2018] of the governing equations (see Section 2). These already validated solvers serve the purpose of benchmarking the geometric p-multigrid solver proposed in this study for solving the Laplace problem. We revisit the discretization details in the following.

The Laplace problem can be formulated in the general non-σ-transformed form most suitable for wave-body applications by directly discretizing (3), or alternatively in the σ-transformed form that is most suitable for wave propagation applications,

\Phi = \tilde{\phi}, \quad \sigma = 1, \qquad (4a)
\mathcal{L}^\sigma \Phi = 0, \quad 0 \le \sigma < 1, \qquad (4b)
\mathbf{n} \cdot \nabla\Phi = 0, \quad \sigma = 0, \qquad (4c)

where \mathcal{L}^\sigma is the differential operator that acts in the reference domain with σ ∈ [0, 1] the vertical reference coordinate.

The weak formulation of (3) is stated in the form: find \phi such that

\int_\Omega \nabla\phi \cdot \nabla v \, d\Omega = \oint_{\partial\Omega} v \, (\mathbf{n} \cdot \nabla\phi) \, d\Gamma \quad \forall v. \qquad (5)

Let \Omega^c be the time-independent computational domain obtained from the σ-transformation of \Omega. If we introduce the Jacobian of the map between the physical and computational domains,

J = \frac{\partial(x, y, z)}{\partial(x^c, y^c, \sigma)}, \qquad (6)

then the σ-transformed formulation [EEB2016], which is appropriate for solving the wave propagation problem efficiently in a FNPF model, can be stated in the form

\int_{\Omega^c} (\nabla^\sigma v) \cdot (\nabla^\sigma \Phi) \, J \, d\Omega^c = 0 \quad \forall v, \qquad (7)

or in the non-σ-transformed version (3) that is suited for wave-body applications. In both cases, the resulting discretised systems are derived from a weak formulation of the form: find \phi such that

a(v, \phi) = \ell(v) \quad \forall v, \qquad (8)

with a(\cdot,\cdot) the bilinear form and \ell(\cdot) the linear form induced by (5) or (7).

The details of the spatial Galerkin discretization leading to a Spectral Element Method are briefly described next. We start by forming a partition \Omega_h of the spatial domain by a tessellation into N_{el} non-overlapping shape-regular elements such that \Omega_h = \bigcup_{k=1}^{N_{el}} \Omega^k, with \Omega^k denoting the k'th element. We then introduce the space of continuous, piece-wise polynomial functions

V_h = \{ v \in C^0(\Omega_h) : v|_{\Omega^k} \in \mathbb{P}_P(\Omega^k), \; k = 1, \ldots, N_{el} \}, \qquad (9)

where \mathbb{P}_P is the space of polynomials of degree at most P. In the Galerkin scheme, we choose test functions v \in V_h.

We introduce the finite-dimensional approximation

\phi_h(\mathbf{x}) = \sum_{i=1}^{n} \hat{\phi}_i N_i(\mathbf{x}), \qquad (10)

where \{ N_i(\mathbf{x}) \}_{i=1}^{n} is the finite set of global finite element basis functions with the cardinal property N_i(\mathbf{x}_j) = \delta_{ij} at the mesh nodes, with \delta_{ij} the Kronecker symbol. If we substitute these expressions into the weak formulation and choose v = N_j, the discretization can be made symmetric and results in a sparse linear system of the form

\mathcal{A} \boldsymbol{\phi} = \mathbf{b}, \qquad (11a)

where n is the total number of degrees of freedom in the discretisation of the mesh and

\mathcal{A}_{ij} = \int_{\Omega_h} \nabla N_i \cdot \nabla N_j \, d\Omega, \qquad (11b)
b_i = \sum_{s=1}^{N_\Gamma} \int_{\Gamma^s} N_i \, (\mathbf{n} \cdot \nabla\phi) \, d\Gamma, \qquad (11c)

and where N_\Gamma is the total number of segments of the domain boundary of the computational domain, constituted from the surfaces of the elements used to tessellate the domain.
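To make the assembly in (11b) concrete, the following is a minimal 1D sketch (not the implementation used in this work) of how the global Laplace stiffness matrix can be assembled for a spectral element discretization on Legendre-Gauss-Lobatto (LGL) nodes; the helper names lgl_nodes_weights, diff_matrix and assemble_stiffness are illustrative.

    import numpy as np
    from numpy.polynomial import legendre as leg

    def lgl_nodes_weights(P):
        # LGL nodes on [-1, 1]: endpoints plus the roots of L_P', with
        # quadrature weights w_j = 2 / (P (P + 1) L_P(x_j)^2).
        c = np.zeros(P + 1); c[-1] = 1.0  # coefficients of Legendre L_P
        x = np.concatenate(([-1.0], np.sort(leg.legroots(leg.legder(c))), [1.0]))
        w = 2.0 / (P * (P + 1) * leg.legval(x, c) ** 2)
        return x, w

    def diff_matrix(x):
        # Nodal differentiation matrix D = V' V^{-1} via the Legendre Vandermonde.
        P = len(x) - 1
        V = leg.legvander(x, P)  # V[i, j] = L_j(x_i)
        dV = np.stack([leg.legval(x, leg.legder(np.eye(P + 1)[j]))
                       for j in range(P + 1)], axis=1)
        return dV @ np.linalg.inv(V)

    def assemble_stiffness(n_el, P, length=1.0):
        # Global SEM stiffness A_ij = int grad(N_i).grad(N_j) dx on [0, length],
        # assembled by direct stiffness summation over shared boundary nodes.
        x, w = lgl_nodes_weights(P)
        D = diff_matrix(x)
        h = length / n_el  # uniform element size
        K_el = (2.0 / h) * (D.T @ np.diag(w) @ D)  # LGL quadrature is exact here
        n = n_el * P + 1
        A = np.zeros((n, n))
        for k in range(n_el):
            s = k * P
            A[s:s + P + 1, s:s + P + 1] += K_el
        return A

    A = assemble_stiffness(n_el=4, P=6)  # small example; A is symmetric

The element contributions form dense (P + 1) x (P + 1) blocks, which is what motivates the element-block smoothers and the matrix-free considerations discussed in the introduction.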

If the preconditioned conjugate gradient (PCG) method is used for the solution, convergence is guaranteed [CaiEtAl1998]. Furthermore, the convergence properties are determined by the eigenspectrum, which for symmetric positive definite (SPD) systems is characterized by real, positive eigenvalues. Focus is given to the design of an efficient iterative p-multigrid spectral element solver for the Laplace problem (11a).

4 Iterative solver

The goal is to solve (11a) using an O(n)-scalable iterative p-multigrid scheme tailored for high-order accurate approximations on fixed meshes, where n is the total number of degrees of freedom on the finest mesh. An essential component of such methods is coarse grid acceleration, which here results in p-multigrid strategies [FL05]. We solve the linear system of equations that arises from the SEM discretization, cf. (11a).

4.1 Preconditioning

Preconditioning implies a transformation of the original problem (11a) to a new problem with the same solution, e.g. of the left-preconditioned form

\mathcal{M}^{-1} \mathcal{A} \boldsymbol{\phi} = \mathcal{M}^{-1} \mathbf{b}, \qquad (12)

where \mathcal{M} is the preconditioner. Identifying a proper preconditioning strategy is the key to efficient solution of sparse linear systems such as those that arise from numerical discretization using spectral element methods, e.g. the discretization (11a). The objective of preconditioning [vanderVorst:2003:IKM] is to accelerate the convergence of the iterative solver and thereby make it less expensive to solve the system of equations. In this work, the design of the preconditioner comes with the requirement that it should work efficiently in the setting of unstructured meshes and arbitrarily shaped bodies. For classical finite element methods (FEM), the main path to convergence is h-type convergence, where the mesh is refined to reduce the spatial errors. In the spectral element method, an additional option is available, namely p-type convergence, where the order of the local polynomial expansion on each element in the mesh is adapted. The p-type convergence comes with the additional property that the spatial resolution can be improved by modifying the expansion basis, and therefore one does not need to change the initial mesh, as in FEM, to reduce the errors. The objective of this study is to take advantage of this and consider the design of a geometric p-multigrid scalable iterative solver strategy that works without h-refinement, thereby avoiding having to re-mesh the features of the complex geometry of bodies represented via an adapted mesh.

Previous benchmarks of near-optimal multigrid efficiency for a multigrid preconditioned defect correction (PDC) method, obtained using a flexible-order finite difference FNPF simulator [EngsigKarup2014], can be used as a reference for the multigrid methods in this work. In the following, we provide details on the design of an iterative solver strategy for the intended applications and demonstrate through numerical examples the efficiency and scalability of geometric p-multigrid solvers tailored to solving the FNPF equations of Section 2.

5 Multigrid algorithm

In this work we employ the standard version of the geometric multigrid algorithm, namely the V-cycle [Trottenberg01], i.e. starting from the finest level, coarsening until the coarsest grid is reached, and going back to the finest level. This is depicted conceptually in Figure 1, where a combination of h- and p-multigrid refinements is presented and each of the grids is visited using the recursive multigrid algorithm. Geometric multigrid methods rely on several grids ranging from the coarsest to the finest. The changes in resolution between grids are referred to as coarsening. In standard coarsening, the coarser grid simply has half the resolution of the finer grid in each spatial direction. However, in some cases it can be advantageous to employ semi-coarsening, where the grid is coarsened in only one spatial direction, or combinations hereof.

Figure 1: Illustration of a multigrid V-cycle combining an h- and p-multigrid strategy.

We restrict our focus to the geometric p-multigrid version in the following. Consider a span of grids discretized on the same domain, consisting of the same elements but utilising different polynomial orders P_l of the element expansions, l = 1, ..., L, which results in discretization pairs of the form (N_{el}, P_l). Typically, we strive to have coarsening between grids fulfilling P_{l-1} ≈ P_l / 2, as illustrated in the small sketch after Figure 2. This range of p-grids forms the basis for a nested grid approach. For example, a V-cycle of a p-multigrid method on several grids is depicted in Figure 2.

Figure 2: Illustration of a V-cycle of geometric p-multigrid in 2D. Big gray dots represent element vertices (the h-mesh); black and white nodes represent Legendre-Gauss-Lobatto (LGL) nodes on the edges and in the interior of the elements, respectively.
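As a minimal illustration of this halving-based coarsening (an assumption consistent with the coarsening strategies reported in Tables 1 and 4), the order hierarchy can be generated as follows; the helper name p_hierarchy is illustrative.

    def p_hierarchy(P_fine):
        # Polynomial orders produced by repeated halving, e.g. 6 -> [6, 3, 1].
        orders = [P_fine]
        while orders[-1] > 1:
            orders.append(max(1, orders[-1] // 2))
        return orders

    print(p_hierarchy(6))  # [6, 3, 1], cf. the coarsening strategy in Table 1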

A pseudo-code for the recursive multigrid V-cycle for solving a linear system is given as Algorithm 1, employed with the following input: the current grid level l, the matrix A_l and right-hand side b_l on that grid, an initial approximation x_l, and the number of smoothing operations \nu_1 (pre-smoothing) and \nu_2 (post-smoothing).

1: function MultigridCycle(l, A_l, b_l, x_l, \nu_1, \nu_2)
2:     x_l <- Smooth^{\nu_1}(A_l, b_l, x_l)    (pre-smoothing)
3:     r_l <- b_l - A_l x_l    (residual)
4:     r_{l-1} <- R r_l    (restriction to the coarser grid)
5:     if l - 1 = 1 then
6:         e_{l-1} <- A_{l-1}^{-1} r_{l-1}    (direct solve on the coarsest grid)
7:     else if l - 1 > 1 then
8:         e_{l-1} <- MultigridCycle(l - 1, A_{l-1}, r_{l-1}, 0, \nu_1, \nu_2)
9:     e_l <- P e_{l-1}    (prolongation of the correction)
10:     x_l <- x_l + e_l    (coarse grid correction)
11:     x_l <- Smooth^{\nu_2}(A_l, b_l, x_l)    (post-smoothing)
12: return x_l
Algorithm 1 Geometric p-multigrid V-cycle
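To make Algorithm 1 concrete, the following is a minimal Python sketch of the recursive V-cycle, assuming per-level operators A[l] and prolongation matrices P[l] (with restriction taken as the transpose, cf. Section 5.1) are given as dense arrays; the damped Jacobi smoother is an illustrative stand-in for the Schwarz smoother of Section 5.2.

    import numpy as np

    def smooth(A, b, x, nu, omega=2.0 / 3.0):
        # nu sweeps of damped Jacobi (stand-in for the ASM smoother).
        d_inv = 1.0 / A.diagonal()
        for _ in range(nu):
            x = x + omega * d_inv * (b - A @ x)
        return x

    def v_cycle(l, A, P, b, x, nu1=3, nu2=3):
        # Recursive p-multigrid V-cycle: A[l] is the level-l operator and
        # P[l] prolongates a vector from level l - 1 to level l.
        x = smooth(A[l], b, x, nu1)          # pre-smoothing
        r = b - A[l] @ x                     # residual on level l
        r_c = P[l].T @ r                     # restriction, R = P^T
        if l - 1 == 0:
            e_c = np.linalg.solve(A[0], r_c)  # direct solve on the coarsest level
        else:
            e_c = v_cycle(l - 1, A, P, r_c, np.zeros_like(r_c), nu1, nu2)
        x = x + P[l] @ e_c                   # coarse grid correction
        x = smooth(A[l], b, x, nu2)          # post-smoothing
        return x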

A multigrid iteration can then be combined with other iterative solvers in the form of the preconditioned defect correction (PDC) method (Algorithm 2) and the preconditioned conjugate gradient (PCG) method (Algorithm 3). We use a stop criterion for the iterative solvers of the form ||r^k||_2 <= rtol ||r^0||_2 + atol, giving the user control over the acceptable level of convergence in terms of combined absolute (atol) and relative (rtol) error tolerances.

1: function PDCpMG(A, b, x^0)
2:     while ||r^k||_2 > rtol ||r^0||_2 + atol do
3:         r^k <- b - A x^k    (defect)
4:         e^k <- MultigridCycle(L, A, r^k, 0, \nu_1, \nu_2)    (preconditioning, e^k ~ M^{-1} r^k)
5:         x^{k+1} <- x^k + e^k    (correction)
6:         k <- k + 1
7: return x^k
Algorithm 2 Multigrid Preconditioned Defect Correction
1: function PCGpMG(A, b, x_0)
2:     r_0 <- b - A x_0
3:     z_0 <- MultigridCycle(L, A, r_0, 0, \nu_1, \nu_2)    (z_0 ~ M^{-1} r_0)
4:     p_0 <- z_0
5:     while ||r_k||_2 > rtol ||r_0||_2 + atol do
6:         w_k <- A p_k
7:         \alpha_k <- (r_k^T z_k) / (p_k^T w_k)
8:         x_{k+1} <- x_k + \alpha_k p_k
9:         r_{k+1} <- r_k - \alpha_k w_k
10:         z_{k+1} <- MultigridCycle(L, A, r_{k+1}, 0, \nu_1, \nu_2)
11:         \beta_k <- (r_{k+1}^T z_{k+1}) / (r_k^T z_k)
12:         p_{k+1} <- z_{k+1} + \beta_k p_k
13:         k <- k + 1
14:     end while
15: return x_k
Algorithm 3 p-Multigrid Preconditioned Conjugate Gradient
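The following sketch mirrors Algorithm 3 in Python, with the preconditioner passed as a callable (e.g. one V-cycle as sketched after Algorithm 1) and the combined atol/rtol stop criterion; this is a sketch under these assumptions, not the implementation benchmarked below.

    import numpy as np

    def pcg(A, b, x0, M_inv, atol=1e-8, rtol=1e-8, max_it=200):
        # Preconditioned conjugate gradient for SPD A; M_inv applies the
        # preconditioner, e.g. M_inv = lambda r: v_cycle(L, A_levels,
        # P_levels, r, np.zeros_like(r)).
        x = x0.copy()
        r = b - A @ x
        z = M_inv(r)
        p = z.copy()
        rz = r @ z
        stop = atol + rtol * np.linalg.norm(r)  # combined stop criterion
        for _ in range(max_it):
            if np.linalg.norm(r) <= stop:
                break
            w = A @ p                    # one fine grid SpMV per iteration
            alpha = rz / (p @ w)
            x = x + alpha * p
            r = r - alpha * w
            z = M_inv(r)                 # one preconditioner application
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x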

5.1 Transfer operators

The multigrid method relies on the ability to transfer approximate solution vectors between the nested grids. The transfer operators consist of prolongation operators P that take a solution vector from a coarse grid to a fine grid, and restriction operators R that take a solution vector from a fine grid to a coarse grid. The grids are ordered by increasing polynomial order P_1 < P_2 < ... < P_L. The natural choice for the prolongation and restriction operators in a multigrid method is linear operators.

In this work we employ the interpolation strategy defined for the first time in [RP87], where the coarse grid spaces are subspaces of the fine grid space; therefore a solution vector can be expanded in terms of either the fine or the coarse grid basis functions,

u(\mathbf{x}) = \sum_{i=1}^{n^F} \hat{u}_i^F N_i^F(\mathbf{x}) = \sum_{j=1}^{n^C} \hat{u}_j^C N_j^C(\mathbf{x}), \qquad (13)

and the prolongation can be expressed as a matrix-vector product interpolating from the coarse to the fine grid,

\hat{\mathbf{u}}^F = P \hat{\mathbf{u}}^C, \quad P_{ij} = N_j^C(\mathbf{x}_i^F). \qquad (14)

Similarly to [RP87], the restriction operator is the transpose of the prolongation,

R = P^T. \qquad (15)
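In practice the prolongation matrix in (14) can be formed by evaluating the coarse grid Lagrange basis at the fine grid nodes. The following is a small 1D sketch for LGL node sets of orders P_C < P_F using a Legendre Vandermonde matrix; the helper lgl_nodes repeats the illustrative routine from the assembly sketch in Section 3.1.

    import numpy as np
    from numpy.polynomial import legendre as leg

    def lgl_nodes(P):
        # LGL nodes on [-1, 1]: endpoints plus the roots of L_P'.
        c = np.zeros(P + 1); c[-1] = 1.0
        return np.concatenate(([-1.0], np.sort(leg.legroots(leg.legder(c))), [1.0]))

    def prolongation(P_coarse, P_fine):
        # P[i, j] = N_j^C(x_i^F): coarse Lagrange basis at the fine LGL nodes.
        xc, xf = lgl_nodes(P_coarse), lgl_nodes(P_fine)
        Vc = leg.legvander(xc, P_coarse)  # modal-to-nodal map on coarse nodes
        Vf = leg.legvander(xf, P_coarse)  # coarse modes at the fine nodes
        return Vf @ np.linalg.inv(Vc)

    P36 = prolongation(3, 6)  # prolongate order-3 nodal data to order-6 nodes
    R63 = P36.T               # restriction as the transpose, cf. (15)

Because the coarse space is a subspace of the fine space, the prolongation is exact interpolation, and only the restriction involves a genuine approximation.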

5.2 Relaxation strategy

The second crucial component in a multigrid scheme is the choice of a smoothing operator, which is responsible for the error correction. The most basic stationary iterative methods, such as the Jacobi or Gauss-Seidel schemes, prove to work fine for simple test cases of elliptic PDEs (examples in [RP87, FL05]). However, it is known that the performance of these methods deteriorates for higher polynomial orders of the finite element discretization [Janssen2011], and for Galerkin-based frameworks such as the SEM used in this work, efficient smoothers that can exploit the element-block structure are necessary to ensure a rapid decline of the error. One of the most promising strategies for continuous Galerkin (CG) discretized schemes is the additive Schwarz method (ASM), which significantly enhances the convergence rate of the multigrid solver in comparison to the standard smoothers [FL05]. The ASM can be presented in the form of a preconditioned defect correction iteration

\mathbf{x}^{k+1} = \mathbf{x}^k + \mathcal{M}^{-1}_{ASM} (\mathbf{b} - \mathcal{A} \mathbf{x}^k), \qquad (16)

where \mathcal{M}^{-1}_{ASM} is the resulting Schwarz preconditioner. In the context of multigrid solvers the preconditioner is a block matrix of the form

\mathcal{M}^{-1}_{ASM} = \sum_k R_k^T W_k A_k^{-1} R_k, \qquad (17)

where we have introduced a restriction operator R_k that restricts nodes from the main computational domain to a newly defined subdomain (an element block along with the pre-defined overlapping nodes), A_k = R_k \mathcal{A} R_k^T is the corresponding local operator, and W_k is a diagonal weighting operator,

(W_k)_{ii} = \frac{1}{c_i}, \qquad (18)

with c_i the number of subdomains that contain node i.

In Figure 3 we depict the main Schwarz relaxation strategy used in this work on an exemplary 2D computational domain consisting of 9 elements. Tests show that one overlapping node provides a good improvement in convergence rate in comparison to the standard block-Jacobi method (ASM with zero overlapping nodes) for a wide range of meshes. In the problems considered in Section 6 we use one overlapping node in each spatial direction for all element-block smoothers, as illustrated in Figure 3, unless stated otherwise. A minimal code sketch of the ASM application is given after Figure 3.

Figure 3: An example of one subdomain of the Schwarz smoother, with the center element in the picture as the element block. The subdomain is represented by the black nodes, which consist of the element nodes and the overlapping zone with one overlapping node in each spatial direction.
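The sketch below shows how the additive Schwarz preconditioner (17) can be applied, assuming the overlapping subdomain index sets have already been extracted from the mesh; the multiplicity-based weighting shown here is one common choice for W_k and is an assumption, as are the helper names asm_setup and asm_apply.

    import numpy as np

    def asm_setup(A, subdomains):
        # Pre-factorize the local blocks A_k = R_k A R_k^T, where subdomains
        # is a list of overlapping index arrays (one per element block).
        return [np.linalg.inv(A[np.ix_(idx, idx)]) for idx in subdomains]

    def asm_apply(r, subdomains, local_inv):
        # z = sum_k R_k^T W_k A_k^{-1} R_k r, with W_k here weighting each
        # node by the inverse of the number of subdomains that contain it.
        z = np.zeros_like(r)
        mult = np.zeros_like(r)
        for idx in subdomains:
            mult[idx] += 1.0
        for idx, Ak_inv in zip(subdomains, local_inv):
            z[idx] += (Ak_inv @ r[idx]) / mult[idx]
        return z

Pre-factorizing the local blocks once and reusing them in every smoothing sweep is what makes the frozen-operator strategy discussed in Section 5.4 attractive.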

5.3 Performance of multigrid methods

To measure the algorithmic efficiency of an iterative multigrid scheme, we can consider the convergence rate. We measure the relative change of the residual of the preconditioned system (12) on the finest grid between successive iterations,

q = \frac{|| \mathbf{r}^{k+1} ||_2}{|| \mathbf{r}^k ||_2}. \qquad (19)

The smoothing is done using (16), and a Work Unit (WU) is measured relative to one sparse matrix-vector (SpMV) operation using the system matrix on the fine grid. The multigrid strategy detailed in [ENGSIGKARUP20092100], where semi-coarsening is combined with standard coarsening, can be invoked to help improve convergence speed in the context of anisotropic meshes with elements of high aspect ratios. The assessment of the performance focuses on the algorithmic efficiency, which is assessed in terms of the convergence rate, and on the numerical efficiency, which is implementation dependent. By selecting the multigrid components it is possible to make trade-offs between algorithmic efficiency and numerical efficiency.
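For reference, the convergence rate (19) can be estimated from a recorded residual history as the geometric mean of the successive residual ratios, as in this short sketch; the values shown are illustrative.

    import numpy as np

    def convergence_rate(res_history):
        # Geometric mean of ||r^{k+1}|| / ||r^k|| over the residual history.
        r = np.asarray(res_history, dtype=float)
        return (r[-1] / r[0]) ** (1.0 / (len(r) - 1))

    q = convergence_rate([1.0, 9.1e-2, 8.7e-3, 8.2e-4])
    print(q)  # ~0.09, i.e. roughly one order of magnitude per iteration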

5.4 On computational cost of multigrid

To put the cost of the solver in a broader perspective, we define a relative measurement unit, the work unit (WU). One WU is equivalent to the computational cost of one preconditioned defect correction iteration (16) on the finest grid [Brandt1984].

We divide the cost of the multigrid method into two parts. The first part is the operator assembly, which consists of forming the smoothing operator \mathcal{M}^{-1}_{ASM} and the coarse grid Laplace operators (either via a Galerkin coarse grid operator or via direct assembly). The second part is the actual solve, defined by the pseudo-algorithms 1, 2 and 3. The former part is, by a large margin, the more expensive. However, in our numerical experiments we pre-compute \mathcal{M}^{-1}_{ASM} and the system matrix on all grids at t = 0 s and re-use them throughout the whole time domain; therefore the computational cost of assembling \mathcal{M}^{-1}_{ASM} and \mathcal{A} becomes negligible in comparison to the total simulation time.

6 Numerical benchmarks

We consider two benchmark cases. The first is a standard 2D test case for dispersive and nonlinear wave models that was considered in [EngsigKarup2014], where a high-order finite difference method was used together with a geometric multigrid method in an iterative preconditioned defect correction solver. The second is a 3D case for one of the blind test experiments, corresponding to case 2.3 described in [MaEtAl2015, RansleyEtAl2018]. It represents an advanced case where the free surface mesh is fully unstructured, as is typical for real cases, and where a fixed floating production storage and offloading (FPSO) vessel body is represented with curvilinear elements. Remark that a different formulation of the Laplace problem is used in each of the benchmarks.

We introduce three different strategies for carrying the simulation through the whole time domain, as illustrated in Figure 4. To avoid the expensive global assembly of the smoothing matrix and the coarse grid Laplace operators, we store the initial \mathcal{M}^{-1}_{ASM} and \mathcal{A} (at t = 0 s) and apply these in the solver at every time step (strategy I in Figure 4). We compare the numerical results to strategies II and III, where we update all the operators at every time step, based either on a Laplace operator formed from the linear formulation of potential flow on the coarse grids or on the fully nonlinear formulation (2) on all grid levels (the latter).

Figure 4: Strategies that carry the computation through the whole time domain. NONLIN denotes the full, more computationally expensive, nonlinear Laplace operator (2) on all grid levels, while LIN is the simplified, less expensive, linear version on all levels but the finest. A subscript '0' denotes use of an operator from time zero (the first step) on all later time steps.

6.1 Harmonic generation over a submerged bar

The test case of harmonic generation over a submerged bar has become a standard test case for evaluating dispersive and nonlinear wave models. This case was considered in [EEB2016], where a direct solver for the Laplace problem was used. In this work, we instead consider the geometric p-multigrid method and solve the discretized Laplace problem in the σ-transformed formulation (11a). Qualitatively, we obtain results similar to those presented in the previous work; hence, we report only the results obtained using the iterative multigrid solver strategies.

We consider a numerical setup where the time domain is resolved with a fixed time step and the spatial fluid domain is discretized with high-aspect-ratio quadrilateral elements. The discretization of the linear system exhibits a discrete anisotropy determined by the ratio of the element sizes in the Cartesian directions. A natural restriction strategy would be to semi-coarsen in one direction first, to perform more smoothing operations on a better-conditioned system; in this case, however, the assembled block-smoother has proven to be efficient enough. The coarsening strategy is given in Table 1. Results were computed using a fourth-order explicit Runge-Kutta (ERK4) method for the temporal integration. In total, the simulation ran for 1125 time steps, and the results are averaged over a total of 5625 Laplace solves. Two pre- and post-smoothing operations are used for the condensed system, and one pre- and post-smoothing operation for the full system of equations.

Figure 5: Time series of free surface elevations compared with experimental measurement data at wave gauges positioned at x = 4.0 m, 14.5 m, 17.3 m and 21.0 m.
Grid level        3   2   1
P (horizontal)    6   3   1
P (vertical)      6   3   1
Table 1: Coarsening strategy (polynomial orders on each grid level).

Tables 2 and 3 show results for several different schemes, classified in terms of the applied iterative method: multigrid preconditioned conjugate gradient (PCG-p-GMG), multigrid preconditioned defect correction (PDC-p-GMG) and stand-alone multigrid (MG), and also in terms of the strategies presented in Figure 4. Table 2 shows the average iteration count and Table 3 the measured time (in WUs) to achieve a given tolerance. The results are compared to a direct solver based on Cholesky factorization of the system matrix with permutation of both rows and columns using reverse Cuthill-McKee (RCM) ordering.

Method                 tol 1   tol 2   tol 3   tol 4
PCG-p-GMG(3,3) I       1.00    1.99    1.99    2.00
PCG-p-GMG(3,3) II      1.00    1.99    1.99    2.00
PCG-p-GMG(3,3) III     1.00    1.99    1.99    2.00
MG(3,3) I              1.00    1.99    1.99    2.01
PDC-p-GMG(3,3) I       1.00    1.99    1.99    2.01
Table 2: Average number of iterations to achieve four successively stricter tolerance levels. Full system.
Method                 tol 1   tol 2   tol 3   tol 4
PCG-p-GMG(3,3) I       8.24    16.52   15.83   17.04
PCG-p-GMG(3,3) II      8.44    16.71   16.21   16.62
PCG-p-GMG(3,3) III     8.66    16.68   16.30   16.36
MG(3,3) I              8.67    16.93   16.84   16.86
PDC-p-GMG(3,3) I       8.25    15.71   16.97   17.38
Direct, LU             73.75
Table 3: Average cost in absolute WUs to achieve the given tolerance levels.

The p-GMG strategies give similar solver performance and are demonstrated in 2D to give speedups ranging from 4.5 to 8.9 times depending on the choice of tolerance level. Although for the NONLIN cases strategy I does not differ much from the other strategies, when a Laplace operator based on the small-amplitude wave assumption is deployed (a worse but cheaper approximation), it is the only strategy among the LIN variants that maintains TME. A more universal way of representing the results of Table 2 is in work units (WU), cf. Table 3, where one WU is set to be the cost of a sparse matrix-vector (SpMV) operation. Figure 5 presents exemplary results of the computed free surface time series compared with experimental measurements.

We remark that the number of outer CG iterations can be compared to the results given in [CaiEtAl1998], and the present approach shows to be more efficient than the reported FEM results, with close to an order of magnitude improvement in iteration counts.

6.2 Focusing wave impacting an FPSO structure in 3D

We consider a numerical experiment where we compare to case 2.3 of the experimental measurement campaign from the FROTH (EPSRC) project [MAI2016115], carried out at Plymouth University in the COAST Laboratory facility in the UK. In case 2.2, the nonlinear focusing wave group is determined using New Wave theory [TromansEtAl1991] and is based on a JONSWAP spectrum, with the angle of propagation given relative to the centre-line of the basin. The measurement data, CCP-WSI ID 22BT1, is provided via the CCP-WSI project [CCPWSI1]. The wave signal is generated by reverse engineering using harmonic analysis, and we use an FFT-based procedure for reproducing the time series of WG1, as described in [TromansEtAl1991].

We employ a FNPF-SEM model that is based on the non-σ-transformed formulation resulting from the SEM Galerkin discretization of (5). The time series signal of the nonlinear focusing wave group is compared to the experimental data from different wave gauges over the time domain. In the simulation, a fixed time step is used and the spatial domain is defined with high-aspect-ratio elements for a numerical wave tank with a fixed FPSO embedded in the fluid domain. The mesh setup consists of 3D prism elements (see Figure 6) with polynomial order 5 on the triangular base and 3 in the vertical direction (cf. Table 4). Semi-coarsening in the horizontal directions is employed in the multigrid solver until the polynomial orders in all directions of the expansion basis are equal; then standard coarsening is employed until the coarsest level is reached. This is illustrated in Table 4. On all p-GMG levels but the coarsest grid, three pre- and post-smoothing operations are used. Results were computed using ERK4 for the temporal integration. In total, the simulation ran for 1 200 time steps.

Grid level        3   2   1
P (base)          5   3   1
P (vertical)      3   3   1
Table 4: Coarsening strategy.
Figure 6: The problem setup for the nonlinear focusing wave group interacting with a fixed FPSO at an angle. (a) Experimental setup with wave generation and wave gauges. (b) Unstructured mesh setup for the FPSO at an incident angle of 20 degrees with signal measurement wave gauges indicated. The (blue) interface is positioned at wave gauge 1 (WG1) and is highlighted between the relaxation zone (left of the interface) and the computational domain (right of the interface).
Figure 7: Wave gauge measurements for a focusing wave interacting with a fixed FPSO. A comparison is made with experimental results from the CCP-WSI case 2.2 for wave gauges (a) WG7 and (b) WG24.

Similarly to the previous submerged bar test case, Tables 5 and 6 present the numerical results of the PCG-p-GMG, PDC-p-GMG and stand-alone p-GMG solvers. Unlike in the 2D problem, it can be seen in Table 5 that for the 3D case the number of iterations deteriorates with stricter tolerance levels; however, all the solvers still maintain a convergence rate of approximately one order of magnitude per iteration. The results in Table 6 demonstrate speedups in comparison to a sparse direct solver with reordering ranging from 0.9 (a slowdown) to 7.3 times depending on the choice of tolerance level. The slowdown at the strictest tolerance is expected, as the benefit of using multigrid reduces with an increasing number of iterations. However, we note that in contrast to a sparse direct solver, whose efficiency is impacted by highly unstructured meshes, the iterative p-multigrid solver strategy is designed to achieve O(n) complexity. Therefore, the results of this test case confirm the feasibility and potential of using an iterative p-multigrid spectral element model in an advanced application to improve efficiency.

Method                 tol 1   tol 2   tol 3   tol 4
PCG-p-GMG(3,3) I       1.11    2.33    5.06    8.47
PCG-p-GMG(3,3) II      1.11    2.33    5.08    8.51
PCG-p-GMG(3,3) III     1.11    2.33    5.09    8.52
p-GMG(3,3) I           1.11    2.33    5.08    8.69
PDC-p-GMG(3,3) I       1.11    2.33    5.09    8.69
Table 5: Average number of iterations to achieve four successively stricter tolerance levels.
Method                 tol 1   tol 2   tol 3   tol 4
PCG-p-GMG(3,3) I       8.09    17.03   40.62   64.84
PCG-p-GMG(3,3) II      8.35    17.24   41.25   65.11
PCG-p-GMG(3,3) III     8.73    18.01   42.35   67.20
p-GMG(3,3) I           8.16    16.89   41.49   65.00
PDC-p-GMG(3,3) I       8.01    16.59   41.35   66.09
Direct, LU             59.11
Table 6: Average cost in relative WUs to achieve the given tolerance levels.

6.3 Scaling of the geometric p-multigrid

The scalability properties of the solver are examined in terms of an increasing number of elements N_el and polynomial order P. We consider a deep water nonlinear standing wave problem using the approximate 2D solution due to Glozman [AG96], used here in a 3D tank; it serves only as a means to run this benchmark with a nonlinear wave problem in a finite domain. The setup uses a fixed wave height, wave length and dispersion parameter, and a flat bottom with constant still water depth in a spatial domain with no body.

To investigate the scalability properties of the solver with a varying number of elements, we employ a series of five 3D prismatic h-meshes with an increasing number of elements and an approximately constant aspect ratio. The elements in each mesh are further discretized by the same pair of polynomial orders. The geometric p-multigrid follows the same multigrid strategy as described in Section 6.2. To test the scalability with respect to order, we consider a set of polynomial order pairs that discretize the problem on the finest grid. We employ the same coarsening strategy for all cases: first, the problem is semi-coarsened in the vertical direction until the polynomial orders in all directions are equal, and thereafter standard coarsening is used down to the coarsest level.

Figure 9 reports computational times and convergence rates for all the considered problems with various N_el and P. The convergence rate q is defined by equation (19). To test the scalability, we consider two relaxation strategies: the first with a constant number of overlapping nodes (one) for all cases and all coarse grids, and a second one (named refined Schwarz in Figure 9b) where the number of overlapping nodes grows with the polynomial order on the respective grid. Figure 8 presents both strategies on the Fekete nodes considered in the 3D prismatic mesh.

Figure 8: Exemplary overlapping regions on Fekete points for the polynomial orders used in the element expansions. Filled black circles represent an overlapping region of one node in all spatial directions (constant for all polynomial orders). The black together with the filled grey circles represent the refined Schwarz smoother with an order-dependent number of overlapping nodes in all spatial directions.

As seen in Figure 9, the geometric p-multigrid is shown to have excellent scalability properties with respect to the number of elements N_el: the convergence rate q stays constant on all considered meshes, which results in a perfectly linear increase of the computational cost, cf. Figure 9a. A good convergence rate is achieved in the various cases of increasing polynomial order with the refined Schwarz method. The convergence measure q given in (19), however, shows some variation across the setups with varying polynomial order, in contrast to those with fixed polynomial order and a varying number of elements. This comes with an increased cost of the preconditioner matrix-vector operations. Despite this, the solver achieves O(n) scaling of the computational cost (with n referring to the degrees of freedom). The solver with the non-refined Schwarz smoother shows a visible loss in convergence rate, which results in an increasing computational cost.

Figure 9: Scaling of (a) computational time and (b) convergence rate of the geometric p-multigrid solver with respect to the degrees of freedom (DOF) of the linear system.

7 Conclusions

We propose and demonstrate efficient and scalable iterative solution of the Laplace problem that arises in FNPF models. We propose to use a p-type geometric multigrid (p-GMG) iterative solver as the preferred way to solve the FNPF model equations, since it provides a natural and efficient way of refining the grid resolution while maintaining as much information as possible about the geometric features of the structural body representation across all nested grids in the multigrid solver.

We have tested several geometric p-multigrid accelerated iterative strategies for the efficient solution of the Laplace problem discretized using a high-order spectral element method. The main objective was to deliver textbook multigrid efficiency (TME) for spatial discretizations using the spectral element method on general unstructured meshes, to be able to handle wave-body problems. With the current implementations, we have shown that it is possible to achieve TME at moderate solver tolerances, and less than two times TME at the strictest tolerances that would be relevant for reducing the algebraic errors throughout a simulation. In numerical benchmarks of practical relevance, we have demonstrated for a FNPF-SEM solver that it is possible to achieve engineering accuracy within combined absolute (atol) and relative (rtol) tolerances. This leads to a main practical result of using p-GMG solvers, namely, that it is possible to achieve a low average iteration count of at most two iterations for the stage solver in explicit Runge-Kutta methods for all but the strictest tolerances. We tested both FNPF modelling using a traditional σ-transformed formulation suitable for pure wave propagation as well as a non-σ-transformed formulation suitable for wave-body applications.

Based on the presented numerical experiments, the case of a nonlinear focusing wave group showed up to a 7.3 times speedup using the spectral p-multigrid solver over a sparse direct solver (cf. Section 6.2). As a main result, the FNPF-SEM simulator is efficient and prepared for advanced simulations such as a wave-body problem with an FPSO subject to wave-induced loads. The proposed PCG-p-GMG iterative solver strategy demonstrated linearly scalable work effort (cf. Section 6.3), which is the key requisite for enabling large-scale practical calculations. In ongoing work, the iterative p-multigrid method is being implemented in a parallel computing framework via domain decomposition designed for massive scalability, to prepare it for execution on massively parallel computing systems.

Acknowledgments

The research was hosted by the Department of Applied Mathematics and Computer Science (DTU Compute), and access to high-performance computing resources was granted via the DTU Computing Center.

References