Parallel Direct Domain Decomposition Methods (D3M) for Finite Elements

Javad Moshfegh, et al. — 02/11/2020

A parallel direct solution approach based on the domain decomposition method (DDM) and directed acyclic graph (DAG) scheduling is outlined. Computations are represented as a sequence of small tasks that operate on domains of the DDM or on dense matrix blocks of a reduced matrix. These tasks are statically scheduled for parallel execution using their DAG dependencies and weights derived from estimates of computation and communication costs. Performance comparisons with MUMPS 5.1.2 on electrically large problems suggest up to 20% better parallel efficiency and up to 30% less memory, while maintaining the same accuracy.


I. Introduction

Direct solution methods (factorization-based) and high-performance parallel computing are growing trends in electromagnetic computations. When frequency-domain Maxwell's equations are discretized via the Finite Element Method (FEM), the resulting linear systems are sparse and indefinite, thus very hard to solve with direct factorization methods, and even more challenging to parallelize efficiently. State-of-the-art sparse direct solvers such as MUMPS [1] and PARDISO [2], although very reliable and fast, at least on smaller-scale problems, tend not to scale favorably and suffer from low parallel efficiency and a very high memory footprint.

Recent advances in parallel dense direct solvers have shifted toward parallel implementations that rely on directed acyclic graph (DAG) scheduling approaches [3], which can achieve highly efficient asynchronous parallel execution. However, adapting such schemes to sparse matrices is very hard or even impractical.

The Direct Domain Decomposition Method (D3M), introduced in [4]–[7], offers a reliable, memory-efficient sparse direct solver for FEM that can deliver better parallel performance. In D3M, the sparse FEM matrix is reduced via an "embarrassingly parallel" step into a block-wise sparse reduced matrix. A special block LDL^T factorization, well suited for block directed acyclic graph (B-DAG) task scheduling and asynchronous parallel execution, is used to solve the reduced system, before another "embarrassingly parallel" step recovers the primal unknowns. With this parallelization strategy, significant parallel scaling improvements and time savings can be achieved.
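To make the reduction and recovery steps concrete, the following is a minimal numerical sketch, assuming a dense interior/interface partition of each domain matrix; the names (reduce_domain, K_ii, K_ib, etc.) are illustrative and not the paper's implementation. Each domain eliminates its interior unknowns to form an interface (Schur-complement) block and a reduced right-hand side; after the dual solve of the reduced system, the interior solution is recovered locally.

```python
import numpy as np

def reduce_domain(K_ii, K_ib, K_bi, K_bb, f_i, f_b):
    """Per-domain (embarrassingly parallel) primal reduction: eliminate
    interior unknowns to obtain an interface (DtN-like) block
    S = K_bb - K_bi K_ii^{-1} K_ib and a reduced right-hand side g."""
    X = np.linalg.solve(K_ii, K_ib)   # K_ii^{-1} K_ib (a sparse factorization in practice)
    y = np.linalg.solve(K_ii, f_i)    # K_ii^{-1} f_i
    S = K_bb - K_bi @ X               # interface (Schur-complement) block
    g = f_b - K_bi @ y                # reduced right-hand side
    return S, g, X, y

def recover_domain(X, y, u_b):
    """Per-domain primal recovery: back-substitute the interface solution."""
    return y - X @ u_b

# Tiny illustration on a random symmetric test matrix (structure only, not an FEM matrix).
rng = np.random.default_rng(0)
n_i, n_b = 6, 3
A = rng.standard_normal((n_i + n_b, n_i + n_b))
K = A @ A.T + (n_i + n_b) * np.eye(n_i + n_b)
f = rng.standard_normal(n_i + n_b)
K_ii, K_ib = K[:n_i, :n_i], K[:n_i, n_i:]
K_bi, K_bb = K[n_i:, :n_i], K[n_i:, n_i:]

S, g, X, y = reduce_domain(K_ii, K_ib, K_bi, K_bb, f[:n_i], f[n_i:])
u_b = np.linalg.solve(S, g)           # stands in for the block LDL^T dual solve
u_i = recover_domain(X, y, u_b)
print(np.allclose(K @ np.concatenate([u_i, u_b]), f))   # True
```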

II. Parallel D3M

The D3M algorithm is decomposed into a set of instructions called primitive tasks (ptasks), e.g., "dense update" or "dense triangular solve". To increase data locality, ptasks are agglomerated into tasks that share local memory. A dependence analysis of these tasks is then performed to determine their spatio-temporal distribution onto cores. This is done by symbolically simulating the sequential D3M algorithm to generate a B-DAG, called the task graph. For example, the task graph of a four-domain problem is shown in Fig. 1.

Fig. 1: Overview of the proposed parallel methodology. (a) Cross-section of a sample four-domain problem; (b) sparsity pattern of the reduced matrix; (c) the task graph.
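As an illustration of the task-graph idea, the sketch below encodes a toy B-DAG (the task names and dependencies are invented for a two-domain example, not taken from Fig. 1) and computes one valid execution order with Kahn's algorithm; in the actual solver, all tasks whose dependencies are satisfied may execute concurrently on different cores.

```python
from collections import defaultdict, deque

# Toy B-DAG: primal-reduction tasks feed dual factorization/solve tasks on
# blocks of the reduced matrix, which feed primal-recovery tasks.
# An edge u -> v means "u must finish before v starts".
edges = {
    "reduce_D1":    ["factor_B11"],
    "reduce_D2":    ["trisolve_B21", "factor_B22"],
    "factor_B11":   ["trisolve_B21"],
    "trisolve_B21": ["update_B22"],
    "update_B22":   ["factor_B22"],
    "factor_B22":   ["recover_D1", "recover_D2"],
    "recover_D1":   [],
    "recover_D2":   [],
}

def topological_order(edges):
    """Kahn's algorithm: one valid sequential execution order of the DAG.
    Tasks whose in-degree reaches zero are ready and could run in parallel."""
    indeg = defaultdict(int)
    for u, vs in edges.items():
        indeg.setdefault(u, 0)
        for v in vs:
            indeg[v] += 1
    ready = deque(t for t, d in indeg.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for v in edges.get(t, []):
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return order

print(topological_order(edges))
```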

The D3M task graph has three parts: primal reduction, dual factorization and solution, and primal recovery. In primal reduction, each domain is represented by a task responsible for the sparse factorization of the regularized FEM matrix, the inversion on its interfaces (DtN map computation), and the generation of the dual domain matrices. In the dual factorization and solution step, each non-zero block of the dual matrix is represented by a task responsible for a possible dense update, dense triangular solve, and dense factorization. Finally, in the primal recovery part, each domain is again represented by a task responsible for recovering the primal unknowns and, if needed, computing the scattering matrix S or the far-field.
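The dual-step task kinds correspond to the three kernels of a right-looking blocked factorization. The sketch below uses a blocked Cholesky on a small SPD test matrix purely to expose that structure; the paper's solver applies a block LDL^T to the indefinite reduced matrix, and each statement in the loop body below would be an independent task in the B-DAG.

```python
import numpy as np

def blocked_factor(A, nb):
    """Right-looking blocked factorization showing the three dual task kinds:
    dense factorization of a diagonal block, dense triangular solves of the
    blocks below it, and dense (GEMM-like) updates of the trailing blocks."""
    n = A.shape[0]
    L = A.copy()
    for k in range(0, n, nb):
        ke = min(k + nb, n)
        # task: dense factorization of the diagonal block (cf. SYTRF)
        L[k:ke, k:ke] = np.linalg.cholesky(L[k:ke, k:ke])
        # task: dense triangular solve of the sub-diagonal blocks (cf. TRSM)
        L[ke:, k:ke] = np.linalg.solve(L[k:ke, k:ke], L[ke:, k:ke].T).T
        # task: dense update of the trailing submatrix (cf. GEMM/SYRK)
        L[ke:, ke:] -= L[ke:, k:ke] @ L[ke:, k:ke].T
    return np.tril(L)

rng = np.random.default_rng(1)
B = rng.standard_normal((12, 12))
A = B @ B.T + 12 * np.eye(12)       # SPD stand-in for the reduced matrix
L = blocked_factor(A, nb=4)
print(np.allclose(L @ L.T, A))      # True
```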

The weights of the B-DAG associated with the primal tasks are estimated using a k-nearest-neighbor method, whereas the weights of the dual tasks are estimated from benchmarked dense linear-algebra kernels: GEMM, SYTRF, and triangular solve (TRSM).
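One possible form of such a cost model is sketched below, assuming closed-form flop counts for the dense kernels divided by a benchmarked kernel rate; the 40 GFLOP/s figure is a placeholder, not a measured value from the paper.

```python
# Illustrative flop-count weights for the dual tasks; in practice the rate
# would be calibrated against benchmarked kernel timings on the target machine.
def gemm_flops(m, n, k):          # C (m x n) += A (m x k) @ B (k x n)
    return 2.0 * m * n * k

def sytrf_flops(n):               # LDL^T factorization of an n x n block
    return n ** 3 / 3.0

def trsm_flops(m, n):             # triangular solve with m x m factor, n right-hand sides
    return m * m * n

GFLOPS = 40.0                     # assumed benchmarked kernel rate (GFLOP/s)

def weight(flops):
    """Estimated task time in seconds, used as the B-DAG weight."""
    return flops / (GFLOPS * 1e9)

print(weight(gemm_flops(2000, 2000, 2000)))   # about 0.4 s at the assumed rate
```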

The weighted B-DAG is the input to a list-scheduling heuristic, which statically maps the tasks to the available processing units and determines their execution order. Asynchronous communication, aided by manual progression [8], is used to ensure overlap between communication and computation. Finally, the parallel code executes the assigned tasks on each processor.
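The following is a compact sketch of a list-scheduling heuristic of the kind described: a greedy earliest-finish-time variant. The ranking rule, the task names, and the omission of communication costs are simplifications for illustration, not the paper's scheduler.

```python
def list_schedule(tasks, deps, weight, n_procs):
    """Greedy list scheduling: visit tasks in priority order and assign each
    to the processor that can finish it earliest, respecting DAG dependencies.
    HEFT-style schedulers also model inter-processor communication; omitted here."""
    succ = {t: [] for t in tasks}
    for t, ds in deps.items():
        for d in ds:
            succ[d].append(t)
    rank = {}
    def get_rank(t):              # priority = weight + longest downstream path
        if t not in rank:
            rank[t] = weight[t] + max((get_rank(s) for s in succ[t]), default=0.0)
        return rank[t]
    order = sorted(tasks, key=get_rank, reverse=True)   # predecessors rank higher

    proc_free = [0.0] * n_procs   # time at which each processor becomes free
    finish, placement = {}, {}
    for t in order:
        ready = max((finish[d] for d in deps.get(t, [])), default=0.0)
        p = min(range(n_procs), key=lambda q: max(proc_free[q], ready))
        start = max(proc_free[p], ready)
        finish[t] = start + weight[t]
        proc_free[p] = finish[t]
        placement[t] = p
    return placement, finish

# Usage on the toy B-DAG above (deps given as predecessor lists, unit weights).
tasks = ["reduce_D1", "reduce_D2", "factor_B11", "trisolve_B21",
         "update_B22", "factor_B22", "recover_D1", "recover_D2"]
deps = {"factor_B11": ["reduce_D1"],
        "trisolve_B21": ["factor_B11", "reduce_D2"],
        "update_B22": ["trisolve_B21"],
        "factor_B22": ["update_B22", "reduce_D2"],
        "recover_D1": ["factor_B22"], "recover_D2": ["factor_B22"]}
placement, finish = list_schedule(tasks, deps, {t: 1.0 for t in tasks}, n_procs=2)
print(placement)
```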

III. Numerical Results

The scattering of a dielectric sphere (ε_r = 4) with a diameter of 7λ, partitioned into 1,000 domains, is considered. The problem is discretized into 534,516 tetrahedra and 3,411,490 second-order Nédélec tangential vector FEM unknowns. A series of runs with an increasing number of distributed cores is performed to compare the strong scalability of the proposed method against MUMPS 5.1.2 with METIS 5.1.0 reordering. Factorization time and memory of the two methods are compared in Fig. 2, where the green lines represent the MUMPS results. The proposed method has tuning parameters that mainly control memory. The setup shown by the black curve offers the fastest time and about 10% less memory than MUMPS 5.1.2, while the red curve offers approximately 30% less memory than MUMPS 5.1.2 at almost the same speed. The time breakdown for the main steps of the D3M method is plotted in Fig. 3, highlighting that the dual factorization/solution step dominates. Strong-scaling parallel speedup and parallel efficiency are compared in Fig. 4. The proposed D3M has up to 20% better parallel efficiency than MUMPS 5.1.2.

Fig. 2: Factorization time and memory for the 7λ sphere problem.
Fig. 3: Time breakdown of the proposed D3M for the 7λ sphere problem.
Fig. 4: Strong parallel scaling for the 7λ sphere problem.

References

  • [1] P. R. Amestoy, I. S. Duff, J.-Y. L'Excellent, and J. Koster, "A fully asynchronous multifrontal solver using distributed dynamic scheduling," SIAM Journal on Matrix Analysis and Applications, Vol. 23, No. 1, pp. 15–41, 2001.
  • [2] O. Schenk and K. Gärtner, "Solving unsymmetric sparse systems of linear equations with PARDISO," Future Generation Computer Systems, Vol. 20, No. 3, pp. 475–487, 2004.
  • [3] A. Buttari, J. Langou, J. Kurzak, and J. Dongarra, "A class of parallel tiled linear algebra algorithms for multicore architectures," Parallel Computing, Vol. 35, No. 1, pp. 38–53, 2009.
  • [4] J. Moshfegh and M. N. Vouvakis, "Direct domain decomposition method (D3M) for finite element electromagnetic computations," 2016 IEEE International Symposium on Antennas and Propagation & USNC/URSI National Radio Science Meeting, pp. 1127–1128.
  • [5] J. Moshfegh and M. N. Vouvakis, "Direct solution of FEM models: Are sparse direct solvers the best strategy?" 2017 IEEE International Conference on Electromagnetics in Advanced Applications (ICEAA), pp. 1636–1638.
  • [6] J. Moshfegh and M. N. Vouvakis, "A memory-efficient sparse direct solver with applications in CEM," 2017 IEEE International Symposium on Antennas and Propagation & USNC/URSI National Radio Science Meeting, pp. 1577–1578.
  • [7] M. N. Vouvakis and J. Moshfegh, "Sparse direct matrix solvers of finite element discretizations in electromagnetics," 2018 International Applied Computational Electromagnetics Society Symposium (ACES), pp. 1–2.
  • [8] T. Hoefler and A. Lumsdaine, "Message progression in parallel computing - to thread or not to thread?" IEEE International Conference on Cluster Computing, pp. 213–222, 2008.