1. Introduction
In this paper, the following anisotropic elliptic equation is considered:
(1) 
where is an open set in or , whose boundary is defined by ,
is the exterior normal vector on the boundary
. The parameter is the reciprocal of the anisotropy strength, and it can be very small, i.e. . Let be a magnetic field; then we can split the boundary into two parts, i.e. , with
Moreover, we define the parallel and perpendicular differential operators as follows:
where
is the identity matrix. This anisotropic elliptic equation (1) has important applications in the physics of magnetized plasmas, such as confinement fusion [13], the plasma of the ionosphere [1], plasma propulsion [12], etc.
The problem (1) is well-posed for any . Indeed, let be a convex polygon and ; then the problem (1) admits a unique solution , with
Formally, if goes to 0, we get a degenerate problem:
(2) 
The problem (2) is ill-posed due to the loss of uniqueness of the solution, since any function belonging to the set
is a solution of (2). In other words, the solution of (2) is determined only up to the kernel of the differential operator .
We then define a limit problem
(3) 
This limit problem (3) has a unique solution thanks to the constraint .
Solving the anisotropic elliptic equation directly with classic numerical methods may suffer from severe numerical pollution [21]. One possible way to alleviate such pollution for anisotropic elliptic or parabolic problems is to use high-order methods, such as the fourth-order finite volume method of [10] and the high-order finite element method of [9]. Moreover, Chacón et al. [3] proposed an asymptotic-preserving semi-Lagrangian algorithm for time-dependent anisotropic heat conduction equations.
In the last decade, another class of asymptotic-preserving (AP) schemes was developed for the problem (1). These AP schemes, on the one hand, are equivalent to the problem (1) when ; on the other hand, they coincide with the limit problem (3) as . Therefore, the AP schemes are well-defined for all values of . The first AP scheme was given for the aligned case () in [4]. Some extensions were made for the aligned case, namely the duality-based methods [2, 19]. Other methods address the general nonaligned case, such as the Micro-Macro AP (MMAP) scheme [5], the field line integration method [17, 18], the two-field iterated method [6, 20], and the AP scheme based on a first-order system method [11]. In most of these methods, the key point is to introduce an auxiliary variable, which yields an AP system. To solve the AP system numerically, one must solve a linear algebraic system with more degrees of freedom than the original system, for which direct solvers may not be practical in large-scale simulations. Thus the main purpose of this paper is to develop efficient and robust iterative methods.
In this paper we propose to use the GMRES method [15] to solve the linear algebraic systems. To improve the robustness and efficiency of the iterative solver, the key ingredient is efficient preconditioning. The coefficient matrix generated by the MMAP scheme used in this paper has a typical block structure. Motivated by the block preconditioning methods for the Navier-Stokes equations in [7], whose key point is an approximate Schur complement, we attempt to devise efficient approximate Schur complements for the MMAP system. We first give a natural approximation at the continuous operator level. However, this natural approximate Schur complement may be ill-conditioned when is very small. We then replace its boundary rows by those of the exact Schur complement. Surprisingly, the GMRES method then converges in only 2 iterations in our numerical examples. Therefore we find that the core point is how to approximate the boundary rows of the exact Schur complement. Based on this fact, we present some improvements in Section 3.2.
This paper is organized as follows: In Section 2, the Micro-Macro Asymptotic-Preserving (MMAP) scheme introduced in [5] is briefly reviewed. In Section 3, several block preconditioning methods for the MMAP scheme are described. In Section 4, a number of numerical comparisons of these block preconditioning methods are given, including the aligned case and the nonaligned case. The preconditioned methods for the MMAP scheme without the inflow boundary condition are also discussed there. Finally, we close the paper with some conclusions and perspectives.
2. Review of the Micro-Macro Asymptotic-Preserving schemes
In this part, we briefly review the Micro-Macro Asymptotic-Preserving (MMAP) scheme introduced in [5]. For clarity, let us omit the subscript , and let us focus on an anisotropic elliptic equation defined on the unit square, e.g. . We also denote , and . Thus the anisotropic elliptic equation reads as follows
(4) 
The key point of the MMAP scheme in [5] is to introduce an auxiliary function , with the following ansatz
(5) 
Taking the parallel derivative of (5) once more, we have
(6) 
Injecting (6) into the first equation of (4), we get
Similarly, putting the ansatz (5) into the boundary conditions in (4), we can obtain the corresponding boundary conditions. Finally, the MMAP scheme is written as
(7) 
Now, let us give two remarks on the MMAP scheme (7). The first one is that the MMAP scheme has a unique solution for any , provided that the parallel derivative of vanishes at the ends of the magnetic field lines. Indeed, provided that , the MMAP scheme (7) is equivalent to the anisotropic elliptic equation (4), which is well-posed for (see [5]); thus we have a unique solution . Formally letting in (7), we have
(8) 
Integrating (8) along a magnetic field line and using the fact that vanishes at one end of the field line, we find that
which means that is constant along each magnetic field line; thus , where denotes the average of along a magnetic field line. Finally, integrating the first equation of (7) along a magnetic field line, we obtain a unique solution , which is also the unique solution of the MMAP scheme (7). Notice that the constraint that the parallel derivative of vanishes at the ends of the magnetic field lines is equivalent to saying that the magnetic field lines cross the boundary .
The second remark is that the MMAP scheme (7) does not determine a unique . Indeed, if satisfies (7), then also satisfies (7) for any in the kernel of the operator . To determine a unique , the inflow boundary condition was added in [5] at one end of the magnetic field lines (denoted , where ), that is, on . Moreover, notice that along a magnetic field line, one boundary condition of on is redundant, as one can easily verify by integrating the second equation of (7) along a magnetic field line. As a result, at one end of the magnetic field lines, the boundary condition can be substituted by the inflow boundary condition .
3. Block preconditioning methods
In this part, we introduce block preconditioning methods for the discrete MMAP scheme of Section 2. Let us reformulate the MMAP scheme as follows
(9) 
and
(10) 
In the sequel, two cases will be considered:

Aligned case: the magnetic field is aligned with the axis, i.e. .

Nonaligned case: the magnetic field is not aligned with the Cartesian coordinates.
In Appendix A, the finite difference methods for the MMAP scheme (9)-(10) are given for both the aligned and nonaligned cases. In either case, the linear system takes the matrix form
(11) 
This block matrix can be factorized as
The so-called Schur complement is
(12) 
Denote the lower block-triangular matrix as follows
Using as a preconditioner, typical Krylov subspace methods such as GMRES [15] converge in two iterations [14, 16] for any grid size and any value of . However, the price to pay is the inversion of the matrix , which becomes too costly when the mesh is fine.
Now the central question is how to find a suitable approximation of the exact Schur complement at a cheap cost. More precisely, let us consider the following block preconditioner
its inverse is
Now applying the preconditioner to , we get
which means that the eigenvalues of
are 1 and , where is an eigenvalue of . If we can construct a suitable approximation of the exact Schur complement , then the eigenvalues of will be tightly clustered. As a result, one can expect rapid convergence of the Krylov iterative methods.
3.1. Approximate Schur complement
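The clustering claim can be illustrated on a tiny dense example: when the (2,2) block of the preconditioner is the exact Schur complement, every eigenvalue of the preconditioned matrix equals 1. The random blocks below are illustrative stand-ins, not the MMAP matrices.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5
A11 = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
A12 = 0.1 * rng.standard_normal((n, n))
A21 = 0.1 * rng.standard_normal((n, n))
A22 = 3.0 * np.eye(n)
A = np.block([[A11, A12], [A21, A22]])

S = A22 - A21 @ np.linalg.solve(A11, A12)          # exact Schur complement
P = np.block([[A11, np.zeros((n, n))], [A21, S]])  # lower block triangular

# P^{-1} A = [[I, A11^{-1} A12], [0, I]]: all eigenvalues equal 1.
eigs = np.linalg.eigvals(np.linalg.solve(P, A))
print(np.allclose(eigs, 1.0, atol=1e-5))  # → True
```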
In this subsection we give the basic ideas used to develop approximate Schur complements for the MMAP system matrix in (11). We express the coefficient matrix of (11) in continuous operator form
Formally, one can verify that the underlying continuous operator for the exact Schur complement is
(13) 
which is precisely the original PDE operator for the variable in (4). Thus a natural approximation of is the discrete coefficient matrix of (13) with a zero boundary condition on and a zero Neumann boundary condition on . We denote this natural approximate Schur complement by . However, when , due to the vanishing parallel derivative , one can deduce that tends to be singular. The performance of the block preconditioner with then deteriorates. For this assertion, see the numerical results of the aligned case in Table 1.
Now we give a new approximation at the algebraic level. Let us denote the diagonal part of by , which may capture some useful spectral information of . Hence, substituting by , we obtain a new approximation of
(14) 
The matrix contains spectral information of at the algebraic level and shows an extra advantage when becomes very small, as indicated by our numerical results in Section 4. We remark that the two proposed approximate Schur complements and can be easily constructed in any space dimension and conveniently applied in a parallel computing environment.
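At the matrix level, this diagonal-based approximation is cheap to form because it never needs the full inverse of the (1,1) block. A sketch with generic sparse stand-in blocks (the names, sizes, and entries are assumptions, not the MMAP matrices):

```python
import numpy as np
from scipy.sparse import identity, random as sprandom, csc_matrix, diags

n = 40
A11 = csc_matrix(4.0 * identity(n) + sprandom(n, n, density=0.05, random_state=1))
A12 = csc_matrix(sprandom(n, n, density=0.05, random_state=2))
A21 = csc_matrix(sprandom(n, n, density=0.05, random_state=3))
A22 = csc_matrix(3.0 * identity(n))

# Replace A11^{-1} by the inverse of its diagonal: the triple product
# stays sparse, so the approximation can be assembled in any dimension.
Dinv = diags(1.0 / A11.diagonal())
S_hat = (A22 - A21 @ Dinv @ A12).tocsc()
```

Because the sparsity pattern of `S_hat` is local, it can also be assembled block by block in a distributed setting, which is the parallel-friendliness mentioned above.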
Remark 1. The natural approximate Schur complement deserves more explanation. Firstly, it performs rather well for large . Moreover, in our numerical experiments we find that the iteration count of GMRES using is also stable in the nonaligned case, but not in the aligned case. In the nonaligned case, we numerically observe that the condition number of scales as follows
where is a parameter of the magnetic field in (17). Therefore, if is not very small and the grid is not very fine, this "bad approximation" coming from the non-AP discretization can serve as a good preconditioner for the MMAP discretization.
3.2. A further modification
Above we have constructed two approximate Schur complements. One is motivated at the continuous PDE level and the other mainly at the algebraic level. In this subsection we combine the two ideas to further improve the approximation.
Recall that the operator for is and that for is , so we can also use the matrix to approximate the operator (13). Unfortunately, on the one hand, the matrix may have zero rows on , since ( orthogonal to ) may vanish on ; on the other hand, the matrix carries no boundary conditions on . This means that may be singular due to null rows on the bottom boundary . To remedy this, we replace such null rows of by the corresponding rows of . We thus obtain a third approximate Schur complement .
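The row surgery itself is simple sparse-matrix bookkeeping. A hedged sketch, where "null rows" are taken to mean rows with no stored entries (an assumption about how the discretization matrix is assembled):

```python
import numpy as np
from scipy.sparse import csr_matrix

def replace_null_rows(M, fallback):
    """Return a copy of M (CSR, same shape as `fallback`) in which every
    row with no stored entries is replaced by the corresponding row of
    `fallback`."""
    M = csr_matrix(M)
    null_rows = np.flatnonzero(np.diff(M.indptr) == 0)  # rows with no entries
    out = M.tolil()
    fb = csr_matrix(fallback)
    for i in null_rows:
        out[i, :] = fb[i, :].toarray()
    return out.tocsr()

# Tiny usage example: the second row of M is null and gets patched from F.
M = csr_matrix(np.array([[1.0, 2.0], [0.0, 0.0]]))
F = csr_matrix(np.array([[9.0, 9.0], [3.0, 4.0]]))
R = replace_null_rows(M, F)
```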
In order to assess the influence of the approximation of the boundary rows in , we construct an extra, expensive approximation using the exact inverse of . In detail, we replace the null boundary rows of on by the corresponding rows of the exact Schur complement instead of those of . Here is used only for performance comparison, and we do not recommend it in practice.
We also propose two additional approximate Schur complements. The first one, denoted , is a combination of and : the rows corresponding to are simply the sum of the rows of and . In this way, we expect to combine the advantages of and . The second one is only valid for the aligned case with the discretization given in A.1, where we aim to construct a symmetric positive definite approximate Schur complement. To this end, we impose a Robin boundary condition as follows
The corresponding discretization is
Let us denote this approximate Schur complement by .
Now let us give a short summary. We first constructed three approximate Schur complements from different points of view. We also use the expensive for performance comparison. Finally, we give two additional ones, and , for improved performance.
Remark 2. In the appendix, we prove that for the aligned case the exact Schur complement is precisely . Our numerical tests also seem to indicate this, because the GMRES solver converges in only 2 steps for both the aligned and nonaligned cases, for any grid size and anisotropy parameter. This fact tells us that if we can find a good approximation of the boundary rows of the exact Schur complement, then we can find a good approximate Schur complement.
4. Numerical results
This section is devoted to showing the performance of the block preconditioners proposed in Section 3. We note that, to apply as a preconditioner, at each GMRES iteration the following block-triangular linear system must be solved
which is equivalent to solving two linear systems successively:
(15)  
(16) 
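The two-step application of the preconditioner can be wrapped in a few lines. In this sketch `solve_A11` and `solve_S` stand for whatever factorizations of the (1,1) block and of the approximate Schur complement are available; the dense stand-in blocks are illustrative assumptions, not the MMAP matrices.

```python
import numpy as np

def apply_block_lower_inv(solve_A11, solve_S, A21, r1, r2):
    """Apply P^{-1} with P = [[A11, 0], [A21, S_hat]]: first solve the
    (1,1) system (eq. (15)), then the Schur system with the updated
    right-hand side (eq. (16))."""
    y1 = solve_A11(r1)              # (15): A11 y1 = r1
    y2 = solve_S(r2 - A21 @ y1)     # (16): S_hat y2 = r2 - A21 y1
    return y1, y2

# Dense usage sketch with random stand-in blocks.
rng = np.random.default_rng(0)
n = 10
A11 = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
S_hat = 3.0 * np.eye(n)
A21 = 0.1 * rng.standard_normal((n, n))
r1, r2 = rng.standard_normal(n), rng.standard_normal(n)
y1, y2 = apply_block_lower_inv(lambda r: np.linalg.solve(A11, r),
                               lambda r: np.linalg.solve(S_hat, r),
                               A21, r1, r2)
```

In practice the two inner solves reuse factorizations computed once before the GMRES loop, so each preconditioner application costs only two triangular back-substitutions plus one sparse matrix-vector product.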
4.1. Setup
In this section, we consider a fixed magnetic field [5, 6]
(17) 
Thus we have
(18) 
The manufactured solution is given by
(19) 
Below, the following parameters are used: , or , . The magnetic field lines defined in (17) are illustrated in Figure 1, where corresponds to the aligned case and to the nonaligned case. In the sequel, unless stated otherwise, the generalized minimal residual (GMRES) method [15] with no restart is used as the iterative solver for the linear system (11). The relative tolerance is set to 1.0e-6, and the maximum number of iterations is 100. Moreover, all resolutions of the MMAP scheme have reached the desired precision, so we do not show error analysis in this section.
4.2. Aligned case
In this part, we consider the aligned case, whose discretization is described in Appendix A.1.
Let us first show graphically the eigenvalues of the different matrices, reported in Figure 2. We observe that the eigenvalues of the unpreconditioned matrix are widely spread along both the real and imaginary axes, and this phenomenon holds for all values of . On the contrary, for the preconditioned matrix , we expect the eigenvalues to lie close to the positive real axis and to be as clustered as possible. Ideally, we hope these eigenvalues tend to 1. For the case with , we see that the extent of the eigenvalues is proportional to . This is due to the fact that there are no cross derivatives in the aligned case, so that the discretizations of and can be performed separately, i.e. is discretized along the axis and along the axis. The eigenvalues of
are real and their maximum range can be estimated as
[19]. For the case with , the eigenvalues already become more clustered than in the case with , especially for small , e.g. . However, for , the eigenvalues are not tightly clustered around the positive real axis, and this phenomenon is more pronounced on refined meshes. This is because when is large, the solutions of the original system are close to the solutions of the MMAP scheme; in that case is a very good approximation of . However, this is not the case for very small .
For the case with , we see that the eigenvalues are tightly clustered around the positive real axis for all values of . More precisely, for , the eigenvalues for the cases with and are very similar, and they are better than those of the case with . However, for small , the eigenvalues for the cases with and become more similar, and they are better than those of the case with . As a result, the case with seems to be the best in the comparison shown in Figure 2. Notice that in Figure 2 the eigenvalues of the case with are not shown, because is nothing but the exact Schur complement (see the details in Appendix B), so the eigenvalues in this case are all equal to 1.
Now let us solve the linear system (11) with the GMRES solver. At each iteration we need to solve the linear systems (15)-(16). This resolution is divided into two steps: first, factorize the matrices and ; then, solve (15)-(16) using the factorizations obtained in the first step. Due to the finite difference discretization of Appendix A.1, the matrices and are symmetric positive definite (SPD), so we factorize them by a Cholesky decomposition. For the other , we simply apply an LU decomposition.
The iteration numbers of the GMRES method are reported in Table 1, where we compare different mesh sizes and anisotropy strengths. Overall, the results are consistent with the previous eigenvalue analysis. For the case with , at most 3 iterations are needed for the convergence of the GMRES method when . However, when vanishes, the GMRES method no longer converges. For the case with , the preconditioner works well for small , but the convergence is quite slow for large . The case with combines the advantages of the cases with and : we observe that the iteration count does not increase when refining the mesh and decreases as vanishes. For the case with , just two iterations are needed, since is the exact Schur complement. For the case with , we obtain exactly the same table as for the case with . Finally, for the case with , the results are very similar to those for or , except for and , where the GMRES solver converges in 6 iterations.
(a) Case with
1  

3  3  3  3  1  
3  3  3  3  2  
3  3  3  3  2  
3  3  3  3  2 
(b) Case with
1  

14  6  5  2  2  2  
18  7  5  2  2  2  
33  11  4  2  2  2  
67  11  4  2  2  2 
(c) Case with
1  

4  4  4  3  2  2  
4  4  4  3  2  2  
4  4  4  3  2  2  
4  4  4  4  2  2 
(d) Case with
1  

2  2  2  2  2  2  
2  2  2  2  2  2  
2  2  2  2  2  2  
2  2  2  2  2  2 
(e) Case with
1  

4  4  4  3  2  2  
4  4  4  3  2  2  
4  4  4  3  2  2  
4  4  4  4  2  2 
(f) Case with
1  

4  4  4  3  2  2  
4  4  4  3  2  2  
4  4  4  2  2  2  
4  4  4  6  2  2 
The computational time for solving the preconditioned linear system is reported in Figure 3. Firstly, we see that the matrix factorization takes most of the computational time; especially for very fine meshes, the factorization time far exceeds that of the GMRES resolution. Secondly, the Cholesky decomposition takes less than half of the computational time of the LU decomposition. This agrees with the well-known result that the complexity of the Cholesky decomposition is ( being the size of the matrix), while that of the LU decomposition is . Therefore, we conclude that the resolution with the approximate Schur complement is more efficient than with the other approximate Schur complements for the aligned case.
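The factor-once-solve-many pattern, and the Cholesky-versus-LU distinction, look as follows with SciPy's dense factorizations (the SPD matrix is a random stand-in for the discretized diffusion block):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve, lu_factor, lu_solve

rng = np.random.default_rng(1)
n = 50
G = rng.standard_normal((n, n))
A_spd = G @ G.T + n * np.eye(n)   # SPD stand-in for an SPD block
b = rng.standard_normal(n)

# Factor once, outside the GMRES loop, then reuse at every iteration.
chol = cho_factor(A_spd)          # ~ n^3/3 flops, SPD matrices only
lu = lu_factor(A_spd)             # ~ 2 n^3/3 flops, general matrices
x1 = cho_solve(chol, b)
x2 = lu_solve(lu, b)
```

For the sparse matrices of the actual discretization one would use a sparse Cholesky or `scipy.sparse.linalg.splu` instead, but the cost asymmetry between the symmetric and general factorizations is the same.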
Note that the finite difference discretization given in Appendix A.2 can also handle the aligned case simply by setting . We find that for the case with , we obtain almost the same results as presented in this subsection. However, since in this case is no longer symmetric, we cannot benefit from the advantages of SPD matrices.
4.3. Nonaligned case
For the nonaligned case, we use the finite difference discretization given in Appendix A.2. Again, let us start with the eigenvalue distributions of the different matrices, reported in Figure 4. We obtain very similar graphics for the matrices and (in the cases with ) as in the aligned case. One exception is the case with , where we find that the range of the eigenvalues on the real axis is not proportional to ; instead it becomes constant for small , e.g. or . A very similar phenomenon can also be observed in the condition number of , reported in Figure 5. From Figure 5(a), we see that the condition number of tends to a constant when for the nonaligned case, while it is proportional to for the aligned case. Moreover, from Figure 5(b), when varying , we see that for large the condition number is constant and for small it is proportional to . From Figure 5(c), when varying the mesh number , we see that for large the condition number is proportional to and for small it is proportional to . Therefore, we conjecture the following estimate for the condition number of
(20) 
As a result, for small , the condition number of is proportional to for fixed , so the condition number is better than in the aligned case. This explains why the range of the real parts of the eigenvalues of does not explode for the nonaligned case when vanishes, and the good performance even for small . Finally, from the numerical computations, we find that is almost equal to the exact Schur complement , so the corresponding eigenvalues are very tightly clustered around 1. This indicates that the difference between the exact Schur complement and lies only in the rows corresponding to the boundary ; therefore, the more accurately we approximate these boundary rows, the better performance of the block preconditioner we can expect.
The numbers of GMRES iterations are summarized in Table 2. Remarkably, for the case with , the GMRES iterations hardly increase when refining the mesh and increase only slightly as vanishes. It seems that the case with performs best compared to the cases with . This behavior is very different from the aligned case, especially when , where the GMRES method does not converge. This "robustness" seems to confirm our conjecture (20) from a different point of view. For the case with , we see again that the GMRES method does not work well for large . For , the GMRES iterations are almost independent of the choice of . More importantly, the method does not deteriorate as vanishes. For the case with , the GMRES method converges in two iterations, which indicates that is almost equal to the exact Schur complement . Finally, the case with combines the advantages of the cases with and . Thus we conclude that the approximate Schur complements and are the preferred ones for the nonaligned case.
(a) Case with
1  

5  8  14  13  13  13  
5  7  11  15  15  15  
4  6  9  17  17  17  
4  5  7  19  18  19 
(b) Case with
1  

33  22  18  12  12  12  
50  33  25  15  15  15  
73  48  34  17  17  17  
66  46  19  20  20 
(c) Case with
1  

11  12  14  13  12  12  
13  15  17  15  15  15  
15  18  21  20  17  17  
17  21  25  25  20  20 
(d) Case with
1  

2  2  2  2  2  2  
2  2  2  2  2  2  
2  2  2  2  2  2  
2  2  2  2  2  2 
(e) Case with
1  

7  11  13  12  12  12  
6  12  16  15  14  14  
6  12  18  19  17  17  
6  12  21  25  19  19 
Figure 6 presents the relative residual history of the GMRES iterations. We see a clear convergence tendency of the GMRES method with the different approximate Schur complements. For the current case with , we find that performs best. However, from conjecture (20), the condition number of deteriorates as vanishes, so the convergence of the GMRES method may also be affected. For instance, setting , the GMRES method does not converge when . The case with performs less well than the one with , but better than the case with , especially for large .
The computational time for solving the preconditioned linear system is reported in Figure 7. Again, we first factorize the matrices and , then apply the GMRES method. The first observation is that, very differently from the aligned case, for the nonaligned case most of the computational time comes from the GMRES solver. For large , the cases with and are more efficient than the one with . But for small , the efficiency of all choices is almost the same. Therefore we recommend using the preconditioner with the approximate Schur complement .
4.4. Without inflow boundary condition
In Section 2, the introduced MMAP scheme provides a unique pair , where the key point is to impose the inflow condition on . However, we notice that the auxiliary variable is not directly involved in the computation of ; instead, the parallel derivative is used in the MMAP scheme. A similar situation appears in the solution of the saddle point problem arising from the incompressible Navier-Stokes equations (see [7] and the references therein), where the matrix of the NS equations is singular because the pressure variable is not unique even though its gradient is.
In a preconditioned GMRES iteration, one mainly needs to apply a matrix-vector multiplication and an inverse of the preconditioner. Thus, in principle, the GMRES method can still work even if the matrix is singular, provided the preconditioner is applicable. In solving the NS equations [7], GMRES can obtain the right solutions for the velocity. In this paper, is just an auxiliary variable, so the inflow condition can be dropped if the GMRES iteration generates the correct solution for . In other words, the inflow condition is not necessary to determine a unique when using GMRES. When using direct solvers, however, we cannot drop the inflow condition.
In this subsection, we substitute the inflow condition in (9) by and solve
(21) 
Now the MMAP scheme becomes (21) and (10), and its corresponding discretization matrix (denoted again by ) is no longer invertible, because the discretization matrix of the operator in (21) (denoted by ) is not invertible. We write the new linear system as
(22) 
Remember that we need the preconditioner to be applicable. To make the subblocks of the preconditioner invertible, we use to compute the approximate Schur complement. Namely, we use as an approximation of , so that the preconditioners discussed in Section 3 remain the same. In short, we use the inflow conditions in the preconditioner but not in the computation of the coefficient matrix .
Here, we only consider the approximate Schur complement , which was shown to be the most robust choice in the previous subsections. The iteration numbers of the GMRES method are reported in Table 3. For the aligned case, Table 3(a) is identical to Table 1(e), while for the nonaligned case, Table 3(b) is also similar to Table 2(e). This shows that even though the matrix of the linear system (22) is not invertible, the proposed preconditioned GMRES algorithm still works well.
(a) Aligned case
1  

4  4  4  3  2  2  
4  4  4  3  2  2  
4  4  4  3  2  2  
4  4  4  4  2  2 
(b) Nonaligned case
1  

6  10  13  12  12  12  
6  12  16  15  14  14  
6  12  18  19  17  17  
6  12  20  25  19  19 
5. Conclusion and perspectives
In this paper, we introduced block preconditioning methods for the MMAP method. We proposed a series of block preconditioners based on approximate Schur complement techniques. We performed a number of numerical tests for both the aligned and nonaligned cases. The numerical results show that, on the one hand, the proposed block preconditioners are robust with respect to grid refinement; on the other hand, they work well for all anisotropy strengths.
The MMAP method works for cases where the magnetic field lines cross the boundaries of the computational domain. For the case of closed magnetic field lines, other asymptotic-preserving methods still need to be explored. For instance, in [11], an asymptotic-preserving scheme based on a first-order system (APFOS) was introduced. Developing a suitable block preconditioning method for the APFOS scheme is therefore a possible extension of the present work. Moreover, the implementation of a finite element method and better approximations of the boundary rows of the exact Schur complement are under consideration.
Acknowledgements
This work has been supported by Heilongjiang Provincial Natural Science Foundation of China (LH2019A013) and the Fundamental Research Funds for the Central Universities.
Chang Yang acknowledges support by a public grant from the "Laboratoire d'Excellence Centre International de Mathématiques et d'Informatique" (LabEx CIMI) overseen by the French National Research Agency (ANR) as part of the "Investissements d'Avenir" program (reference ANR11LABX0040) in the frame of the PROMETEUS project (PRospect of nOvel nuMerical modEls for elecTric propulsion and low tEmperatUre plaSmas) and from the FrFCM (Fédération de recherche pour la Fusion par Confinement Magnétique) in the frame of the NEMESIA project (Numerical mEthods for Macroscopic models of magnEtized plaSmas and related anIsotropic equAtions).
Lingxiao Li has also been supported by National Natural Science Foundation of China under Grant 11901042.
Appendix A Finite difference discretizations
In this part, let us give the finite difference discretizations for the MMAP scheme introduced in Section 2.
A.1. Aligned case
In the aligned case, we consider the magnetic field aligned with the axis, i.e. . For convenience in constructing the matrices, we write the MMAP scheme of Section 2 as follows
(23) 
and
(24) 
The mesh is defined by
where and . Furthermore, we take and , so that . Then a second-order finite difference discretization of (23) and (24) can be directly determined as
(25) 
and
(26) 
where and are the numerical approximations of and respectively, and . Now, to construct the linear system for (25)-(26), we define a lexicographic ordering, that is, we count first in the direction and then in the direction. Denoting by the index of the linear system, we define a function 'lexico' converting the pair to as
(27) 
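A minimal sketch of such a lexicographic map; the 0-based indexing and the exact offset convention are assumptions of this sketch, while the paper's formula (27) fixes its own convention:

```python
def lexico(i, j, nx):
    """Map the 0-based grid pair (i, j) to a 0-based linear index,
    counting first along the x direction; nx is the number of grid
    points in x."""
    return i + j * nx

def inv_lexico(k, nx):
    """Inverse map: recover (i, j) from the linear index k."""
    return k % nx, k // nx
```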
So the linear system corresponding to (25)(26) is given by
Finally, in order to obtain more symmetric matrix forms, we perform some algebraic operations on the first equation of (25), which yield
Thanks to these algebraic operations, the matrices and become symmetric.
A.2. Nonaligned case
Now let us focus on the discretization of the general MMAP scheme (9)-(10). Again, we consider a second-order finite difference method.
Firstly, the mesh is defined differently from that of the previous subsection, since we have to discretize the cross derivatives at all interior grid points. For this, the mesh is defined by
where and . Again, we take , so that .
Secondly, we notice that
its corresponding discretization at the interior grid points is
Using the usual central discretization, can be approximated as
Then, let us denote by
thus we can recast as
where