1 Introduction
Time-dependent problems involving the fractional Laplacian operator are of particular interest in fractional calculus p01 ; w02 ; m02 ; c08 . The fractional Laplacian, which is the infinitesimal generator of a standard isotropic stable Lévy flight s01 , has been used in a broad range of applications including quantum mechanics l03 , finance c06 , elasticity d01 , porous media p02 and stochastic dynamics g04 . Further details on this topic and equivalent characterizations can be found in the review article l01 . In this paper we are concerned with applying non-intrusive parallel-in-time multigrid integration to a class of two-dimensional unsteady fractional Laplacian problems of the form
(1.1) $\partial_t u + (-\Delta)^s u = f$ in $\Omega \times (0,T]$, $u = 0$ on $\partial\Omega \times (0,T]$, $u(\cdot,0) = u_0$ in $\Omega$,
where $T>0$, the fractional order $s \in (0,1)$, and the solution domain $\Omega \subset \mathbb{R}^2$ is an open and bounded subset with Lipschitz continuous boundary $\partial\Omega$.
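For orientation, the following minimal sketch (not part of the paper's method; the discretization, parameters, and function name are illustrative assumptions) integrates a problem of this form under the spectral definition of the fractional Laplacian on the unit square: the sine eigenbasis diagonalizes $(-\Delta)^s$, so backward Euler acts independently on each coefficient.

```python
import numpy as np

def fractional_heat_coeffs(c0, fhat, s, tau, n_steps):
    """Backward Euler for c' + lambda^s c = fhat in the sine eigenbasis of
    the Dirichlet Laplacian on the unit square: lambda_jk = pi^2 (j^2 + k^2)."""
    J = c0.shape[0]
    j = np.arange(1, J + 1)
    lam = np.pi**2 * (j[:, None] ** 2 + j[None, :] ** 2)  # eigenvalues, J x J
    c = c0.copy()
    for _ in range(n_steps):
        # mode by mode: (1 + tau*lambda^s) c_new = c_old + tau*fhat
        c = (c + tau * fhat) / (1.0 + tau * lam**s)
    return c
```

With $s = 1$ this reduces to the ordinary heat equation; smaller $s$ weakens the relative damping of high-frequency modes.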
One of the main challenges of problem (1.1) lies in the nonlocality of the fractional Laplacian. Numerous numerical approximations have been proposed in recent years to treat the fractional Laplacian on bounded domains, such as finite difference h01 ; s02 ; w08 ; d03 ; h04 ; f03 ; w05 ; w07 ; l06 ; w09 , finite element (FE) k01 ; w10 ; n01 ; g05 ; c01 ; a02 ; a05 ; j01 , finite volume y02 ; c07 ; g08 , discontinuous Galerkin x02 ; l05 ; z05 ; a04 and spectral (element) b01 ; m04 ; d04 ; z01 ; h02 ; w06 ; z02 ; w11 ; t02 ; x03 methods. Utilizing a technique similar to that for the fractional Laplacian in c05 , we can employ the Caffarelli–Silvestre extension, which reformulates the original nonlocal problem as a local one posed on the semi-infinite cylinder $\mathcal{C} = \Omega \times (0,\infty)$ with a dynamic boundary condition:
(1.2) $\mathrm{div}\,(y^{\alpha} \nabla \mathscr{U}) = 0$ in $\mathcal{C} \times (0,T]$, $\mathscr{U} = 0$ on $\partial_L \mathcal{C} \times (0,T]$, $d_s\, \partial_t \mathscr{U} + \partial^{\alpha}_{\nu} \mathscr{U} = d_s f$ on $(\Omega \times \{0\}) \times (0,T]$, $\mathscr{U}(\cdot,0) = u_0$ on $\Omega \times \{0\}$,
where the parameter $\alpha = 1-2s \in (-1,1)$, the lateral boundary $\partial_L \mathcal{C} = \partial\Omega \times [0,\infty)$, the positive normalization constant $d_s = 2^{\alpha}\,\Gamma(1-s)/\Gamma(s)$ and the conormal exterior derivative $\partial^{\alpha}_{\nu} \mathscr{U} = -\lim_{y \to 0^{+}} y^{\alpha}\, \partial_y \mathscr{U}$ are respectively defined.
A widely used approach to solving the nonlocal problem (1.1) is therefore based on the solution of the local problem (1.2), at the price of incorporating one more spatial dimension n01 ; c01 . Specifically, exploiting the intermediate function $\mathscr{U}$, we immediately obtain the desired solution through the relation $u(x,t) = \mathscr{U}(x,0,t)$ for all $x \in \Omega$ and $t \in (0,T]$. It is worthwhile to point out that an anisotropic spatial mesh must be used to capture the singular behavior of $\mathscr{U}$ as $y \to 0^{+}$ d05 , and Nochetto et al. have proved the existence and uniqueness of solutions of problems (1.1) and (1.2) (see (n02, Theorem 6)) in a purely theoretical manner.
Moving from two-dimensional to three-dimensional problems can be difficult for solvers. Chen et al. developed uniformly convergent multilevel methods for steady fractional Laplacian problems c01 , an important constructive study in the literature. However, a more severe challenge arises from the facts that (i) higher spatial resolution demands a comparable or greater refinement in the temporal direction, and (ii) future gains in computing speed must rely on the increased concurrency provided not by faster but by more processors. The practical consequence is that solution algorithms limited to spatial parallelism for problems with evolutionary behavior entail long overall computation times. Therefore, solution algorithms achieving parallelism in time are in especially high demand. Although time is sequential in essence, parallel-in-time methods are not new and date back to n03 ; m05 . A recent review of the literature can be found in g07 . At present, parareal l02 and multigrid-reduction-in-time (MGRIT) f02 are two of the most popular choices. Wu and Zhou analyzed convergence properties of the parareal algorithm for space-fractional diffusion problems w01 ; w04 and time-fractional equations with local time-integrators w03 under constant time steps. Note that Gander and Vandewalle interpreted parareal as a two-level multigrid full approximation scheme g02 , whereas Falgout et al. identified parareal with an optimal two-level MGRIT algorithm and further pointed out that the obvious generalization of parareal to multiple levels does not produce an optimal algorithm f02 . In addition, the concurrency of parareal is still limited owing to its relatively expensive sequential coarse-grid solve.
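To make the parareal iteration concrete, here is a minimal sketch for the scalar model problem $u' = \lambda u$ (the propagators, step counts, and function name are illustrative assumptions, not taken from the cited works): the coarse propagator G takes one backward Euler step per interval, the fine propagator F takes several, and the correction $U_{n+1} \leftarrow G(U_n) + F(U_n^{\mathrm{old}}) - G(U_n^{\mathrm{old}})$ is swept sequentially while the fine solves can run in parallel.

```python
import numpy as np

def parareal(lam, u0, T, N, n_fine, K):
    """Parareal for the scalar model ODE u' = lam*u, u(0) = u0.

    N       coarse intervals, one backward-Euler step each (propagator G)
    n_fine  backward-Euler substeps per interval (propagator F)
    K       parareal iterations
    """
    dT = T / N
    G = lambda u: u / (1.0 - lam * dT)                     # one coarse step
    F = lambda u: u / (1.0 - lam * dT / n_fine) ** n_fine  # n_fine fine steps

    U = np.empty(N + 1)
    U[0] = u0
    for n in range(N):                 # initial guess: coarse sweep
        U[n + 1] = G(U[n])
    for _ in range(K):
        Fu = np.array([F(U[n]) for n in range(N)])      # parallel fine sweeps
        Gu_old = np.array([G(U[n]) for n in range(N)])
        for n in range(N):             # sequential coarse correction
            U[n + 1] = G(U[n]) + Fu[n] - Gu_old[n]
    return U
```

After $k$ iterations the first $k$ values are exact, so $K = N$ reproduces the serial fine solution to round-off.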
Conversely, MGRIT, a truly multilevel algorithm implemented in the open-source package XBraid x01 , has proven to be rather effective and has been analyzed in two-level d02 and multilevel h03 settings for integer-order parabolic and hyperbolic model problems using L-stable Runge–Kutta schemes, with the limitation that the analyses only consider identical fine time-grid propagators. Extensions of the two-level convergence analysis to nonuniform temporal meshes are provided in y01 ; s03 . However, we should point out that only MGRIT using F-relaxation was abstractly discussed in s03 , and the unitary diagonalization assumption must be enforced in y01 . Moreover, to the best of our knowledge, there are no existing studies applying MGRIT to the fully implicit FE discretization of problem (1.1).
The remainder of this work proceeds as follows. Section 2 introduces the functional framework and outlines the fully implicit discrete FE scheme. In Section 3, we present the MGRIT V(1,0)-cycle algorithm using FCF-relaxation, followed by a generalized two-level convergence analysis without the assumption of unitary diagonalization. Section 4 conducts numerical experiments to illustrate the optimal order of convergence and the error-norm decay rate of the fully discrete FE scheme, the robustness of Chen's multigrid with line smoother with respect to the fractional order and the spatial mesh size, as well as the sharpness of the derived convergence upper bound for the two-level MGRIT algorithm. Finally, in Section 5, we draw some conclusions and discuss future work.
2 Mathematical preliminaries
This section is devoted to introducing some mathematical preliminaries, including the variational formulation of the local problem (1.2) and its fully implicit discrete FE scheme.
2.1 The variational formulation
Let be open, and denote the weighted Lebesgue and Sobolev spaces g06
with norms
where is the distributional gradient of the measurable function .
Since $\alpha \in (-1,1)$, the weight $y^{\alpha}$ belongs to the Muckenhoupt class $A_2$ m03 , which implies that the corresponding weighted Sobolev space is Hilbert. In order to settle problem (1.2), we define
Thus, given a forcing term and an initial datum , the function solves problem (1.1) if and only if its harmonic extension solves problem (1.2), where
and
Recalling that the trace operator onto satisfies (see (n01, Proposition 2.5))
we get the variational formulation of problem (1.2): find subject to and
(2.1) 
where is the duality pairing between and .
2.2 The fully implicit discrete FE scheme
Note that problem (1.2) cannot be approximated unless the semi-infinite cylinder is truncated to a bounded one of suitable length, with an additional homogeneous Dirichlet condition imposed on the artificial boundary; the theoretical foundation of this truncation is the exponential decay of the solution in the extended direction with respect to the truncation length, see (n02, Proposition 22).
The first step toward the discretization is to obtain a semidiscrete approximation of the truncated counterpart of problem (2.1). To do so, we introduce a (possibly nonuniform) temporal mesh with points and time step sizes for . The backward Euler discretization is applied: seek subject to and
for all , where . Obviously, the sequence
consists of piecewise constant approximations to the solution of problem (1.1).
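In matrix terms, each backward Euler step amounts to one linear solve per time level; the sketch below (with generic SPD matrices standing in for the mass and stiffness matrices of the actual weighted FE semidiscretization, and a constant forcing vector for simplicity) illustrates the marching structure.

```python
import numpy as np

def backward_euler(M, A, u0, f, tau, n_steps):
    """March (M + tau*A) u^{k+1} = M u^k + tau*f with a fixed step tau."""
    u = u0.copy()
    B = M + tau * A                 # SPD whenever M and A are SPD
    for _ in range(n_steps):
        u = np.linalg.solve(B, M @ u + tau * f)
    return u
```

Because the scheme is implicit, the iteration is unconditionally stable: with $f = 0$ the discrete solution decays for any step size $\tau > 0$.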
Next, we turn to the spatial discretization. Given a quasi-uniform triangulation of and a graded partition of the extended interval by the transition points
(2.2) $y_k = \left(\tfrac{k}{M}\right)^{\gamma} \mathscr{Y}, \quad k = 0, 1, \dots, M, \quad \gamma > \tfrac{3}{1-\alpha} > 1,$
as well as segments for , we construct the mesh over
as the tensor product of
and , and define the discrete space and the weighted elliptic projector such that
holds for any function , where denotes the space of all polynomials of degree at most one.
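Assuming the standard grading $y_k = (k/M)^{\gamma}\,\mathscr{Y}$ toward $y = 0$ used for such extension problems (the exponent and truncation length below are illustrative), the transition points can be generated as follows.

```python
import numpy as np

def graded_mesh(Y, M, gamma):
    """Transition points y_k = (k/M)**gamma * Y, clustered near y = 0
    to resolve the singular behavior of the extension as y -> 0+."""
    k = np.arange(M + 1)
    return (k / M) ** gamma * Y
```

Larger grading exponents concentrate more mesh points near the bottom of the cylinder, where the extended solution is singular.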
Finally, we arrive at the fully discrete FE scheme: with the initial guess , compute satisfying the constraint
(2.3) 
As before, approximate solutions of problem (1.1) are given by
In order to get the matrix formulation of (2.3), it is necessary to introduce
and
where is the number of interior vertices in . It is straightforward to verify that and can be respectively written as
Choosing the test function in (2.3), we deduce
by Green’s first identity and the fact that , and one further expansion via separation of variables
where the entries are
Thus, the system of linear equations obtained from (2.3) is of the form
(2.4) 
where

matrices , , and ,

with for to approximate the initial datum ,

is the auxiliary vector produced by the other points in the extended direction,

with .
It is easy to see that the coefficient matrix of (2.4) is symmetric positive definite (SPD). Therefore, the multigrid method with line smoother proposed in c01 is a very suitable solver and is employed in our implementation.
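Since the matrices of (2.4) are not reproduced here, the sketch below substitutes a generic SPD tensor-product system (all matrices are illustrative stand-ins) to show the two facts used above: the coefficient matrix can be verified SPD, and any SPD solver then applies; a textbook conjugate gradient method stands in for the multigrid line smoother of c01 .

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    """Textbook CG for an SPD matrix A (stand-in for the solver of (2.4))."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Illustrative SPD coefficient matrix with tensor-product (Kronecker) structure
def lap1d(n):
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

n = 15
S = np.kron(lap1d(n), np.eye(n)) + np.kron(np.eye(n), lap1d(n))
assert np.linalg.eigvalsh(S).min() > 0   # SPD check (small n only)
```

Usage: `x = conjugate_gradient(S, np.ones(n * n))`, then check the residual.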
3 MGRIT formulation and a generalized twolevel convergence upper bound
The main focus of this section is the general MGRIT algorithm using FCF-relaxation, followed by its two-level convergence analysis for the general case in which the time-grid propagators need not be the same, nor simultaneously diagonalizable by a unitary matrix.
3.1 The general MGRIT algorithm using FCFrelaxation
Consider the following block unit lower bidiagonal system
(3.1) $A\mathbf{u} \equiv \begin{pmatrix} I & & & \\ -\Phi_1 & I & & \\ & \ddots & \ddots & \\ & & -\Phi_N & I \end{pmatrix} \begin{pmatrix} \mathbf{u}_0 \\ \mathbf{u}_1 \\ \vdots \\ \mathbf{u}_N \end{pmatrix} = \begin{pmatrix} \mathbf{g}_0 \\ \mathbf{g}_1 \\ \vdots \\ \mathbf{g}_N \end{pmatrix} \equiv \mathbf{g},$
where $I$ is the identity matrix, the vectors $\mathbf{u}_j$ and $\mathbf{g}_j$ collect the unknowns and data at the time levels, and the time-grid propagators $\Phi_j$ are not assumed to take any specific forms. Various components are required in order to solve the global space-time problem (3.1) through the MGRIT algorithm based on the multigrid reduction technique r01 . Define the coarse temporal mesh with nodes and time step sizes for some coarsening factor . Then all coarse nodes are C-points, representing the variables on the coarse level, and the remaining nodes are F-points, grouped into bunches of adjacent points. The particular case when is illustrated in Fig. 1. It should be mentioned that the coarse temporal mesh is, in general, still nonuniform. Assume for simplicity of presentation that F-points are ordered first. MGRIT utilizes injection as the restriction operator, i.e., only the residual at C-points is computed. FCF-relaxation (i.e., an initial F-relaxation followed by a C-relaxation and then a second F-relaxation) is often the most preferable choice to produce optimal and scalable multilevel iterations f02 , where F-relaxation refers to updating the variables at F-points in turn, independently across F-intervals, and C-relaxation stands for performing one time step to propagate the solution to each C-point. It is apparent that these two processes are both highly parallel. The ideal interpolation is used to provide the coarse-grid correction, which is implemented at the cost of updating C-variables by the coarse approximations, immediately followed by an F-relaxation with the time-grid propagators. It is worth mentioning that the resulting Petrov–Galerkin triple matrix product is exactly the Schur complement
(3.2)
where with , a positive integer and a nonnegative integer . However, this coarse-grid problem would be as expensive as the original fine-grid problem. MGRIT therefore approximates the coarse-grid operator with
(3.3)
where is an approximate coarse-grid time-stepping operator corresponding to . The most natural choice of is to rediscretize the time-dependent linear continuous problem on the coarse temporal mesh. The final stage is the recursive application of the above processes, with possibly different coarsening factors at different time levels, to construct a multilevel hierarchy under certain stopping criteria, e.g., by setting the maximum number of time levels and the minimum coarsest temporal grid size. Below is a sketch of the resulting MGRIT V(1,0)-cycle algorithm using FCF-relaxation in recursive fashion, where is the number of time levels determined by the user-supplied stopping procedure, , , and the matrices , and () respectively correspond to the th coarse-scale time rediscretization and the above-mentioned restriction and interpolation.
Algorithm 1
MGRIT V(1,0)-cycle algorithm using FCF-relaxation:
 Step 1

Apply FCF-relaxation to with as the initial guess.
 Step 2

Inject the residual to the coarse time level .
 Step 3

If , then solve ; else compute .
 Step 4

Perform the coarse-grid correction using the ideal interpolation .
Remark 1
(Reduction of the computational overhead) Here, we point out that Step 4 of Algorithm 1 is executed by correcting only the C-variables, and updating the F-variables only when the Euclidean norm of the residual is small enough. The underlying reason is that corrections at F-points are equivalent to an F-relaxation, which will be performed anyway in the subsequent MGRIT iteration.
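The two-level case of Algorithm 1 can be sketched for the scalar model system $u_0 = g_0$, $u_i = \phi\, u_{i-1} + g_i$ (an illustrative toy with scalar propagators; the names, the recursive multilevel structure, and the spatial systems of the paper are omitted):

```python
import numpy as np

def mgrit_two_level(phi, psi, g, m, n_iter):
    """Two-level MGRIT with FCF-relaxation for the scalar lower bidiagonal
    system u_0 = g_0, u_i = phi*u_{i-1} + g_i, with coarsening factor m."""
    N = len(g) - 1                      # number of fine time steps (m | N)
    u = np.zeros(N + 1)
    u[0] = g[0]                         # the initial value is known exactly

    def f_relax(u):                     # update F-points left to right
        for i in range(1, N + 1):
            if i % m != 0:
                u[i] = phi * u[i - 1] + g[i]

    def c_relax(u):                     # update C-points from left neighbours
        for i in range(m, N + 1, m):
            u[i] = phi * u[i - 1] + g[i]

    for _ in range(n_iter):
        f_relax(u); c_relax(u); f_relax(u)        # FCF-relaxation
        cpts = range(m, N + 1, m)
        # restrict the residual to C-points by injection
        r = [g[i] + phi * u[i - 1] - u[i] for i in cpts]
        # sequential coarse solve e_j = psi*e_{j-1} + r_j, then correct
        e = 0.0
        for j, i in enumerate(cpts):
            e = psi * e + r[j]
            u[i] += e                   # coarse-grid correction at C-point
        f_relax(u)                      # final F-relaxation (ideal interp.)
    return u
```

When the coarse propagator is the ideal one, $\psi = \phi^m$, the coarse problem coincides with the Schur complement (3.2) and the two-level method is a direct solver, converging in a single iteration; with a rediscretized $\psi$ as in (3.3) it converges iteratively.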
3.2 A generalized twolevel convergence analysis
Setting , Algorithm 1 reduces to the well-known two-level scheme, whose residual propagation operator for a single iteration, as in the derivations of y01 ; s03 , can be expressed in an F/C-partitioning of as:
(3.4) 
where the matrix used in forming the “ideal restriction” operator
Below, we wish to find an upper bound for a certain norm of the residual propagation operator .
 S1

Assume that the time-stepping operators and are both strongly stable l04 (i.e., the discrete norms and ) and simultaneously diagonalizable by the matrix . Note that the assumption of simultaneous diagonalization holds whenever the same spatial discretization is exploited on the fine and coarse temporal meshes. Denote by and the sets of all complex eigenvalues of and , respectively.
 S2
 S3

By exploiting the closed form
the other two matrix forms appearing in are obtained from the block lower bidiagonal structures (3.2)-(3.3):
and
As a result, the expression of can be rewritten as
(3.5) which can be reformulated as the following triple matrix product
Then, under the assumption that is invertible for , using the relation , and recalling the Moore–Penrose conditions that define the pseudoinverse of any matrix :
the unique pseudoinverse of can be established in a straightforward way
(3.6) by noticing the following two important results
It is easy to see that the use of
allows for a simpler technical path to analyze the maximum singular value of
through the minimum nonzero singular value of .  S4

Based on the simultaneous diagonalizability from Stage S1:
we have
where

consisting of diagonal blocks,

denotes the orthogonal permutation matrix which makes block diagonal,

and represent and , respectively, evaluated at () and ().
We need to determine the nonzero singular value . For this purpose, deleting the last two zero rows and zero columns of , which correspond to two zero eigenvalues, the remaining algebraic manipulation is to find the square root of the smallest nonzero eigenvalue of the complex Hermitian tridiagonal matrix
(3.7) with
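Numerically, this last step is a small Hermitian tridiagonal eigenvalue computation; the sketch below (the entries are placeholders, not those of (3.7), and the matrix is assumed positive semidefinite, as a Gram-type matrix is) returns the square root of the smallest nonzero eigenvalue.

```python
import numpy as np

def smallest_nonzero_sv(diag, off, tol=1e-12):
    """Square root of the smallest nonzero eigenvalue of the Hermitian
    tridiagonal matrix with main diagonal `diag` and off-diagonal `off`,
    assumed positive semidefinite."""
    n = len(diag)
    H = np.diag(np.asarray(diag, dtype=complex))
    for i in range(n - 1):
        H[i, i + 1] = off[i]
        H[i + 1, i] = np.conj(off[i])
    ev = np.linalg.eigvalsh(H)              # real eigenvalues, ascending
    nz = ev[np.abs(ev) > tol * max(1.0, np.abs(ev).max())]
    return np.sqrt(nz.min())
```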
