1 Introduction
Parabolic initial-boundary value problems are usually discretized by time-stepping schemes in combination with spatial finite element methods. These methods treat the time and space variables differently, see, e.g., [45]. In addition, the resulting approximation methods are sequential in time. In contrast to these approaches, space-time methods discretize time-dependent partial differential equations without separating the temporal and spatial directions. In particular, they are based on space-time variational formulations. There exist various space-time techniques for parabolic problems, which are based on variational formulations in Bochner-Sobolev spaces, see, e.g., [3, 4, 14, 20, 23, 28, 31, 35, 39, 43, 46], on discontinuous Galerkin methods, see, e.g., [19, 32, 33], or on discontinuous Petrov-Galerkin methods, see, e.g., [13], and the references therein. We refer the reader to [15] and [40] for comprehensive overviews of parallel-in-time and space-time methods, respectively. An alternative is the discretization of space-time variational formulations in anisotropic Sobolev spaces, see, e.g., [8, 24, 36, 41, 48]. These variational formulations allow for a complete analysis of inhomogeneous Dirichlet or Neumann conditions, and were used for the analysis of the resulting boundary integral operators, see [6, 9]. Hence, discretizations of variational formulations in anisotropic Sobolev spaces can be used for the interior problems of FEM-BEM couplings for transmission problems.

In this work, the approach in anisotropic Sobolev spaces is applied in combination with a novel Hilbert-type transformation operator, which has recently been introduced in [41, 48]. This transformation operator maps the ansatz space to the test space and leads to a symmetric and elliptic variational treatment of the first-order time derivative. The homogeneous Dirichlet problem for the non-stationary diffusion or heat equation
(1) $\partial_t u - \Delta_x u = f$ in $Q := \Omega \times (0,T)$, $u = 0$ on $\Sigma := \partial\Omega \times (0,T)$, $u = 0$ on $\Omega \times \{0\}$,
serves as a model problem for a parabolic initial-boundary value problem, where $\Omega \subset \mathbb{R}^d$, $d = 1, 2, 3$, is a bounded Lipschitz domain with boundary $\partial\Omega$, $T > 0$ is a given terminal time, and $f$ is a given right-hand side. With the help of the Hilbert-type transformation operator, a Galerkin finite element method is derived, which results in one global linear system
(2) 
When using a tensor-product approach, the system matrix can be represented as a sum of Kronecker products. The purpose of this paper is the development of efficient direct space-time solvers for the global linear system (2), exploiting the Kronecker structure of the system matrix. To this end, we apply the Bartels-Stewart method [5] and the Fast Diagonalization method [29] to solve (2), see also [16, 37]. For both methods, we derive complexity estimates of the resulting algorithms.
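To make the Kronecker structure concrete, the following sketch (with generic placeholder matrices `A_t`, `M_t`, `M_x`, `A_x`, not the paper's notation) shows how a matrix of the form $A_t \otimes M_x + M_t \otimes A_x$ can be applied to a vector without ever assembling the Kronecker products, using the standard identity $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^\top)$:

```python
import numpy as np

def spacetime_matvec(A_t, M_t, M_x, A_x, u):
    """Apply K = A_t (x) M_x + M_t (x) A_x to a vector u without
    forming the Kronecker products, via (A (x) B) vec(X) = vec(B X A^T)."""
    Nt, Nx = A_t.shape[0], M_x.shape[0]
    U = u.reshape((Nx, Nt), order="F")  # column-stacked matricization of u
    return (M_x @ U @ A_t.T + A_x @ U @ M_t.T).reshape(-1, order="F")
```

Storing only the four factors requires memory proportional to the temporal and spatial matrices alone, rather than to the assembled global matrix.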
The rest of this paper is organized as follows: In Section 2, we consider the space-time variational formulation in anisotropic Sobolev spaces and the Hilbert-type transformation operator with its main properties. In Section 3, we recall properties of the Kronecker product and of sparse direct solvers, which are needed for the new space-time solver. Section 4 is devoted to the construction of efficient space-time solvers. Numerical examples for a two-dimensional spatial domain are presented in Section 5. Finally, we draw some conclusions in Section 6.
2 Space-Time Method in Anisotropic Sobolev Spaces
In this section, we give the variational setting for the parabolic model problem (1), which is studied in greater detail in [6, 21, 25, 26, 41, 48]. We consider the space-time variational formulation of (1) in anisotropic Sobolev spaces: find $u$ such that
(3) 
for all admissible test functions, where $f$ is a given right-hand side. Here, the bilinear form is bounded, see [6, Lemma 2.6, p. 505]. The anisotropic Sobolev spaces
are endowed with the Hilbertian norms
with the usual Bochner-Sobolev norms
see [25, 26, 41, 48] for more details. The dual space is characterized as the completion of $L^2(Q)$ with respect to the Hilbertian norm
where $\langle \cdot, \cdot \rangle$ denotes the duality pairing as extension of the inner product in $L^2(Q)$. In [6], the following existence and uniqueness theorem is proven by a transposition and interpolation argument as in [25, 26], see also [21].

Theorem 2.1.
Let the right-hand side be given. Then, the variational formulation (3) has a unique solution, which satisfies a corresponding stability estimate. Furthermore, the solution operator
is an isomorphism.
For simplicity, we only consider homogeneous Dirichlet conditions; inhomogeneous Dirichlet conditions can be treated via homogenization as in the elliptic case, see [38, pp. 61-62], since for any given Dirichlet datum, a suitable extension exists, see [6, 9] for more details.
For a discretization scheme, let the bounded Lipschitz domain $\Omega$ be an interval for $d = 1$, polygonal for $d = 2$, or polyhedral for $d = 3$. For a tensor-product ansatz, we consider admissible decompositions
with space-time elements, where the time intervals with their mesh sizes are defined via the decomposition
of the time interval $(0, T)$. The maximal and the minimal time mesh sizes are denoted accordingly. For the spatial domain $\Omega$, we consider a shape-regular sequence of admissible decompositions
of $\Omega$ into finite elements with local mesh sizes and a maximal mesh size. The spatial elements are intervals for $d = 1$, triangles or quadrilaterals for $d = 2$, and tetrahedra or hexahedra for $d = 3$. Next, we introduce the finite element space
(4) 
of piecewise multilinear, continuous functions, i.e.
In fact, this is either the space of piecewise linear, continuous functions on intervals ($d = 1$), triangles ($d = 2$), and tetrahedra ($d = 3$), or the space of piecewise linear/bilinear/trilinear, continuous functions on intervals ($d = 1$), quadrilaterals ($d = 2$), and hexahedra ($d = 3$). Analogously, for a fixed polynomial degree, we consider the space of piecewise polynomial, continuous functions
(5) 
Using the finite element space (4), it turns out that a discretization of (3) with the conforming ansatz space and the conforming test space is not stable, see [48, Section 3.3]. A possible way out is the modified Hilbert transformation $\mathcal{H}_T$, defined by
$(\mathcal{H}_T u)(t) = \sum_{k=0}^\infty u_k \cos\left( \left( \frac{\pi}{2} + k\pi \right) \frac{t}{T} \right), \quad t \in (0,T),$
where the given function $u$ is represented by
(6) $u(t) = \sum_{k=0}^\infty u_k \sin\left( \left( \frac{\pi}{2} + k\pi \right) \frac{t}{T} \right)$
with the eigenfunctions $\varphi_k(t) = \sin\left( \left( \frac{\pi}{2} + k\pi \right) \frac{t}{T} \right)$ and eigenvalues $\lambda_k = \left( \frac{\pi}{2} + k\pi \right)^2 / T^2$, $k \in \mathbb{N}_0$, satisfying $-\varphi_k'' = \lambda_k \varphi_k$, $\varphi_k(0) = 0$, $\varphi_k'(T) = 0$. This approach was introduced recently in [41] and [48, Section 3.4]. The novel transformation $\mathcal{H}_T$ acts on the finite time interval $(0, T)$, whereas analogous considerations on an infinite time interval with the classical Hilbert transformation are investigated in [8, 11, 12, 24]. The most important properties of $\mathcal{H}_T$ are summarized in the following, see [41, 42, 47, 48]. The map
is norm preserving, bijective and fulfills the coercivity property
(7) $\langle \partial_t u, \mathcal{H}_T u \rangle_{L^2(0,T)} = \sum_{k=0}^\infty \left( \frac{\pi}{2} + k\pi \right) \frac{u_k^2}{2} > 0$
for functions $u \neq 0$ with expansion coefficients $u_k$ as in (6). Note that the norm induced by this inner product is equivalent to the corresponding Sobolev norm. Moreover, the relations
(8) 
hold true. With the modified Hilbert transformation, the variational formulation (3) is equivalent to finding $u$ such that
(9) 
Hence, unique solvability of the variational formulation (9) follows from the unique solvability of (3), which implies a corresponding stability estimate. When using a conforming space-time finite element space, the Galerkin variational formulation of (9) is to find a discrete solution such that
(10) 
Note that ansatz and test spaces are equal. In [48], the following theorem is proven.
Theorem 2.2.
Let a conforming space-time finite element space and a right-hand side be given. Then, a unique solution of the Galerkin variational formulation (10) exists. If, in addition, the right-hand side satisfies an additional regularity assumption, then a corresponding stability estimate holds true with a constant.
Theorem 2.2 states that, under this additional regularity assumption, any conforming space-time finite element space leads to an unconditionally stable method, i.e. no CFL condition is required. For the choice of the tensor-product space-time finite element space
from (5), the unique solution of the Galerkin variational formulation (10) fulfills the space-time error estimates
(11)  
(12)  
(13) 
with a constant for a sufficiently smooth solution of (3) and a sufficiently regular boundary, where, for the error estimate (13), the sequence of spatial decompositions is additionally assumed to be globally quasi-uniform, see [41, 48] for details.
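As a numerical illustration of the modified Hilbert transformation, the following sketch applies the sine-to-cosine mapping from [41, 48] to a function given by finitely many expansion coefficients and checks the norm preserving property in $L^2(0,T)$ by quadrature (the function names here are ours, chosen for illustration):

```python
import numpy as np

def eval_u_and_HTu(coeffs, t, T):
    """Evaluate u(t) = sum_k u_k sin((pi/2 + k*pi) t / T) and its
    modified Hilbert transform, obtained by replacing sin with cos."""
    u = np.zeros_like(t)
    Hu = np.zeros_like(t)
    for k, uk in enumerate(coeffs):
        mu = (np.pi / 2 + k * np.pi) / T
        u += uk * np.sin(mu * t)
        Hu += uk * np.cos(mu * t)
    return u, Hu

def l2_norm(f, t):
    """Composite trapezoidal approximation of the L2(0,T) norm
    on a uniform grid t."""
    dt = t[1] - t[0]
    return np.sqrt(dt * np.sum(0.5 * (f[:-1] ** 2 + f[1:] ** 2)))
```

For any finite sine expansion, both bases are orthogonal with identical basis norms, so the two quadrature norms agree up to the quadrature error.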
In the remainder of this work, we consider the tensor-product space of piecewise linear, continuous functions, where analogous results hold true for an arbitrary polynomial degree. Hence, the total number of degrees of freedom is the product of the numbers of temporal and spatial degrees of freedom.
For an easier implementation, we approximate the righthand side by
(14) 
with coefficients given by the projection onto piecewise constant functions. So, we consider the perturbed variational formulation to find a discrete solution such that
(15) 
Note that, for piecewise linear functions, the space-time error estimates (11), (12), (13) are not spoilt. Note additionally that, for a higher polynomial degree, a projection onto polynomials of corresponding degree should be used instead for preserving the space-time error estimates (11), (12), (13). After an appropriate ordering of the degrees of freedom, the discrete variational formulation (15) is equivalent to the global linear system
(16) 
with the system matrix, which is a sum of Kronecker products of temporal and spatial matrices; the spatial mass and stiffness matrices are given by
(17) 
and the temporal matrices are defined via temporal inner products involving the modified Hilbert transformation. Note that the temporal matrix discretizing the time derivative is dense, symmetric, and positive definite, see (7), whereas the temporal mass-type matrix is dense, nonsymmetric, and positive definite, see (8). Additionally, the vector of the right-hand side in (16) is composed of temporal blocks of spatial vectors, which, with the help of (14), can be computed from the coefficients of the approximated right-hand side by exploiting the tensor-product structure.
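For the one-dimensional case $d = 1$, the spatial mass and stiffness matrices of piecewise linear, continuous functions can be assembled as in the following toy sketch (dense storage, Dirichlet degrees of freedom not yet eliminated; in practice these matrices are stored sparsely):

```python
import numpy as np

def mass_stiffness_1d(nodes):
    """Assemble the P1 mass matrix M and stiffness matrix A on a 1D mesh
    given by sorted node coordinates."""
    n = len(nodes)
    M = np.zeros((n, n))
    A = np.zeros((n, n))
    for e in range(n - 1):
        h = nodes[e + 1] - nodes[e]
        Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])    # local mass
        Ae = 1.0 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])  # local stiffness
        idx = np.array([e, e + 1])
        M[np.ix_(idx, idx)] += Me
        A[np.ix_(idx, idx)] += Ae
    return M, A
```

The entries of the mass matrix sum to the length of the interval, and the stiffness matrix annihilates constant functions, which gives simple consistency checks.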
3 Preliminaries for the Space-Time Solvers
In this section, some properties of the Kronecker product and direct solvers, which are needed in Section 4, are summarized.
3.1 Kronecker Product
In this subsection, some basic properties of the Kronecker product are stated, see, e.g., [18, 37]. Let $A \in \mathbb{C}^{m \times n}$ and $B \in \mathbb{C}^{p \times q}$ be given matrices. The Kronecker product $A \otimes B$ is defined as the block matrix
$A \otimes B = (a_{ij} B)_{i=1,\dots,m,\, j=1,\dots,n} \in \mathbb{C}^{mp \times nq}.$
Furthermore, the vectorization of a matrix $X \in \mathbb{C}^{p \times n}$ converts the matrix into a column vector by stacking its columns, i.e. we define
$\mathrm{vec}(X) = (x_{11}, \dots, x_{p1}, x_{12}, \dots, x_{pn})^\top \in \mathbb{C}^{pn}.$
In the remainder of this work, we use the following properties of the Kronecker product and the vectorization of a matrix:
For the conjugate transposition and transposition, it holds true that $(A \otimes B)^H = A^H \otimes B^H$ and $(A \otimes B)^\top = A^\top \otimes B^\top$.
For regular matrices $A$ and $B$, we have $(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$.
The mixed-product property $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$ is valid for matrices of compatible sizes.
It holds true that
(18) $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^\top),$
where also in the case of complex matrices, only the transposition is applied.
For a given vector $x \in \mathbb{C}^{pn}$, define the matrix $X \in \mathbb{C}^{p \times n}$ with entries $X_{ij} = x_{(j-1)p + i}$ for $i = 1, \dots, p$, $j = 1, \dots, n$, i.e. $\mathrm{vec}(X) = x$. Then, the equality (18) yields
(19) $(A \otimes B)\, x = \mathrm{vec}(B X A^\top).$
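The identities (18) and (19) are easy to verify numerically; the following snippet checks $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^\top)$ for random matrices, with the vectorization realized as column-major reshaping:

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((4, 2))
X = rng.standard_normal((2, 5))   # rows = columns of B, columns = columns of A

def vec(M):
    """Stack the columns of M into one vector (column-major order)."""
    return M.reshape(-1, order="F")

lhs = np.kron(A, B) @ vec(X)
rhs = vec(B @ X @ A.T)
assert np.allclose(lhs, rhs)
```

Note that the matrices need not be square; only the shape of $X$ has to be compatible with $B$ and $A$.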
3.2 Sparse Direct Solver
In this subsection, we recall some properties of sparse direct solvers, like left-looking, right-looking, and multifrontal methods, for solving linear systems with a sparse system matrix, see, e.g., [7, 10, 17, 22, 27, 30, 34]. Here, the sparse matrix comes from finite element or finite difference discretizations of a physical domain in $d$ space dimensions, like the spatial mass or stiffness matrix in (17). Sparse direct solvers exploit the sparsity pattern of the system matrix and are based on a divide-and-conquer technique, which can be interpreted as a procedure of subdividing the physical domain, leading also to a subdivision of the degrees of freedom. Usually, the following steps are applied in such a method:

Ordering Step, e.g., minimum degree or nested dissection methods,

Symbolic Factorization Step,

Numerical Factorization Step,

Solving Step.
For structured grids, the complexity of these methods is summarized in Table 1.
d | Ordering and Factorization Steps | Solving Step | Memory
1 | $O(N)$ | $O(N)$ | $O(N)$
2 | $O(N^{3/2})$ | $O(N \log N)$ | $O(N \log N)$
3 | $O(N^2)$ | $O(N^{4/3})$ | $O(N^{4/3})$

Table 1: Typical complexity of sparse direct solvers on structured grids with $N$ degrees of freedom and nested dissection ordering.
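The four steps above can be sketched with SciPy's SuperLU interface, which stands in here for the sparse direct solver MUMPS used in the paper; the example matrix is a five-point finite difference Laplacian on a structured grid:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Five-point Laplacian on an n-by-n structured grid (d = 2)
n = 20
I = sp.identity(n, format="csc")
T1 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
K = (sp.kron(I, T1) + sp.kron(T1, I)).tocsc()

# Ordering, symbolic, and numerical factorization in one call
lu = splu(K, permc_spec="COLAMD")

# Solving step: the factorization can be reused for many right-hand sides
b = np.ones(n * n)
x = lu.solve(b)
```

Reusing the factorization for several right-hand sides is exactly what the space-time solvers of the next section exploit.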
There are several open-source software packages for sparse direct solvers. In this paper, we only use the sparse direct solver MUMPS 5.3.3 [1, 2].

4 Space-Time Solvers
In this section, efficient solvers for the large-scale space-time system (16) are developed. Our new solver is based on [19, Section 3] and [44, Section 4], where analogous results are derived for methods in isogeometric analysis. In greater detail, we state solvers for the global linear system
(20) 
given in (16) with the symmetric, positive definite temporal and spatial matrices and the nonsymmetric, positive definite temporal matrix. Since (20) is a (generalized) Sylvester equation, we can apply the Bartels-Stewart method [5] with a real or complex Schur decomposition, and the Fast Diagonalization method [29] to solve (20), see also [16, 37]. In all three cases, the matrix pencil of the temporal matrices is decomposed in the form of a simultaneous factorization
with real, regular transformation matrices, where the transformed matrix is upper (quasi-)triangular, or with complex, regular transformation matrices, where the transformed matrix is upper triangular or diagonal. Defining
gives the representations
Hence, the global linear system is equivalent to solving a transformed Kronecker system with identity matrices of the respective dimensions. Thus, the solution of (20) is given by
(21) 
The first step in (21) is the calculation of the vector
(22) 
with a matrix corresponding to the relation (19). The second step in (21) is to solve the resulting transformed linear system.
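A minimal dense sketch of the Fast Diagonalization approach for a system $(A_t \otimes M_x + M_t \otimes A_x)\, u = f$ reads as follows, assuming the temporal pencil is diagonalizable (the matrix names are generic placeholders rather than the paper's notation, and the eigenvalues may be complex in general, so complex arithmetic is used):

```python
import numpy as np
from scipy.linalg import eig, solve

def fast_diagonalization_solve(A_t, M_t, M_x, A_x, f):
    """Solve (A_t (x) M_x + M_t (x) A_x) u = f by diagonalizing the
    temporal pencil: with M_t V = A_t V diag(lam), only Nt spatial
    systems of size Nx remain."""
    Nt, Nx = A_t.shape[0], M_x.shape[0]
    lam, V = eig(M_t, A_t)                 # M_t V = A_t V diag(lam)
    F = f.reshape((Nx, Nt), order="F").astype(complex)
    G = F @ np.linalg.inv(A_t @ V).T       # transform the right-hand side
    W = np.empty_like(G)
    for i in range(Nt):                    # one spatial solve per eigenvalue
        W[:, i] = solve(M_x + lam[i] * A_x, G[:, i])
    U = W @ V.T                            # transform back
    return U.reshape(-1, order="F")
```

After the temporal eigendecomposition, only one spatial system per temporal eigenvalue remains, and each of these can in practice be treated by a sparse direct solver as in Section 3.2.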