1 Introduction
We focus on the adaptive finite element method for time-dependent problems. We consider the space-time formulation in an arbitrary dimension d. Here, d-1 dimensions correspond to the spatial discretization, and the last dimension corresponds to the time discretization. We start from a uniform d-dimensional space-time mesh, and we compare it to a sequence of (d-1)-dimensional meshes employed by the time-marching scheme. Next, we consider different possible space-time mesh refinement patterns resulting from various space-time singularities. Finally, we look at the structure of the d-dimensional space-time refined mesh and compare it to a corresponding sequence of (d-1)-dimensional refined meshes employed by the time-marching scheme. First, we consider the refinement towards a space-time "edge", resulting from a point traveling through space and time. This space-time refinement pattern corresponds to a sequence of spatial meshes refined towards "points" located on the space-time edge at particular time moments. Second, we consider the refinement towards a space-time "face", resulting from an edge traveling through space and time. This space-time refinement pattern corresponds to a sequence of spatial meshes refined towards "edges" located on the space-time face at particular time moments. Third, we consider the refinement towards a space-time "hyperface", resulting from a face traveling through space and time. This space-time refinement pattern corresponds to a sequence of spatial meshes refined towards "faces" located on the space-time hyperface at particular time moments. We assume that the time step size in the time-marching scheme equals the diameter along the time axis of the smallest elements in the space-time mesh. We compare the computational cost of the iterative solver executed over the d-dimensional space-time domain to the computational cost of the iterative solver executed multiple times over the (d-1)-dimensional meshes during the time-marching scheme. We compare the computational cost of the direct solver executed over the d-dimensional space-time domain to the computational cost of the direct solver executed multiple times over the (d-1)-dimensional meshes within the time-marching scheme.

Computational complexity, and especially time complexity, is one of the most fundamental concepts of theoretical computer science. It was first defined in 1965 by Hartmanis and Stearns [1]. The systems of equations generated by non-stationary problems can be solved by either direct [2; 3] or iterative solvers [18].
The time complexity of iterative solvers, in general, can be estimated as O(kN), where k is the number of iterations; in general, k depends on the spectral properties of the matrix, and it grows with the problem size N. The time complexity of direct solvers [2; 3] for certain classes of meshes, especially regular meshes, is well known. In particular, for three-dimensional uniform grids, the computational cost is of the order of O(N^2), and for two-dimensional grids, the cost is of the order of O(N^{3/2}) [4; 5]. Sparse direct solvers rely on a suitable permutation of the matrix, resulting in its banded structure, and on efficient sparse factorization avoiding zero entries. The problem of finding an optimal order of elimination of unknowns for the direct solver is, in general, NP-complete [7]. There are several heuristic algorithms analyzing the sparsity pattern of the resulting matrix
[9; 10; 11; 12; 13]. We focus on adaptive grids, refined both in the space-time and in the space domains. For three-dimensional grids adapted towards a point, an edge, and a face, the computational complexities are known [6]. These estimates assume a prescribed order of elimination of variables [14]. Similar estimates are available for two-dimensional grids refined towards a point or an edge [15]. These estimates assume a prescribed order of elimination of variables [16]. There are also solvers based on hierarchical matrix decompositions [8]. Their time complexity, in general, depends on the number of unknowns in the mesh. However, the constant in front of the complexity will also grow with the mesh dimension. We leave their analysis in the space-time setup for our future work.

This paper derives a general formula for the number of unknowns for a mesh of arbitrary dimension, refined towards a singularity of arbitrary order. This paper generalizes the results discussed in [19], focusing on the computational costs of space-time formulations and time-marching schemes. In our general estimates, we do not consider the polynomial order of approximation p, and we assume that it is a constant in our formulas. Additionally, the singularities in our case may have an arbitrary shape, as presented in Figure 1. We only fix the dimension of the space and the order of the singularity; see Figure 13.
Summing up, this paper estimates the time complexity of finite element method simulations performed on adaptive d-dimensional space-time meshes. We also estimate the time complexity of finite element method simulations performed on a sequence of adaptive (d-1)-dimensional meshes resulting from the corresponding time-marching scheme. In particular, we estimate the number of unknowns for the whole adaptive space-time formulation, and the number of unknowns of a single mesh from the sequence of adaptive meshes resulting from the time-marching scheme. We estimate the time complexity of the iterative and direct solvers. Thus, we have the lower bounds (expressed by the number of unknowns) and the upper bounds (expressed by the time complexity of the sparse direct solver). We also compare the time complexity of iterative solvers executed for either the space-time formulation or the time-marching scheme. We show that the time complexity of space-time formulations is usually higher than the time complexity of time-marching schemes.
The structure of the paper is the following. We start in Section 2 with the introduction of a general algorithm refining the mesh towards a given singularity. Next, Section 3 presents an overview of the sparse Gaussian elimination, usually implemented by using a multifrontal solver approach. Then, we introduce our notion of the element partition tree that allows constructing an ordering for the sparse matrix permutation in the space-time formulations. We also discuss the recursive formula for estimating the time complexity of our sparse direct solvers based on orderings constructed from the element partition trees. Section 4 is devoted to time complexity estimates for multidimensional meshes refined towards point singularities. In this case, we include the polynomial orders of approximation. In the following Section 5, we derive the time complexity for multidimensional grids refined towards arbitrary singularities. Section 6 summarizes our findings in the context of space-time formulations and time-marching schemes. We also present numerical verification using MATLAB codes in Section 7. Finally, the paper is concluded in the last section.
2 Construction of an adaptive mesh
We employ d-dimensional hypercube elements (rectangles in 2D, hexahedra in 3D). We utilize hierarchical basis functions of order p=2. We define one-dimensional hierarchical shape functions, which will be used later to define the two-, three-, and higher-dimensional basis functions by their tensor products and by gluing together the shape functions from adjacent elements. For more details, we refer to [17]. Over the reference interval [0,1], the shape functions for p=2 are the two linear functions and one quadratic bubble:

chi_1(xi) = 1 - xi, chi_2(xi) = xi, chi_3(xi) = (1 - xi) xi. (1)
These shape functions are used first to define shape functions over two-, three-, and higher-dimensional finite elements. For example, in two dimensions, Equation 2 defines shape functions over each of the four vertices of the element:

phi_{ij}(xi_1, xi_2) = chi_i(xi_1) chi_j(xi_2), i, j in {1, 2}. (2)

The vertex shape functions are created by the tensor product of one-dimensional linear shape functions.
Next, Equation 3 defines quadratic shape functions over each of the four edges of the element:

chi_3(xi_1) chi_1(xi_2), chi_3(xi_1) chi_2(xi_2), chi_1(xi_1) chi_3(xi_2), chi_2(xi_1) chi_3(xi_2), (3)

where p denotes the polynomial order of approximation over the finite element. The edge shape functions are constructed by multiplying one higher-order shape function with linear shape functions over a finite element. The edge shape functions are hierarchical. Namely, they extend the vertex shape functions to the approximation of order p=2.
Finally, we define the quadratic basis function over an element interior:

chi_3(xi_1) chi_3(xi_2). (4)

The interior shape functions are constructed by the tensor product of higher-order shape functions. The interior shape functions are hierarchical. Namely, they complete the basis for the approximation of order p=2.
This tensor product construction can be generalized to arbitrary-dimensional finite elements. For example, we consider shape functions over vertices, edges, faces, and interiors in three dimensions. In four dimensions, we consider shape functions over vertices, edges, faces, hyperfaces, and interiors.
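As an illustration, the tensor-product construction for p = 2 can be sketched in a few lines of Python; the concrete form of the quadratic bubble function is an assumption consistent with standard hierarchical bases [17], not necessarily the exact scaling used in the paper.

```python
# Sketch of hierarchical shape functions for p = 2 on the reference
# interval [0, 1], and their tensor products on the reference square.
# The bubble chi(2, xi) = xi * (1 - xi) is an assumed (standard) form.

def chi(i, xi):
    """1D hierarchical shape functions: two linear + one quadratic bubble."""
    if i == 0:
        return 1.0 - xi          # linear, equals 1 at xi = 0
    if i == 1:
        return xi                # linear, equals 1 at xi = 1
    if i == 2:
        return xi * (1.0 - xi)   # quadratic bubble, vanishes at both ends
    raise ValueError(i)

def shape2d(i, j, xi, eta):
    """2D shape function as a tensor product of 1D shape functions.
    (i, j) in {0, 1}^2   -> vertex functions,
    one index equal to 2 -> edge functions,
    (2, 2)               -> interior bubble."""
    return chi(i, xi) * chi(j, eta)

# A vertex function equals 1 at its vertex and 0 at the others:
print(shape2d(0, 0, 0.0, 0.0))   # 1.0
print(shape2d(0, 0, 1.0, 0.0))   # 0.0
# The interior bubble vanishes on the whole element boundary:
print(shape2d(2, 2, 0.5, 0.0))   # 0.0
```

The same pattern extends to d dimensions by multiplying d one-dimensional factors, which is how the vertex, edge, face, hyperface, and interior functions arise.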
We identify the nodes of the mesh with basis functions. We also consider supports of nodes, defined as the supports of the basis functions associated with the node. We have basis functions assigned to vertices, edges, faces, hyperfaces (in higher dimensions), and interiors. In general, the support of a node spans over all the adjacent elements sharing the node. For example, in the case of a two-dimensional regular mesh, the support of a vertex node spans into the four elements sharing the node, the support of an edge node spans into the two elements sharing the node, and the support of an interior node is equal to the single element.
We focus on adaptive meshes, where we employ the 1-irregularity rule, saying that an element can be broken only once without breaking its neighbors. To construct an adaptive mesh over a singularity, we start with one initial element and iteratively refine all elements that overlap with the singularity, making sure that the 1-irregularity rule is being followed. Algorithm 1 can be used to build an adaptive mesh over a singularity of a given shape with refinement level r. An exemplary singularity mesh construction is shown in Figure 1.
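The refinement loop described above can be sketched in Python as follows; the element representation and the adjacency test are illustrative simplifications, not the paper's actual data structures.

```python
# A minimal 2D sketch of the refinement procedure: iteratively refine
# all elements overlapping a point singularity, then restore the
# 1-irregularity rule (edge-adjacent elements may differ by at most one
# refinement level). Elements are (x, y, size, level) squares in [0,1]^2.

def children(e):
    x, y, s, lvl = e
    h = s / 2
    return [(x, y, h, lvl + 1), (x + h, y, h, lvl + 1),
            (x, y + h, h, lvl + 1), (x + h, y + h, h, lvl + 1)]

def contains(e, p):
    x, y, s, _ = e
    return x <= p[0] <= x + s and y <= p[1] <= y + s

def adjacent(a, b):
    """True if the two axis-aligned squares share (part of) an edge."""
    ax, ay, asz, _ = a
    bx, by, bsz, _ = b
    touch_x = ax + asz == bx or bx + bsz == ax
    touch_y = ay + asz == by or by + bsz == ay
    overlap_x = min(ax + asz, bx + bsz) > max(ax, bx)
    overlap_y = min(ay + asz, by + bsz) > max(ay, by)
    return (touch_x and overlap_y) or (touch_y and overlap_x)

def refine_towards(point, levels):
    mesh = [(0.0, 0.0, 1.0, 0)]
    for _ in range(levels):
        # refine every element overlapping the singularity
        mesh = [c for e in mesh if contains(e, point) for c in children(e)] \
             + [e for e in mesh if not contains(e, point)]
        # enforce the 1-irregularity rule
        changed = True
        while changed:
            changed = False
            for e in mesh:
                if any(adjacent(e, f) and f[3] - e[3] > 1 for f in mesh):
                    mesh.remove(e)
                    mesh.extend(children(e))
                    changed = True
                    break
    return mesh

mesh = refine_towards((0.3, 0.3), 4)
print(len(mesh))
```

Dyadic coordinates are exact in binary floating point, so the equality tests in the adjacency check are safe here.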
3 Sparse Gaussian elimination and matrix permutations based on element partition trees
3.1 Element partition tree
An element partition tree is a tree created by the recursive partitioning of the elements of the mesh into two parts using some partitioning strategy – an example tree can be found in Figure 2. An element partition tree is a binary tree with the following properties:

E(t) is the set of mesh elements contained in tree node t;

the root node t_root contains all elements of the mesh, in other words E(t_root) = M, where M is the set of all elements in the mesh;

each node t for which |E(t)| = 1 is a leaf;

each node t for which |E(t)| > 1 has exactly two children, t_1 and t_2;

pi is a partition function that for a given set of mesh elements E returns a proper nonempty subset of E, i.e., the empty set != pi(E), and pi(E) is strictly contained in E;

E(t_1) = pi(E(t)) and E(t_2) = E(t) \ pi(E(t)).
3.2 Ordering generation and solving using generated ordering
Given an element partition tree, we can generate a row elimination order for the matrix using a post-order element partition tree traversal. At each traversed partition tree node, we list all basis functions that have supports completely contained in the tree node's elements and have not been listed already. This produces a permutation (or ordering) of all nodes. An example of the ordering generation can be seen in Figure 3.
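This traversal can be sketched on a toy 1D mesh of four elements with vertex basis functions; the tree and supports below are illustrative, not taken from the paper's examples.

```python
# Sketch of the ordering generation: traverse the element partition
# tree post-order and, at each tree node, emit every basis function
# whose support is fully contained in that node's elements and has not
# been emitted yet.

def generate_ordering(node, supports, emitted=None, order=None):
    """node = (set_of_elements, [children]); supports: function -> set of elements."""
    if emitted is None:
        emitted, order = set(), []
    elems, kids = node
    for kid in kids:
        generate_ordering(kid, supports, emitted, order)
    for f, sup in supports.items():
        if f not in emitted and sup <= elems:
            emitted.add(f)
            order.append(f)
    return order

# Four 1D elements e0..e3; vertex function v_i is supported on the
# elements adjacent to vertex i.
supports = {"v0": {0}, "v1": {0, 1}, "v2": {1, 2}, "v3": {2, 3}, "v4": {3}}
tree = ({0, 1, 2, 3}, [({0, 1}, [({0}, []), ({1}, [])]),
                       ({2, 3}, [({2}, []), ({3}, [])])])
print(generate_ordering(tree, supports))
# ['v0', 'v1', 'v4', 'v3', 'v2']
```

Note that the separator variable v2, whose support straddles both halves of the mesh, is emitted last, at the root of the tree.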
3.3 Sparse Gaussian elimination
We focus on the Gaussian elimination adapted to work over sparse matrices. The basic algorithm is outlined in Algorithm 2. The algorithm solves the linear system Ax = b (A is a sparse matrix and b a vector) by permuting both sides and solving P A P^T y = P b with x = P^T y (P being a permutation matrix representing some ordering). The permutation of the matrix in our case is based on the post-order traversal of the element partition tree. The practical implementation of the algorithm is the multifrontal solver [2; 3], avoiding zeros in matrices by internally constructing the elimination tree, based on the proposed ordering and the sparsity pattern of the matrix. In our numerical experiments, we will employ the multifrontal solver from Octave.
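The permuted solve can be sketched densely as follows; a real multifrontal solver exploits sparsity and elimination trees, which this toy version omits, and the example matrix and ordering are illustrative.

```python
# A small dense sketch of the permuted solve: build a permutation
# matrix P from an ordering, run forward elimination and back
# substitution on P A P^T, and undo the permutation at the end.
import numpy as np

def permuted_gauss_solve(A, b, ordering):
    n = len(b)
    P = np.eye(n)[ordering]              # row i of P picks unknown ordering[i]
    M = P @ A @ P.T                      # symmetric permutation of the matrix
    U = M.astype(float).copy()
    y = (P @ b).astype(float)
    for k in range(n):                   # forward elimination (no pivoting;
        for i in range(k + 1, n):        # assumes the permuted matrix is factorable)
            f = U[i, k] / U[k, k]
            U[i, k:] -= f * U[k, k:]
            y[i] -= f * y[k]
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):       # back substitution
        x[k] = (y[k] - U[k, k + 1:] @ x[k + 1:]) / U[k, k]
    return P.T @ x                       # map back to the original numbering

A = np.array([[4.0, 1, 0], [1, 3, 1], [0, 1, 2]])
b = np.array([1.0, 2, 3])
x = permuted_gauss_solve(A, b, [2, 0, 1])
print(np.allclose(A @ x, b))             # True
```

Since the example matrix is symmetric positive definite, any symmetric permutation keeps elimination without pivoting stable.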
3.4 Time complexity of the element partition tree-based solvers
This section will show that, compared to traditional algorithms that generate orderings, an element partition tree-based ordering gives a recursive computation cost formula that is easy to calculate. Let us analyze Algorithm 2. Step 2 (back substitution) will require a number of operations proportional to the number of nonzero elements in the row echelon form of the matrix. Each nonzero element either had to be nonzero at the beginning of the algorithm or originates from an operation done in step 1 (forward elimination); thus, step 2 does not add anything to the computational complexity of the whole algorithm. For practical reasons, most sparse matrix algorithms do not remove an element from the memory when it has been modified to be zero. For the sake of brevity, we will call a nonzero element any element that is, or has previously been, set to a nonzero value, without regard to whether it is equal to zero at a given time.
The computational cost of Step 1 can be analyzed as a sum of costs of elimination of rows for each element partition tree node. Let us make the following set of observations:

A nonzero element in the initial matrix appears when the two basis functions corresponding to that row and column have overlapping supports. Let us call the graph created by considering the initial matrix to be an adjacency matrix of a graph an overlap graph. Two graph nodes cannot be neighbors in the overlap graph unless the supports of their corresponding basis functions overlap.

When a row is eliminated, the new nonzero elements will be created on the intersections of the columns and rows that had nonzero values in the eliminated row or the corresponding column. If we analyze the matrix as a graph, then the elimination of the row corresponding to a graph node will produce edges between all pairs of nodes that were neighbors of the node being removed.

If at any given time during the forward elimination step a nonzero element exists on the intersection of a row and a column corresponding to two basis functions, then either those two basis functions have corresponding graph nodes that are neighbors in the overlap graph, or there exists a path between those two nodes in the overlap graph that traverses only elements that have been eliminated already.

All variables corresponding to the neighboring nodes of the graph node of a variable x in the overlap graph will be either:

listed in one of the element partition tree nodes that are descendants of the element partition tree node listing the variable x – and those variables are eliminated already by the time this variable is eliminated, or

listed in the same element partition tree node as the variable x, or

having the support of the corresponding basis function intersected by the boundary of the submesh of the element partition tree node containing the variable x – those graph nodes will be listed in one of the ancestors of the element partition tree node listing the variable x.
Thus, in the overlap graph, there are no edges between nodes that belong to two different element partition tree nodes that are not in an ancestor-descendant relationship. At the same time, any path that connects a pair of non-neighboring nodes in the overlap graph will have to go through at least one graph node corresponding to a variable that is listed in a common ancestor of the element partition tree nodes containing the variables from that pair of nodes.

These observations lead to a conclusion that the removal of a single row (lines 5 to 9 in Algorithm 2) will require no more than (n + m)^2 subtractions, where n is the number of mesh nodes removed at the given tree node and m is the number of mesh nodes that were not removed earlier and have support overlapping with the same tree node. This leads to the following recursive formula for the cost of removing all mesh nodes at a single tree node t with children t_1 and t_2:

T(t) = T(t_1) + T(t_2) + sum_{i=0}^{n-1} (n + m - i)^2, (5)

where:
n – the number of mesh nodes removed at the given tree node,
n + m – n plus the number m of mesh nodes on the interface of that tree node.
In further sections, we will denote the cost of the removal of the n nodes belonging to a tree node from the n + m nodes that have overlapping support with that tree node as:

C(n, m) = sum_{i=0}^{n-1} (n + m - i)^2. (6)
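One plausible concrete form of this cost, assuming a dense front and counting one operation per updated entry, can be tabulated directly; the exact constants differ between implementations, and the cubic-in-(n + m) shape is what matters.

```python
# Sketch of the front-elimination cost: eliminating n fully assembled
# rows from a dense front that also holds m interface rows. Step i
# updates the remaining (n + m - i) rows/columns against the pivot row.
# The summation form is an assumption; only the asymptotics are used
# in the complexity analysis.

def elimination_cost(n, m):
    """Operations to eliminate n rows from a front of size n + m."""
    return sum((n + m - i) ** 2 for i in range(n))

# Eliminating one large dense front vs. a nested-dissection-style split
# into two halves plus a small separator:
print(elimination_cost(100, 0))                                # 338350
print(2 * elimination_cost(45, 10) + elimination_cost(10, 0))
```

The second line illustrates why good orderings help: splitting the work into two smaller fronts plus a small separator front is much cheaper than one large front.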
4 Computational complexity for point singularity meshes
This section analyzes the time complexity of the direct solver being run on hierarchical meshes adapted towards a point singularity. While the analysis presented in this section is not as generic and robust as the one presented in the later section, its relative simplicity makes it a useful tool to help understand the impact of singularities on the calculation cost.
First, we define how the point singularity mesh is constructed in Section 4.1. Then, we show how an element partition tree providing an ordering with an optimal, linear complexity can be built for a well-formed adaptive mesh in Section 4.4. Along the way, we show an example of how the computation cost can be calculated for a mesh with a singularity in the corner in Section 4.2.
4.1 Point singularity mesh definition
A point singularity mesh is a mesh refined hierarchically towards a single point. Let us consider a point singularity in a d-dimensional space, refined towards a point until some arbitrary refinement level r. The singularity point can be placed either inside an element or on a boundary shared by several elements. In particular, the singularity point can be placed on the boundary of the whole mesh. Examples of meshes with point singularities are shown in Figure 4.
4.2 Analysis: point singularity placed on the boundary of the mesh
As an example of a point singularity mesh, we analyze a relatively simple case with the singularity point placed in one of the corners of the mesh, and we show a way to generate an ordering that results in a linear complexity of the solver. Examples of adaptive meshes of this type in two and three dimensions can be seen in Figure 6.
In the case of such meshes, we do not have to do the extra step of making sure that the 1-irregularity rule is met, because each refinement breaks only the single corner element into 2^d children. Thus, such meshes have the number of elements described by the following formula:

N_el(0) = 1, N_el(r) = N_el(r-1) + 2^d - 1, (7)

where r is the refinement level of the mesh and d is the dimensionality of the mesh. The formula 7 is equivalent to:

N_el(r) = 1 + r (2^d - 1). (8)

In other words, the number of elements in a corner point singularity mesh grows linearly with the refinement level r.
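This element count is easy to tabulate; the closed form below is a sketch consistent with the linear growth stated above, assuming each refinement replaces only the corner element with 2^d children.

```python
# Counting sketch for the corner point singularity mesh: every
# refinement replaces the corner element with 2^d children, so each
# level adds a net 2^d - 1 elements.

def corner_elements(d, r):
    """Elements in a d-dimensional mesh refined r times towards a corner."""
    return 1 + r * (2 ** d - 1)

for d in (2, 3):
    print([corner_elements(d, r) for r in range(5)])
# 2D: [1, 4, 7, 10, 13] -- each level adds 3 elements
# 3D: [1, 8, 15, 22, 29] -- each level adds 7 elements
```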
After the first refinement, the mesh has a fixed number of nodes, depending on p and d. Each further refinement level adds a layer of 2^d - 1 elements, and for each new element, a constant number of nodes is created. The following formula describes the number of the basis functions on such a mesh for r >= 1:

N(r) = c_1 + c_2 (r - 1), with constants c_1, c_2 depending on p and d. (9)

If p is constant, the number of variables will be linearly proportional to the refinement level r. A visual explanation of how the number of basis functions grows is shown in Figure 7.
4.2.1 Time complexity
To calculate the time complexity, we will use the element partition tree method of ordering generation. The element partition tree can be built by recursively removing the layer of the least refined (largest) elements. The structure of such an element partition tree is shown in Figure 8.
Let us denote each layer by L_i, where i is the number of the shell, counting from the most refined one, with L_0 being the special case of the single element in the corner. It is easy to see that the whole tree is recursive and that each layer L_i with i >= 1 contains 2^d - 1 elements.
During the solution process at each layer L_i, we will first remove all the rows that correspond to the nodes with support entirely within L_i (denoted by int(L_i)), and then remove all rows that correspond to nodes on the interface between L_i and L_{i+1} (denoted by iface(L_i)). This results in the following cost formula for a single layer (excluding the innermost and the outermost layers):

cost(L_i) = C(|int(L_i)|, |iface(L_i)|), (10)

with the elimination cost C introduced in Section 3.4.
To calculate the number of nodes on the interface between layers, we can make the observation that this number stays the same from the moment the given layer has been refined. In other words, the number of nodes on the interface between any two consecutive layers is equal to the number of nodes on the interface between the layers L_1 and L_0:

|iface(L_i)| = |iface(L_0)| for all i. (12)
Similarly, we can calculate the number of internal nodes, making the observation that it is equal to the number of nodes in a mesh of refinement level 1, excluding the nodes on the (hyper)faces not touching the corner with the singularity, minus the number of nodes in a mesh of refinement level 0:

|int(L_i)| = a constant, independent of i and r. (13)
Thus, the cost of the elimination of a single layer L_i, where 0 < i < r, goes as follows:

cost(L_i) = C(|int(L_i)|, |iface(L_i)|) = a constant, independent of i and r. (14)

The computational cost for the whole mesh will then be as follows:

T = sum_{i=0}^{r} cost(L_i) = O(r). (15)

With the assumption that the parameter p is constant and considering that the dimension d is constant for a problem, we arrive at a linear time complexity of the whole algorithm.
4.3 Quasi-optimal point singularity meshes
To generalize the analysis above, we will present a set of properties of the mesh that guarantee that it is possible to create an ordering that leads to a linear solver execution time. We will call a mesh that fulfills those properties a quasi-optimal point singularity mesh. The properties of a quasi-optimal point singularity mesh are the following:

The mesh can be split into no more than cr consecutive layers, where r is the refinement level of the mesh and c is some constant.

Each basis function can be assigned to one of the layers in such a way that a pair of basis functions will not have overlapping supports if they are more than k shells apart, where k is some constant;

Each layer will have no more than b basis functions assigned, where b is some constant;

If those properties are met, it is possible to create an ordering in which the nodes are removed layer by layer. The removal of each consecutive layer will require no more than ((k + 1) b)^3 subtractions, and the whole solver will require no more than cr ((k + 1) b)^3 operations. The constructed mesh will have at most crb nodes, so the time complexity of the solver can be stated as O(crb). As b, c, and k are constants, we can remove them from the big-O notation, and the resulting solution cost will be O(r), i.e., it grows linearly with the number of nodes (which in turn grows linearly with the refinement level).
4.4 Quasi-optimality of adaptive point singularity meshes
The final step of the proof presented in this section is to show that any adaptive point singularity mesh created with Algorithm 1 will be quasi-optimal. It is easy to notice that:

The elements can be grouped into layers by their refinement level, and each basis function can be assigned to the layer of one of its elements.

Because of the 1-irregularity rule, two elements sharing a vertex cannot differ by more than two refinement levels, so a single basis function cannot span over two elements with a difference of more than two refinement levels.

There will be no more than a constant number of elements of each refinement level. At the same time, each element will have no more than a constant number of basis functions – both values are constant with respect to r.

Those observations lead to the conclusion that any point singularity adaptive mesh will be quasi-optimal:

There are exactly r layers.

Nodes that are more than two layers apart will never overlap.

There will be no more than a constant number of nodes at each layer.

It is easy to extend those observations and see that a mesh refined towards more than one but a finite number of point singularities is quasi-optimal in the same way (each layer will potentially have the number of basis functions multiplied by the number of singularities).
5 Computational complexity for multidimensional singularity meshes
This section analyzes the time complexity of direct solvers being run over adaptive meshes refined over singularities of simple shapes of higher dimensionality. In particular, we analyze meshes with singularities in the shape of lines, planes/faces, and hyperplanes/hyperfaces in three or more dimensions, depending on the dimensionality of the space. In this case, for the simplicity of the derivation, we will ignore the polynomial order factor p.
5.1 Structure of the mesh with singularity
In this section, we will analyze d-dimensional meshes refined towards s-dimensional singularities. Such a mesh will be considered refined until refinement level r towards that singularity if all elements that overlap any part of the singularity are refined to refinement level r. Such a refinement can again be achieved using Algorithm 1. The process of such refinement is shown in Figures 9 and 10, respectively, for an edge and a face singularity in a 3-dimensional space. It is important to notice that regular meshes can also be analyzed as meshes refined towards a d-dimensional singularity in a d-dimensional space.
5.2 Analysis: singularity placed on the boundary of the mesh
For the sake of simplicity, we will start by analyzing the basic case of the singularity placed on the boundary of the whole adapted mesh. Let us denote the dimensionality of the mesh as d, the dimensionality of the singularity as s, and the singularity itself as S. The mesh will be denoted as M_r, where r is its refinement level.
5.2.1 Properties of the mesh
The number of elements in such a singularity mesh can be calculated using the following recursive formula:

N_el(0) = 1, N_el(r) = N_el(r-1) + 2^{s(r-1)} (2^d - 1), (16)

since at each refinement step the 2^{s(r-1)} elements touching the singularity are broken into 2^d children each. The formula expands to the following values:

      Point (s=0)     Edge (s=1)      Face (s=2)       Hyperface (s=3)
1D    1 + r           2^r             –                –
2D    1 + 3r          3·2^r − 2       4^r              –
3D    1 + 7r          7·2^r − 6       (7·4^r − 4)/3    8^r
4D    1 + 15r         15·2^r − 14     5·4^r − 4        (15·8^r − 8)/7
dD    1 + (2^d − 1)r, and in general, for s >= 1: 1 + (2^d − 1)(2^{sr} − 1)/(2^s − 1)
For s >= 1, the number of elements can be approximated by the following lower and upper bounds:

2^{sr} <= N_el(r) <= 2^d · 2^{sr}. (17)

The number of variables per element is bounded from below and above by constants depending only on d; this leads us to the following approximation for the number of variables N:

N = Θ(N_el(r)) = Θ(2^{sr}) for s >= 1. (18)

For s = 1, 2, and 3, the approximations lead to the following formulas:

N_el(r) = Θ(2^r), Θ(4^r), Θ(8^r), respectively, (19)
N = Θ(2^r), Θ(4^r), Θ(8^r), respectively. (20)

In other words, both the number of elements and variables grow proportionally to 2^{sr}, and this growth speed (understood in terms of the O notation) depends only on the dimensionality of the singularity s, not the dimensionality of the mesh d.
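The growth claim can be checked numerically with a small sketch; the recursion below is a reconstruction that assumes each refined element splits into 2^d children, of which 2^s keep touching the singularity, and it ignores 1-irregularity corrections.

```python
# Reconstruction of the recursive element count for a d-dimensional
# mesh refined r times towards an s-dimensional boundary singularity:
# at each step, the 2^{s(r-1)} elements touching the singularity are
# each replaced by 2^d children (a net gain of 2^d - 1 per element).

def singularity_elements(d, s, r):
    if r == 0:
        return 1
    touching = 2 ** (s * (r - 1))        # elements refined at this step
    return singularity_elements(d, s, r - 1) + touching * (2 ** d - 1)

# The growth rate depends on s, not on d:
print([singularity_elements(3, 1, r) for r in range(5)])   # [1, 8, 22, 50, 106]
print([singularity_elements(4, 1, r) for r in range(5)])   # [1, 16, 46, 106, 226]
```

Both sequences roughly double per level (growth 2^{sr} with s = 1), even though the meshes live in different dimensions; for s = d the count reduces to the uniform grid 2^{dr}.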
5.2.2 Time complexity of a solution with singularity built on mesh boundary
To analyze the time complexity of the solver, we can again use the element partition tree approach. The element partition tree can be built using the following recursive procedure:

Create a root node of the element partition tree and attach all the elements to that node.

If there is just one element, finish the procedure, and the root node is the sole node of the returned tree.

Create a child node of the root node containing all the least refined elements.

Divide the other elements by parallel planes perpendicular to the singularity and parallel to the boundaries of the mesh (let us denote those planes as dividing planes), crossing the midpoint of the singularity. This will create 2^s submeshes.

For each submesh generated above, run this procedure recursively and attach the resulting trees as subtrees of the second child node.

Finish the procedure and return the tree stemming from the root node.
An example of such an element partition tree is shown in Figure 11. We denote by t_i the element partition tree nodes from the refinement level i.
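The recursive construction above can be sketched for a 1D singularity as follows; modeling each refinement layer as full-width slabs along the singularity is an illustrative simplification of the actual mesh geometry.

```python
# Sketch of the recursive element partition tree construction: peel off
# the least refined layer into the first child, split the remaining
# (more refined) elements at the midpoint of the singularity, and
# recurse. Elements are flattened to (lo, hi, level) slabs along a 1D
# singularity; for counting purposes, the root keeps no elements and
# its children partition them.

def build_tree(elements):
    if len(elements) <= 1:
        return {"elems": elements, "children": []}
    coarsest = min(lvl for _, _, lvl in elements)
    layer = [e for e in elements if e[2] == coarsest]
    rest = [e for e in elements if e[2] > coarsest]
    node = {"elems": [], "children": [{"elems": layer, "children": []}]}
    if rest:
        mid = (min(lo for lo, _, _ in rest) + max(hi for _, hi, _ in rest)) / 2
        halves = [[e for e in rest if e[1] <= mid],
                  [e for e in rest if e[0] >= mid]]
        node["children"] += [build_tree(h) for h in halves if h]
    return node

def count(node):
    return len(node["elems"]) + sum(count(c) for c in node["children"])

# Three layers of slabs, progressively refined towards the singularity:
elems = [(i / 2 ** l, (i + 1) / 2 ** l, l)
         for l in range(1, 4) for i in range(2 ** l)]
tree = build_tree(elems)
print(count(tree))   # every element appears exactly once in the tree
```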
Even though the elimination tree nodes on each level have an analogous set of elements, the order of elimination will differ slightly for the nodes that contain elements on a boundary of the mesh other than the boundary containing the singularity. To simplify the analysis, we will modify the order of elimination slightly so that those nodes behave similarly to the others: the variables corresponding to the basis functions on the boundary of the mesh will be eliminated at the root node. This change increases the computation time slightly. However, it has no impact on the time complexity.
To calculate the computational cost of the solver using the ordering generated from that element partition tree, we will need to know two values for each element partition tree node:

The number of variables removed in that element partition tree node, or n. For these nodes, this number is proportional to the number of that node's elements touching the dividing planes that are used to further divide the submesh.

The total number of variables with support over the elements in this subtree, or n + m. For these nodes, this number is proportional to the number of elements that lie on the dividing planes of the ancestral tree nodes.
It is not difficult to see that the cross-section of the mesh through the dividing planes looks like a mesh in a space of one less dimension built over a singularity of dimensionality one less than the original one.
Thus, for s = 1, the cross-sections are point singularity meshes, and for the tree nodes t_i the following relationships are true:

n = O(i), (21)
n + m = O(i). (22)

All the other nodes have a constant number of elements. Thanks to those observations, we can calculate the time complexity of running the solver for s = 1 using the following equation:

T = sum_{i=0}^{r} 2^{r-i} O(i^3) = O(2^r) = O(N). (23)

For s >= 2, the analogous calculations go as follows:

n = O(2^{(s-1)i}), (25)
n + m = O(2^{(s-1)i}), (26)

and the time complexity of running the solver will follow this equation:

T = sum_{i=0}^{r} 2^{s(r-i)} O(2^{3(s-1)i}) = O(2^{3(s-1)r}). (27)

Considering that the number of variables N = Θ(2^{sr}), we can calculate the time complexity as a function of the number of variables for s >= 2 to be:

T = O(N^{3(s-1)/s}). (28)
We sum up the analysis in Table 2.
Singularity                  Variables   Operations       Operations in terms of N

Point singularity            O(r)        O(r)             O(N)
Edge singularity             O(2^r)      O(2^r)           O(N)
Face singularity             O(4^r)      O(2^{3r})        O(N^{3/2})
Hyperface (3D) singularity   O(2^{3r})   O(2^{6r})        O(N^2)
4D singularity               O(2^{4r})   O(2^{9r})        O(N^{9/4})
5D singularity               O(2^{5r})   O(2^{12r})       O(N^{12/5})
s-dimensional singularity    O(2^{sr})   O(2^{3(s-1)r})   O(N^{3(s-1)/s})
5.3 Quasi-optimal s-dimensional singularity meshes
To generalize the analysis from the previous section, we can observe that a broader class of meshes with singularities will follow the same time complexity – we will define those meshes as quasi-optimal s-dimensional singularity meshes. Examples of such meshes are presented in Figures 12 and 13.
A mesh will be a quasi-optimal s-dimensional singularity mesh if it has the following properties:

The basis functions of the mesh can be assigned to tree nodes of a full n-ary tree (n = 2^s) of height not larger than some cr, where c is some constant.

If a pair of basis functions have overlapping supports, they will be assigned to the same tree node, or one of them will be assigned to an ancestor of the tree node of the other one.

Each tree node will have no more than b(h + 1) (if s = 1) or no more than b·2^{(s-1)h} (if s >= 2) basis functions assigned, where h is the height of the subtree that the given node is the root of and b is some arbitrary positive constant. At the same time, the number of overlaps between basis functions belonging to that subtree and basis functions of ancestor tree nodes will be limited by the same number.
If such a tree is created, we can use it to create an ordering that follows the post-order traversal of that tree. If so, when s >= 2, for an s-dimensional singularity the cost of removing all nodes belonging to a tree node with height h is bounded by the following equation:

O(2^{3(s-1)h}). (29)

The total cost of the execution of the solver will in turn be no more than:

sum_{h=0}^{cr} 2^{s(cr-h)} O(2^{3(s-1)h}) = O(2^{3(s-1)cr}) = O(N^{3(s-1)/s}). (30)

In the case of an s-dimensional singularity with s = 1, the cost of removing all nodes belonging to a tree node with height h is bounded by the following equation:

O(h^3). (31)

And the total cost of running the whole solver will be no more than:

sum_{h=0}^{cr} 2^{cr-h} O(h^3) = O(2^{cr}) = O(N). (32)

Together, both cases can be stated as:

T(N) = O(N^{max(1, 3(s-1)/s)}). (33)
5.4 Quasi-optimality of adaptive meshes over singularities
To prove that every adaptive mesh adapted towards a singularity using Algorithm 1 is quasi-optimal, we can propose the following tree generation algorithm:

Find a dividing plane that crosses the smallest number of basis functions' supports out of all planes perpendicular to the singularity that divide the mesh in such a way that no more than half of all basis functions lie solely on either one of the sides of the plane. Create a tree node with all basis functions whose support is crossed by the chosen plane, and remove them from the mesh.

For each side of the plane, take the basis functions on that side and recursively run the algorithm. The resulting trees become subtrees of the node created in the first step.

Return the tree rooted in the node created in the first step.
Let us analyze if a tree generated that way proves the quasioptimality of the mesh:

Every child of any node will have at most half of the basis functions of its parent. Because of that, the total height of the tree cannot be larger than log_2 N = O(r).

There is no overlap between the supports of basis functions on either side of the dividing plane.

Elements of refinement level k cannot appear farther from the singularity than a constant number of times the side length of those elements. Thus, the number of elements of a given refinement level that cross a dividing plane is limited.

In the case of s = 1, all the dividing planes are parallel to each other. The dividing plane of a tree node i levels deep from the root will have a distance to the nearest planes of tree nodes higher in the tree of at most 2^{-i} l, where l is the length of the singularity. This means that no elements larger than 2^{-i} l will be eliminated at this node. In other words, while each dividing plane crosses at most q elements of each refinement level (q is an arbitrary constant), the minimal refinement level of elements removed at a given tree node increases by one with every step down the tree. This means that the root node will have at most qr elements, and the other nodes will have at most q(r - i) elements.

In the case of higher-dimensional singularities, the number of elements of refinement level k crossed by a dividing plane is bounded in terms of S·2^k, where S is the length/area/volume of the cross-section between the singularity and the dividing plane (assuming that the whole mesh has a side of length 1). Because the plane with the smallest cross-section is chosen, S decreases on average by half with every step down the tree. The total number of elements crossed is therefore bounded by a geometric sum, which, up to a constant, is of the same order as the total number of mesh elements, so the tree again certifies quasi-optimality.

This shows that all meshes with singularities that are adapted using Algorithm 1 are quasi-optimal.
6 Computational costs of space-time formulations versus time-marching schemes
In this section, we consider possible space-time mesh refinement patterns resulting from different space-time singularities. We look at the structure of the d-dimensional space-time refined mesh. Namely, we focus on

a space-time “edge”, resulting from a point traveling through space and time,

a space-time “face”, resulting from an edge traveling through space and time,

a space-time “hyperface”, resulting from a face traveling through space and time.
We will consider these space-time refined meshes in three dimensions (d = 3) and in four dimensions (d = 4). These space-time refined d-dimensional meshes correspond to the following sequences of (d − 1)-dimensional refined meshes employed by the time-marching scheme:

a sequence of spatial meshes refined towards “points” located on the space-time edge at particular time moments,

a sequence of spatial meshes refined towards “edges” located on the space-time face at particular time moments,

a sequence of spatial meshes refined towards “faces” located on the space-time hyperface at particular time moments.
This correspondence is illustrated in Figure 14 for the three-dimensional space-time mesh with an edge singularity and the resulting sequence of two-dimensional meshes refined towards the corresponding points.
We assume that the time step size in the time-marching scheme is equal to the diameter along the time axis of the smallest elements in the space-time mesh.
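This assumption ties the number of time steps to the refinement depth. As a hedged sketch (the unit time interval and the symbol R for the number of refinement levels are assumptions), smallest elements of size 2^(−R) along the time axis yield 2^R time steps:

```python
# Sketch of the assumption: time step = diameter (along the time axis)
# of the smallest space-time elements, here 2**-R on a unit time interval.

def num_time_steps(R, T=1.0):
    dt = T / 2**R            # smallest element diameter along time
    return round(T / dt)     # number of time-marching steps

assert num_time_steps(10) == 1024
```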
6.1 Computational costs of direct solvers for space-time formulations and time-marching schemes
We compare the computational cost of the direct solver executed over the d-dimensional space-time domain to the computational cost of the direct solver executed multiple times over (d − 1)-dimensional meshes within the time-marching scheme. In Table 3 we list space-time adaptive meshes. We include the 4D space-time uniform mesh, the 4D space-time mesh refined towards the hyperface singularity, the 4D space-time mesh refined towards the face singularity, the 4D space-time mesh refined towards the edge singularity, as well as the 3D space-time uniform mesh, the 3D space-time mesh refined towards the face singularity, and the 3D space-time mesh refined towards the edge singularity. We compute the dimensions of all these grids and express them in terms of the number of refinement levels R. Additionally, we provide estimates for the direct solver execution time. An interesting observation is that the computational cost of the direct solver executed for the space-time mesh refined towards an edge is linear in the mesh size.
Next, in Table 4 we evaluate the computational cost of the time-marching scheme. For each d-dimensional space-time mesh refined towards a singularity, we construct a sequence of meshes. Each grid from the sequence is (d − 1)-dimensional, and it is refined towards the singularity. Such a sequence of grids provides a solution with accuracy similar to that of the space-time grid. We estimate the dimensions of the spatial meshes from the sequence and estimate the computational cost of the direct solver executed once for each grid from the sequence. An interesting observation is that the computational cost of the time-marching scheme corresponding to the space-time mesh refined towards an edge is higher than the space-time cost. However, for all other space-time grids, the time-marching scheme is cheaper.
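The edge-singularity comparison can be illustrated with a toy cost model. The formulas below are illustrative assumptions, not the paper's estimates: suppose the edge-refined space-time mesh has about c·2^R elements with a linear direct solver, while time marching performs 2^R solves over point-refined spatial meshes of about c·R elements each. Then the time-marching total exceeds the space-time cost by a factor that grows with R:

```python
# Toy cost model (illustrative assumptions, not the paper's estimates).

def direct_solver_costs(R, c=1.0):
    """Return (space-time cost, time-marching cost) for the edge case."""
    spacetime = c * 2**R          # linear solver over the edge-refined mesh
    marching = 2**R * (c * R)     # 2**R solves, each linear in a c*R mesh
    return spacetime, marching

for R in [5, 10, 20]:
    st, tm = direct_solver_costs(R)
    assert tm / st == R           # gap grows with the refinement depth R
```

Under these assumptions the gap is exactly the refinement depth R, i.e. roughly logarithmic in the space-time mesh size.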
Space-time mesh  Space-time mesh size  Direct solver cost
4D uniform
4D hyperface
4D face
4D edge
3D uniform