We focus on the adaptive finite element method for time-dependent problems. We consider the space-time formulation in arbitrary dimension $d$. Here, $d-1$ dimensions correspond to the spatial discretization, and the last dimension corresponds to the time discretization. We start from a uniform $d$-dimensional space-time mesh, and we compare it to a sequence of $(d-1)$-dimensional meshes employed by the time-marching scheme. Next, we consider different possible space-time mesh refinement patterns resulting from various space-time singularities. Finally, we look at the structure of the $d$-dimensional space-time refined mesh and compare it to a corresponding sequence of
$(d-1)$-dimensional refined meshes employed by the time-marching scheme. First, we consider the refinement towards a space-time "edge", resulting from a point traveling through space and time. This space-time refinement pattern corresponds to a sequence of spatial meshes refined towards "points" located on the space-time edge at particular time moments. Second, we consider the refinement towards a space-time "face", resulting from an edge traveling through space and time. This space-time refinement pattern corresponds to a sequence of spatial meshes refined towards "edges" located on the space-time face at particular time moments. Third, we consider the refinement towards a space-time "hyperface", resulting from a face traveling through space and time. This space-time refinement pattern corresponds to a sequence of spatial meshes refined towards "faces" located on the space-time hyperface at particular time moments. We assume that the time step size in the time-marching scheme equals the diameter along the time axis of the smallest elements in the space-time mesh. We compare the computational cost of the iterative solver executed over the $d$-dimensional space-time domain to the computational cost of the iterative solver executed multiple times over the $(d-1)$-dimensional meshes during the time-marching scheme. We likewise compare the computational cost of the direct solver executed over the $d$-dimensional space-time domain to the computational cost of the direct solver executed multiple times over the $(d-1)$-dimensional meshes within the time-marching scheme.
Computational complexity, and especially time complexity, is one of the most fundamental concepts of theoretical computer science. It was first defined in 1965 by Hartmanis and Stearns. The systems of linear equations generated by non-stationary problems can be solved by either direct [2; 3] or iterative solvers. The time complexity of iterative solvers can, in general, be estimated as $\mathcal{O}(kN)$, where $k$ is the number of iterations and $N$ is the number of unknowns; the number of iterations depends on the spectral properties of the matrix and grows with the problem size. The time complexity of direct solvers [2; 3] for certain classes of meshes, especially regular meshes, is well known. In particular, for three-dimensional uniform grids, the computational cost is of the order of $\mathcal{O}(N^{2})$, and for two-dimensional grids, the cost is of the order of $\mathcal{O}(N^{1.5})$ [4; 5]. Sparse direct solvers rely on a smart permutation of the matrix, resulting in a banded structure, and on an efficient sparse factorization avoiding zero entries. The problem of finding an optimal order of elimination of unknowns for the direct solver is, in general, NP-complete. There are several heuristic algorithms analyzing the sparsity pattern of the resulting matrix [9; 10; 11; 12; 13]. We focus on adaptive grids, refined both in space-time and in the space domains. For three-dimensional grids adapted towards a point, an edge, and a face, the computational complexities are $\mathcal{O}(N)$, $\mathcal{O}(N)$, and $\mathcal{O}(N^{1.5})$, respectively. These estimates assume a prescribed order of elimination of variables. Similarly, for two-dimensional grids refined towards a point or an edge, it is $\mathcal{O}(N)$. These estimates also assume a prescribed order of elimination of variables. There are also solvers based on hierarchical matrix decompositions. Their time complexity, in general, depends on the number of unknowns in the mesh. However, the constant in front of the complexity will also grow with the mesh dimension. We leave their analysis in the space-time setup for our future work.
This paper derives a general formula for the number of unknowns for a mesh of arbitrary dimension $d$, refined towards a singularity of order $s$. This paper generalizes the results discussed in the literature, focusing on the computational costs of space-time formulations and time-marching schemes. In our general estimates, we do not consider the polynomial order of approximation $p$, and we assume that it is a constant in our formulas. Additionally, the singularities in our case may have an arbitrary shape, as presented in Figure 1. We only fix the dimension of the space and the order of the singularity; see Figure 13.
Summing up, this paper estimates the time complexity of finite element method simulations performed on adaptive $d$-dimensional space-time meshes. We also estimate the time complexity of finite element method simulations performed on a sequence of adaptive $(d-1)$-dimensional meshes resulting from the corresponding time-marching scheme. In particular, we estimate the number of unknowns for the whole adaptive space-time formulation, and the number of unknowns of a single mesh from the sequence of adaptive meshes resulting from the time-marching scheme. We estimate the time complexity of the iterative and direct solvers. Thus, we have the lower bounds (expressed by the number of unknowns) and the upper bounds (expressed by the time complexity of the sparse direct solver). We also compare the time complexity of iterative solvers executed for either the space-time formulation or the time-marching scheme. We show that the time complexity of space-time formulations is usually higher than the time complexity of time-marching schemes.
The structure of the paper is the following. We start in Section 2 with the introduction of a general algorithm refining the mesh towards a given singularity. Next, Section 3 presents an overview of sparse Gaussian elimination, usually implemented by using a multi-frontal solver approach. Then, we introduce our notion of the element partition tree, which allows constructing an ordering for sparse matrix permutation in the space-time formulations. We also discuss the recursive formula for estimating the time complexity of our sparse direct solvers based on orderings constructed from the element partition trees. Section 4 is devoted to time complexity estimates for multi-dimensional meshes refined towards point singularities. In this case, we include the polynomial orders of approximation. In the following Section 5, we derive the time complexity for multi-dimensional grids refined towards arbitrary singularities. Section 6 summarizes our findings in the context of space-time formulations and time-marching schemes. We also present numerical verification using MATLAB codes in Section 7. Finally, the paper is concluded in the last section.
2 Construction of an $h$-adaptive mesh
We employ $d$-dimensional hypercube elements (rectangles in 2D, hexahedra in 3D). We utilize hierarchical basis functions of order $p=2$. We define one-dimensional hierarchical shape functions, which will be used later to define the two-, three-, and higher-dimensional basis functions by their tensor products and by gluing together the shape functions from adjacent elements. For more details, we refer to the literature.
These shape functions are used first to define shape functions over two-, three-, and higher-dimensional finite elements. For example, in two dimensions, Equation 2 defines shape functions over each of the four vertices of the element:
The vertex shape functions are created by the tensor product of one-dimensional linear shape functions.
Next, Equation 3 defines quadratic shape functions over each of the four edges of the element:
where $p$ denotes the polynomial order of approximation over the finite element. The edge shape functions are constructed by multiplying one higher-order shape function and $d-1$ linear shape functions over a finite element. The edge shape functions are hierarchical. Namely, they define shape functions for the approximation of order $p$.
Finally, we define quadratic basis function over an element interior
The interior shape functions are constructed by the tensor product of higher-order shape functions. The interior shape functions are hierarchical. Namely, they define shape functions for the approximation of order $p$.
This tensor product construction can be generalized to arbitrary $d$-dimensional finite elements. For example, in three dimensions, we consider shape functions over vertices, edges, faces, and interiors. In four dimensions, we consider shape functions over vertices, edges, faces, hyperfaces, and interiors.
We identify the nodes of the mesh with basis functions. We also consider supports of nodes, defined as the supports of the basis functions associated with the node. We have basis functions assigned to vertices, edges, faces, hyperfaces (in higher dimensions), and interiors. In general, the support of a node spans over all the adjacent elements sharing the node. For example, in the case of a two-dimensional regular mesh, the support of a vertex node spans over the four elements sharing the node, the support of an edge node spans over the two elements sharing the node, and the support of an interior node is equal to the single element.
We focus on $h$-adaptive meshes, where we employ the 1-irregularity rule, saying that an element can be broken only once without breaking its neighbors. To construct an $h$-adaptive mesh over a singularity, we start with one initial element and iteratively refine all elements that overlap with the singularity, making sure that the 1-irregularity rule is followed. As an example, Algorithm 1 can be used to build an $h$-adaptive mesh over a singularity of shape $S$ with refinement level $r$. An exemplary singularity mesh construction is shown in Figure 1.
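To make the refinement loop concrete, the sketch below implements it for hypercube elements in Python. This is our illustration, not the paper's Algorithm 1; the predicate `overlaps_singularity` and all function names are our assumptions.

```python
from itertools import product

def refine(elem):
    """Split a d-dimensional hypercube element into 2^d children."""
    origin, size = elem
    half = size / 2
    return [(tuple(o + half * b for o, b in zip(origin, bits)), half)
            for bits in product((0, 1), repeat=len(origin))]

def touches(a, b):
    """True if the closures of two axis-aligned hypercubes intersect."""
    (oa, sa), (ob, sb) = a, b
    return all(oa[i] <= ob[i] + sb and ob[i] <= oa[i] + sa
               for i in range(len(oa)))

def build_mesh(d, levels, overlaps_singularity):
    """Iteratively refine every element overlapping the singularity,
    conservatively enforcing 1-irregularity by also refining any
    touching element that is coarser than a marked element."""
    mesh = [((0.0,) * d, 1.0)]                      # one initial unit element
    for _ in range(levels):
        marked = {e for e in mesh if overlaps_singularity(*e)}
        changed = True
        while changed:                              # propagate 1-irregularity
            extra = {n for n in mesh for e in marked
                     if n not in marked and touches(n, e) and n[1] > e[1]}
            changed = bool(extra)
            marked |= extra
        mesh = ([c for e in marked for c in refine(e)]
                + [e for e in mesh if e not in marked])
    return mesh
```

For a corner point singularity, each refinement step replaces one element by $2^d$ children, so the element count grows linearly with the refinement level.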
3 Sparse Gaussian elimination and matrix permutations based on element partition trees
3.1 Element partition tree
An element partition tree is a tree created by recursive partitioning of the elements of the mesh into two parts using some partitioning strategy – an example tree can be found in Figure 2. The element partition tree is a binary tree with the following properties:
$E_v$ is the set of mesh elements contained in tree node $v$;
the root node contains all elements of the mesh, in other words $E_{root} = M$, where $M$ is the set of all elements in the mesh;
each node $v$ for which $|E_v| = 1$ is a leaf;
each node $v$ for which $|E_v| > 1$ has exactly two children, $v_1$ and $v_2$, with $E_{v_1} = \pi(E_v)$ and $E_{v_2} = E_v \setminus \pi(E_v)$;
$\pi$ is a partition function that for a given set of mesh elements $E$ returns a proper nonempty subset of $E$, i.e., $\emptyset \neq \pi(E) \subsetneq E$.
3.2 Ordering generation and solving using generated ordering
Given an element partition tree, we can generate a row elimination order for the matrix using a post-order element partition tree traversal. At each traversed partition tree node, we list out all basis functions that have supports completely contained in the tree node elements and have not been listed already. This will produce a permutation (or ordering) of all nodes. An example of ordering generation can be seen in Figure 3.
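A minimal Python sketch of the tree and of the post-order ordering generation follows. All class and function names, and the toy one-dimensional mesh used below, are our assumptions, not the paper's implementation.

```python
class PTNode:
    def __init__(self, elems, children=()):
        self.elems = set(elems)      # E_v: mesh elements of this tree node
        self.children = children

def bisect_tree(elems, partition):
    """Element partition tree by recursive bisection.
    `partition` must return a proper nonempty subset of its argument."""
    if len(elems) <= 1:
        return PTNode(elems)
    part = partition(list(elems))
    rest = [e for e in elems if e not in part]
    return PTNode(elems, (bisect_tree(part, partition),
                          bisect_tree(rest, partition)))

def ordering(node, support, listed=None):
    """Post-order traversal: at each tree node, list every basis function
    whose support (a set of elements) lies entirely within the node's
    elements and has not been listed yet."""
    if listed is None:
        listed = []
    for child in node.children:
        ordering(child, support, listed)
    for f, sup in support.items():
        if f not in listed and sup <= node.elems:
            listed.append(f)
    return listed
```

In the test below, a 1D mesh of four elements carries five vertex basis functions; the basis function shared by the two halves of the mesh ends up eliminated last, at the root.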
3.3 Sparse Gaussian elimination
We focus on Gaussian elimination adapted to work over sparse matrices. The basic algorithm is outlined in Algorithm 2. The algorithm solves the linear system $Ax = b$ ($A$ being a sparse matrix and $b$ a vector) by applying a permutation matrix $P$ representing some ordering and solving the equivalent system $(PAP^{T})(Px) = Pb$. The permutation of the matrix in our case is based on the post-order traversal of the element partition tree. The practical implementation of the algorithm is the multi-frontal solver [2; 3], avoiding zeros in matrices by internally constructing the elimination tree, based on the proposed ordering and the sparsity pattern of the matrix. In our numerical experiments we will employ the multi-frontal solver from Octave.
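The permutation step can be illustrated with a dense stand-in for the sparse solver. This is a sketch only, with names of our choosing; a real multi-frontal solver factorizes the permuted matrix without storing zero entries.

```python
import numpy as np

def solve_with_ordering(A, b, order):
    """Solve A x = b after the symmetric permutation given by `order`
    (e.g., an elimination order produced from an element partition tree)."""
    P = np.eye(len(b))[order]        # permutation matrix: row i <- row order[i]
    Ap = P @ A @ P.T                 # P A P^T
    y = np.linalg.solve(Ap, P @ b)   # solve (P A P^T)(P x) = P b
    return P.T @ y                   # undo the permutation to recover x
```

Any permutation yields the same solution; what the ordering changes is the fill-in, and hence the cost, of the sparse factorization.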
3.4 Time complexity of the element partition tree based solvers
This section will show that, compared to traditional algorithms that generate orderings, an element partition tree-based ordering gives a recursive computation cost formula that is easy to calculate. Let us analyze Algorithm 2. Step 2 (back substitution) will require a number of operations proportional to the number of non-zero elements in the row echelon form of the matrix. Each non-zero element either had to be non-zero at the beginning of the algorithm or originates from an operation done in step 1 (forward elimination); thus, step 2 does not add anything to the computational complexity of the whole algorithm. For practical reasons, most sparse matrix algorithms do not remove an element from memory when it has been modified to become zero. For the sake of brevity, we will call a non-zero element any element that is or has previously been set to a non-zero value, without regard to whether it is equal to zero at a given time.
The computational cost of Step 1 can be analyzed as a sum of costs of elimination of rows for each element partition tree node. Let us make the following set of observations:
A non-zero element in the initial matrix appears when the two basis functions corresponding to that row and column have overlapping supports. Let us call the graph whose adjacency matrix is the initial matrix the overlap graph. Two graph nodes cannot be neighbors in the overlap graph unless the supports of their corresponding basis functions overlap.
When a row is eliminated, the new non-zero elements will be created on the intersection of columns and rows that had non-zero values in the eliminated row or corresponding column. If we analyze the matrix as a graph, then elimination of the row corresponding to a graph node will produce edges between all pairs of nodes that were neighbors of the node being removed.
If at any given time during the forward elimination step a non-zero element exists on the intersection of a row and a column corresponding to two basis functions, then either those two basis functions have corresponding graph nodes that are neighbors in the overlap graph, or there exists a path between those two nodes in the overlap graph that traverses only nodes that have been eliminated already.
All variables corresponding to the neighboring nodes of the graph node of a variable $x$ in the overlap graph will be either:
listed in one of the element partition tree nodes that are descendants of the element partition tree node listing the variable $x$ – and those variables are eliminated already by the time the variable $x$ is eliminated, or
listed in the same element partition tree node as the variable $x$, or
having the support of the corresponding basis function intersected by the boundary of the submesh of the element partition tree node containing the variable $x$ – those graph nodes will be listed in one of the ancestors of the element partition tree node listing the variable $x$.
Thus, in the overlap graph, there are no edges between nodes that belong to two different element partition tree nodes that are not in an ancestor-descendant relationship. At the same time, any path that connects a pair of non-neighboring nodes in the overlap graph will have to go through at least one graph node corresponding to a variable that is listed in a common ancestor of the element partition tree nodes containing the variables from that pair of nodes.
These observations lead to the conclusion that the removal of a single row (lines 5 to 9 in Algorithm 2) will require no more than $m^2$ subtractions, where $m$ is the number of nodes that were not removed earlier and have support overlapping with the same tree node. This leads to the following recursive formula for the cost of removing all mesh nodes at a single tree node:

$$\mathrm{cost} = a\,(a + b)^2,$$

where $a$ is the number of mesh nodes removed at the given tree node, and $b$ is the number of mesh nodes on the interface of that tree node.
In further sections, we will denote the cost of removal of $a$ nodes belonging to a tree node from among the $a + b$ nodes that have overlapping support with that tree node as:

$$T(a, b) = a\,(a + b)^2.$$
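Assuming the bound $T(a, b) = a(a+b)^2$ for eliminating $a$ fully-assembled variables from a front of size $a+b$ (our reading of the cost formula of this section), the recursive accumulation over the element partition tree can be sketched as follows; the `Node` layout and names are ours.

```python
from collections import namedtuple

# Minimal tree node: variables removed here, interface size, child nodes.
Node = namedtuple("Node", "removed interface children")

def elimination_cost(a, b):
    """Upper bound on operations for eliminating `a` rows from a dense
    front of size a + b (b = interface variables kept for the parent)."""
    return a * (a + b) ** 2

def tree_cost(node):
    """Total solver cost: per-node elimination costs accumulated over
    the element partition tree in post-order."""
    return (sum(tree_cost(c) for c in node.children)
            + elimination_cost(node.removed, node.interface))
```
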
4 Computational complexity for point singularity meshes
This section analyzes the time complexity of the direct solver being run on hierarchical meshes adapted towards a point singularity. While the analysis presented in this section is not as generic and robust as the one presented in the following section, its relative simplicity makes it a useful tool to help understand the impact of singularities on the calculation cost.
Firstly, we define how a point singularity mesh is constructed in Section 4.1. Then, we show an example of how the computational cost can be calculated for a mesh with the singularity in the corner in Section 4.2. Later, we show how an element partition tree providing an ordering with an optimal, linear complexity can be built for any well-formed $h$-adaptive mesh in Section 4.4.
4.1 Point singularity mesh definition
A point singularity mesh is a mesh refined hierarchically towards a single point. Let us consider a point singularity in a $d$-dimensional space refined towards a point until some arbitrary refinement level $r$, denoting such a mesh as $M_r$. The singularity point can be placed either inside an element or on a boundary shared by two or more elements. In particular, the singularity point can be placed on the boundary of the whole mesh. Examples of meshes with point singularities are shown in Figure 4.
4.2 Analysis: point singularity placed on the boundary of the mesh
As an example of a point singularity mesh, we analyze a relatively simple case with the singularity point placed in one of the corners of the mesh, and we show a way to generate an ordering that results in a linear complexity of the solver. Examples of $h$-adaptive meshes of this type in two and three dimensions can be seen in Figure 6.
In the case of such meshes, we do not have to perform the extra step of making sure that the 1-irregularity rule is met. Thus, such meshes have the number of elements described by the following recursive formula:

$$n_e(0, d) = 1, \qquad n_e(r, d) = n_e(r-1, d) + 2^d - 1,$$

where $r$ is the refinement level of the mesh and $d$ is the dimensionality of the mesh. Formula 7 is equivalent to:

$$n_e(r, d) = 1 + r\,(2^d - 1).$$

In other words, the number of elements in a corner point singularity mesh grows linearly with the refinement level $r$.
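The two forms of the element count can be cross-checked numerically; function names below are ours.

```python
def corner_elements(r, d):
    """Closed form: 1 + r (2^d - 1) elements after r refinements
    towards a corner of a d-dimensional mesh."""
    return 1 + r * (2 ** d - 1)

def corner_elements_rec(r, d):
    """Recursion: each refinement replaces the corner element
    by its 2^d children, adding 2^d - 1 elements."""
    return 1 if r == 0 else corner_elements_rec(r - 1, d) + 2 ** d - 1
```
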
After the first refinement, the mesh has $(2p+1)^d$ nodes. Each further refinement level adds a layer of $2^d - 1$ elements, and for each such refinement, $(2p+1)^d - (p+1)^d$ new nodes are created. The following formula describes the number of basis functions on such a mesh for $r \geq 1$:

$$N(r) = (2p+1)^d + (r-1)\left((2p+1)^d - (p+1)^d\right).$$

If $p$ is constant, the number of variables will be linearly proportional to the refinement level $r$. A visual explanation of how the number of basis functions grows is shown in Figure 7.
4.2.1 Time complexity
To calculate the time complexity, we will use the element partition tree method of ordering generation. The element partition tree can be built by recursively removing the layer of the least refined (or largest) elements. The structure of such an element partition tree is shown in Figure 8.
Let us denote each layer by $L_i$, where $i$ is the number of the shell, counting from the most refined one, with $L_0$ being the special case of the single smallest element in the corner. It is easy to see that the whole tree is recursive and that each layer $L_i$ with $i > 0$ contains $2^d - 1$ elements.
During the solution process at each layer $L_i$ ($0 < i < r$), we will first remove all the rows that correspond to the nodes with support entirely within $L_i$ (denoted by $a_i$), and then remove all rows that correspond to nodes on the interface between $L_i$ and $L_{i+1}$ (denoted by $b_i$). This results in the following cost formula for a single layer (excluding layers $L_0$ and $L_r$): $\mathrm{cost}(L_i) = a_i\,(a_i + b_i)^2$.
To calculate the number of nodes on the interface between layers, we can make the observation that this number stays the same from the moment a given layer has been refined. In other words, the number of nodes on the interface between any two consecutive layers is equal to the number of nodes on the interface between layers $L_1$ and $L_0$.
Similarly, we can calculate the number of internal nodes, making the observation that it is equal to the number of nodes in a mesh of refinement level 1, excluding the nodes on the (hyper)faces not touching the corner with the singularity, minus the number of nodes in a mesh of refinement level 0.
Thus, the cost of elimination of a single layer $L_i$, where $0 < i < r$, goes as follows:
The computational cost of the whole mesh will then be as follows:
With the assumption that the parameter $p$ is constant, and considering that the dimension $d$ is constant for a given problem, we arrive at a linear time complexity $\mathcal{O}(N)$ of the whole algorithm.
4.3 Quasi-optimal point singularity meshes
To generalize the analysis above, we will present a set of properties of the mesh that guarantee that it is possible to create an ordering that leads to a linear solver execution time. We will call a mesh that fulfills those properties a quasi-optimal point singularity mesh. The properties of a quasi-optimal point singularity mesh are the following:
The mesh can be split into no more than $c_1 r$ consecutive layers, where $r$ is the refinement level of the mesh and $c_1$ is some constant.
Each basis function can be assigned to one of the layers in such a way that a pair of basis functions will not have overlapping supports if they are more than $c_2$ layers apart, where $c_2$ is some constant;
Each layer will have no more than $c_3$ basis functions assigned, where $c_3$ is some constant;
If those properties are met, it is possible to create an ordering in which the nodes are removed layer by layer. Removal of each consecutive layer will require no more than $c_3\left((c_2+1)\,c_3\right)^2$ subtractions, and the whole solver will require no more than $c_1 r\, c_3\left((c_2+1)\,c_3\right)^2$ operations. The constructed mesh will have $\Theta(r)$ nodes, so the time complexity of the solver can be stated as $\mathcal{O}(c_1 c_2^2 c_3^3\, r)$. As $c_1$, $c_2$, and $c_3$ are constants, we can remove them from the big-$\mathcal{O}$ notation, and the resulting solution cost will be $\mathcal{O}(N)$, i.e., it grows linearly with the number of nodes (which in turn grows linearly with the refinement level).
4.4 Quasi-optimality of $h$-adaptive point singularity meshes
A final step of the proof presented in this section is to show that any $h$-adaptive point singularity mesh created with Algorithm 1 will be quasi-optimal. It is easy to notice that:
The elements can be grouped into layers by their refinement level and each basis function can be assigned to the layer of one of its elements.
Because of the 1-irregularity rule, two elements sharing a vertex cannot differ by more than two refinement levels, so a single basis function cannot span over two elements whose refinement levels differ by more than two.
There will be no more than $2^d$ elements of each refinement level. At the same time, each element will have no more than $(p+1)^d$ basis functions – both values are constant.
Those observations lead to the conclusion that any point singularity -adaptive mesh will be quasi-optimal:
There are exactly $r$ layers.
Nodes that are more than two layers apart will never overlap.
There will be no more than $2^d (p+1)^d$ nodes at each layer.
It is easy to extend those observations to see that a mesh refined towards more than one but a finite number of point singularities is quasi-optimal in the same way (each layer will potentially have the number of basis functions multiplied by the number of singularities).
5 Computational complexity for multi-dimensional singularity meshes
This section analyzes the time complexity of direct solvers run over adaptive meshes refined towards singularities of simple shapes of higher dimensionality. In particular, we analyze meshes with singularities in the shape of lines, planes/faces, and hyperplanes/hyperfaces in three or more dimensions, depending on the dimensionality of the space. In this case, for simplicity of the derivation, we will ignore the polynomial order factor $p$.
5.1 Structure of the mesh with singularity
In this section, we will analyze $d$-dimensional meshes refined towards $s$-dimensional singularities. Such a mesh will be considered refined until refinement level $r$ towards that singularity if all elements that overlap any part of the singularity are refined to refinement level $r$. Such refinement can again be achieved using Algorithm 1. The process of such refinement is shown in Figures 9 and 10, respectively, for an edge and a face singularity in three-dimensional space. It is important to notice that regular meshes can also be analyzed as meshes refined towards a $d$-dimensional singularity in $d$-dimensional space.
5.2 Analysis: singularity placed on the boundary of the mesh
For the sake of simplicity, we will start by analyzing the basic case of the singularity placed on the boundary of the whole $h$-adapted mesh. Let us denote the dimensionality of the mesh as $d$, the dimensionality of the singularity as $s$, and the singularity itself as $S$. The mesh will be denoted as $M_r$, where $r$ is its refinement level.
5.2.1 Properties of the mesh
The number of elements in such a singularity mesh can be calculated using the following recursive formula:

$$n_e(0) = 1, \qquad n_e(r) = n_e(r-1) + 2^{s(r-1)}\,(2^d - 1),$$

since at refinement level $r-1$ there are $2^{s(r-1)}$ elements overlapping the singularity, and each of them is replaced by its $2^d$ children. The formula expands to the following values:
|Point ($s=0$)|$n_e(r) = 1 + r\,(2^d - 1)$|
|Edge ($s=1$)|$n_e(r) = 1 + (2^r - 1)(2^d - 1)$|
|Face ($s=2$)|$n_e(r) = 1 + \frac{4^r - 1}{3}(2^d - 1)$|
|Hyperface ($s=3$)|$n_e(r) = 1 + \frac{8^r - 1}{7}(2^d - 1)$|
For $s \geq 1$, the number of elements can be approximated by the following lower and upper bounds:

$$(2^d - 1)\,2^{s(r-1)} \;\leq\; n_e(r) \;\leq\; 1 + (2^d - 1)\,\frac{2^{sr}}{2^s - 1}.$$

The number of variables can be estimated to be between $1$ and $(p+1)^d$ per element; this leads us to the following approximation for the number of variables $N$:

$$n_e(r) \;\leq\; N \;\leq\; (p+1)^d\, n_e(r).$$

For $s$ set to $1$, $2$, and $3$, the approximations lead to $N = \Theta(2^r)$, $N = \Theta(4^r)$, and $N = \Theta(8^r)$, respectively. In other words, both the number of elements and the number of variables grow proportionally to $2^{sr}$, and this growth rate (understood in terms of $\mathcal{O}$-notation) depends only on the dimensionality of the singularity $s$, not on the dimensionality of the mesh $d$.
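The growth claim can be checked with a toy counting model that assumes exactly $2^{s\,i}$ elements touch the boundary singularity at refinement level $i$ (our simplification of the refinement produced by Algorithm 1):

```python
def elements(r, d, s):
    """Element count of a d-dimensional mesh refined r times towards an
    s-dimensional singularity on its boundary: at each step, the
    2^(s * level) elements touching the singularity split into 2^d children."""
    n = 1
    for level in range(r):
        n += 2 ** (s * level) * (2 ** d - 1)
    return n
```

For $s \geq 1$ the ratio between consecutive refinement levels tends to $2^s$, i.e., the count grows like $2^{sr}$ regardless of $d$.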
5.2.2 Time complexity of a solution with singularity built on mesh boundary
To analyze the time complexity of the solver, we can again use the element partition tree approach. The element partition tree can be built using the following recursive procedure:
Create a root node of the element partition tree and attach all the elements to that node.
If there is just one element, finish the procedure, and the root node is the sole node of the returned tree.
Create a child node of the root node containing all the least refined elements.
Divide the other elements by $s$ planes perpendicular to the singularity and parallel to the boundaries of the mesh (let us denote those planes as dividing planes), crossing the midpoint of the singularity. This will create $2^s$ submeshes.
For each submesh generated above, run this procedure recursively and attach the resulting trees as subtrees of the second child node.
Finish the procedure and return the tree stemming from the root node.
An example of such an element partition tree is shown in Figure 11. We denote by $v_i$ the element partition tree nodes from refinement level $i$.
Even though the elimination tree nodes on each level have an analogous set of elements, the order of elimination will differ slightly for the nodes that contain elements on a boundary of the mesh other than the boundary containing the singularity. To simplify the analysis, we will modify the order of elimination slightly so that those nodes behave similarly to the others: the variables corresponding to the basis functions on the boundary of the mesh will be eliminated at the root node. This change increases the computation time slightly. However, it has no impact on the time complexity.
To calculate the computational cost of the solver using the ordering generated from that element partition tree, we will need to know two values for each element partition tree node:
The number of variables removed in that element partition tree node. For these nodes, this number is proportional to the number of elements of that node that touch the dividing planes used to further divide the submesh.
The total number of variables with support over the elements in this subtree. For these nodes, this number is proportional to the number of elements that lie on the dividing planes of the ancestral tree nodes.
It is not difficult to see that the cross-section of the mesh through the dividing planes looks like a mesh in a space of one less dimension, built over a singularity of dimensionality one less than the original one.
Thus, for $s = 1$, the following relationships are true:
All the other nodes have a constant number of elements.
Thanks to those observations, we can calculate the time complexity of running the solver for $s = 1$ using the following equation:
For $s \geq 2$, the analogous calculations go as follows:
And the time complexity of running the solver will follow this equation:
Considering that the number of variables is $N = \Theta(2^{sr})$, we can calculate the time complexity as a function of the number of variables for $s \geq 2$ to be $\mathcal{O}\!\left(N^{3(s-1)/s}\right)$.
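For reference, our reconstruction of the recursion behind these estimates, under the assumptions that each division step produces $2^s$ submeshes one level coarser and that a dividing-plane cross-section at level $r$ carries $\mathcal{O}(2^{(s-1)r})$ variables:

```latex
T(r) = 2^{s}\, T(r-1)
       + \mathcal{O}\!\left( \left( 2^{(s-1)r} \right)^{3} \right)
     = \mathcal{O}\!\left( 2^{3(s-1)r} \right), \qquad s \ge 2,
\qquad\text{and with } N = \Theta(2^{sr}):\qquad
T = \mathcal{O}\!\left( N^{3(s-1)/s} \right).
```

For $s = 1$, the cross-section is a point-singularity mesh with $\mathcal{O}(r)$ variables, and the same recursion yields $T(r) = 2\,T(r-1) + \mathcal{O}(r^3) = \mathcal{O}(2^r) = \mathcal{O}(N)$.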
We sum up the analysis in Table 2.
|Point (0-D) singularity|$\mathcal{O}(N)$|
|Edge (1-D) singularity|$\mathcal{O}(N)$|
|Face (2-D) singularity|$\mathcal{O}(N^{1.5})$|
|Hyperface (3-D) singularity|$\mathcal{O}(N^{2})$|
5.3 Quasi-optimal $s$-dimensional singularity meshes
To generalize the analysis from the previous section, we can observe that a broader class of meshes with singularities will follow the same time complexity – we will define those meshes as quasi-optimal $s$-dimensional singularity meshes. Examples of such meshes are presented in Figures 12-13.
A mesh will be a quasi-optimal $s$-dimensional singularity mesh if it has the following properties:
The basis functions of the mesh can be assigned to tree nodes of a full $2^s$-ary tree of height not larger than $c_1 r$, where $c_1$ is some constant.
If a pair of basis functions have overlapping supports, they will be assigned to the same tree node, or one of them will be assigned to an ancestor of the tree node of the other one.
Each tree node will have no more than $c\,2^{(s-1)h}$ (if $s \geq 2$) or no more than $c\,h$ (if $s = 1$) basis functions assigned, where $h$ is the height of the subtree rooted at the given node and $c$ is some arbitrary positive constant. At the same time, the number of overlaps between basis functions belonging to that subtree and basis functions of ancestor tree nodes will be limited by the same number.
If such a tree is created, we can use it to create an ordering that follows the post-order traversal of that tree. If so, when $s = 1$, the cost of removing all nodes belonging to a tree node of height $h$ is bounded by the following equation:
The total cost of the execution of the solver will in turn be no more than:
In the case of an $s$-dimensional singularity with $s \geq 2$, the cost of removing all nodes belonging to a tree node of height $h$ is bounded by the following equation:
And the total cost of running the whole solver will be no more than:
Together, both cases can be stated as: $\mathcal{O}\!\left(N^{\max(1,\,3(s-1)/s)}\right)$.
5.4 Quasi-optimality of $h$-adaptive meshes over singularities
To prove that every $h$-adaptive mesh adapted towards a singularity using Algorithm 1 is quasi-optimal, we can propose the following tree generation algorithm:
Find a dividing plane that crosses the smallest number of basis functions' supports out of all planes perpendicular to the singularity that divide the mesh in such a way that no more than half of all basis functions lie solely on either one of the sides of the plane. Create a tree node with all basis functions whose support the chosen plane crosses, and remove them from the mesh.
For each side of the plane, take the basis functions on that side and recursively run the algorithm. The resulting trees become subtrees of the node created in the previous step.
Return the tree rooted in the node created in the first step.
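For intuition, the first step of this construction can be sketched in one dimension, with basis-function supports projected to intervals along the singularity. This is a toy illustration; the interval representation and all names are ours.

```python
def separator_tree(supports):
    """Sketch of the quasi-optimality construction for s = 1: `supports`
    is a list of intervals (lo, hi). Pick the cut crossing the fewest
    supports such that at most half of the supports lie strictly on
    either side of it; recurse on the two sides."""
    if not supports:
        return None
    best = None
    cuts = sorted({x for lo, hi in supports for x in (lo, hi)})
    for cut in cuts:
        left = [iv for iv in supports if iv[1] < cut]
        right = [iv for iv in supports if iv[0] > cut]
        crossed = [iv for iv in supports if iv[0] <= cut <= iv[1]]
        if 2 * len(left) <= len(supports) and 2 * len(right) <= len(supports):
            if best is None or len(crossed) < len(best[2]):
                best = (left, right, crossed, cut)
    if best is None:                 # degenerate case: keep everything here
        return {"cut": None, "crossed": supports, "children": []}
    left, right, crossed, cut = best
    children = [t for t in (separator_tree(left), separator_tree(right)) if t]
    return {"cut": cut, "crossed": crossed, "children": children}
```

Because each recursive call handles at most half of the remaining supports, the resulting tree has logarithmic height, matching the first property of quasi-optimality.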
Let us analyze if a tree generated that way proves the quasi-optimality of the mesh:
Every child of any node will have at most half of the basis functions of its parent. Because of that, the total height of the tree cannot be larger than $\log_2 N$.
There is no overlap between supports of basis functions of either side of the dividing plane.
Elements of refinement level $i$ cannot be produced farther from the singularity than a constant times the side length of those elements. Thus, the number of elements of a given refinement level that cross a dividing plane is limited.
In the case of $s = 1$, all the dividing planes are parallel to each other. The dividing plane of a tree node $j$ levels deep from the root will have a distance to the nearest planes of tree nodes higher in the tree of at most $2^{-j} l$, where $l$ is the length of the singularity. This means that no elements larger than $2^{-j} l$ will be eliminated at this node. In other words, while each dividing plane crosses at most $c$ elements of each refinement level ($c$ is an arbitrary constant), the minimal refinement level of elements removed at a given tree node increases by one with every step down the tree. This means that the root node will have at most $c\,r$ elements, and a node $j$ levels deep will have at most $c\,(r - j)$ elements.
In the case of higher-dimensional singularities, the number of elements of a given refinement level crossed by a dividing plane is proportional to the cross-section (length, area, or volume) between the singularity and the plane, relative to the side length of the whole mesh. Because the plane with the smallest cross-section is chosen, this cross-section decreases on average by a constant factor with every level, so summing the resulting geometric series bounds the total number of elements crossed, which matches the quasi-optimal element count up to a constant.
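The halving argument in the first point above can be written out explicitly. In the derivation below, $N$ denotes the total number of basis functions (a symbol introduced here for illustration):

```latex
% Height bound for the generated tree. Each child of a node holds at most
% half of its parent's basis functions, so a node at depth $k$ holds at
% most $N/2^{k}$ of them; the tree ends when this count reaches one:
\frac{N}{2^{h}} \ge 1
\quad\Longrightarrow\quad
h \le \log_{2} N .
```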
This shows that all meshes with singularities that are adapted using Algorithm 1 are quasi-optimal.
6 Computational costs of space-time formulations versus time-marching schemes
In this section, we consider possible space-time mesh refinement patterns resulting from different space-time singularities. We look at the structure of the -dimensional space-time refined mesh. Namely, we focus on
space-time ”edge”, resulting from a point traveling through space and time,
space-time ”face”, resulting from an edge traveling through space and time,
space-time ”hyperface”, resulting from a face traveling through space and time.
We will consider these space-time refined meshes in three and in four dimensions. These space-time refined meshes correspond to the following sequences of refined meshes of one dimension lower, employed by the time-marching scheme:
sequence of spatial meshes refined towards ”points” located on the space-time edge at particular time moments,
sequence of spatial meshes refined towards ”edges” located on the space-time face at particular time moments,
sequence of spatial meshes refined towards ”faces” located on the space-time hyperface at particular time moments.
This correspondence is illustrated in Figure 14 for the three-dimensional space-time mesh with an edge singularity and the resulting sequence of two-dimensional meshes refined towards the corresponding points.
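The correspondence between a space-time edge singularity and a sequence of point-refined spatial meshes can be illustrated with a minimal sketch. We assume 1D space on the unit interval and a hypothetical trajectory `path(t)` of the traveling point; both the names and the bisection strategy are illustrative, not the paper's algorithm:

```python
# A space-time "edge" is a point p(t) moving in time. At each time step,
# the corresponding spatial mesh is refined towards the current location
# of the point, here by repeated bisection of the containing element.

def refine_towards_point(x, levels, lo=0.0, hi=1.0):
    """Recursively bisect [lo, hi] `levels` times, always refining the
    half that contains the point x; returns a list of element intervals."""
    if levels == 0 or not (lo <= x <= hi):
        return [(lo, hi)]
    mid = 0.5 * (lo + hi)
    if x <= mid:
        return refine_towards_point(x, levels - 1, lo, mid) + [(mid, hi)]
    return [(lo, mid)] + refine_towards_point(x, levels - 1, mid, hi)

def spatial_mesh_sequence(path, n_steps, levels):
    """One refined spatial mesh per time step, tracking the moving point."""
    return [refine_towards_point(path(k / n_steps), levels)
            for k in range(n_steps + 1)]
```

For a point traveling with the trajectory `path(t) = t`, each mesh in the sequence concentrates its smallest elements around the point's location at that time step, mirroring the space-time refinement towards the edge.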
We assume that the time step size in the time-marching scheme is equal to the diameter along the time axis of the smallest elements in the space-time mesh.
6.1 Computational costs of direct solvers for space-time formulations and time-marching schemes
We compare the computational cost of the direct solver executed over the space-time domain to the computational cost of the direct solver executed multiple times, within the time-marching scheme, over meshes of one dimension lower. In Table 3 we list space-time adaptive meshes. We include the 4D space-time uniform mesh, the 4D space-time mesh refined towards the hyperface singularity, the 4D space-time mesh refined towards the face singularity, and the 4D space-time mesh refined towards the edge singularity, as well as the 3D space-time uniform mesh, the 3D space-time mesh refined towards the face singularity, and the 3D space-time mesh refined towards the edge singularity. We compute the dimensions of all these grids and express them in terms of the number of refinement levels. Additionally, we provide estimates for the direct solver execution time. An interesting observation is that the computational cost of the direct solver executed for the space-time mesh refined towards an edge is linear with respect to the number of unknowns.
Next, in Table 4 we evaluate the computational cost of the time-marching scheme. For each space-time mesh refined towards a singularity, we construct a sequence of meshes. Each grid in the sequence has one dimension fewer and is refined towards the corresponding singularity. Such a sequence of grids provides a solution of accuracy similar to the space-time grid. We estimate the dimensions of the spatial meshes in the sequence and the computational cost of the direct solver executed once for each grid in the sequence. An interesting observation is that the computational cost of the time-marching scheme corresponding to the space-time mesh refined towards an edge is higher than the space-time cost. However, for all other space-time grids, the time-marching scheme is cheaper.
|Space-time mesh||Space-time mesh size||Space-time mesh direct solver cost|
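As a rough illustration of this comparison (not a reproduction of the estimates in Tables 3 and 4), one can contrast the two approaches for a uniform mesh, assuming the classical nested-dissection direct-solver complexities of order $N^{3/2}$ in 2D and $N^{2}$ in 3D:

```python
# Back-of-the-envelope cost comparison for a uniform mesh with 2**k
# elements per axis, so the time-marching scheme performs 2**k steps.
# The complexity exponents are the classical nested-dissection estimates
# (O(N^{3/2}) in 2D, O(N^2) in 3D), assumed here for illustration.

def spacetime_cost(k):
    n3 = (2 ** k) ** 3            # unknowns in the 3D space-time mesh
    return n3 ** 2                # one 3D direct solve: O(N^2)

def time_marching_cost(k):
    steps = 2 ** k                # one 2D solve per time step
    n2 = (2 ** k) ** 2            # unknowns per 2D spatial mesh
    return steps * n2 ** 1.5      # 2D direct solve: O(N^{3/2})

for k in (2, 4, 6):
    print(k, spacetime_cost(k), time_marching_cost(k))
```

Under these assumptions, the time-marching scheme is cheaper on the uniform mesh, consistent with the observation above that the space-time formulation only wins for the edge-refined mesh.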