Recently we constructed finite elements for divdiv conforming symmetric tensors in two dimensions. In this paper, we shall continue this study and construct the corresponding finite element spaces in three dimensions.
It turns out that the construction in three dimensions is much harder than that in two dimensions. One obvious reason is the complicated interplay of differential operators and matrix operators in three dimensions. The essential difficulty arises from the Hilbert complex
In the divdiv complex in three dimensions, the Sobolev space before
consists of tensor functions rather than the vector functions appearing in two dimensions. The subspace of the polynomial space with vanishing boundary degrees of freedom is not clear, as no corresponding finite element space is known. In two dimensions, the finite element spaces for the preceding space are relatively mature. By analogy, a nonconforming finite element in two dimensions was advanced for discretizing the mixed formulation of the biharmonic equation in [8, 9, 11] in the 1960s, while the corresponding nonconforming finite element in three dimensions was not constructed until 2011, and it was used to solve a mixed formulation of linear elasticity rather than the biharmonic equation.
To attack the main difficulty, we present two polynomial complexes and reveal several decompositions of polynomial vector and tensor spaces from these complexes. With further help from the Green’s identity and the characterization of the trace, we are able to construct two types of finite element spaces on a tetrahedron. Here we present the BDM-type space. Let be a tetrahedron and let be a positive integer. The shape function space is simply . The set of edges of is denoted by , the faces by , and the vertices by . For each edge, we choose two normal vectors and . The degrees of freedom are given by
where is an arbitrary but fixed face. The degrees of freedom (6) will be regarded as interior to the tetrahedron , that is, the degrees of freedom (6) are double-valued on each face when defining the global finite element space. An RT-type space can be obtained by further reducing the degree index of the degrees of freedom, with the exception of one moment.
The boundary degrees of freedom (1)-(4) are motivated by the Green’s formula and the characterization of the trace. The interior moments are used to determine one part of the function; together with the boundary degrees of freedom, the volume moments then determine the whole polynomial.
We then use the vanishing trace and the symmetry of the tensor. Similarly to the RT and BDM elements, the vanishing normal-normal trace (34) implies that the normal-normal part of is zero. To determine the normal-tangential terms, further degrees of freedom are needed. Due to the symmetry of , it is sufficient to provide additional degrees of freedom on one face, which are inspired by the RT and BDM elements in two dimensions.
It is arduous to figure out the explicit basis functions dual to the degrees of freedom (1)-(6). Hybridization is thus provided for ease of implementation. The basis functions of the standard Lagrange element can be used to implement the hybridized mixed finite element methods. The constructed divdiv conforming elements are exploited to discretize the mixed formulation of the biharmonic equation. Optimal order and superconvergence error analyses are provided.
The rest of this paper is organized as follows. In Section 2, we present some operations for vectors and tensors. Two polynomial complexes related to the divdiv complex, and direct sum decompositions of polynomial spaces, are shown in Section 3. We derive the Green’s identity and characterize the trace on polyhedrons in Section 4, and then construct the conforming finite elements in three dimensions. Mixed finite element methods for the biharmonic equation are developed in Section 5.
2. Matrix and vector operations
In this section, we shall survey operations for vectors and tensors. Some of them are standard but some are not unified in the literature. In particular, we shall introduce operators appended to the right side of a matrix for operations applied to the columns of the matrix. We will mix the usage of row and column vectors; which is meant will be clear from the context.
Given a plane with normal vector , for a vector , we have the orthogonal decomposition
The vector is also on the plane and is a counter-clockwise rotation of with respect to . Therefore
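This orthogonal decomposition and the rotation property can be checked numerically. The sketch below uses a made-up unit normal and sample vector; it verifies that the tangential part is recovered by crossing the rotated vector back with the normal, and that the rotated vector is tangential and of the same length.

```python
import numpy as np

# Unit normal of the plane and an arbitrary sample vector (made-up data).
n = np.array([1.0, 2.0, 2.0]) / 3.0   # |n| = 1
v = np.array([0.3, -1.2, 0.7])

# Orthogonal decomposition: v = (v . n) n + (tangential part),
# where the tangential projection satisfies Pi v = (n x v) x n.
normal_part = np.dot(v, n) * n
tangential_part = np.cross(np.cross(n, v), n)

assert np.allclose(v, normal_part + tangential_part)
# n x v lies in the plane and is a 90-degree rotation of the tangential part:
assert abs(np.dot(np.cross(n, v), n)) < 1e-12                 # tangential
assert abs(np.dot(np.cross(n, v), tangential_part)) < 1e-12    # orthogonal to Pi v
assert np.isclose(np.linalg.norm(np.cross(n, v)),
                  np.linalg.norm(tangential_part))             # same length
```

The identity (n x v) x n = v - (v . n) n follows from the BAC-CAB rule for a unit normal n.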
We treat the Hamilton operator as a column vector and define
For a scalar function ,
are the surface gradient and the surface curl of , respectively. For a vector function , is the surface divergence, and by definition
By the cyclic invariance of the mixed product, the surface rot operator is
In particular, for , is the plane. Then is the two-dimensional operator, which is a rotation of the two-dimensional divergence operator .
The matrix-vector product can be interpreted as the inner product of with the row vectors of . We thus define the dot operator
Namely the vector inner product is applied row-wise to the matrix. Similarly we can define the cross product row-wise from the left . Rigorously speaking, when a column vector is treated as a row vector, should be used. In most places, however, we will sacrifice this precision for ease of notation.
When the vector is on the right of the matrix, the operation is defined column-wise. That is
By moving the column operation to the right, it is consistent with the transpose operator. For the transpose of product of two objects, we take transpose of each one, switch their order, and add a negative sign if it is the cross product.
For two vectors and matrix , we define the following products
By moving the column operation to the right, these operations are associative. That is, the ordering of performing the products does not matter.
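The row-wise/column-wise conventions and the associativity claim can be verified numerically. The sketch below is an illustration only: the helper names and the random sample data are made up, with the left dot acting on rows, the right cross acting on columns, as described above.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
b, c = rng.standard_normal(3), rng.standard_normal(3)

def dot_left(b, A):
    # b . A : inner product of b with each row of A, i.e. the vector A b.
    return A @ b

def cross_left(b, A):
    # b x A : cross product of b with each row of A (stored row-wise).
    return np.stack([np.cross(b, A[i, :]) for i in range(3)])

def cross_right(A, c):
    # A x c : cross product of each column of A with c (stored column-wise).
    return np.stack([np.cross(A[:, j], c) for j in range(3)], axis=1)

# Associativity: b . (A x c) = (b . A) x c.
assert np.allclose(dot_left(b, cross_right(A, c)),
                   np.cross(dot_left(b, A), c))

# Transpose rule: transpose each factor, switch the order,
# and add a negative sign for the cross product: (b x A)^T = -(A^T x b).
assert np.allclose(cross_left(b, A).T, -cross_right(A.T, b))
```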
Applying these matrix-vector operations to the Hamilton operator , we get the row-wise differentiation
and column-wise differentiation
We can then write the divdiv operator applied to a matrix as
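As a symbolic illustration of this composition (a sketch only: the sample symmetric tensor field below is made up), div div can be computed by first taking the divergence of each row and then the divergence of the resulting vector:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)

# An arbitrary symmetric matrix field used only as sample data.
tau = sp.Matrix([[x**2, x*y, x*z],
                 [x*y, y**2, y*z],
                 [x*z, y*z, z**2]])

def div_rows(A):
    # Row-wise divergence: (div A)_i = sum_j d A_ij / d x_j.
    return sp.Matrix([sum(sp.diff(A[i, j], X[j]) for j in range(3))
                      for i in range(3)])

def divdiv(A):
    # div div A = div (div A), a scalar.
    v = div_rows(A)
    return sum(sp.diff(v[j], X[j]) for j in range(3))

print(sp.simplify(divdiv(tau)))   # prints 12 for this sample field
```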
Denote the space of all matrices by , all symmetric matrices by
, all skew-symmetric matrices by , and all trace-free matrices by . For any matrix , we can decompose it into its symmetric and skew-symmetric parts as
We can also decompose it into a direct sum of a trace free matrix and a diagonal matrix as
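Both splittings are elementary to check numerically; the following sketch (with a made-up sample matrix) verifies the symmetric/skew-symmetric decomposition and the trace-free part:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])   # arbitrary sample matrix

sym = (A + A.T) / 2                       # symmetric part
skw = (A - A.T) / 2                       # skew-symmetric part
dev = A - np.trace(A) / 3 * np.eye(3)     # trace-free (deviatoric) part

assert np.allclose(A, sym + skw)          # first decomposition
assert np.allclose(skw, -skw.T)           # skew-symmetry
assert np.isclose(np.trace(dev), 0.0)     # dev is trace-free
assert np.allclose(A, dev + np.trace(A) / 3 * np.eye(3))   # second decomposition
```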
For a vector function , and are standard differential operations. The gradient is a matrix. Its symmetric part is defined as
In the last identity, the notation is used to emphasize the symmetric form. Similarly we can define the operator for a matrix
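A short symbolic sketch of the symmetric gradient (sample vector field made up; the row-gradient convention, gradient entries being the partial derivatives of each component along the rows, is assumed):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
v = sp.Matrix([x*y, y*z, z*x])        # arbitrary sample vector field

grad_v = v.jacobian([x, y, z])        # row-wise gradient of v
def_v = (grad_v + grad_v.T) / 2       # symmetric part of the gradient

assert def_v == def_v.T               # the result is symmetric by construction
```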
We define an isomorphism between and the space of skew-symmetric matrices as follows: for a vector
For two vectors , one can easily verify that
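This isomorphism is the standard one sending a vector to the skew-symmetric matrix whose action is the cross product; a quick numerical check with made-up vectors (the helper name mskw is taken from the usual notation and may differ from the paper's symbol):

```python
import numpy as np

def mskw(a):
    # Skew-symmetric matrix associated with a, so that mskw(a) @ b = a x b.
    return np.array([[ 0.0,  -a[2],  a[1]],
                     [ a[2],  0.0,  -a[0]],
                     [-a[1],  a[0],  0.0]])

a = np.array([1.0, -2.0, 0.5])
b = np.array([0.3, 0.4, -1.0])

assert np.allclose(mskw(a) @ b, np.cross(a, b))   # action is the cross product
assert np.allclose(mskw(a), -mskw(a).T)           # mskw(a) is skew-symmetric
```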
We will use the following identities which can be verified by direct calculation.
More identities involving the matrix operations and vector differentiation are summarized in .
3. Divdiv complex and polynomial complexes
In this section, we shall consider the divdiv complex and establish two related polynomial complexes. We assume is a bounded and strong Lipschitz domain which is topologically trivial in the sense that it is homeomorphic to a ball. Without loss of generality, we also assume .
3.1. The complex
Assume is a bounded and topologically trivial strong Lipschitz domain in . Then (11) is a complex, and the sequence is exact.
Any skew-symmetric can be written as . Assuming , it follows from (9) that
Since for any smooth skew-symmetric tensor field , we obtain
For any , it follows from (7) that
Hence . For any , it follows from (10) that
We get . As a result, (11) is a complex.
For any , there exists such that
Hence there exists such that
By the symmetry of , we have Noting that
it follows . Thus
3.2. Polynomial complexes
Given a bounded domain and a non-negative integer , let stand for the set of all polynomials in with total degree no more than , and let denote its tensor or vector version.
The polynomial complex
It follows from (12) that
For any , we have . Hence
which means and . We conclude from the fact that
is the identity matrix multiplied by a constant.
For any , there exists satisfying , i.e. . Then
from which we get , and thus . As a result, . We also have
Define operator as
The following complex is a generalization of the Koszul complex for vector functions.
The polynomial complex
thus (16) is a complex.
For any satisfying , since , there exist and such that . Noting that
which indicates and thus . Hence there exists such that . Taking , we get
Namely is a projector. Consequently, the operator is surjective. We also have
Unlike the Koszul complex for vector functions, we do not have the identity property when applied to homogeneous polynomials. Fortunately, decompositions of polynomial spaces using Koszul and differential operators still hold.
Let be the space of homogeneous polynomials of degree . Then by Euler’s formula
Due to (20), for any satisfying , we have , and
for any positive integer .
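Euler's formula for homogeneous polynomials — contracting the position vector with the gradient returns the polynomial scaled by its degree — can be checked symbolically; the cubic below is made-up sample data:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
q = x**2*y + 3*x*y*z - z**3      # homogeneous of degree k = 3 (sample data)
k = 3

# Euler's formula: x . grad q = k q for q homogeneous of degree k.
euler = x*sp.diff(q, x) + y*sp.diff(q, y) + z*sp.diff(q, z)
assert sp.expand(euler - k*q) == 0
```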
We have the decomposition
Since the dimension of the space on the left-hand side of (23) equals the sum of the dimensions of the two spaces on the right-hand side, we only need to prove that the sum in (23) is direct. For any satisfying and , we have , that is
By (20), there exist such that . Then
which indicates . Hence it follows from the last identity that , which combined with (21) gives for some constant . Thus , and . This ends the proof. ∎
Finally we present a decomposition of space . Let
Their dimensions are
is a bijection.
Since and , we get
Hence property (i) follows from (20). Property (ii) is obtained by writing . Now we prove property (iii). First, the dimension of the space on the left-hand side equals the sum of the dimensions of the two spaces on the right-hand side of (iii). Assume satisfies , which means
To simplify the degrees of freedom, we need another decomposition of the symmetric tensor polynomial space, which can be derived from the dual complex of the polynomial divdiv complex.
4. Divdiv Conforming finite element spaces
We first present a Green’s identity, based on which we can characterize the trace on polyhedrons and give a sufficient continuity condition for a piecewise smooth function to be in . Then we construct the finite element space and prove the unisolvence.
Let be a regular family of polyhedral meshes of . Our finite element spaces are constructed for tetrahedrons, but some results, e.g., traces and Green’s formulae, hold for general polyhedrons. For each element , denote by the unit outward normal vector to , which will be abbreviated as for simplicity. Let , , , , and be the union of all faces, interior faces, all edges, interior edges, vertices and interior vertices of the partition , respectively. For any , fix a unit normal vector and two unit tangent vectors and , which will be abbreviated as and without causing any confusion. For any , fix a unit tangent vector and two unit normal vectors and , which will be abbreviated as and without causing any confusion. For a polyhedron , denote by , and the set of all faces, edges and vertices of , respectively. For any , let be the set of all edges of . And for each , denote by the unit vector being parallel to and outward normal to . Furthermore, set
4.2. Green’s identity
We start from the Green’s identity for smooth functions on polyhedrons.
Lemma 4.1 (Green’s identity).
Let be a polyhedron, and let and . Then we have
We start from the standard integration by parts
We then decompose and apply the Stokes theorem to get
Now we rewrite the term
Thus the Green’s identity (26) follows by merging all terms. ∎
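For orientation, the first two element-wise integration by parts steps behind the proof can be written out. This is only the standard part of the identity, before the face terms are further decomposed into their normal and tangential components on each face:

```latex
\begin{aligned}
\int_K (\operatorname{div}\operatorname{div}\boldsymbol{\tau})\, v \,\mathrm{d}x
 &= -\int_K (\operatorname{div}\boldsymbol{\tau})\cdot\nabla v \,\mathrm{d}x
   + \int_{\partial K} \boldsymbol{n}\cdot(\operatorname{div}\boldsymbol{\tau})\, v \,\mathrm{d}s \\
 &= \int_K \boldsymbol{\tau} : \nabla^2 v \,\mathrm{d}x
   + \int_{\partial K} \boldsymbol{n}\cdot(\operatorname{div}\boldsymbol{\tau})\, v \,\mathrm{d}s
   - \int_{\partial K} (\boldsymbol{\tau}\boldsymbol{n})\cdot\nabla v \,\mathrm{d}s.
\end{aligned}
```

The decomposition of the last face term into normal and tangential parts then produces the edge contributions discussed below.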
When the domain is smooth in the sense that is an empty set, the term disappears. When is continuous on edge , this term will define a jump of the tensor.
4.3. Traces and continuity across the boundary
Let stand for the normal-normal trace, and for the trace involving a combination of derivatives.
For any , it holds
Conversely, for any and , there exists some such that