
Symmetry and Invariant Bases in Finite Element Exterior Calculus

We study symmetries of bases and spanning sets in finite element exterior calculus using representation theory. The group of affine symmetries of a simplex is isomorphic to a permutation group and is represented on simplicial finite element spaces by the pullback action. We ask which vector-valued finite element spaces have bases that are invariant under permutations of the vertex indices. We determine a natural notion of invariance and sufficient conditions on the dimension and polynomial degree for the existence of invariant bases. We conjecture that these conditions are also necessary. We utilize Djoković and Malzan's classification of monomial irreducible representations of the symmetric group and use symmetries of the geometric decomposition and canonical isomorphisms of the finite element spaces. Invariant bases are constructed in dimensions two and three for different spaces of finite element differential forms.


1. Introduction

The Lagrange finite element space over a simplex is a canonical example of a finite element space that can easily be defined for arbitrary polynomial degree. Several common bases for the higher-order Lagrange space appear in the literature, including the standard nodal basis, the barycentric bases, and the Bernstein bases [2, 1, 29]. A convenient feature of these canonical bases is their invariance under renumbering of the vertices: the basis does not change if we renumber the vertices of the simplex, or equivalently, if we transport the basis functions along an affine automorphism of the simplex.
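To illustrate such an invariant basis in standard notation (a sketch, not taken verbatim from this article): the Bernstein basis of the Lagrange space of degree r over an n-simplex with barycentric coordinates \lambda_0, \dots, \lambda_n consists of the functions

    B_\alpha = \frac{r!}{\alpha_0! \cdots \alpha_n!} \, \lambda_0^{\alpha_0} \cdots \lambda_n^{\alpha_n},
    \qquad \alpha \in \mathbb{N}_0^{n+1}, \quad \alpha_0 + \dots + \alpha_n = r.

Renumbering the vertices merely permutes the barycentric coordinates and hence permutes the functions B_\alpha among themselves, so the basis is invariant as a set.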

While this convenient feature might easily be taken for granted, it fails to hold for vector-valued finite element spaces, such as the Raviart-Thomas spaces, Brezzi-Douglas-Marini spaces, and the Nédélec spaces of first and second kind [41, 12, 37]. Indeed, even finding explicit bases for these vector-valued finite element spaces is a non-trivial topic that has only been addressed after the turn of the century [6, 7, 24, 20, 9, 30]. Whether an invariant basis exists seems to be an intricate question: while no such basis exists for the space of constant vector fields over a triangle, one easily finds such a basis for the linear vector fields over a triangle. Generally speaking, the existence of such a basis for each family of vector-valued finite element spaces seems to depend on the polynomial degree and the dimension.

The purpose of this article is to address this algebraic aspect of finite element differential forms: we present a natural notion of invariance and show the existence of invariant bases for certain finite element spaces. In particular, we give sufficient conditions on the polynomial degree and the dimension for each family of finite element spaces. We conjecture that these conditions are also necessary. This work continues prior study of bases and spanning sets in finite element exterior calculus [30].

In order to achieve the aim of this article, we adopt the framework of finite element exterior calculus (FEEC, [6]), which translates the vector-valued finite element spaces into the calculus of differential forms. A peculiar feature of FEEC is that it has generalized results in finite element theory previously known only for special cases and has put these into a common framework: this includes convergence results [8, 5, 4, 3], approximation theory [16, 26, 32, 31], and a posteriori error estimation [18].

We use the representation theory of the permutation group in order to address the question of invariant bases in finite element exterior calculus. Affine automorphisms of a simplex correspond to permutations reordering the vertices of that simplex; these automorphisms constitute a finite group isomorphic to the symmetric group on the simplex vertices. They transform finite element differential forms via the pullback operation. Representation theory thus emerges naturally in our study, because these pullbacks constitute a linear representation of the symmetry group on the finite element spaces.
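To make the representation property concrete, recall that the pullback reverses composition. Under one common convention (assumed here; the article may set this up differently but equivalently), the affine automorphism S_\pi associated with a permutation \pi maps the vertex v_i to v_{\pi(i)}, and the action on differential forms reads

    (S_\pi \circ S_\rho)^* = S_\rho^* \circ S_\pi^*,
    \qquad
    \pi \cdot \omega := \bigl( S_{\pi^{-1}} \bigr)^* \omega,

so that (\pi\rho) \cdot \omega = \pi \cdot (\rho \cdot \omega); that is, \pi \mapsto (S_{\pi^{-1}})^* is a group homomorphism into the general linear group of the finite element space.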

It turns out that we need to study finite element spaces with complex coefficients in order to develop a satisfying theory of invariant bases. Our notion of invariance in this article is invariance under the action of the symmetric group up to multiplication by complex units. In the language of representation theory, we are interested in the circumstances under which the action of the symmetric group on a finite element space can be represented by a monomial matrix group with real or complex coefficients [38]. The transition to complex numbers reveals interesting structures: for example, the constant complex vector fields over a triangle have a basis that is invariant up to multiplication by complex roots of unity. The calculus of differential forms is essential for our theoretical framework.

We construct invariant bases for finite element spaces of higher polynomial order by a reduction to the case of lower polynomial degree. Towards that aim, we analyze the interaction of simplicial symmetries with two concepts in the theory of finite element exterior calculus. On the one hand, we recall the geometric decomposition of the finite element spaces [7].

This decomposition involves extension operators that we show to preserve invariant bases, so a geometrically decomposed invariant basis for the full finite element spaces can be constructed from invariant bases for the finite element spaces with boundary conditions. On the other hand, we recall the canonical isomorphisms [6] over an -dimensional simplex.

These isomorphisms are natural for the algebraic theory of finite element exterior calculus in the sense that they preserve the canonical spanning sets [30]. We show that they commute with the simplicial symmetries and thus preserve invariant bases. Hence, in order to construct invariant bases for the finite element spaces with boundary conditions, it suffices to find invariant bases for finite element spaces of lower polynomial degree and over lower-dimensional simplices. These two observations combined enable a recursive construction of invariant bases, provided that invariant bases are available in the base cases.

The aforementioned base cases refer to the finite element spaces of differential forms of polynomial order zero, that is, constant fields. The theory of invariant bases for the constant differential forms over a simplex derives from the classification of monomial irreducible representations of the symmetric group due to Djoković and Malzan [19]. Specifically, invariant bases for the constant differential forms exist only in the case of scalar and volume forms, the case of differential forms up to dimension , and the case of constant -forms over -simplices. Starting from these base cases, we determine conditions on the dimension and the polynomial degree that ensure that our construction produces invariant bases.

We outline the invariant bases for constant fields in vector calculus notation and using barycentric coordinates of the simplex. Over a tetrahedron, the three vector fields

(1)

are a basis for the constant vector fields, and that basis is invariant under renumbering of vertices up to signs. Similarly, the three constant cross products

(2)

are a basis for the constant pseudovector fields over a tetrahedron, and that basis is again invariant under renumbering up to signs. Over a triangle, the transition to complex coefficients reveals the following observation, which may come as a surprise to some readers: the two constant vector fields

(3)

are a basis for the complex constant vector fields over a triangle that is invariant under renumbering of vertices up to complex units, more specifically, up to cubic roots of unity. Lastly, we mention that quartic roots of unity appear in the construction of an invariant basis for the bivector fields over a four-dimensional hypertetrahedron.
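For concreteness, we sketch, in barycentric notation, instances of such invariant bases; these illustrate the phenomenon but need not coincide with the particular bases referred to in (1)-(3). Over a tetrahedron with barycentric coordinates \lambda_0, \dots, \lambda_3, the three constant vector fields

    \nabla\lambda_0 + \nabla\lambda_1,
    \qquad
    \nabla\lambda_0 + \nabla\lambda_2,
    \qquad
    \nabla\lambda_0 + \nabla\lambda_3

are permuted up to sign by any renumbering of the vertices, since \nabla\lambda_0 + \nabla\lambda_1 = -(\nabla\lambda_2 + \nabla\lambda_3) and analogously for the other pairs. Over a triangle with barycentric coordinates \lambda_0, \lambda_1, \lambda_2 and \omega = e^{2\pi i/3}, the two complex constant vector fields

    \nabla\lambda_0 + \omega \nabla\lambda_1 + \omega^2 \nabla\lambda_2,
    \qquad
    \nabla\lambda_0 + \omega^2 \nabla\lambda_1 + \omega \nabla\lambda_2

are eigenvectors of the cyclic vertex renumberings and are swapped, up to cubic roots of unity, by the transpositions.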

This allows the construction of bases for finite element spaces that are invariant up to complex roots of unity. Whether these complex roots of unity are real, that is, whether the basis is invariant up to sign changes, depends on the simplex dimension and the polynomial degree. We conjecture that our construction is exhaustive: no finite element spaces in finite element exterior calculus have bases invariant up to real or complex units except for the ones discussed in this article.

As a convenience for the reader, we summarize the application of our theory to common (real-valued) finite element spaces below. We use the language of vector analysis and the notation as in the article. The following finite element spaces have bases that are invariant up to sign changes under reordering of the vertices:

  • The Brezzi-Douglas-Marini space of degree over a triangle ,

    if is not divisible by .

  • The Raviart-Thomas space of degree over a triangle ,

    if is not divisible by .

  • The divergence-conforming Brezzi-Douglas-Marini space of degree over a tetrahedron ,

    if .

  • The divergence-conforming Raviart-Thomas space of degree over a tetrahedron ,

    if .

  • The curl-conforming Nédélec space of the first kind degree over a tetrahedron ,

    if .

  • The curl-conforming Nédélec space of the second kind of degree over a tetrahedron ,

    if .

However, the complex-valued versions of these finite element spaces have bases invariant up to multiplication by cubic roots of unity, irrespective of the polynomial degree. We conjecture that for the remaining polynomial degrees not covered above, no basis invariant up to sign changes exists.

This article utilizes representation theory for a theoretical contribution to numerical analysis, and some aspects can be of broader interest in representation theory. Our action of the symmetric group on finite element spaces is fully specified by the action of the symmetric group on the barycentric coordinates, which is a rewriting of the standard representation of the symmetric group. Our recursive construction showcases new aspects of the representation theory of the symmetric group. The notion of monomial representation is central to our contribution. However, monomial representations do not seem to be a standard topic in introductory textbooks on representation theory, and only a few articles approach constructive aspects of monomial representations (see [39, 40]). We also remark that groups of monomial matrices over finite fields have found use in cryptography and coding theory [23]. The author suggests a comprehensive study of the category of monomial representations of finite groups. Such a study may verify or refute our conjecture that there are no monomial representations on finite element spaces not covered by our recursive construction.

The representation theory of groups has had various applications throughout numerical and computational mathematics, such as in geometric integration theory [14, 45, 36] and artificial neural networks [11]. Our work in finite element methods adds a new application of representation theory to that list. It remains for future research to study the symmetry of finite element bases over non-simplicial cells [35, 21, 33, 13].

Bases for finite element spaces have been a subject of research for a long time. The choice of bases influences the condition numbers and sparsity properties of the global finite element matrices [2, 42, 10, 28]. Bases for vector-valued finite element spaces, such as Brezzi-Douglas-Marini spaces, Raviart-Thomas spaces, or Nédélec spaces, have been stated explicitly only relatively recently [6, 7, 24, 20, 9, 30]. The invariance of bases under renumbering of the vertices of a simplex is not an issue for scalar-valued finite element spaces but becomes a highly nontrivial topic for vector-valued finite element spaces. To the author's best knowledge, the questions addressed in this article have been in informal circulation for quite some time, but no research results have been published before. We remark that the seminal article of Arnold, Falk, and Winther [6] utilized techniques of representation theory to classify the affinely invariant finite-dimensional vector spaces of polynomial differential forms.


The remainder of this work is structured as follows. Important preliminaries on combinatorics, exterior calculus, and polynomial differential forms are summarized in Section 2. We review elements of representation theory in Section 3. In Section 4 we establish first results on the coordinate transformation of polynomial differential forms. In Section 5 we study invariant bases and spanning sets for lowest-order finite element spaces. We discuss the symmetry properties of the canonical isomorphisms in Section 6. We discuss extension operators, geometric decompositions, and their symmetry properties in Section 7. Putting these results together, the recursive construction of invariant bases and applications are discussed in Section 8.

2. Notation and Definitions

We introduce and review notions from combinatorics, simplicial geometry, and differential forms over simplices. Much of this section, though not everything, is a summary of results in [30]. We refer to Arnold, Falk, and Winther [6, 7] and to Hiptmair [25] for further background on polynomial differential forms.

2.1. Combinatorics

We let be the Kronecker delta for any . For we write and let if and if . The set of all permutations of is written and we abbreviate . We let be the sign of any permutation .

We write for the set of multiindices over . For any ,

We let be the set of those for which , and we abbreviate . The sum of is defined in the obvious manner. We let be the function that equals at and is zero otherwise. When and , then is notation for . Similarly, when , then is notation for .

For , we let be the set of functions from to . For any we let be the sign of the permutation that orders the sequence in ascending order.

We let be the set of strictly ascending mappings from to . We also call those mappings alternator indices. By convention, whenever . For any we let

and we write for the minimal element of provided that is not empty, and otherwise. Furthermore, if , then we write for the unique element of with image . In that case, we also write for the sign of the permutation that orders the sequence in ascending order, and we write for the sign of the permutation that orders the sequence in ascending order. Note also that . Similarly, if , then we write for the unique element of with image .

We abbreviate and . If is understood and , then for any we define by the condition , and for any we define by the condition . In particular, and . We emphasize that and depend on , which we suppress in the notation.

When and with , then denotes the sign of the permutation ordering the sequence in ascending order.

2.2. Simplices

Let . An -dimensional simplex is the convex closure of pairwise distinct affinely independent points in Euclidean space, called the vertices of . We call a subsimplex of if the set of vertices of is a subset of the set of vertices of . We write for the set inclusion of into .

As an additional structure, we assume that the vertices of all simplices are ordered. For simplicity, we assume that all simplices have vertices ordered compatibly with the order of vertices on their subsimplices. Suppose that is an -dimensional subsimplex of with ordered vertices . With a mild abuse of notation, we let be defined by .

2.3. Barycentric Coordinates and Differential Forms

Let be a simplex of dimension . Following the notation of [6], we denote by the space of differential -forms over with smooth real coefficients whose derivatives of all orders are bounded, where . Recall that these mappings take values in the -th exterior power of the dual of the tangent space of the simplex . In the case , the space is just the space of smooth functions over with uniformly bounded derivatives. Furthermore, unless .

We write and let denote the complexification of . All the algebraic operations defined in the following apply to completely analogously.

We recall the exterior product for and and that it satisfies . We let denote the exterior derivative. It satisfies for and . We also recall that the integral of a differential -form over is well-defined.

Let be an -dimensional subsimplex of . The inclusion naturally induces a mapping by taking the pullback, which we call the trace from onto . It is well-known that the exterior derivative commutes with taking traces, that is, for all .

The barycentric coordinates \lambda_0, \lambda_1, \dots, \lambda_n are the unique affine functions over the simplex that satisfy the Lagrange property

(4) \lambda_i(v_j) = \delta_{ij}, \qquad 0 \leq i, j \leq n,

where v_0, \dots, v_n denote the vertices of the simplex.

The barycentric coordinate functions are linearly independent and constitute a partition of unity:

(5) \lambda_0 + \lambda_1 + \dots + \lambda_n = 1.

We write d\lambda_0, d\lambda_1, \dots, d\lambda_n for the exterior derivatives of the barycentric coordinates. These exterior derivatives are differential 1-forms and constitute a partition of zero:

(6) d\lambda_0 + d\lambda_1 + \dots + d\lambda_n = 0.

It can be shown that this is, up to scaling, the only linear dependence between the exterior derivatives of the barycentric coordinate functions.
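Concretely, over an n-dimensional simplex the forms d\lambda_1, \dots, d\lambda_n are linearly independent, while

    d\lambda_0 = - \left( d\lambda_1 + \dots + d\lambda_n \right),

so the partition of zero (6) generates, up to scaling, all linear dependencies among d\lambda_0, \dots, d\lambda_n.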

Several classes of differential forms over the simplex are expressed in terms of the barycentric coordinates and their exterior derivatives. When and , then the corresponding barycentric polynomial over is

(7)

When and , the corresponding barycentric alternator is

(8)

Here, we treat the special case by defining .

Whenever and , then the corresponding Whitney form is

(9)

In the special case that is the single member of , then we write for the associated Whitney form. In the sequel, we call the differential forms (7), (8), (9), and their sums and exterior products, barycentric differential forms over .
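For the reader's orientation, we record the standard expressions for these objects in notation common in the literature; the article's index conventions may differ in details. For a multiindex \alpha over \{0, \dots, n\}, an alternator index \sigma defined on \{1, \dots, k\}, and an alternator index \rho defined on \{0, \dots, k\}, the barycentric polynomial, barycentric alternator, and Whitney form are typically written as

    \lambda^\alpha = \prod_{i=0}^{n} \lambda_i^{\alpha(i)},
    \qquad
    d\lambda_\sigma = d\lambda_{\sigma(1)} \wedge \dots \wedge d\lambda_{\sigma(k)},

    \phi_\rho = \sum_{i=0}^{k} (-1)^i \, \lambda_{\rho(i)} \, d\lambda_{\rho(0)} \wedge \dots \wedge \widehat{d\lambda_{\rho(i)}} \wedge \dots \wedge d\lambda_{\rho(k)},

where the hat indicates omission of the respective factor.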

For notational convenience in some statements, we will also allow and to be arbitrary functions and then define

Lemma 2.1.

Let be injective. Then there exist a unique and a unique permutation such that . Moreover

Proof.

For every injective there exist a unique and such that . We have and , and is the permutation that orders the sequence into ascending order.

Consider any such that for some ; we let be the minimal number of transpositions necessary to bring the sequence into order and let be the minimal number of transpositions necessary to bring the sequence into ascending order. We now see that . It follows that

This is what we had to show. ∎

2.4. Finite Element Spaces over Simplices

Consider an -dimensional simplex , a polynomial degree , and a form degree . Let or . We introduce the sets of polynomial differential forms

(10a)
(10b)
(10c)
(10d)

and their linear hulls

(11)

The sets (10) are called the canonical spanning sets. Their linear hulls give rise to the standard finite element spaces (11) of finite element exterior calculus.
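As a point of orientation, and up to the precise index conventions of this article, the spanning sets and spaces behind (10) and (11) typically take the following form in the literature [6, 7, 30]: the full polynomial space \mathcal{P}_r \Lambda^k is spanned by the forms \lambda^\alpha \, d\lambda_\sigma with |\alpha| = r, and the trimmed space \mathcal{P}_r^- \Lambda^k is spanned by the Whitney-type forms \lambda^\alpha \, \phi_\rho with |\alpha| = r - 1; the remaining two sets are the corresponding variants with boundary conditions.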

These canonical spanning sets are generally not linearly independent, and so it remains to state explicit bases for the spaces (11). We can give some simple examples whenever . We define the sets of barycentric differential forms

(12a)
(12b)
(12c)
(12d)

A particular feature of these bases and spanning sets is their inclusion relations. On the one hand, the bases are subsets of the spanning sets,

On the other hand, the generators for the spaces with boundary conditions are contained in the generators for the unconstrained spaces,

We remark that

(13)
(14)

and we could have defined equivalently

For any and we let

We can thus write

(15)
(16)
(17)
(18)

These identities will be used in Section 6 but we note that they also simplify indexing the basis forms, which is an auxiliary result in its own right.

3. Elements of Representation Theory

In this section we gather elements of the representation theory of finite groups. We keep this rather concise and refer to the literature [43, 17, 44, 27, 22] for a thorough exposition of representation theory. We introduce the relevant definitions and results, including the notions of irreducible representations, induced representations, and monomial representations, so that the reader can follow the literature references upon which we build later in this exposition. While the first two concepts are standard material in expositions on representation theory, the notion of monomial representation does not seem to have attracted much attention yet.

Throughout this section we fix a finite group . The binary operation of the group is written multiplicatively. We let denote the identity element of and we let be the inverse of any . Furthermore, we fix in this section to be either the field of real numbers or the field of complex numbers. For any vector space over we write for its general linear group.

A representation of is a group homomorphism from into the general linear group of a vector space . Definitions imply that and that for all we have

The dimension of any vector space is denoted by . The dimension of is defined as the dimension of , and the representation is called finite-dimensional if .

Example 3.1.

The most important example of a group in this article is the group of permutations of the set for some . The composition is the binary operation of that group. We also recall the cycle notation: when are pairwise distinct, then is the unique permutation that satisfies

and leaves all other members of invariant.
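In the usual convention (sketched here with hypothetical labels a_1, \dots, a_k, since the displayed condition is omitted above), the defining property of the cycle (a_1 \, a_2 \, \cdots \, a_k) reads

    a_1 \mapsto a_2, \quad a_2 \mapsto a_3, \quad \dots, \quad a_{k-1} \mapsto a_k, \quad a_k \mapsto a_1,

for pairwise distinct entries a_1, \dots, a_k.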

Example 3.2.

For any group and any vector space over some field the mapping that assumes the constant value is a representation of . This simple but important example is the trivial representation of . For another basic example, recall that every group generates the vector space over . The mapping such that for all is a representation of .
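The latter construction is presumably the regular representation: writing e_h, h \in G, for the canonical basis vectors of the vector space generated by the group, the formula in question is commonly

    \rho_{\mathrm{reg}}(g) \, e_h = e_{gh}, \qquad g, h \in G,

which extends linearly to a representation of the group.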

The matrices representing a finite group all have finite order. Indeed, since every satisfies , where denotes the cardinality of , we also have . Consequently, the determinant of every is a complex unit.

The representation is called faithful if it is a group monomorphism, that is, only the unit of the group is mapped onto the identity.

We call two representations and equivalent if there exists an isomorphism such that for all . In many circumstances, we are only interested in features of representations up to equivalence.

3.1. Direct sums, subrepresentations, and irreducible representations

We want to compose new representations from old representations. One way of doing so is the direct sum. Let and be two representations of . Their direct sum

is another representation of and is defined by

The definition of the direct sum extends to the case of several summands in the obvious manner.
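Written out in standard notation (with representations \rho_1, \rho_2 acting on spaces V_1, V_2, assumed here for illustration), the direct sum acts componentwise:

    (\rho_1 \oplus \rho_2)(g) \, (v_1, v_2) = \bigl( \rho_1(g) \, v_1, \; \rho_2(g) \, v_2 \bigr),
    \qquad g \in G, \; v_1 \in V_1, \; v_2 \in V_2.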

We are interested in how to conversely decompose a representation into direct summands. To study that question, we introduce further terminology.

Let be a representation. A subspace is called -invariant if for all . Examples of -invariant subspaces are itself and the zero vector space. We call the representation irreducible if the only -invariant subspaces of are itself and the zero vector space, and otherwise we call reducible.

Suppose that is an -invariant subspace. Then there exists a representation in the obvious way. We call a subrepresentation of .

The following result is well known in the literature of representation theory as Maschke's theorem [34].

Lemma 3.3.

Let be a finite-dimensional representation of . Then there exist -invariant subspaces such that

and such that is irreducible.

Proof.

If is irreducible, then there is nothing to show. Otherwise, there exists an -invariant subspace that is neither nor trivial. We let be any projection of onto . Since is finite, we can define the linear mapping

One verifies that is again a projection onto . Furthermore, we see that for all and . So is -invariant. Since by linear algebra, we have a decomposition of as the direct sum of two non-trivial -invariant subspaces. One then sees that is the direct sum of the representations of over these spaces. The claim follows by an induction argument over the dimension of . ∎
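The averaging step in this proof is commonly written as follows (a sketch of what is presumably the linear mapping defined above): if P is any projection onto the invariant subspace W and \rho denotes the representation, then

    P' = \frac{1}{|G|} \sum_{g \in G} \rho(g) \, P \, \rho(g)^{-1}

is again a projection onto W and commutes with every \rho(h), so that its kernel is an invariant complement of W.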

3.2. Restrictions and Induced Representations

Let be a subgroup of . We recall that the cardinality of divides the cardinality of , and that the quotient is called the index of in . Then we have a representation that is called the restriction of to the subgroup . We generally cannot recover the original representation from its restriction to a subgroup, but there exists a canonical way of inducing a representation of a group from a representation of a subgroup.

Suppose that we have a representation of the subgroup over the vector space . First, we let be the list of representatives of the left cosets of in , where necessarily is the index of in . We recall that for every there exists a unique permutation such that . More specifically, there exists a unique such that . We now define the vector space

and define a representation by setting

In other words,

We call this the induced representation. Conceptually, consists of copies of , each of which is associated to a coset representative , and the induced representation applies the representation of componentwise and then permutes the components.
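In formulas, with coset representatives t_1, \dots, t_m of the subgroup H in G (a common way of writing this; the article's conventions may differ), every g \in G and every index j determine a unique index \sigma_g(j) and a unique h_{g,j} \in H with g \, t_j = t_{\sigma_g(j)} \, h_{g,j}, and the induced representation acts by

    \bigl( \operatorname{Ind}_H^G \rho \bigr)(g) \, (v_1, \dots, v_m) = (w_1, \dots, w_m),
    \qquad
    w_{\sigma_g(j)} = \rho(h_{g,j}) \, v_j.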

We remark that the induced representation as defined above depends on the choice of representatives of the left cosets, which we have encoded in the set . However, different choices of representatives will yield representations that are equivalent; we refer to [43, Chapter 12.5] for the details.

3.3. Monomial representations and invariant sets

A square matrix is called monomial or a generalized permutation matrix if it is the product of a permutation matrix and an invertible diagonal matrix. A group representation is called monomial if there exists a basis of with respect to which is a monomial matrix for each .
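For example, the complex matrix

    \begin{pmatrix} 0 & \omega & 0 \\ 0 & 0 & -1 \\ 1 & 0 & 0 \end{pmatrix}
    =
    \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}
    \begin{pmatrix} 1 & 0 & 0 \\ 0 & \omega & 0 \\ 0 & 0 & -1 \end{pmatrix},
    \qquad \omega = e^{2\pi i/3},

is monomial: each row and each column contains exactly one non-zero entry.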

A representation of is called induced monomial if it is induced by a one-dimensional representation of a subgroup of . It is easy to see that every induced monomial representation is monomial. We remark that many authors use the term monomial for what we call induced monomial. For irreducible representations, being monomial and being induced monomial are equivalent [17, Corollary 50.6].

Lemma 3.4.

If the representation is irreducible and monomial, then it is induced monomial.

We now introduce the notion of invariance that is the central object of study in this article. Our notion of invariance is not standard in the literature of representation theory, to the author's best knowledge.

Let be a finite set of cardinality ,

We say that is -invariant if for every there exist a permutation and a sequence of non-zero numbers such that

Since finite groups have unitary representations, must be units in provided that are scaled to unit length.
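Spelled out in one common way (with hypothetical labels, since the displayed condition is omitted above): a set \{ v_1, \dots, v_m \} is \rho-invariant if for every g \in G there exist a permutation \sigma_g of the indices and non-zero scalars c_{g,1}, \dots, c_{g,m} such that

    \rho(g) \, v_i = c_{g,i} \, v_{\sigma_g(i)}, \qquad 1 \leq i \leq m.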

Finally, we remark that any -invariant subset of a real vector space gives rise to an -invariant subset of the complexification of that vector space.

4. Notions of Invariance

In this section we study the pullback of polynomial differential forms under affine transformations between simplices in greater detail. We then introduce the simplicial symmetry group and its action on finite element spaces.

Suppose that and are -simplices and write and for the ordered list of vertices of and , respectively. For any permutation there exists a unique affine diffeomorphism such that