Algorithms in Linear Algebraic Groups

03/12/2020
by Sushil Bhunia, et al.

This paper presents some algorithms in linear algebraic groups. These algorithms solve the word problem and compute the spinor norm for orthogonal groups. This gives us an algorithmic definition of the spinor norm. We compute the double coset decomposition with respect to a Siegel maximal parabolic subgroup, which is important in computing infinite-dimensional representations for some algebraic groups.



1. Introduction

The spinor norm was first defined by Dieudonné and Kneser using Clifford algebras. Wall [21] defined the spinor norm using bilinear forms, and these days one computes the spinor norm using Wall's definition. In this paper, we develop a new definition of the spinor norm for split and twisted orthogonal groups. Our definition of the spinor norm is rich in the sense that it is algorithmic in nature: one can compute the spinor norm using a Gaussian elimination algorithm that we develop in this paper. This paper can be seen as an extension of our earlier work in the book chapter [3], where we described Gaussian elimination algorithms for orthogonal and symplectic groups in the context of public key cryptography.

In computational group theory, one always looks for algorithms to solve the word problem. For a group $G$ defined by a set of generators $X$, the problem is to write $g \in G$ as a word in $X$: we say that this is the word problem for $(G, X)$ (for details, see [18, Section 1.4]). Brooksbank [4] and Costi [10] developed algorithms similar to ours for classical groups over finite fields. It is worth noting that Chernousov et al. [7] also used the Steinberg presentation to obtain a Gauss decomposition for Chevalley groups over arbitrary fields. We refer the reader to the book by Carter [5, Theorem 12.1.1] for the Steinberg presentation. We prove the following:

Theorem A.

Let $G$ be a reductive linear algebraic group, defined over an algebraically closed field $k$ with $\operatorname{char} k \neq 2$, which has a Steinberg presentation. Then every element of $G$ can be written as a word in the Steinberg generators and a diagonal matrix, the diagonal matrix being explicitly determined in the natural representation of $G$. Furthermore, we prove that the length of the word is bounded in terms of the rank $\ell$ of the group $G$.

We prove this theorem in Section 3.4. The proof is algorithmic in nature: the method we develop is a Gaussian elimination algorithm that solves this problem. Steinberg generators are also called elementary matrices (for details, see Sections 3.1.1, 3.1.2, 3.1.3 and 3.3.1). A novelty of our algorithm is that we do not need to assume that the Steinberg generators generate the group under consideration. Thus our algorithm independently proves the fact that Chevalley groups are generated by elementary matrices. This paper can also be seen as developing a Gaussian elimination algorithm in reductive algebraic groups; to our knowledge, it is the first attempt to develop such an algorithm for reductive algebraic groups over an algebraically closed field.

Now we move on to discuss two applications of our algorithm: one is the spinor norm and the other is the double coset decomposition. Computing the spinor norm was studied earlier by Murray and Roney-Dougal [17]. From our algorithm, one can compute the spinor norm easily (for details, see Section 4.1). Theorem A has the following surprising corollary:

Corollary A1.

Let $k$ be a field with $\operatorname{char} k \neq 2$. In the split orthogonal group, the image in $k^{\times}/(k^{\times})^{2}$ of the diagonal entry produced by our algorithm is the spinor norm.

We prove this corollary in Section 4.2. Since the commutator subgroup of the orthogonal group is the kernel of the spinor norm restricted to the special orthogonal group, the above corollary also gives a membership test for the commutator subgroup in the orthogonal group. In other words, an element of the special orthogonal group belongs to the commutator subgroup if and only if the scalar produced by the Gaussian elimination algorithm is a square in the field.
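Over a finite field $\mathbb{F}_q$ with $q$ odd, this square test is effective via Euler's criterion. A minimal sketch (plain Python; the scalar `lam` stands for the field element that the Gaussian elimination algorithm would produce, which we take as given here):

```python
# Square test in F_q (q an odd prime) via Euler's criterion:
# a nonzero lam is a square iff lam^((q-1)/2) == 1 (mod q).
def is_square(lam: int, q: int) -> bool:
    lam %= q
    assert lam != 0, "the spinor norm is a nonzero field element"
    return pow(lam, (q - 1) // 2, q) == 1

# Example: the squares in F_7 are exactly {1, 2, 4}.
assert [a for a in range(1, 7) if is_square(a, 7)] == [1, 2, 4]
```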

Furthermore, the spinor norm can also be computed using our algorithm for the twisted orthogonal group. For the terminology of the next result, we refer to Definition 2.5 and Section 3.3.1.

Corollary A2.

Let $g$ be an element of the twisted orthogonal group. Then the spinor norm of $g$ is given by an explicit formula, in the notation of Section 3.3.1, in terms of the output of our algorithm.

We prove this corollary in Section 4.3. So, we have an efficient algorithm to compute the spinor norm.

Suppose we want to construct infinite-dimensional representations of reductive linear algebraic groups. One way to construct such representations is parabolic induction. Let $P$ be a parabolic subgroup of $G$ with Levi decomposition $P = LU$, where $L$ is the maximal reductive subgroup of $P$ and $U$ is the unipotent radical of $P$. A representation of the Levi subgroup $L$ can be inflated to $P$ by letting it act trivially on $U$. We then use parabolic induction to get a representation of $G$ from this representation of $P$ (in effect, from $L$). For instance, one uses the Siegel maximal parabolic subgroups to construct infinite-dimensional representations. Since the same Levi subgroup can lie in two non-conjugate parabolic subgroups, one uses the double coset decomposition $P \backslash G / P$ to remedy the situation; the Levi decomposition as above then does not depend on the choice of a parabolic containing $L$. Our algorithm can be used to compute the double coset decomposition corresponding to the Siegel maximal parabolic subgroup (for details, see [6]). We have the following:

Corollary A3.

Let $P$ be the Siegel maximal parabolic subgroup in $G$, where $G$ is either the symplectic group or the split orthogonal group. Let $g \in G$. Then there is an efficient algorithm to determine $p_1, p_2 \in P$ and a double coset representative $w$ such that $g = p_1 w p_2$. Furthermore, the set of all such representatives $w$ is a finite set, of size depending only on the rank $\ell$.

We prove this corollary in Section 4.5. We hope this will shed some light on the infinite-dimensional representations of linear algebraic groups.

2. Preliminaries

In this section, we fix the notation and terminology for this paper. We denote the transpose of a matrix $A$ by ${}^{t}A$.

2.1. Algebraic groups

Algebraic groups have a distinguished history. Their origin can be traced back to the work of Cartan and Killing, but we do not discuss the history of the subject here as it is quite complex; we just mention Chevalley, who made pioneering contributions to this field, see for example [8, 9]. In this paper, we develop some algorithms for reductive linear algebraic groups. There are several excellent references on this topic; here we follow Humphreys [13]. We fix a perfect field $k$ with $\operatorname{char} k \neq 2$ for this section, and $\bar{k}$ denotes the algebraic closure of $k$. An algebraic group $G$ is a group as well as an affine variety such that the multiplication map $m \colon G \times G \to G$ and the inversion map $i \colon G \to G$, given by $m(x, y) = xy$ and $i(x) = x^{-1}$, are morphisms of varieties. An algebraic group $G$ is defined over $k$ if the polynomials defining the underlying affine variety have coefficients in $k$, the maps $m$ and $i$ are defined over $k$, and the identity element is a $k$-rational point of $G$. We denote the set of $k$-rational points of $G$ by $G(k)$. Any algebraic group is a closed subgroup of $\mathrm{GL}(n, \bar{k})$ for some $n$; hence algebraic groups are called linear algebraic groups.

The radical of an algebraic group $G$, denoted $R(G)$, is defined to be the largest closed, connected, solvable, normal subgroup of $G$. We call $G$ a semisimple algebraic group if $R(G)$ is trivial. The unipotent radical of $G$, denoted $R_u(G)$, is defined to be the largest closed, connected, unipotent, normal subgroup of $G$. We call a connected group $G$ reductive if $R_u(G)$ is trivial. For example, the group $\mathrm{GL}(n, \bar{k})$ is a reductive group, whereas $\mathrm{SL}(n, \bar{k})$ is a semisimple group. A semisimple algebraic group is always a reductive group. In the next section, we see more examples of algebraic groups, namely the classical groups.

2.2. Similitude groups

In this section, we follow Grove [11] and Knus et al. [14] and define two important classes of groups which preserve a certain bilinear form. Let $V$ be an $n$-dimensional vector space over $k$, where $n = 2\ell$ or $2\ell + 1$ and $\operatorname{char} k \neq 2$. Let $\beta \colon V \times V \to k$ be a bilinear form. By fixing a basis of $V$ we can associate a matrix to $\beta$; with abuse of notation, we denote the matrix of the bilinear form by $\beta$ itself. Thus $\beta(x, y) = {}^{t}x\,\beta\,y$, where $x, y$ are column vectors. We work with non-degenerate bilinear forms, i.e., $\det \beta \neq 0$. A symmetric (resp. skew-symmetric) bilinear form satisfies $\beta(x, y) = \beta(y, x)$ (resp. $\beta(x, y) = -\beta(y, x)$) for all $x, y \in V$. By fixing a basis for $V$, we identify $V$ with $k^{n}$ and treat symplectic and orthogonal similitude groups as subgroups of the general linear group $\mathrm{GL}(n, k)$.
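Concretely, once a basis is fixed, evaluating $\beta$ and checking symmetry or skew-symmetry are plain matrix computations. A minimal sketch (our own helper names, with vectors as lists):

```python
def bilinear(beta, x, y):
    """Evaluate beta(x, y) = x^t * beta * y for column vectors x, y."""
    n = len(beta)
    return sum(x[i] * beta[i][j] * y[j] for i in range(n) for j in range(n))

def is_symmetric(beta):
    return all(row[j] == beta[j][i] for i, row in enumerate(beta) for j in range(len(beta)))

def is_skew_symmetric(beta):
    return all(row[j] == -beta[j][i] for i, row in enumerate(beta) for j in range(len(beta)))

# The standard skew-symmetric form on k^2: beta(x, y) = x1*y2 - x2*y1.
beta = [[0, 1], [-1, 0]]
assert bilinear(beta, [1, 2], [3, 4]) == 1 * 4 - 2 * 3
assert is_skew_symmetric(beta) and not is_symmetric(beta)
```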

2.2.1. Symplectic similitude groups

Up to equivalence, there is a unique non-degenerate skew-symmetric bilinear form over a field $k$ [11, Corollary 2.12]. Moreover, a non-degenerate skew-symmetric bilinear form exists only in even dimension. Fix a basis $\{e_1, \ldots, e_\ell, f_1, \ldots, f_\ell\}$ of $V$ so that the matrix of the form is

$$\beta = \begin{pmatrix} 0 & I_{\ell} \\ -I_{\ell} & 0 \end{pmatrix}. \tag{2.1}$$
Definition 2.1.

The symplectic group, for $n = 2\ell$, is defined as
$$\mathrm{Sp}(2\ell, k) = \{ g \in \mathrm{GL}(2\ell, k) \mid {}^{t}g \beta g = \beta \}.$$

Definition 2.2.

The symplectic similitude group with respect to $\beta$ (as in Equation (2.1)) is defined by
$$\mathrm{GSp}(2\ell, k) = \{ g \in \mathrm{GL}(2\ell, k) \mid {}^{t}g \beta g = \mu(g)\beta \ \text{for some}\ \mu(g) \in k^{\times} \},$$
where $\mu \colon \mathrm{GSp}(2\ell, k) \to k^{\times}$ is a group homomorphism with ${}^{t}g \beta g = \mu(g)\beta$, and the scalar $\mu(g)$ is called the multiplier of $g$.
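To illustrate the multiplier, here is a hedged numeric check (numpy; $\beta$ as in (2.1) with $\ell = 2$, and helper names ours). Scaling a symplectic matrix $g$ by $c$ multiplies ${}^{t}g\beta g$ by $c^{2}$, so the multiplier of $cg$ is $c^{2}\mu(g)$:

```python
import numpy as np

l = 2
I, Z = np.eye(l, dtype=int), np.zeros((l, l), dtype=int)
beta = np.block([[Z, I], [-I, Z]])       # the form (2.1)

def multiplier(g, beta):
    """Return mu with g^t * beta * g == mu * beta, or None if g is no similitude."""
    m = g.T @ beta @ g
    n = len(beta)
    i, j = next((i, j) for i in range(n) for j in range(n) if beta[i][j] != 0)
    mu = m[i][j] // beta[i][j]
    return mu if (m == mu * beta).all() else None

g = np.block([[I, I], [Z, I]])           # [[I, S], [0, I]] with S symmetric lies in Sp(4)
assert multiplier(g, beta) == 1          # a genuine isometry: multiplier 1
assert multiplier(3 * g, beta) == 9      # scaling by c = 3 gives multiplier c^2 = 9
```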

2.2.2. Orthogonal similitude groups

We work with the following non-degenerate symmetric bilinear forms. Fix a basis $\{e_0, e_1, \ldots, e_\ell, f_1, \ldots, f_\ell\}$ for odd dimension $n = 2\ell + 1$, and $\{e_1, \ldots, e_\ell, f_1, \ldots, f_\ell\}$ for even dimension $n = 2\ell$, so that the matrix of the form is

$$\beta = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 0 & I_{\ell} \\ 0 & I_{\ell} & 0 \end{pmatrix} \quad \text{or} \quad \beta = \begin{pmatrix} 0 & I_{\ell} \\ I_{\ell} & 0 \end{pmatrix}, \tag{2.2}$$

respectively.

The above forms exist over every field; each is unique up to equivalence and is called the split form (see [2, Chapter 2]).

Definition 2.3.

The orthogonal group is defined as
$$\mathrm{O}(n, k) = \{ g \in \mathrm{GL}(n, k) \mid {}^{t}g \beta g = \beta \}.$$

Definition 2.4.

The orthogonal similitude group with respect to $\beta$ (as in Equation (2.2)) is defined by
$$\mathrm{GO}(n, k) = \{ g \in \mathrm{GL}(n, k) \mid {}^{t}g \beta g = \mu(g)\beta \ \text{for some}\ \mu(g) \in k^{\times} \},$$
where $\mu \colon \mathrm{GO}(n, k) \to k^{\times}$ is a group homomorphism and the scalar $\mu(g)$ is called the multiplier of $g$.

Next, we define the twisted analog of the orthogonal group. We talk about the twisted form only when $k = \mathbb{F}_q$, a finite field. For the twisted form, we fix a basis so that the matrix of the form is

$$\beta = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -\epsilon & 0 & 0 \\ 0 & 0 & 0 & I_{\ell} \\ 0 & 0 & I_{\ell} & 0 \end{pmatrix}, \tag{2.3}$$

where $n = 2\ell + 2$ and $\epsilon$ is a fixed non-square in $\mathbb{F}_q$, i.e., $\epsilon \in \mathbb{F}_q^{\times} \setminus (\mathbb{F}_q^{\times})^{2}$.
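Finding the non-square $\epsilon$ needed in (2.3) is easy in practice. A sketch for a prime field (function names ours), together with a brute-force check that the plane with Gram matrix $\mathrm{diag}(1, -\epsilon)$ is anisotropic:

```python
def non_square(p: int) -> int:
    """Return a non-square in F_p (p an odd prime), by Euler's criterion."""
    return next(a for a in range(2, p) if pow(a, (p - 1) // 2, p) == p - 1)

p = 11
eps = non_square(p)
# diag(1, -eps) is anisotropic: x^2 - eps*y^2 = 0 forces (x, y) = (0, 0),
# precisely because eps is not a square in F_p.
assert all((x * x - eps * y * y) % p != 0
           for x in range(p) for y in range(p) if (x, y) != (0, 0))
```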

Definition 2.5.

The twisted orthogonal group is defined as
$$\mathrm{O}^{-}(n, \mathbb{F}_q) = \{ g \in \mathrm{GL}(n, \mathbb{F}_q) \mid {}^{t}g \beta g = \beta \},$$
with $\beta$ as in Equation (2.3).

Definition 2.6.

The twisted orthogonal similitude group with respect to $\beta$ (Equation (2.3)) is defined by
$$\mathrm{GO}^{-}(n, \mathbb{F}_q) = \{ g \in \mathrm{GL}(n, \mathbb{F}_q) \mid {}^{t}g \beta g = \mu(g)\beta \ \text{for some}\ \mu(g) \in \mathbb{F}_q^{\times} \},$$
where $\mu$ is a group homomorphism with ${}^{t}g \beta g = \mu(g)\beta$.

2.3. Clifford algebra

Clifford algebras are far-reaching generalizations of the classical Hamiltonian quaternions. One motivation to study Clifford algebras comes from the Euclidean rotation groups. For details, we refer the reader to [11, Chapters 8 and 9]. Let $(V, Q)$ be a quadratic space. Let $\mathcal{C}(V) = T(V)/\langle v \otimes v - Q(v) \rangle$ be the Clifford algebra, where $T(V)$ is the tensor algebra of $V$. Then $\mathcal{C}(V)$ is a $\mathbb{Z}/2\mathbb{Z}$-graded algebra, say $\mathcal{C}(V) = \mathcal{C}_0(V) \oplus \mathcal{C}_1(V)$. The subalgebra $\mathcal{C}_0(V)$ is called the special Clifford algebra, and it is a Clifford algebra in its own right. There is a unique anti-automorphism of $\mathcal{C}(V)$ that fixes $V$ pointwise (see, for example, [11, Proposition 8.15]). Using this anti-automorphism, one attaches to any product of reflections along non-zero anisotropic vectors of $V$ an element of $k^{\times}$, well defined up to squares (for details, see [11, Proposition 9.1]). So, by the Cartan–Dieudonné theorem, we get a well-defined map from the orthogonal group to $k^{\times}/(k^{\times})^{2}$. This map is called the spinor norm on the orthogonal group; see the next section for the precise definition.

2.3.1. Spinor norm

It is well known that, when $V$ contains an isotropic vector and the dimension of $V$ is large enough, the commutator subgroup of the orthogonal group is simple modulo its center; however, it is not obvious how to decide when a given element lies in the commutator subgroup of the orthogonal group. This is where the theory of the spinor norm comes into play via Clifford algebras; see, for example, Artin [1, Chapter V, Page 193] or L. C. Grove [11, Chapter 9, Page 76]. Here we use the theory of the spinor norm developed by G. E. Wall [21]. For details, and for the connection between Clifford algebras and Wall's approach, we refer to a nice article by R. Lipschitz [15].

2.3.2. Classical spinor norm

The classical way to define the spinor norm is via Clifford algebras [11, Chapters 8 and 9]. For $v \in V$ with $Q(v) \neq 0$, we define the reflection $\tau_v$ in the hyperplane orthogonal to $v$ by
$$\tau_v(x) = x - \frac{2\beta(x, v)}{\beta(v, v)}\, v,$$
which is an element of the orthogonal group. We know that every element of the orthogonal group can be written as a product of at most $n$ reflections (the Cartan–Dieudonné theorem).
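As a quick sanity check of this formula: $\tau_v$ preserves $\beta$ and is an involution. A sketch over the rationals with the standard dot product as $\beta$ (names ours):

```python
from fractions import Fraction

def beta(x, y):                          # the standard symmetric form on Q^n
    return sum(a * b for a, b in zip(x, y))

def reflect(v, x):
    """tau_v(x) = x - (2*beta(x, v)/beta(v, v)) * v."""
    c = Fraction(2 * beta(x, v), beta(v, v))
    return [xi - c * vi for xi, vi in zip(x, v)]

v = [1, 2, 2]                            # beta(v, v) = 9, so v is anisotropic
x, y = [3, 1, 4], [1, 5, 9]
tx, ty = reflect(v, x), reflect(v, y)
assert beta(tx, ty) == beta(x, y)        # tau_v is an isometry of beta
assert reflect(v, tx) == x               # tau_v is an involution
```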

Definition 2.7.

The spinor norm is a group homomorphism $\Theta \colon \mathrm{O}(n, k) \to k^{\times}/(k^{\times})^{2}$ defined by $\Theta(g) = Q(v_1) Q(v_2) \cdots Q(v_r)\,(k^{\times})^{2}$, where $g = \tau_{v_1} \tau_{v_2} \cdots \tau_{v_r}$ is written as a product of reflections.

However, in practice, it is difficult to use the above definition to compute the spinor norm.

2.3.3. Wall’s spinor norm

Wall [21], Zassenhaus [22] and Hahn [12] developed a theory to compute the spinor norm. We now define the spinor norm using Wall's idea. For our exposition, we follow [20, Chapter 11]. For more details on the spinor norm via Wall's approach, see Bhunia [2, Chapter 4, Page 41].

Let $g$ be an element of the orthogonal group. Let $V_g = (1 - g)V$ be the image of $1 - g$. Using $g$ we define Wall's bilinear form $\langle \cdot, \cdot \rangle_g$ on $V_g$ as follows: for $u, v \in V_g$, choose $w$ with $u = (1 - g)w$ and set
$$\langle u, v \rangle_g = \beta(w, v).$$
This bilinear form satisfies the following properties:

  1. $\langle u, v \rangle_g + \langle v, u \rangle_g = \beta(u, v)$ for all $u, v \in V_g$.

  2. $g$ is an isometry on $V_g$ with respect to $\langle \cdot, \cdot \rangle_g$.

  3. $\langle u, u \rangle_g = Q(u)$ for all $u \in V_g$.

  4. $\langle \cdot, \cdot \rangle_g$ is non-degenerate.

Then the spinor norm of $g$ is the discriminant
$$\Theta(g) = \operatorname{disc}\, \langle \cdot, \cdot \rangle_g \in k^{\times}/(k^{\times})^{2},$$
extended to the whole orthogonal group by defining $\Theta(1) = 1$. An element $g$ is called regular if $V_g$ is a non-degenerate subspace of $V$ with respect to the form $\beta$. Hahn [12, Proposition 2.1] gave a closed formula for the spinor norm of a regular element $g$ in terms of $V_g$. This gives,

Proposition 2.1.
  1. For a reflection $\tau_v$, $\Theta(\tau_v) = Q(v)\,(k^{\times})^{2}$.

  2. For a unipotent element $u$, the spinor norm is trivial, i.e., $\Theta(u) = (k^{\times})^{2}$.

Proof.
  1. Let $\tau_v$ be the reflection in the hyperplane orthogonal to $v$, i.e., $\tau_v(x) = x - \frac{2\beta(x, v)}{\beta(v, v)}v$. Then $V_{\tau_v} = (1 - \tau_v)V = \langle v \rangle$, and solving $v = (1 - \tau_v)w$ gives $\langle v, v \rangle_{\tau_v} = \beta(w, v) = \tfrac{1}{2}\beta(v, v) = Q(v)$. Hence $\Theta(\tau_v) = Q(v)\,(k^{\times})^{2}$.

  2. See [12, Corollary 2.2] for the proof. ∎

In this direction, we show that the Gaussian elimination algorithm we develop outputs the spinor norm (see Section 4.1).
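To make Wall's definition concrete, here is a small sketch over a prime field (all names ours). It takes $g$ to be a product of two reflections, so that $g$ lies in the special orthogonal group, computes the Gram matrix of Wall's form on $V_g = (1-g)V$ in the basis $u_i = (1-g)e_i$ (so that one may take $w_i = e_i$ when $1-g$ is invertible), and checks that its discriminant agrees modulo squares with the product $Q(v_1)Q(v_2)$ of Definition 2.7. Sign and factor-of-two conventions for Wall's form vary across the literature; for products of an even number of reflections they disappear modulo squares.

```python
p = 11                                        # an odd prime; V = F_p^2, beta = identity matrix

def beta(x, y):
    return (x[0] * y[0] + x[1] * y[1]) % p

def reflection_matrix(v):
    """Matrix (columns tau_v(e_1), tau_v(e_2)) of the reflection along v over F_p."""
    qinv = pow(beta(v, v), p - 2, p)          # inverse of beta(v, v) mod p
    cols = []
    for e in ([1, 0], [0, 1]):
        c = 2 * beta(e, v) * qinv % p
        cols.append([(e[i] - c * v[i]) % p for i in range(2)])
    return [[cols[j][i] for j in range(2)] for i in range(2)]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) % p for j in range(2)] for i in range(2)]

def is_square(a):
    return pow(a % p, (p - 1) // 2, p) == 1

u, v = [1, 0], [1, 1]                         # two anisotropic vectors
g = mat_mul(reflection_matrix(u), reflection_matrix(v))   # g = tau_u tau_v in SO(2, F_p)

# Gram matrix of Wall's form in the basis u_i = (1-g)e_i: entry (i, j) is
# <u_i, u_j>_g = beta(e_i, (1-g)e_j), which is just the (i, j) entry of 1-g here.
one_minus_g = [[(int(i == j) - g[i][j]) % p for j in range(2)] for i in range(2)]
disc = (one_minus_g[0][0] * one_minus_g[1][1] - one_minus_g[0][1] * one_minus_g[1][0]) % p

classical = beta(u, u) * beta(v, v) % p       # Q(u)Q(v) up to squares, since beta(v,v) = 2Q(v)
assert disc != 0 and is_square(disc) == is_square(classical)
```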

3. Solving the word problem in classical reductive groups

Let $G$ be a reductive linear algebraic group over $\bar{k}$. Then, using the root datum, $G$ is of classical type $A_\ell, B_\ell, C_\ell, D_\ell$, or of exceptional type $E_6, E_7, E_8, F_4, G_2$. In this paper, we solve the word problem for the classical type groups. The groups that correspond to these types are:

  • ($A_\ell$-type): $\mathrm{GL}(\ell + 1, k)$,

  • ($B_\ell$-type): $\mathrm{GO}(2\ell + 1, k)$,

  • ($C_\ell$-type): $\mathrm{GSp}(2\ell, k)$,

  • ($D_\ell$-type): $\mathrm{GO}(2\ell, k)$.

For the general linear group one has a well-known algorithm to solve the word problem: Gaussian elimination. One observes that the effect of multiplying an element of the general linear group by an elementary matrix (also known as an elementary transvection) from the left or the right is a row or a column operation, respectively. Using this algorithm one can start with any matrix $g$ and reduce it to a diagonal matrix, thus writing $g$ as a product of elementary matrices and a diagonal matrix. One objective of this paper is to develop a similar algorithm for symplectic and orthogonal similitude groups.
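For reference, here is what that looks like in code: a sketch over a prime field $\mathbb{F}_p$ (names and conventions ours, not the paper's) that reduces $g \in \mathrm{GL}(n, \mathbb{F}_p)$ to a diagonal matrix using only transvections and records the word as it goes.

```python
def word_problem_gl(g, p):
    """Reduce g in GL(n, F_p) to a diagonal matrix by row operations
    row_i += t * row_j, i.e. left multiplication by transvections I + t*e_ij.
    Returns (word, diag) with E_k ... E_1 g = diag, word = [(i, j, t), ...];
    hence g is the product of the inverse transvections times diag."""
    n = len(g)
    a = [row[:] for row in g]
    word = []

    def rowop(i, j, t):
        t %= p
        if t:
            a[i] = [(a[i][c] + t * a[j][c]) % p for c in range(n)]
            word.append((i, j, t))

    for col in range(n):
        if a[col][col] == 0:                 # make the pivot nonzero via a lower row
            r = next(r for r in range(col + 1, n) if a[r][col])
            rowop(col, r, 1)
        inv = pow(a[col][col], p - 2, p)
        for r in range(n):                   # clear the rest of the column
            if r != col:
                rowop(r, col, -a[r][col] * inv)
    return word, a

word, diag = word_problem_gl([[0, 1, 2], [3, 4, 5], [6, 7, 1]], 11)
assert all(diag[i][j] == 0 for i in range(3) for j in range(3) if i != j)
```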

We first describe the elementary matrices and the elementary operations for the symplectic and orthogonal similitude groups. These elementary operations are nothing but multiplication by elementary matrices from the left and the right, respectively. The elementary matrices used here are nothing but the Steinberg generators, which come from the theory of Chevalley groups. For simplicity, we write the algorithm for each of these groups separately.

3.1. Elementary matrices and elementary operations

In what follows, the scalar $t$ varies over the field $k$. We define $e_{ij}(t)$ to be the matrix with $t$ in the $(i, j)$ position and zero everywhere else, where $i \neq j$. We simply write $e_{ij}$ to denote $e_{ij}(1)$. We often use the well-known matrix identity $e_{ij}e_{kl} = \delta_{jk}e_{il}$, where $\delta$ is the Kronecker delta. For more details on elementary matrices, we refer to [5, Chapter 11].

3.1.1. Elementary matrices for $\mathrm{Sp}(2\ell, k)$

We index rows and columns by $1, \ldots, \ell, -1, \ldots, -\ell$. The elementary matrices are the Steinberg generators attached to the root system of type $C_\ell$ in this representation.

We write $g \in \mathrm{Sp}(2\ell, k)$ as $\begin{pmatrix} A & B \\ C & D \end{pmatrix}$, where $A, B, C, D$ are $\ell \times \ell$ matrices. As ${}^{t}g\beta g = \beta$, we have ${}^{t}AC = {}^{t}CA$, ${}^{t}BD = {}^{t}DB$ and ${}^{t}AD - {}^{t}CB = I_{\ell}$. Let us note the effect of multiplying by elementary matrices in the following table.

Row operations ER1, ER2, ER3 each modify a pair of rows, and the corresponding column operations EC1, EC2, EC3 modify the corresponding pair of columns.
Table 1. The elementary operations for $\mathrm{Sp}(2\ell, k)$
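As an illustration of the kind of matrices that appear here, the following hedged sketch (numpy; the family and its name are ours, chosen for the form (2.1)) checks that the block transvections $\begin{pmatrix} I & S \\ 0 & I \end{pmatrix}$ preserve $\beta$ exactly when $S$ is symmetric:

```python
import numpy as np

l = 3
I, Z = np.eye(l, dtype=int), np.zeros((l, l), dtype=int)
beta = np.block([[Z, I], [-I, Z]])            # the symplectic form (2.1)

def x_upper(S):
    """Block transvection [[I, S], [0, I]]; it lies in Sp(2l, k) iff S is symmetric."""
    return np.block([[I, S], [Z, I]])

def e(i, j, t=1):
    m = np.zeros((l, l), dtype=int)
    m[i, j] = t
    return m

g = x_upper(e(0, 1) + e(1, 0))                # S = e_12 + e_21 is symmetric
assert (g.T @ beta @ g == beta).all()         # the form is preserved
h = x_upper(e(0, 1))                          # S = e_12 alone is not symmetric
assert (h.T @ beta @ h != beta).any()         # and the form is not preserved
```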

3.1.2. Elementary matrices for $\mathrm{O}(2\ell, k)$

We index rows and columns by $1, \ldots, \ell, -1, \ldots, -\ell$. The elementary matrices are the Steinberg generators attached to the root system of type $D_\ell$ in this representation.

We write $g$ as $\begin{pmatrix} A & B \\ C & D \end{pmatrix}$, where $A, B, C, D$ are $\ell \times \ell$ matrices. Let us note the effect of multiplying by elementary matrices in the following table.

Row operations ER1, ER2, ER3 each modify a pair of rows (with corresponding column operations EC1, EC2, EC3), while ER1a and ER2a modify a single row (with corresponding column operations EC1a and EC2a).
Table 2. The elementary operations for $\mathrm{O}(2\ell, k)$
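The analogous hedged sketch for the split even orthogonal form in (2.2): with $\beta = \begin{pmatrix} 0 & I \\ I & 0 \end{pmatrix}$, the same block transvections preserve $\beta$ exactly when $S$ is skew-symmetric — the symmetric versus skew-symmetric dichotomy that reappears in the lemmas of Section 3.2:

```python
import numpy as np

l = 3
I, Z = np.eye(l, dtype=int), np.zeros((l, l), dtype=int)
beta = np.block([[Z, I], [I, Z]])             # the split even orthogonal form from (2.2)

def x_upper(S):
    return np.block([[I, S], [Z, I]])

def e(i, j, t=1):
    m = np.zeros((l, l), dtype=int)
    m[i, j] = t
    return m

g = x_upper(e(0, 1) - e(1, 0))                # S skew-symmetric: an isometry of beta
assert (g.T @ beta @ g == beta).all()
h = x_upper(e(0, 1) + e(1, 0))                # S symmetric: not an isometry of beta
assert (h.T @ beta @ h != beta).any()
```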

3.1.3. Elementary matrices for $\mathrm{O}(2\ell + 1, k)$

We index rows and columns by $0, 1, \ldots, \ell, -1, \ldots, -\ell$. The elementary matrices are the Steinberg generators attached to the root system of type $B_\ell$ in this representation.

We write an element $g$ in block form with respect to this indexing: a $1 \times 1$ block, $1 \times \ell$ row blocks, $\ell \times 1$ column blocks, and $\ell \times \ell$ blocks. Let us note the effect of multiplying by elementary matrices in the following table.

Row operations ER1, ER2, ER3, ER4, ER4a each modify a pair of rows, and the corresponding column operations EC1, EC2, EC3, EC4, EC4a modify the corresponding pair of columns.
Table 3. The elementary operations for $\mathrm{O}(2\ell + 1, k)$

3.2. Gaussian elimination

To explain the steps of our Gaussian elimination algorithm we need some lemmas. In this subsection, we prove these lemmas.

Lemma 3.1.

Let be of size with the number of s equal to . Let be a matrix of size such that is symmetric (resp. skew-symmetric) then is of the form , where is an symmetric (resp. skew-symmetric), and (resp. ). Furthermore, if then is symmetric (resp. skew-symmetric).

Proof.

First, observe that the matrix . Since the matrix is symmetric (resp. skew-symmetric), then is symmetric (resp. skew-symmetric), and (resp. ). Also if then is symmetric (resp. skew-symmetric). ∎

Corollary 3.2.

Let be either in or .

  1. If is a diagonal matrix , then the matrix is of the form , where is an symmetric if , and is skew-symmetric with if .

  2. If is a diagonal matrix , then the matrix is of the form , where is an symmetric matrix if , and is skew-symmetric if .

Proof.

We use the condition that satisfies , and is symmetric (using , as is diagonal) when , and is skew-symmetric when . Then Lemma 3.1 gives the required form for . ∎

Corollary 3.3.

Let or , where , then the matrix is of the form , where is a symmetric matrix of size if , and skew-symmetric with if .

Proof.

We use the condition that satisfies and to get is symmetric if , and skew-symmetric if . Again Lemma 3.1 gives the required form for . ∎

Lemma 3.4.

Let . Then,

  1. if and only if and , and

  2. if and only if and .

Proof.
  1. Let then satisfies . Then this implies and .

    Conversely, if satisfies the given condition then clearly .

  2. This follows by similar computation.

Lemma 3.5.

Let be of size , where and be a matrix such that is symmetric (resp. skew-symmetric). Then , where each is of the form for some or of the form for some (resp. each is of the form for some ).

Proof.

Since the matrix is symmetric (resp. skew-symmetric), then the matrix is of the form , where is symmetric (resp. skew-symmetric), (resp. ) and is a row of size . Clearly, is a sum of the matrices of the form . ∎

Lemma 3.6.

For ,

  1. The element is a product of elementary matrices.

  2. The element is a product of elementary matrices.

  3. The element is a product of elementary matrices.

Proof.
  1. We have .

  2. We produce these elements inductively. First we get , and . Set . Then compute . So inductively we get that is a product of elementary matrices (an $\mathrm{SL}(2)$ prototype of this computation is sketched after the proof).

  3. We have .
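The $\mathrm{SL}(2)$ prototype of this inductive computation is the classical identity $h(\lambda) = w(\lambda)\,w(1)^{-1}$ with $w(\lambda) = x(\lambda)\,y(-\lambda^{-1})\,x(\lambda)$, which exhibits the diagonal matrix $\mathrm{diag}(\lambda, \lambda^{-1})$ as a product of six elementary matrices. A sketch in exact arithmetic (names ours):

```python
from fractions import Fraction

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def x(t):                                  # upper elementary matrix
    return [[1, t], [0, 1]]

def y(t):                                  # lower elementary matrix
    return [[1, 0], [t, 1]]

def w(lam):                                # w(lam) = x(lam) y(-1/lam) x(lam)
    return mul(mul(x(lam), y(-1 / Fraction(lam))), x(lam))

def h(lam):                                # h(lam) = w(lam) w(1)^{-1}, and w(1)^{-1} = w(-1)
    return mul(w(lam), w(-1))

lam = Fraction(5)
assert w(lam) == [[0, lam], [-1 / lam, 0]]
assert h(lam) == [[lam, 0], [0, 1 / lam]]  # diag(lam, 1/lam) as a product of elementaries
```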

Lemma 3.7.

The element is a product of elementary matrices.

Proof.

First we compute