1. Introduction
The spinor norm was first defined by Dieudonné and Kneser using Clifford algebras. Wall [21] defined the spinor norm using bilinear forms, and these days one computes the spinor norm using Wall's definition. In this paper, we develop a new definition of the spinor norm for split and twisted orthogonal groups. Our definition is rich in the sense that it is algorithmic in nature: one can now compute the spinor norm using a Gaussian elimination algorithm that we develop in this paper. This paper can be seen as an extension of our earlier work in the book chapter [3], where we described Gaussian elimination algorithms for orthogonal and symplectic groups in the context of public-key cryptography.
In computational group theory, one always looks for algorithms to solve the word problem. For a group defined by a set of generators , the problem is to write as a word in ; this is called the word problem for (for details, see [18, Section 1.4]). Brooksbank [4] and Costi [10] developed algorithms similar to ours for classical groups over finite fields. It is worth noting that Chernousov et al. [7] also used the Steinberg presentation for a Gauss decomposition of Chevalley groups over arbitrary fields. We refer the reader to the book by Carter [5, Theorem 12.1.1] for the Steinberg presentation. We prove the following:
Theorem A.
Let be a reductive linear algebraic group defined over an algebraically closed field of which has the Steinberg presentation. Then every element of can be written as a word in Steinberg generators and a diagonal matrix. The diagonal matrix is: , where , in its natural presentation. Furthermore, we prove that the length of the word is bounded by , where is the rank of the group .
We prove this theorem in Section 3.4. The proof is algorithmic in nature: the method we develop is a Gaussian elimination algorithm that solves this problem. Steinberg generators are also called elementary matrices (for details, see Sections 3.1.1, 3.1.2, 3.1.3 and 3.3.1). A novelty of our algorithm is that we do not need to assume that the Steinberg generators generate the group under consideration; thus our algorithm independently proves the fact that Chevalley groups are generated by elementary matrices. This paper can also be seen as developing a Gaussian elimination algorithm for reductive algebraic groups. To our knowledge, this is the first attempt to develop a Gaussian elimination algorithm for reductive algebraic groups over an algebraically closed field.
We now discuss two applications of our algorithm: one is the spinor norm and the other is double coset decomposition. Murray and Roney-Dougal [17] earlier studied the computation of the spinor norm. Using our algorithm, one can compute the spinor norm easily (for details, see Section 4.1). Theorem A has the following surprising corollary:
Corollary A1.
Let be a field of . In the split orthogonal group , the image of in is the spinor norm.
We prove this corollary in Section 4.2. Since the commutator subgroup of the orthogonal group is the kernel of the spinor norm restricted to the special orthogonal group, the above corollary also gives a membership test for the commutator subgroup in the orthogonal group. In other words, an element in the special orthogonal group belongs to the commutator subgroup if and only if the produced in the Gaussian elimination algorithm is a square in the field.
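Over a prime field of odd characteristic, the squareness test at the heart of this membership criterion is immediate from Euler's criterion. A minimal sketch (the function name is ours, and we assume an odd prime modulus):

```python
def is_square_mod_p(a, p):
    """Euler's criterion: a nonzero residue a is a square mod an odd
    prime p iff a^((p-1)/2) is congruent to 1 mod p."""
    a %= p
    if a == 0:
        return True
    return pow(a, (p - 1) // 2, p) == 1
```

For example, the squares mod 7 are 1, 2 and 4, so `is_square_mod_p(3, 7)` is false.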
Furthermore, the spinor norm can also be computed using our algorithm for the twisted orthogonal group. For the terminology in the next result, we refer to Definition 2.5 and Section 3.3.1.
Corollary A2.
Let , then the spinor norm of is the following:
Here .
We prove this corollary in Section 4.3. So, we have an efficient algorithm to compute the spinor norm.
Suppose we want to construct infinite-dimensional representations of reductive linear algebraic groups. One way to construct such representations is parabolic induction. Let be a parabolic subgroup of with Levi decomposition , where is the maximal reductive subgroup of and is the unipotent radical of . A representation of the Levi subgroup can then be inflated to , acting trivially on . We then use parabolic induction to get a representation of from the representation of (in fact, from ). For instance, one uses the Siegel maximal parabolic subgroups to construct infinite-dimensional representations. Since the same Levi subgroup can lie in two nonconjugate parabolic subgroups, one uses the double coset decomposition of to remedy the situation; the Levi decomposition as above then does not depend on the choice of a parabolic containing . Our algorithm can be used to compute the double coset decomposition corresponding to the Siegel maximal parabolic subgroup (for details, see [6]). We have the following:
Corollary A3.
Let be the Siegel maximal parabolic subgroup in , where is either or . Let . Then there is an efficient algorithm to determine such that . Furthermore, the set of all is a finite set of elements, where or .
We prove this corollary in Section 4.5. We hope this will shed some light on the infinite-dimensional representations of linear algebraic groups.
2. Preliminaries
In this section, we fix the notation and terminology for this paper. We denote the transpose of a matrix by .
2.1. Algebraic groups
Algebraic groups have a distinguished history. Their origin can be traced back to the work of Cartan and Killing, but we do not discuss the history of the subject here as it is quite complex; we just mention Chevalley, who made pioneering contributions to this field, see for example [8, 9]. In this paper, we develop some algorithms for reductive linear algebraic groups. There are several excellent references on this topic; here we follow Humphreys [13]. We fix a perfect field of for this section, and denotes the algebraic closure of . An algebraic group defined over is a group as well as an affine variety over such that the maps , and given by , and are morphisms of varieties. An algebraic group is defined over if the polynomials defining the underlying affine variety are defined over , the maps and are defined over , and the identity element is a rational point of . We denote the rational points of by . Any algebraic group is a closed subgroup of for some . Hence algebraic groups are also called linear algebraic groups.
The radical of an algebraic group over is the largest closed, connected, solvable, normal subgroup of , denoted by . We call semisimple if . The unipotent radical of is the largest closed, connected, unipotent, normal subgroup of , denoted by . A connected group is called reductive if . For example, the group is reductive, whereas is semisimple. A semisimple algebraic group is always reductive. In the next section, we see more examples of algebraic groups, namely, the classical groups.
2.2. Similitude groups
In this section, we follow Grove [11] and Knus et al. [14] and define two important classes of groups which preserve a certain bilinear form. Let be an -dimensional vector space over , where or and . Let be a bilinear form. By fixing a basis of , we can associate a matrix to ; by abuse of notation, we denote the matrix of the bilinear form by itself. Thus , where are column vectors. We work with nondegenerate bilinear forms, i.e., . A symmetric (resp. skew-symmetric) bilinear form satisfies (resp. ). By fixing a basis for , we identify with and treat symplectic and orthogonal similitude groups as subgroups of the general linear group .

2.2.1. Symplectic similitude groups
Up to equivalence, there is a unique nondegenerate skew-symmetric bilinear form over a field [11, Corollary 2.12]. Moreover, a nondegenerate skew-symmetric bilinear form exists only in even dimension. Fix a basis of as so that the matrix is:
(2.1) 
Definition 2.1.
The symplectic group is defined for as
Definition 2.2.
The symplectic similitude group with respect to (as in Equation (2.1)), is defined by
where is a group homomorphism with and the factor is called the multiplier of .
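The multiplier of a similitude can be read off numerically. The sketch below assumes one common choice of the skew-symmetric form, namely the block matrix with identity blocks off the diagonal; the basis fixed in Equation (2.1) may differ from this choice by a reordering of the basis, and the function names are ours:

```python
import numpy as np

def skew_form(n):
    """One common choice of nondegenerate skew-symmetric form on a
    2n-dimensional space: beta = [[0, I_n], [-I_n, 0]].  The form fixed
    in Equation (2.1) may order the basis differently."""
    J = np.zeros((2 * n, 2 * n))
    J[:n, n:] = np.eye(n)
    J[n:, :n] = -np.eye(n)
    return J

def multiplier(g, J):
    """Multiplier of a similitude g: the scalar mu with g^T J g = mu * J."""
    M = g.T @ J @ g
    # read off the scalar at any position where J is nonzero
    i, j = np.nonzero(J)
    mu = M[i[0], j[0]] / J[i[0], j[0]]
    assert np.allclose(M, mu * J), "g is not a similitude for this form"
    return mu
```

For instance, the block-diagonal matrix diag(A, (A^T)^(-1)) has multiplier 1 (it is symplectic), while the scalar matrix lambda * I has multiplier lambda squared.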
2.2.2. Orthogonal similitude groups
We work with the following nondegenerate symmetric bilinear forms. Fix a basis for odd dimension and for even dimension so that the matrix is:

(2.2)
The above form exists over every field; it is unique up to equivalence and is called the split form (see [2, Chapter 2]).
Definition 2.3.
The orthogonal group is defined as
Definition 2.4.
The orthogonal similitude group with respect to (as in Equation (2.2)) is defined by
where is a group homomorphism with and the factor is called the multiplier of .
Next, we define the twisted analog of the orthogonal group. We discuss the twisted form only when , a finite field. For the twisted form, we fix a basis so that the matrix is:
(2.3) 
where and is a fixed nonsquare in , i.e., .
Definition 2.5.
The twisted orthogonal group is defined as
Definition 2.6.
The twisted orthogonal similitude group with respect to (Equation (2.3)), is defined by
where is a group homomorphism with .
2.3. Clifford algebra
Clifford algebras are far-reaching generalizations of the classical Hamiltonian quaternions. One motivation to study Clifford algebras comes from the Euclidean rotation groups. For details, we refer the reader to [11, Chapters 8 and 9]. Let be a quadratic space. Let be the Clifford algebra, where is the tensor algebra. Then is a graded algebra, say, . The subalgebra is called the special Clifford algebra, and it is a Clifford algebra in its own right. There is a unique antiautomorphism, say , such that (see, for example, [11, Proposition 8.15]). Now suppose that are nonzero anisotropic vectors in such that ; then (for details, see [11, Proposition 9.1]). From the above, using the Cartan-Dieudonné theorem, we get a well-defined map from the orthogonal group to . This map is called the spinor norm on the orthogonal group; see the next section for the precise definition.

2.3.1. Spinor norm
It is well-known that is a simple group if contains an isotropic vector and the dimension of is at least , but we do not know when is an element of the commutator subgroup of the orthogonal group. Here the theory of the spinor norm, defined via Clifford algebras, comes into play; see, for example, Artin [1, Chapter V, Page 193] or L. C. Grove [11, Chapter 9, Page 76]. We use the theory of the spinor norm developed by G. E. Wall [21]. For details, and for the connection between Clifford algebras and Wall's approach, we refer to a nice article by R. Lipschitz [15].
2.3.2. Classical spinor norm
The classical way to define the spinor norm is via Clifford algebras [11, Chapters 8 and 9]. For with , we define the reflection in the hyperplane orthogonal to by , which is an element of the orthogonal group. We know that every element of the orthogonal group can be written as a product of at most reflections.

Definition 2.7.
The spinor norm is a group homomorphism defined by , where is written as a product of reflections.
However, in practice, it is difficult to use the above definition to compute the spinor norm.
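The reflection itself, however, is easy to compute directly from the form. The sketch below (over the rationals, with function names of our own choosing) builds the matrix of the reflection in the hyperplane orthogonal to an anisotropic vector v, using the usual formula rho_v(x) = x - (2 B(x, v) / B(v, v)) v:

```python
from fractions import Fraction

def bilinear(B, u, v):
    """Evaluate the symmetric bilinear form: u^T B v."""
    n = len(u)
    return sum(u[i] * B[i][j] * v[j] for i in range(n) for j in range(n))

def reflection(B, v):
    """Matrix of the reflection rho_v in the hyperplane orthogonal to the
    anisotropic vector v:  rho_v(x) = x - (2 B(x, v) / B(v, v)) v."""
    n = len(v)
    q = bilinear(B, v, v)
    assert q != 0, "v must be anisotropic"
    cols = []
    for j in range(n):  # image of the j-th standard basis vector
        e = [Fraction(int(i == j)) for i in range(n)]
        c = 2 * bilinear(B, e, v) / q
        cols.append([e[i] - c * v[i] for i in range(n)])
    # assemble: entry (i, j) is the i-th coordinate of rho_v(e_j)
    return [[cols[j][i] for j in range(n)] for i in range(n)]
```

Each such matrix is an involution in the orthogonal group of B, and under Definition 2.7 it contributes the square class of B(v, v) to the spinor norm.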
2.3.3. Wall’s spinor norm
Wall [21], Zassenhaus [22] and Hahn [12] developed a theory for computing the spinor norm, and we now define the spinor norm using Wall's idea. For our exposition, we follow [20, Chapter 11]. For more details on the spinor norm via Wall's idea, see Bhunia [2, Chapter 4, Page 41].
Let be an element of the orthogonal group. Let and . Using we define Wall’s bilinear form on as follows:
This bilinear form satisfies the following properties:

(1) for all .
(2) is an isometry on with respect to .
(3) for all .
(4) is nondegenerate.
Then the spinor norm is , extended to by defining . An element is called regular if is a nondegenerate subspace of with respect to the form . Hahn [12, Proposition 2.1] proved that for a regular element , the spinor norm is . This gives:
Proposition 2.1.

(1) For a reflection , .
(2) For a unipotent element , the spinor norm is trivial, i.e., .

Proof.

(1) Let be a reflection in the hyperplane orthogonal to , i.e., . Then ; therefore . Hence .
(2) See [12, Corollary 2.2] for the proof.
∎
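Wall's form can be evaluated in exact arithmetic: restrict to the image of sigma - 1 and take the discriminant of the resulting Gram matrix. The sketch below (names ours) assumes the normalization in which the Wall form pairs u = (sigma - 1)x with v via B(x, v); for a single reflection this normalization differs from B(v, v) by a factor of -2, a discrepancy that disappears as a square class for even products of reflections, in particular on the special orthogonal group:

```python
from fractions import Fraction

def _det(M):
    """Determinant over the rationals by Gaussian elimination."""
    M = [row[:] for row in M]
    n = len(M)
    d = Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if M[r][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [M[r][k] - f * M[i][k] for k in range(n)]
    return d

def _independent_cols(C):
    """Indices of a maximal linearly independent set of columns of C."""
    n = len(C)
    basis, picked = [], []
    for j in range(n):
        v = [C[i][j] for i in range(n)]
        for b in basis:
            p = next(k for k in range(n) if b[k] != 0)
            if v[p] != 0:
                f = v[p] / b[p]
                v = [v[k] - f * b[k] for k in range(n)]
        if any(x != 0 for x in v):
            basis.append(v)
            picked.append(j)
    return picked

def wall_spinor_disc(B, sigma):
    """Discriminant of Wall's form for the isometry sigma of the symmetric
    form B, computed on V_sigma = Im(sigma - 1), with preimages taken to be
    standard basis vectors: the Gram entry at (j, k) is B(e_j, (sigma-1)e_k).
    For sigma a product of an even number of reflections, the square class
    of the result is the spinor norm (in this normalization)."""
    n = len(B)
    C = [[Fraction(sigma[i][j]) - (1 if i == j else 0) for j in range(n)]
         for i in range(n)]
    S = _independent_cols(C)
    if not S:  # sigma is the identity
        return Fraction(1)
    BC = [[sum(B[i][k] * C[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    G = [[BC[i][j] for j in S] for i in S]
    return _det(G)
```

For example, with B the identity form on the plane, the rotation that is the product of the reflections in (1, 0) and (1, 1) has spinor norm B((1,0),(1,0)) * B((1,1),(1,1)) = 2 modulo squares, and the discriminant above returns exactly 2.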
In this direction, we show that the Gaussian elimination algorithm we develop outputs the spinor norm (see Section 4.1).
3. Solving the word problem in classical reductive groups
Let be a reductive linear algebraic group over . Then, using the root datum, is of one of the classical types and , or of the exceptional types and . In this paper, we solve the word problem for groups of classical type. The groups that correspond to these types are:
(type ): ,
(type ): ,
(type ): ,
(type ): .
For the general linear group , one has a well-known algorithm to solve the word problem: Gaussian elimination. One observes that multiplying an element of the general linear group by an elementary matrix (also known as an elementary transvection) on the left or right amounts to a row or a column operation, respectively. Using this algorithm, one can start with any matrix and get , thus writing as a product of elementary matrices and a diagonal matrix. One objective of this paper is to develop a similar algorithm for symplectic and orthogonal similitude groups.
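For the general linear group this procedure can be sketched in a few lines. The code below (function names ours) reduces an invertible rational matrix to a diagonal matrix using only transvections, i.e. row operations R_i += c * R_j with i != j, each of which is left multiplication by an elementary matrix I + c * e_ij:

```python
from fractions import Fraction

def transvect(g, i, j, c):
    """Row operation R_i += c * R_j, i.e. left-multiply by I + c * e_ij."""
    n = len(g)
    g[i] = [g[i][k] + c * g[j][k] for k in range(n)]

def gauss_word(g):
    """Reduce an invertible matrix to diagonal form by transvections only
    (no swaps, no scalings).  Returns (ops, d): applying the recorded ops
    (i, j, c) to g in order yields the diagonal matrix d, so that
    g = T_1^{-1} ... T_m^{-1} d, where each inverse is the transvection
    with c replaced by -c."""
    n = len(g)
    g = [[Fraction(x) for x in row] for row in g]
    ops = []
    for i in range(n):
        if g[i][i] == 0:
            # repair a zero pivot by adding a lower row (exists by invertibility)
            r = next(r for r in range(i + 1, n) if g[r][i] != 0)
            transvect(g, i, r, Fraction(1))
            ops.append((i, r, Fraction(1)))
        for k in range(n):  # clear column i above and below the pivot
            if k != i and g[k][i] != 0:
                c = -g[k][i] / g[i][i]
                transvect(g, k, i, c)
                ops.append((k, i, c))
    return ops, g
```

Undoing the recorded operations in reverse order, with each coefficient negated, recovers the original matrix from the diagonal one, which is exactly the word decomposition described above.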
We first describe the elementary matrices and the elementary operations for the symplectic and orthogonal similitude groups. These elementary operations are simply multiplication by elementary matrices on the left and right, respectively. The elementary matrices used here are precisely the Steinberg generators arising from the theory of Chevalley groups. For simplicity, we write the algorithm for , , and separately.
3.1. Elementary matrices and elementary operations
In what follows, the scalar varies over the field and or . We define as the matrix with in the position and zeros everywhere else, where . We simply write for . We often use the well-known matrix identity , where is the Kronecker delta. For more details on elementary matrices, we refer to [5, Chapter 11].
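The matrix-unit identity e_ij e_kl = delta_jk e_il used throughout is easy to verify numerically; a small sketch (0-indexed, the function name is ours):

```python
import numpy as np

def e(i, j, n):
    """Matrix unit e_ij: 1 in position (i, j) and zeros elsewhere (0-indexed)."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m
```

Then e(i, j, n) @ e(k, l, n) equals e(i, l, n) exactly when j == k and is the zero matrix otherwise, which is the identity above.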
3.1.1. Elementary matrices for
We index rows and columns by . The elementary matrices are defined as follows:
We write as , where and are matrices. Since , we have and . Let us note the effect of multiplying by elementary matrices in the following table.
Row operations | Column operations
ER1: row and row | EC1: column and column
ER2: row and row | EC2: column and column
ER3: row and row | EC3: column and column
3.1.2. Elementary matrices for
We index rows and columns by . The elementary matrices are as follows:
We write as , where and are matrices. Let us note the effect of multiplying by elementary matrices in the following table.
Row operations | Column operations
ER1: row and row | EC1: column and column
ER2: row and row | EC2: column and column
ER3: row and row | EC3: column and column
ER1a: row | EC1a: column
ER2a: row | EC2a: column
3.1.3. Elementary matrices for
We index rows and columns by . The elementary matrices are defined as follows:
We write an element as , where and are matrices, and are matrices, and are matrices, . Let us note the effect of multiplying by elementary matrices in the following table.
Row operations | Column operations
ER1: row and row | EC1: column and column
ER2: row and row | EC2: column and column
ER3: row and row | EC3: column and column
ER4: row and row | EC4: column and column
ER4a: row and row | EC4a: column and column
3.2. Gaussian elimination
To explain the steps of our Gaussian elimination algorithm we need some lemmas. In this subsection, we prove these lemmas.
Lemma 3.1.
Let be of size with the number of s equal to . Let be a matrix of size such that is symmetric (resp. skew-symmetric). Then is of the form , where is symmetric (resp. skew-symmetric), and (resp. ). Furthermore, if then is symmetric (resp. skew-symmetric).
Proof.
First, observe that the matrix . Since the matrix is symmetric (resp. skew-symmetric), it follows that is symmetric (resp. skew-symmetric), and (resp. ). Also, if then is symmetric (resp. skew-symmetric). ∎
Corollary 3.2.
Let be either in or .

(1) If is a diagonal matrix , then the matrix is of the form , where is symmetric if , and is skew-symmetric with if .
(2) If is a diagonal matrix , then the matrix is of the form , where is a symmetric matrix if , and is skew-symmetric if .

Proof.
We use the condition that satisfies , and that is symmetric (using , as is diagonal) when and skew-symmetric when . Then Lemma 3.1 gives the required form for . ∎
Corollary 3.3.
Let or , where . Then the matrix is of the form , where is a symmetric matrix of size if , and skew-symmetric with if .
Proof.
We use the condition that satisfies and to get that is symmetric if and skew-symmetric if . Again, Lemma 3.1 gives the required form for . ∎
Lemma 3.4.
Let . Then,

(1) if and only if and , and
(2) if and only if and .

Proof.

(1) Let . Then satisfies , which implies and . Conversely, if satisfies the given conditions, then clearly .
(2) This follows by a similar computation.
∎
Lemma 3.5.
Let be of size , where , and let be a matrix such that is symmetric (resp. skew-symmetric). Then , where each is of the form for some or of the form for some (resp. each is of the form for some ).
Proof.
Since the matrix is symmetric (resp. skew-symmetric), the matrix is of the form , where is symmetric (resp. skew-symmetric), (resp. ), and is a row of size . Clearly, is a sum of matrices of the form . ∎
Lemma 3.6.
For ,

(1) the element is a product of elementary matrices,
(2) the element is a product of elementary matrices,
(3) the element is a product of elementary matrices.

Proof.

(1) We have .
(2) We produce these elements inductively. First we get and . Set . Then compute . So, inductively, is a product of elementary matrices.
(3) We have .
∎
Lemma 3.7.
The element is a product of elementary matrices.
Proof.
First we compute