Subquadratic-Time Algorithms for Normal Bases

05/05/2020 · Mark Giesbrecht et al. · University of Waterloo

For any finite Galois field extension 𝖪/𝖥, with Galois group G = Gal(𝖪/𝖥), there exists an element α ∈ 𝖪 whose orbit G·α forms an 𝖥-basis of 𝖪. Such an α is called a normal element and G·α is a normal basis. We introduce a probabilistic algorithm for testing whether a given α ∈ 𝖪 is normal, when G is either a finite abelian or a metacyclic group. The algorithm is based on the fact that deciding whether α is normal can be reduced to deciding whether ∑_{g ∈ G} g(α)g ∈ 𝖪[G] is invertible; it requires a slightly subquadratic number of operations. Once we know that α is normal, we show how to perform conversions between the power basis of 𝖪/𝖥 and the normal basis with the same asymptotic cost.


1 Introduction

For a finite Galois field extension 𝖪/𝖥, with Galois group G = Gal(𝖪/𝖥), an element α ∈ 𝖪 is called normal if the set G·α of its Galois conjugates forms a basis for 𝖪 as a vector space over 𝖥. The existence of a normal element for any finite Galois extension is classical, and constructive proofs are provided in most algebra texts (see, e.g., (Lang, 2002, Section 6.13)).

While there is a wide range of well-known applications of normal bases in finite fields, such as fast exponentiation (e.g., (Gao et al., 2000)), there also exist applications of normal elements in characteristic zero. For instance, in multiplicative invariant theory, for a given permutation lattice and related Galois extension, a normal basis is useful in computing the multiplicative invariants explicitly (Jamshidpey, Lemire & Schost, 2018).

A number of algorithms are available for finding a normal element in characteristic zero and in finite fields. Because of their immediate applications in finite fields, algorithms for determining normal elements in this case are most commonly seen. A fast randomized algorithm for determining a normal element in a finite field 𝔽_{q^n}, where 𝔽_q is the finite field with q elements for any prime power q and integer n, is presented by von zur Gathen & Giesbrecht (1990). A faster randomized algorithm is introduced by Kaltofen & Shoup (1998). In the bit complexity model, Kedlaya and Umans showed how to reduce the exponent of n further, by leveraging their quasi-linear time algorithm for modular composition (Kedlaya & Umans, 2011). Lenstra (1991) introduced a deterministic algorithm to construct a normal element. To the best of our knowledge, the algorithm of Augot & Camion (1994) is the most efficient deterministic method.

In characteristic zero, Schlickewei & Stepanov (1993) gave an algorithm for finding a normal basis of a number field over ℚ with a cyclic Galois group of cardinality n. Poli (1994) gives an algorithm for the more general case of finding a normal basis in an abelian extension. More generally in characteristic zero, for any Galois extension 𝖪/ℚ of degree n with Galois group given by a collection of matrices, Girstmair (1999) gives an algorithm to construct a normal element in 𝖪.

In this paper we present a new randomized algorithm that decides whether a given element in either an abelian or a metacyclic extension is normal, with a runtime subquadratic in the degree of the extension. The costs of all algorithms are measured by counting arithmetic operations in 𝖥 at unit cost. Questions related to the bit complexity of our algorithms are challenging, and beyond the scope of this paper.

Our main conventions are the following (we refer to them as Assumption 1 below). Let 𝖪/𝖥 be a finite Galois extension presented as 𝖪 = 𝖥[x]/⟨P⟩, for an irreducible polynomial P ∈ 𝖥[x] of degree n, with 𝖥 of characteristic zero. Then,

  • elements of 𝖪 are written on the power basis 1, ξ, …, ξ^{n−1}, where ξ := x mod P;

  • elements of G = Gal(𝖪/𝖥) are represented by their action on ξ.

In particular, for g ∈ G given by means of g(ξ) ∈ 𝖪, and β = B(ξ) ∈ 𝖪 with B ∈ 𝖥[x] of degree less than n, the fact that g is an 𝖥-automorphism implies that g(β) is equal to B(g(ξ)), the polynomial composition of B at g(ξ) (reduced modulo P).
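To make this convention concrete, here is a minimal Python sketch (our own illustration, not code from the paper) in the toy extension 𝖪 = ℚ[x]/⟨x² − 2⟩: an element β = B(ξ) is a list of rational coefficients, an automorphism is stored through its image g(ξ), and g(β) is obtained as B(g(ξ)) rem P via Horner's rule. The helper names (polmulmod, apply_aut) are ours.

```python
# A minimal sketch of the convention above, in the toy extension K = Q[x]/(x^2 - 2),
# with xi = x mod P and the non-trivial automorphism g given by g(xi) = -xi.
# Elements of K are coefficient lists [b0, b1] meaning b0 + b1*xi.
from fractions import Fraction as Q

P = [Q(-2), Q(0), Q(1)]          # P(x) = x^2 - 2, coefficients low to high

def polmulmod(a, b, P):
    """Multiply two polynomials (coefficient lists) and reduce modulo P (monic)."""
    n = len(P) - 1
    prod = [Q(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] += ai * bj
    # fold down degrees >= n using x^n = -(P[0] + ... + P[n-1] x^(n-1))
    for d in range(len(prod) - 1, n - 1, -1):
        c, prod[d] = prod[d], Q(0)
        for k in range(n):
            prod[d - n + k] -= c * P[k]
    return prod[:n] + [Q(0)] * (n - len(prod))

def apply_aut(B, g_of_xi, P):
    """Compute g(beta) for beta = B(xi), as B(g(xi)) mod P (Horner's rule)."""
    n = len(P) - 1
    res = [Q(0)] * n
    for coeff in reversed(B):
        res = polmulmod(res, g_of_xi, P)
        res[0] += coeff
    return res

g_of_xi = [Q(0), Q(-1)]          # g(xi) = -xi
beta = [Q(1), Q(3)]              # beta = 1 + 3*xi
print(apply_aut(beta, g_of_xi, P))   # coefficients of g(beta) = 1 - 3*xi
```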

Our algorithms combine techniques and ideas of von zur Gathen & Giesbrecht (1990) and Kaltofen & Shoup (1998): α is normal if and only if the element S_α := ∑_{g ∈ G} g(α) g is invertible in the group algebra 𝖪[G]. However, writing down S_α involves n² elements of 𝖥, which precludes a subquadratic runtime. Instead, knowing α, the algorithms use a randomized reduction to a similar question in 𝖥[G], which amounts to applying a random 𝖥-linear projection ℓ : 𝖪 → 𝖥 to all coefficients of S_α, giving us an element ∑_{g ∈ G} ℓ(g(α)) g ∈ 𝖥[G]. For that, we adapt algorithms from (Kaltofen & Shoup, 1998) that were developed for Galois groups of finite fields.

Having this projected element in hand, we need to test its invertibility. In order to do so, we present an algorithm in the abelian case which relies on the fact that 𝖥[G] is isomorphic to a multivariate polynomial ring 𝖥[x_1, …, x_t] modulo an ideal ⟨x_1^{n_1} − 1, …, x_t^{n_t} − 1⟩, where the n_i's are positive integers. For metacyclic groups, we exploit the block-Hankel structure of the matrix of multiplication by the projected element.
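For instance, for the abelian group G = ℤ/2 × ℤ/3, the isomorphism identifies 𝖥[G] with 𝖥[x_1, x_2]/⟨x_1² − 1, x_2³ − 1⟩. The short Python sketch below (our illustration, with made-up names) multiplies two elements in this representation, where exponent vectors are simply added modulo (n_1, n_2); it also exhibits a zero divisor, i.e. a non-unit, in this group algebra.

```python
# A small illustration of the identification F[G] ~ F[x1, x2]/<x1^2 - 1, x2^3 - 1>
# for the abelian group G = Z/2 x Z/3: an element is a dict mapping exponent
# vectors (e1, e2) to coefficients, and multiplication adds exponents mod (n1, n2).
from fractions import Fraction as Q
from itertools import product

N = (2, 3)                       # orders n1, n2 of the cyclic factors

def gmul(u, v):
    """Multiply two elements of F[Z/2 x Z/3] in the quotient-ring representation."""
    w = {}
    for (e, ce), (f, cf) in product(u.items(), v.items()):
        key = tuple((ei + fi) % ni for ei, fi, ni in zip(e, f, N))
        w[key] = w.get(key, Q(0)) + ce * cf
    return w

# (1 + x1) * (1 - x1) = 1 - x1^2 = 0 in the quotient: a zero divisor, hence not a unit.
u = {(0, 0): Q(1), (1, 0): Q(1)}
v = {(0, 0): Q(1), (1, 0): Q(-1)}
print({k: c for k, c in gmul(u, v).items() if c != 0})   # -> {} (the zero element)
```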

These latter questions on the cost of arithmetic operations in 𝖥[G] are closely related to that of the Fourier transform over G, and it is worth mentioning that there is a vast literature on fast algorithms for Fourier transforms over the base field ℂ; relevant to our current context, see (Clausen & Müller, 2004) and (Maslen et al., 2018) and references therein for details. At this stage, it is not clear how we can apply these methods in our context (where we work over an arbitrary 𝖥, not necessarily algebraically closed).

This paper is written from the point of view of obtaining improved asymptotic complexity estimates. Since our main goal is to highlight the exponent (in n) in our runtime analyses, costs are given using the soft-O notation: f is in Õ(g) if it is in O(g log(g)^c), for some constant c.

The first main result of this paper is the following theorem; we use a constant of the form ω(k), describing the cost of certain rectangular matrix products (see the end of this section).

Theorem 1. Under Assumption 1, if G is either abelian or metacyclic, one can test whether α ∈ 𝖪 is normal using a slightly subquadratic number of operations in 𝖥. The algorithms are randomized of the Monte Carlo type.

Once α is known to be normal, we also discuss the cost of conversion between the power basis of 𝖪/𝖥 and the normal basis G·α. Again inspired by previous work of Kaltofen & Shoup (1998), we obtain the following result.

Theorem 2. Under Assumption 1, if G is either abelian or metacyclic and α is known to be normal, we can perform basis conversion between the power basis of 𝖪/𝖥 and the normal basis G·α with the same asymptotic cost. The algorithms are randomized of the Monte Carlo type.

In both theorems, the runtime is barely subquadratic, and the exponent is obtained through fast matrix multiplication algorithms that are most likely impractical for reasonable n. However, these results show in particular that we can perform basis conversions without writing down the normal basis itself (which would require n² elements of 𝖥). Both algorithms are randomized of the Monte Carlo type: in our model, this means that they are allowed to draw random elements from a prescribed subset of 𝖥 and, for a control parameter ε, produce the correct answer with probability greater than 1 − ε (see Remark 2).

Section 2 of this paper is devoted to definitions and preliminary discussions. In Section 3, a subquadratic-time algorithm is presented for the randomized reduction of our main question to invertibility testing in 𝖥[G]; this algorithm applies to any finite polycyclic group, and in particular to abelian and metacyclic groups. In Section 4, we show that the problems of testing invertibility in 𝖥[G] and performing divisions can be solved in quasi-linear time for an abelian group G; for metacyclic groups, we give a subquadratic-time algorithm based on structured linear algebra (this will finish the proof of Theorem 1). Finally, Section 5 proves Theorem 2.

Our algorithms make extensive use of known algorithms for polynomial and matrix arithmetic; in particular, we use repeatedly the fact that polynomials of degree n in 𝖥[x], for any field 𝖥 of characteristic zero, can be multiplied in Õ(n) operations in 𝖥 (Schönhage & Strassen, 1971). As a result, arithmetic operations in 𝖪 can all be done using Õ(n) operations in 𝖥 (von zur Gathen & Gerhard, 2013). We also assume that generating a random element in 𝖥 takes constant time.

For matrix arithmetic, we will rely on some non-trivial results on rectangular matrix multiplication initiated by Lotti & Romani (1983). For k > 0, we denote by ω(k) a constant such that, over any ring, matrices of sizes n × n by n × n^k can be multiplied in O(n^{ω(k)}) ring operations (so ω(1) is the usual exponent of square matrix multiplication, which we simply write ω). The sharpest values known to date for most rectangular formats are due to Le Gall & Urrutia (2018); for k = 1, the best known value is ω < 2.373, by Le Gall (2014). Over a field, further matrix operations (such as inversion) can also be done in O(n^ω) base field operations.

Part of the results of this paper (Theorem 1 for abelian groups) was already published in the conference paper (Giesbrecht et al., 2019).

2 Preliminaries

One of the well-known proofs of the existence of a normal element for a finite Galois extension, as for example reported by Lang (2002, Theorem 6.13.1), suggests a randomized algorithm for finding such an element. Assume 𝖪/𝖥 is a finite Galois extension with Galois group G = {g_1, …, g_n}. If α ∈ 𝖪 is a normal element, then

    c_1 g_1(α) + ⋯ + c_n g_n(α) = 0, with c_1, …, c_n ∈ 𝖥,    (1)

implies c_1 = ⋯ = c_n = 0. For each g_i ∈ G, applying g_i to equation (1) yields

    c_1 g_i(g_1(α)) + ⋯ + c_n g_i(g_n(α)) = 0.    (2)

Using (1) and (2), one can form the linear system M c = 0, with c = [c_1, …, c_n]^T and where, for 1 ≤ i, j ≤ n,

    M_{i,j} = g_i(g_j(α)).    (3)

Classical proofs then proceed to show that there exists α ∈ 𝖪 with det(M) ≠ 0.

This approach can be used as the basis of a procedure to test if a given α ∈ 𝖪 is normal, by computing all the entries of the matrix M and using linear algebra to compute its determinant; using fast matrix arithmetic this requires O(n^ω) operations in 𝖪, that is, Õ(n^{ω+1}) operations in 𝖥. This is at least cubic in n; the main contribution of this paper is to show how to speed up this verification.
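As a toy illustration of this classical test (ours, not code from the paper), the following Python snippet carries it out for 𝖪 = ℚ(√2), where G = {Id, g} with g(√2) = −√2: it builds the 2 × 2 matrix of conjugates exactly over ℚ and checks whether its determinant vanishes.

```python
# Classical normality test in the toy field K = Q(sqrt 2): build the matrix
# M_{i,j} = g_i(g_j(alpha)) for G = {id, g}, g(sqrt2) = -sqrt2, and check that
# its determinant is non-zero.  Elements of K are pairs (a, b) for a + b*sqrt(2).
from fractions import Fraction as Q

def kmul(u, v):
    (a, b), (c, d) = u, v
    return (a * c + 2 * b * d, a * d + b * c)   # (a+b r)(c+d r), with r^2 = 2

def ksub(u, v):
    return (u[0] - v[0], u[1] - v[1])

def conj(u):                                    # the non-trivial automorphism g
    return (u[0], -u[1])

def is_normal(alpha):
    G = [lambda x: x, conj]                     # Galois group {id, g}
    M = [[gi(gj(alpha)) for gj in G] for gi in G]
    det = ksub(kmul(M[0][0], M[1][1]), kmul(M[0][1], M[1][0]))
    return det != (Q(0), Q(0))

print(is_normal((Q(0), Q(1))))   # sqrt(2): its conjugates are dependent -> False
print(is_normal((Q(1), Q(1))))   # 1 + sqrt(2): a normal element         -> True
```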

Before entering that discussion, we briefly comment on the probability that a random α be a normal element: if we write α = a_0 + a_1 ξ + ⋯ + a_{n−1} ξ^{n−1}, the determinant of M is a (not identically zero) homogeneous polynomial of degree n in a_0, …, a_{n−1}. If the a_i's are chosen uniformly at random in a finite set S ⊂ 𝖥, the Lipton-DeMillo-Schwartz-Zippel lemma implies that the probability that α be normal is at least 1 − n/|S|.

If G is cyclic generated by an element σ, so that G = {σ^0, …, σ^{n−1}}, von zur Gathen & Giesbrecht (1990) avoid computing a determinant by computing the GCD of ∑_{0 ≤ i < n} σ^i(α) x^i and x^n − 1. In effect, this amounts to testing whether the orbit sum S_α := ∑_{g ∈ G} g(α) g is invertible in the group ring 𝖪[G], which is isomorphic to 𝖪[x]/⟨x^n − 1⟩. This is a general fact: for any G, the matrix M above is the matrix of left multiplication by S_α, where we index rows by the elements g ∈ G and columns by their inverses g^{−1}. In terms of notation, for any field 𝖫 (typically, we will take 𝖫 = 𝖪 or 𝖫 = 𝖥) and β in 𝖫[G], we will write M_β for the left multiplication matrix by β in 𝖫[G], using the two bases shown above. In other words, the matrix M of (3) is M_{S_α}.
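In the same toy field 𝖪 = ℚ(√2), the cyclic-case criterion reads: α is normal if and only if gcd(α + σ(α)x, x² − 1) = 1 in 𝖪[x]. Since x² − 1 splits as (x − 1)(x + 1) over ℚ, this amounts to checking that the polynomial vanishes at neither 1 nor −1; the sketch below (our illustration) does exactly that.

```python
# Cyclic-case criterion in K = Q(sqrt 2) with G = <g> of order n = 2:
# alpha is normal iff gcd(alpha + g(alpha) x, x^2 - 1) = 1 in K[x], i.e. iff the
# polynomial alpha + g(alpha) x vanishes at neither x = 1 nor x = -1.
from fractions import Fraction as Q

def kadd(u, v): return (u[0] + v[0], u[1] + v[1])
def ksub(u, v): return (u[0] - v[0], u[1] - v[1])
def conj(u):    return (u[0], -u[1])            # g(a + b sqrt2) = a - b sqrt2

def orbit_poly_is_unit(alpha):
    a0, a1 = alpha, conj(alpha)                 # S_alpha <-> a0 + a1 x in K[x]/<x^2-1>
    at_plus_one  = kadd(a0, a1)                 # value at x = 1
    at_minus_one = ksub(a0, a1)                 # value at x = -1
    zero = (Q(0), Q(0))
    return at_plus_one != zero and at_minus_one != zero

print(orbit_poly_is_unit((Q(0), Q(1))))   # sqrt(2):   False (not normal)
print(orbit_poly_is_unit((Q(1), Q(1))))   # 1+sqrt(2): True  (normal)
```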

The previous discussion shows that α being normal is equivalent to S_α being a unit in 𝖪[G]. This point of view may make it possible to avoid linear algebra of size n over 𝖪, but writing S_α itself still involves n² elements of 𝖥. The following lemma is the main new ingredient in our algorithm: it gives a randomized reduction to testing whether a suitable projection of S_α in 𝖥[G] is a unit.

Lemma 2. For α ∈ 𝖪, M_{S_α} is invertible if and only if M_{ℓ(S_α)} is invertible, for a generic 𝖥-linear projection ℓ : 𝖪 → 𝖥, where ℓ(S_α) := ∑_{g ∈ G} ℓ(g(α)) g ∈ 𝖥[G].

Proof. For a fixed α, any entry of M_{S_α} can be written as

    c_0 + c_1 ξ + ⋯ + c_{n−1} ξ^{n−1}, with c_0, …, c_{n−1} ∈ 𝖥,    (4)

and, for a projection ℓ, the corresponding entry in M_{ℓ(S_α)} can be written c_0 ℓ_0 + ⋯ + c_{n−1} ℓ_{n−1}, with ℓ_i := ℓ(ξ^i). Replacing these ℓ_i's by indeterminates L_i's, the determinant becomes a polynomial Δ in 𝖥[L_0, …, L_{n−1}]. Viewing Δ in 𝖪[L_0, …, L_{n−1}] and specializing the L_i's to 1, ξ, …, ξ^{n−1}, we recover det(M_{S_α}), which is non-zero by assumption. Hence, Δ is not identically zero, and the conclusion follows for this direction.

Conversely, assume M_{S_α} is not invertible. Following the proof of Jamshidpey et al. (2018, Lemma 4), we first show that there exists a non-zero vector with entries in 𝖥 in the kernel of M_{S_α}.

The elements of G act on the rows of M_{S_α} entrywise, and this action permutes the rows of the matrix: for h ∈ G, applying h to all entries of the row indexed by g ∈ G yields the row indexed by hg.

Since M_{S_α} is singular, there exists a non-zero v ∈ 𝖪^n such that M_{S_α} v = 0; we choose such a v having the minimum number of non-zero entries, and we normalize it so that its entry of some index i_0 equals one. Let h be in G and let h(v) ∈ 𝖪^n denote the vector with entries h(v_j). Applying h to the relations expressing M_{S_α} v = 0, and using the fact that h permutes the rows of M_{S_α}, we obtain M_{S_α} h(v) = 0. Hence u := v − h(v) is in the kernel of M_{S_α}. On the other hand, since the i_0-th entry of both v and h(v) is one, the i_0-th entry of u is zero, and every zero entry of v yields a zero entry of u. Thus the minimality assumption on v shows that u = 0, equivalently h(v) = v; since this holds for any h ∈ G, we conclude that v has entries in 𝖥.

Now we show that M_{ℓ(S_α)} is not invertible, for any choice of ℓ. By Equation (4), each entry of M_{ℓ(S_α)} is the image under ℓ of the corresponding entry of M_{S_α}; in other words, M_{ℓ(S_α)} = ℓ(M_{S_α}), with ℓ applied entrywise. Since v has entries in 𝖥, M_{S_α} v = 0 yields

    M_{ℓ(S_α)} v = ℓ(M_{S_α}) v = ℓ(M_{S_α} v) = 0

for any ℓ, and M_{ℓ(S_α)} is not invertible for any ℓ.

Our algorithm can be sketched as follows: given α in 𝖪, choose a random 𝖥-linear projection ℓ : 𝖪 → 𝖥, and let

    t_α := ℓ(S_α) = ∑_{g ∈ G} ℓ(g(α)) g ∈ 𝖥[G].    (5)

Note that M_{t_α} is equal to ℓ(M_{S_α}), that is, the multiplication matrix by t_α in 𝖥[G], where, as above, we index rows by the elements g ∈ G and columns by their inverses g^{−1}. Then, the previous lemma can be rephrased as follows: for α ∈ 𝖪, α is normal if and only if t_α is invertible in 𝖥[G], for a generic 𝖥-linear projection ℓ. Thus, once t_α is known, we are left with testing whether it is a unit in 𝖥[G]. In the next two sections, we address the respective questions of computing t_α, and of testing its invertibility in 𝖥[G]. If α is not normal, S_α is not a unit. In this case, the proof of Lemma 2 established that t_α is not a unit for any ℓ, so our algorithm always returns the correct answer in this case.
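The following Python sketch (ours; the helper names are not from the paper) runs this loop on the toy example 𝖪 = ℚ(√2), G = ℤ/2: it draws a random projection ℓ, forms the coefficients of t_α, and tests invertibility of its 2 × 2 multiplication matrix in ℚ[G]. For α = √2 the answer is False for every ℓ, as guaranteed by Lemma 2; for α = 1 + √2 it is True unless the random ℓ is unlucky (the Monte Carlo aspect).

```python
# Project the orbit sum into F[G] with a random F-linear form ell and test
# invertibility there, instead of working over K.  Toy setting: K = Q(sqrt 2), G = Z/2.
import random
from fractions import Fraction as Q

def conj(u): return (u[0], -u[1])               # the non-trivial automorphism

def projected_orbit_sum_is_unit(alpha, ell):
    """ell = (l0, l1) encodes the F-linear form a + b*sqrt2 -> l0*a + l1*b."""
    # coefficients of t_alpha = sum_g ell(g(alpha)) g in F[G], F = Q, G = Z/2
    c0 = ell[0] * alpha[0] + ell[1] * alpha[1]
    c1 = ell[0] * conj(alpha)[0] + ell[1] * conj(alpha)[1]
    # the left-multiplication matrix of t_alpha in Q[G] ~ Q[x]/<x^2 - 1> is the
    # circulant [[c0, c1], [c1, c0]]; it is invertible iff c0^2 - c1^2 != 0
    return c0 * c0 - c1 * c1 != 0

random.seed(0)
for alpha in [(Q(0), Q(1)), (Q(1), Q(1))]:      # sqrt2 (not normal), 1+sqrt2 (normal)
    ell = (Q(random.randint(1, 100)), Q(random.randint(1, 100)))
    print(alpha, projected_orbit_sum_is_unit(alpha, ell))
```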

If α is normal, the polynomial Δ in the proof of Lemma 2 is a non-zero homogeneous polynomial of degree n in L_0, …, L_{n−1}. Thus, if we choose the coefficients ℓ_0, …, ℓ_{n−1} of ℓ uniformly at random in any fixed finite subset S of 𝖥, by the Lipton-DeMillo-Schwartz-Zippel lemma, we return the correct answer with probability at least 1 − n/|S|.

3 Computing projections of the orbit sum

In this section we present an algorithm to compute t_α when G is polycyclic (we give a definition of this family of groups and recall some well-known results about them in Subsection 3.2). To motivate our algorithm, we start with the simple case of a cyclic group. We will see that our algorithms follow closely ideas used by Kaltofen & Shoup (1998) over finite fields.

Suppose that G is cyclic with generator σ, so that given α in 𝖪 and an 𝖥-linear projection ℓ : 𝖪 → 𝖥, our goal is to compute

    ℓ(σ^i(α)), for i = 0, …, n − 1.    (6)

Kaltofen & Shoup (1998) call this the automorphism projection problem and gave an algorithm to solve it in subquadratic time, when σ is the q-power Frobenius of a finite field. The key idea in their algorithm is to use the baby-steps/giant-steps technique: for a suitable parameter t, the values in (6) can be rewritten as

    ℓ(σ^{jt + i}(α)) = (ℓ ∘ σ^{jt})(σ^i(α)), for 0 ≤ i < t and 0 ≤ j < ⌈n/t⌉.

First, we compute all baby steps σ^i(α) for 0 ≤ i < t. Then we compute all giant steps ℓ ∘ σ^{jt} for 0 ≤ j < ⌈n/t⌉, where the ℓ ∘ σ^{jt}'s are themselves 𝖥-linear mappings 𝖪 → 𝖥. Finally, a matrix product yields all values ℓ(σ^{jt+i}(α)).

The original algorithm of Kaltofen & Shoup (1998) relies on the properties of the Frobenius mapping to achieve a subquadratic runtime. In our case, we cannot apply these results directly; instead, we have to revisit the proofs of (Kaltofen & Shoup, 1998, Lemmata 3 and 4), now considering rectangular matrix multiplication. Our exponents involve constants of the form ω(k) defined at the end of Section 1; upper bounds for them follow from the bounds given by Le Gall & Urrutia (2018) and the fact that k ↦ ω(k) is convex (Lotti & Romani, 1983). Note also the inequality ω(k) ≥ 1 + k for k ≥ 1, since ω(k) describes products with input and output size Θ(n^{1+k}).
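The following sketch (ours) illustrates the baby-steps/giant-steps structure only: for simplicity σ is given as an arbitrary invertible matrix acting on coordinate vectors, whereas the paper applies σ through modular composition. The point is that all values ℓ(σ^k(α)) come out of one final product of giant-step forms against baby-step vectors.

```python
# Baby-steps/giant-steps computation of ell(sigma^k(alpha)), k = 0..n-1, where
# sigma is (for illustration only) an invertible matrix acting on coordinates.
from fractions import Fraction as Q
from math import isqrt

def mat_vec(M, v):   return [sum(r[j] * v[j] for j in range(len(v))) for r in M]
def vec_mat(v, M):   return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(v))]
def dot(u, v):       return sum(a * b for a, b in zip(u, v))

n = 6
sigma = [[Q(1) if j == (i + 1) % n else Q(0) for j in range(n)] for i in range(n)]  # cyclic shift
alpha = [Q(k * k + 1) for k in range(n)]
ell   = [Q(2 * k + 3) for k in range(n)]

t = isqrt(n) + 1                               # baby-step size ~ sqrt(n)
# baby steps: alpha, sigma(alpha), ..., sigma^(t-1)(alpha)
babies = [alpha]
for _ in range(t - 1):
    babies.append(mat_vec(sigma, babies[-1]))
# giant steps: the linear forms ell, ell o sigma^t, ell o sigma^(2t), ...
giants, form = [], ell
for _ in range((n + t - 1) // t):
    giants.append(form)
    for _ in range(t):
        form = vec_mat(form, sigma)            # compose with sigma, t times
# the final "matrix product": values[j*t + i] = (ell o sigma^(j*t))(sigma^i(alpha))
values = [dot(g, b) for g in giants for b in babies][:n]

# direct computation, for comparison
direct, beta = [], alpha
for _ in range(n):
    direct.append(dot(ell, beta))
    beta = mat_vec(sigma, beta)
print(values == direct)                        # -> True
```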

3.1 Multiple automorphism evaluation and applications

The key to the algorithms below is the remark following Assumption 1, which reduces automorphism evaluation to modular composition of polynomials. Over finite fields, this idea goes back to von zur Gathen & Shoup (1992), where it is credited to Kaltofen.

For instance, given g and h in G (each by means of its image at ξ), we can deduce g ∘ h (again, by means of its image at ξ) as H(g(ξ)) rem P, where H ∈ 𝖥[x] is the polynomial of degree less than n such that h(ξ) = H(ξ); this can be done in subquadratic time using Brent and Kung's modular composition algorithm (Brent & Kung, 1978). The algorithms below describe similar operations along these lines, involving several simultaneous evaluations. In this subsection, we work under Assumption 1 and we make no special assumption on G.
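As an illustration (ours; plain Horner evaluation stands in for Brent and Kung's fast algorithm), here is the composition of two automorphisms of ℚ(ζ_8) = ℚ[x]/⟨x⁴ + 1⟩, with g(ξ) = ξ³ and h(ξ) = ξ⁵ = −ξ; the result is (g ∘ h)(ξ) = −ξ³.

```python
# Composing two automorphisms via polynomial composition modulo P, in the toy
# field K = Q[x]/(x^4 + 1) = Q(zeta_8), with g(xi) = xi^3 and h(xi) = xi^5 = -xi.
from fractions import Fraction as Q

P = [Q(1), Q(0), Q(0), Q(0), Q(1)]            # x^4 + 1, coefficients low to high
n = len(P) - 1

def polmulmod(a, b):
    prod = [Q(0)] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            prod[i + j] += a[i] * b[j]
    for d in range(2 * n - 2, n - 1, -1):     # fold down using xi^4 = -1
        c, prod[d] = prod[d], Q(0)
        for k in range(n):
            prod[d - n + k] -= c * P[k]
    return prod[:n]

def compose_mod(H, gamma):
    """Return H(gamma) mod P by Horner's rule: this is (g o h)(xi) when
    gamma = g(xi) and H is the polynomial with h(xi) = H(xi)."""
    res = [Q(0)] * n
    for coeff in reversed(H):
        res = polmulmod(res, gamma)
        res[0] += coeff
    return res

g_xi = [Q(0), Q(0), Q(0), Q(1)]               # g(xi) = xi^3
h_xi = [Q(0), Q(-1), Q(0), Q(0)]              # h(xi) = xi^5 = -xi
print(compose_mod(h_xi, g_xi))                # (g o h)(xi) = -xi^3
```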

Lemma 3.1. Given g in G and β_1, …, β_m in 𝖪, for any parameter t ≥ 1 and with s := ⌈n/t⌉, we can compute g(β_1), …, g(β_m) using Õ((t + ms)n) operations in 𝖥, plus one (ms) × t by t × n matrix product. (Compare (Kaltofen & Shoup, 1998, Lemma 3).)

Proof. As noted above, for i ≤ m, g(β_i) = B_i(γ) rem P, with γ := g(ξ) and B_i ∈ 𝖥[x] of degree less than n such that β_i = B_i(ξ). Rewrite each B_i as

    B_i = ∑_{0 ≤ j < s} B_{i,j} x^{jt},

where the B_{i,j}'s are polynomials of degree less than t. The next step is to compute the powers γ^j, for j = 2, …, t. There are O(t) products in 𝖪 to perform, so this amounts to Õ(tn) operations in 𝖥.

Having these powers in hand, one can form the matrix A whose columns are the coefficient vectors of 1, γ, …, γ^{t−1} (with entries in 𝖥); this matrix has n rows and t columns. We also form the matrix D whose rows are the coefficient vectors of the B_{i,j}'s; this matrix has ms rows and t columns.

Compute D A^T; this is the (ms) × t by t × n rectangular matrix product accounted for in the statement, and the rows of the result give all the B_{i,j}(γ)'s. The last step, to get all the g(β_i) = B_i(γ)'s, is to write them as

    B_i(γ) = ∑_{0 ≤ j < s} B_{i,j}(γ) (γ^t)^j.

Using Horner's scheme, this takes O(ms) operations in 𝖪, which is Õ(msn) operations in 𝖥. The parameter t is then chosen so as to balance the costs of these steps.

Consider in , positive integers and elements in , for , . If and , we can compute

using operations in . Define . For in and non-negative integers , define

Assume then that for some in , we know

we show how to compute

Since our input is , it will be enough to go through this process for all values of to obtain the output of the algorithm.

For a given index , and for define further

in particular, and . Hence, given , it is enough to show how to compute , for indices . This is done by writing

with

The automorphisms can be computed iteratively by modular composition; the bottleneck is the application of to a subset of . Using Lemma 3.1, since has elements, this takes operations in .

For a given index , this is repeated times. Adding up for all indices , this amounts to repetitions, which is by assumption; the conclusion follows.

We now present dual versions of the previous two lemmas (note that Kaltofen & Shoup (1998) also have such a discussion). Seen as an 𝖥-linear map 𝖪 → 𝖪, the operator β ↦ g(β) admits a transpose, which maps an 𝖥-linear form ℓ on 𝖪 to the 𝖥-linear form ℓ ∘ g. The transposition principle (Kaminski et al., 1988; Canny et al., 1989) implies that if a linear map can be computed in time T, its transpose can be computed in time O(T). In particular, given 𝖥-linear forms ℓ_1, …, ℓ_m on 𝖪 and g in G, transposing Lemma 3.1 shows that we can compute ℓ_1 ∘ g, …, ℓ_m ∘ g for the same asymptotic cost. The following lemma sketches the construction.

Given 𝖥-linear forms ℓ_1, …, ℓ_m on 𝖪 and g in G, we can compute ℓ_1 ∘ g, …, ℓ_m ∘ g for the same cost as in Lemma 3.1. Indeed, each ℓ_i is given by its values on the power basis 1, ξ, …, ξ^{n−1}, and ℓ_i ∘ g is represented by its values at 1, γ, …, γ^{n−1}, with γ := g(ξ).

Let t and s = ⌈n/t⌉ be as in the proof of Lemma 3.1. Compute the "giant steps" γ^t, γ^{2t}, …, γ^{(s−1)t}, and for i ≤ m and 0 ≤ j < s, deduce the linear forms ℓ_{i,j} defined by ℓ_{i,j}(β) = ℓ_i(γ^{jt} β) for all β in 𝖪. Each of them can be obtained by a transposed multiplication in Õ(n) operations in 𝖥 (Shoup, 1995, Section 4.1), so that the total cost thus far is Õ(msn) operations in 𝖥.

Finally, multiply the matrix with entries the coefficients of all the ℓ_{i,j}'s (as rows) by the matrix with entries the coefficients of 1, γ, …, γ^{t−1} (as columns) to obtain all values ℓ_{i,j}(γ^r) = ℓ_i(γ^{jt+r}), for i ≤ m, 0 ≤ j < s and 0 ≤ r < t. This is an (ms) × n by n × t rectangular matrix product.
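In matrix terms, the transposition principle is very concrete: if A is the matrix of β ↦ g(β) in the power basis, then ℓ ∘ g is obtained by applying A^T to the coordinate vector of ℓ. The tiny sketch below (ours) checks this on ℚ(ζ_8) with g(ξ) = ξ³; it only illustrates the statement, not the fast transposed algorithm.

```python
# Transposition principle, concretely: the values of ell o g on the power basis
# are obtained by multiplying the row vector of values of ell by the matrix A of g.
from fractions import Fraction as Q

n = 4
def g_matrix():
    """Matrix of g(xi) = xi^3 on the power basis of Q[x]/(x^4 + 1) (columns = images)."""
    cols = []
    for j in range(n):
        e = (3 * j) % (2 * n)                 # xi^(3j), using xi^8 = 1 and xi^4 = -1
        col = [Q(0)] * n
        col[e % n] = Q(-1) if e >= n else Q(1)
        cols.append(col)
    return [[cols[j][i] for j in range(n)] for i in range(n)]

A = g_matrix()
ell = [Q(1), Q(2), Q(3), Q(4)]                # values of ell at 1, xi, xi^2, xi^3
ell_g = [sum(ell[i] * A[i][j] for i in range(n)) for j in range(n)]   # ell o g
# sanity check on beta = xi: (ell o g)(xi) = ell(xi^3) = 4
print(ell_g[1] == Q(4))                       # -> True
```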

From this, we deduce the transposed version of Lemma 3.1, whose proof follows the same pattern.

Consider in , positive integers and -linear forms , for , . If and , we can compute

using operations in . We proceed as in Lemma 3.1, reversing the order of the steps. Using the same index set as before, define, for in and non-negative integers

For , assuming that we know

we compute

This time, for , we set

where for a non-negative integer , is obtained by setting to zero the coefficients of in the base-two expansion of .

Starting from , we compute all for , since . This is done essentially as in Lemma 3.1, but using Lemma 3.1 this time, in order to do right-composition by . The cost analysis is as in Lemma 3.1.

3.2 Computing the orbit sum projection for polycyclic groups

Our main algorithm in this section applies to a family of groups known as polycyclic; see (Holt et al., 2005, Chapter 8) for more details on such groups.

Our group G is called polycyclic if it has a normal series

    G = G_1 ⊳ G_2 ⊳ ⋯ ⊳ G_r ⊳ G_{r+1} = {1},

where each quotient G_i/G_{i+1} is cyclic; without loss of generality, we assume that G_{i+1} ≠ G_i holds for all i, so that each index n_i := [G_i : G_{i+1}] is at least 2, with n = n_1 ⋯ n_r. Finitely generated nilpotent or abelian groups are polycyclic; in general, any finite solvable group is polycyclic. Our key families of examples in the next section (abelian and metacyclic groups) thus fit into this category.

If G is polycyclic then, up to renumbering, its elements can be written uniquely as

    σ_1^{e_1} ⋯ σ_r^{e_r}, with 0 ≤ e_i < n_i for all i,

where σ_i ∈ G_i is an element whose class generates the cyclic quotient G_i/G_{i+1}. Elements of 𝖪[G] or 𝖥[G] are accordingly written as polynomials ∑ c_{e_1, …, e_r} σ_1^{e_1} ⋯ σ_r^{e_r}, with 0 ≤ e_i < n_i for all i, and coefficients c_{e_1, …, e_r} in either 𝖪 or 𝖥.
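As a small illustration (ours) of this indexing, take G = Gal(ℚ(ζ_8)/ℚ) ≅ ℤ/2 × ℤ/2, generated by σ_1 : ξ ↦ ξ³ and σ_2 : ξ ↦ ξ⁵ with n_1 = n_2 = 2; each group element is written uniquely as σ_1^{e_1} σ_2^{e_2}, and composing automorphisms amounts to multiplying exponents modulo 8.

```python
# Indexing the elements of a (here abelian) polycyclic Galois group by exponent
# vectors (e1, e2).  Toy example: G = Gal(Q(zeta_8)/Q) = <g1> x <g2>, where an
# automorphism is recorded by the odd exponent k such that it maps xi to xi^k.
from itertools import product

n_i = (2, 2)                       # relative orders n1, n2
gens = (3, 5)                      # sigma1: xi -> xi^3,  sigma2: xi -> xi^5

for e in product(*(range(ni) for ni in n_i)):      # all (e1, e2), 0 <= ei < ni
    k = 1
    for gi, ei in zip(gens, e):
        k = (k * pow(gi, ei, 8)) % 8               # composition multiplies exponents mod 8
    print(f"sigma1^{e[0]} sigma2^{e[1]} : xi -> xi^{k}")
```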

Proposition. Suppose that G is polycyclic, with notation as above. For α in 𝖪 and an 𝖥-linear projection ℓ, t_α, as defined in (5), can be computed using a subquadratic number of operations in 𝖥.

Our goal is to compute

    ℓ(σ_1^{e_1} ⋯ σ_r^{e_r}(α))    (7)

for all indices e_1, …, e_r such that 0 ≤ e_i < n_i holds for i = 1, …, r; here, ℓ is an 𝖥-linear projection 𝖪 → 𝖥.

Our construction is inspired by that sketched in the cyclic case. Define to be the unique index in such that and Then, all elements in (7) can be computed with the following steps, the sum of whose costs proves the proposition.

Step 1. Apply Lemma 3.1, with for all , to get

for all indices such that holds for and . This amounts to taking , and for in the lemma. For the lemma to apply, we have to check that the product of these indices is . Indeed, this product is at most

Hence, the lemma applies, and the cost of this step is .

Step 2. Compute , for as above. The cost is that of modular compositions, which is negligible compared to the cost of the previous step.

Step 3. Use Lemma 3.1 with for all , to compute

for all indices and for . This amounts to using the lemma with indices , and for . Again, we have to verify that is . Indeed, we have

By definition, we have , so . Because we assume , the second term is also at most , so the product is at most . Hence, Lemma 3.1 applies, and computes all