The present paper is dedicated to the description of algorithms for fast arithmetic in skew polynomial rings. Since they were first introduced by Ore, skew polynomials and their variants have been widely studied in several areas of mathematics. In particular, skew polynomials over finite fields have various applications in coding theory, cryptography, and the study of $p$-adic Galois representations. Fast arithmetic for manipulating these objects is useful for all such applications, and has been improved over time since the first breakthrough paper on computational skew polynomials over finite fields, due to Giesbrecht.
Let $k$ be a field and let $K$ be a finite extension of $k$, endowed with an endomorphism $\theta$. We assume that $\theta$ has order $r$ and that $K^\theta = k$. We consider the ring $K[X,\theta]$ of skew polynomials with coefficients in $K$. This is a noncommutative ring in which the relation $Xa = \theta(a)X$ holds for all $a \in K$ (for more detail about the definitions, see Section 1.1). The main problem addressed in this paper is the fast multiplication of elements of $K[X,\theta]$. The complexity of the algorithms is measured as the number of elementary operations in $k$, expressed in terms of the degree $d$ of the skew polynomials to be multiplied and the degree $r$ of $K$ over $k$.
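As a minimal illustration of the defining relation $Xa = \theta(a)X$ (a sketch, not from the paper): take $K = \mathbb{C}$ viewed over $k = \mathbb{R}$, with $\theta$ complex conjugation (so $r = 2$). The schoolbook product of skew polynomials, with a hypothetical helper `skew_mul`, then reads:

```python
def skew_mul(P, Q, theta):
    """Schoolbook product of skew polynomials P*Q.

    P, Q are coefficient lists (index i = coefficient of X^i) and theta
    is the twisting automorphism; the product uses X*a = theta(a)*X,
    i.e. (a X^i)(b X^j) = a * theta^i(b) * X^(i+j).
    """
    res = [0] * (len(P) + len(Q) - 1)
    for i, a in enumerate(P):
        for j, b in enumerate(Q):
            t = b
            for _ in range(i):          # apply theta i times
                t = theta(t)
            res[i + j] += a * t
    return res

conj = lambda z: z.conjugate()  # theta = complex conjugation, order r = 2

# X * a = theta(a) * X, so the ring is not commutative:
a = 2 + 3j
left = skew_mul([0, 1], [a], conj)   # X * a
right = skew_mul([a], [0, 1], conj)  # a * X
print(left, right)                   # prints: [0j, (2-3j)] [0j, (2+3j)]
```

Since conjugation is a ring automorphism, this product is associative, which can be checked on small examples.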
State of the art. The naïve method for multiplication of skew polynomials of degree $d$ yields a quadratic algorithm: $O(d^2)$ operations in $K$, each combined with the application of a power of $\theta$. This complexity has since been improved. Let $\omega$ denote the exponent of matrix multiplication. The authors of the present paper previously gave several algorithms for multiplication in $K[X,\theta]$. The most recent results, due to Puchinger and Wachter-Zeh, improve on the previous ones when the degree $d$ is small compared to $r$, which is the most relevant case for applications in coding theory. In the context of differential operators (which share many similarities with skew polynomials), Benoit, Bostan and van der Hoeven have obtained a quasi-optimal complexity (see Theorem 1 of their paper) for multiplication. We expect that a similar complexity should be achievable for $K[X,\theta]$ as well, but we have only obtained it in particular regimes of the parameters.
Contributions of the paper. This paper's main algorithm improves the complexity of the best known algorithms for multiplication in $K[X,\theta]$ to $\tilde O(d\,r^{\omega-2})$ operations in $K$ when $d \geq r$. For $d = r$, this gives a complexity of $\tilde O(r^{\omega-1})$ operations in $K$. This is quasi-optimal in the sense that matrix multiplication can be reduced to skew polynomial multiplication (this is for example a consequence of Proposition 1.6 below), so that any improvement on the exponent of skew polynomial multiplication would lead to a similar improvement for matrix multiplication. We also design a new algorithm for multiplication of skew polynomials of small degree in $K[X,\theta]$.
We also show that our method can be used to improve the best known complexities for various related problems, such as multi-point evaluation, minimal subspace polynomials, and interpolation, which have been studied previously. We also improve the complexities of computing greatest common divisors and least common multiples.
Organization of the paper. The first section of the paper focuses on elementary operations for skew polynomials with normal bases: evaluation and interpolation. More precisely, if $P = \sum_i p_i X^i \in K[X,\theta]$, then $\sigma_P \colon x \mapsto \sum_i p_i\,\theta^i(x)$ is a $k$-linear endomorphism of $K$, and the map $P \mapsto \sigma_P$ is a morphism of $k$-algebras $K[X,\theta] \to \operatorname{End}_k(K)$. In this section, we describe how $\sigma_P$ can be computed efficiently using a normal basis and, conversely, how to recover (the reduction modulo $X^r - 1$ of) $P$ from the datum of $\sigma_P$ (see Proposition 1.6). We also examine in more detail how to solve the same evaluation/interpolation problems for $P$ of small degree at only the first few elements of a normal basis.
In the second section, we present our algorithm for fast multiplication of skew polynomials. First, we study how the multiplication can be done efficiently modulo $X^r - z$ through evaluation/interpolation on a normal basis and matrix multiplication. We then generalize this study to multiplication modulo $N(X^r)$ for any irreducible polynomial $N$. This allows us to give an algorithm for multiplication of skew polynomials of degree $d \geq r$ that works in $\tilde O\bigl(\tfrac{d}{r}\,\mathrm{MM}(r)\bigr)$ operations in $k$ (where $\mathrm{MM}(r)$ denotes the complexity of multiplication of square matrices of size $r$).
In the third section, we give several applications to fast arithmetic for skew polynomials. We show how to perform general multi-point evaluation, minimal subspace polynomial computation, and interpolation, as well as the usual operations on skew polynomials such as (extended) Euclidean division, greatest common divisors, and least common multiples.
1 Fast evaluation
In this section, we present the notion of skew polynomials, and we study the problems of their evaluation and interpolation using normal bases.
1.1 Definitions and notations
Let $k$ be a field and let $K$ be an étale $k$-algebra (recall that this means that $K$ is isomorphic to a finite product of finite separable field extensions of $k$). Let $\theta$ be an automorphism of $K$. We assume that $\theta$ has finite order $r$ and that $K^\theta = k$. The ring $K[X,\theta]$ of skew polynomials with coefficients in $K$ is the ring whose underlying group is the additive group of polynomials $\sum_i a_i X^i$ with $a_i \in K$, and whose multiplication is determined by the relation
$$X a = \theta(a)\,X \qquad \text{for all } a \in K.$$
The ring $K[X,\theta]$ is not commutative unless $\theta = \mathrm{id}_K$.
Examples. The following situations are examples of the general setting that we are considering:
(Split case) $K = k \times \dots \times k$ ($r$ copies) and $\theta$ is the shift operator $(x_1, \dots, x_r) \mapsto (x_2, \dots, x_r, x_1)$,
(Extensions of finite fields) $k = \mathbb{F}_q$, $K = \mathbb{F}_{q^r}$, and $\theta$ is the Frobenius endomorphism $x \mapsto x^q$ of $K$,
(Cyclotomic extensions) $k = \mathbb{Q}$ and $K = \mathbb{Q}(\zeta_p)$, where $\zeta_p$ is a primitive $p$-th root of unity and $p$ is prime; $\theta$ is a generator of the Galois group (which is the cyclic group $(\mathbb{Z}/p\mathbb{Z})^\times$).
(Kummer extensions) $k$ contains a primitive $r$-th root of unity $\zeta$, $K = k(\beta)$ with $\beta^r = c$ for some suitable $c \in k$, and $\theta$ takes $\beta$ to $\zeta\beta$.
The last two examples have applications to space-time codes.
Usually, $K$ is assumed to be a field extension of $k$. We consider the more general setting of an étale $k$-algebra because it is stable under base change: if $K$ is étale over $k$ and $k'$ is an extension of $k$, then $K \otimes_k k'$ is étale over $k'$ (but it is not a field in general, even if $K$ is). This feature is used mostly in Section 2.1.2, and it does not make the classical results any more difficult to prove.
A normal basis of $K$ is a basis $(\alpha_0, \alpha_1, \dots, \alpha_{r-1})$ of $K$ over $k$ such that $\theta(\alpha_i) = \alpha_{i+1}$ for all $i$ (the indices being taken modulo $r$).
Proposition 1.3 (Normal Basis Theorem).
Assuming that $\theta$ has order $r$ and that $K^\theta = k$, $K$ has a normal basis.
The problem of the construction of normal bases has been widely studied, both in the case of finite fields and in the case of number fields. In the cyclotomic and Kummer cases above, it is easy to exhibit a normal basis: in the cyclotomic case, the basis $(\zeta_p, \theta(\zeta_p), \dots, \theta^{r-1}(\zeta_p))$ does the job, while in the Kummer case, one can take the conjugates of $1 + \beta + \beta^2 + \dots + \beta^{r-1}$.
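A minimal sketch (not from the paper) illustrating these definitions in the smallest finite-field case: in $\mathbb{F}_4 = \mathbb{F}_2(a)$ with $a^2 = a + 1$, the pair $(a, a^2)$ is a normal basis, since the Frobenius $\theta \colon x \mapsto x^2$ swaps its two elements. The helper `gf4_mul` below is an ad hoc encoding for this illustration.

```python
# F_4 as bit pairs: the integer 2*c1 + c0 encodes c1*a + c0, with a^2 = a + 1;
# addition is XOR (characteristic 2, coefficientwise).
def gf4_mul(x, y):
    c1, c0 = (x >> 1) & 1, x & 1
    d1, d0 = (y >> 1) & 1, y & 1
    hi = (c1 & d1) ^ (c1 & d0) ^ (c0 & d1)  # coefficient of a
    lo = (c1 & d1) ^ (c0 & d0)              # constant coefficient
    return (hi << 1) | lo

def frobenius(x):            # theta(x) = x^2, generator of Gal(F_4/F_2)
    return gf4_mul(x, x)

a = 2                        # the element a
alpha = [a, frobenius(a)]    # candidate normal basis (a, a^2)

assert frobenius(alpha[0]) == alpha[1]
assert frobenius(alpha[1]) == alpha[0]   # theta has order r = 2
# linear independence over F_2: both elements and their sum are nonzero
assert alpha[0] != 0 and alpha[1] != 0 and alpha[0] ^ alpha[1] != 0
print("(a, a^2) is a normal basis of F_4 over F_2")
```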
From now on, we assume that we have fixed a normal basis $(\alpha_0, \dots, \alpha_{r-1})$ of $K$, together with a working basis in which the elements of $K$ are represented. Let $S$ be the matrix of change of basis from the working basis to the normal basis. We assume that multiplication in $K$ and application of $\theta$ can both be performed in $\tilde O(r)$ operations in $k$ in the working basis.
1.2 Evaluation and interpolation on a normal basis
For $P = \sum_i p_i X^i \in K[X,\theta]$, write $\sigma_P \colon K \to K$ for the associated $k$-linear map $x \mapsto \sum_i p_i\,\theta^i(x)$. We introduce a relation between commutative polynomials that allows us to evaluate $\sigma_P$ at the elements of the normal basis $(\alpha_0, \dots, \alpha_{r-1})$.
Lemma 1.4.
The map $P \mapsto \sigma_P$ is a homomorphism of $k$-algebras $K[X,\theta] \to \operatorname{End}_k(K)$. It induces an isomorphism of $k$-algebras:
$$K[X,\theta]/(X^r - 1) \xrightarrow{\ \sim\ } \operatorname{End}_k(K).$$
The first map is a homomorphism because $\sigma_X \circ \sigma_a = \sigma_{\theta(a)} \circ \sigma_X = \sigma_{Xa}$ for all $a \in K$. Since $\theta$ has order $r$, $X^r - 1$ lies in the kernel of this map, so the induced map is well-defined. Both $K[X,\theta]/(X^r - 1)$ and $\operatorname{End}_k(K)$ are $k$-vector spaces of dimension $r^2$, hence it suffices to prove injectivity. By Artin's Lemma on independence of characters, $(\mathrm{id}, \theta, \dots, \theta^{r-1})$ is a linearly independent family over $K$, so that if $\sigma_P = 0$ for some $P$ of degree $< r$, then $P = 0$. ∎
Lemma 1.4 shows that multiplication of skew polynomials modulo $X^r - 1$ is essentially the same as multiplication of matrices over $k$, provided that the above isomorphism can be computed efficiently (in both directions). We now address this question.
Throughout this paper, we will write $P \bmod N$ for the remainder of the right Euclidean division of $P$ by $N$, whenever $N$ is nonzero.
Let $Y$ be a new (commutative) variable and consider the classical polynomial ring $K[Y]$. Let $B = \sum_{j=0}^{r-1} \alpha_{-j}\,Y^j \in K[Y]$ (indices taken modulo $r$) be the polynomial whose coefficients are the elements of the normal basis.
Proposition 1.6.
Let $f = \sum_i f_i Y^i \in K[Y]$ of degree $< r$ and let $P = \sum_i f_i X^i \in K[X,\theta]$ be the skew polynomial with the same coefficients. Then, in $K[Y]/(Y^r - 1)$ (where $Y$ is invertible, so that $Y^{-j}$ makes sense):
$$f \cdot B = \sum_{j=0}^{r-1} \sigma_P(\alpha_j)\,Y^{-j}.$$
By linearity, it is enough to check that the relation holds when $f = Y^i$ for $0 \le i < r$. Fix such an $i$ and let $j$ vary. We have $\sigma_{X^i}(\alpha_j) = \theta^i(\alpha_j) = \alpha_{i+j}$, where indices are taken modulo $r$.
On the other hand, doing the calculations modulo $Y^r - 1$, $Y^i B = \sum_j \alpha_{-j}\,Y^{i+j} = \sum_j \alpha_{i+j}\,Y^{-j}$. ∎
Multiplication in $K[X,\theta]/(X^r - 1)$ can be performed in $\tilde O(r^\omega)$ operations in $k$.
Let $P, Q \in K[X,\theta]$ of degree $< r$. Let $f, g \in K[Y]$ be the commutative polynomials with the same coefficients as $P, Q$ respectively. Let $C_P = fB \bmod (Y^r - 1)$ and $C_Q = gB \bmod (Y^r - 1)$. Both $C_P$ and $C_Q$ can be computed in $\tilde O(r)$ operations in $K$, hence $\tilde O(r^2)$ operations in $k$. Now let $M_P$ (resp. $M_Q$) be the matrix whose $j$-th column is the decomposition of the value $\sigma_P(\alpha_j)$ (resp. $\sigma_Q(\alpha_j)$), read off from $C_P$ (resp. $C_Q$), in the working basis. By Proposition 1.6, $M_P$ (resp. $M_Q$) is the matrix of $\sigma_P$ (resp. $\sigma_Q$) where the domain is endowed with the normal basis and the codomain is endowed with the working basis. Set $M_{PQ} = M_P \cdot S \cdot M_Q$, where $S$ is the matrix converting coordinates in the working basis into coordinates in the normal basis; this product can be computed within $O(r^\omega)$ operations in $k$. We know that $M_{PQ}$ is the matrix of $\sigma_{PQ} = \sigma_P \circ \sigma_Q$ where again the domain is endowed with the normal basis and the codomain is endowed with the working basis. Let $C$ be the polynomial of $K[Y]$ encoding the values $\sigma_{PQ}(\alpha_j)$ as in Proposition 1.6,
and compute $h = C \cdot B^{-1} \bmod (Y^r - 1)$ ($B$ is invertible modulo $Y^r - 1$ precisely because $(\alpha_j)$ is a basis), which can also be computed in $\tilde O(r^2)$ operations in $k$. Then, again by Proposition 1.6, $h$ has the same coefficients as $PQ \bmod (X^r - 1)$. This shows that the global complexity of this computation is $\tilde O(r^\omega)$ operations in $k$. ∎
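The identification of skew multiplication modulo $X^r - 1$ with composition of $k$-linear maps can be checked by brute force in a toy case (a sketch, not the paper's algorithm): $K = \mathbb{F}_4$, $k = \mathbb{F}_2$, $\theta$ the Frobenius, $r = 2$. All helper names below are ad hoc.

```python
# F_4 arithmetic: the integer 2*c1 + c0 encodes c1*a + c0 with a^2 = a + 1;
# addition is XOR, theta is the Frobenius x -> x^2 (order r = 2).
def gf4_mul(x, y):
    c1, c0 = (x >> 1) & 1, x & 1
    d1, d0 = (y >> 1) & 1, y & 1
    return (((c1 & d1) ^ (c1 & d0) ^ (c0 & d1)) << 1) | ((c1 & d1) ^ (c0 & d0))

def theta(x):
    return gf4_mul(x, x)

def sigma(P, x):
    """sigma_P(x) = p0*x + p1*theta(x) for P = (p0, p1)."""
    return gf4_mul(P[0], x) ^ gf4_mul(P[1], theta(x))

def skew_mul_mod(P, Q):
    """P*Q in K[X, theta]/(X^2 - 1): uses X*a = theta(a)*X and X^2 = 1."""
    p0, p1 = P
    q0, q1 = Q
    return (gf4_mul(p0, q0) ^ gf4_mul(p1, theta(q1)),
            gf4_mul(p0, q1) ^ gf4_mul(p1, theta(q0)))

# sigma is multiplicative: sigma_{PQ} = sigma_P o sigma_Q for all P, Q, x.
for p0 in range(4):
    for p1 in range(4):
        for q0 in range(4):
            for q1 in range(4):
                P, Q = (p0, p1), (q0, q1)
                for x in range(4):
                    assert sigma(skew_mul_mod(P, Q), x) == sigma(P, sigma(Q, x))
print("sigma_{PQ} = sigma_P o sigma_Q verified on all 256 pairs")
```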
In Section 2, we will generalize this algorithm and show how it yields a fast multiplication algorithm for skew polynomials (not only in the modular case).
1.3 Evaluation and interpolation at an incomplete normal basis
Evaluation. We shall see later how to compute the product of two skew polynomials of small degree by determining how their product acts on the first elements of a normal basis. With this motivation in mind, let us describe how to compute efficiently the images of the first few elements of a normal basis under the map $\sigma_P$ attached to a skew polynomial $P$. Recall that, using Proposition 1.6 with $f$ the commutative polynomial having the same coefficients as $P$, and writing $B = \sum_{j=0}^{r-1} \alpha_{-j}\,Y^j$ (indices modulo $r$), we know that
$$f \cdot B \equiv \sum_{j=0}^{r-1} \sigma_P(\alpha_j)\,Y^{-j} \pmod{Y^r - 1}.$$
Let $d = \deg P$. We are interested in computing $\sigma_P(\alpha_j)$ only for $0 \le j \le d$.
Lemma 1.8.
Let $f = \sum_{i=0}^{d} p_i Y^i$ of degree $d$ and let $\beta_m = \alpha_{m \bmod r}$ for $0 \le m \le 2d$. Let $\tilde f = \sum_{i=0}^{d} p_i Y^{d-i}$ and $b = \sum_{m=0}^{2d} \beta_m Y^m$. Then, for $0 \le j \le d$:
$$\sigma_P(\alpha_j) = \text{the coefficient of } Y^{d+j} \text{ in } \tilde f \cdot b,$$
where $P$ is the skew polynomial with the same coefficients as $f$.
The coefficient of $Y^{d+j}$ in $\tilde f \cdot b$ is $\sum_{i=0}^{d} p_i\,\beta_{i+j}$: the only contributions come from the pairings $Y^{d-i} \cdot Y^{i+j}$, and $0 \le i + j \le 2d$, so all needed coefficients of $b$ are available.
Since $\beta_{i+j} = \alpha_{(i+j) \bmod r} = \theta^i(\alpha_j)$, this sum is precisely $\sum_i p_i\,\theta^i(\alpha_j) = \sigma_P(\alpha_j)$. ∎
Let $P$ be of degree $d$; then the collection of values $\sigma_P(\alpha_j)$, $0 \le j \le d$, can be computed in $\tilde O(d)$ operations in $K$.
By Lemma 1.8, the evaluation of $\sigma_P$ at $\alpha_0, \dots, \alpha_d$ can be obtained by two multiplications of (classical) polynomials of degree at most $d$ with coefficients in $K$, hence with complexity $\tilde O(d)$ operations in $K$. ∎
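A toy check of this evaluation-via-polynomial-multiplication idea (a sketch with $K = \mathbb{C}$, $k = \mathbb{R}$, $\theta$ complex conjugation, $r = 2$; the "windowed" basis polynomial below is an assumption of this sketch, not necessarily the paper's exact formulation):

```python
def polymul(p, q):
    """Plain convolution of coefficient lists (little-endian)."""
    out = [0j] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# K = C over k = R, theta = conjugation (r = 2); normal basis (alpha, conj(alpha))
alpha = [1 + 1j, 1 - 1j]
P = [1 + 2j, 3j, 2 + 0j, 1 - 1j]          # skew polynomial of degree d = 3
d = len(P) - 1

# direct evaluation: sigma_P(alpha_j) = sum_i p_i * theta^i(alpha_j)
direct = [sum(P[i] * alpha[(i + j) % 2] for i in range(d + 1)) for j in range(2)]

# via one classical product: reverse P, multiply by the windowed basis polynomial
ftilde = list(reversed(P))                # coefficient of Y^(d-i) is p_i
b = [alpha[m % 2] for m in range(2 * d + 1)]
prod = polymul(ftilde, b)
via_product = [prod[d + j] for j in range(2)]

assert all(abs(x - y) < 1e-12 for x, y in zip(direct, via_product))
print("evaluation via polynomial multiplication matches direct evaluation")
```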
Interpolation. Still bearing in mind the aim of multiplying two skew polynomials by composing the corresponding linear maps, we are interested in the following interpolation question: given values $c_0, \dots, c_{d-1} \in K$, find $P$ of degree at most $d$ such that $\sigma_P(\alpha_j) = c_j$ for all $0 \le j < d$.
Let us explain first how the solution to this problem can be computed when all the $c_j$ vanish. In this case, the (monic, nonzero) skew polynomial we are looking for is the so-called minimal subspace polynomial corresponding to the span of $\alpha_0, \dots, \alpha_{d-1}$. A generic fast algorithm for solving this problem has been proposed by Puchinger and Wachter-Zeh (Theorem 26 of their paper). In the special case we are considering, we shall see that the complexity can be improved to $\tilde O(d)$ operations in $K$.
Let $\tilde f$ and $b$ be as in Lemma 1.8, so that $\sigma_P(\alpha_j)$ is the coefficient of $Y^{d+j}$ in $\tilde f\,b$. If $P$ is such that $\sigma_P(\alpha_j) = 0$ for $0 \le j < d$, then there exists $u$ of degree $< d$ such that $\tilde f\,b \equiv u \pmod{Y^{2d}}$. Of course, the converse is also true, and this equation is equivalent to:
$$u = s \cdot Y^{2d} + \tilde f \cdot b$$
with $\deg u < d$ and $\deg \tilde f \le d$. The latter equation can be solved thanks to the extended Euclidean algorithm. Indeed, computing the gcd of $Y^{2d}$ and $b$ and stopping after the first remainder of degree $< d$, we get a relation of the form:
$$u = s \cdot Y^{2d} + \tilde f \cdot b$$
with $\deg u < d$ and $\deg \tilde f \le d$, which yields a solution to the problem. This computation can be done in $\tilde O(d)$ operations in $K$ thanks to the half-gcd algorithm.
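The stop-early extended Euclidean computation used here can be sketched as follows (a generic quadratic implementation over $\mathbb{Q}$, not the quasi-linear half-gcd variant; all names are hypothetical): given $A$ and $B$, it returns cofactors $s, t$ and the first remainder $u = sA + tB$ of degree below a prescribed bound.

```python
from fractions import Fraction

def deg(p):                      # degree convention: deg(0) = -1
    return len(p) - 1

def trim(p):
    while p and p[-1] == 0:
        p.pop()
    return p

def sub(p, q):
    out = [Fraction(0)] * max(len(p), len(q))
    for i, c in enumerate(p):
        out[i] += c
    for i, c in enumerate(q):
        out[i] -= c
    return trim(out)

def mul(p, q):
    if not p or not q:
        return []
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return trim(out)

def divmod_poly(a, b):
    q, r = [], list(a)
    while deg(r) >= deg(b):
        shift, c = deg(r) - deg(b), r[-1] / b[-1]
        q = sub(q, [Fraction(0)] * shift + [-c])
        r = sub(r, [Fraction(0)] * shift + [c * x for x in b])
    return q, r

def partial_xgcd(a, b, dstop):
    """Extended Euclid on (a, b), stopped at the first remainder of
    degree < dstop; returns (s, t, u) with s*a + t*b = u."""
    r0, r1 = list(a), list(b)
    s0, s1 = [Fraction(1)], []
    t0, t1 = [], [Fraction(1)]
    while deg(r1) >= dstop:
        q, r = divmod_poly(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, sub(s0, mul(q, s1))
        t0, t1 = t1, sub(t0, mul(q, t1))
    return s1, t1, r1

# example (little-endian coefficients): A = Y^4, B = Y^3 + Y + 1, stop below 2
A = [Fraction(0)] * 4 + [Fraction(1)]
B = [Fraction(1), Fraction(1), Fraction(0), Fraction(1)]
s, t, u = partial_xgcd(A, B, 2)
assert deg(u) < 2
assert sub(mul(s, A), sub(u, mul(t, B))) == []   # checks s*A + t*B = u
```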
In the general case, let $c_0, \dots, c_{d-1} \in K$ be the prescribed values and let $C = \sum_{j=0}^{d-1} c_j Y^j$. We are looking for $\tilde f$ of degree at most $d$ and $u$ of degree less than $d$ such that $\tilde f\,b \equiv u + Y^d C \pmod{Y^{2d}}$, where $b$ is as in Lemma 1.8; this equation expresses exactly that $\sigma_P(\alpha_j) = c_j$ for $0 \le j < d$, where $P$ is the skew polynomial corresponding to $\tilde f$.
Lemma 1.10.
Consider the map
$$\Phi \colon P \longmapsto \bigl(\sigma_P(\alpha_0), \dots, \sigma_P(\alpha_{d-1})\bigr),$$
defined on the space of skew polynomials $P$ of degree less than $d$. It is well-defined and $K$-linear, and both sides have the same dimension $d$ over $K$. Therefore, it is sufficient to prove that $\Phi$ is injective.
Let us consider $P$ in the kernel of $\Phi$. By definition, $\sigma_P$ vanishes at $\alpha_0, \dots, \alpha_{d-1}$. Hence, $P$ is a left multiple of the minimal subspace polynomial of $\operatorname{Span}_k(\alpha_0, \dots, \alpha_{d-1})$. Since $(\alpha_0, \dots, \alpha_{d-1})$ is linearly independent over $k$, the minimal subspace polynomial has degree $d$ (it is a generator of the left ideal of skew polynomials whose associated map vanishes on the span). In particular, since $\deg P < d$, we get $P = 0$, so $\Phi$ is injective. Hence $\Phi$ is bijective and the interpolation problem admits a unique solution of the required degree. ∎
Let $C = \sum_{j=0}^{d-1} c_j Y^j$ with $c_0, \dots, c_{d-1} \in K$, and let $b$ be as in Lemma 1.8. Then there exist $\tilde f, u, v \in K[Y]$, with $\deg \tilde f \le d$, $\deg u < d$ and $\deg v \le d$, such that
$$\tilde f\,b = u + Y^d C + Y^{2d} v.$$
Moreover, Algorithm SmallDegreeInterpolation outputs $\tilde f$ and $u$ for a cost of $\tilde O(d)$ operations in $K$.
Sketch of the proof.
The result follows from the correctness of Algorithm 1, but it is also a theoretical consequence of Lemma 1.10. Indeed, this lemma shows that there exists a linear combination of the required shape whose middle coefficients (those of $Y^d, \dots, Y^{2d-1}$) are the prescribed values $c_0, \dots, c_{d-1}$, and the bounds on the degrees follow by counting degrees in the relation above. Algorithm 1 is an adaptation of the half-gcd algorithm, which computes simultaneously the sequence of the remainders in the extended Euclidean division of $Y^{2d}$ and $b$, and the combination of $Y^{2d}$ and $b$ that has the given middle coefficients. ∎
2 Fast multiplication
In this section, we study the problem of efficiently multiplying two elements of $K[X,\theta]$, both of degree at most $d$. The complexity is the number of operations in $k$, given as a function of $d$ and $r$.
2.1 Modular multiplication
2.1.1 Multiplication modulo $X^r - z$
We consider the ring $K[X,\theta]$. Let $z \in k^\times$, and let $\gamma \in K^\times$ be such that $\mathrm{N}_{K/k}(\gamma) := \gamma\,\theta(\gamma)\cdots\theta^{r-1}(\gamma) = z$ (we assume such a $\gamma$ exists). We are now going to describe an algorithm for multiplication in $K[X,\theta]$ modulo $X^r - z$.
The morphism of $k$-algebras $K[X,\theta] \to \operatorname{End}_k(K)$ sending $a \in K$ to the multiplication by $a$ and $X$ to the map $x \mapsto \gamma\,\theta(x)$ factors as an isomorphism $K[X,\theta]/(X^r - z) \xrightarrow{\sim} \operatorname{End}_k(K)$; we write $\tau_P$ for the image of $P$.
This morphism maps $X^r$ to $x \mapsto \gamma\,\theta(\gamma)\cdots\theta^{r-1}(\gamma)\,\theta^r(x) = z\,x$, thus mapping $X^r - z$ to $0$. One concludes as in the proof of Lemma 1.4. ∎
Corollary 2.2. Multiplication in $K[X,\theta]/(X^r - z)$ can be performed in $\tilde O(r^\omega)$ operations in $k$.
We could use the proof of Corollary 2.2 directly to design an algorithm for multiplication modulo $X^r - z$. Such an algorithm would, however, redo certain precomputations each time it is invoked. Alternatively, we can slightly modify the basis on which we evaluate the corresponding maps, which provides a gain when many multiplications modulo $X^r - z$ have to be performed.
Let $z \in k^\times$ and let $\gamma \in K^\times$ with $\mathrm{N}_{K/k}(\gamma) = z$. Let $\beta_0 \in K$ be chosen so that the family $\beta_i = \gamma\,\theta(\gamma)\cdots\theta^{i-1}(\gamma)\,\theta^i(\beta_0)$, $0 \le i < r$, is a basis of $K$ over $k$. By construction, we have $\beta_{i+1} = \gamma\,\theta(\beta_i)$ for $0 \le i < r-1$, and $\gamma\,\theta(\beta_{r-1}) = \mathrm{N}_{K/k}(\gamma)\,\beta_0 = z\,\beta_0$. For example, if $(\alpha_i)$ is a normal basis of $K$ over $k$, then $\beta_0 = \alpha_0$ defines a suitable basis. Now, let $b_\beta = \sum_{j=0}^{r-1} \beta_{-j}\,Y^j \in K[Y]$, where the family $(\beta_j)$ is extended to all $j \in \mathbb{Z}$ by the rule $\beta_{j+r} = z\,\beta_j$.
Let $f \in K[Y]$ of degree $< r$ and let $P \in K[X,\theta]$ be the skew polynomial with the same coefficients. Let $\tau_P$ be the image of $P$ under the morphism of Lemma 2.1. Let $b_\beta = \sum_{j=0}^{r-1} \beta_{-j}\,Y^j$, with $\beta_{j+r} = z\,\beta_j$ as above. Then, in $K[Y]/(Y^r - z)$ (where $Y$ is invertible):
$$f \cdot b_\beta = \sum_{j=0}^{r-1} \tau_P(\beta_j)\,Y^{-j}.$$
The proof is similar to that of Proposition 1.6. By linearity, it is enough to check that the relation holds for $f = Y^i$ with $0 \le i < r$. Fix such an $i$ and let $j$ vary. We have:
$$\tau_{X^i}(\beta_j) = \gamma\,\theta(\gamma)\cdots\theta^{i-1}(\gamma)\,\theta^i(\beta_j) = \beta_{i+j}.$$
On the other hand, doing the calculations modulo $Y^r - z$:
$$Y^i\,b_\beta = \sum_j \beta_{-j}\,Y^{i+j} = \sum_j \beta_{i+j}\,Y^{-j}.$$
Hence the two sides agree for all $i$, so the relation holds for all $f$. ∎
Algorithm ModMult below makes precise the algorithmic content of Proposition 2.3; it uses a primitive that takes as input a tuple $(x_0, \dots, x_{r-1}) \in K^r$ and outputs the matrix whose $j$-th column contains the coordinates of $x_j$ in the working basis.
Algorithm ModMult computes the product $PQ$ in $K[X,\theta]/(X^r - z)$ in $\tilde O(r^\omega)$ operations in $k$.
Multiplication of polynomials in $K[Y]$ modulo $Y^r - z$ requires $\tilde O(r)$ operations in $K$, i.e. $\tilde O(r^2)$ operations in $k$. Multiplication of matrices of size $r$ over $k$ requires $O(r^\omega)$ operations in $k$. Hence the global complexity is $\tilde O(r^\omega)$ operations in $k$. ∎
2.1.2 Multiplication modulo $N(X^r)$
Let $k'/k$ be a finite extension. Define $K' = K \otimes_k k'$; it is an étale $k'$-algebra endowed with the endomorphism $\theta' = \theta \otimes \mathrm{id}_{k'}$, which extends $\theta$ and has order $r$.
The algebra $K'$ is not necessarily a field (for instance, when $k' = K$ is a field extension of $k$, it splits as a product of $r$ copies of $K$). This is the reason why we needed to place this paper in the more general setting of étale algebras.
Let $z$ be a nonzero element of a fixed algebraic closure of $k$. Set $k' = k(z)$. Let $N$ be the minimal polynomial of $z$ over $k$. We want to generalize the results of §2.1.1 to multiplication modulo $N(X^r)$ (in §2.1.1, we have $k' = k$, $z \in k$ and $N = T - z$). Note that if $(\alpha_i)$ is a normal basis of $K$ over $k$, then $(\alpha_i \otimes 1)$ is a normal basis of $K'$ over $k'$.
The canonical morphism $K[X,\theta] \to K'[X,\theta']$ induces an isomorphism
$$K[X,\theta]/N(X^r) \xrightarrow{\ \sim\ } K'[X,\theta']/(X^r - z).$$
First note that $(X^r - z)$ is a two-sided ideal of $K'[X,\theta']$ (its generator is central), and that the canonical morphism $K[X,\theta] \to K'[X,\theta']$ followed by reduction modulo $X^r - z$ maps $X^r$ to $z$; since $k' = k(z)$ is generated by the image of $X^r$, the latter morphism is surjective. Moreover, by $k$-linearity, $N(X^r)$ lies in the kernel of this map, as it is sent to $N(z) = 0$. We then get a surjective morphism of $k$-algebras $K[X,\theta]/N(X^r) \to K'[X,\theta']/(X^r - z)$. Since both sides have dimension $r^2 \deg N$ over $k$, this morphism is an isomorphism. ∎
We are now back exactly in the situation of Section 2.1.1, where $k$ has been replaced by $k'$ and $K$ by $K'$: all the computations can be carried out in the same way and, passing back through the isomorphism of Lemma 2.6, we can perform fast multiplication modulo $N(X^r)$. The algorithm is as follows:
Algorithm 3 computes the product $PQ$ in $K[X,\theta]$ modulo $N(X^r)$ in $\tilde O(r^\omega \deg N)$ operations in $k$.
2.2 Reconstruction with CRT
Let $P, Q \in K[X,\theta]$ be two skew polynomials. We recall that our aim is to design a fast algorithm for computing the product $PQ$. We set $d = \max(\deg P, \deg Q)$.
Multiplication in large degree. We first assume that the degree $d$ is at least $r$. In this case, the idea is to compute the product $PQ$ modulo various polynomials $N_i(X^r)$ using Algorithm ModMultZ and then to reconstruct the result using a noncommutative version of the Chinese Remainder Theorem. The precise result we need is given by the following proposition.
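For intuition, the commutative analogue of this reconstruction step can be sketched as follows (a generic sketch over $\mathbb{Q}$, not the paper's noncommutative algorithm): residues modulo two coprime polynomials are recombined via Bézout cofactors. All helper names are hypothetical.

```python
from fractions import Fraction

def trim(p):
    while p and p[-1] == 0:
        p.pop()
    return p

def add(p, q):
    out = [Fraction(0)] * max(len(p), len(q))
    for i, c in enumerate(p):
        out[i] += c
    for i, c in enumerate(q):
        out[i] += c
    return trim(out)

def neg(p):
    return [-c for c in p]

def mul(p, q):
    if not p or not q:
        return []
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return trim(out)

def divmod_poly(a, b):
    q, r = [], list(a)
    while len(r) >= len(b):
        shift, c = len(r) - len(b), r[-1] / b[-1]
        q = add(q, [Fraction(0)] * shift + [c])
        r = add(r, neg([Fraction(0)] * shift + [c * x for x in b]))
    return q, r

def xgcd(a, b):
    r0, r1 = list(a), list(b)
    s0, s1 = [Fraction(1)], []
    t0, t1 = [], [Fraction(1)]
    while r1:
        q, r = divmod_poly(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, add(s0, neg(mul(q, s1)))
        t0, t1 = t1, add(t0, neg(mul(q, t1)))
    return r0, s0, t0            # g = s0*a + t0*b

def crt2(r1, m1, r2, m2):
    """Unique residue mod m1*m2 matching r1 mod m1 and r2 mod m2."""
    g, u, v = xgcd(m1, m2)       # u*m1 + v*m2 = g, a nonzero constant here
    c = g[0]
    u = [x / c for x in u]
    v = [x / c for x in v]
    res = add(mul(r1, mul(v, m2)), mul(r2, mul(u, m1)))
    return divmod_poly(res, mul(m1, m2))[1]

# example: f = 2 mod (Y - 1) and f = 4 mod (Y + 1)  =>  f = 3 - Y
f = crt2([Fraction(2)], [Fraction(-1), Fraction(1)],
         [Fraction(4)], [Fraction(1), Fraction(1)])
assert f == [Fraction(3), Fraction(-1)]
```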
Let $N_1, \dots, N_s \in k[T]$ be pairwise coprime polynomials, and let $N = N_1 \cdots N_s$. Then the natural map:
$$K[X,\theta]/N(X^r) \longrightarrow \prod_{i=1}^{s} K[X,\theta]/N_i(X^r)$$
is an isomorphism of $k$-algebras.
Since the domain and the codomain have the same dimension over $k$, it is enough to prove surjectivity. For $i$ between $1$ and $s$, consider