1. Introduction
The computation of the resultant of two univariate polynomials is an important task in computer algebra and it is used for various purposes in algebraic number theory and commutative algebra. It is well-known that, over an effective field $K$, the resultant of two polynomials of degree at most $d$ can be computed in $O(\mathsf{M}(d)\log d)$ operations in $K$ ([vzGG03, Section 11.2]), where $\mathsf{M}(d)$ is the number of operations required for the multiplication of polynomials of degree at most $d$. Whenever the coefficient ring is not a field (or an integral domain), the method to compute the resultant is given directly by the definition, via the determinant of the Sylvester matrix of the polynomials; thus the problem of determining the resultant reduces to a problem of linear algebra, which has a worse complexity.
In this paper, we focus on polynomials over principal Artinian rings, and we present an algorithm to compute the resultant of two polynomials. In particular, the method applies to polynomials over quotients of Dedekind domains, e.g. $\mathbb{Z}/n\mathbb{Z}$ or quotients of valuation rings in $p$-adic fields, and can be used in the computation of the minimum and the norm of an ideal in the ring of integers of a number field, provided we have a two-element presentation of the ideal.
The properties of principal Artinian rings play a crucial role in our algorithm: we recall them in Section 2 and we define the basic operations that we will need in the algorithm. The algorithm we present does not involve any matrix operations but performs only operations on the polynomials in order to get the result. Inspired by the subresultant algorithm, we define a division with remainder between polynomials in the case the divisor is primitive, which leads us immediately to an adjusted version of the Euclidean algorithm. As a result, we obtain algorithms to compute the reduced resultant, a pair of Bézout coefficients and the resultant of two polynomials. The runtime of the algorithms depends heavily on the properties of the base ring, in particular on the number of maximal ideals and on their multiplicities. The asymptotic cost of our method improves upon that of the direct approach, which consists of a computation of the echelon normal form of the Sylvester matrix. In the special case of $\mathbb{Z}/n\mathbb{Z}$, we present a detailed analysis of the asymptotic cost of our method in terms of bit operations.
Finally, we illustrate an algorithm to compute the greatest common divisor of polynomials over a $p$-adic field based on the ideas developed in the previous sections.
Acknowledgments
The authors were supported by Project II.2 of SFB-TRR 195 ‘Symbolic Tools in Mathematics and their Application’ of the German Research Foundation (DFG).
2. Preliminaries
Our aim is to describe asymptotically fast algorithms for the computation of (reduced) resultants and Bézout coefficients. In this section we describe the complexity model that we use to quantify the runtime of the algorithms.
2.1. Basic operations and complexity
Let $R$ be a principal ideal ring. By $\mathsf{M}$ we denote a multiplication time for $R[x]$, that is, two polynomials in $R[x]$ of degree at most $d$ can be multiplied using at most $\mathsf{M}(d)$ many additions and multiplications in $R$. We will express the cost of our algorithms in terms of the number of basic operations of $R$, by which we mean any of the following operations:

Given $a, b \in R$, return $a + b$, $a \cdot b$, and true or false depending on whether $a = b$ or not.

Given $a, b \in R$, decide if $a \mid b$ and return an element $b/a$ in case it exists.

Given $a \in R$, return true or false depending on whether $a$ is a unit or not.

Given $a, b \in R$, return $d, u, v \in R$ such that $(d) = (a, b)$ and $d = ua + vb$.

Given two ideals $(a), (b) \subseteq R$, return a principal generator of the colon ideal $((a) : (b))$.
A common strategy of the algorithms we describe is the reduction to computations in nontrivial quotient rings. The following remark shows that working in quotients is as efficient as working in the original ring $R$.
Remark 2.1.
Let $f \in R$ and consider the quotient ring $\bar R = R/fR$. We will represent elements in $\bar R$ by a representative, that is, by an element of $R$. It is straightforward to see that a basic operation in $\bar R$ can be done using at most $O(1)$ basic operations in $R$:

Computing $a + b$, $a \cdot b$ is trivial. In order to decide if $a = b$ in $\bar R$, it is enough to test whether $f \mid a - b$.

Deciding if $a$ divides $b$ in $\bar R$ is equivalent to the computation of a principal generator $d$ of $(a, f)$ and testing if $d$ divides $b$.

Deciding if an element $a$ is a unit is equivalent to testing $(a, f) = (1)$.

Given $a, b \in \bar R$, a principal generator $d$ of $(a, b)$ is given by the image of a principal generator of $(a, b, f) \subseteq R$. If $d = ua + vb + wf$, then $u$ and $v$ are such that $d \equiv ua + vb \bmod f$.

Given two ideals $(a)$, $(b)$ of $\bar R$, the colon ideal $((a) : (b))$ is generated by the image of a principal generator of $((a, f) : (b))$.
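For the concrete ring $R = \mathbb{Z}/n\mathbb{Z}$, all of these basic operations reduce to integer gcd computations. The following sketch is our own illustration (the function names are not from the text); it implements the less standard operations, divisibility, annihilators and colon ideals, for $n = 12$:

```python
from math import gcd

N = 12  # work in R = Z/12Z; the ideal (a) equals (gcd(a, N))

def divides(a, b, n=N):
    # a | b in Z/nZ iff b lies in the ideal (a) = (gcd(a, n))
    return b % gcd(a, n) == 0

def is_unit(a, n=N):
    return gcd(a, n) == 1

def annihilator(a, n=N):
    # generator of Ann(a) = {x : a*x = 0 in Z/nZ}
    return (n // gcd(a, n)) % n

def colon(a, b, n=N):
    # generator of the colon ideal ((a) : (b)) = {x : x*b in (a)}
    da, db = gcd(a, n), gcd(b, n)
    return da // gcd(da, db)
```

For instance, in $\mathbb{Z}/12\mathbb{Z}$ the annihilator of $4$ is the ideal $(3)$ and $((4) : (2)) = (2)$.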
2.2. Principal Artinian rings
The rings we will be most interested in are principal Artinian rings, that is, unitary commutative rings whose ideals are principal and satisfy the descending chain condition. These have been studied extensively and their structure is well-known, see [AM69, McL73, Bou06]. Prominent examples of these rings include nontrivial quotient rings $\mathbb{Z}/n\mathbb{Z}$, and nontrivial quotients of residually finite Dedekind domains. By [Bou06, chap. IV, §2.5, Corollaire 1], a principal Artinian ring $R$ has only finitely many maximal ideals $\mathfrak{m}_1, \dots, \mathfrak{m}_r$ and there exist minimal positive integers $k_1, \dots, k_r$ such that the canonical map $R \to \prod_{i=1}^{r} R/\mathfrak{m}_i^{k_i}$ is an isomorphism of rings. We denote by $\pi_i$ the canonical projection onto the $i$th component. For every index $i$, the ring $R/\mathfrak{m}_i^{k_i}$ is a local principal Artinian ring. We call $k_i$ the nilpotency index of $\mathfrak{m}_i$ and denote by $m$ the maximum of the nilpotency indices of the maximal ideals of $R$. Note that $m$ is equal to the nilpotency index of the nilradical of $R$. We will keep this notation for the rest of the paper whenever we work with a principal Artinian ring.
When investigating polynomials over $R$, the following well-known characterization of nilpotent elements will be very helpful, see [Bou06, chap. II, §2.6, Proposition 13].
Lemma 2.2.
An element $a \in R$ is nilpotent if and only if $a$ is contained in every maximal ideal of $R$.
As $R$ is a principal ideal ring, given elements $a_1, \dots, a_s$ of $R$, there exists an element $d$ generating the ideal $(a_1, \dots, a_s)$. We call $d$ a greatest common divisor of $a_1, \dots, a_s$. Such an element is uniquely defined up to multiplication by units. By abuse of notation we will denote by $\gcd(a_1, \dots, a_s)$ any such element. Similarly, $\operatorname{lcm}(a_1, \dots, a_s)$ denotes a least common multiple of $a_1, \dots, a_s$, that is, a generator of the intersection of the ideals generated by each $a_i$. As $R$ is in general not a domain, quotients of elements are not well-defined. To keep the notation lightweight, still for two elements $a, b \in R$ with $b \mid a$ we will denote by $a/b$ an element $c$ with $a = bc$ (the element $c$ is uniquely defined up to addition by an element of the annihilator of $b$).
The strategy for the (reduced) resultant algorithm will be to split the ring as soon as we encounter a nonfavorable leading coefficient of a polynomial. The (efficient) splitting of the ring is based on the following simple observation.
Proposition 2.3.
Let $a \in R$ be a zero divisor which is not nilpotent. Using $O(\log m)$ many basic operations in $R$, we can find an idempotent element $e \in R$ such that

The canonical morphism $R \to R/eR \times R/(1-e)R$ is an isomorphism with inverse $(x, y) \mapsto x(1 - e) + ye$.

The image of $a$ in $R/eR$ is nilpotent and the image of $a$ in $R/(1-e)R$ is invertible.
Proof.
For $i \geq 1$ consider the ideal $(a^i)$ of $R$. Since $R$ is Artinian, there exists $k$ such that $(a^k) = (a^{2k})$. In particular $(a^k) = (a^k)^2$, that is, the ideal $(a^k)$ is idempotent. (Note that we can always take $k \leq m$.) Consider $a^{2k}$. Since $a^k$ is a generator of the idempotent ideal $(a^k)$, we can find $b$ such that $a^k = b a^{2k}$. Then $e = b a^k$ satisfies $e^2 = e$, $(e) = (a^k)$, and therefore (1) and (2) follow. The cost of finding $e$ is the computation of the power $a^k$ of $a$, one multiplication and one division. ∎
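For illustration, in $R = \mathbb{Z}/12\mathbb{Z}$ the element $a = 4$ is a zero divisor that is not nilpotent. The following sketch of the construction in the proof is our own (it assumes $R = \mathbb{Z}/n\mathbb{Z}$ and uses brute force for the division step):

```python
from math import gcd

def idempotent_from_splitting(a, n):
    """Follow the proof of Proposition 2.3 in Z/nZ: find k with
    (a^k) = (a^(2k)), solve a^k = b * a^(2k) for b, return e = b * a^k."""
    k = 1
    while gcd(pow(a, k, n), n) != gcd(pow(a, 2 * k, n), n):
        k += 1
    ak, a2k = pow(a, k, n), pow(a, 2 * k, n)
    # brute-force division: find b with a^k = b * a^(2k) in Z/nZ
    b = next(x for x in range(n) if (x * a2k - ak) % n == 0)
    return (b * ak) % n

e = idempotent_from_splitting(4, 12)  # e = 4, idempotent since 4*4 = 16 = 4
```

Here $R/eR \cong \mathbb{Z}/4\mathbb{Z}$, where the image of $4$ is nilpotent, and $R/(1-e)R \cong \mathbb{Z}/3\mathbb{Z}$, where the image of $4$ is invertible.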
Definition 2.4.
Let $a \in R$ be an element. We say that $a$ is a splitting element if it is a non-nilpotent zero divisor.
3. Polynomials over principal Artinian rings
We now discuss theoretical and practical aspects of polynomials over a principal Artinian ring $R$. Note that due to the presence of zero divisors the ring structure of $R[x]$ is more intricate than in the case of integral domains. For example, it is no longer true that every invertible element in $R[x]$ is a nonzero constant or that every polynomial can be written as the product of its content and a primitive polynomial. In this section, we show how to overcome these difficulties and describe asymptotically fast algorithms to compute inverses, modular inverses and division with remainder (in case it exists).
3.1. Basic properties
We recall some theoretical properties of polynomials in $R[x]$. For the sake of completeness, we include the short proofs.
Definition 3.1.
Let $f \in R[x]$ be a polynomial. We define the content ideal $\operatorname{cont}(f)$ of $f$ to be the ideal of $R$ generated by the coefficients of $f$. We say that $f$ is primitive if $\operatorname{cont}(f) = (1)$. By abuse of notation, we will often denote by $\operatorname{cont}(f)$ a generator of this ideal.
Lemma 3.2.
Let $f, g \in R[x]$ be primitive polynomials. Then the product $fg$ is primitive.
Proof.
Assume that the product $fg$ is not primitive and let $I$ be its content ideal. Then $I$ is contained in a maximal ideal $\mathfrak{m}$ of $R$, so $fg \equiv 0 \bmod \mathfrak{m}[x]$. By assumption we have $f \not\equiv 0 \not\equiv g \bmod \mathfrak{m}[x]$, yielding a contradiction since $(R/\mathfrak{m})[x]$ is an integral domain. ∎
However, in general, due to the presence of idempotent elements, it is not true that if we write a polynomial $f$ as $f = \operatorname{cont}(f) \cdot g$ for some $g \in R[x]$, then $g$ is primitive.
Example 3.3.
Consider the non-primitive polynomial $f = 2x$ over $\mathbb{Z}/6\mathbb{Z}$, for which we clearly have $\operatorname{cont}(f) = (2)$ and $f = 2 \cdot x$. As $2 \cdot 4 = 2$, we can also set $g = 4x$ and write $f = 2 \cdot g$, although $g$ is not primitive.
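A quick check of this phenomenon over $\mathbb{Z}/6\mathbb{Z}$ (a toy instance of our own): the relation $2 \cdot 4 = 2$ lets us write the polynomial $2x$ as $2$ times a cofactor that is not primitive.

```python
from math import gcd

N = 6  # R = Z/6Z

def content(f, n=N):
    # generator of the content ideal of f (ascending coefficient list) in Z/nZ
    c = n
    for a in f:
        c = gcd(c, a % n)
    return c % n

f = [0, 2]  # f = 2x, content ideal (2)
g = [0, 4]  # g = 4x, content ideal (gcd(4, 6)) = (2), so g is not primitive

assert [(2 * c) % N for c in g] == f  # f = 2*g, yet g is not primitive
```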
Nevertheless, in case the content of $f$ is nilpotent, we can say something about the content of $g$.
Lemma 3.4.
Let $f \in R[x]$ be a nonzero polynomial with nilpotent content. Let $g \in R[x]$ be any polynomial such that $f = \operatorname{cont}(f) \cdot g$. Then $g$ is not nilpotent.
Proof.
Assume by contradiction that $g$ is nilpotent. Now it holds that $\operatorname{cont}(f) = \operatorname{cont}(f) \cdot \operatorname{cont}(g)$. Since $\operatorname{cont}(f)$ and $\operatorname{cont}(g)$ are nilpotent ideals, iterating gives $\operatorname{cont}(f) = \operatorname{cont}(f) \cdot \operatorname{cont}(g)^k = (0)$ for $k$ large enough, contradicting $f \neq 0$. ∎
Next we give the well-known characterization of units and nilpotent elements of $R[x]$. We include a proof, since it gives a bound on the degree of the inverse of an invertible polynomial. Recall that $m$ is the nilpotency index of the nilradical of $R$.
Proposition 3.5.
Let $f = \sum_{i=0}^{d} a_i x^i \in R[x]$ be a polynomial.

The polynomial $f$ is nilpotent if and only if its content is nilpotent.

The polynomial $f$ is a unit if and only if the constant term $a_0$ of $f$ is a unit in $R$ and $f - a_0$ is nilpotent.

If $f$ is invertible, then the degree of the inverse is bounded by $(m - 1)\deg(f)$.
Proof.
(1): Assume that the content of $f$ is nilpotent. Since $f$ lies in the nilpotent ideal $\operatorname{cont}(f)R[x]$, also $f$ is nilpotent. Vice versa, if $f$ is nilpotent, the projection of $f$ to a residue field $R/\mathfrak{m}$ is nilpotent too, so it must be zero. Hence all the coefficients of $f$ are in the intersection of all the maximal ideals of $R$, which coincides with the set of nilpotent elements by Lemma 2.2. As the content is then generated by nilpotent elements, it is nilpotent.
(2): Assume that the constant term $a_0$ is a unit of $R$ and that $g = f - a_0$ is nilpotent. Without loss of generality $a_0 = 1$, since being a unit or nilpotent in $R[x]$ is invariant under multiplication with units from $R$. Since $g^k = 0$ for a sufficiently large $k$, we get $(1 + g)(1 - g + g^2 - \dots + (-g)^{k-1}) = 1$, showing that $f = 1 + g$ indeed is a unit. Vice versa, assume that $f$ is a unit. Then for every prime ideal $\mathfrak{p}$ of $R$ we have that the image of $f$ in $(R/\mathfrak{p})[x]$ is a unit. In particular, as $R/\mathfrak{p}$ is a domain, all non-constant coefficients of $f$ are contained in $\mathfrak{p}$. Since this holds for all prime ideals $\mathfrak{p}$, by Lemma 2.2 the non-constant coefficients are nilpotent.
(3): If $g = f - a_0$ is nilpotent, then $g^m = 0$. Thus the claim follows as in the proof of part (2). ∎
Remark 3.6.
The bound from Proposition 3.5 is sharp. If $p$ is a prime, then the inverse of the polynomial $1 + px$ over $\mathbb{Z}/p^m\mathbb{Z}$ is $\sum_{i=0}^{m-1} (-p)^i x^i$.
Proposition 3.5 allows us to use classical Hensel lifting (see [vzGG03, Algorithm 9.3]) to compute the inverse of units in $R[x]$. For the sake of completeness, we include the algorithm.
Algorithm 3.7.
Given a unit $f \in R[x]$, the following steps return the inverse $f^{-1}$.

(1) Define $g_0$ as the inverse of the constant term of $f$ and set $i = 0$.

(2) While $2^i \leq (m - 1)\deg(f)$:

(a) Set $g_{i+1} = g_i(2 - f g_i) \bmod x^{2^{i+1}}$.

(b) Increase $i$.

(3) Return $g_i \bmod x^{(m-1)\deg(f)+1}$.
Proposition 3.8.
Algorithm 3.7 is correct and computes the inverse of $f$ using $O(\mathsf{M}(m \deg f))$ basic operations in $R$.
Proof.
See [vzGG03, Theorem 9.4]. ∎
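As an illustration of the Newton iteration behind Algorithm 3.7, here is a sketch of our own for $R = \mathbb{Z}/8\mathbb{Z}$ (so $m = 3$), with polynomials as dense coefficient lists in ascending order. The polynomial $f = 1 + 2x$ is a unit since its non-constant part is nilpotent:

```python
N = 8  # R = Z/8Z; the nilradical is (2) and 2^3 = 0, so m = 3

def polmul(f, g, n=N, trunc=None):
    # multiply coefficient lists (ascending degree) mod n, optionally mod x^trunc
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % n
    return out[:trunc] if trunc is not None else out

def inverse_unit(f, degbound, n=N):
    # Newton iteration g <- g*(2 - f*g) mod x^(2^i), cf. [vzGG03, Alg. 9.3];
    # degbound is the bound (m-1)*deg(f) on the degree of the inverse
    g = [pow(f[0], -1, n)]  # inverse of the constant term
    prec = 1
    while prec <= degbound:
        prec *= 2
        t = [(-c) % n for c in polmul(f, g, n, prec)]  # -f*g mod x^prec
        t[0] = (t[0] + 2) % n                          # 2 - f*g
        g = polmul(g, t, n, prec)
    return g[:degbound + 1]

inv = inverse_unit([1, 2], degbound=2)  # (1 + 2x)^{-1} = 1 + 6x + 4x^2
```

The degree of the inverse is $2 = (m - 1)\deg(f)$, matching the bound of Proposition 3.5 and the sharpness noted in Remark 3.6.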
3.2. Quotient and remainder
We now consider the task of computing divisions with remainder.
Remark 3.9.
Let $f, g \in R[x]$ be polynomials. If $g$ has invertible leading coefficient, one can use well-known asymptotically fast algorithms to find $q, r \in R[x]$ such that $f = qg + r$ and $\deg(r) < \deg(g)$ (see for example [vzGG03, Algorithm 9.5]). This can be done using $O(\mathsf{M}(\deg f))$ basic operations in $R$.
Things are more complicated when the leading coefficient of $g$ is not invertible. Under certain hypotheses, we can factorize the polynomial as the product of a unit and a polynomial with invertible leading coefficient.
Proposition 3.10.
Assume that $f = \sum_{i=0}^{d} a_i x^i \in R[x]$ is a primitive polynomial of degree $d$ and that there exists $l$ such that for $i > l$ the coefficient $a_i$ is nilpotent and $a_l$ is invertible. Then there exists a unit $u \in R[x]$ of degree $d - l$ and a polynomial $h \in R[x]$ of degree $l$ with invertible leading coefficient, such that $f = uh$. The polynomials $u$ and $h$ can be computed using $O(\mathsf{M}(d) \log m)$ basic operations in $R$.
Proof.
This is just an application of Hensel lifting. More precisely, consider the ideal $N = (a_{l+1}, \dots, a_d)$ of $R$. Since $N$ is generated by nilpotent elements, it is nilpotent and $N^m = (0)$. Consider the polynomials $u_0 = 1$ and $h_0 = \sum_{i=0}^{l} a_i x^i$ in $R[x]$. By construction $f \equiv u_0 h_0 \bmod N[x]$ and $a_l$ is invertible. Thus $u_0$, $h_0$ are coprime modulo $N[x]$. Furthermore, the leading coefficient of $h_0$ is invertible. Therefore, by means of Hensel lifting and since $N$ is nilpotent, we can lift the factorization of $f$ modulo $N[x]$ to a factorization of $f$ in $R[x]$. The lifting can be done using [vzGG03, Algorithm 15.10]. As in our case $f$ is not monic, the degree of the lift of $u_0$ will increase during the lifting process, but since the polynomial $h$ has invertible leading coefficient, the degree of $h$ will be $l$. The cost of every step in the lifting process is $O(\mathsf{M}(d))$, as it involves a constant number of additions, multiplications and divisions between polynomials of degree at most $d$. As the number of steps we need is at most $\lceil \log_2 m \rceil$, the claim follows. ∎
Example 3.11.
In $(\mathbb{Z}/4\mathbb{Z})[x]$, the polynomial $f = 2x^2 + x + 1$ satisfies the hypotheses of Proposition 3.10. The corresponding factorization is $f = (2x + 1)(3x + 1)$.
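To see Proposition 3.10 in action on a small instance of our own choosing, take $f = 2x^2 + x + 1$ over $\mathbb{Z}/4\mathbb{Z}$: the leading coefficient $2$ is nilpotent and the coefficient $1$ of $x$ is invertible. A brute-force search (a real implementation would use the Hensel lifting from the proof) finds the factorization:

```python
from itertools import product

N = 4  # R = Z/4Z

def polmul(f, g, n=N):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % n
    return out

f = [1, 1, 2]  # f = 1 + x + 2x^2, primitive, nilpotent leading coefficient

# search for f = u*h with u = u0 + u1*x a unit (u0 invertible, u1 nilpotent)
# and h = h0 + h1*x of degree 1 with invertible leading coefficient h1
u = h = None
for u0, u1, h0, h1 in product(range(N), repeat=4):
    if u0 % 2 == 1 and u1 % 2 == 0 and h1 % 2 == 1:
        if polmul([u0, u1], [h0, h1]) == f:
            u, h = [u0, u1], [h0, h1]
            break
```

The search finds $u = 1 + 2x$ and $h = 1 + 3x$, so $f = (2x + 1)(3x + 1)$ in $(\mathbb{Z}/4\mathbb{Z})[x]$.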
Proposition 3.12.
Let $f, g \in R[x]$ be polynomials of degree at most $d$ and assume that $g$ is primitive. Then using at most $O(r\,\mathsf{M}(md))$ basic operations in $R$ we can find $q, s \in R[x]$ such that $f = qg + s$ and $\deg(s) < \deg(g)$, where $r$ is the number of maximal ideals of $R$.
Proof.
Assume first that $g$ satisfies the hypotheses of Proposition 3.10. Then we can compute a factorization $g = uh$ with $h$ monic of degree at most $d$ and $u$ a unit. As $h$ is monic, we can perform division with remainder of $f$ by $h$ and find $q', s$ such that $f = q'h + s$ and $\deg(s) < \deg(h)$. Multiplying $q'$ by the inverse of $u$, we get $f = (q'u^{-1})g + s$. By Remark 3.9 and Proposition 3.8, the division needs $O(\mathsf{M}(d))$ and the inversion $O(\mathsf{M}(md))$ basic operations respectively. As the degree of $q'$ is bounded by $d$ and the degree of $u^{-1}$ by $(m-1)d$, the final multiplication of $q'$ with $u^{-1}$ requires $O(\mathsf{M}(md))$ basic operations. Thus the costs are in $O(\mathsf{M}(md))$.
Now, we deal with the general case. In particular $g$ has trivial content but it does not satisfy the assumption of Proposition 3.10. This means that the first non-nilpotent coefficient $a_l$ of $g$ is a splitting element. Therefore, by Proposition 2.3, using $O(\log m)$ basic operations we can find an isomorphism of $R$ with the direct product of two nontrivial quotient rings. In the quotients, the coefficient $a_l$ will be either nilpotent or invertible by Proposition 2.3. If $a_l$ is invertible in the quotient, then the projection of $g$ to the quotient satisfies the assumption of Proposition 3.10. Thus the division can be performed using $O(\mathsf{M}(md))$ basic operations. In case $a_l$ is nilpotent in the quotient, we need to repeat this process until the first non-nilpotent coefficient of the polynomial will be invertible. This has to happen eventually, as the content of the polynomial is trivial and the number of maximal ideals is finite. As at every step the degree of the term of the polynomial which is non-nilpotent decreases, and every splitting decreases the number of maximal ideals of the factors, the splitting can happen at most $r$ times. At the end, we reconstruct the quotient and the remainder by means of the Chinese remainder theorem. It follows that the algorithm requires $O(r\,\mathsf{M}(md))$ basic operations in $R$. ∎
3.3. Modular inverses
Finally note that using a similar strategy as in Algorithm 3.7 we can compute modular inverses.
Algorithm 3.13.
Given a polynomial $f \in R[x]$ which is a unit modulo $g$, and a polynomial $g \in R[x]$ with invertible leading coefficient, the following steps return the inverse of $f$ modulo $g$.

Define as the inverse of the constant term of .

While :

Set .

Increase .


While :

Set .

Increase .


Return .
Lemma 3.14.
Algorithm 3.13 is correct and computes the inverse of $f$ modulo $g$ using $O(\mathsf{M}(\deg f + \deg g))$ basic operations in $R$.
Proof.
The correctness follows from [vzGG03, Theorem 9.4] as above. The complexity result follows from the fact that the degrees of the polynomials that we compute during the algorithm are bounded by $\deg(f) + \deg(g)$. ∎
4. Resultants and reduced resultants via linear algebra
In this section, we will describe algorithms to compute the resultant, reduced resultant and the Bézout coefficients of univariate polynomials over an arbitrary principal ideal ring $R$, which in this section is not assumed to be Artinian. The algorithms we present here will be based on linear algebra over $R$, for which the complexity is described in [Sto00]. Note that in [Sto00] a slightly different notion of basic operations is used, which can be used to derive the basic operations from Section 2.1. For the sake of simplicity, in this section we will use the term basic operations to refer to basic operations as described in [Sto00].
We start by recalling the definition of the objects that we want to compute. Let $f, g \in R[x]$ be polynomials of degree $d_f$ and $d_g$ respectively. Recall that the Sylvester matrix $\operatorname{Syl}(f, g)$ of the pair $(f, g)$ is the matrix representing the linear map
$R[x]_{<d_g} \times R[x]_{<d_f} \longrightarrow R[x]_{<d_f + d_g}, \quad (u, v) \longmapsto uf + vg,$ where $R[x]_{<k}$ denotes the polynomials of degree less than $k$, with respect to the canonical bases.
Definition 4.1.
Let $f, g \in R[x]$ be polynomials. We define the resultant $\operatorname{res}(f, g)$ of $(f, g)$ to be the determinant of the Sylvester matrix, and the reduced resultant of $(f, g)$ as the ideal $(f, g) \cap R$ of $R$. Two elements $u, v \in R[x]$ are called Bézout coefficients of the reduced resultant of $f$ and $g$, if they satisfy $uf + vg \in R$ and $(uf + vg) = (f, g) \cap R$. As usual, by abuse of notation, we will call any generator of $(f, g) \cap R$ a reduced resultant of $(f, g)$ and denote it by $\operatorname{redres}(f, g)$.
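For intuition, the following small sketch (our own, with ascending coefficient lists) builds the Sylvester matrix from the shifts $x^i f$ and $x^j g$ and recovers the resultant of $x^2 + 1$ and $x + 1$ over $\mathbb{Z}$:

```python
def sylvester(f, g):
    # rows are coefficient vectors of x^i * f (i < deg g) and x^j * g (j < deg f)
    df, dg = len(f) - 1, len(g) - 1
    rows = [[0] * i + f + [0] * (dg - 1 - i) for i in range(dg)]
    rows += [[0] * j + g + [0] * (df - 1 - j) for j in range(df)]
    return rows

def det(m):
    # cofactor expansion; fine for the tiny matrices used here
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * a * det([row[:c] + row[c + 1:] for row in m[1:]])
               for c, a in enumerate(m[0]))

r = det(sylvester([1, 0, 1], [1, 1]))  # res(x^2 + 1, x + 1) = 2
```

Here the reduced resultant agrees as well, $(x^2 + 1, x + 1) \cap \mathbb{Z} = (2)$, although resultant and reduced resultant differ in general.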
4.1. Reduced resultant.
We begin by showing that, similarly to the resultant, also the reduced resultant can be characterized in terms of invariants of the Sylvester matrix (at least in the case that one of the leading coefficients is invertible).
Lemma 4.2.
Let $f, g \in R[x]$, $c \in R$ and $u, v \in R[x]$ such that $uf + vg = c$ and assume that the leading coefficient of $f$ or $g$ is invertible. Then we can find $\tilde u, \tilde v \in R[x]$ such that $\tilde u f + \tilde v g = c$, $\deg(\tilde u) < \deg(g)$ and $\deg(\tilde v) < \deg(f)$.
Proof.
Without loss of generality, we may assume that $g$ is monic. Thus we can use polynomial division to write $u = qg + \tilde u$ with $\deg(\tilde u) < \deg(g)$. Now $c = uf + vg = \tilde u f + (v + qf)g$. Let $\tilde v = v + qf$. Then we have $\tilde v g = c - \tilde u f$ and, since $g$ is monic, $\deg(\tilde v) = \deg(\tilde v g) - \deg(g) < \deg(f)$. This shows the claim. ∎
Recall that the strong echelon form of a matrix over a principal ideal ring is the same as the Howell form with reordered rows. In case of a principal ideal domain, the strong echelon form is the same as a Hermite normal form, where the rows are reordered such that all the pivot entries are on the diagonal, see [How86, Sto00, FH14]. In case the matrix has full rank, it is just the last diagonal entry. We will make use only of the following property of the upper right strong echelon form $H$ of an $n \times n$ matrix $A$: if $v$ is contained in the row span of $A$ and $v_i = 0$ for $i < j$, then $v$ is in the row span of the last $n - j + 1$ rows of $H$. In particular, if only the last entry $v_n$ of $v$ is nonzero, then $v$ is a multiple of the last row of $H$, that is, $v_n$ is a multiple of the last diagonal entry of $H$.
Proposition 4.3.
Let $f, g \in R[x]$ be polynomials of degree $d_f$, $d_g$ respectively, and let $H$ be the upper right strong echelon form of $\operatorname{Syl}(f, g)$. Assume that one of the leading coefficients of $f$ and $g$ is invertible. Then the last diagonal entry of $H$ generates the reduced resultant $(f, g) \cap R$.
Proof.
Under the isomorphism $R^{d_f + d_g} \cong R[x]_{< d_f + d_g}$, the row span of $\operatorname{Syl}(f, g)$ is mapped onto $\{uf + vg : \deg(u) < d_g,\ \deg(v) < d_f\}$. The statement now follows from Lemma 4.2 and the properties of the strong echelon form. ∎
Corollary 4.4.
If $R$ is a domain, $f, g \in R[x]$, the matrix $\operatorname{Syl}(f, g)$ nonsingular and one of the leading coefficients of $f, g$ invertible, then $(f, g) \cap R$ is generated by the last diagonal entry of the upper right Hermite normal form of $\operatorname{Syl}(f, g)$.
Remark 4.5.

Proposition 4.3 is in general not correct if both leading coefficients are not invertible, since cofactors satisfying the degree conditions given in Lemma 4.2 may not exist. For example, let be a prime and consider and in . As is invertible by Proposition 3.5, the ideal is and the reduced resultant is one. However, there are no constants such that . The Sylvester matrix is equal to
and has Howell and strong echelon form equal to
Thus
while
Corollary 4.6.
Let $f, g \in R[x]$ be polynomials of degree $d_f$, $d_g$ respectively. Assume that one of the leading coefficients of $f$ and $g$ is invertible. Both a reduced resultant and Bézout coefficients of a reduced resultant of $(f, g)$ can be computed using $O((d_f + d_g)^{\omega})$ many basic operations in $R$, where $\omega$ is the exponent of matrix multiplication.
4.2. Resultants.
For the sake of completeness, we also state the corresponding result for the resultant. Note that here, in contrast to the reduced resultant, we do not need any assumption on the leading coefficients of the polynomials. While the resultant can easily be computed as the determinant of the Sylvester matrix, a pair of Bézout coefficients can be found via linear algebra.
Proposition 4.7.
Let $f, g \in R[x]$ be two polynomials of degree $d_f$, $d_g$ respectively. Both the resultant and Bézout coefficients for the resultant can be computed using $O((d_f + d_g)^{\omega})$ many basic operations in $R$.
5. Resultants and polynomial arithmetic
In this section, we show how to compute the resultant, reduced resultant and Bézout coefficients of univariate polynomials over a principal Artinian ring by directly manipulating the polynomials, avoiding the reduction to linear algebra problems. At the same time, this will allow us to get rid of the assumption on the leading coefficients of the polynomials, as present in Corollary 4.6. For the rest of this section we will denote by $R$ a principal Artinian ring.
5.1. Reduced resultant
Let $f, g \in R[x]$ be polynomials. The basic idea of the reduced resultant algorithm is to use that $(f, g) = (f, g + hf)$ for every $h \in R[x]$ in order to make the degree of the operands decrease. Thus the computation reduces to the following base cases:
Lemma 5.1.
Let $a, b \in R$ be constant polynomials and $g \in R[x]$ a polynomial with invertible leading coefficient and $\deg(g) > 0$. Then the following hold:

$(a, b) \cap R = (a, b)$ and $\operatorname{redres}(a, b) = \gcd(a, b)$,

$(a, g) \cap R = (a)$ and $\operatorname{redres}(a, g) = a$.
Proof.
(1): Clear. (2): Since $(a)$ is contained in $(a, g) \cap R$, we can reduce to the computation of $(\bar g) \cap (R/aR)$, where $\bar g$ is the projection of $g$ modulo $a$. As $\bar g$ has invertible leading coefficient, $(\bar g) \cap (R/aR) = (0)$. ∎
Unfortunately, this process may fail, mainly for the following reasons: the leading coefficient of the divisor is a zero divisor or the polynomials are not primitive. We now describe a strategy to overcome these issues.
Reduction to primitive polynomials
First of all, we show how to reduce to the case of primitive polynomials. Let $f$, $g \in R[x]$ be polynomials of positive degree. We want to show that either we find a splitting element or we reduce the computation to primitive polynomials. In some cases, we will need to change the ring over which we are working. To clarify this, when necessary, we will specify the ring in which we are computing the reduced resultant by writing it as a subscript; for example, $\operatorname{redres}_R(f, g)$ denotes the reduced resultant over $R$. For a quotient $R/I$ we will denote by $\operatorname{redres}_{R/I}(f, g)$ a lift to $R$ of the reduced resultant of the projections of the polynomials $f$, $g$.
Lemma 5.2.
Let $f$, $g \in R[x]$ be polynomials and assume $g$ is of positive degree with invertible leading coefficient. Let $c \in R$ and $\tilde f \in R[x]$ such that $f = c \tilde f$. Then $\operatorname{redres}(f, g) = c \cdot \operatorname{redres}(\tilde f, g)$.
Proof.
Let $d$ be the reduced resultant of $\tilde f$ and $g$ and write $d = u\tilde f + vg$ with $\deg(u) < \deg(g)$ and $\deg(v) < \deg(\tilde f)$, using Lemma 4.2. Then $cd = uf + (cv)g$, so $c \cdot \operatorname{redres}(\tilde f, g) \subseteq \operatorname{redres}(f, g)$. Conversely, let $d' = u'f + v'g$ be a constant with $\deg(u') < \deg(g)$ and $\deg(v') < \deg(f)$. Reducing modulo $c$ we get $d' \equiv v'g$; since $g$ has invertible leading coefficient, the additivity of the degree of the product holds, and as $d'$ is a constant this forces $v' \equiv 0$ and $d' \equiv 0$ modulo $c$. Writing $v' = cw$ we obtain $d' = c(u'\tilde f + wg)$, giving the result. ∎
Lemma 5.3.
Let $f$, $g \in R[x]$ be two polynomials of positive degree. Assume that $f$ is primitive and write $g = \operatorname{cont}(g)\,\tilde g$, where $\tilde g \in R[x]$. Suppose that neither $\operatorname{cont}(g)$ nor any of the coefficients of $f$ are splitting elements. Denoting by $f = uh$ a factorization from Proposition 3.10, we have $\operatorname{redres}(f, g) = \operatorname{cont}(g) \cdot \operatorname{redres}(h, \tilde g)$.
Proof.
Since $u$ is a unit in $R[x]$, we have $(f, g) = (h, g)$ and hence $\operatorname{redres}(f, g) = \operatorname{redres}(h, g)$. As $h$ has invertible leading coefficient, Lemma 5.2 (with the roles of the polynomials exchanged) gives $\operatorname{redres}(h, g) = \operatorname{cont}(g) \cdot \operatorname{redres}(h, \tilde g)$. ∎
Lemma 5.4.
Let $f, g \in R[x]$ be two nilpotent polynomials. Let $c$ be a generator of $\operatorname{cont}(f) + \operatorname{cont}(g)$ and let $\tilde f$, $\tilde g$ be polynomials such that $f = c\tilde f$ and $g = c\tilde g$. Then $\operatorname{redres}(f, g) = c \cdot \operatorname{redres}(\tilde f, \tilde g)$ and either $\tilde f$ or $\tilde g$ is not nilpotent.
Proof.
As $(f, g) = c \cdot (\tilde f, \tilde g)$, the relation between the reduced resultants holds. We now prove that either $\tilde f$ or $\tilde g$ is not nilpotent. Looking at the content, we obtain $\operatorname{cont}(f) = (c)\operatorname{cont}(\tilde f)$ and $\operatorname{cont}(g) = (c)\operatorname{cont}(\tilde g)$. Therefore, $(c) = \operatorname{cont}(f) + \operatorname{cont}(g) = (c)(\operatorname{cont}(\tilde f) + \operatorname{cont}(\tilde g))$. If both $\tilde f$, $\tilde g$ are nilpotent, the ideal $\operatorname{cont}(\tilde f) + \operatorname{cont}(\tilde g)$ is nilpotent and iterating yields $(c) = (0)$; we therefore get a contradiction. ∎
We use these three cases either to split the ring or to reduce to the case of primitive polynomials.
Algorithm 5.5.
Input: $f, g \in R[x]$ nonconstant.
Output: A splitting element $a$ of $R$, or a triple $(f', g', c)$ with $f'$, $g'$ primitive and $\operatorname{redres}(f, g) = c \cdot \operatorname{redres}(f', g')$.

(1) If $f$ and $g$ are primitive, return $(f, g, 1)$.

(2) If $\operatorname{cont}(f)$ is a splitting element, return $\operatorname{cont}(f)$.

(3) If $\operatorname{cont}(g)$ is a splitting element, return $\operatorname{cont}(g)$.

(4) If $f$ and $g$ are nilpotent, then:

(a) Compute $c = \gcd(\operatorname{cont}(f), \operatorname{cont}(g))$.

(b) Compute $\tilde f = f/c$, $\tilde g = g/c$.

(c) Apply Algorithm 5.5 to $(\tilde f, \tilde g)$. If it returns an element $a$, return $a$. Otherwise if it returns $(f', g', c')$ then return $(f', g', cc')$.

(5) If $f$ is nilpotent, swap $f$ and $g$, so that now $f$ is primitive and $g$ is nilpotent.

(6) Find the term $a_l x^l$ of $f$ of highest degree whose coefficient $a_l$ is not nilpotent.

(7) If $a_l$ is not invertible, return $a_l$, which is a splitting element. Otherwise:

(a) Compute the factorization $f = uh$ of Proposition 3.10 and write $g = \operatorname{cont}(g)\,\tilde g$.

(b) Apply Algorithm 5.5 to $(h, \tilde g)$. If it returns an element $a$, return $a$. Otherwise if it returns $(f', g', c')$ then return $(f', g', \operatorname{cont}(g)\,c')$.
Proposition 5.6.
Let $f$, $g \in R[x]$ be two polynomials of degree at most $d$. Then Algorithm 5.5 either returns a triple $(f', g', c)$ with $f'$ and $g'$ primitive polynomials such that $\operatorname{redres}(f, g) = c \cdot \operatorname{redres}(f', g')$ or returns a splitting element of $R$. The algorithm requires $O(\mathsf{M}(d)\log m)$ basic operations.
Proof.
If both polynomials are primitive, the statement is trivial. Assume now that one of the polynomials is primitive, say $f$. If the content of $g$ is a zero divisor, the algorithm is clearly correct. Otherwise, the algorithm follows the proof of Lemma 5.3 and performs a recursive call on the resulting polynomials. By Lemma 3.4, in the recursive call, either both polynomials are primitive or the content of one of them is a splitting element, as desired.
Let us now assume that both polynomials are not primitive. If one of them is not nilpotent, its content is a splitting element of . Therefore the only case that remains is when both polynomials are nilpotent. By means of Lemma 5.4, we can reduce to the case when at least one of the two polynomials is not nilpotent, and we have already dealt with this case above. Therefore the claim follows.
We now analyse the runtime. All the operations except for the factorization in Step (7a) using Proposition 3.10 are at most linear in $d$. If we are in the case of the factorization of Proposition 3.10, then in the recursive call we have a monic polynomial and a polynomial which is not nilpotent. Therefore the recursive call will take at most linear time in $d$ and the algorithm requires in this case $O(\mathsf{M}(d)\log m)$ operations. ∎
Chinese remainder theorem
Algorithm 5.5 returns in some cases a splitting element $a$. In such a case, we use Proposition 2.3 and we continue the computation over the factor rings. Therefore we need to explain how to recover the reduced resultant over $R$ from the one computed over the quotients.
Lemma 5.7.
Let $f, g \in R[x]$ be two polynomials and let $e \in R$ be a nontrivial idempotent. Denote by $\pi_1$ and $\pi_2$ the projections onto the components $R/eR$ and $R/(1-e)R$ respectively. Then $\operatorname{redres}(\pi_i(f), \pi_i(g)) = \pi_i(\operatorname{redres}(f, g))$ for $i = 1, 2$. In particular we have $\operatorname{redres}_R(f, g) = ((1 - e)c_1 + e c_2)$, where $c_1, c_2 \in R$ are lifts of $\operatorname{redres}(\pi_1(f), \pi_1(g))$ and $\operatorname{redres}(\pi_2(f), \pi_2(g))$ respectively.
Proof.
Let $d$ be the reduced resultant of $f, g$; as $d \in (f, g)$, there exist $u, v \in R[x]$ such that $d = uf + vg$. Therefore, applying $\pi_i$ we get $\pi_i(d) = \pi_i(u)\pi_i(f) + \pi_i(v)\pi_i(g)$ and therefore $\pi_i(d) \in \operatorname{redres}(\pi_i(f), \pi_i(g))$. On the other hand, let $c_i \in \operatorname{redres}(\pi_i(f), \pi_i(g))$ for $i = 1, 2$. Then the Chinese remainder theorem implies that there exist $c \in R$ and $u, v \in R[x]$ such that $\pi_i(c) = c_i$ and $c = uf + vg$, as desired. ∎
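The recombination can be illustrated with a brute-force sketch of our own (only sensible for tiny inputs). We compute the constants $uf + vg$ with the degree bounds of Lemma 4.2 over each component and glue them by the Chinese remainder theorem, for the monic polynomials $f = x^2 + 1$, $g = x + 1$ over $\mathbb{Z}/12\mathbb{Z} \cong \mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/3\mathbb{Z}$:

```python
from itertools import product
from math import gcd

def redres_bruteforce(f, g, n):
    # generator of the ideal of constants u*f + v*g with deg(u) < deg(g),
    # deg(v) < deg(f), over Z/nZ (valid here since f is monic, cf. Lemma 4.2)
    df, dg = len(f) - 1, len(g) - 1
    best = 0
    for coeffs in product(range(n), repeat=df + dg):
        u, v = coeffs[:dg], coeffs[dg:]
        s = [0] * (df + dg)
        for i, a in enumerate(u):
            for j, b in enumerate(f):
                s[i + j] = (s[i + j] + a * b) % n
        for i, a in enumerate(v):
            for j, b in enumerate(g):
                s[i + j] = (s[i + j] + a * b) % n
        if all(c == 0 for c in s[1:]):            # u*f + v*g is a constant
            best = gcd(best, gcd(s[0], n))
    return best % n

f, g = [1, 0, 1], [1, 1]          # x^2 + 1 and x + 1, ascending coefficients
c1 = redres_bruteforce(f, g, 4)   # component Z/4Z
c2 = redres_bruteforce(f, g, 3)   # component Z/3Z
c = next(x for x in range(12) if x % 4 == c1 and x % 3 == c2)  # CRT glue
```

The glued value $c = 10$ generates the same ideal as $2$ in $\mathbb{Z}/12\mathbb{Z}$, matching $\operatorname{res}(f, g) = 2$.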
The main algorithm
Using Algorithm 5.5, we may assume that the input polynomials for the reduced resultant algorithm are primitive. In order to compute the reduced resultant of two polynomials $f$ and $g$, we want to perform a modified version of the Euclidean algorithm on $f$, $g$. During the computation, we will potentially split the base ring using Proposition 2.3 and reconstruct the result using Lemma 5.7. We will now describe the computation of the reduced resultant. Before stating the algorithm, we briefly outline the basic idea. Let us assume that $\deg(f) \geq \deg(g)$.

If $g$ has invertible leading coefficient, we can divide $f$ by $g$ with the standard algorithm (Remark 3.9).

If the leading coefficient of $g$ is not invertible, we want to apply Proposition 3.10. If the first non-nilpotent coefficient is invertible, then we get a factorization $g = uh$ where $u$ is a unit and $h$ has invertible leading coefficient. Therefore $f$ and $h$ satisfy the hypotheses of item (1). If it is not invertible, then it is a splitting element and Proposition 2.3 applies.
We repeat this until the degree of one of the polynomials drops to $0$. In this case, we can just use one of the base cases from Lemma 5.1.
Summarizing, we get the following recursive algorithm to compute the reduced resultant.
Algorithm 5.8 (Reduced resultant).
Input: Polynomials $f$, $g$ in $R[x]$.
Output: A reduced resultant $\operatorname{redres}(f, g)$.

(1) If $f$ or $g$ is constant, use Lemma 5.1 to return $\operatorname{redres}(f, g)$. If $\deg(f) < \deg(g)$, swap $f$ and $g$.

(2) Apply Algorithm 5.5 to $(f, g)$.

(3) If Step (2) returns an element $a$ (which is necessarily a splitting element), then:

(a) Compute a nontrivial idempotent $e$ using Proposition 2.3.

(b) Recursively compute $c_1 = \operatorname{redres}_{R/eR}(f, g)$, $c_2 = \operatorname{redres}_{R/(1-e)R}(f, g)$ and return $(1 - e)c_1 + e c_2$ as in Lemma 5.7.

(4) Now Step (2) returned a triple $(f', g', c)$.

(5) If $f'$ or $g'$ is constant, then return $c \cdot \operatorname{redres}(f', g')$ using Lemma 5.1.

(6) Now $\deg(f'), \deg(g') > 0$, so that $\operatorname{redres}(f, g) = c \cdot \operatorname{redres}(f', g')$ and both $f'$ and $g'$ are primitive.

(7) Let $a_l x^l$ be the term of $g'$ of highest degree whose coefficient is not nilpotent.

(8) If $a_l$ is not invertible, then $a_l$ is a splitting element and we proceed as in Step (3) with $a$ replaced by $a_l$ and multiply the result by $c$.

(9) Otherwise, compute the factorization $g' = uh$ of Proposition 3.10, divide $f'$ by $h$ with remainder $s$ (Remark 3.9) and return $c \cdot \operatorname{redres}(h, s)$, computed recursively.
Theorem 5.9.
Algorithm 5.8 is correct and terminates. If the degree of $f$ and $g$ is bounded by $d$, then the number of basic operations is in $O(r\,d\,\mathsf{M}(d)\log m)$.
Proof.
We first discuss the correctness of the algorithm. At every recursive call we either have that:

the polynomials we produce generate the same ideal as the starting ones; in this case correctness is clear;

the reduced resultant of the input polynomials is the same as the reduced resultant of the polynomials we produce in output up to a constant, as stated in Proposition 5.6;

we find a splitting element and continue the computation over the residue rings, following Lemma 5.7.
Termination is straightforward too, as at every recursive call we either split the ring, pass to a residue ring or the sum of the degrees of the polynomials decreases, and these operations can happen only a finite number of times. Let us analyse the complexity of the algorithm. Algorithm 5.5 costs at most $O(\mathsf{M}(d)\log m)$ operations by Proposition 5.6 and it is the most expensive operation that can be performed in every recursive call. The splitting of the ring can happen at most $r - 1$ times and every time we need to continue the computation in every quotient ring. The recursive call in Step (3) will start again with Step (2), as the input polynomials are primitive. Therefore, as passing to the quotient ring takes a constant number of operations, it does not affect the asymptotic cost of the algorithm. The recursive call in Step (9) can happen at most $2d$ times. Summing up, the total cost of the algorithm is $O(r\,d\,\mathsf{M}(d)\log m)$ operations. ∎
Example 5.10.
We consider the polynomials and over . The polynomials are primitive and monic, so we go directly to step of the algorithm and we divide by , so that we get . As the second polynomial has now content , we can use it as a splitting element. This means that we need to continue the computation over and . In the first of these rings, as is invertible, we divide by , obtaining the ideal . Thus . Let us now consider the second ring, . Here is nilpotent, so we get as is monic. By dividing, and therefore . Therefore, . Applying the Chinese remainder theorem, we therefore get .
Remark 5.11.
When one of the input polynomials has invertible leading coefficient, at every recursive call of Algorithm 5.8 one of the two polynomials will still have invertible leading coefficient.
Computation of Bézout coefficients
Let $f, g \in R[x]$ be polynomials. We want to find two polynomials $u, v \in R[x]$ such that $uf + vg = \operatorname{redres}(f, g)$. To this end, in the same way as in the Euclidean algorithm, we will keep track of the operations that we perform during the computation of the reduced resultant. In order to describe the algorithm, we just need to explain how to obtain cofactors in the base case and how to update cofactors during the various operations of Algorithm 5.8. We begin with the base case, which follows trivially from Lemma 5.1.
Lemma 5.12.
Let $f, g \in R[x]$ be polynomials.

If $f$ and $g$ are constant, let $u, v \in R$ such that $(uf + vg) = (f, g)$ as ideals of $R$. Then $u, v$ are Bézout coefficients.

If $f$ has positive degree with invertible leading coefficient and $g$ is constant, then $(0, 1)$ are Bézout coefficients.
In particular, if $f$ or $g$ is constant, then Bézout coefficients for the reduced resultant can be computed using $O(1)$ basic operations.
An easy calculation shows the following:
Lemma 5.13.
Let