# Division and Slope Factorization of p-Adic Polynomials

We study two important operations on polynomials defined over complete discrete valuation fields: Euclidean division and factorization. In particular, we design a simple and efficient algorithm for computing slope factorizations, based on Newton iteration. One of its main features is that we avoid working with fractional exponents. We pay particular attention to stability, and analyze the behavior of the algorithm using several precision models.


## 1 Introduction

Polynomial factorization is a fundamental problem in computational algebra. The algorithms used to solve it depend on the ring of coefficients, with finite fields, local fields, number fields and rings of integers of particular interest to number theorists. In this article, we focus on a task that forms a building block for factorization algorithms over complete discrete valuation fields: the decomposition into factors based on the slopes of the Newton polygon.

The Newton polygon of a polynomial P = a_0 + a_1 X + ⋯ + a_n X^n over such a field is given by the convex hull of the points (i, val(a_i)) and the point at infinity. The lower boundary of this polygon consists of line segments of increasing slopes. The slope factorization of P expresses P as a product of polynomials, one for each segment, whose degrees equal the lengths of the segments and whose roots all have the valuation determined by the corresponding slope. Our main result is a new algorithm for computing these factors.
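As a toy illustration (our example, not one from the paper), consider P = X^2 − 6X + 5 over Q_5:

```latex
% Newton polygon of P = X^2 - 6X + 5 = (X - 1)(X - 5) over Q_5.
% The coefficients 5, -6, 1 have 5-adic valuations 1, 0, 0, giving the
% points (0,1), (1,0), (2,0).  The lower boundary has two segments:
%   slope -1 on [0,1]   and   slope 0 on [1,2],
% so the slope factorization splits P into two degree-1 factors:
\[
  P \;=\; \underbrace{(X-5)}_{\text{root of valuation } 1}
  \;\cdot\; \underbrace{(X-1)}_{\text{root of valuation } 0}.
\]
```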

Polynomial factorization over local fields has seen a great deal of progress recently [pauli:10a] [guardia-nart-pauli:12a] [guardia-montes-nart:08a] [montes:99a] following an algorithm of Montes. Slope factorization provides a subroutine in such algorithms [pauli:10a, Section 2]. For the most difficult inputs it is not the dominant contributor to the runtime of the algorithm, but in some circumstances it will be. We underline moreover that the methods introduced in this paper extend partially to the noncommutative setting and thereby appear as an essential building block in several decomposition algorithms for p-adic Galois representations and p-adic differential equations [caruso:16].

Any computation with p-adic fields must work with approximations modulo finite powers of p, and one of the key requirements in designing an algorithm is an analysis of how the precision of the variables evolves over the computation. We work with precision models developed by the same authors [caruso-roe-vaccon:14a, Section 4.2], focusing on the lattice and Newton models. As part of the analysis of the slope factorization algorithm, we describe how the precision of the quotient and remainder depends on the input polynomials in Euclidean division.

Main Results. Suppose that the Newton polygon of P has a break at the abscissa d. Set A_0 to a monic degree-d approximation of the sought factor and V_0 to an approximate inverse of P // A_0 modulo A_0, and define:

    A_{i+1} = A_i + ((V_i · P) % A_i)
    B_{i+1} = P // A_{i+1}
    V_{i+1} = (2V_i − V_i^2 · B_{i+1}) % A_{i+1}

Our main result is Theorem 4.1, which states that the sequence (A_i) converges quadratically to a divisor of P. This provides a quasi-optimal, simple-to-implement algorithm for computing slope factorizations. We moreover carry out a careful study of precision and, applying a strategy coming from [caruso-roe-vaccon:14a], we end up with an algorithm whose output is optimal with respect to accuracy.
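To make the recursion concrete, here is a minimal sketch of our own (not the paper's implementation): we run the iteration over Z/5^10 as a stand-in for Z_5, on the example P = (X − 1)(X − 5), whose Newton polygon breaks at abscissa 1. The helper names, the choice p = 5, and the iteration count are ours.

```python
# Slope-factorization Newton iteration, run over Z/p^10 as a stand-in
# for Z_p.  Polynomials are lists of coefficients, lowest degree first.
p = 5
M = p**10          # we work at absolute precision 10

def padd(f, g):
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [(a + b) % M for a, b in zip(f, g)]

def pmul(f, g):
    r = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            r[i + j] = (r[i + j] + a * b) % M
    return r

def pdivmod(f, g):
    # Euclidean division by a *monic* polynomial g
    f, d = f[:], len(g) - 1
    q = [0] * max(1, len(f) - d)
    for i in range(len(f) - d - 1, -1, -1):
        c = f[i + d] % M
        q[i] = c
        for j, b in enumerate(g):
            f[i + j] = (f[i + j] - c * b) % M
    return q, f[:d]

# P = (X - 1)(X - p): the slope factor collecting the positive-valuation
# root is A = X - p.
P = [p, (-(1 + p)) % M, 1]

A = [0, 1]                            # A_0 = X, crude degree-1 approximation
B, _ = pdivmod(P, A)                  # B_0 = P // A_0
V = [pow(B[0], -1, M)]                # V_0 = inverse of B_0 modulo A_0
for _ in range(8):
    _, r = pdivmod(pmul(V, P), A)     # (V_i * P) % A_i
    A = padd(A, r)                    # A_{i+1} = A_i + ((V_i * P) % A_i)
    B, _ = pdivmod(P, A)              # B_{i+1} = P // A_{i+1}
    VVB = pmul(pmul(V, V), B)
    _, V = pdivmod(padd(padd(V, V), [(-c) % M for c in VVB]), A)
                                      # V_{i+1} = (2V_i - V_i^2 B_{i+1}) % A_{i+1}
# A is now X - 5 and B is X - 1, both modulo 5^10
```

On this example the error valuation roughly doubles at each step (quadratic convergence), so a handful of iterations already pins the factor down to the working precision.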

In order to prove Theorem 4.1, we also determine the precision of the quotient and remainder in Euclidean division, which may be of independent interest. These results are found in Section 3.2.

Organization of the paper. After setting notation, in Section 2 we recall various models for tracking precision in polynomial arithmetic. We give some background on Newton polygons and explain how using lattices to store precision can allow for extra diffused p-adic digits that are not localized on any single coefficient.

In Section 3, we consider Euclidean division. We describe in Theorem 3.2 how the Newton polygons of the quotient and remainder depend on those of the numerator and denominator. We use this result to describe in Proposition 3.3 the precision evolution in Euclidean division using the Newton precision model. We then compare the precision performance of Euclidean division in the jagged, Newton and lattice models experimentally, finding different behavior depending on the modulus.

Finally, in Section 4 we describe our slope factorization algorithm, which is based on a Newton iteration. Unlike other algorithms for slope factorization, ours does not require working with fractional exponents. In Theorem 4.1 we define a sequence of polynomials that will converge to the factors determined by an extremal point in the Newton polygon. We then discuss the precision behavior of the algorithm.

Notations. Throughout this paper, we fix a complete discrete valuation field K; we denote by val the valuation on it and by O_K its ring of integers (i.e. the set of elements with nonnegative valuation). We assume that val is normalized so that it is surjective and denote by π a uniformizer of K, that is, an element of valuation 1. Denoting by S a fixed set of representatives of the classes modulo π and assuming 0 ∈ S, one can prove that each element x ∈ K can be represented uniquely as a convergent series:

    x = \sum_{i = val(x)}^{+∞} a_i π^i    with a_i ∈ S.    (1)

The two most important examples are the field Q_p of p-adic numbers and the field k((t)) of Laurent series over a field k. The valuations on them are the p-adic valuation and the usual valuation of a Laurent series respectively. Their rings of integers are therefore Z_p and k[[t]] respectively. A distinguished uniformizer is p and t respectively, whereas a possible set S is {0, 1, …, p−1} and k respectively. The reader who is not familiar with complete discrete valuation fields may assume (without sacrificing too much generality) that K is one of the two aforementioned examples.
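As an illustration of Eq. (1) in the case K = Q_p, the digits a_i of an element of Z_p can be computed one at a time; the following short sketch is ours (the choice p = 5, the element −1, and the helper name are assumptions for the example):

```python
# Digits a_i of the expansion (1) for an integer viewed in Z_p,
# with p = 5 and S = {0, ..., p-1}.
p = 5

def digits(x, n):
    """Return the first n digits of the p-adic expansion of the integer x."""
    out = []
    for _ in range(n):
        a = x % p            # representative in S = {0, ..., p-1}
        out.append(a)
        x = (x - a) // p     # peel off the digit and shift by 1/p
    return out

print(digits(-1, 6))   # [4, 4, 4, 4, 4, 4]: -1 = 4 + 4*5 + 4*5^2 + ...
print(digits(7, 3))    # [2, 1, 0]: 7 = 2 + 1*5
```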

In what follows, the notation K[X] refers to the ring of univariate polynomials with coefficients in K. The subspace of polynomials of degree at most n (resp. exactly n) is denoted by K_{≤n}[X] (resp. K_{=n}[X]).

## 2 Precision data

Elements in K (and a fortiori in K[X]) carry an infinite amount of information. They thus cannot be stored entirely in the memory of a computer and have to be truncated. Elements of K are usually represented by truncating Eq. (1) as follows:

    x = \sum_{i = v}^{N−1} a_i π^i + O(π^N)    (2)

where N is an integer called the absolute precision and the notation O(π^N) means that the coefficients a_i for i ≥ N are discarded. If x ≠ 0 and v = val(x), the integer v is the valuation of x and the difference N − v is called the relative precision. Alternatively, one may think that the writing (2) represents a subset of K which consists of all elements of K for which the a_i's in the range [v, N) are those specified. From the metric point of view, this is a ball (centered at any point inside it).

It is worth noting that tracking precision using this representation is rather easy. For example, if x and y are known with absolute (resp. relative) precision N_x and N_y respectively, one can compute the sum x + y (resp. the product xy) at absolute (resp. relative) precision min(N_x, N_y). Computations with p-adic numbers and Laurent series are often handled this way in symbolic computation software.
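These two rules can be prototyped in a few lines. Below is a toy tracker of our own (the representation, the helper names, and the choice p = 7 are assumptions for the example): an element x is stored as a pair (lift, N) meaning x = lift + O(p^N).

```python
# Toy precision tracking for K = Q_p, p = 7.  An element is (lift, N),
# i.e. lift + O(p^N); we assume nonzero lifts so that val terminates.
p = 7

def val(n):
    """p-adic valuation of a nonzero integer."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def add(x, y):
    # absolute precision of a sum: min of the absolute precisions
    N = min(x[1], y[1])
    return ((x[0] + y[0]) % p**N, N)

def mul(x, y):
    # relative precision of a product: min of the relative precisions
    v = val(x[0]) + val(y[0])
    N = v + min(x[1] - val(x[0]), y[1] - val(y[0]))
    return ((x[0] * y[0]) % p**N, N)

x = (3 * p, 5)       # 3*7 + O(7^5): valuation 1, relative precision 4
y = (2, 3)           # 2 + O(7^3): valuation 0, relative precision 3
print(add(x, y))     # absolute precision min(5, 3) = 3
print(mul(x, y))     # valuation 1, relative precision min(4, 3) = 3, so O(7^4)
```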

### 2.1 Precision for polynomials

The situation is much more subtle when we are working with a collection of elements of K (e.g. a polynomial) and not just a single one. Indeed, several precision data may be considered and, as we shall see later, each has its own advantages. Below we detail three models of precision for the special case of polynomials.

Flat precision. The simplest method for tracking the precision of a polynomial is to record each coefficient modulo a fixed power of π. While easy to analyze and implement, this method suffers when applied to polynomials whose Newton polygons are far from flat.

Jagged precision. The next obvious approach is to record the precision of each coefficient individually, a method that we will refer to as jagged precision. Jagged precision is commonly implemented in computer algebra systems, since standard polynomial algorithms can be written for generic coefficient rings. However, these generic implementations often have suboptimal precision behavior, since combining intermediate expressions into a final answer may lose precision. Moreover, when compared to the Newton precision model, extra precision in the middle coefficients, above the Newton polygon of the remaining terms, will have no effect on any of the values of that polynomial.

Newton precision. We now move to Newton precision data. They can actually be seen as particular instances of jagged precision, but they admit better representations and better algorithms.

###### Definition 2.1

A Newton function of degree n is a convex function φ : [0, n] → R ∪ {+∞} which is piecewise affine, takes a finite value at n, and whose epigraph has extremal points with integral abscissas.

###### Remark 2.2

The datum of a Newton function is equivalent to that of its epigraph, and both can easily be represented and manipulated on a computer.

We recall that one can attach a Newton function to each polynomial. If P = \sum a_i X^i, we define its Newton polygon NP(P) as the convex hull of the points (i, val(a_i)) together with the point at infinity, and then its Newton function NF(P) as the unique function whose epigraph is NP(P). It is well known [dwork-geratto-sullivan:Gfunctions, Section 1.6] that:

    NP(P + Q) ⊂ Conv(NP(P) ∪ NP(Q))        NP(PQ) = NP(P) + NP(Q)

where Conv denotes the convex hull and the plus sign stands for the Minkowski sum. This translates to:

    NF(P + Q) ≥ NF(P) ⊕ NF(Q)        NF(PQ) = NF(P) ⊗ NF(Q)

where the operations ⊕ and ⊗ are defined accordingly. There exist classical algorithms for computing these two operations whose complexity is quasi-linear with respect to the degree.
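As an aside, the Newton function attached to a single polynomial can itself be computed from the coefficient valuations by one lower-convex-hull pass (linear time once the points are sorted by abscissa). The sketch below is ours and the helper name is made up:

```python
# Vertices of the Newton function of a polynomial, computed from the
# list of coefficient valuations (None marks a zero coefficient).
# Monotone-chain style lower convex hull; points arrive sorted by i.
def newton_function_vertices(vals):
    pts = [(i, v) for i, v in enumerate(vals) if v is not None]
    hull = []
    for x, y in pts:
        # pop while the last hull vertex does not make a strict convex turn
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (y2 - y1) * (x - x2) >= (y - y2) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append((x, y))
    return hull

# P = X^2 - 6X + 5 over Q_5: coefficient valuations (1, 0, 0)
print(newton_function_vertices([1, 0, 0]))   # [(0, 1), (1, 0), (2, 0)]
```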

In a similar fashion, Newton functions can be used to model precision: given a Newton function φ of degree n, we agree that a polynomial P of degree at most n is given at precision O(φ) when, for all i, its i-th coefficient is given at precision O(π^{⌈φ(i)⌉}) (where ⌈·⌉ is the ceiling function). In the sequel, we shall write P = P_app + O(φ) (where the coefficients of P_app are given by truncated series) to refer to a polynomial given at precision O(φ).

It is easily checked that if P and Q are two polynomials known at precision O(φ_P) and O(φ_Q) respectively, then P + Q is known at precision O(φ_P ⊕ φ_Q) and PQ is known at precision O((φ_P ⊗ NF(Q)) ⊕ (NF(P) ⊗ φ_Q)).

###### Definition 2.3

Let P = P_app + O(φ). We say that the Newton precision O(φ) on P is nondegenerate if φ(x) > NF(P_app)(x) for all extremal points x of NP(P_app).

We notice that, under the conditions of the above definition, the Newton polygon of P is well defined. Indeed, if H is any polynomial whose Newton function is not less than φ, we have NP(P_app + H) = NP(P_app).

Lattice precision. The notion of lattice precision was developed in [caruso-roe-vaccon:14a]. It encompasses the two previous models and has the decisive advantage of precision optimality. On the other hand, it may be very space- and time-consuming for polynomials of large degree.

###### Definition 2.4

Let E be a finite dimensional vector space over K. A lattice in E is a sub-O_K-module of E generated by a K-basis of E.

We fix an integer n. A lattice precision datum for a polynomial of degree n is a lattice H lying in the vector space K_{≤n}[X] of polynomials of degree at most n. We shall sometimes denote it O(H) in order to emphasize that it should be considered as a precision datum. The notation P_app + O(H) then refers to any polynomial in the affine space P_app + H. Tracking lattice precision can be done using differentials as shown in [caruso-roe-vaccon:14a, Lemma 3.4 and Proposition 3.12]: if f denotes any strictly differentiable function with surjective differential, under mild assumptions on H, we have:

    f(P_app(X) + H) = f(P_app(X)) + f′(P_app(X))·H

where f′(P_app) denotes the differential of f at P_app. The equality sign reflects the optimality of the method.

As already mentioned, the jagged precision model is a particular case of lattice precision. Indeed, a precision of the shape O(π^{N_0}) + O(π^{N_1})X + ⋯ + O(π^{N_n})X^n corresponds to the lattice generated by the elements π^{N_i} X^i (0 ≤ i ≤ n). This remark is the origin of the notion of diffused digits of precision introduced in [caruso-roe-vaccon:15a, Definition 2.3]. We shall use it repeatedly in the sequel in order to compare the behaviour of the three aforementioned precision models in concrete situations.
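To make diffused digits concrete, here is a toy check of our own (the example, over Q_2, is an assumption for illustration): push the jagged precision O(2^5) on a pair (a, b) through the linear map (a, b) ↦ (a + b, a − b). Coordinate-wise (jagged), each output is only known modulo 2^5, but the image lattice remembers that the sum of the two coordinates is known modulo 2^6: one digit is diffused across the two coefficients.

```python
# Image of the lattice 2^5 Z_2 x 2^5 Z_2 under (a, b) -> (a + b, a - b):
# it is spanned by 2^5*(1, 1) and 2^5*(1, -1).
g1, g2 = (2**5, 2**5), (2**5, -2**5)   # generators of the image lattice

# Every lattice element c1*g1 + c2*g2 has both coordinates divisible by
# 2^5 (that is all jagged precision can record), yet the coordinate sum
# is always divisible by 2^6 -- the extra, diffused digit of precision.
for c1 in range(-3, 4):
    for c2 in range(-3, 4):
        x, y = c1 * g1[0] + c2 * g2[0], c1 * g1[1] + c2 * g2[1]
        assert x % 2**5 == 0 and y % 2**5 == 0
        assert (x + y) % 2**6 == 0
print("diffused digit confirmed")
```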

## 3 Euclidean division

Euclidean division provides a building block for many algorithms associated to polynomials in one variable. In order to analyze the precision behavior of such algorithms, we need to first understand the precision attached to the quotient and remainder when dividing two polynomials. In the sequel, we use the notation P // Q and P % Q for the polynomials satisfying P = (P // Q)·Q + (P % Q) and deg(P % Q) < deg Q.

### 3.1 Euclidean division of Newton functions

###### Definition 3.1

Let φ_A and φ_B be two Newton functions of degree n_A and n_B respectively. Set n_Q = n_A − n_B. Letting ℓ be the greatest affine function of the relevant slope satisfying ℓ ≤ φ_A, we define:

Figure 1 illustrates the definition: if φ_A and φ_B are the functions represented on the diagram, the epigraph of the remainder is the blue area whereas that of the quotient is the green area, translated by n_B. It is an easy exercise (left to the reader) to design quasi-linear algorithms for computing φ_A // φ_B and φ_A % φ_B.