# On sets of linear forms of maximal complexity

We present a uniform description of sets of m linear forms in n variables over the field of rational numbers whose computation requires m(n - 1) additions.

12/09/2021


## 1. Introduction

### 1.1. Motivation and background

Evaluating a set of linear forms is a natural computational task that frequently appears in both theory and applications. For a matrix

 (1.1)   \Delta = \begin{pmatrix} \delta_{1,1} & \delta_{1,2} & \cdots & \delta_{1,n} \\ \delta_{2,1} & \delta_{2,2} & \cdots & \delta_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ \delta_{m,1} & \delta_{m,2} & \cdots & \delta_{m,n} \end{pmatrix}

and a column vector

 (1.2)   \mathbf{x} = (x_1, \ldots, x_n)^T

linear forms are presented as a matrix-vector product

 (1.3)   \begin{pmatrix} \delta_{1,1} & \delta_{1,2} & \cdots & \delta_{1,n} \\ \delta_{2,1} & \delta_{2,2} & \cdots & \delta_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ \delta_{m,1} & \delta_{m,2} & \cdots & \delta_{m,n} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \Delta\mathbf{x} = (\delta_{s,1}x_1 + \cdots + \delta_{s,n}x_n)_{s=1}^{m},

in which the matrix entries are fixed values, the vector entries are varying inputs, and computations are by means of linear algorithms. As expected, the complexity of a linear algorithm is its number of additions, and we are interested in sets of linear forms of high complexity.

We call the additive complexity of (1.3) the complexity of the matrix \Delta.

Obviously, the set of linear forms (1.3) can be computed in m(n-1) additions. However, over finite fields, this trivial upper bound is not the best possible. Namely, over a finite field of q elements, the set can be computed in O(mn/\log_q m) additions, see [8, Theorem 1], where the implied constants are absolute. On the other hand (over finite fields), there exist m, n and q for which any computation of (1.3) requires a matching number of additions, cf. [8, Section 5]. In fact, this lower bound holds for almost all m \times n matrices over the field of q elements, see [4, Appendix B.2] for the precise statement and the proof by a counting argument. Thus, for each admissible pair of positive integers m and n, the entries of such a matrix can be effectively computed, but to describe them uniformly (in m and n) is a very difficult open problem. Indeed, over finite fields, not a single explicit example of a set of linear forms of non-linear complexity is known in the literature.
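For concreteness, the trivial upper bound can be sketched as follows (a Python sketch of ours, not part of the formal model of Section 2): each form is accumulated left to right with n - 1 additions.

```python
# A sketch (ours) of the trivial evaluation: each of the m forms
# delta[s][0]*x[0] + ... + delta[s][n-1]*x[n-1] is accumulated left to right
# with n - 1 additions, so m*(n-1) additions in total.

def evaluate_forms(delta, x):
    """Return (values, additions) for the matrix-vector product delta * x."""
    additions, values = 0, []
    for row in delta:
        acc = row[0] * x[0]
        for coefficient, variable in zip(row[1:], x[1:]):
            acc += coefficient * variable
            additions += 1
        values.append(acc)
    return values, additions

values, additions = evaluate_forms([[1, 2, 3], [4, 5, 6]], [7, 8, 9])
assert values == [50, 122]
assert additions == 2 * (3 - 1)   # m = 2 forms, n = 3 variables
```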

The situation is quite different when the underlying field is infinite. By a transcendence degree argument, it is easy to see that, over the field of real numbers, say, when the entries of \Delta are algebraically independent, the computation of (1.3) requires m(n-1) additions (cf. [2, Section 5.2]). This leads to a natural question: what about the field of rational numbers? As has been remarked in [4, Appendix B.3], almost all matrices (1.1) are of such complexity, and a specific example of such a matrix is the main result of [4].

Namely, it has been shown in [4] that, if the entries of a real matrix are algebraically independent and a rational matrix is “sufficiently close” to it, then the rational matrix is also of maximal complexity m(n-1). However, making the above “sufficiently close” quantitative and, as a corollary, obtaining a uniform description of such matrices is based on very non-trivial number-theoretic tools [9] and also involves lengthy and somewhat tedious calculations.

In a few words, the construction of the matrix in [4] consists of four stages and is as follows.

First, by following the proof for the algebraically independent case, it is shown that matrices of non-maximal complexity are zeros of polynomials possessing a rather simple structure, and a theorem of Perron [6] is used to bound these polynomials’ degree and height. Using these polynomials, it is shown that if a real matrix has algebraically independent entries (and is thus of maximal complexity), then any matrix over the rationals that is sufficiently close to it (in the Frobenius norm) is of maximal complexity as well. The rest is to construct such a pair of matrices.

The construction of the real matrix uses an effective version of the Lindemann–Weierstrass theorem [9] from transcendental number theory. In particular, its entries are real numbers of the form e^{r} for suitable rational numbers r.

Then a rational matrix that is sufficiently close to it is constructed by truncating the Taylor expansions of these exponentials.

Finally, the rational matrix is converted into an integer matrix of the same, maximal, complexity. Because of the approximation precision required by Sert’s theorem [9], the entries of the resulting matrix are triple exponential in the matrix size, implying that their, say, binary representation is double exponential in it.

To the best of our knowledge, this is the only known example of a set of linear forms over the rationals of non-linear complexity. In fact, this set is of the largest possible complexity m(n-1).

### 1.2. New construction

In this paper we present an example of an integer matrix of maximal complexity m(n-1) whose entries are double exponential in the matrix size. Thus, their binary representation is of an exponential size, which is one exponent less than the size of the example from [4].

It is convenient to identify m \times n matrices with the N-dimensional vectors, N = mn, obtained by the concatenation of their rows.

Namely, we prove the following result.

###### Theorem 1.1.

Let

  \Omega = \begin{pmatrix} \omega_{1,1} & \omega_{1,2} & \cdots & \omega_{1,n} \\ \omega_{2,1} & \omega_{2,2} & \cdots & \omega_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ \omega_{m,1} & \omega_{m,2} & \cdots & \omega_{m,n} \end{pmatrix} = (a_1, \ldots, a_N)

be a matrix with integer entries a_1, \ldots, a_N, where N = mn, satisfying

  2N^{2N^{N^2}} \geqslant a_1 > N^{2N^{N^2}}, \qquad (2N)^{2\ell N^{N^2+\ell N-\ell}} \geqslant a_\ell > N^{2\ell N^{N^2+\ell N-\ell}}, \quad \ell = 2, \ldots, N-1, \qquad a_N \geqslant (2N)^{2N^{2N^2-N+1}}.

Then the complexity of \Omega equals m(n-1).

Note that such integers a_1 < \cdots < a_N exist and, for the choice

  a_N = (2N)^{2N^{2N^2-N+1}},

the entries

  \{\omega_{s,t} :\ s = 1, \ldots, m,\ t = 1, \ldots, n\} = \{a_1, \ldots, a_N\}

of \Omega are double exponential in N.
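For illustration (our own back-of-the-envelope computation, assuming the lower bound a_1 > N^{2N^{N^2}} of Theorem 1.1), already the smallest entry has a binary representation of exponential length:

```python
# Our own computation: the lower bound N**(2*N**(N*N)) on a_1 in Theorem 1.1
# is double exponential in N, so its binary representation is already of
# exponential length in N.

def a1_lower_bound(N):
    return N ** (2 * N ** (N * N))

assert a1_lower_bound(2) == 2 ** 32            # 33 bits for N = 2
assert a1_lower_bound(3).bit_length() > 60000  # tens of thousands of bits for N = 3
```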

This paper is organized as follows. Section 2 consists of two parts. Section 2.1 contains the definition of a linear algorithm and its associated graph and in Section 2.2, we introduce normalized linear algorithms and state some simple basic complexity results. The proof of Theorem 1.1 is presented in Section 3. We conclude the paper with a short remark concerning the size of our example.

## 2. Background from complexity theory

### 2.1. Linear algorithms and their associated graphs

A linear algorithm A over a field F in the indeterminates x_1, \ldots, x_n consists of a sequence of operations u_i \leftarrow \alpha_i u_{j_i} + \beta_i u_{k_i}, i = 1, \ldots, |A|, where

• \alpha_i, \beta_i \in F are the algorithm coefficients;

• u_i is an algorithm variable that does not appear in a previous step;

• u_{j_i} and u_{k_i} are either indeterminates (namely, belong to the set \{x_1, \ldots, x_n\}) or algorithm variables appearing in a previous step (that is, j_i, k_i < i).

With each algorithm variable u in a linear algorithm A we associate the following linear form \ell(u):

• if u is an indeterminate x_t, then \ell(u) is x_t;

• if u is the left-hand side of an operation u \leftarrow \alpha v + \beta w, then \ell(u) is the linear form \alpha\ell(v) + \beta\ell(w).

A linear algorithm computes a linear form \ell(x_1, \ldots, x_n) if there are a variable, or an indeterminate, u of the algorithm and a constant \gamma such that \ell = \gamma\ell(u) (thus, linear algorithms compute linear forms up to scaling by a constant). A linear algorithm computes a set of linear forms

  L(x_1, \ldots, x_n) = \{\ell_s(x_1, \ldots, x_n) :\ s = 1, \ldots, m\}

if it computes each form \ell_s, s = 1, \ldots, m.

The number n of the indeterminates and the number m of the linear forms are fixed throughout this paper.
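The definitions above can be illustrated by a small sketch (ours; the dictionary-based representation is an implementation choice, not part of the formal model), in which each variable stores the coefficient vector of its associated linear form:

```python
# A sketch (ours, not part of the formal model): each algorithm variable u
# stores the coefficient vector of its associated linear form l(u) in
# x1, ..., xn; an operation u <- alpha*v + beta*w combines the vectors.

forms = {"x1": [1, 0], "x2": [0, 1]}   # l(x_t) = x_t, here with n = 2

def step(u, alpha, v, beta, w):
    """Perform the operation u <- alpha*v + beta*w."""
    forms[u] = [alpha * a + beta * b for a, b in zip(forms[v], forms[w])]

step("u1", 1, "x1", 2, "x2")   # l(u1) = x1 + 2*x2
step("u2", 3, "u1", 1, "x1")   # l(u2) = 3*l(u1) + x1 = 4*x1 + 6*x2
assert forms["u1"] == [1, 2]
assert forms["u2"] == [4, 6]
# This algorithm also computes 2*x1 + 4*x2, since that form equals 2*l(u1).
```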

###### Definition 2.1.

The complexity of a linear algorithm A is the length |A| of its sequence of operations.

###### Definition 2.2.

The (additive) complexity of a set of linear forms is the minimal complexity of a linear algorithm that computes the set.

It is known from [10] that if a set of linear forms over an infinite field can be computed with a certain number of additions by a general straight-line algorithm (see [1, Section 12.2]), then it also can be computed with the same number of additions by a linear algorithm. In other words, multiplications and divisions “cannot replace additions.”

With a linear algorithm A we associate a labelled directed acyclic graph G_A whose set of vertices is the union of \{x_1, \ldots, x_n\} and the set of the variables of A, and there is an edge from vertex v to vertex u if there is an operation of the form u \leftarrow \alpha v + \beta w or of the form u \leftarrow \alpha w + \beta v. In the former case, the edge is labelled \alpha and, in the latter case, it is labelled \beta, see Figure 2.1 below.

We denote the label of an edge e by \lambda(e).

By definition, the vertices of G_A of positive in-degree are exactly the algorithm variables, and their number is |A|.

Let \pi be a path of edges in G_A. The weight w(\pi) of \pi is defined, recursively, as follows.

• If \pi is of length zero, then w(\pi) = 1; and

• w(\pi e) = w(\pi)\lambda(e), where \pi e is the path \pi extended with the edge e and \lambda(e) is the label of e.

The following correspondence between linear algorithms and their associated graphs is well-known from the literature, see, for example, [2, Remark 13.19].

###### Lemma 2.3.

Let

  A = \{u_i \leftarrow \alpha_i u_{j_i} + \beta_i u_{k_i} :\ i = 1, \ldots, |A|\}

be a linear algorithm and let \Pi_A(x_t, u_i) denote the set of all paths of edges from the indeterminate x_t to the algorithm variable u_i in G_A. Then

  \ell(u_i) = \sum_{t=1}^{n} \Bigl(\sum_{\pi \in \Pi_A(x_t, u_i)} w(\pi)\Bigr) x_t, \qquad i = 1, \ldots, |A|.
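Lemma 2.3 can be checked on a toy algorithm (our sketch; the function path_weight_sum and the two-operation algorithm are of our choosing):

```python
# A small check (ours) of Lemma 2.3: the coefficient of x_t in l(u_i) equals
# the sum of the weights of all paths from x_t to u_i, a weight being the
# product of the edge labels along the path.

# Algorithm: u1 <- 2*x1 + 3*x2, u2 <- 5*u1 + 7*x2.
# For each vertex we list its incoming edges as (source, label).
edges = {"u1": [("x1", 2), ("x2", 3)], "u2": [("u1", 5), ("x2", 7)]}

def path_weight_sum(source, target):
    """Sum of the weights of all paths from source to target in the DAG."""
    if source == target:
        return 1                      # the empty path has weight 1
    return sum(label * path_weight_sum(source, mid)
               for mid, label in edges.get(target, []))

# Direct expansion: l(u2) = 5*(2*x1 + 3*x2) + 7*x2 = 10*x1 + 22*x2.
assert path_weight_sum("x1", "u2") == 10
assert path_weight_sum("x2", "u2") == 22
```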

### 2.2. Normalized linear algorithms

In this section we introduce a subclass of linear algorithms called normalized linear algorithms. These algorithms have the same computational power, but are more convenient for dealing with complexity issues.

###### Definition 2.4.

A linear algorithm is normalized if in each of its operations

  u_i \leftarrow \alpha_i u_{j_i} + \beta_i u_{k_i}

the coefficient \alpha_i of u_{j_i} is 1. The coefficient \beta_i of u_{k_i}, which may also be 1, is called a proper coefficient.

We say that a label of G_A is proper if it is a proper coefficient of the algorithm.

The result below immediately follows from Definition 2.4 and the definition of the associated graph G_A of an algorithm A.

###### Lemma 2.5.

The additive complexity of a normalized linear algorithm A equals the number of proper labels of its associated graph G_A.

Furthermore, we also have the following result, given in [4, Proposition 6].

###### Lemma 2.6.

For each linear algorithm there is a normalized linear algorithm of the same complexity that computes the same set of linear forms.

From now on, by Lemma 2.6, we assume that all linear algorithms under consideration are normalized.
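The idea behind Lemma 2.6 can be sketched as follows (our sketch, not the construction of [4]): dividing each operation by its first coefficient and tracking the accumulated scaling factors yields a normalized algorithm of the same length.

```python
from fractions import Fraction

# A sketch (ours) of the idea behind Lemma 2.6: store each variable u as a
# scaled copy u' with u = scale[u] * u'.  Recording u <- alpha*v + beta*w as
# the normalized u' <- v' + b*w' keeps the number of operations unchanged.

scale = {"x1": Fraction(1), "x2": Fraction(1)}   # indeterminates are unscaled
normalized = []                                   # operations (u, v, b, w)

def push(u, alpha, v, beta, w):
    """Record u <- alpha*v + beta*w as a normalized operation."""
    b = Fraction(beta) * scale[w] / (Fraction(alpha) * scale[v])
    scale[u] = Fraction(alpha) * scale[v]
    normalized.append((u, v, b, w))               # means u' <- v' + b*w'

push("u1", 2, "x1", 6, "x2")   # u1 = 2*x1 + 6*x2 = 2*(x1 + 3*x2)
push("u2", 4, "u1", 2, "x1")   # u2 = 4*u1 + 2*x1 = 8*(u1' + (1/4)*x1)
assert scale["u1"] == 2 and normalized[0][2] == 3
assert scale["u2"] == 8 and normalized[1][2] == Fraction(1, 4)
```

A form computed by the original algorithm as \gamma\ell(u) is computed by the normalized one with the constant \gamma\,\mathrm{scale}(u), so the computed set of forms is unchanged.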

## 3. Proof of Theorem 1.1

### 3.1. Outline

The proof is based on

• an explicit form of the Perron theorem [6] on annihilating polynomials, see Lemma 3.3;

• a relationship between the complexity of linear forms and zeros of certain multivariate polynomials, see Lemma 3.4;

• a new construction of a reasonably small integer vector at which no member of a large family of multivariate polynomials vanishes, see Lemma 3.6.

### 3.2. Bound on annihilating polynomials

To formulate a fully explicit form of the Perron theorem [6] we introduce the following definition.

###### Definition 3.1.

We say that P \in \mathbb{Z}[Z_1, \ldots, Z_N] is an annihilating polynomial of P_1, \ldots, P_N \in \mathbb{Z}[X_1, \ldots, X_{N-1}], if P is a non-zero polynomial and

  P(P_1(X_1, \ldots, X_{N-1}), \ldots, P_N(X_1, \ldots, X_{N-1})) = 0.

###### Lemma 3.2.

Let

  P_k(X_1, \ldots, X_{N-1}) \in \mathbb{Z}[X_1, \ldots, X_{N-1}]

with \deg P_k = d_k, k = 1, \ldots, N. Then there exists an annihilating polynomial P of P_1, \ldots, P_N such that

  \deg P \leqslant \frac{d_1 \times \cdots \times d_N}{\min\{d_1, \ldots, d_N\}}.
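As a toy illustration (our example, not from [6] or [7]), take N = 2, P_1(X) = X^2 and P_2(X) = X^3, so that d_1 = 2 and d_2 = 3:

```latex
% Toy illustration of Lemma 3.2 for N = 2 (our example):
% P_1(X) = X^2, P_2(X) = X^3, so d_1 = 2, d_2 = 3.
P(Z_1, Z_2) = Z_1^3 - Z_2^2,
\qquad P\bigl(P_1(X), P_2(X)\bigr) = X^6 - X^6 = 0,
\qquad \deg P = 3 = \frac{d_1 d_2}{\min\{d_1, d_2\}}.
```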

We also write H(P) for the naive height of a polynomial P over \mathbb{Z} (in one or several variables), that is, the largest absolute value of its coefficients.

We recall the following result given in [4, Proposition 23].

###### Lemma 3.3.

Let P be an annihilating polynomial of

  P_k(X_1, \ldots, X_{N-1}) \in \mathbb{Z}[X_1, \ldots, X_{N-1}], \qquad k = 1, \ldots, N.

There exists another annihilating polynomial

  Q(Z_1, \ldots, Z_N) \in \mathbb{Z}[Z_1, \ldots, Z_N]

of P_1, \ldots, P_N of degree and height

  \deg Q \leqslant \deg P, \qquad H(Q) \leqslant 1 + \left(\binom{\deg P + N}{N} N d_{\max}^{\deg P} h_{\max}^{\deg P}\right)^{\binom{\deg P + N}{N} - 1},

respectively, where

  d_{\max} = \max\{\deg P_k :\ k = 1, \ldots, N\}, \qquad h_{\max} = \max\{H(P_k) :\ k = 1, \ldots, N\}.

### 3.3. Complexity of linear forms and vanishing of polynomials

The first step in the proof of Theorem 1.1 is similar to that in [4]. It also resembles some previous results of this type, see, for example, [2, Lemma 9.28]; however, it seems to be new.

###### Lemma 3.4.

If , then for some nonzero polynomial with integer coefficients of degree and height

 degP⩽NN−1, H(P)⩽N2NN2

respectively, where , we have .

###### Proof.

Recall that we represent a linear form \ell by the product of the row vector of its coefficients and the (column) vector \mathbf{x} of the indeterminates as in (1.2). Similarly, we represent a set of linear forms

  \ell_s(x_1, \ldots, x_n) = \sum_{t=1}^{n} \delta_{s,t} x_t, \qquad s = 1, \ldots, m,

by a matrix-vector product \Delta\mathbf{x}, where the s-th row of the matrix \Delta is the row vector of the coefficients of \ell_s, see (1.3).

Let A be a linear algorithm that computes (1.3) and let G_A be its associated labelled graph.

Let u_{i_s} and \gamma_s, s = 1, \ldots, m, be the algorithm variables and the respective constants such that

  \ell_s(x_1, \ldots, x_n) = \sum_{t=1}^{n} \delta_{s,t} x_t = \gamma_s \ell(u_{i_s}).

Then, by Lemma 2.3,

 (3.1)   \delta_{s,t} = \gamma_s \sum_{\pi \in \Pi_A(x_t, u_{i_s})} w(\pi) = P_{s,t}(\gamma_s, \beta_1, \ldots, \beta_{|A|})

for some polynomials P_{s,t} in |A| + 1 variables, s = 1, \ldots, m and t = 1, \ldots, n. It follows from Lemma 2.3 that each P_{s,t} is of degree at most |A| + 1 and has non-negative integer coefficients.

If the complexity |A| of the algorithm, that is, the number of its proper labels (see Lemma 2.5), is less than m(n-1), then the total number of the \gamma and \beta variables is less than m + m(n-1) = mn = N, implying that the N polynomials P_{s,t} are algebraically dependent.

Let P be a non-zero polynomial with integer coefficients such that

 (3.2)   P(P_{1,1}(Y_1, X_1, \ldots, X_{|A|}), \ldots, P_{m,n}(Y_m, X_1, \ldots, X_{|A|})) = 0.

Thus, using simple estimates of binomial coefficients, we derive from Lemmas 3.2 and 3.3 that we may assume that

  \deg P \leqslant N^{N-1} \qquad \text{and} \qquad H(P) \leqslant N^{2N^{N^2}}

(we also refer to [4, Corollary 24] for the detailed calculations).

It follows from (3.1) and (3.2) that

  P(\delta_{1,1}, \ldots, \delta_{m,n}) = P(P_{1,1}(\gamma_1, \beta_1, \ldots, \beta_{|A|}), \ldots, P_{m,n}(\gamma_m, \beta_1, \ldots, \beta_{|A|})) = 0,

which concludes the proof. ∎

### 3.4. Zeros and non-zeros of polynomials

The following bound on zeros of polynomials is very well known, see, for example, [5, Theorem 4.2].

###### Lemma 3.5.

Let P \in \mathbb{Z}[X] be a nonzero polynomial and let \xi be a complex root of P. Then |\xi| < 1 + H(P).
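For instance (our example), P(X) = 3X^3 - 7X^2 + 5X - 11 has H(P) = 11, so every root \xi satisfies |\xi| < 12 and, in particular, P has no integer root of absolute value at least 12:

```python
# A numerical sanity check (our example): P(X) = 3*X^3 - 7*X^2 + 5*X - 11 has
# height H(P) = 11, so by Lemma 3.5 every root xi satisfies |xi| < 1 + 11 = 12;
# in particular P has no integer root a with |a| >= 12.

def poly(a):
    return 3 * a**3 - 7 * a**2 + 5 * a - 11

assert all(poly(a) != 0 for a in range(-100, 101) if abs(a) >= 12)
```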

We now establish our main technical tool. We note that results of similar flavour about non-vanishing of polynomials can be found in [3], but the result below seems to be new.

###### Lemma 3.6.

Let P \in \mathbb{Z}[X_1, \ldots, X_N] be a polynomial with integer coefficients of degree d and height H. Then, for integers a_1, \ldots, a_N satisfying

  a_1 > H \qquad \text{and} \qquad a_\ell > 2H a_{\ell-1}^{d}, \quad \ell = 2, \ldots, N,

we have P(a_1, \ldots, a_N) \neq 0.

###### Proof.

The proof is by induction on N. The case N = 1 plainly follows from Lemma 3.5, since a_1 > H. Assume that N \geqslant 2 and that the result holds for N - 1.

Write

  P(X_1, \ldots, X_N) = \sum_{j=0}^{d} P_j(X_1, \ldots, X_{N-1}) X_N^{j}.

The polynomials P_j have degree at most d - j and height at most H. One of them is not zero. From the induction hypothesis, we deduce that the integers P_j(a_1, \ldots, a_{N-1}) with 0 \leqslant j \leqslant d are not all 0. Hence the polynomial

  Q(X) = P(a_1, \ldots, a_{N-1}, X) = \sum_{j=0}^{d} P_j(a_1, \ldots, a_{N-1}) X^{j} \in \mathbb{Z}[X]

is not zero. Its degree is at most d. We claim that the height of Q is strictly less than a_N.

Let us write, for j = 0, \ldots, d,

  P_j(X_1, \ldots, X_{N-1}) = \sum_{j_1 + \cdots + j_{N-1} \leqslant d-j} p_{j_1, \ldots, j_{N-1}, j} X_1^{j_1} \cdots X_{N-1}^{j_{N-1}}

with p_{j_1, \ldots, j_{N-1}, j} \in \mathbb{Z} and |p_{j_1, \ldots, j_{N-1}, j}| \leqslant H. We have

  |P_j(a_1, \ldots, a_{N-1})| \leqslant \sum_{j_1 + \cdots + j_{N-1} \leqslant d-j} |p_{j_1, \ldots, j_{N-1}, j}| a_1^{j_1} \cdots a_{N-1}^{j_{N-1}} \leqslant H \sum_{j_1 + \cdots + j_{N-1} \leqslant d-j} a_1^{j_1} \cdots a_{N-1}^{j_{N-1}} \leqslant H \sum_{\ell=0}^{d-j} (a_1 + a_2 + \cdots + a_{N-1})^{\ell}.

Denote

  S = a_1 + a_2 + \cdots + a_{N-1}.

In the case S \leqslant 9, the claim on the height of Q follows from the above bound by a direct computation.

Assume now S \geqslant 10. Then

  \frac{S}{S-1} \leqslant \frac{10}{9}

and

 (3.3)   \sum_{\ell=0}^{d} S^{\ell} \leqslant \frac{S^{d+1}}{S-1} \leqslant \frac{10}{9}\, S^{d}.

From a_1 \geqslant 2 and from a_\ell > a_{\ell-1}^{d}, by a straightforward induction we conclude that

  a_\ell \geqslant 2^{d^{\ell-1}}, \qquad \ell = 1, \ldots, N.

Using that 2^{k} \geqslant 5k for any integer k \geqslant 5, we derive

  a_\ell \geqslant 5d\, a_{\ell-1}, \qquad \ell = 2, \ldots, N-1.

As a consequence, we have

  S \leqslant a_{N-1} \sum_{k \geqslant 0} (5d)^{-k},

hence

  S < \Bigl(1 + \frac{2}{5d}\Bigr) a_{N-1} \leqslant \Bigl(\frac{3}{2}\Bigr)^{1/d} a_{N-1}.

Recalling (3.3), we see that

  H(Q) \leqslant \frac{10}{9}\, H S^{d} < \frac{10}{9} \cdot \frac{3}{2}\, H a_{N-1}^{d} < 2H a_{N-1}^{d}.

Hence we obtain the desired claim that the height of Q is less than

  2H a_{N-1}^{d} < a_N.

Using this together with Lemma 3.5, we conclude that Q(a_N) = P(a_1, \ldots, a_N) \neq 0 and the result follows. ∎
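The mechanism of Lemma 3.6 can be tried out numerically (our sketch; the sample polynomial is of our choosing, and the growth rule a_\ell > 2H a_{\ell-1}^{d} is our reading of the condition above):

```python
# An experimental illustration (ours) of the mechanism of Lemma 3.6: integers
# growing as a_1 > H, a_l > 2*H*a_{l-1}**d (our reading of the growth
# condition) escape the zero set of a sample polynomial of degree d = 2 and
# height H = 3.

d, H = 2, 3
a = [H + 1]                            # a_1 = 4 > H
for _ in range(2):
    a.append(2 * H * a[-1] ** d + 1)   # smallest admissible next term

def P(x, y, z):                        # degree 2, height 3, integer coefficients
    return 3 * x * y - 2 * y * z + z - 1

assert a == [4, 97, 56455]
assert P(*a) != 0
```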

### 3.5. Concluding the proof

We just check that the conditions of Lemma 3.6 are satisfied if one selects the integers a_1, \ldots, a_N satisfying

  2H \geqslant a_1 > H, \qquad (2H)^{\ell d^{\ell}+1} \geqslant a_\ell > (2H)^{\ell d^{\ell}}, \quad \ell = 2, \ldots, N-1, \qquad a_N \geqslant (2H)^{N d^{N}}.

Indeed, these inequalities yield

  a_2 > (2H)^{2d^{2}} \geqslant (2H)^{d+1} \geqslant 2H a_1^{d}

and, for \ell = 3, \ldots, N,

  a_\ell \geqslant (2H)^{\ell d^{\ell}} > (2H)^{(\ell-1)d^{\ell} + d + 1} \geqslant 2H a_{\ell-1}^{d}.

Taking

  d = N^{N-1} \qquad \text{and} \qquad H = N^{2N^{N^2}},

as for the polynomial P of Lemma 3.4, we conclude the proof.

## 4. Concluding remark

Even if the degree and the height of the annihilating polynomial from the proof of Lemma 3.4 can be reduced, the entries of \Omega in Theorem 1.1 remain double exponential in the matrix size. Of course, it would be interesting to find an example, if it exists, whose entries are only polynomial in the matrix size. However, it is to be expected that this challenge requires a different approach.

## References

• [1] A. V. Aho, J. E. Hopcroft and J. D. Ullman, The design and analysis of computer algorithms, Addison-Wesley, Reading, MA, 1974. Zbl MR
• [2] P. Bürgisser, M. Clausen and A. Shokrollahi, Algebraic complexity theory, Springer, Berlin, 1997. Zbl MR
• [3] L. Fukshansky, ‘Integral points of small height outside of a hypersurface’, Monatsh. Math., 147 (2006), 25–41. Zbl MR
• [4] M. Kaminski and I. E. Shparlinski, ‘Sets of linear forms which are hard to compute’, Proc. 46th Intern. Symp. on Math. Found. of Comp. Sci. (MFCS), Schloss Dagstuhl - Leibniz-Zentrum LIPIcs, vol. 202, F. Bonchi and S.J. Puglisi, eds., Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021, 66:1–66:22.
• [5] M. Mignotte, Mathematics for computer algebra, Springer-Verlag, Berlin, 1992. Zbl MR
• [6] O. Perron, Algebra I (Die Grundlagen), Walter de Gruyter, Berlin, 1927. Zbl MR
• [7] A. Ploski, ‘Algebraic dependence of polynomials after O. Perron and some applications’, Computational Commutative and Non-Commutative Algebraic Geometry, NATO Science Series, III: Computer and Systems Sciences, vol. 196, IOS Press, Amsterdam, 2005, 167–173. Zbl MR
• [8] J. E. Savage, ‘An algorithm for the computation of linear forms’, SIAM J. Comp., 3 (1974), 150–158. Zbl MR
• [9] A. Sert, ‘Une version effective du théorème de Lindemann–Weierstrass par les déterminants d’interpolation’, J. Number Theory, 76 (1999), 94–119. Zbl MR
• [10] V. Strassen, ‘Vermeidung von Divisionen’, J. Reine Angew. Math., 264 (1973), 184–202. Zbl MR