
Computing eigenvalues of semi-infinite quasi-Toeplitz matrices

03/12/2022
by   D. A. Bini, et al.

A quasi-Toeplitz (QT) matrix is a semi-infinite matrix of the form A=T(a)+E, where T(a) is the Toeplitz matrix with entries (T(a))_{i,j}=a_{j-i}, for a_{j-i}∈ℂ, i,j≥1, while E is a matrix representing a compact operator in ℓ^2. The matrix A is finitely representable if a_k=0 for k<-m and for k>n, given m,n>0, and if E has a finite number of nonzero entries. The problem of numerically computing eigenpairs of a finitely representable QT matrix is investigated, i.e., pairs (λ, v) such that Av=λv, with λ∈ℂ, v=(v_j)_{j∈ℤ^+}, v≠0, and ∑_{j=1}^∞ |v_j|^2<∞. It is shown that the problem is reduced to a finite nonlinear eigenvalue problem of the kind WU(λ)β=0, where W is a constant matrix and U depends on λ and can be given in terms of either a Vandermonde matrix or a companion matrix. Algorithms relying on Newton's method applied to the equation det(WU(λ))=0 are analyzed. Numerical experiments show the effectiveness of this approach. The algorithms have been included in the CQT-Toolbox [Numer. Algorithms 81 (2019), no. 2, 741–769].


1 Introduction

A quasi-Toeplitz (QT) matrix is a semi-infinite matrix that can be written as A = T(a) + E, where T(a) = (a_{j-i})_{i,j≥1} is Toeplitz, i.e., associated with a given sequence (a_k)_{k∈ℤ}, and E is compact, that is, E is the limit of a sequence of semi-infinite matrices of finite rank. Here, convergence means that lim_h ||E - E_h|| = 0, where ||·|| is the operator norm induced by the vector norm ||v||_p = (∑_j |v_j|^p)^{1/p}, for p ≥ 1, and the value of p depends on the specific context where the mathematical model originates.

Matrices of this kind are encountered in diverse applications related to semi-infinite domains. For instance, the analysis of queueing models, where buffers have infinite capacity, leads to QT matrices in which the compact correction reproduces the boundary conditions of the model while the Toeplitz part describes the inner action of the stochastic process. A typical paradigm in this framework is given by random walks in the quarter plane. Some references in this regard can be found in the books [5], [26], [28], and in the more recent papers [24], [30], [31]. Another classical and meaningful example concerns the class of matrices that discretize boundary value problems by means of finite differences. In this case, the Toeplitz part of the QT matrix describes the inner action of the differential operator, while the compact correction expresses the boundary conditions imposed on the differential system. In this framework, it is worth citing the two books [19], [20], which are a relevant reference on the closely related subject of generalized locally Toeplitz (GLT) matrices and their applications, and where a rich literature is cited.
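To make this structure concrete, here is a small sketch (a hypothetical example, not taken from the paper) that assembles the N×N leading principal submatrix of A = T(a) + E, with the paper's convention (T(a))_{i,j} = a_{j-i}, for the second-difference symbol a(z) = z^{-1} - 2 + z and a one-entry corner correction standing in for a boundary condition:

```python
import numpy as np

def qt_truncation(a_coeffs, offsets, E, N):
    """N x N leading principal submatrix of A = T(a) + E, where the
    k-th diagonal of T(a) carries a_k, i.e. (T(a))_{i,j} = a_{j-i}."""
    A = np.zeros((N, N))
    for a_k, k in zip(a_coeffs, offsets):
        A += a_k * np.eye(N, k=k)       # place a_k on the k-th diagonal
    r, c = E.shape                      # finite nonzero block of the correction
    A[:r, :c] += E
    return A

# a(z) = z^{-1} - 2 + z, with a hypothetical boundary correction e_{1,1} = 1.
E = np.array([[1.0]])
A10 = qt_truncation([1.0, -2.0, 1.0], [-1, 0, 1], E, 10)
print(A10[0, 0], A10[0, 1], A10[1, 0])   # -1.0 1.0 1.0
```

Only the (1,1) entry differs from the pure Toeplitz part, exactly the role the compact correction plays for boundary conditions.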

Computational aspects in the solution of matrix equations with QT matrices in bidimensional random walks have been recently investigated in [6], [7], [11], while generalizations including probabilistic models with restarts are analyzed in [8]. Other applications of QT matrices have been considered in [3], [4], [10], concerning matrix functions and means, and in [25], [32], concerning Sylvester equations. Important sources of theoretical properties of QT matrices are the books [12], [13], and [16]. In [9], a Matlab toolbox, the CQT-Toolbox, has been introduced for performing computations with QT matrices, including the four arithmetic operations and the most relevant matrix factorizations.

1.1 Main results

In this paper, we deal with the computation of the eigenvalues of QT matrices, a topic that was not covered in the CQT-Toolbox of [9]. Namely, we are interested in the design and analysis of algorithms for computing the eigenvalues λ and the corresponding eigenvectors v of a given QT matrix A, that is, pairs (λ, v) such that Av = λv, v ≠ 0, λ ∈ ℂ, and v ∈ ℓ^2(ℂ). Here ℓ^2(ℂ) is the set of vectors v = (v_j)_{j≥1}, v_j ∈ ℂ, such that ∑_{j=1}^∞ |v_j|^2 < ∞. For the sake of simplicity, in the following we will set ℓ^2 = ℓ^2(ℂ) and use ||·|| to denote the ℓ^2-norm. The attention is restricted to the case where A = T(a) + E is finitely representable, i.e., a_k = 0 for k < -m and k > n, so that T(a) is a band Toeplitz matrix determined by the finite number of parameters a_k for -m ≤ k ≤ n, while E is a matrix having infinitely many rows and columns but a finite number of nonzero entries. A matrix of this kind represents a bounded linear operator from ℓ^2 to ℓ^2. We associate with the matrix T(a) the Laurent polynomial a(z) = ∑_{k=-m}^n a_k z^k.

Recall that the spectrum of a bounded operator A is the set of λ ∈ ℂ such that A - λI is not invertible, and the essential spectrum is the set of λ such that A - λI is not Fredholm. We wish to point out that not all the points of the spectrum or of the essential spectrum are necessarily eigenvalues of A. Moreover, while for a Toeplitz matrix T(a) the set of eigenvalues does not contain isolated points and can be explicitly determined by the image a(𝕋) of the unit circle 𝕋 through the Laurent polynomial a(z) and by the winding number of a(z) - λ (see [12]), for a general QT matrix having a nontrivial compact correction the set of eigenvalues may contain a continuous part and a discrete part, the latter formed by a set of isolated eigenvalues. As an example, see Figure 1.

Figure 1: Isolated eigenvalues (blue dots) and dense set of eigenvalues (green area) of a QT matrix A = T(a) + E with a finitely representable symbol a(z) and a correction E with finitely many nonzero entries.

We prove that any isolated eigenvalue of a QT matrix A is the solution of a finite nonlinear eigenvalue problem of the form

WU(λ)β = 0,

where W is a constant matrix and U(λ) is a matrix-valued function whose size and entries depend on λ in an implicit way. The sizes involved are integers depending on the given matrix A and on the number k of zeros of modulus less than 1 of the Laurent polynomial a(z) - λ. It is well known that the value of k is given by k = m + wind(a - λ), where wind(a - λ) is the winding number of a(z) - λ. Thus, k takes a constant value on each connected component of the set ℂ∖a(𝕋) (see Figure 2 for an example). Note that while k depends on λ, it is locally constant on ℂ∖a(𝕋), and thus we will not write explicitly the dependence on λ.

We consider two different forms of U(λ): the Vandermonde version and the Frobenius version. In the former, U(λ) can be chosen as the Vandermonde-type matrix with entries ξ_j^i, where ξ_1, …, ξ_k are the zeros of modulus less than 1 of a(z) - λ, provided that these zeros are simple. In the latter, U(λ) is a truncation of the matrix [G; G^2; G^3; …] (we adopt the Matlab notation where ";" separates block rows of the matrix), where G^i is the i-th power of the companion (Frobenius) matrix G associated with the monic polynomial having as roots the zeros of a(z) - λ of modulus less than 1.

This formulation of the problem allows us to detect those components that constitute continuous sets of eigenvalues, and to design numerical algorithms for computing the isolated eigenvalues of A by solving the corresponding nonlinear eigenvalue problem. Nonlinear eigenvalue problems have recently received much attention in the literature. Here we refer to the survey paper [22], to the subsequent paper [21], to the more recent works [18] and [23], and to the references therein.

Our algorithms follow the classical approach of applying Newton's iteration, as done in [18] and [22], to the scalar equation f(λ) = 0, where f(λ) = det(WU(λ)), relying on the Jacobi identity f'(λ) = f(λ) trace(M(λ)^{-1} M'(λ)) with M(λ) = WU(λ). Here, the main problem is to exploit the specific features of the function U(λ) through the design of efficient algorithms to compute f(λ) and f'(λ) in both the Vandermonde and the Frobenius formulations. This analysis leads to the algorithmic study of some interesting computational problems, such as computing the winding number of a(z) - λ, computing the coefficients of the polynomial factor of a(z) - λ having zeros of modulus less than 1 together with their derivatives with respect to λ, or computing those zeros together with their derivatives with respect to λ. We will accomplish the above tasks by relying on the combination of different computational tools such as Graeffe's iteration [29], the Wiener-Hopf factorization of a(z) - λ computed by means of the cyclic reduction algorithm [5], and the Barnett factorization [1].

The algorithms based on the Vandermonde and on the Frobenius versions require either the computation of the zeros of the Laurent polynomial a(z) - λ and the selection of those zeros having modulus less than 1, or the computation of the coefficients of the polynomial factor collecting those zeros. In principle, the latter approach is less prone to numerical instabilities and avoids the theoretical difficulties encountered when there are multiple or clustered zeros. This fact is confirmed by numerical tests and analysis.

Our procedure uses Newton's iteration as an effective tool for refining a given approximation to an eigenvalue. In order to numerically compute all the eigenvalues, we have combined Newton's iteration with a heuristic strategy based on choosing as starting approximations the eigenvalues of the n×n matrix A_n given by the leading principal submatrix of A of sufficiently large size. In fact, we may show that for any ε > 0, the ε-pseudospectrum of A_n gets closer to any isolated eigenvalue of A as n gets large.

One could argue that a large value of n would provide an approximation of the isolated eigenvalues of A directly. Nevertheless, our approach requires only a rough approximation of the isolated eigenvalues, and thus a smaller value of n, followed by Newton's iteration, to compute the eigenvalues with the same accuracy. Numerical experiments show the effectiveness of this approach: examples are shown where obtaining full-precision approximations of the eigenvalues of A from the eigenvalues of A_n would require large values of n (of the order of millions), while starting Newton's iteration with the eigenvalues of A_n for moderate values of n (of the order of a few hundreds) provides very accurate approximations in a few steps.
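This heuristic can be sketched with a toy example (hypothetical data, not from the paper): take a(z) = z^{-1} + z with the corner correction e_{1,1} = 2. One checks directly that v_j = 2^{-j} satisfies Av = (5/2)v, so λ = 5/2 is an isolated eigenvalue of A, and it already shows up accurately among the eigenvalues of a moderate truncation A_n:

```python
import numpy as np

def truncation_eigenvalues(a_coeffs, offsets, E, n):
    """Eigenvalues of the n x n leading principal submatrix A_n of
    A = T(a) + E, used as rough starting points for Newton's iteration."""
    A = np.zeros((n, n))
    for a_k, k in zip(a_coeffs, offsets):
        A += a_k * np.eye(n, k=k)
    r, c = E.shape
    A[:r, :c] += E
    return np.linalg.eigvals(A)

# a(z) = z^{-1} + z with the hypothetical correction e_{1,1} = 2.
mu = truncation_eigenvalues([1.0, 1.0], [-1, 1], np.array([[2.0]]), 200)
# The spectrum of T(a) fills [-2, 2]; the isolated eigenvalue 5/2 pops out.
print(abs(max(mu.real) - 2.5) < 1e-8)   # True
```

Here the truncation converges very fast because the eigenvector decays like 2^{-j}; the point of the paper's strategy is that even when A_n converges slowly, its eigenvalues are good enough starting points for Newton's iteration.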

1.2 Paper organization

The paper is organized as follows. In Section 2 we recall some preliminary properties that are useful in the subsequent analysis. In particular, Section 2.1 deals with the eigenvalues of T(a) while Section 2.2 deals with the eigenvalues of A = T(a) + E. Section 3 concerns the reduction of the original eigenvalue problem for QT operators to the form of a nonlinear eigenvalue problem for finite matrices in the Frobenius and in the Vandermonde versions. Section 4 concerns further algorithmic issues. In particular, an efficient method for computing the winding number of a Laurent polynomial is designed based on the Graeffe iteration; the problem of computing a factor of the polynomial a(z) - λ together with its derivative with respect to λ is analyzed, relying on the Barnett factorization and on the solution of a linear system associated with a resultant matrix; moreover, in the same section we prove the regularity of the function to which Newton's iteration is applied. In Section 5 we investigate the relationships between the isolated eigenvalues of A and the eigenvalues of A_n when n gets large. The results of some numerical experiments are reported in Section 6. Finally, Section 7 draws the conclusions and describes some open problems.

The algorithms, implemented in Matlab, have been added to the CQT-Toolbox of [9]. The main functions are eig_single and eig_all. The former computes a single eigenvalue of a QT matrix starting from a given approximation and, optionally, an arbitrary number of components of the corresponding eigenvector; the latter computes all the eigenvalues. Other related functions complete the package. More information, together with the description of other auxiliary functions and optional parameters, can be found at https://numpi.github.io/cqt-toolbox, while the software can be downloaded at https://github.com/numpi/cqt-toolbox.

2 Preliminaries

Let a(z) = ∑_{k=-m}^n a_k z^k be a Laurent polynomial, where a_k ∈ ℂ for -m ≤ k ≤ n, and a_{-m}, a_n ≠ 0. Define T(a) = (a_{j-i})_{i,j≥1}, the Toeplitz matrix associated with a(z). Given a semi-infinite matrix E = (e_{i,j})_{i,j≥1} with a finite number of nonzero entries, the matrix A = T(a) + E represents a bounded linear operator from ℓ^2 to itself. Denote by B(ℓ^2) the set of bounded linear operators from ℓ^2 to itself and by 𝕋 = {z ∈ ℂ : |z| = 1} the unit circle in the complex plane.

Recall that A ∈ B(ℓ^2) is invertible if there exists B ∈ B(ℓ^2) such that AB = BA = I, where I is the identity on ℓ^2. Moreover, A is Fredholm if there exists B ∈ B(ℓ^2) such that AB - I and BA - I are compact, i.e., A is invertible modulo compact operators. Recall also that for A ∈ B(ℓ^2) the spectrum of A is defined as

sp(A) = {λ ∈ ℂ : A - λI is not invertible},

while the essential spectrum is defined as

sp_ess(A) = {λ ∈ ℂ : A - λI is not Fredholm},

so that sp_ess(A) ⊆ sp(A).

It is well known that for a Laurent polynomial a(z), T(a) - λI is invertible in ℓ^2 if and only if a(z) - λ ≠ 0 for z ∈ 𝕋 and wind(a - λ) = 0 (see [13, Corollary 1.11]), where wind(a - λ) is the winding number of the curve a(𝕋) around λ.

In the case where a(z) is a Laurent polynomial, we may write

wind(a - λ) = (1/(2πi)) ∮_𝕋 a'(z)/(a(z) - λ) dz,   (1)

where a'(z) is the first derivative of a(z). Notice that wind(a - λ) is constant for λ in each connected component of the set ℂ∖a(𝕋). Consequently, we have (see [13, Corollary 1.12])

sp(T(a)) = a(𝕋) ∪ {λ ∈ ℂ∖a(𝕋) : wind(a - λ) ≠ 0};   (2)

moreover,

sp_ess(T(a)) = a(𝕋).   (3)

We say that (λ, v) is an eigenpair (eigenvalue, eigenvector) of A if Av = λv, v ∈ ℓ^2, and v ≠ 0.

2.1 Eigenvalues of T(a)

The following results from [13] characterize the eigenpairs of the Toeplitz operator T(a). In this statement and throughout the paper, we use a slightly different notation with respect to [13]. Namely, we denote the entries of T(a) as (T(a))_{i,j} = a_{j-i}, while in the classical literature they are denoted as (T(a))_{i,j} = a_{i-j}. The reason is that this notation is more suited to the context of Markov chains and queueing models, where these matrices play an important role.

Lemma 2.1.

[13, Proposition 1.20] Let a(z) - λ ≠ 0 for z ∈ 𝕋. For a Laurent polynomial a(z), the point λ is an eigenvalue of T(a) as an operator on ℓ^2 if and only if wind(a - λ) > 0. Moreover, the kernel of T(a) - λI has dimension wind(a - λ), and if v ∈ ker(T(a) - λI) then v is exponentially decaying.

If a(z) - λ has zeros on 𝕋, a similar result can be given. Let z_1, …, z_r be the distinct zeros of a(z) - λ of modulus 1, having multiplicity m_1, …, m_r, respectively. Define

b(z) = (a(z) - λ) / ∏_{i=1}^r (z - z_i)^{m_i},   (4)

so that b(z) is a Laurent polynomial having no zero on 𝕋. Then we have the following.

Lemma 2.2.

[13, Proposition 1.22] Let λ be such that a(z) - λ has zeros on 𝕋. The point λ is an eigenvalue of T(a) as an operator on ℓ^2 if and only if wind(b) > 0, where b(z) is given in (4). Moreover, the kernel of T(a) - λI has dimension wind(b), and if v ∈ ker(T(a) - λI) then v is exponentially decaying.

Observe that, according to the above lemmas, the eigenvalues of T(a) belong to the spectrum sp(T(a)), which, in turn, can be explicitly described by means of (2).

Let Ω be a connected component of the set ℂ∖a(𝕋). The function wind(a - λ) is constant on Ω, and this means that if the winding number w on Ω satisfies w ≥ 1 then all the values λ ∈ Ω are eigenvalues of T(a) of (geometric) multiplicity w, while if w ≤ 0 then no λ ∈ Ω is an eigenvalue of T(a). We recall Proposition 1.25 from [13].

Figure 2: Connected components of ℂ∖a(𝕋) for a Laurent polynomial a(z). In green and orange the components with winding numbers 1 and 2, respectively. In white the components with winding number 0.
Lemma 2.3.

If is in the boundary of , and for , then , where is defined in (4).

From the above results it follows (compare with Corollary 1.26 in [13]) that if λ lies on the boundary of a connected component Ω such that wind(a - μ) ≤ 0 for μ ∈ Ω, then λ cannot be an eigenvalue of T(a). That is, the eigenvalues of T(a) belong necessarily to those components for which the winding number is at least 1, and to their boundaries. Therefore T(a) cannot have isolated eigenvalues.

2.2 Eigenvalues of A

From the definition of spectrum and of essential spectrum it follows that

sp_ess(A) = sp_ess(T(a))

for any compact operator E. In fact, since E is compact, A - λI = (T(a) - λI) + E is Fredholm if and only if T(a) - λI is Fredholm, because invertibility modulo compact operators is insensitive to compact perturbations.

Another interesting property is given by the following.

Proposition 2.4.

If A = T(a) + E is a QT matrix, then sp(T(a)) ⊆ sp(A).

The above result is an immediate consequence of the following lemma.

Lemma 2.5.

If the QT matrix A = T(a) + E is invertible on ℓ^2, then T(a) is also invertible on ℓ^2.

Proof.

Since is invertible, then . This implies so that for all . To show is invertible, it is sufficient to show that the winding number of is 0, that is, . To this end, suppose wind and , then is a Fredholm operator and it follows from [33, Theorem 2.8] and [13, Theorem 1.9] that the index of is . On the other hand, since is invertible, it follows that where is the dimension of the kernel of and is the dimension of the cokernel of . It follows from [13, page 9] that the index of A is , from which we get a contradiction. Hence, we must have wind. ∎

Observe that, in general, does not imply , as the following example shows. Denote by the tridiagonal Toeplitz matrix associated with the Laurent polynomial . Let , so that , . Set , where , so that . Then since is not invertible (but it is Fredholm), while since is invertible being and invertible operators. That is, adding a compact correction to there may be eigenvalues of not belonging to .

The following two results are useful for our analysis.

Proposition 2.6.

Let λ ∈ ℂ∖a(𝕋) and w = wind(a - λ). Then the Laurent polynomial a(z) - λ has k = m + w zeros of modulus less than 1.

Proof.

Since λ ∉ a(𝕋), from (1) and the argument principle we get wind(a - λ) = k - m, because a(z) - λ has k zeros in the open unit disk and a pole of order m at z = 0. It follows that k = m + w. ∎

A similar result holds for λ ∈ a(𝕋).

Proposition 2.7.

Let and suppose that has zeros of modulus 1 with multiplicities , let be the Laurent polynomial in (4). Then, has zeros of modulus less than 1, where .

3 Computational analysis

In this section, we aim at the design and analysis of numerical algorithms for computing the eigenvalues of the finitely representable QT matrix A = T(a) + E belonging to a given connected component of ℂ∖a(𝕋), together with the corresponding eigenvectors. For the sake of simplicity, the case λ ∈ a(𝕋) is not treated in this paper.

If E = 0, then the spectrum and the essential spectrum of A = T(a) are explicitly known (see (2) and (3)). Moreover, an eigenvalue λ ∉ a(𝕋), together with its multiplicity, can be explicitly characterized in terms of the winding number wind(a - λ) (see Lemma 2.1). Therefore the case of interest is E ≠ 0.

Recall the following notations: a(z) = ∑_{k=-m}^n a_k z^k is the Laurent polynomial associated with T(a), while the correction E has a finite number of nonzero rows and columns. We denote by k the number of zeros of modulus less than 1 of the Laurent polynomial a(z) - λ. In view of Proposition 2.6 we have k = m + wind(a - λ); moreover, k is constant for λ in each connected component of ℂ∖a(𝕋). Finally, for a given matrix M, the leading principal submatrix of M of a given size is the submatrix formed by the entries in its first rows and in its first columns.

3.1 Reduction to a nonlinear eigenvalue problem

Consider an eigenpair (λ, v) of A, so that Av = λv. Observe that, for the indices j beyond the rows affected by the correction E, the condition (Av)_j = λv_j can be written as the linear difference equation

∑_{i=-m}^n a_i v_{j+i} = λ v_j,   (5)

whose characteristic polynomial is z^m(a(z) - λ). The dimension of the space of solutions of (5) that belong to ℓ^2 depends on λ and coincides with the number k of roots of z^m(a(z) - λ) with modulus less than 1. Our two approaches differ in the way the basis of the latter space is chosen.

If u_1(λ), …, u_k(λ) ∈ ℓ^2 is a basis of the space of solutions, then we may write the eigenvector v as a linear combination of the u_i(λ), i.e., v = ∑_{i=1}^k β_i u_i(λ). Therefore, we may say that (λ, v) is an eigenpair for A if and only if v ≠ 0 and the remaining conditions (Av)_j = λv_j, for the rows affected by the correction, are satisfied.

The latter conditions form a nonlinear system whose unknowns are λ and the coefficients β = (β_i), and which can be written as

WU(λ)β = 0.   (6)

In fact, λ and the components of β, normalized such that ||β|| = 1, form the set of unknowns. It is clear that the system (6) is in the form of a nonlinear eigenvalue problem (NEP).

This system has a non-trivial solution for a given λ if and only if λ is an eigenvalue of A corresponding to the eigenvector v = U(λ)β. Notice that when the matrix WU(λ) has more columns than rows, this system always has a nontrivial solution, since ker(WU(λ)) ≠ {0}, and the multiplicity of λ is given by the dimension of this kernel.

If , equation (6) provides a balanced nonlinear eigenvalue problem that we are going to analyze.

If and if the pair solves (6), then it solves also the balanced nonlinear eigenvalue problem

(7)

formed by the first equations of (6). Thus, we may look for solutions of (7), and, if any, we may check if these are also solutions of (6).

We may express the NEP (6) in a more convenient form by using the Toeplitz structure of . This is the subject of the next section.

3.2 A different formulation

Let S be the shift matrix defined by (S)_{i,i+1} = 1 and (S)_{i,j} = 0 elsewhere, so that (Sv)_j = v_{j+1}. Then for any solution v of the linear difference equation (5), the shifted vector S^h v is still a solution for any h ≥ 1. Moreover, if v ∈ ℓ^2 then also Sv ∈ ℓ^2, and if v and w are linearly independent solutions, then also Sv and Sw are linearly independent. To show the latter implication, assume that there exists a nontrivial linear combination u = αv + βw such that Su = 0, i.e., u_j = 0 for j ≥ 2. Since a_{-m}, a_n ≠ 0, the recurrence (5) propagates these zero values to all components, so that u = 0, which is a contradiction.

Therefore, if the columns of a matrix U are a basis of the space of the solutions of (5) in ℓ^2, then also the columns of SU form a basis of the same space. This implies that the columns of SU are linear combinations of the columns of U. That is, there exists a nonsingular matrix R such that SU = UR, whence we have S^h U = U R^h.

If we multiply the rows from to of the Toeplitz matrix by we get

Observing that , we may rewrite the identity as

Since , we get

On the other hand, relying once again on the property , we find that

so that

(8)

The possibly nonzero rows of the matrix are the first rows, which form the matrix , i.e., . It is interesting to observe that the matrix is obtained by shifting the columns of to the right of places. This implies that the matrix takes one of the following forms

depending on whether or , respectively, where , and has size while has size . In other words, the submatrices and do not overlap. This fact allows us to rewrite (6) as a set of equations in unknowns in the more convenient form

(9)

Another observation is that multiplying equation (9) on the left by any invertible matrix provides an equivalent formulation of the NEP. In particular, one may consider the rank revealing QR factorization of the constant matrix in (9), assume that it has full rank, and retain the factor formed by its nonzero rows, so that (9) can be rewritten in a compact form.

Multiplying (9) on the left by the Hermitian transpose of the unitary factor yields

(10)

Observe that W is a constant matrix of full rank, while U(λ) is a matrix depending on λ. The eigenvalue problem for QT matrices is thus reduced to the NEP (10), which can take different forms according to the way a basis of the solutions of the difference equation (5) is chosen.

We may conclude with the following result.

Theorem 3.1.

Let Ω be a connected component of the set ℂ∖a(𝕋), and let λ ∈ Ω. Let U(λ) be a matrix whose columns form a basis of the space of solutions of the difference equation (5) belonging to ℓ^2. If the system (10) is underdetermined, then all λ ∈ Ω are eigenvalues of A. Otherwise, λ is an eigenvalue of A corresponding to the eigenvector v iff there exists β ≠ 0 which solves the nonlinear eigenvalue problem (10). In this case, v = U(λ)β.

3.3 Choosing a basis: Vandermonde and Frobenius versions

Let the zeros ξ_1, …, ξ_k of z^m(a(z) - λ) of modulus less than 1 be simple, and let V(λ) be the Vandermonde-type matrix with entries (V(λ))_{i,j} = ξ_j^i. The columns of V(λ) provide a basis of the set of solutions of the difference equation (5) that belong to ℓ^2, so that v is an eigenvector of A corresponding to λ if and only if there exists β ≠ 0 such that v = V(λ)β and (6) is satisfied. The same argument can be applied in the case of confluent zeros by considering a generalized Vandermonde matrix.

The formulation (10), where U(λ) is the (generalized) Vandermonde matrix associated with the roots of modulus less than 1 of z^m(a(z) - λ), is referred to as the Vandermonde version of the problem. It is well known that the zeros of a polynomial are severely ill-conditioned if they are clustered. This may make the choice of the basis given by the columns of the Vandermonde matrix unsuited in some problems. A way to overcome this issue is to consider the Frobenius version of the NEP, obtained in the following way.
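As a sketch of the Vandermonde version (Python in place of the paper's Matlab; helper names are hypothetical): compute the roots of z^m(a(z) - λ) inside the unit disk and check that the corresponding Vandermonde-type columns solve the difference equation (5). We use the toy symbol a(z) = z^{-1} + z at λ = 5/2, whose only root inside the disk is ξ = 1/2:

```python
import numpy as np

def inner_roots(a_coeffs, offsets, lam):
    """Roots of modulus < 1 of z^m (a(z) - lam), with m = -min(offsets)."""
    m = -min(offsets)
    p = np.zeros(m + max(offsets) + 1, dtype=complex)  # p[t] = coeff of z^t
    for a_k, k in zip(a_coeffs, offsets):
        p[k + m] += a_k
    p[m] -= lam
    roots = np.roots(p[::-1])        # np.roots expects highest degree first
    return roots[np.abs(roots) < 1]

xi = inner_roots([1.0, 1.0], [-1, 1], 2.5)        # -> [0.5]
N = 8
V = np.array([[x**i for x in xi] for i in range(1, N + 1)])
# Each column of V solves v_{j-1} + v_{j+1} = lam * v_j and decays:
res = V[:-2, :] + V[2:, :] - 2.5 * V[1:-1, :]
print(np.allclose(xi, [0.5]), np.max(np.abs(res)) < 1e-12)
```

The residual check confirms that the Vandermonde columns are genuine ℓ^2 solutions of (5); only the boundary conditions encoded in W remain to be imposed.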

For the sake of notational simplicity, in the following we write V in place of V(λ). For simple roots, the Vandermonde matrix admits a factorization in which the powers of the companion (Frobenius) matrix associated with the monic polynomial having roots ξ_1, …, ξ_k appear as block rows; see for instance [5]. For multiple roots, a similar construction can be made with the generalized Vandermonde matrix and a block diagonal matrix whose diagonal blocks are Jordan blocks associated with the distinct roots of modulus smaller than 1.

Denote so that the columns of provide a different basis of the set of solutions of the linear difference equation (5). The NEP (10) can be equivalently rewritten as

(11)

We refer to (11) as the Frobenius version of the problem. Observe that, in the Frobenius form, it is not relevant whether the roots of z^m(a(z) - λ) are multiple or numerically clustered; in fact, the companion matrix exists and can be computed independently of the location of the roots.

Notice that if , then the matrix can be partitioned into blocks as and can be rewritten in terms of a matrix power series as . The following result provides information in this regard [5, Chapter 3].

Theorem 3.2.

Assume that , where , has roots , such that and denote . Define for where we assume if or . Let be the Frobenius matrix associated with the factor . Then is the unique solution of the matrix equation

(12)

having minimum spectral radius , moreover, .

Notice that the blocks defined in the above theorem are obtained by partitioning the Toeplitz matrix into blocks which are themselves Toeplitz. Moreover, since T(a) is a banded matrix, only finitely many of these blocks are nonzero. In the literature, there are several effective algorithms for the numerical computation of the minimal solution, based on fixed point iterations or on doubling techniques. We refer the reader to [2], [5], [14], and [15] for more details.
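The precise form of equation (12) did not survive extraction here; as an illustration of the fixed point iterations mentioned above, consider the quadratic case A_{-1} + A_0 X + A_1 X^2 = X that arises for block tridiagonal Toeplitz parts in QBD processes (all data below are hypothetical). The natural iteration X ← A_{-1} + A_0 X + A_1 X^2, started from X = 0, converges monotonically to the minimal nonnegative solution:

```python
import numpy as np

# Hypothetical QBD-style blocks: A_{-1} + A_0 + A_1 is row-stochastic,
# with negative mean drift, so the minimal solution G is stochastic.
A_m1 = np.array([[0.4, 0.1], [0.2, 0.2]])
A_0  = np.array([[0.1, 0.1], [0.1, 0.2]])
A_1  = np.array([[0.2, 0.1], [0.1, 0.2]])

X = np.zeros((2, 2))
for _ in range(300):                 # linear convergence; rate < 1 here
    X = A_m1 + A_0 @ X + A_1 @ (X @ X)

residual = np.max(np.abs(A_m1 + A_0 @ X + A_1 @ (X @ X) - X))
print(residual < 1e-12, np.allclose(X.sum(axis=1), 1.0))   # True True
```

Doubling techniques such as cyclic reduction reach the same limit in O(log) iterations; the sketch above only shows the simplest convergent scheme.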

4 The numerical algorithms

In this section we describe our algorithms to refine a given approximation of an eigenvalue of A, while in Section 5 we will discuss how to get the initial approximation. The algorithms require: a function φ such that the fixed point iteration λ_{ν+1} = φ(λ_ν) converges locally to the eigenvalue λ, solution of the NEP (10), and a choice of the basis of the solutions of (5) belonging to ℓ^2.

The general scheme is reported in the Template Algorithm 1. This algorithm, for an initial approximation of the eigenvalue, provides either a more accurate approximation to the corresponding eigenpair, or a message describing one of the following cases: 1) all the elements in the current connected component are eigenvalues; 2) the generated sequence exited from the component; 3) it holds p < q and the approximated solution solves the first p equations but not the full NEP (10); 4) convergence did not occur within the maximum number of allowed iterations.

0:  the coefficients of and the correction matrix ; an error bound ; an initial approximation ; an upper bound maxit to the number of iterations; a function defining a fixed point iteration to solve the NEP (10); a rule to generate .
0:  An approximation to an eigenvalue , the vector providing an approximation to the corresponding eigenvector according to (10), together with a message.
1:  Construct the matrix of (10) together with the scalar ; compute , and . Set .
2:  while maxit do
3:      Compute , .
4:     if then output ‘out of and stop; otherwise set .
5:     if then output ‘continuous set of eigenvalues’ and stop.
6:     if then perform one step of the fixed point iteration ; set , compute and according to (10), compute the residual error ; if , output ‘isolated eigenvalue (p=q)’ together with , and exit, otherwise continue from step 3;
7:     if then perform one step of the fixed point iteration applied to (10) restricted to the first components. Check if the residual error in the first components is less than . If not, continue from step 3, otherwise check if rank is less than . If so, output ‘isolated eigenvalue (p<q)’ together with and , and exit; otherwise output ‘non converging sequence (p<q)’ and exit;
8:     Stop if the maximum number of iterations maxit has been reached; in this case, output ‘Maximum number of iterations exceeded’.
9:  end while
Algorithm 1 Template Algorithm

Now, we deal with algorithmic issues encountered in the design of the fixed point iterations to solve the nonlinear eigenvalue problem (10). This analysis is needed to design algorithms to implement the function φ used in the Template Algorithm 1. Without loss of generality, we assume that the nonlinear eigenvalue problem is balanced. This case is encountered if (10) is already balanced, or otherwise by considering the subset of the first equations in (10).

We essentially analyze Newton's iteration applied to the determinantal version of the problem, that is, to the scalar equation det(WU(λ)) = 0, in the Vandermonde and in the Frobenius forms. Before doing that, we discuss how to compute the winding number of a(z) - λ, since this is a fundamental step in the design of the overall algorithm.
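As a minimal sketch of the determinantal Newton approach (toy symbol, hypothetical setup; in general f involves the full matrix WU(λ)): for a(z) = z^{-1} + z with correction e_{1,1} = 2 we have k = 1, the boundary row reduces to the scalar function f(λ) = 2ξ(λ) - 1, with ξ(λ) the root of z^2 - λz + 1 inside the unit disk, and Newton's iteration (here with a central-difference derivative in place of the Jacobi identity) converges to the isolated eigenvalue λ = 5/2:

```python
import numpy as np

def f(lam):
    """det(W U(lam)) for the toy problem: with v_j = xi^j, the boundary
    row 2*v_1 + v_2 = lam*v_1 reduces (using xi^2 - lam*xi = -1) to
    f(lam) = 2*xi(lam) - 1."""
    roots = np.roots([1.0, -lam, 1.0])          # z^2 - lam*z + 1
    return 2.0 * roots[np.abs(roots) < 1][0] - 1.0

lam, h = 2.2, 1e-7                              # start from a rough guess
for _ in range(30):
    d = (f(lam + h) - f(lam - h)) / (2.0 * h)   # approximate f'(lam)
    lam = lam - f(lam) / d

print(abs(lam - 2.5) < 1e-10)   # True
```

In the paper's setting the derivative is obtained analytically through the Jacobi identity rather than by finite differences; the sketch only illustrates the scalar Newton outer loop.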

4.1 Computing the winding number

The winding number of the Laurent polynomial a(z) - λ can be computed in different ways. The most elementary one is to express wind(a - λ) as k - m, where k is the number of zeros of z^m(a(z) - λ) of modulus less than 1. Any root-finding algorithm applied to the polynomial z^m(a(z) - λ) can be used for this purpose; for instance, the command roots of Matlab provides approximations to all the roots, and we may count how many roots have modulus less than 1. This approach has the drawback that polynomial roots are ill-conditioned when clustered, so that we may encounter instability if there are clusters of roots of modulus close to 1.
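A sketch of this elementary approach (with Python's numpy.roots in place of Matlab's roots; helper names hypothetical):

```python
import numpy as np

def wind_via_roots(a_coeffs, offsets, lam):
    """wind(a - lam) = k - m: count the roots of z^m (a(z) - lam) of
    modulus < 1 and subtract the order m of the pole at z = 0."""
    m = max(0, -min(offsets))
    p = np.zeros(m + max(offsets) + 1, dtype=complex)  # p[t] = coeff of z^t
    for a_k, k in zip(a_coeffs, offsets):
        p[k + m] += a_k
    p[m] -= lam
    k = np.count_nonzero(np.abs(np.roots(p[::-1])) < 1)
    return k - m

# a(z) = z: wind(a - 0) = 1.  a(z) = z^{-1} + z: wind(a - 2.5) = 0.
print(wind_via_roots([1.0], [1], 0.0), wind_via_roots([1.0, 1.0], [-1, 1], 2.5))
```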

A second approach is based on equation (1), which expresses wind(a - λ) as the integral of a ratio of two functions. The integral can be approximated by the trapezoidal rule at the Fourier points, where the values of the two functions can be computed by means of two FFTs. In this case, the presence of roots of the polynomial close to the unit circle may lead to a large number of Fourier points, with a consequent slowdown in CPU time.
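In the same Fourier-point spirit, a simplified variant (not the paper's implementation) accumulates the argument of the symbol along the unit circle instead of forming the integrals:

```python
import numpy as np

def wind_fourier(a_coeffs, offsets, lam, N=1024):
    """Winding number of a(z) - lam from the phase increments of the
    curve a(e^{2*pi*i*t/N}) - lam sampled at the N-th roots of unity."""
    z = np.exp(2j * np.pi * np.arange(N) / N)
    w = sum(a_k * z**k for a_k, k in zip(a_coeffs, offsets)) - lam
    d = np.diff(np.angle(np.append(w, w[0])))
    d = (d + np.pi) % (2.0 * np.pi) - np.pi     # wrap increments to [-pi, pi)
    return int(round(d.sum() / (2.0 * np.pi)))

# a(z) = z: wind = 1 around 0.  a(z) = z^{-1} + z: wind = 0 at lam = 2.5.
print(wind_fourier([1.0], [1], 0.0), wind_fourier([1.0, 1.0], [-1, 1], 2.5))
```

The same sensitivity applies here: roots of a(z) - λ near the unit circle force a large N before the sampled phase increments resolve the winding correctly.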

A third approach, which is the one we have implemented, relies on Graeffe's iteration [29] and is based on the following observations. Given a polynomial p(z) of degree N, the polynomial p(z)p(-z) is formed by monomials of even degree only, i.e., there exists a polynomial p_1(z) of degree N such that p_1(z^2) = p(z)p(-z). Therefore, the roots of p_1(z) are the squares of the roots of p(z). Consider the sequence p_{h+1}(z^2) = p_h(z)p_h(-z) defined by the Graeffe iteration with initial value p_0(z) = z^m(a(z) - λ). The number of roots of p_h(z) of modulus less than 1 is constant with respect to h. Moreover, if p_0(z) has k zeros of modulus less than 1 and N - k zeros of modulus greater than 1, then, as h → ∞, the coefficient of z^k in the suitably normalized p_h(z) becomes dominant. This means that there exists an index h such that the coefficient of z^k in p_h(z) has modulus greater than the sum of the moduli of all the other coefficients of p_h(z). In view of Rouché's theorem, the latter inequality is a sufficient condition to ensure that p_h(z) has k roots of modulus less than 1.

Indeed, if there are zeros of modulus 1 then this procedure might not terminate. Therefore, if the number of Graeffe iterations exceeds a given upper bound, then the explicit computation of the polynomial roots is performed. These arguments support Algorithm 2 for counting the number of roots of a polynomial of modulus less than 1.
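A sketch of the resulting procedure (a simplified version of Algorithm 2, with hypothetical names; coefficients are ordered lowest degree first):

```python
import numpy as np

def count_inner_roots_graeffe(p, maxit=30):
    """Count the roots of modulus < 1 of the polynomial with coefficient
    vector p (lowest degree first) via Graeffe root-squaring: iterate
    p(z)p(-z) = p1(z^2) and stop once one coefficient dominates the sum
    of the moduli of the others (a Rouche-type certificate)."""
    p = np.asarray(p, dtype=float)
    for _ in range(maxit):
        q = np.convolve(p, p * (-1.0) ** np.arange(len(p)))  # p(z) * p(-z)
        p = q[::2]                    # even-degree coefficients give p1
        p = p / np.max(np.abs(p))     # rescale to avoid overflow
        k = int(np.argmax(np.abs(p)))
        if 2.0 * np.abs(p[k]) > np.sum(np.abs(p)):
            return k                  # certified: k roots inside the disk
    return None                       # likely roots of modulus close to 1

# z^2 - 2.5 z + 1 = (z - 1/2)(z - 2): exactly one root inside the disk.
print(count_inner_roots_graeffe([1.0, -2.5, 1.0]))   # 1
```

Returning None after maxit iterations corresponds to the fallback in the text: when zeros of modulus close to 1 prevent the certificate from triggering, the roots are computed explicitly instead.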

0:  The coefficients of a polynomial of degree ; an upper bound maxit to the number of iterations.
0:  Either the number of zeros of of modulus less than 1, or the message ‘failure’.
1:  Set