    Global properties of eigenvalues of parametric rank one perturbations for unstructured and structured matrices

General properties of the eigenvalues of A+τuv^* as functions of τ∈ℝ, of τ∈ℂ, or of τ=e^{iθ} on the unit circle are considered. In particular, the problem of existence of global analytic formulas for the eigenvalues is addressed. Furthermore, the limits of the eigenvalues as τ→∞ are discussed in detail. The following classes of matrices are considered: complex (without additional structure), real (without additional structure), complex H-selfadjoint and real J-Hamiltonian.

06/16/2020

1. Introduction

The eigenvalues of matrices of the form A+τuv^*, viewed as a rank one parametric perturbation of the matrix A, have been discussed in a vast literature. We mention the classical works of Lidskii, Vishik and Lyusternik, as well as the more general treatment of eigenvalues of perturbations of a matrix in the books by Kato and Baumgärtel. Recently, Moro, Burke and Overton returned to the results of Lidskii in a more detailed analysis, while Karow obtained a detailed analysis of the situation for small values of the parameter τ in terms of structured pseudospectra. Obviously, parametric perturbations appear in many different contexts. The works most closely related to the current one concern rank two perturbations by Kula, Wojtylak and Wysoczański, matrix pencils by De Terán, Dopico and Moro and by Mehl, Mehrmann and Wojtylak [28, 29], and matrix polynomials by De Terán and Dopico.

While the local behaviour of eigenvalues is fully understood, the global picture still has open ends, cf. e.g. the recent paper by C.K. Li and F. Zhang. The main problem here is that the eigenvalues can be defined neither analytically nor uniquely, even if we restrict the parameter to real numbers. As is well known, the problem does not occur in the case of Hermitian matrices: an analytic function of a real parameter with Hermitian matrix values has eigenvalues and eigenvectors which can be arranged so that they are analytic as functions of that parameter (Rellich's theorem). Other cases where the difficulty is detoured appear, e.g., in a paper by Gingold and Hsieh, where it is assumed that all eigenvalues are real, or in the series of papers of de Snoo (with different coauthors) [8, 9, 36, 37], where only one distinguished eigenvalue (the so-called eigenvalue of nonpositive type) is studied for all real values of the parameter.

Let us now review our current contribution. To understand the global properties with respect to the complex parameter we will consider parametric perturbations of two kinds: A+τuv^*, where τ∈ℂ, and A+te^{iθ}uv^*, where t>0 is fixed and θ∈[0,2π) varies. The former case was investigated already in our earlier paper; we review the basic notions in Section 2. However, we have not found the latter perturbations in the literature. We study them in Section 3, providing elementary results for further analysis.

Joining these two pictures together leads to new results on global behaviour of the eigenvalues in Section 4. Our main interest lies in generic behaviour of the eigenvalues, i.e., we address the question of what happens when a matrix A (possibly very untypical and strange) is fixed and two vectors u, v are chosen numerically (we intentionally do not use the word 'randomly' here). One of our main results (Theorem 11) shows that in this situation the eigenvalues of B(τ)=A+τuv^* can be defined globally as analytic functions for real τ. On the contrary, if one restricts to real vectors only, this is no longer possible (Theorem 13).

In Section 5 we study the second main problem of the paper: the limits of the eigenvalues for large values of the parameter. Although similar results can be found in the literature, we have decided to provide a full description, for all possible (not only generic) vectors u, v. This is motivated by our research in the following Section 6, where we apply these results to various classes of structured matrices. We also note there the classes for which a global analytic definition of the eigenvalues is not possible (see Theorem 24). In Section 7 we apply the general results to the class of matrices with nonnegative entries.

Although we focus on parametric rank one perturbations, we mention here that the influence of a possibly non-parametric rank one perturbation on the invariant factors of a matrix has a rich history as well; see, e.g., the papers by Thompson and M. Krupnik. Together with the works by Hörmander and Melin, Dopico and Moro, Savchenko [34, 35] and Mehl, Mehrmann, Ran and Rodman [23, 24, 25], they constitute a linear algebra basis for our research, developed in our previous paper. What we add to these methods is some portion of complex analysis, by using the function Q(λ)=v^*(λI_n−A)^{−1}u and its holomorphic properties. This idea came to us through multiple contacts and collaborations with Henk de Snoo (cf. in particular the line of papers [14, 36, 37]), for which we express our gratitude here.

2. Preliminaries

If A is a complex matrix (in particular, a vector) then by ¯A we denote the entrywise complex conjugate of A; further we set A^* = ¯A^⊤. We will deal with rank one perturbations

 B(τ) = A + τuv^*,

with u,v∈ℂ^n∖{0}. The parameter τ is a complex variable; we will often write it as τ=te^{iθ} and fix either one of t>0 and θ∈[0,2π). We review now some necessary background and fix the notation.

Let a matrix A∈ℂ^{n×n} be given. We say that a property (of a triple (A,u,v)) holds for generic vectors u,v∈ℂ^n if there exists a finite set of nonzero complex polynomials in 2n variables which vanish on all pairs (u,v) not enjoying the property. Note that the polynomials might depend on the matrix A. In some places below a certain property will hold for generic (u,¯v). This happens as in the current paper we consider the perturbations A+τuv^*, while in our earlier work A+τuv^⊤ was used (even for complex vectors u,v). In any case, i.e., for generic (u,v) or for generic (u,¯v), the closure of the set of 'wrong' vectors has an empty interior.

By m_A(λ) we denote the minimal polynomial of A. Define

 (1) p_{uv}(λ) = m_A(λ) v^*(λI_n−A)^{−1}u

and observe that it is a polynomial, due to the formula for the inverse of a Jordan block. Let λ_1,…,λ_r be the (mutually different) eigenvalues of A and, corresponding to the eigenvalue λ_j, let n_{j,1}≥n_{j,2}≥⋯ be the sizes of the Jordan blocks of A. We shall denote the degree of the minimal polynomial m_A by l, so

 l = ∑_{j=1}^{r} n_{j,1}.

Then

 (2) deg p_{uv}(λ) ≤ l−1

and equality holds for generic vectors u,v.

It can also be easily checked that the characteristic polynomial of B(τ) satisfies

 (3) det(λI_n−B(τ)) = det(λI_n−A)⋅(1−τv^*(λI_n−A)^{−1}u) = (det(λI_n−A)/m_A(λ))⋅(m_A(λ)−τp_{uv}(λ)).

Therefore, the eigenvalues of B(τ) which are not eigenvalues of A are roots of the polynomial

 (4) p_{B(τ)}(λ) = m_A(λ) − τp_{uv}(λ).

Note that some eigenvalues of A may be roots of this polynomial as well. Saying this differently, we have the following inclusions of spectra of matrices

 (5) σ(B(τ))∖σ(A) ⊆ p_{B(τ)}^{−1}(0) ⊆ σ(B(τ)), τ∈ℂ,

but each of these inclusions may be strict. Further, let us call an eigenvalue λ_0 of A frozen by the pair (u,v) if it is an eigenvalue of B(τ) for every complex τ. Directly from (3) we see that each frozen eigenvalue is either a zero of det(λI_n−A)/m_A(λ), in which case we call it structurally frozen, or a common zero of m_A and p_{uv}, in which case we call it accidentally frozen. Note that, due to a rank argument, λ_0 is structurally frozen if and only if it has more than one Jordan block in the Jordan canonical form of A. Although being structurally frozen obviously does not depend on τ, the Jordan form of B(τ) at these eigenvalues may vary for different τ, which was a topic of many papers, see, e.g., [15, 34, 32].
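The determinant identity behind these observations, the first equality in (3), is the matrix determinant lemma and is easy to confirm numerically. A minimal sketch on randomly generated data (our own test data, not from the paper):

```python
import numpy as np

# Check the first equality in (3), i.e. the matrix determinant lemma:
#   det(lam*I - B(tau)) = det(lam*I - A) * (1 - tau * v^*(lam*I - A)^{-1} u),
# at an arbitrary point lam, for an arbitrary tau.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
tau, lam = 0.7 - 0.3j, 1.2 + 0.5j

B = A + tau * np.outer(u, v.conj())   # B(tau) = A + tau * u v^*
I = np.eye(n)
lhs = np.linalg.det(lam * I - B)
rhs = np.linalg.det(lam * I - A) * (1 - tau * (v.conj() @ np.linalg.solve(lam * I - A, u)))
assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```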

In contrast, generically m_A and p_{uv} do not have a common zero, i.e., a slight change of u,v leads to defrosting of an accidentally frozen eigenvalue (which explains the name accidentally). In spite of this, we still need to tackle such eigenvalues in the course of the paper. The main technical problem is shown by the following, almost trivial, example.

Example 1.

Let B(τ)=A_0⊕[τ], where A_0 has a single eigenvalue at 1 with a possibly nontrivial Jordan structure (here A=A_0⊕[0] and u=v=e_n), and let τ∈ℂ. The eigenvalues of B(τ) are clearly 1 and τ, and the eigenvalue 1 is accidentally frozen. Observe that if we define λ(τ)=τ, then for τ=1 there is a sudden change in the Jordan structure of B(τ) at 1.

To handle the evolution of the eigenvalues of B(τ) without getting into the trouble indicated above, we introduce the rational function

 (6) Q(λ) := v^*(λI_n−A)^{−1}u = p_{uv}(λ)/m_A(λ).

It will play a central role in the analysis. Note that Q is a rational function with poles in the set of eigenvalues of A, but not each eigenvalue of A is necessarily a pole of Q. More precisely, if λ_0∈σ(A) is an accidentally frozen eigenvalue of A, then Q does not have a pole at λ_0 of the same order as the multiplicity of λ_0 as a root of m_A, i.e., in the quotient p_{uv}(λ)/m_A(λ) there is pole-zero cancellation.

Proposition 2.

Let A∈ℂ^{n×n}, let u,v∈ℂ^n∖{0}, let τ_0∈ℂ∖{0}, and assume that λ_0∈ℂ is not an eigenvalue of A. Then λ_0 is an eigenvalue of B(τ_0) of algebraic multiplicity κ if and only if

 (7) Q(λ_0)=1/τ_0, Q′(λ_0)=0,…,Q^{(κ−1)}(λ_0)=0, Q^{(κ)}(λ_0)≠0.

If this happens, then λ_0 has geometric multiplicity one as an eigenvalue of B(τ_0), i.e., B(τ_0) has a single Jordan chain, of size κ, at λ_0. Finally, λ_0 is not an eigenvalue of B(τ) for all τ≠τ_0.

Remark 3.

If κ=1, condition (7) should be read as Q(λ_0)=1/τ_0, Q′(λ_0)≠0. In this case the implicit function theorem tells us that the eigenvalues can be defined analytically in a neighbourhood of τ_0. If κ>1 then an analytic definition is not possible and the eigenvalues expand as Puiseux series, that is, they behave locally like the solutions of (λ−λ_0)^κ = c(τ−τ_0), see, e.g., [3, 17, 18].

Remark 4.

One may also be tempted to define the eigenvalues via solving the equation Q(λ)=1/τ at an accidentally frozen eigenvalue λ_0 of A at which Q does not have a pole. This would be, however, a dangerous procedure, as λ_0 might get involved in a larger Jordan chain. For example, let

 B(τ) = [[1, 1], [0, τ]]

with an accidentally frozen eigenvalue 1 and Q(λ)=1/λ. Here for τ=1 we get a Jordan block of size 2 at 1, but clearly the eigenvalues in a neighbourhood of τ=1 are 1 and τ, and thus do not behave as 1 plus the square roots of τ−1. For this reason we will avoid the accidentally frozen eigenvalues.

Remark 5.

Note that in case p_{uv} and m_A have no common zeroes, i.e., there are no accidentally frozen eigenvalues, Q′ can be expressed in terms of p_{uv} and m_A as follows:

 (8) Q′(λ) = (p′_{uv}(λ)m_A(λ) − p_{uv}(λ)m′_A(λ)) / m_A(λ)²,

where cancellation of roots between numerator and denominator occurs at an eigenvalue of A when there is a Jordan block of size bigger than one corresponding to that eigenvalue.

Proof of Proposition 2.

For the proof of the first statement we start from the definition of Q. Note that m_A(λ_0) is necessarily nonzero, and so p_{uv}(λ_0) is nonzero as well. If λ_0 is an eigenvalue of B(τ_0) which is not an eigenvalue of A, then, since m_A(λ_0)−τ_0 p_{uv}(λ_0)=0, we have from (6) that Q(λ_0)=1/τ_0, which proves the first equation in (7).

Furthermore, from the definition of Q we have that p_{uv}(λ)−Q(λ)m_A(λ) is identically zero. So, for any ν, also its ν-th derivative is zero. By the Leibniz rule this gives

 p_{uv}^{(ν)}(λ) − ∑_{j=0}^{ν} (ν choose j) Q^{(j)}(λ) m_A^{(ν−j)}(λ) = 0.

We rewrite this slightly as follows:

 (9) p_{uv}^{(ν)}(λ) − Q(λ)m_A^{(ν)}(λ) = ∑_{j=1}^{ν} (ν choose j) Q^{(j)}(λ) m_A^{(ν−j)}(λ).

Now, if λ_0 is an eigenvalue of B(τ_0) of algebraic multiplicity κ and not an eigenvalue of A, then for ν=0,…,κ−1 we have m_A^{(ν)}(λ_0)=τ_0 p_{uv}^{(ν)}(λ_0). Take ν=1 in (9), and set λ=λ_0:

 p′_{uv}(λ_0) − (1/τ_0)m′_A(λ_0) = Q′(λ_0)m_A(λ_0).

Since m_A(λ_0)≠0, it now follows that Q′(λ_0)=0 when κ>1, while Q′(λ_0)≠0 when κ=1. Now proceed by induction. Suppose we have already shown that Q^{(j)}(λ_0)=0 for j=1,…,k, where k≤κ−2. Then set ν=k+1 in (9) to obtain, using the induction hypothesis, that

 0 = p^{(k+1)}_{uv}(λ_0) − Q(λ_0)m^{(k+1)}_A(λ_0) = Q^{(k+1)}(λ_0)m_A(λ_0).

Once again using the fact that m_A(λ_0)≠0, we have that Q^{(k+1)}(λ_0)=0. Finally, take ν=κ in (9); using what we have shown so far in this paragraph, we have

 0 ≠ p^{(κ)}_{uv}(λ_0) − Q(λ_0)m^{(κ)}_A(λ_0) = Q^{(κ)}(λ_0)m_A(λ_0),

and so (7) holds.

Conversely, suppose (7) holds. Then by the definition (6) of Q we have m_A(λ_0)−τ_0 p_{uv}(λ_0)=0, so by (4) λ_0 is an eigenvalue of B(τ_0). Moreover, by (9) we have m_A^{(ν)}(λ_0)−τ_0 p_{uv}^{(ν)}(λ_0)=0 for ν=1,…,κ−1, while m_A^{(κ)}(λ_0)−τ_0 p_{uv}^{(κ)}(λ_0)≠0. Hence, λ_0 is an eigenvalue of B(τ_0) of algebraic multiplicity κ, completing the proof of the first statement.

For the proof of the second statement, note that as λ_0I_n−A is invertible, any rank one perturbation of λ_0I_n−A can have only a one-dimensional kernel. Therefore, λ_0 has geometric multiplicity one as an eigenvalue of B(τ_0). The last statement follows for τ=0 from the assumption that λ_0 is not an eigenvalue of A, and for τ∉{0,τ_0} directly from the first equation in (7), as then Q(λ_0)=1/τ_0≠1/τ. ∎

The statements in Proposition 2 can also be seen by viewing B(τ) as a realization of the (scalar) rational function Q. From that point of view the connection between poles of the function and eigenvalues of A, respectively zeroes of the function and eigenvalues of B(τ), is well known. For an in-depth analysis of this connection, even for matrix-valued rational functions, we refer to the literature on realizations of rational matrix functions. We provided above an elementary proof of the scalar case for the reader's convenience.

Note the following example, now more involved than the one in Remark 4.

Example 6.

In this example we return to the consideration of accidentally frozen eigenvalues. Let A, u and v be such that:

 m_A(λ) = (λ−1)²(λ−2) = λ³−4λ²+5λ−2,
 Q(λ) = 1/(λ−1),  Q′(λ) = −1/(λ−1)²,
 p_{uv}(λ) = (λ−1)(λ−2) = λ²−3λ+2.

Also, m_A(λ)−τp_{uv}(λ)=(λ−1)(λ−2)(λ−1−τ), which has roots 1, 2 and 1+τ. Note that both 1 and 2 are, by definition, accidentally frozen eigenvalues, although their character is rather different.

Let us consider Proposition 2 for this example. Note that Q′ has no zeroes, which tells us that there are no double eigenvalues of B(τ) which are not eigenvalues of A. However, note that the zeros of p′_{uv}m_A−p_{uv}m′_A and of m_A² are not disjoint. In particular,

 p′_{uv}(λ)m_A(λ) − p_{uv}(λ)m′_A(λ) = −(λ−1)²(λ−2)²,

which detects the double eigenvalue of A at 1 and a double semisimple eigenvalue of B(1) at 2; however, as can be seen from (8), the roots of this polynomial are cancelled by the roots of m_A(λ)².
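The matrices of Example 6 are not reproduced in this excerpt. One choice consistent with the displayed m_A, Q and p_{uv} (an assumption of ours, not the paper's original data) is A = J_2(1) ⊕ [2] with u = v = e_1, which can be checked numerically:

```python
import numpy as np

# A hypothetical realization of Example 6: A = Jordan block J_2(1) plus [2],
# u = v = e_1. Then Q(lambda) = 1/(lambda - 1), m_A = (lambda-1)^2 (lambda-2),
# and p_uv = (lambda-1)(lambda-2), as in the example.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 0.0, 0.0])

def Q(lam):
    return v @ np.linalg.solve(lam * np.eye(3) - A, u)

# Q agrees with 1/(lambda - 1) away from the spectrum:
for lam in (3.0, -1.0, 0.5 + 2.0j):
    assert abs(Q(lam) - 1.0 / (lam - 1.0)) < 1e-12

# The eigenvalues of B(tau) are 1 and 2 (both accidentally frozen) and 1 + tau:
tau = 0.75
eigs = sorted(np.linalg.eigvals(A + tau * np.outer(u, v)).real)
assert np.allclose(eigs, [1.0, 1.0 + tau, 2.0])
```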

3. Angular parameter

In this section we will study the perturbations of the form

 A + te^{iθ}uv^*, θ∈[0,2π),

where t>0 is a parameter. More precisely, we will be interested in the evolution of the sets

 σ(A,u,v;t) = ⋃_{0≤θ<2π} σ(A+te^{iθ}uv^*)

with the parameter t.

It should be noted that the sets σ(A,u,v;t) are strongly related to pseudospectral sets, cf. Definition 2.1 of the work on structured pseudospectra mentioned in the introduction. In fact, they can be viewed as the boundaries of pseudospectral sets for the special case of rank one perturbations. The interest there, see in particular the beautiful result in Theorem 4.1 of that work, is in the small t asymptotics of these sets. Our interest below is hence more in the intermediate values of t and in the large t asymptotics of these sets.

By z_1,…,z_d we denote the (mutually different) zeroes of Q′; note that some of them might happen to be accidentally frozen eigenvalues (a slight modification of Example 6 is left to the reader; see also Remark 9 below). We define t_1,…,t_d as

 t_j = 1/|Q(z_j)|, j=1,…,d.

We group some properties of the sets σ(A,u,v;t) in one theorem. Below, by a smooth closed curve we mean a diffeomorphic image of a circle.

Theorem 7.

Let A∈ℂ^{n×n} and let u,v∈ℂ^n be two nonzero vectors. Then the following statements hold.

• For t>0 with t∉{t_1,…,t_d}, the set σ(A,u,v;t) consists of a union of smooth closed algebraic curves that do not intersect mutually.

• For t=t_j the set σ(A,u,v;t_j) is locally diffeomorphic with an interval, except at the intersection points z_k for which t_k=t_j (possibly there are several such k's).

• For generic u,v and for all j=1,…,d the point z_j is a double eigenvalue of B(t_je^{iθ_j}) for some θ_j∈[0,2π). Two of the curves meet for t=t_j at the point z_j. These curves are not differentiable at that point: they make a right angle corner, and meet each other at right angles as well.

•  (10) σ(A) ∪ ⋃_{t>0}σ(A,u,v;t) ∪ Q^{−1}(0) = ℂ.
• The function t↦σ(A,u,v;t) is continuous in the Hausdorff metric for t>0.

• σ(A,u,v;t) converges to Q^{−1}(0)∪{∞} with t→∞.

Proof.

Statements (i) and (ii) become clear if one observes that

 σ(A,u,v;t) = {z∈ℂ : 1/|Q(z)| = t}, t>0,

i.e., it is a level set of the modulus of the rational function 1/Q. Since these level sets can also be written as the set of all points z for which |Q(z)|²=1/t², it is clear that for each t they are algebraic curves. For t∉{t_1,…,t_d} the curves have no self-intersection and hence are smooth.

Let us now prove (iii). First note that for generic u,v there are no accidentally frozen eigenvalues, as remarked at the end of Section 2. Hence, every eigenvalue of B(τ) of multiplicity two which is not an eigenvalue of A is necessarily a zero of Q′, see Proposition 2. However, by Theorem 5.1 of our earlier paper, for generic u,v all eigenvalues of B(τ) which are not eigenvalues of A are of multiplicity at most two, and by Proposition 2 the geometric multiplicity is one. Therefore the meeting points are at the z_j with t_j=1/|Q(z_j)|, j=1,…,d. The behaviour of the eigenvalue curves concerning right angle corners follows from the local theory on the perturbation of an eigenvalue of geometric multiplicity one and algebraic multiplicity two for small values of the parameter (see, e.g., the results cited in the introduction, in particular because of the connection with pseudospectra).

To see (iv), let λ be neither an eigenvalue of A nor a zero of Q. Then 1/|Q(λ)|=t for some t>0, hence λ∈σ(A,u,v;t). Statement (v) follows from Proposition 2.3 part (c) in the paper cited above. To see (vi), note that |1/Q|, as the absolute value of a holomorphic function, does not have any local extreme points off σ(A)∪Q^{−1}(0), and it converges to infinity with λ→∞. ∎

In Section 5 we will study in detail the rate of convergence in point (v) above.

Example 8.

Consider the matrix

 A = [[−2, 0, 0, 0], [0, 0, 0, 0], [0, 0, 4, 1], [0, 0, 0, 4]]

and the vectors

 u = [−0.2+0.7i, 1.5−1.2i, 1.5+0.5i, 1.5+1.5i]^⊤ and v = [0.5+0.3i, 1−0.8i, 0.8+0.9i, −0.3−1.2i]^⊤.

In Figure 1 one may find the graph of the corresponding function |1/Q(λ)| and a couple of curves σ(A,u,v;t) at values of t where double eigenvalues occur. Observe that these curves are often called level curves or contour plots of the function |1/Q(λ)|.

Figure 1. The plot of |1/Q(λ)| and the curves σ(A,u,v;t) for the values of t for which there is a double eigenvalue.
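The level-set description of σ(A,u,v;t) used in Theorem 7 can be checked directly on the data of Example 8: every eigenvalue z of A+te^{iθ}uv^* which is not an eigenvalue of A must satisfy |Q(z)|=1/t. A minimal sketch (the value t=2 is our own arbitrary choice):

```python
import numpy as np

# Data of Example 8.
A = np.diag([-2.0, 0.0, 4.0, 4.0]).astype(complex)
A[2, 3] = 1.0
u = np.array([-0.2 + 0.7j, 1.5 - 1.2j, 1.5 + 0.5j, 1.5 + 1.5j])
v = np.array([0.5 + 0.3j, 1.0 - 0.8j, 0.8 + 0.9j, -0.3 - 1.2j])

def Q(z):
    return v.conj() @ np.linalg.solve(z * np.eye(4) - A, u)

# Every eigenvalue of A + t e^{i theta} u v^* away from sigma(A) = {-2, 0, 4}
# lies on the level set |Q(z)| = 1/t.
t = 2.0
for theta in np.linspace(0.0, 2.0 * np.pi, 7):
    tau = t * np.exp(1j * theta)
    for z in np.linalg.eigvals(A + tau * np.outer(u, v.conj())):
        if min(abs(z - w) for w in (-2.0, 0.0, 4.0)) > 1e-6:
            assert abs(abs(Q(z)) - 1.0 / t) < 1e-7
```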
Remark 9.

Observe that one may easily construct examples with an eigenvalue of B(τ) of maximal multiplicity. Let

 A = [[0, 1, 0, …, 0], [0, 0, 1, …, 0], …, [0, 0, …, 0, 1], [a_1, a_2, …, a_n]], u = [0, …, 0, 1]^⊤, v = [¯a_1, ¯a_2, …, ¯a_n]^⊤

with (a_1,…,a_n)≠0. Then for τ=−1 the matrix B(τ) is equal to the Jordan block with eigenvalue zero, and hence B(−1) has an eigenvalue of multiplicity n. By a construction similar to Example 6 we may also make this eigenvalue accidentally frozen.

Example 10.

In a concrete example, let

 A = [[0, 1, 0], [0, 0, 1], [1, −1, 1]], u = [0, 0, 1]^⊤, v = [1, −1, 1]^⊤.

Then m_A(λ)=λ³−λ²+λ−1=(λ−1)(λ²+1), so the eigenvalues of A are 1, i and −i. Further, p_{uv}(λ)=λ²−λ+1, with roots at (1±i√3)/2, and the zeroes of Q′ are the zeroes of λ²(λ²−2λ+3), which has a double root at 0 and simple roots at 1±√2 i. The corresponding values of t are, respectively, 1 and 4/√3. The eigenvalues of B(te^{iθ}) are plotted for these values of t in the graph below.

Figure 2. Eigenvalue curves (right) showing a triple eigenvalue at zero for t=1 and double eigenvalues at 1±√2 i for t=4/√3. On the left, the graph of 1/|Q(λ)| with the same eigenvalue curves plotted in the ground plane. Green stars indicate the eigenvalues of A, blue stars the roots of p_{uv}(λ), and triangles the zeroes of Q′(λ).
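The quantities in Example 10 are easy to verify numerically. The sketch below confirms the formula Q=p_{uv}/m_A, the coalescence of the three non-frozen eigenvalue branches at τ=1/Q(0)=−1 (so at radius t=|τ|=1), and the value 1/|Q(1+i√2)|=4/√3:

```python
import numpy as np

# Data of Example 10.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, -1.0, 1.0]])
u = np.array([0.0, 0.0, 1.0])
v = np.array([1.0, -1.0, 1.0])

def Q(z):
    return v @ np.linalg.solve(z * np.eye(3) - A, u)

# Q = p_uv / m_A with p_uv = z^2 - z + 1 and m_A = z^3 - z^2 + z - 1:
z = 0.3 + 0.4j
assert abs(Q(z) - (z**2 - z + 1) / (z**3 - z**2 + z - 1)) < 1e-12

# Q(0) = -1, so the branches coalesce at tau = 1/Q(0) = -1: indeed B(-1) is
# the nilpotent Jordan block, a triple eigenvalue at 0.
B = A - np.outer(u, v)
assert np.max(np.abs(np.linalg.eigvals(B))) < 1e-5

# At z = 1 + i*sqrt(2), a zero of Q', the level value is 1/|Q(z)| = 4/sqrt(3).
z2 = 1.0 + 1j * np.sqrt(2.0)
assert abs(1.0 / abs(Q(z2)) - 4.0 / np.sqrt(3.0)) < 1e-12
```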

4. Eigenvalues as global functions of the parameter

We return now to the problem of defining the eigenvalues as functions of the parameter. Recall that l stands for the degree of the minimal polynomial of A. We start with the case where we consider the parameter to be real.

Theorem 11.

Let A∈ℂ^{n×n} be fixed. Then for all pairs of vectors u,v∈ℂ^n except for some closed set of pairs with empty interior the following statements hold.

• The eigenvalues of

 B(τ)=A+τuv∗,τ∈(0,+∞),

which are not eigenvalues of A, can be defined uniquely (up to ordering) as functions λ_j(τ) of the parameter τ.

• The remaining part of the spectrum of B(τ) consists of structurally frozen eigenvalues of A, i.e., there are no accidentally frozen eigenvalues (see formula (5) and the paragraphs following it for definitions).

• For i≠j, one has λ_i(τ)≠λ_j(τ) for all τ∈(0,+∞).

• The functions λ_j(τ) can be extended to analytic functions in some open complex neighbourhood of (0,+∞).

Proof.

First let us write explicitly the condition under which all the statements hold. Due to Proposition 2 and Remark 3 the necessary and sufficient condition is the following: there are no accidentally frozen eigenvalues, and Q′(x)≠0 for every x with Q(x)=1/τ for some τ∈(0,+∞). We will now show that, given arbitrary u,v which do not satisfy this condition, one may construct vectors lying arbitrarily close to u,v such that the condition holds on some open neighbourhood of them. We do this in two steps. First we choose vectors close to u,v such that there are no accidentally frozen eigenvalues, i.e., such that m_A and p_{uv} have no common zeros; this is possible, and the property persists in some small neighbourhood. Second, an additional arbitrarily small perturbation of the vectors guarantees that Q′ does not vanish at any point x with Q(x)=1/τ, τ∈(0,+∞), and this property again persists in a small neighbourhood. ∎

Observe that the statement is essentially stronger, and the proof much easier, than that of Theorem 6.2 of our earlier paper.

Proposition 12.

The statements of Theorem 11 are also true for the angular parameter, i.e., if one replaces τ∈(0,+∞) by τ=te^{iθ}, θ∈[0,2π), in all statements.

Proof.

The equivalent condition for all the statements is in this case: there are no accidentally frozen eigenvalues, and Q′(z)≠0 for every z with Q(z)=1/(te^{iθ}) for some θ∈[0,2π). Hence, in the last step of the proof we need to replace the interval (0,+∞) by the circle {te^{iθ}: θ∈[0,2π)}, with t changed by an arbitrarily small amount if necessary. ∎

However, note that if we replace the complex numbers by the real numbers the statement is false, as the following theorem shows.

Theorem 13.

Let A∈ℝ^{n×n} and u,v∈ℝ^n be such that for some τ∈ℝ an analytic definition of the eigenvalues of B(τ) is not possible due to

 Q(x)=1/τ,Q′(x)=0,Q′′(x)≠0

for some x∈ℝ which is not an eigenvalue of A, cf. Remark 3. Then for all Ã∈ℝ^{n×n}, ũ,ṽ∈ℝ^n with ∥Ã−A∥, ∥ũ−u∥ and ∥ṽ−v∥ sufficiently small, an analytic definition of the eigenvalues of B̃(τ)=Ã+τũṽ^* is not possible, due to the existence of x̃∈ℝ and τ̃_0∈ℝ, depending continuously on (Ã,ũ,ṽ), with

 Q̃(x̃)=1/τ̃_0, Q̃′(x̃)=0, Q̃″(x̃)≠0,

where Q̃ corresponds to the perturbation B̃(τ) as in (6).

Proof.

Recall the formulas (6) and (8) and set

 (11) q_0(λ) = p′_{uv}(λ)m_A(λ) − p_{uv}(λ)m′_A(λ),

so that

 (12) Q′(λ) = q_0(λ)/m_A(λ)².

By assumption we have that q_0(x)=0 and m_A(x)≠0. We also get that q_0′(x)≠0, as otherwise Q″(x)=0. We define q̃_0 analogously as q_0, i.e.,

 Q̃′(λ) = q̃_0(λ)/m_Ã(λ)².

Both polynomials q̃_0 and m_Ã have coefficients depending continuously on the entries of Ã, ũ and ṽ. As x is a simple zero of the polynomial q_0, which is additionally real on the real line, there is a real x̃ near x such that q̃_0(x̃)=0, q̃_0′(x̃)≠0 and m_Ã(x̃)≠0 for Ã, ũ, ṽ as in the statement. Defining τ̃_0=1/Q̃(x̃) finishes the proof. ∎

Remark 14.

To give a punch line to Theorem 13 we make the obvious remark that matrices and vectors A,u,v satisfying the assumptions do exist. For each such A the set of pairs of vectors (u,v) for which a double eigenvalue appears has a nonempty interior in ℝ^n×ℝ^n, contrary to the complex case discussed in Theorem 11.

There is another reason why the eigenvalues cannot be defined globally analytically for real matrices and vectors.

Proposition 15.

Assume that the matrix A∈ℝ^{n×n} has no real eigenvalues and let u,v∈ℝ^n be two arbitrary nonzero real vectors. Then for some τ∈ℝ an analytic definition of the eigenvalues of B(τ) is not possible, due to

 Q(x)=1/τ,Q′(x)=0

for some x∈ℝ, cf. Remark 3.

Proof.

Note that Q is real-valued and differentiable on the real line, due to the assumptions on A, u and v. As Q(x)→0 with x→±∞, one has a local real extreme point of Q. ∎
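Proposition 15 can be illustrated with a minimal example of our own choosing: A the 2×2 rotation generator with spectrum {±i} and u=v=e_1, so that Q(x)=x/(x²+1) on the real line. Q′ vanishes at x=1, where Q(1)=1/2, and indeed τ=2 produces a defective double eigenvalue at 1:

```python
import numpy as np

# Our own illustration of Proposition 15 (not the paper's data):
# A has eigenvalues +-i, hence no real eigenvalues, and Q(x) = x/(x^2+1).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
u = np.array([1.0, 0.0])
v = np.array([1.0, 0.0])

def Q(x):
    return v @ np.linalg.solve(x * np.eye(2) - A, u)

assert abs(Q(1.0) - 0.5) < 1e-12   # Q(1) = 1/2, so tau = 2 is critical

# At tau = 2 the eigenvalues collide: B(2) = [[2,1],[-1,0]] has the double
# eigenvalue 1, and analyticity of the eigenvalue functions fails there.
eigs = np.linalg.eigvals(A + 2.0 * np.outer(u, v))
assert np.allclose(eigs, [1.0, 1.0])
```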

5. The eigenvalues of A+τuv∗ for large |τ|

We shall also be concerned with the asymptotic shape of the curves σ(A,u,v;t). The proof of the following result was given in our earlier paper: let A be an n×n complex matrix and let u,v be generic complex n-vectors. Asymptotically, as t→∞, these curves are circles: one with radius going to infinity centered at the origin, and the others with radii going to zero and centers at the roots of p_{uv}(λ). The result will be restated in a more precise form below, in Theorem 17, part (v). For this we first prove the following lemma.

Lemma 16.

Let m_A(λ)=∑_{k=0}^{l} m_kλ^k. Then

 (13) p_{uv}(λ) = ∑_{i=0}^{l−1} ( ∑_{k−j=i+1, k,j≥0} m_k v^*A^j u ) λ^i.
Proof.

Recall that p_{uv}(λ)=m_A(λ)v^*(λI_n−A)^{−1}u. Expanding (λI_n−A)^{−1} in a Laurent series for |λ|>∥A∥ we obtain

 p_{uv}(λ) = m_A(λ)v^*(λI_n−A)^{−1}u = ∑_{k=0}^{l} ∑_{j=0}^{∞} m_k v^*A^j u λ^{k−j−1}.

Put i=k−j−1 and interchange the order of summation to see that

 p_{uv}(λ) = ∑_{i=−∞}^{l−1} ( ∑_{k−j−1=i, k,j≥0} m_k v^*A^j u ) λ^i.

However, p_{uv} is a polynomial in λ, hence the sum from i=−∞ to i=−1 vanishes, and we arrive at formula (13). ∎
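Formula (13) can be sanity-checked symbolically. The sketch below compares definition (1) with the right-hand side of (13) for a small matrix of our own choosing whose minimal and characteristic polynomials coincide (real vectors, so v^* = v^⊤):

```python
import sympy as sp

lam = sp.symbols('lambda')
# Upper triangular with distinct eigenvalues 0, 1, 2, so the minimal
# polynomial equals the characteristic polynomial and l = 3.
A = sp.Matrix([[0, 1, 0], [0, 1, 1], [0, 0, 2]])
u = sp.Matrix([1, 1, 1])
v = sp.Matrix([2, 0, 1])
l = 3
mA = A.charpoly(lam).as_expr()
m = [mA.coeff(lam, k) for k in range(l + 1)]       # coefficients m_0, ..., m_l

# Definition (1): p_uv = m_A(lambda) * v^T (lambda I - A)^{-1} u
p_direct = sp.cancel(mA * (v.T * (lam * sp.eye(3) - A).inv() * u)[0])

# Formula (13): coefficient of lambda^i is the sum over k - j = i + 1 of
# m_k * (v^T A^j u).
moments = [(v.T * A**j * u)[0] for j in range(l)]
p_formula = sp.expand(sum(
    sum(m[k] * moments[k - i - 1] for k in range(i + 1, l + 1)) * lam**i
    for i in range(l)))
assert sp.simplify(p_direct - p_formula) == 0
```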

Next, we analyze the roots of the polynomial m_A(λ)−τp_{uv}(λ) as |τ|→∞. We have already shown in our earlier paper that if v^*u≠0, then l−1 of these roots will approximate the roots of p_{uv}, while one goes to infinity. The condition v^*u≠0 obviously holds for generic u,v; however, the next theorem presents the full picture in view of later applications to structured matrices.
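This dichotomy is easy to observe numerically. Below, with our own randomly generated data satisfying v^*u≠0, l−1 eigenvalues of B(τ) land near the roots of p_{uv} while one eigenvalue is approximately τ v^*u:

```python
import numpy as np

# A is diagonal with distinct eigenvalues, so l = n = 3 and m_A = charpoly.
rng = np.random.default_rng(1)
a = np.array([0.0, 1.0, 2.0])
A = np.diag(a).astype(complex)
u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = v.conj() * u
assert abs(w.sum()) > 1e-3            # w.sum() = v^* u, generically nonzero

# For diagonal A: p_uv(z) = sum_i w_i * prod_{k != i} (z - a_k), of degree l - 1.
coeffs = np.zeros(3, dtype=complex)
for i in range(3):
    o = np.delete(a, i)
    coeffs += w[i] * np.array([1.0, -o.sum(), o.prod()])
roots_puv = np.roots(coeffs)

tau = 1e8
eigs = np.linalg.eigvals(A + tau * np.outer(u, v.conj()))
i_big = np.argmax(np.abs(eigs))
# One eigenvalue grows like tau * v^* u ...
assert abs(eigs[i_big] / (tau * w.sum()) - 1.0) < 1e-3
# ... while the remaining l - 1 eigenvalues approach the roots of p_uv.
for r in roots_puv:
    assert min(abs(eigs[j] - r) for j in range(3) if j != i_big) < 1e-4
```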

Theorem 17.

Let A∈ℂ^{n×n}, u,v∈ℂ^n, and let l denote the degree of the minimal polynomial m_A(λ). Assume also that

 (14) v^*u = ⋯ = v^*A^{κ−1}u = 0, v^*A^κu ≠ 0,

for some κ∈{0,…,l−1}, and put

 v^*A^κu = r_κ e^{iθ_κ}.

Then

• p_{uv}(λ) is of degree l−κ−1;

• l−κ−1 eigenvalues of B(τ) converge to the roots of p_{uv}(λ) as |τ|→∞;

• there are κ+1 eigenvalues of B(τ) which go to infinity with τ=re^{iθ}, r→∞, as

 λ_j(re^{iθ}) = (r r_κ)^{1/(κ+1)} e^{i((θ+θ_κ)/(κ+1) + 2πj/(κ+1))} + O(1), j=1,2,…,κ+1,

where θ∈[0,2π) is fixed, and for all of them we have

 dλ_j/dτ = v^*A^κu/((κ+1)λ_j^κ) + O(λ_j^{−(κ+1)}),

so these eigenvalues can be parametrized by a curve

 Γ(θ) = (r r_κ)^{1/(κ+1)} exp(iθ) + O(1), (r→∞);
• as θ increases from 0 to 2π one has, after possibly renumerating the eigenvalues λ_j, that

 λ_j(re^{iθ}) → λ_{j+1}(r), j=1,…,κ, λ_{κ+1}(re^{iθ}) → λ_1(r);
• additionally, let ζ_1,…,ζ_ν denote the roots of the polynomial p_{uv}(λ), with multiplicities k_1,…,k_ν respectively. Denote

 v^*(ζ_jI_n−A)^{−(k_j+1)}u = ρ_j e^{iθ_j}, j=1,…,ν.

Then for |τ| sufficiently large σ(A,u,v;t) can be parametrized by ν+1 disjoint curves Γ_1,…,Γ_{ν+1}, where the eigenvalues which go to infinity trace out a curve

 Γ_{ν+1}(θ) = (r r_κ)^{1/(κ+1)} exp(iθ) + O(1)

while the eigenvalues near ζ_j trace out a curve which is of the form

 Γ_j(θ) = ζ_j + |τ|^{−1/k_j} ρ_j^{−1/k_j} e^{iθ} + O(|τ|^{−2/k_j}), 0≤θ≤2π,

with j=1,…,ν.

Proof.

Statement (i) results directly from Lemma 16.

Statement (ii) is a consequence of the fact that the characteristic polynomial of equals