# A Root-Free Splitting-Lemma for Systems of Linear Differential Equations

We consider the formal reduction of a system of linear differential equations and show that, if the system can be block-diagonalised by a ramified Shearing-transformation followed by an application of the Splitting Lemma, and if the spectra of the leading block matrices of the ramified system satisfy a symmetry condition, then this block-diagonalisation can also be achieved by an unramified transformation. Combined with classical results by Turrittin and Wasow as well as work by Balser, this yields a constructive and simple proof of the existence of an unramified block-diagonal form from which formal invariants such as the Newton polygon can be read directly. Our result is particularly useful for designing efficient algorithms for the formal reduction of such systems.

05/03/2017


## 1 Introduction

When studying the formal reduction of a system of linear differential equations

 x\frac{dy}{dx}=A(x)y, (1)

where y is a vector with n components and A(x) a square formal meromorphic power series matrix of dimension n of the form

 A(x)=x^{-r}\sum_{j=0}^{\infty}A_jx^{j}\qquad(A_0\neq 0)

with pole order r, the structure of the leading matrix A_0 allows one to reduce the problem to several problems of smaller size whenever A_0 has several eigenvalues. The well-known

Splitting Lemma [9] states that if A_0 is block-diagonal,

 A_0=\begin{pmatrix}A_0^{11}&0\\0&A_0^{22}\end{pmatrix}

with the additional condition that A_0^{11} and A_0^{22} have no common eigenvalue, there exists a formal transformation matrix

 T(x)=\sum_{j=0}^{\infty}T_jx^{j}\qquad(T_0=I) (2)

such that the change of variable y=T(x)z transforms the system (1) into a new system

 x\frac{dz}{dx}=B(x)z (3)

where

 B=\begin{pmatrix}B_{11}&0\\0&B_{22}\end{pmatrix}

is of the same pole order and block-diagonal with the same block partition as in A_0. The matrix B is computed by

 B=T[A]:=T^{-1}AT-xT^{-1}\frac{dT}{dx}. (4)

Using the Splitting Lemma, it is hence sufficient to study the case where the leading matrix A_0 in (1) has only one eigenvalue. Using an exponential shift of the form y=e^{\phi(x)}z, where \phi(x) is a scalar function with x\phi'(x)=\lambda x^{-r} and \lambda is the unique eigenvalue of A_0, one can (and we will throughout this paper) assume that A_0 is nilpotent.
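As an illustration (a toy example of our own, not taken from the paper), the gauge transformation (4) can be carried out symbolically; the sketch below, which assumes sympy and an arbitrary small 2x2 system, applies a simple root-free transformation and checks that the identity transformation leaves the system unchanged.

```python
# Illustrative sketch (our own example, not the paper's): the gauge
# transformation B = T[A] = T^{-1} A T - x T^{-1} dT/dx of equation (4),
# applied with sympy to a small 2x2 system x dy/dx = A(x) y.
import sympy as sp

x = sp.symbols('x')

def gauge(T, A):
    """Return T[A] = T^-1 * A * T - x * T^-1 * dT/dx."""
    Tinv = T.inv()
    return sp.expand(Tinv * A * T - x * Tinv * T.diff(x))

A = sp.Matrix([[0, 1/x], [1/x, 0]])   # coefficient matrix of x dy/dx = A(x) y
T = sp.diag(1, x)                      # a simple root-free transformation
B = gauge(T, A)                        # here B = [[0, 1], [x^-2, -1]]

# the identity transformation leaves the system unchanged, as (4) demands
assert gauge(sp.eye(2), A) == A
```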

Several methods for finding transformation matrices which again lead to non-nilpotent leading matrices have been suggested [1, 2, 3, 7, 8, 9]. It can be shown that this, combined with the Splitting Lemma, gives rise to a recursive procedure which decomposes the initial system into new systems of smaller dimension or smaller pole order. The structure of the matrix W(x) in a formal fundamental matrix solution of the system,

 Y(x)=F(x)x^{\Lambda}e^{W(x)}, (5)

can be determined uniquely through this method. Here F(x) is an invertible formal meromorphic matrix power series in a fractional power of x, \Lambda is a constant complex matrix commuting with W(x) and W(x) is a diagonal matrix containing polynomials in the same fractional power of 1/x without constant terms.

For the purposes of this paper, it is useful to distinguish between the following two types of transformation matrices:

1. Matrices T(x) whose coefficients are formal meromorphic power series in the variable x and whose determinant is not the zero series. We will refer to this type of transformation as root-free transformations. Two systems linked as in (4) by such a root-free transformation shall be called meromorphically equivalent, or equivalent for short.

2. Matrices T(x) having coefficients which are formal meromorphic power series in a fractional power x^{1/q} of x, whose determinant is not the zero series. We will call these transformations ramified transformations. If T(x) is a ramified transformation, the smallest integer q such that T(x^{q}) is root-free is called the ramification index of T. We shall also say that T is a q-meromorphic transformation and takes a system into a q-meromorphically equivalent system.

In this paper, we are interested in the situation where one cannot find transformations of the first type that lead to a system with a non-nilpotent leading matrix. In other words, the introduction of a ramification is necessary. This can be stated in terms of formal solutions by saying that the dominant (negative) power of x in the matrix W(x), or alternatively the biggest slope of the Newton polygon of the system, is a rational number which is not an integer [2, 5].

In this case, the methods in [1, 3, 8, 9] apply a series of root-free transformations and ramified Shearing-transformations (in [2, 7] a different strategy is employed). A Shearing-transformation is a transformation of the form

 S(x)=\begin{pmatrix}x^{p_1/q}&&&\\&x^{p_2/q}&&\\&&\ddots&\\&&&x^{p_n/q}\end{pmatrix}

where p_1,\ldots,p_n\in\mathbb{Z} and q\in\mathbb{N}. In [1] it is shown that it is always possible to obtain a non-nilpotent leading matrix by using exponential shifts and a transformation of the form

 T(x)=R(x)S(x) (6)

where R(x) is a root-free transformation having a finite number of nonzero terms and S(x) is a ramified Shearing-transformation. The transformed system

 x\frac{dy}{dx}=\hat{A}(x)y (7)

has a coefficient matrix of the form

 \hat{A}(x)=x^{-r}\sum_{j=p}^{\infty}\hat{A}_jx^{j/q}\qquad(\hat{A}_p\neq 0,\ p\geq 0) (8)

where p is relatively prime to q and \hat{A}_p has several eigenvalues. Applying the Splitting Lemma to (7) then yields a q-meromorphic transformation taking the system into a new system whose coefficient matrix is block-diagonal. Hence the remaining computations are carried out on matrices containing ramified power series.
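To make the effect of a ramified Shearing-transformation concrete, here is a small sketch (a toy system of our own, not one from the paper): after shearing with ramification index q = 2, a nilpotent leading matrix becomes invertible, at the price of half-integer powers of x.

```python
# Toy sketch (our example, not the paper's): the shearing S(x) = diag(x^{1/2}, 1)
# applied to x dy/dx = A(x) y, whose leading matrix [[0,0],[1,0]] is nilpotent.
import sympy as sp

x = sp.symbols('x', positive=True)
A = sp.Matrix([[0, 1], [1/x, 0]])    # A(x) = x^{-1}([[0,0],[1,0]] + x [[0,1],[0,0]])
S = sp.diag(sp.sqrt(x), 1)           # ramified Shearing-transformation, q = 2
B = sp.simplify(S.inv() * A * S - x * S.inv() * S.diff(x))
# B = [[-1/2, x^{-1/2}], [x^{-1/2}, 0]]: the leading matrix of the sheared
# system is [[0,1],[1,0]] with eigenvalues +1 and -1, no longer nilpotent.
```

Note that the two eigenvalues differ by the factor -1 = e^{2πi/2}, the root-of-unity symmetry that Section 4 studies.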

One may ask under which conditions there also exists a root-free transformation which achieves a block-diagonalisation of the original system (1) without introducing ramifications, and how to compute such a transformation.

We shall give a sufficient condition for the existence of such a root-free transformation and also provide a constructive method for computing its coefficients. Denote by \mathrm{spec}(M) the set of eigenvalues of a complex square matrix M. We will prove the following theorem:

###### Theorem 1.1 (“Root-Free Splitting Lemma”).

Consider the system (1) and assume there exists a Shearing-transformation S(x) of ramification index q taking the system into one of the form (7) such that its leading matrix \hat{A}_p is similar to a block-diagonal matrix

 \hat{B}_p=\begin{pmatrix}\hat{B}_p^{11}&0\\0&\hat{B}_p^{22}\end{pmatrix}

and suppose that for all \lambda\in\mathrm{spec}(\hat{B}_p^{11}) and \mu\in\mathrm{spec}(\hat{B}_p^{22}) it holds that \lambda\neq\omega^{k}\mu for all k\in\{0,\ldots,q-1\}, where \omega=e^{2\pi i/q}. Then there exists a root-free transformation H(x) with the following properties:

1. H(x) transforms the system (1) into an equivalent system with block-diagonal coefficient matrix

 B=\begin{pmatrix}B_{11}&0\\0&B_{22}\end{pmatrix}

where the block sizes match those in the matrix \hat{B}_p.

2. The matrix B(x) has the leading matrix A_0 and the same pole order r as A(x).

3. A finite number of coefficients of the root-free transformation H(x) can be computed from a finite number of the coefficients of the system (1).

The classical Splitting Lemma can be seen as a particular case of this theorem by putting S(x) as the identity matrix and q=1.

This paper is organised as follows: in Section 2, we review the classical Splitting Lemma. In the following section we introduce a special class of systems and give a variant of the Splitting Lemma particular to this class. Using this, we give the proof of Theorem 1.1 in Section 4 and illustrate the benefits of our theorem for the formal reduction on an example in Section 5.

Notations: Throughout the paper, empty entries in matrices are supposed to be filled with 0. We write \mathrm{diag}(M_1,\ldots,M_s) for a (block-)diagonal matrix whose diagonal entries are the M_i. The valuation of a polynomial or formal power series (with possibly negative or fractional exponents) is the smallest occurring power of the variable x. Other notations are defined as they appear in the text.
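The valuation just defined is easy to compute mechanically; the following helper (a hypothetical utility of our own, matching the notation above) returns the smallest power of x occurring in a finite sum of x-powers with possibly negative or fractional exponents.

```python
# Hypothetical helper matching the notation above: the valuation of a
# polynomial with possibly negative or fractional exponents in x.
import sympy as sp

x = sp.symbols('x')

def valuation(expr):
    """Smallest power of x occurring in expr (a finite sum of x-powers)."""
    terms = sp.expand(expr).as_ordered_terms()
    return min(t.as_powers_dict().get(x, 0) for t in terms)

assert valuation(x**-2 + 3 + x**5) == -2
assert valuation(sp.sqrt(x) + x) == sp.Rational(1, 2)
```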

## 2 Review of the Splitting Lemma

As we have previously mentioned, the Splitting Lemma is a well-known result. Its proof is carried out in a constructive fashion and gives a method for computing the coefficients of the transformation matrix as in (2), see for example [1, 2, 9]. We repeat it here for reasons of completeness. Also, we formulate it for q-meromorphic systems in preparation for the proof of Lemma 3.4.

###### Lemma 2.1.

Consider the system (7) and assume that \hat{A}_p is block-diagonal,

 \hat{A}_p=\begin{pmatrix}\hat{A}_p^{11}&0\\0&\hat{A}_p^{22}\end{pmatrix}

such that

 \mathrm{spec}(\hat{A}_p^{11})\cap\mathrm{spec}(\hat{A}_p^{22})=\emptyset.

Then there exists a formal q-meromorphic transformation of the form

 \hat{T}(x)=\sum_{j=0}^{\infty}\hat{T}_jx^{j/q}\qquad(\hat{T}_0=I)

such that the transformed system \hat{B}=\hat{T}[\hat{A}] is block-diagonal with the same block partition as in \hat{A}_p.

Proof  We use a transformation of the special form

 \hat{T}(x)=\begin{pmatrix}I&\hat{U}(x)\\\hat{V}(x)&I\end{pmatrix}

with \hat{U}(x)=\sum_{j=1}^{\infty}\hat{U}_jx^{j/q} and \hat{V}(x)=\sum_{j=1}^{\infty}\hat{V}_jx^{j/q}. Denote by \hat{B} the matrix \hat{T}[\hat{A}]. Inserting the series expansions for \hat{A}, \hat{B} and \hat{T} and comparing coefficients gives the recursion formula

 \hat{A}_p\hat{T}_h-\hat{T}_h\hat{A}_p=\sum_{j=1}^{h}(\hat{T}_{h-j}\hat{B}_{j+p}-\hat{A}_{j+p}\hat{T}_{h-j})+((p+h)/q-r)\hat{T}_{p+h-qr},\quad h>0 (9)

where \hat{T}_j=0 for j<0. Equation (9) is of the form

 \hat{A}_p\hat{T}_h-\hat{T}_h\hat{A}_p=\hat{B}_{h+p}-\hat{A}_{h+p}+\hat{R}_h (10)

where

 \hat{R}_h=\sum_{j=1}^{h-1}(\hat{T}_{h-j}\hat{B}_{j+p}-\hat{A}_{j+p}\hat{T}_{h-j})+((p+h)/q-r)\hat{T}_{p+h-qr}

depends only on \hat{T}_j with j<h and \hat{B}_j with j<h+p. Using the special form of \hat{T}_h and \hat{B}_h,

 \hat{T}_h=\begin{pmatrix}0&\hat{U}_h\\\hat{V}_h&0\end{pmatrix},\quad\hat{B}_h=\begin{pmatrix}\hat{B}_h^{11}&0\\0&\hat{B}_h^{22}\end{pmatrix},

and decomposing (10) into blocks accordingly gives the following system of equations:

 \hat{B}_{p+h}^{11}-\hat{A}_{p+h}^{11}+\hat{R}_h^{11}=0, (11)
 \hat{B}_{p+h}^{22}-\hat{A}_{p+h}^{22}+\hat{R}_h^{22}=0 (12)

where \hat{B}_{p+h}^{11} and \hat{B}_{p+h}^{22} are unknown, and

 \hat{A}_p^{11}\hat{U}_h-\hat{U}_h\hat{A}_p^{22}=\hat{R}_h^{12}-\hat{A}_{p+h}^{12}, (13)
 \hat{A}_p^{22}\hat{V}_h-\hat{V}_h\hat{A}_p^{11}=\hat{R}_h^{21}-\hat{A}_{p+h}^{21}

with unknowns \hat{U}_h and \hat{V}_h. Given \hat{R}_h, the first two equations (11) and (12) can be solved by setting \hat{B}_{p+h}^{11}=\hat{A}_{p+h}^{11}-\hat{R}_h^{11} and \hat{B}_{p+h}^{22}=\hat{A}_{p+h}^{22}-\hat{R}_h^{22}. The remaining equations are Sylvester equations and can be solved uniquely for \hat{U}_h and \hat{V}_h because the matrices \hat{A}_p^{11} and \hat{A}_p^{22} have no eigenvalues in common, see e.g. [4].
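Numerically, each step of this recursion reduces to Sylvester equations of the kind appearing in (13). The sketch below (our own illustration with made-up block data, not the paper's algorithm) uses scipy's solve_sylvester, which solves AX + XB = Q, so the right block matrix enters with a minus sign.

```python
# Illustration of one recursion step: with made-up blocks whose spectra are
# disjoint, solve the Sylvester equation A11 U - U A22 = R of type (13).
import numpy as np
from scipy.linalg import solve_sylvester

A11 = np.array([[1.0, 1.0], [0.0, 2.0]])   # spectrum {1, 2}
A22 = np.array([[-1.0]])                   # spectrum {-1}, disjoint from A11's
R = np.array([[3.0], [4.0]])               # a made-up right-hand side

# solve_sylvester solves A X + X B = Q, so pass B = -A22
U = solve_sylvester(A11, -A22, R)
assert np.allclose(A11 @ U - U @ A22, R)
```

The disjointness of the spectra is exactly what guarantees a unique solution here, mirroring the argument in the proof.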

## 3 On (ω,P)-Commutative Systems

In this section, we study a particular class of q-meromorphic systems. The starting point of our considerations was [1, Lemma 5, Section 3.3], which observes that a system transformed by a Shearing-transformation has a special structure. We will state this more generally and give conditions under which this special structure is preserved by the Splitting Lemma.

###### Definition 3.1.

Let q be a positive integer, \omega=e^{2\pi i/q} and P a constant complex matrix. We call a formal q-meromorphic matrix \hat{A}(x) as in (8) (\omega,P)-commutative if

 \hat{A}_jP=\omega^{j}P\hat{A}_j\qquad(j\geq p).

A system of the form (7) is called (\omega,P)-commutative if its coefficient matrix \hat{A}(x) is (\omega,P)-commutative.

Remark 3.1.  The considerations in [1] correspond, in our notation, to the case of an (\omega,P)-commutative system where P is an invertible diagonal matrix. However, this restriction is not necessary in this section and we will develop our theory first for arbitrary matrices P.

Remark 3.2.  Two complex matrices A and B satisfying AB=\omega BA are called \omega-commutative in [6]. In terms of their notation, the jth coefficient \hat{A}_j of an (\omega,P)-commutative matrix and the matrix P are \omega^{j}-commutative.

The following lemma gives an alternative characterisation of (\omega,P)-commutative systems which will be useful later. Note that similar concepts are used in [1].

###### Lemma 3.1.

Consider a system of the form (7). Then the following two statements are equivalent:

1. \hat{A}(x) is (\omega,P)-commutative.

2. \hat{A}(x)P=P\hat{A}(e^{2\pi i}x).

Proof  A direct calculation shows:

 \hat{A}_jP=\omega^{j}P\hat{A}_j\ \ \forall j\geq p
 \iff x^{-r}\sum_{j=p}^{\infty}\hat{A}_jx^{j/q}P=x^{-r}\sum_{j=p}^{\infty}e^{2\pi ij/q}P\hat{A}_jx^{j/q}
 \iff \hat{A}(x)P=P\hat{A}(e^{2\pi i}x).
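As a self-contained numerical check of this equivalence (with toy data of our own): for q = 2, ω = e^{2πi/q} = -1, the single-coefficient matrix Â(x) = x^{-1/2}[[0,1],[1,0]] is (ω,P)-commutative with P = diag(-1,1), and substituting x → e^{2πi}x indeed reproduces the same identity.

```python
# Self-contained check of both characterisations on toy data (ours): q = 2,
# ω = e^{2πi/q} = -1, P = diag(-1, 1), Â(x) = x^{-1/2} [[0,1],[1,0]].
import sympy as sp

q = 2
omega = sp.exp(2 * sp.pi * sp.I / q)        # equals -1
P = sp.diag(-1, 1)
A1 = sp.Matrix([[0, 1], [1, 0]])            # the single coefficient, index j = 1

# statement 1: coefficient-wise commutativity  Â_j P = ω^j P Â_j
assert sp.simplify(A1 * P - omega**1 * P * A1) == sp.zeros(2, 2)

# statement 2: Â(x) P = P Â(e^{2πi} x); substituting x -> e^{2πi} x turns
# x^{-1/2} into e^{-πi} x^{-1/2} = -x^{-1/2}
x = sp.symbols('x', positive=True)
Ahat = x**sp.Rational(-1, 2) * A1
Ahat_rot = sp.exp(-sp.I * sp.pi) * x**sp.Rational(-1, 2) * A1
assert sp.simplify(Ahat * P - P * Ahat_rot) == sp.zeros(2, 2)
```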

It is also straightforward to see that we have

###### Lemma 3.2.

Consider a system of the form (7) and suppose \hat{A}(x) is (\omega,I)-commutative, where I denotes the identity matrix. Then \hat{A}(x) is an unramified formal meromorphic power series matrix.

We make the following definition: for two eigenvalues \lambda_1 and \lambda_2 of \hat{A}_p we define an equivalence relation \sim_l by

 \lambda_1\sim_l\lambda_2\iff\exists k\in\{0,\ldots,q-1\}:\lambda_1=\omega^{lk}\lambda_2

and denote by \omega_l\text{-spec}(\hat{A}_p) the set

 \{[\lambda]_{\sim_l}\,|\,\lambda\in\mathrm{spec}(\hat{A}_p)\}

where we will, slightly abusing notation, identify [\lambda]_{\sim_l} with \lambda.

Given an (\omega,P)-commutative matrix \hat{A}(x) and a constant invertible matrix C, it is clear that the matrix C^{-1}\hat{A}(x)C is (\omega,C^{-1}PC)-commutative.
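The classes of the relation just defined can be computed mechanically; the helper below (a hypothetical utility of our own) groups a spectrum into its ω^l-orbits.

```python
# Hypothetical helper: group a spectrum into the classes of the relation
# λ1 ~_l λ2  iff  λ1 = ω^{lk} λ2 for some k in {0,...,q-1}, ω = e^{2πi/q}.
import numpy as np

def omega_spec(eigs, q, l, tol=1e-9):
    omega = np.exp(2j * np.pi / q)
    classes = []
    for lam in eigs:
        if any(abs(lam - rep * omega**(l * k)) < tol
               for cls in classes for rep in cls for k in range(q)):
            continue                    # lam already lies in a known class
        classes.append([lam * omega**(l * k) for k in range(q)])
    return classes

# spectrum {1, -1}: for q = 2, l = 1 it collapses to a single class {1, -1},
# while the spectrum {1, 2} stays split into two classes
assert len(omega_spec([1.0, -1.0], q=2, l=1)) == 1
assert len(omega_spec([1.0, 2.0], q=2, l=1)) == 2
```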

###### Lemma 3.3.

Let \hat{A}(x) as in (8) be (\omega,P)-commutative and let \lambda_1 and \lambda_2 be two eigenvalues of \hat{A}_p with [\lambda_1]_{\sim_p}\neq[\lambda_2]_{\sim_p}. Then there exists a constant invertible matrix C such that \hat{B}(x)=C^{-1}\hat{A}(x)C is (\omega,\tilde{P})-commutative and

 \hat{B}_p=\begin{pmatrix}\hat{B}_p^{11}&0\\0&\hat{B}_p^{22}\end{pmatrix},\quad\tilde{P}=\begin{pmatrix}\tilde{P}^{11}&0\\0&\tilde{P}^{22}\end{pmatrix}

with \tilde{P}=C^{-1}PC, \lambda_1\in\mathrm{spec}(\hat{B}_p^{11}), \lambda_2\in\mathrm{spec}(\hat{B}_p^{22}) and \omega_p\text{-spec}(\hat{B}_p^{11})\cap\omega_p\text{-spec}(\hat{B}_p^{22})=\emptyset.

Proof  The existence of a matrix C so that \hat{B}_p=C^{-1}\hat{A}_pC satisfies the conditions of the lemma can be seen easily from elementary properties of matrix decompositions. We will use techniques similar to those in [6] in order to show that \tilde{P}=C^{-1}PC has the required block-diagonal structure. Let

 \tilde{P}=\begin{pmatrix}\tilde{P}^{11}&\tilde{P}^{12}\\\tilde{P}^{21}&\tilde{P}^{22}\end{pmatrix}

where the block partition matches that in the matrix \hat{B}_p. Inserting \tilde{P} into the equation \hat{B}_p\tilde{P}=\omega^{p}\tilde{P}\hat{B}_p yields in particular the two conditions

 \hat{B}_p^{11}\tilde{P}^{12}-\omega^{p}\tilde{P}^{12}\hat{B}_p^{22}=0

and

 \hat{B}_p^{22}\tilde{P}^{21}-\omega^{p}\tilde{P}^{21}\hat{B}_p^{11}=0.

The first of these two equations is of the form

 \hat{B}_p^{11}X-X\omega^{p}\hat{B}_p^{22}=0.

The assumption \omega_p\text{-spec}(\hat{B}_p^{11})\cap\omega_p\text{-spec}(\hat{B}_p^{22})=\emptyset implies that the matrices \hat{B}_p^{11} and \omega^{p}\hat{B}_p^{22} have no eigenvalue in common. The above equation therefore has the unique solution X=0, so \tilde{P}^{12}=0. A very similar argument applies to the second equation and gives \tilde{P}^{21}=0. This proves the lemma.

Remark 3.3.  The matrices C, \hat{B}_p and \tilde{P} are in general not uniquely determined.

We now show that, for an (\omega,P)-commutative system which has block-diagonal structure as in Lemma 3.3, the application of the Splitting Lemma preserves the property of being (\omega,P)-commutative.

###### Lemma 3.4 (“Splitting Lemma for (ω,P)-Commutative Systems”).

Consider the system (7) and assume that \hat{A}(x) is (\omega,P)-commutative with \hat{A}_p and P block-diagonal with blocks of the same dimensions,

 \hat{A}_p=\begin{pmatrix}\hat{A}_p^{11}&0\\0&\hat{A}_p^{22}\end{pmatrix},\quad P=\begin{pmatrix}P^{11}&0\\0&P^{22}\end{pmatrix}

such that

 \omega_p\text{-spec}(\hat{A}_p^{11})\cap\omega_p\text{-spec}(\hat{A}_p^{22})=\emptyset.

Then there exists an (\omega,P)-commutative q-meromorphic transformation of the form

 \hat{T}(x)=\sum_{j=0}^{\infty}\hat{T}_jx^{j/q}\qquad(\hat{T}_0=I) (14)

such that the transformed system is (\omega,P)-commutative and block-diagonal with the same block partition as in \hat{A}_p and P.

Proof  The existence of the transformation \hat{T}(x) is given by Lemma 2.1, the classical Splitting Lemma. What remains to show is that \hat{T}(x) and the transformed system are (\omega,P)-commutative. Denote by \hat{B}(x)=\hat{T}[\hat{A}] the coefficient matrix of the transformed system. Using the notations as in (9) and (10), we will show that the following relations hold:

 \hat{R}_kP=\omega^{p+k}P\hat{R}_k, (15)
 \hat{T}_kP=\omega^{k}P\hat{T}_k, (16)
 \hat{B}_{p+k}P=\omega^{p+k}P\hat{B}_{p+k} (17)

for all k\geq 0. The case k=0 holds trivially by putting \hat{R}_0=0, since \hat{T}_0=I and \hat{B}_p=\hat{A}_p. Let h be an arbitrary positive integer. We will see that if the above relations hold for all k<h then they hold for k=h. The claim then follows by induction.

We compute

 \hat{R}_hP=\sum_{j=1}^{h-1}(\hat{T}_{h-j}\hat{B}_{j+p}-\hat{A}_{j+p}\hat{T}_{h-j})P+((p+h)/q-r)\hat{T}_{p+h-qr}P
 =\sum_{j=1}^{h-1}\omega^{p+h}P(\hat{T}_{h-j}\hat{B}_{j+p}-\hat{A}_{j+p}\hat{T}_{h-j})+((p+h)/q-r)\omega^{p+h}P\hat{T}_{p+h-qr}
 =\omega^{p+h}P\hat{R}_h

where we have used (16) and (17) for k<h, the identity \omega^{qr}=1 and the assumption that \hat{A}(x) is (\omega,P)-commutative. This proves (15) for k=h.

We decompose \hat{R}_h into blocks according to the block structure of \hat{A}_p and P and find, using (11) and the (\omega,P)-commutativity of \hat{A},

 \hat{B}_{p+h}^{11}P^{11}=(\hat{A}_{p+h}^{11}-\hat{R}_h^{11})P^{11}=\omega^{p+h}P^{11}(\hat{A}_{p+h}^{11}-\hat{R}_h^{11})=\omega^{p+h}P^{11}\hat{B}_{p+h}^{11}.

We can show an analogous relationship for \hat{B}_{p+h}^{22} and P^{22} using (12). Hence (17) holds for k=h.

It remains to show (16) for k=h, which is equivalent to showing

 \hat{U}_hP^{22}=\omega^{h}P^{11}\hat{U}_h, (18)
 \hat{V}_hP^{11}=\omega^{h}P^{22}\hat{V}_h. (19)

We will only show that the first of these two equations holds; the second can be dealt with similarly. Multiplying (13) with \omega^{h}P^{11} on the left and with P^{22} on the right, using the commutation relations satisfied by \hat{A}_{p+h} and \hat{R}_h, and combining the two resulting equations yields

 \hat{A}_p^{11}(\hat{U}_hP^{22}-\omega^{h}P^{11}\hat{U}_h)-(\hat{U}_hP^{22}-\omega^{h}P^{11}\hat{U}_h)\omega^{p}\hat{A}_p^{22}=0.

This equation is of the form

 \hat{A}_p^{11}X-X\omega^{p}\hat{A}_p^{22}=0.

The assumption \omega_p\text{-spec}(\hat{A}_p^{11})\cap\omega_p\text{-spec}(\hat{A}_p^{22})=\emptyset implies that the matrices \hat{A}_p^{11} and \omega^{p}\hat{A}_p^{22} have no eigenvalue in common. The above equation therefore has the unique solution X=0, from which we conclude (18). This completes the proof of the lemma.

Remark 3.4.  We observe that the two block matrices of the transformed system are (\omega,P^{11})-commutative and (\omega,P^{22})-commutative, respectively.

## 4 A Root-Free Splitting Lemma

We define a generalised Shearing-transformation as a transformation of the form \tilde{S}(x)=S(x)C where S(x) is a Shearing-transformation and C is a constant invertible matrix.

###### Proposition 4.1.

Consider a system as in (7) with leading matrix \hat{A}_p and let \omega=e^{2\pi i/q}. The following statements are equivalent:

1. There exists a system as in (1) and a generalised Shearing-transformation \tilde{S}(x) of ramification index q such that \hat{A}=\tilde{S}[A].

2. The system (7) is (\omega,P)-commutative where the matrix P is similar to a diagonal matrix and P^{q}=I. Furthermore, if \lambda is an eigenvalue of \hat{A}_p with multiplicity m, the numbers \omega^{p}\lambda,\omega^{2p}\lambda,\ldots,\omega^{(q-1)p}\lambda are all eigenvalues of the same multiplicity m.

Proof  We prove 1\Rightarrow 2: let \tilde{S}(x)=S(x)C be the generalised Shearing-transformation with S(x)=\mathrm{diag}(x^{\alpha_1/q},\ldots,x^{\alpha_n/q}). Since \tilde{S}(e^{2\pi i}x)=\tilde{S}(x)\tilde{P} with \tilde{P}=C^{-1}\mathrm{diag}(\omega^{\alpha_1},\ldots,\omega^{\alpha_n})C, we find with A(e^{2\pi i}x)=A(x):

 \tilde{P}\hat{A}(e^{2\pi i}x)=\tilde{P}\tilde{S}^{-1}(e^{2\pi i}x)A(e^{2\pi i}x)\tilde{S}(e^{2\pi i}x)-\tilde{P}x\tilde{S}^{-1}(e^{2\pi i}x)\tilde{S}'(e^{2\pi i}x)=\hat{A}(x)\tilde{P}

showing that \hat{A}(x) is (\omega,P)-commutative where P=\tilde{P} satisfies the stated properties. The claimed symmetry in the spectrum of \hat{A}_p can be shown as in the proofs of [1, Lemma 5, Section 3.3] and [6, Theorem 5], since we have \hat{A}_pP=\omega^{p}P\hat{A}_p and \omega^{p} is a primitive qth root of unity.

In order to prove the converse direction, we first assume that P=\mathrm{diag}(\omega^{\alpha_1},\ldots,\omega^{\alpha_n}) with \alpha_j\in\{0,\ldots,q-1\} and define the Shearing-transformation

 S(x)=\mathrm{diag}(x^{-\alpha_1/q},x^{-\alpha_2/q},\ldots,x^{-\alpha_n/q}).

We observe that S(e^{2\pi i}x)=S(x)P^{-1}. Transform the given system using this transformation and denote the coefficient matrix of the transformed system by B(x)=S[\hat{A}]. We compute

 B(e^{2\pi i}x)=S^{-1}(e^{2\pi i}x)\hat{A}(e^{2\pi i}x)S(e^{2\pi i}x)-xS^{-1}(e^{2\pi i}x)S'(e^{2\pi i}x)
 =S^{-1}(x)P\hat{A}(e^{2\pi i}x)P^{-1}S(x)-xS^{-1}(x)S'(x)
 =B(x)

showing that B(x) is (\omega,I)-commutative and hence (Lemma 3.2) must be an unramified formal meromorphic power series matrix. The case of a general matrix P follows by first applying a constant similarity transformation which diagonalises P.

We can now give the proof of our main theorem.

Proof of Theorem 1.1  Let C be a constant invertible matrix such that \hat{B}(x)=C^{-1}\hat{A}(x)C has a leading matrix \hat{B}_p as in the assumptions of the theorem. In a similar way as in the proof of Lemma 3.3, we can see that \tilde{P}=C^{-1}PC is block-diagonal, matching the block structure of \hat{B}_p. Using this and Proposition 4.1, we obtain that \hat{B}(x) is (\omega,\tilde{P})-commutative where \tilde{P} is block-diagonal with matching block structure. Note that, p and q being relatively prime, the spectral condition of the theorem is equivalent to \omega_p\text{-spec}(\hat{B}_p^{11})\cap\omega_p\text{-spec}(\hat{B}_p^{22})=\emptyset. We can therefore apply Lemma 3.4 to \hat{B}(x) in order to obtain an (\omega,\tilde{P})-commutative transformation matrix \hat{T}(x) such that \hat{T}[\hat{B}] is (\omega,\tilde{P})-commutative and block-diagonal with matching block structure.

We claim that the transformation matrix

 H(x)=S(x)C\hat{T}(x)C^{-1}S^{-1}(x)

is root-free and satisfies the desired properties of the theorem. It is clear that B=H[A] is block-diagonal. Moreover, one verifies that H(e^{2\pi i}x)=H(x), i.e. H(x) is (\omega,I)-commutative, and hence (Lemma 3.2) H(x) is root-free. The remaining properties follow immediately.

## 5 Application to the Formal Reduction

Consider the situation where the system (1) is q-meromorphically equivalent to a system as in (7) whose leading matrix \hat{A}_p has several eigenvalues but is not invertible. The exponential matrix polynomial W(x) in a formal fundamental matrix solution (5) is then

 W(x)=\mathrm{diag}(w_1(x),w_2(x),\ldots,w_n(x))

where the leading terms of the diagonal entries of the form

 w_k(x)=\lambda_kx^{-r+p/q}+\cdots\qquad(\lambda_k\neq 0)

are given by the nonzero eigenvalues of \hat{A}_p. The diagonal entries having valuation greater than -r+p/q correspond to the eigenvalue zero. In particular, these entries might involve ramifications different from q, or no ramification at all. Algorithms using the classical Splitting Lemma will not be able to compute these entries without first introducing the ramification q.

In order to see how we can remedy this situation, we use the fact that \hat{A}_p is similar to a matrix of the form

 \begin{pmatrix}\hat{B}_p^{11}&0\\0&\hat{B}_p^{22}\end{pmatrix}

where \hat{B}_p^{11} is invertible and \hat{B}_p^{22} is nilpotent. The spectral condition of Theorem 1.1 holds since all eigenvalues of \hat{B}_p^{11} are nonzero while \hat{B}_p^{22} has only the eigenvalue zero. Hence the conditions of Theorem 1.1 are satisfied and we will obtain a root-free transformation which splits the system into

 x\frac{dy}{dx}=\begin{pmatrix}B_{11}&0\\0&B_{22}\end{pmatrix}y. (20)

This makes it possible to work independently on the two matrices B_{11} and B_{22}: for the first matrix we can use a Shearing-transformation introducing the (necessary) ramification q. For the second matrix, however, we now recursively apply the formal reduction process.

In order to illustrate this approach, consider the following example with n=5 and

 x^{-1}A(x)=\begin{pmatrix}0&x^{-3}&-x^{-1}&1&2x^{-1}\\-x^{-2}&x^{-1}&0&-x^{-1}&0\\x^{-1}&1&0&x^{-3}&1\\1&-x^{-1}&1&x^{-1}&x^{-3}\\x^{-1}&0&-3x^{-1}&0&-1\end{pmatrix}.

The Shearing-transformation S(x)=\mathrm{diag}(S^{11}(x),S^{22}(x)) with

 S^{11}(x)=\begin{pmatrix}1&0\\0&\sqrt{x}\end{pmatrix},\quad S^{22}(x)=\begin{pmatrix}1&0&0\\0&\sqrt{x}&0\\0&0&x\end{pmatrix}

transforms the system into a system of the form (7) with ramification index q=2 and block-diagonal leading matrix \hat{B}_1 with the two blocks

 \hat{B}_1^{11}=\begin{pmatrix}0&1\\-1&0\end{pmatrix},\quad\hat{B}_1^{22}=\begin{pmatrix}0&1&0\\0&0&1\\0&0&0\end{pmatrix}

and the condition of Theorem 1.1 is satisfied since the first block matrix is invertible and the second is nilpotent. We obtain a root-free transformation which is of the form

 H(x)=\begin{pmatrix}I&U(x)\\V(x)&I\end{pmatrix},

with (we have only computed the first couple of terms)

 U(x)=\begin{pmatrix}-6x^{2}+49x^{3}&-2x+18x^{2}&6x-48x^{2}\\x^{2}-18x^{3}&-6x^{2}+46x^{3}&-2x+16x^{2}\end{pmatrix}

and

 V(x)=\begin{pmatrix}3x-29x^{2}+290x^{3}&1-9x+89x^{2}\\-x+8x^{2}-88x^{3}&3x-28x^{2}+274x^{3}\\-3x^{2}+28x^{3}-280x^{4}&-x+9x^{2}-88x^{3}\end{pmatrix}.

Applying this transformation to the original system yields the following block-diagonal matrix (empty entries are zero):

 B(x)=\begin{pmatrix}0&x^{-3}-x^{-1}&&&\\-x^{-2}&x^{-1}&&&\\&&0&x^{-3}&0\\&&0&x^{-1}&x^{-3}\\&&-3x^{-1}&0&0\end{pmatrix}+O(1).

Applying the Shearing-transformation S^{11}(x) to the first block matrix will result in a ramified system of smaller size with invertible leading matrix equalling the first block of \hat{B}_1. The formal reduction can now be applied to the second block. In this example, it is found that another Shearing-transformation, of a different ramification index, results in a system with invertible leading matrix. This decomposition can be interpreted as a separation of the different slopes of the Newton polygon, see Theorem 5.1 below.
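As a quick numerical sanity check of the condition of Theorem 1.1 for this example (the two leading blocks above): the first block has the nonzero eigenvalues ±i and the second is nilpotent, so no root-of-unity multiple of an eigenvalue of one block meets the spectrum of the other.

```python
# Sanity check of the spectral condition for the two leading blocks above:
# the first block is invertible with eigenvalues ±i, the second is nilpotent.
import numpy as np

B11 = np.array([[0.0, 1.0], [-1.0, 0.0]])
B22 = np.array([[0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [0.0, 0.0, 0.0]])

ev = np.linalg.eigvals(B11)
assert np.allclose(np.sort_complex(ev), [-1j, 1j])                 # invertible
assert np.allclose(np.linalg.matrix_power(B22, 3), np.zeros((3, 3)))  # nilpotent
```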

We conclude that, if the algorithm employed for computing the transformation (4) keeps the introduced ramification minimal (as, for example, the algorithm in [1]), using the root-free Splitting Lemma allows the recursive computation of W(x) using minimal ramifications. This approach leads to a root-free transformation taking a system of the form (1) into a system from which all the leading terms of the matrix W(x) (or, alternatively, the Newton polygon) can be determined directly. We state this as

###### Theorem 5.1.

The given system (1) is meromorphically equivalent to a system whose coefficient matrix B(x) is block-diagonal,

 B(x)=\mathrm{diag}(B^{(1)}(x),B^{(2)}(x),\ldots,B^{(\nu)}(x))

and there exists a diagonal transformation with blocks of the same sizes,

 S(x)=\mathrm{diag}(S^{(1)}(x),S^{(2)}(x),\ldots,S^{(\nu)}(x))

with Shearing-transformations S^{(k)}(x) of ramification index q_k (k=1,\ldots,\nu), such that each of the matrices S^{(k)}[B^{(k)}] has either no pole at x=0 or an invertible leading matrix. In the latter case, let r_k be the pole order and \hat{A}^{(k)} the leading matrix of S^{(k)}[B^{(k)}]. Then, with \omega=e^{2\pi i/q_k}, if \lambda_k is an eigenvalue of \hat{A}^{(k)} with multiplicity m, the eigenvalues \omega\lambda_k,\omega^{2}\lambda_k,\ldots,\omega^{q_k-1}\lambda_k are all of the same multiplicity m. There are diagonal entries in the matrix W(x) of the form

 w_{k,j}(x)=\omega^{j}\lambda_kx^{-r_k}+\cdots\qquad(j=0,\ldots,q_k-1)

where the dots denote terms with higher powers of x. The Newton polygon of the system corresponding to this block admits a single slope of length