# Decidability of the Mortality Problem: from multiplicative matrix equations to linear recurrence sequences and beyond

We consider the following variant of the Mortality Problem: given k×k matrices A_1, A_2, ..., A_t, do there exist nonnegative integers m_1, m_2, ..., m_t such that the product A_1^m_1 A_2^m_2 ⋯ A_t^m_t is equal to the zero matrix? It is known that this problem is decidable when t ≤ 2 for matrices over algebraic numbers but becomes undecidable for sufficiently large t and k even for integral matrices. In this paper, we prove the first decidability results for t > 2. We show as one of our central results that for t = 3 this problem in any dimension is Turing equivalent to the well-known Skolem problem for linear recurrence sequences. This implies that it is decidable for t = 3 and k ≤ 3 for matrices over algebraic numbers and for t = 3 and k = 4 for matrices over real algebraic numbers. Another corollary is that the set of triples (m_1, m_2, m_3) for which the equation A_1^m_1 A_2^m_2 A_3^m_3 equals the zero matrix is equal to a finite union of direct products of semilinear sets. For t = 4 we show that the solution set can be non-semilinear, and thus it seems unlikely that there is a direct connection to the Skolem problem. However, we prove that the problem is still decidable for upper-triangular 2×2 rational matrices by employing powerful tools from transcendence theory such as Baker's theorem and S-unit equations.


## 1 Introduction

A large number of naturally defined matrix problems are still unanswered, despite the long history of matrix theory. Some of these questions have recently drawn renewed interest in the context of the analysis of digital processes, verification problems, and links with several fundamental questions in mathematics [11, 7, 37, 39, 38, 35, 17, 13, 14, 36, 6, 43, 26].

One of these challenging problems is the Mortality Problem of whether the zero matrix belongs to a finitely generated matrix semigroup. It plays a central role in many questions from control theory and software verification [46, 10, 8, 36, 2]. The mortality problem has been known to be undecidable for 3×3 integer matrices since 1970 [41]; for the current undecidability bounds (i.e. bounds on the number and the size of the matrices generating the semigroup), see [12]. It is also known that the problem is NP-hard for integer matrices [5] and is decidable for 2×2 integer matrices with determinant 0 or ±1 [34]. In the case of finite matrix semigroups of any dimension the mortality problem is known to be PSPACE-complete [25].

In this paper, we study a very natural variant of the mortality problem in which the matrices must appear in a fixed order (i.e. under a bounded language constraint): Given k×k matrices A_1, ..., A_t over a ring R, do there exist m_1, ..., m_t ∈ ℕ such that A_1^m_1 A_2^m_2 ⋯ A_t^m_t = O_{k,k}, where O_{k,k} is the k×k zero matrix?

In general (i.e. replacing O_{k,k} by other matrices) this problem is known as the solvability of multiplicative matrix equations and has been studied for many decades. In its simplest form, when t = 1, the problem was studied by Harrison in 1969 [21] as a reformulation of the "accessibility problem" for linear sequential machines. This case was solved in polynomial time in a celebrated paper by Kannan and Lipton in 1980 [24]. The case t = 2 with commuting matrices was solved by Cai, Lipton and Zalcstein in 1994 [47]. Later, in 1996, the solvability of multiplicative matrix equations over commuting matrices was solved in polynomial time in [1], and in 2010 it was shown in [4] that the case t = 2 is decidable for non-commuting matrices of any dimension with algebraic coefficients by a reduction to the commutative case from [1]. However, it was also shown in [4] that the solvability of multiplicative matrix equations for sufficiently large t and k is in general undecidable, by an encoding of Hilbert's tenth problem, and in particular this holds for the mortality problem with bounded language constraint. In 2015 it was also shown that the undecidability result holds for such equations with unitriangular matrices [31] and also in the case of specific equations with nonnegative matrices [23].

The decidability of matrix equations for non-commuting matrices is only known as a corollary of either recent decidability results for the membership problem in matrix semigroups [43, 44] or in the case of quite restricted classes of matrices, e.g. matrices from the Heisenberg group [26, 27] or row-monomial matrices over commutative semigroups [30]. In the other direction, progress has been made for matrix-exponential equations, but again in the case of commuting matrices [36].

In this paper, we prove the first decidability results for the above problem when t = 3 and t = 4. We will call these problems the ABC and ABCD problems, respectively.¹ More precisely, we will show that the ABC problem in any dimension is Turing equivalent to the Skolem problem (also known as the Skolem-Pisot problem), which asks whether a given linear recurrence sequence ever reaches zero. As a corollary, we obtain that the ABC problem is decidable for k ≤ 3 matrices over algebraic numbers and also for k = 4 matrices over real algebraic numbers. Another consequence of the above equivalence is that the set of triples (m_1, m_2, m_3) that satisfy the equation A_1^m_1 A_2^m_2 A_3^m_3 = O_{k,k} can be expressed as a finite union of direct products of semilinear sets.

¹ A related but distinct problem, also named the ABC problem, was introduced in [4]: given three matrices A, B, C, decide whether there exist x, y, z such that A^x B^y = C^z.

In contrast to the ABC problem, we show that the solution set of the ABCD problem can be non-semilinear. This indicates that the ABCD problem is unlikely to be related to the Skolem problem. However, we will show that the ABCD problem is decidable for upper-triangular 2×2 rational matrices. The proof of this result relies on powerful tools from transcendence theory such as Baker's theorem for linear forms in logarithms, S-unit equations from algebraic number theory and the Frobenius rank inequality from matrix analysis. More precisely, we will reduce the ABCD equation for upper-triangular rational matrices to an equation of the form ax + by = cz, where x, y, z are S-units, and then use an upper bound on the solutions of this equation (as in Theorem 2). On the other hand, if we try to generalize this result to arbitrary rational matrices or to upper-triangular matrices of higher dimension, then we end up with an equation that contains a sum of four or more S-units, and for such equations no effective upper bounds on their solutions are known. So, these generalizations seem to lie beyond the reach of current mathematical knowledge.

## 2 Preliminaries

We denote by ℕ, ℤ, ℚ, and ℂ the sets of natural, integer, rational and complex numbers, respectively. Further, we denote by 𝔸 the set of algebraic numbers and by 𝔸∩ℝ the set of real algebraic numbers.

For a prime number p we define a valuation v_p(x) for nonzero x ∈ ℚ as follows: if x = p^k · (m/n), where k, m, n ∈ ℤ and p does not divide m or n, then v_p(x) = k.
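As a quick illustration (the function name `vp` and the use of Python's `Fraction` type are our own, not from the paper), the valuation can be computed by stripping factors of p from the numerator and denominator:

```python
from fractions import Fraction

def vp(x: Fraction, p: int) -> int:
    """p-adic valuation: the exponent k with x = p^k * (m/n), where p divides neither m nor n."""
    if x == 0:
        raise ValueError("v_p is undefined for 0")
    k, num, den = 0, abs(x.numerator), x.denominator
    while num % p == 0:   # factors of p in the numerator add to k
        num //= p
        k += 1
    while den % p == 0:   # factors of p in the denominator subtract from k
        den //= p
        k -= 1
    return k

print(vp(Fraction(12), 2), vp(Fraction(3, 8), 2), vp(Fraction(5), 3))  # 2 -3 0
```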

Throughout this paper R will denote either the ring of integers ℤ or one of the fields ℚ, 𝔸, or 𝔸∩ℝ. We will use the notation R^{k×t} for the set of k×t matrices over R.

We denote by e_i the i'th standard basis vector of some dimension (which will be clear from the context). Let O_{k,t} be the zero matrix of size k×t, I_k be the identity matrix of size k, and 0_k be the zero column vector of length k. Given a finite set of matrices F, we denote by ⟨F⟩ the multiplicative semigroup generated by F.

If A ∈ R^{s×s} and B ∈ R^{t×t}, then we define their direct sum as A ⊕ B = [ A O_{s,t} ; O_{t,s} B ]. Let A be a square matrix. We write det(A) for the determinant of A. We call A singular if det(A) = 0; otherwise it is said to be invertible (or non-singular). Matrices A and B from R^{k×k} are called similar if there exists an invertible matrix S (perhaps over a larger field containing R) such that B = S^{-1}AS. In this case, S is said to be a similarity matrix transforming A to B.

We will also require the following inequality regarding ranks of matrices, known as the Frobenius rank inequality.

[Frobenius Rank Inequality] Let A, B, C be matrices over a field for which the product ABC is defined. Then

 Rk(AB) + Rk(BC) ≤ Rk(ABC) + Rk(B).
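The inequality is easy to test numerically; the following sketch (our own, using numpy's SVD-based rank) checks it on random integer matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
rk = np.linalg.matrix_rank

# Frobenius rank inequality Rk(AB) + Rk(BC) <= Rk(ABC) + Rk(B) on random 4x4 matrices
for _ in range(200):
    A, B, C = (rng.integers(-2, 3, (4, 4)) for _ in range(3))
    assert rk(A @ B) + rk(B @ C) <= rk(A @ B @ C) + rk(B)
print("Frobenius rank inequality verified on 200 random triples")
```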

In the proof of our first main result about the ABC problem we will make use of the primary decomposition theorem for matrices.

[Primary Decomposition Theorem [22]] Let A be a matrix from F^{n×n}, where F is a field. Let m_A(x) be the minimal polynomial for A such that

 m_A(x) = p_1(x)^{r_1} ⋯ p_k(x)^{r_k},

where the p_i(x) are distinct irreducible monic polynomials over F and the r_i are positive integers. Let W_i be the null space of p_i(A)^{r_i} and let B_i be a basis for W_i. Then

1. B_1 ∪ ⋯ ∪ B_k is a basis for F^n and F^n = W_1 ⊕ ⋯ ⊕ W_k,

2. each W_i is invariant under A, that is, Av ∈ W_i for any v ∈ W_i,

3. let S be a matrix whose columns are equal to the basis vectors from B_1, …, B_k; then

 S^{-1} A S = A_1 ⊕ ⋯ ⊕ A_k,

where each A_i is a matrix over F of the size dim(W_i) × dim(W_i), and the minimal polynomial of A_i is equal to p_i(x)^{r_i}.

The next two propositions are well-known results, but we include their proofs for completeness.

If p(x) is a polynomial over a field F, where F is either ℚ, 𝔸 or 𝔸∩ℝ, then the primary decomposition of p(x) can be algorithmically computed.

###### Proof.

If F = ℚ, then one can use the LLL algorithm [29] to find the primary decomposition in polynomial time.

If F = 𝔸, then one can use well-known algorithms to compute standard representations of the roots of p(x) in polynomial time [3, 15, 40, 42]. Let λ_1, …, λ_k be the distinct roots of p(x) with multiplicities m_1, …, m_k, respectively. In this case the primary decomposition of p(x) is equal to

 p(x) = (x − λ_1)^{m_1} ⋯ (x − λ_k)^{m_k}.

If F = 𝔸∩ℝ, then again one can compute in polynomial time standard representations of the roots of p(x) in 𝔸. Let λ_1, …, λ_i be the real roots of p(x) with multiplicities m_1, …, m_i, and let (μ_1, μ̄_1), …, (μ_j, μ̄_j) be the pairs of complex conjugate roots of p(x) with multiplicities n_1, …, n_j, respectively. Then the primary decomposition of p(x) over 𝔸∩ℝ is equal to

 p(x) = (x − λ_1)^{m_1} ⋯ (x − λ_i)^{m_i} p_1(x)^{n_1} ⋯ p_j(x)^{n_j},

where p_l(x) = (x − μ_l)(x − μ̄_l) for l = 1, …, j. ∎

Let A ∈ F^{n×n} and let m_A(x) be the minimal polynomial of A. Then A is invertible if and only if m_A(x) has a nonzero free coefficient, i.e., m_A(x) is not divisible by x.

###### Proof.

Suppose that A is invertible but m_A(x) = x·m′(x) for some polynomial m′(x). Then

 O_{n,n} = m_A(A) = A·m′(A).

Multiplying the above equation by A^{-1} we obtain m′(A) = O_{n,n}, which contradicts the assumption that m_A(x) is the minimal polynomial for A.

On the other hand, if x does not divide m_A(x), then m_A(x) = x·m′(x) + a for some polynomial m′(x) and a nonzero constant a. Then

 O_{n,n} = m_A(A) = A·m′(A) + aI_n.

From this equation we conclude that A is invertible, and A^{-1} = −(1/a)·m′(A). ∎
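To illustrate the second direction on a concrete matrix (our own example, not from the paper): A = [[2, 1], [0, 3]] has minimal polynomial x² − 5x + 6 = x·(x − 5) + 6, so a = 6 and the proof yields the inverse explicitly:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
# m_A(x) = x^2 - 5x + 6 = x * m'(x) + a, with m'(x) = x - 5 and free coefficient a = 6
a = 6.0
m_prime_at_A = A - 5.0 * np.eye(2)
A_inv = -(1.0 / a) * m_prime_at_A     # A^{-1} = -(1/a) m'(A), as in the proof
print(np.allclose(A @ A_inv, np.eye(2)))  # True
```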

Our proof of the decidability of the ABCD problem for upper-triangular rational matrices relies on the following result, which is proved using Baker's theorem on linear forms in logarithms [18, 16].

Let P = {q_1, …, q_s} be a finite collection of prime numbers and let a, b, c be relatively prime nonzero integers.

If x, y, z are relatively prime nonzero integers composed of primes from P which satisfy the equation ax + by = cz, then

 max{|x|, |y|, |z|} ≤ C,

where C is an effectively computable constant depending only on a, b, c and the primes in P.
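For a concrete feel for this finiteness statement, one can enumerate small solutions by brute force. The sketch below (our own code, taking a = b = c = 1) finds all coprime solutions of x + y = z among {2, 3}-units up to 1000:

```python
from math import gcd

def s_units(primes, bound):
    """All positive integers <= bound composed only of the given primes (including 1)."""
    us = [1]
    for p in primes:
        grown = []
        for u in us:
            while u <= bound:
                grown.append(u)
                u *= p
        us = grown
    return sorted(us)

U = s_units((2, 3), 1000)
sols = sorted((x, y, x + y) for x in U for y in U
              if x <= y and gcd(x, y) == 1 and (x + y) in set(U))
print(sols)  # [(1, 1, 2), (1, 2, 3), (1, 3, 4), (1, 8, 9)]
```

Within this range the solution set is already exhausted, in line with the effective bound of the theorem.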

## 3 Linear recurrence sequences and semilinear sets

There is a long history in computer science and mathematics of studying sequences of numbers defined by some recurrence relation, where the next value in the sequence depends upon some 'finite memory' of previous values in the sequence. Possibly the simplest, and certainly the most well known of these, is the Fibonacci sequence, which may be defined by the recurrence u_n = u_{n−1} + u_{n−2}, with the first two values given as the initial conditions of the sequence. We may generalise the Fibonacci sequence to define linear recurrence sequences, which find application in many areas of mathematics and other sciences and for which many questions remain open. Let R be a ring; a sequence (u_n)_{n≥0} over R is called a linear recurrence sequence (1-LRS) if it satisfies a relation of the form:

 u_n = a_{k−1}u_{n−1} + ⋯ + a_1 u_{n−k+1} + a_0 u_{n−k},

for any n ≥ k, where each a_i ∈ R is a fixed coefficient.² Such a sequence is said to be of depth k if it satisfies no shorter linear recurrence relation. We call the first k values u_0, …, u_{k−1} the initial conditions of the 1-LRS. Given the initial conditions and coefficients of a 1-LRS, every element is uniquely determined.

² In the literature, such a sequence is ordinarily called an LRS; we use the nomenclature 1-LRS since we will study a multidimensional variant of this concept. Also, 1-LRS are usually considered over integers, but in the present paper we will consider such sequences over algebraic numbers.

The zero set of a 1-LRS (u_n) is defined as follows:

 Z(u_n) = { j ∈ ℕ | u_j = 0 }.

There are various questions that one may ask regarding Z(u_n). One notable example relates to the famous "Skolem's problem" which is stated in the following way:

###### Problem (Skolem’s Problem).

Given the coefficients and initial conditions of a depth-k 1-LRS (u_n), determine if Z(u_n) is the empty set.

Skolem's problem has a long and rich history; see [19] for a good survey. We note here that the problem remains open despite properties of zero sets having been studied ever since 1934 [45]. It is known that the Skolem problem is at least NP-hard [9] and that it is decidable for depth ≤ 3 over 𝔸 and for depth 4 over 𝔸∩ℝ, see [46] and [33].³ Other interesting questions are related to the structure of Z(u_n). A seminal result regarding 1-LRS is that their zero sets are necessarily semilinear [32, 45, 28] (a set is called semilinear if it is the union of a finite set and finitely many arithmetic progressions).

³ A proof of decidability for depth 5 was claimed in [19], although there is possibly a gap in the proof [37].
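A small sketch (our own helper `lrs`, not from the paper) illustrates both the definition and the semilinear shape of zero sets: the depth-2 sequence u_n = −u_{n−2} with initial conditions 0, 1 vanishes exactly on the even indices, an arithmetic progression:

```python
def lrs(coeffs, init, n_terms):
    """Terms of a 1-LRS: u_n = coeffs[k-1]*u_{n-1} + ... + coeffs[0]*u_{n-k}."""
    u, k = list(init), len(init)
    while len(u) < n_terms:
        u.append(sum(a * x for a, x in zip(coeffs, u[-k:])))
    return u

fib = lrs([1, 1], [1, 1], 10)    # Fibonacci: a_0 = a_1 = 1
alt = lrs([-1, 0], [0, 1], 12)   # u_n = -u_{n-2}: 0, 1, 0, -1, 0, 1, ...
zeros = [j for j, u in enumerate(alt) if u == 0]
print(fib)    # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print(zeros)  # [0, 2, 4, 6, 8, 10]
```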

Linear recurrence sequences can also be represented using matrices [19]:

Let R be a ring; for a sequence (u_n) over R the following are equivalent:

1. (u_n) is a 1-LRS of depth at most k.

2. There are vectors u, v ∈ R^k and a matrix M ∈ R^{k×k} such that u_n = u^T M^n v for all n ≥ 0.

Moreover, for any matrix M ∈ R^{k×k}, the sequence u_n = u^T M^n v is a 1-LRS of depth at most k. On the other hand, if (u_n) is a 1-LRS of depth k, then there is a matrix M ∈ R^{(k+1)×(k+1)} such that u_n = (M^n)[1, k+1] for all n.
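For instance (our own exact-arithmetic sketch), with the companion matrix M of x² − x − 1 the entry (M^n)[1, 2] runs through the Fibonacci numbers, matching the lemma's matrix representation of a 1-LRS:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)] for i in range(n)]

def mat_pow(M, n):
    d = len(M)
    R = [[int(i == j) for j in range(d)] for i in range(d)]   # identity
    for _ in range(n):
        R = mat_mul(R, M)
    return R

M = [[1, 1],
     [1, 0]]   # companion matrix of x^2 - x - 1
seq = [mat_pow(M, n)[0][1] for n in range(10)]   # entry (1, 2) of M^n
print(seq)   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```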

Lemma 3 motivates the following definition of n-dimensional linear recurrence sequences (n-LRSs), which, as we show later, are related to the mortality problem for bounded languages.

[n-LRS] A multidimensional sequence (u_{m_1, …, m_n}) is called an n-LRS of depth k over R if there exist two vectors u, v ∈ R^k and matrices M_1, …, M_n ∈ R^{k×k} such that

 u_{m_1, m_2, …, m_n} = u^T M_1^{m_1} M_2^{m_2} ⋯ M_n^{m_n} v.

Note that in Definition 3, one could equivalently say that a sequence is an n-LRS if there exist matrices M_1, …, M_n ∈ R^{(k+1)×(k+1)} such that

 u_{m_1, m_2, …, m_n} = (M_1^{m_1} M_2^{m_2} ⋯ M_n^{m_n})[1, k+1],

with a similar proof as in Lemma 3.

We remind the reader of the definition of semilinear sets.

[Semilinear set] A set S ⊆ ℕ is called semilinear if it is the union of a finite set and finitely many arithmetic progressions.

A seminal result regarding a 1-LRS is that its zero set is semilinear.

[Skolem, Mahler, Lech [32, 45, 28] and [19, 20]] The zero set of a 1-LRS over ℂ (or more generally over any field of characteristic 0) is semilinear.

In particular, if (u_n) is a 1-LRS whose coefficients and initial conditions are algebraic numbers, then one can algorithmically find a number L such that for every i = 0, …, L−1, if we let u^i_m = u_{i+mL}, then

1. the sequence (u^i_m) is a 1-LRS of the same depth as (u_n), and

2. either Z(u^i_m) = ℕ or Z(u^i_m) is finite.

Note that in the above theorem we can decide whether Z(u^i_m) is finite or equal to ℕ because Z(u^i_m) = ℕ if and only if u^i_0 = ⋯ = u^i_{k−1} = 0, where k is the depth of (u^i_m).

We will also consider a stronger version of the Skolem problem.

###### Problem (Strong Skolem’s Problem).

Given the coefficients and initial conditions of a 1-LRS (u_n) over 𝔸, find a description of the set Z(u_n). That is, find a finite set F such that Z(u_n) = F if Z(u_n) is finite or, if Z(u_n) is infinite, find a finite set F, a constant L and numbers i_1, …, i_t such that

 Z(u_n) = F ∪ {i_1 + mL : m ∈ ℕ} ∪ ⋯ ∪ {i_t + mL : m ∈ ℕ}.

Using the Skolem-Mahler-Lech theorem we can prove an equivalence between the strong version of the Skolem problem and the standard version.

Skolem's problem of depth k over 𝔸 is equivalent to the strong Skolem's problem of the same depth.

###### Proof.

Obviously, Skolem’s problem is reducible to the strong Skolem’s problem. We now show a reduction in the other direction.

Let (u_n) be a depth-k 1-LRS over 𝔸. By Theorem 3, we can algorithmically find a number L such that, for every i = 0, …, L−1, the sequence u^i_m = u_{i+mL} is a 1-LRS of depth k which is either everywhere zero, that is, Z(u^i_m) = ℕ, or Z(u^i_m) is finite. Recall that we can decide whether Z(u^i_m) is equal to ℕ by considering the first k terms of (u^i_m).

By definition, we have

 Z(u_n) = ⋃_{i=0}^{L−1} ( i + L·Z(u^i_m) ).

So, if Z(u^i_m) = ℕ for some i, then Z(u_n) is infinite, and if every Z(u^i_m) is finite, then so is Z(u_n).

To finish the proof we need to show how to compute Z(u^i_m), and hence Z(u_n), when it is finite. For this we will use an oracle for the Skolem problem. Let N be the smallest number such that Z(u^i_{m+N}) is empty. Such an N exists because Z(u^i_m) is finite. Furthermore, (u^i_{m+N})_{m≥0} is a 1-LRS of depth k for any N. So, we ask the oracle for the Skolem problem to decide whether Z(u^i_{m+N}) = ∅ for each N starting from N = 0 until we find one for which Z(u^i_{m+N}) is empty. Note that we do not have any a priori bound on N because we do not even know the size of Z(u^i_m). All we know is that Z(u^i_m) is finite, and hence the above algorithm will eventually terminate. Since Z(u^i_m) is a subset of {0, …, N−1}, we can compute it by checking whether u^i_m = 0 for m = 0, …, N−1. ∎
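The termination argument above can be sketched in code as follows (our own sketch; the bounded-search `toy_oracle` is only a stand-in for a genuine Skolem decision procedure, and the stand-in sequence u is a polynomial in n, which does satisfy a linear recurrence):

```python
def zero_set_when_finite(u, skolem_oracle):
    """Compute the (finite) zero set of u using an oracle that decides
    whether a given sequence hits zero at some index n >= 0."""
    N = 0
    # find the smallest N such that the shifted sequence n -> u(n + N) never vanishes
    while skolem_oracle(lambda n, N=N: u(n + N)):
        N += 1
    return [n for n in range(N) if u(n) == 0]

u = lambda n: (n - 2) * (n - 5)                            # zero set {2, 5}
toy_oracle = lambda f: any(f(n) == 0 for n in range(100))  # demo stub, NOT a real oracle
print(zero_set_when_finite(u, toy_oracle))                 # [2, 5]
```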

## 4 The mortality problem for bounded languages

We remind the reader of the definition of the mortality problem for bounded languages.

###### Problem (Mortality for bounded languages).

Given k×k matrices A_1, …, A_t over a ring R, do there exist m_1, …, m_t ∈ ℕ such that

 A_1^m_1 A_2^m_2 ⋯ A_t^m_t = O_{k,k}?
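While the point of the results below is a genuine decision procedure, the positive side of the problem is easy to semi-decide. This sketch (our own `abc_mortal`, an exhaustive search over a finite range and therefore incomplete) looks for a mortal triple of exponents in the t = 3 case:

```python
import numpy as np
from itertools import product

def abc_mortal(A, B, C, bound):
    """Search for (m, n, l) with 0 <= m, n, l < bound and A^m B^n C^l = 0.
    A bounded search: finding nothing proves nothing."""
    mp = np.linalg.matrix_power
    for m, n, l in product(range(bound), repeat=3):
        if not (mp(A, m) @ mp(B, n) @ mp(C, l)).any():   # product is the zero matrix
            return (m, n, l)
    return None

# Neither A nor C is nilpotent, yet A C = 0: A projects onto e1, C onto e2.
A = np.array([[1, 0], [0, 0]])
B = np.eye(2, dtype=int)
C = np.array([[0, 0], [0, 1]])
print(abc_mortal(A, B, C, 3))   # (1, 0, 1)
```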

Recall that for t = 3 and t = 4 this problem is called the ABC and ABCD problem, respectively. Our first main result is that the ABC problem is computationally equivalent to the Skolem problem for 1-LRS. Our reduction holds in any dimension and over the same number field, which means that any new decidability results for the Skolem problem will automatically extend the decidability of ABC equations; in particular, the known decidability results for the Skolem problem immediately yield decidability of ABC equations in dimensions 2, 3 and 4. For the proof we will need the following technical lemma.

Let F be a field, and suppose A, B, C ∈ F^{k×k} are matrices of the form

 A = [ A_{s,s} O_{s,k−s} ; O_{k−s,s} O_{k−s,k−s} ],  B = [ B_{s,t} X_{s,k−t} ; Y_{k−s,t} Z_{k−s,k−t} ],  C = [ C_{t,t} O_{t,k−t} ; O_{k−t,t} O_{k−t,k−t} ]

for some s, t ≤ k, where A_{s,s}, B_{s,t}, C_{t,t}, X_{s,k−t}, Y_{k−s,t} and Z_{k−s,k−t} are matrices over F whose dimensions are indicated by their subscripts (in particular, A_{s,s} is of size s×s and C_{t,t} is of size t×t). If A_{s,s} and C_{t,t} are invertible matrices, then the equation ABC = O_{k,k} is equivalent to B_{s,t} = O_{s,t}.

###### Proof.

It is not hard to check that

 AB = [ A_{s,s} O_{s,k−s} ; O_{k−s,s} O_{k−s,k−s} ] ⋅ [ B_{s,t} X_{s,k−t} ; Y_{k−s,t} Z_{k−s,k−t} ] = [ A_{s,s}B_{s,t} A_{s,s}X_{s,k−t} ; O_{k−s,t} O_{k−s,k−t} ],

and hence

 ABC = [ A_{s,s}B_{s,t}C_{t,t} O_{s,k−t} ; O_{k−s,t} O_{k−s,k−t} ].

So, if B_{s,t} = O_{s,t}, then ABC = O_{k,k}. Conversely, if ABC = O_{k,k}, then A_{s,s}B_{s,t}C_{t,t} = O_{s,t}. Using the fact that A_{s,s} and C_{t,t} are invertible matrices, we can multiply the equation by A_{s,s}^{−1} on the left and by C_{t,t}^{−1} on the right to obtain that B_{s,t} = O_{s,t}. ∎
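A numerical sanity check of the lemma (our own, with k = 5, s = 2, t = 3): once the top-left s×t block of B is zeroed, the full product vanishes regardless of the other blocks of B:

```python
import numpy as np

rng = np.random.default_rng(1)
k, s, t = 5, 2, 3

A = np.zeros((k, k)); A[:s, :s] = [[2.0, 1.0], [1.0, 1.0]]            # invertible A_{s,s}
C = np.zeros((k, k)); C[:t, :t] = np.eye(t) + np.diag([1.0, 1.0], 1)  # invertible C_{t,t}
B = rng.standard_normal((k, k))

assert not np.allclose(A @ B @ C, 0)   # generically nonzero for random B
B[:s, :t] = 0.0                        # force B_{s,t} = O_{s,t}
assert np.allclose(A @ B @ C, 0)       # lemma: now ABC = O_{k,k}
print("block lemma verified")
```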

The next lemma is similar to Lemma 4 and can also be proved by directly multiplying the matrices.

(1) Suppose A, B ∈ F^{k×k} are matrices of the following form

 A = [ A_{s,s} O_{s,k−s} ; O_{k−s,s} O_{k−s,k−s} ] = A_{s,s} ⊕ O_{k−s,k−s}  and  B = [ B_{s,k} ; X_{k−s,k} ],

for some s ≤ k. If A_{s,s} is invertible, then AB = O_{k,k} is equivalent to B_{s,k} = O_{s,k}.

(2) Suppose A, B ∈ F^{k×k} are matrices of the following form

 A = [ A_{k,t} Y_{k,k−t} ]  and  B = [ B_{t,t} O_{t,k−t} ; O_{k−t,t} O_{k−t,k−t} ] = B_{t,t} ⊕ O_{k−t,k−t},

for some t ≤ k. If B_{t,t} is invertible, then AB = O_{k,k} is equivalent to A_{k,t} = O_{k,t}.

As in Lemma 4, in the above equations A_{s,s}, B_{s,k}, A_{k,t}, B_{t,t}, X_{k−s,k} and Y_{k,k−t} are matrices over F whose dimensions are indicated by their subscripts.

Let R be the ring of integers ℤ or one of the fields ℚ, 𝔸 or 𝔸∩ℝ. Then the ABC problem for k×k matrices from R is Turing reducible to the Skolem problem of depth k over R.

###### Proof.

Clearly, the ABC problem over ℚ is equivalent to the ABC problem over ℤ (by multiplying the matrices by a suitable integer number). It is also not hard to see that the Skolem problem for 1-LRS over ℚ is equivalent to the Skolem problem over ℤ for 1-LRS of the same depth. Hence, without loss of generality, we will assume that R is one of the fields ℚ, 𝔸 or 𝔸∩ℝ.

Consider an instance of the ABC problem: A^m B^n C^ℓ = O_{k,k}, where A, B, C ∈ R^{k×k}. Let χ_A(x) be the characteristic polynomial of A. By Proposition 2, we can find a primary decomposition of χ_A(x) into powers of distinct irreducible monic polynomials. From this decomposition we can find the minimal polynomial m_A(x) of A, because m_A(x) is a factor of χ_A(x), and we can check all divisors of χ_A(x) to find it.
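This divisor search is directly implementable; below is a sketch with sympy (our own helper `minimal_polynomial_of`, exact rational arithmetic) that factors the characteristic polynomial and tests its divisors in order of degree:

```python
from itertools import product
from sympy import Matrix, Poly, factor_list, prod, symbols, zeros

x = symbols('x')

def minimal_polynomial_of(A):
    """Find m_A(x) by testing the divisors of the characteristic polynomial,
    smallest degree first, exactly as described above."""
    n = A.shape[0]
    chi = A.charpoly(x).as_expr()
    _, factors = factor_list(chi, x)          # [(p_i(x), r_i), ...]
    divisors = []
    for exps in product(*[range(r + 1) for _, r in factors]):
        d = prod(p**e for (p, _), e in zip(factors, exps))
        divisors.append(Poly(d, x))
    divisors.sort(key=lambda q: q.degree())
    for q in divisors:
        value = zeros(n, n)
        for coeff, deg in zip(q.all_coeffs(), range(q.degree(), -1, -1)):
            value += coeff * A**deg           # evaluate the candidate divisor at A
        if value == zeros(n, n):
            return q

A = Matrix([[2, 1, 0], [0, 2, 0], [0, 0, 3]])
m = minimal_polynomial_of(A)
print(m.as_expr())   # (x - 2)**2 * (x - 3), i.e. x**3 - 7*x**2 + 16*x - 12
```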

Let m_A(x) = p_1(x)^{r_1} ⋯ p_u(x)^{r_u}, where p_1(x), …, p_u(x) are distinct irreducible monic polynomials. Now we apply the Primary Decomposition Theorem (Theorem 2) to A. Let B_i be a basis for the null space of p_i(A)^{r_i}, which can be found, e.g., using Gaussian elimination. Let S be a matrix whose columns are the vectors of the bases B_1, …, B_u. Then

 S^{−1} A S = A_1 ⊕ ⋯ ⊕ A_u,

where the minimal polynomial of A_i is p_i(x)^{r_i} for i = 1, …, u. Similarly, we can compute a primary decomposition m_C(x) = q_1(x)^{d_1} ⋯ q_v(x)^{d_v} of the minimal polynomial for C, where q_1(x), …, q_v(x) are distinct irreducible monic polynomials, and a matrix T such that

 T^{−1} C T = C_1 ⊕ ⋯ ⊕ C_v,

where the minimal polynomial of C_j is q_j(x)^{d_j} for j = 1, …, v.

Note that if p(x) is an irreducible monic polynomial, then either p(x) = x or x does not divide p(x). So, among the polynomials in the primary decomposition of m_A(x) at most one is equal to a power of x, and the same holds for the polynomials in the primary decomposition of m_C(x).

Suppose, for example, that p_u(x) = x. In this case p_u(x)^{r_u} = x^{r_u}, and the minimal polynomial of A_u is x^{r_u}, hence A_u is a nilpotent matrix of index r_u. Recall that, for i < u, the polynomial p_i(x)^{r_i} is not divisible by x, and so is the minimal polynomial of A_i. Hence, by Proposition 2, A_i is invertible. Let A_inv = A_1 ⊕ ⋯ ⊕ A_{u−1} and A_nil = A_u. Then we obtain

 S^{−1} A S = A_inv ⊕ A_nil, (1)

where A_inv is invertible, and A_nil is nilpotent. If p_i(x) = x for some i < u, then we need in addition to permute some rows and columns of the matrix S to obtain one that gives us Equation (1) above. If none of the p_i(x) is equal to x, then we assume that A_nil is the empty matrix of size 0×0.

The same reasoning can be applied to matrix C, that is, we can compute an invertible matrix C_inv, a nilpotent (or empty) matrix C_nil, and an invertible matrix T such that

 T^{−1} C T = C_inv ⊕ C_nil.

Note that the indices of the nilpotent matrices A_nil and C_nil are at most k, and hence A_nil^k and C_nil^k are zero (or empty) matrices.

Our goal is to find all triples (m, n, ℓ) for which A^m B^n C^ℓ = O_{k,k}. In order to do this we will consider four cases: (1) m ≥ k and ℓ ≥ k, (2) m < k and ℓ < k, (3) m ≥ k and ℓ < k, and (4) m < k and ℓ ≥ k.

Before dealing with each of these cases, we note that the equation A^m B^n C^ℓ = O_{k,k} is equivalent to

 S(A_inv^m ⊕ A_nil^m)S^{−1} B^n T(C_inv^ℓ ⊕ C_nil^ℓ)T^{−1} = O_{k,k}  or to  (A_inv^m ⊕ A_nil^m) S^{−1} B^n T (C_inv^ℓ ⊕ C_nil^ℓ) = O_{k,k}

because S and T are invertible matrices.

Now suppose A_inv has size s×s, and C_inv has size t×t for some s, t ≤ k.

Case 1: m ≥ k and ℓ ≥ k. Since A_nil^k and C_nil^k are zero, we have A_nil^m = O and C_nil^ℓ = O, and hence the equation A^m B^n C^ℓ = O_{k,k} is equivalent to

 (A_inv^m ⊕ O_{k−s,k−s}) S^{−1} B^n T (C_inv^ℓ ⊕ O_{k−t,k−t}) = O_{k,k}. (2)

Suppose the matrix S^{−1} B^n T has the form [ B^n_{s,t} X_{s,k−t} ; Y_{k−s,t} Z_{k−s,k−t} ]. Since A_inv^m and C_inv^ℓ are invertible matrices, Lemma 4 implies that Equation (2) is equivalent to B^n_{s,t} = O_{s,t}. Therefore, we obtain the following equivalence: Equation (2) holds if and only if

 s^{i,j}_n = (e_i^⊤ S^{−1}) B^n (T e_j) = 0  for all i = 1, …, s and j = 1, …, t. (3)

By Lemma 3, the sequence (s^{i,j}_n) is a 1-LRS of depth at most k over R. As in the proof of Theorem 3, we can use an oracle for the Skolem problem for 1-LRS of depth at most k over R to compute the descriptions of the semilinear sets Z(s^{i,j}_n). Hence we can compute a description of the intersection Z_1 = ⋂_{i,j} Z(s^{i,j}_n), which is also a semilinear set. An important observation is that the set Z_1 does not depend on m and ℓ.

Case 2: m < k and ℓ < k. Fix some m < k and ℓ < k. For this particular choice of m and ℓ, the equation A^m B^n C^ℓ = O_{k,k} is equivalent to

 s^{i,j}_n = (e_i^⊤ A^m) B^n (C^ℓ e_j) = 0  for all i = 1, …, k and j = 1, …, k.

Again, by Lemma 3, the sequence (s^{i,j}_n) is a 1-LRS of depth at most k over R, and we can use an oracle for the Skolem problem for 1-LRS of depth at most k over R to compute the descriptions of the semilinear sets Z(s^{i,j}_n). Therefore, we can compute a description of the intersection Z^{m,ℓ}_2 = ⋂_{i,j} Z(s^{i,j}_n), which is equal to the set of all values of n for which A^m B^n C^ℓ = O_{k,k} holds for the fixed m and ℓ.

Case 3: m ≥ k and ℓ < k. To solve this case we will combine ideas from Cases 1 and 2. Fix some ℓ < k and let m be any integer such that m ≥ k. Then A_nil^m = O, and the equation A^m B^n C^ℓ = O_{k,k} is equivalent to

 (A_inv^m ⊕ O_{k−s,k−s}) S^{−1} B^n C^ℓ = O_{k,k}. (4)

Suppose the matrix S^{−1} B^n C^ℓ has the form [ B^n_{s,k} ; X_{k−s,k} ]. Since A_inv^m is invertible, Lemma 4 implies that Equation (4) is equivalent to B^n_{s,k} = O_{s,k}. Therefore, Equation (4) is equivalent to

 s^{i,j}_n = (e_i^⊤ S^{−1}) B^n (C^ℓ e_j) = 0  for all i = 1, …, s and j = 1, …, k.

As in the previous two cases, we can use an oracle for the Skolem problem for 1-LRS of depth at most k over R to compute the descriptions of the semilinear sets Z(s^{i,j}_n) and of the intersection Z^ℓ_3 = ⋂_{i,j} Z(s^{i,j}_n). The set Z^ℓ_3 is the set of all n for which A^m B^n C^ℓ = O_{k,k} holds for the fixed ℓ and for any m ≥ k.

Case 4: m < k and ℓ ≥ k. Fix some m < k and let ℓ be any integer such that ℓ ≥ k. Using the same ideas as in Case 3 we can use an oracle for the Skolem problem to compute a description of a semilinear set Z^m_4 which is equal to the set of all values of n for which A^m B^n C^ℓ = O_{k,k} holds for the fixed m and for any ℓ ≥ k.

Combining all the above cases together, we conclude that the set of all triples (m, n, ℓ) that satisfy the equation A^m B^n C^ℓ = O_{k,k} is equal to the following union:

 {(m, n, ℓ) : n ∈ Z_1 and m, ℓ ≥ k} ∪ ⋃_{m,ℓ<k} {(m, n, ℓ) : n ∈ Z^{m,ℓ}_2} ∪ ⋃_{ℓ<k} {(m, n, ℓ) : n ∈ Z^ℓ_3 and m ≥ k} ∪ ⋃_{m<k} {(m, n, ℓ) : n ∈ Z^m_4 and ℓ ≥ k}. (5)

Having a description for the above set, we can decide whether it is empty or not, that is, whether there exist m, n, ℓ such that A^m B^n C^ℓ = O_{k,k}. ∎

The set of triples (m, n, ℓ) that satisfy an equation A^m B^n C^ℓ = O_{k,k} is equal to a finite union of direct products of semilinear sets.

###### Proof.

The corollary follows from Equation (5) above that describes all triples (m, n, ℓ) that satisfy the equation A^m B^n C^ℓ = O_{k,k}. By construction and the Skolem-Mahler-Lech theorem, the sets Z_1, Z^{m,ℓ}_2, Z^ℓ_3 and Z^m_4 are semilinear. In Equation (5) we take direct products of these sets either with singleton sets or with sets of the form ℕ_{≥k} = {m ∈ ℕ : m ≥ k}, which are also semilinear sets, and then take a finite union of such products. In other words, Equation (5) can be rewritten as follows:

 (ℕ_{≥k} × Z_1 × ℕ_{≥k}) ∪ ⋃_{m,ℓ<k} ({m} × Z^{m,ℓ}_2 × {ℓ}) ∪ ⋃_{ℓ<k} (ℕ_{≥k} × Z^ℓ_3 × {ℓ}) ∪ ⋃_{m<k} ({m} × Z^m_4 × ℕ_{≥k}). ∎

The main corollary of Theorem 4 is the following result.

The ABC problem is decidable for k ≤ 3 matrices over algebraic numbers and for k = 4 matrices over real algebraic numbers.