1 Introduction
Invariants are one of the most fundamental and useful notions in the quantitative sciences, appearing in a wide range of contexts, from gauge theory, dynamical systems, and control theory in physics, mathematics, and engineering to program verification, static analysis, abstract interpretation, and programming language semantics (among others) in computer science. In spite of decades of scientific work and progress, automated invariant synthesis remains a topic of active research, especially in the fields of program analysis and abstract interpretation, and plays a central role in methods and tools seeking to establish correctness properties of computer programs; see, e.g., [KCBR18], and particularly Sec. 8 therein.
The focus of the present paper is the Monniaux Problem on the decidability of the existence of separating invariants, which was formulated by David Monniaux in [Mon17b, Mon17] and also raised by him in a series of personal communications with various members of the theoretical computer science community over the past five years or so. There are in fact a multitude of versions of the Monniaux Problem—indeed, it would be more appropriate to speak of a class of problems rather than a single question—but at a high level the formulation below is one of the most general:
Consider a program $P$ operating over some numerical domain (such as the integers or rationals), and assume that $P$ has an underlying finite control-flow graph over a set $Q$ of nodes. Let us assume that $P$ makes use of $d$ numerical variables, and each transition comprises a function $f : \mathbb{R}^d \to \mathbb{R}^d$ as well as a guard $g \subseteq \mathbb{R}^d$. Let $x, y$ be two points in the ambient space. By way of intuition and motivation, we are interested in the reachability problem as to whether, starting in location $q_{\mathrm{init}}$ with variables having valuation $x$, it is possible to reach location $q_{\mathrm{final}}$ with variables having valuation $y$, by following the available transitions and under the obvious interpretation of the various functions and guards. Unfortunately, in most settings this problem is well-known to be undecidable.
Let $\mathcal{D}$ be an ‘abstract domain’ for $\mathbb{R}^d$, i.e., a collection of subsets of $\mathbb{R}^d$. For example, $\mathcal{D}$ could be the collection of all convex polyhedra in $\mathbb{R}^d$, or the collection of all closed semialgebraic sets in $\mathbb{R}^d$, etc.
The Monniaux Problem can now be formulated as a decision question: is it possible to adorn each control location $q \in Q$ with an element $I_q \in \mathcal{D}$ such that:
$x \in I_{q_{\mathrm{init}}}$;
The collection of $I_q$'s forms an inductive invariant: for each transition from $q$ to $q'$ with function $f$ and guard $g$, we have that $f(I_q \cap g) \subseteq I_{q'}$; and
$y \notin I_{q_{\mathrm{final}}}$.
We call such a collection $\{I_q : q \in Q\}$ a separating inductive invariant for program $P$. (Clearly, the existence of a separating inductive invariant constitutes a proof of non-reachability for $y$ from the given $x$ in $P$.)
Associated with this decision problem, in positive instances one is also potentially interested in the synthesis problem, i.e., the matter of algorithmically producing a suitable separating invariant. (In the remainder of this paper, the term ‘invariant’ shall always refer to the inductive kind.)
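To make the three conditions above concrete, here is a minimal sketch of checking a candidate separating inductive invariant over an interval abstract domain, on a hypothetical toy program (the locations, guard, and updates are illustrative choices, not from the paper):

```python
# Toy check of a separating inductive invariant over an interval domain.
# Hypothetical program: locations q0, q1; transition q0 -> q1 with guard
# x <= 5 and update x := 2x + 1; a self-loop on q1 leaving x unchanged.
# Question: from q0 with x = 0, can we reach q1 with x = 100?

def apply_affine(iv, a, b):
    """Image of the interval iv = (lo, hi) under x -> a*x + b, for a >= 0."""
    lo, hi = iv
    return (a * lo + b, a * hi + b)

def intersect(iv, guard):
    return (max(iv[0], guard[0]), min(iv[1], guard[1]))

def subset(iv, jv):
    return iv[0] >= jv[0] and iv[1] <= jv[1]

def contains(iv, point):
    return iv[0] <= point <= iv[1]

def is_separating(inv, transitions, init, x0, final, y0):
    if not contains(inv[init], x0):              # condition 1: x in I_init
        return False
    for (q, q2, guard, (a, b)) in transitions:   # condition 2: inductiveness
        image = apply_affine(intersect(inv[q], guard), a, b)
        if image[0] <= image[1] and not subset(image, inv[q2]):
            return False
    return not contains(inv[final], y0)          # condition 3: y not in I_final

INF = float("inf")
transitions = [("q0", "q1", (-INF, 5.0), (2.0, 1.0)),
               ("q1", "q1", (-INF, INF), (1.0, 0.0))]
inv = {"q0": (0.0, 0.0), "q1": (1.0, 11.0)}      # candidate invariant

print(is_separating(inv, transitions, "q0", 0.0, "q1", 100.0))  # -> True
```

The candidate above witnesses non-reachability: the image of each transition stays inside the adorned intervals, and the target valuation lies outside the interval at the final location.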
The Monniaux Problem is therefore parameterised by a number of items, key of which are (i) the abstract domain under consideration, and (ii) the kind of functions and guards allowed in transitions.
Our main interest in this paper lies in the decidability of the existence of separating invariants for various instances of the Monniaux Problem. We give below a cursory cross-sectional survey of existing work and results in this direction.
Arguably the earliest positive result in this area is due to Karr, who showed that strongest affine invariants (conjunctions of affine equalities) for affine programs (no guards, and all transition functions are given by affine expressions) could be computed algorithmically [Karr76]. Note that the ability to synthesise strongest (i.e., smallest with respect to set inclusion) invariants immediately entails the decidability of the Monniaux Problem instance, since the existence of some separating invariant is clearly equivalent to whether the strongest invariant is separating. Müller-Olm and Seidl later extended this work on affine programs to include the computation of strongest polynomial invariants of fixed degree [MullerOlmS04], and a randomised algorithm for discovering affine relations was proposed by Gulwani and Necula [16]. More recently, Hrushovski et al. showed how to compute a basis for all polynomial relations at every location of a given affine program [HOPW18].
The approaches described above all compute invariants consisting exclusively of conjunctions of equality relations. By contrast, an early and highly influential paper by Cousot and Halbwachs considers the domain of convex closed polyhedra [CH78], for programs having polynomial transition functions and guards. Whilst no decidability results appear in that paper, much further work was devoted to the development of restricted polyhedral domains for which theoretical guarantees could be obtained, leading (among others) to the octagon domain of Miné [Mine01], the octahedron domain of Clarisó and Cortadella [CC04], and the template polyhedra of Sankaranarayanan et al. [SSM05]. In fact, as observed by Monniaux [Mon17], if one considers a domain of convex polyhedra having a uniformly bounded number of faces (therefore subsuming in particular the domains just described), then for any class of programs with polynomial transition relations and guards, the existence of separating invariants becomes decidable, as the problem can equivalently be phrased in the first-order theory of the reals.
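To spell out Monniaux's observation, the existence of such an invariant can be expressed as a sentence of the first-order theory of the reals; a schematic rendering for a single location, a single polynomial transition function $f$ with guard $g$, and templates with at most $m$ faces (the concrete shape of the sentence is our illustration, not taken from [Mon17]) is:

```latex
\exists a_1, b_1, \ldots, a_m, b_m \;\Big(
   \mathrm{In}(x)
   \;\wedge\;
   \forall z \,\big( \mathrm{In}(z) \wedge z \in g \;\rightarrow\; \mathrm{In}(f(z)) \big)
   \;\wedge\;
   \neg\,\mathrm{In}(y)
\Big),
\qquad \text{where } \mathrm{In}(z) \;\equiv\; \bigwedge_{j=1}^{m} \langle a_j, z \rangle \le b_j .
```

Since $f$ and $g$ are polynomial and the number $m$ of faces is fixed, this is a sentence over the ordered field of the reals, decidable by Tarski-Seidenberg quantifier elimination.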
One of the central motivating questions for the Monniaux Problem is whether one can always compute separating invariants for the full domain of polyhedra. Unfortunately, on this matter very little is known at present. In recent work, Monniaux showed undecidability for the domain of convex polyhedra and the class of programs having affine transition functions and polynomial guards [Mon17]. One of the main results of the present paper is to show undecidability for the domain of semilinear sets (a semilinear set consists of a finite union of polyhedra, or equivalently is defined as the solution set of a Boolean combination of linear inequalities) and the class of affine programs (without any guards)—in fact, affine programs with only a single control location and two transitions:
Theorem 1.1
Let $A, B$ be two rational square matrices of dimension $d$, and let $x, y$ be two points in $\mathbb{Q}^d$. Then the existence of a semilinear set $I \subseteq \mathbb{R}^d$ having the following properties:
1. $x \in I$;
2. $AI \subseteq I$ and $BI \subseteq I$; and
3. $y \notin I$
is an undecidable problem.
Remark 1
It is worth pointing out that the theorem remains valid even for sufficiently large fixed $d$ (our proof shows undecidability for a specific fixed value of $d$, but this value could undoubtedly be improved). If moreover one requires $I$ to be topologically closed, one can lower $d$ to a smaller fixed value (which again is unlikely to be optimal). Finally, an examination of the proof reveals that the theorem also holds for the domain of semialgebraic sets, and in fact for any domain of o-minimal sets in the sense of [ACO018]. The proof also carries through whether one considers the domain of semilinear sets having rational, algebraic, or real coordinates.
Although the above is a negative (undecidability) result, it should be viewed in a positive light; as Monniaux writes in [Mon17], “We started this work hoping to vindicate forty years of research on heuristics by showing that the existence of polyhedral inductive separating invariants in a system with transitions in linear arithmetic (integer or rational) is undecidable.” Theorem 1.1 shows that, at least as regards non-convex invariants, the development and use of heuristics is indeed vindicated and will continue to remain essential. Related questions of completeness of a given abstraction scheme have also been examined by Giacobazzi et al. in [GRS00, GLR15].

It is important to note that our undecidability result requires at least two transitions. In fact, much research work has been expended on the class of simple affine loops, i.e., one-location programs equipped with a single self-transition. In terms of invariants, Fijalkow et al. establish in [FOOPW17, FOOPW19] the decidability of the existence of semialgebraic separating invariants, and specifically state the question of the existence of separating semilinear invariants as an open problem. Almagor et al. extend this line of work in [ACO018] to more complex targets (in lieu of the point $y$) and richer classes of invariants. The second main result of the present paper is to settle the open question of [FOOPW17, FOOPW19] in the affirmative:
Theorem 1.2
Let $M$ be a rational square matrix of dimension $d$, and let $x, y$ be two points in $\mathbb{Q}^d$. It is decidable whether there exists a closed semilinear set $I$ having algebraic coordinates such that:
1. $x \in I$;
2. $MI \subseteq I$; and
3. $y \notin I$.
Remark 2
The proof shows that, in fixed dimension $d$, the decision procedure runs in polynomial time. It is worth noting that one also has decidability if $M$, $x$, and $y$ are taken to have real-algebraic (rather than rational) entries.
Let us conclude this section by briefly commenting on the important issue of convexity. At its inception, abstract interpretation had a marked preference for domains of convex invariants, of which the interval domain, the octagon domain, and of course the domain of convex polyhedra are prime examples. Convexity confers several distinct advantages, including simplicity of representation, algorithmic tractability and scalability, ease of implementation, and better termination heuristics (such as the use of widening). The central drawback of convexity, on the other hand, is its poor expressive power. This has been noted time and again: “convex polyhedra […] are insufficient for expressing certain invariants, and what is often needed is a disjunction of convex polyhedra.” [BM18]; “the ability to express nonconvex properties is sometimes required in order to achieve a precise analysis of some numerical properties” [GIBMG12]. Abstract interpretation can accommodate nonconvexity either by introducing disjunctions (see, e.g., [BM18] and references therein), or via the development of special-purpose domains of nonconvex invariants such as donut domains [GIBMG12]. The technology, data structures, algorithms, and heuristics supporting the use of disjunctions in the leading abstract-interpretation tool Astrée are presented in great detail in [CCFMMR09]. In the world of software verification, where predicate abstraction is the dominant paradigm, disjunctions—and hence nonconvexity—are nowadays native features of the landscape.
2 Preliminaries
2.1 Complex and algebraic numbers
The set of complex numbers is $\mathbb{C}$; for a complex number $z$, its modulus is $|z|$, its real part is $\Re(z)$, and its imaginary part is $\Im(z)$.
Let $\mathbb{C}^*$ denote the set of nonzero complex numbers. We write $\mathbb{T}$ for the complex unit circle, i.e. the set of complex numbers of modulus $1$. We let $\mathbb{U}$ denote the set of roots of unity, i.e. complex numbers $z \in \mathbb{T}$ such that $z^n = 1$ for some $n \in \mathbb{N}$.
When working in $\mathbb{C}^d$, the norm of a vector $x$ is $\|x\|$, defined as the maximum of the moduli of the complex numbers $x_i$ for $i$ in $\{1, \ldots, d\}$. For $x$ in $\mathbb{C}^d$ and $\varepsilon > 0$, we write $B(x, \varepsilon)$ for the open ball centered at $x$ of radius $\varepsilon$. The topological closure of a set $S$ is $\overline{S}$, its interior is $\mathring{S}$, and its frontier is $\partial S$, defined as $\partial S = \overline{S} \setminus \mathring{S}$.

We will mostly work in the field $\overline{\mathbb{Q}}$ of algebraic numbers, that is, roots of polynomials with coefficients in $\mathbb{Q}$. It is possible to represent and manipulate algebraic numbers effectively, by storing their minimal polynomial and a sufficiently precise numerical approximation. An excellent reference in computational algebraic number theory is [Cohen]. All standard algebraic operations, such as sums, products, root-finding of polynomials, or computing Jordan normal forms of matrices with algebraic entries, can be performed effectively.
2.2 Semilinear sets
We now define semilinear sets in $\mathbb{C}^d$, by identifying $\mathbb{C}^d$ with $\mathbb{R}^{2d}$. A set is semilinear if it is the set of real solutions of some finite Boolean combination of linear inequalities with algebraic coefficients. We now give an equivalent definition using halfspaces and polyhedra. A halfspace is a subset of $\mathbb{R}^d$ of the form
$$\{ x \in \mathbb{R}^d : \langle a, x \rangle \sim b \}$$
for some $a$ in $\overline{\mathbb{Q}}^d$, $b$ in $\overline{\mathbb{Q}}$ and ${\sim} \in \{<, \leq\}$. A polyhedron is a finite intersection of halfspaces, and a semilinear set a finite union of polyhedra.
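Membership in a semilinear set given as a union of polyhedra can be tested directly from this definition; a minimal sketch with rational coefficients (the example set is our own, chosen for illustration):

```python
from fractions import Fraction as F

# A halfspace is (a, b, strict): {x : <a, x> < b} if strict, else <a, x> <= b.
# A polyhedron is a list of halfspaces; a semilinear set a list of polyhedra.

def in_halfspace(x, a, b, strict):
    s = sum(ai * xi for ai, xi in zip(a, x))
    return s < b if strict else s <= b

def in_semilinear(x, S):
    return any(all(in_halfspace(x, a, b, st) for (a, b, st) in P) for P in S)

# Example set: the nonnegative quadrant, union the open halfplane x1 + x2 < -1.
S = [
    [((F(-1), F(0)), F(0), False), ((F(0), F(-1)), F(0), False)],
    [((F(1), F(1)), F(-1), True)],
]

print(in_semilinear((F(1), F(2)), S))    # -> True  (in the quadrant)
print(in_semilinear((F(-1), F(1)), S))   # -> False (in neither piece)
```

Note that the set is non-convex: it is the union of two disjoint polyhedra, which is exactly the extra expressive power semilinear sets have over single polyhedra.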
We recall some well known facts about semilinear sets which will be useful for our purposes.
Lemma 1 (Projections of Semilinear Sets)
Let $S$ be a semilinear set in $\mathbb{R}^{d+d'}$. Then the projection of $S$ on the first $d$ coordinates, defined by
$$\pi(S) = \{ x \in \mathbb{R}^d : \exists y \in \mathbb{R}^{d'},\ (x, y) \in S \},$$
is a semilinear set.
Lemma 2 (Sections of Semilinear Sets)
Let $S$ be a semilinear set in $\mathbb{R}^{d+d'}$ and $x$ in $\mathbb{R}^d$. Then the section of $S$ along $x$, defined by
$$S_x = \{ y \in \mathbb{R}^{d'} : (x, y) \in S \},$$
is a semilinear set.
Furthermore, there exists a bound $N$ in $\mathbb{N}$ such that for all $x$ in $\mathbb{R}^d$ of norm at most $1$, if $S_x$ is nonempty, then it contains some $y$ in $\mathbb{R}^{d'}$ of norm at most $N$.
For the reader's intuition, note that the last part of this lemma does not hold for more complicated sets. For instance, consider the hyperbola defined by $xy = 1$. Choosing a small $x$ forces a large $y$; hence there exists no bound as stated in the lemma for the hyperbola.
The dimension of a subset $S$ of $\mathbb{R}^d$ is the minimal $k$ in $\mathbb{N}$ such that $S$ is included in a finite union of affine subspaces of dimension at most $k$.
Lemma 3 (Dimension of Semilinear Sets)
Let $S$ be a semilinear set in $\mathbb{R}^d$. If $S$ has empty interior, then $S$ has dimension at most $d - 1$.
3 Main results overview
We are interested in instances of the Monniaux Problem in which there are no guards, all transitions are affine (or equivalently linear, since affine transitions can be made linear by increasing the dimension of the ambient space by 1), and invariants are semilinear. This gives rise to the semilinear invariant problem, where an instance is given by a set of square matrices $A_1, \ldots, A_k \in \mathbb{Q}^{d \times d}$ and two points $x, y \in \mathbb{Q}^d$. A semilinear set $I$ is a separating invariant if
1. $x \in I$,
2. $A_i I \subseteq I$ for all $1 \leq i \leq k$,
3. $y \notin I$.
The semilinear invariant problem asks whether such an invariant exists.
We need to introduce some terminology. The triple $((A_i)_{1 \leq i \leq k}, x, y)$ is a reach instance if there exists a matrix $M$ belonging to the semigroup generated by $\{A_1, \ldots, A_k\}$ such that $Mx = y$, and otherwise it is a non-reach instance. Clearly a separating invariant can only exist for non-reach instances. An instance with $k = 1$, i.e. a single matrix $A$, is called an Orbit instance.
3.1 Undecidability for several matrices
Our first result is the undecidability of the semilinear invariant problem. We start by showing it is undecidable in fixed dimension, with a fixed number of matrices and requiring that the invariant be closed. We defer the proofs until Section 4.
Theorem 3.1
The semilinear invariant problem is undecidable for 9 matrices of dimension 3 and closed invariants.
In establishing the above, we used many matrices of small dimension. One could instead use only two matrices, at the cost of increasing the dimension to 27.
Theorem 3.2
The semilinear invariant problem is undecidable for 2 matrices of dimension 27 and closed invariants.
In the above results, it can happen that the target belongs to the closure of the set of reachable points. We now show that we can ignore those “nonrobust” systems and maintain undecidability.
Theorem 3.3
The semilinear invariant problem is undecidable for “robust” instances, i.e. instances in which the target point does not belong to the closure of the set of reachable points.
3.2 Decidability for simple linear loops
In this section, we are only concerned with Orbit instances. Since it is possible to decide (in polynomial time) whether an Orbit instance is reach or non-reach [KL80, KL86], we can always assume that we are given a non-reach instance. All decidability results are only concerned with closed invariants; this is crucial in several proofs.
Theorem 3.4
There is an algorithm that decides whether an Orbit instance admits a closed semilinear invariant. Furthermore, it runs in polynomial time assuming the dimension $d$ is fixed.
We now discuss a few instructive examples to illustrate the different cases that arise. The proof of Theorem 3.4 is postponed to Section 5.
Example 1
Consider an Orbit instance $(A, x, y)$ in dimension $2$. The orbit is depicted in Figure 1. Here, $A$ is a counterclockwise rotation around the origin with an expanding scaling factor. A suitable semilinear invariant can be constructed by taking the complement of the convex hull of a large enough number of points of the orbit, and adding the missing points.
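The behaviour described in this example can be checked numerically; a small sketch with illustrative parameters (rotation by 30 degrees, scaling factor 1.1, starting point (1, 0), all of which are our choices rather than the paper's):

```python
import math

# Expanding rotation: A = 1.1 * (rotation by 30 degrees), x = (1, 0).
c, s, r = math.cos(math.pi / 6), math.sin(math.pi / 6), 1.1

p, norms = (1.0, 0.0), []
for _ in range(40):
    norms.append(math.hypot(*p))
    p = (r * (c * p[0] - s * p[1]), r * (s * p[0] + c * p[1]))

# The modulus grows geometrically, so any bounded region contains only
# finitely many orbit points -- the convex-hull construction applies.
print(all(b > a for a, b in zip(norms, norms[1:])))  # -> True
```

Since only finitely many orbit points fall inside the convex hull of a long enough initial segment, adding them back yields a set stable under the map.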
Constructing an invariant of this form will often be possible, for instance when $A$ has an eigenvalue of modulus greater than $1$. A similar (yet more involved) construction gives the same result when $A$ has an eigenvalue of modulus less than $1$. The case in which all eigenvalues have modulus 1 is more involved. Broadly speaking, invariant properties in such cases are often better described by sets involving equations or inequalities of higher degree [FOOPW17], which is why interesting semilinear invariants do not exist in many instances. However, delineating exactly which instances admit separating semilinear invariants is challenging, and is our main technical contribution on this front. The following few examples illustrate some of the phenomena that occur.

Example 2
Remove the expanding factor from the previous instance; that is, let $A$ be the pure rotation.
Now, $A$ being a rotation by an irrational angle, the orbit of $x$ is dense in the circle of radius 1. It is quite easy to prove that no semilinear invariant exists (except for the whole space $\mathbb{R}^2$) for this instance, whatever the value of $y$. This gives a first instance of nonexistence of a semilinear invariant. Many such examples exist, and we shall now supply a more subtle one. Note that simple invariants do exist, such as the unit circle, which is a semialgebraic set but not a semilinear one.
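The density phenomenon can be observed numerically; a small sketch, with the rotation angle (1 radian) and the test point on the circle chosen by us for illustration:

```python
import math

# Pure rotation by 1 radian (an irrational multiple of 2*pi), x = (1, 0).
c, s = math.cos(1.0), math.sin(1.0)

p = (1.0, 0.0)
target = (math.cos(2.5), math.sin(2.5))  # arbitrary test point on the circle
min_gap = 2.0
for _ in range(20000):
    p = (c * p[0] - s * p[1], s * p[0] + c * p[1])
    min_gap = min(min_gap, math.dist(p, target))

print(abs(math.hypot(*p) - 1.0) < 1e-9)  # the orbit stays on the unit circle
print(min_gap < 0.05)                    # ...and comes close to the test point
```

Because the orbit comes arbitrarily close to every point of the circle, any closed invariant containing the orbit must contain the whole circle, and a semilinear (piecewise-linear) set containing the circle already contains a full neighbourhood of it.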
Example 3
Consider an instance in dimension $4$ with
$$A = \begin{pmatrix} R & I \\ 0 & R \end{pmatrix},$$
where $R$ is the matrix from Example 2, and $x$ is arbitrary. When repeatedly applying $A$ to $x$, the last two coordinates describe a circle of radius 1 as in the previous example. However, the first two coordinates diverge: at each step, they are rotated and the last two coordinates are added. In this instance, no semilinear invariant exists (except, again, the whole space $\mathbb{R}^4$); however, proving this is somewhat involved. Note once more that a semialgebraic invariant may easily be constructed.
In Examples 2 and 3, no nontrivial semilinear invariant exists; equivalently, any semilinear invariant must contain the whole space. In all instances for which constructing an invariant is not necessarily immediate (as is the case in Example 1), we will provide a minimal invariant, that is, a semilinear set $I_{\min}$ with the property that any semilinear invariant will have to contain $I_{\min}$. In such cases there exists a separating semilinear invariant (namely $I_{\min}$) if and only if $y \notin I_{\min}$. We conclude with two examples having such minimal semilinear invariants.
Example 4
Consider an instance in dimension $3$ with
$$A = \begin{pmatrix} R & 0 \\ 0 & -1 \end{pmatrix},$$
where $R$ is the matrix of Example 2, a $2$-dimensional rotation by an angle which is not a rational multiple of $2\pi$. As we iterate the matrix $A$, the first two coordinates describe a circle, and the third coordinate alternates in sign between two opposite values: the orbit is dense in the union of two parallel circles. Yet the minimal semilinear invariant comprises the union of the two planes containing these circles.
Example 5
Consider an instance in dimension $8$ with
$$A = \begin{pmatrix} B & 0 \\ 0 & -B \end{pmatrix},$$
where $B$ is the matrix from Example 3. This can be seen as two instances of Example 3 running in parallel. Let $x = (u, \lambda u)$, and note that the two blocks of $x$ are initially related by a multiplicative factor, namely $\lambda$. Moreover, as the first block is multiplied by the matrix $B$ while the second one is multiplied by $-B$, the multiplicative factor relating the two blocks alternates between $\lambda$ and $-\lambda$. Thus, the minimal semilinear invariant in this setting is the set of pairs of points related by the factor $\lambda$ or $-\lambda$, which has dimension 4. If, however, we had started from a point whose two blocks are not related by a multiplicative factor, then the minimal semilinear invariant would be larger, having dimension 6. Roughly speaking, no semilinear relation then holds between the two blocks of the orbit.
4 Undecidability proofs
4.1 Proof of Theorem 3.1
We reduce from an instance of the PCP problem, defined as follows: given nine pairs $(u_i, v_i)_{1 \leq i \leq 9}$ of nonempty words over the alphabet $\{1, 2, 3\}$, does there exist an infinite word $w$ over the alphabet $\{1, \ldots, 9\}$ such that $u_{w_1} u_{w_2} \cdots = v_{w_1} v_{w_2} \cdots$? This problem is known to be undecidable [HH06].
In order to simplify notation, given a finite or infinite word $w$, we denote by $|w|$ the length of the word and, given an integer $i \leq |w|$, we write $w_i$ for the $i$'th letter of $w$. Given a finite or infinite word $w$ over the alphabet $\{1, \ldots, 9\}$, we denote by $u(w)$ and $v(w)$ the words over the alphabet $\{1, 2, 3\}$ such that $u(w) = u_{w_1} u_{w_2} \cdots$ and $v(w) = v_{w_1} v_{w_2} \cdots$. Given a (finite or infinite) word $w$ over the alphabet $\{1, 2, 3\}$, denote by $[w]_4 = \sum_{i=1}^{|w|} w_i 4^{-i}$ the quaternary encoding of $w$. It is clear that it satisfies $0 < [w]_4 < 1$ and that $[w w']_4 = [w]_4 + 4^{-|w|} [w']_4$.
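The encoding and its concatenation property can be checked mechanically; a sketch assuming the encoding $[w]_4 = \sum_i w_i 4^{-i}$ discussed above (the sample words are our own):

```python
from fractions import Fraction

def enc(w):
    """Quaternary encoding of a word over {1, 2, 3}: sum of w_i * 4^(-i)."""
    return sum(Fraction(d, 4 ** i) for i, d in enumerate(w, start=1))

u, v = [1, 2], [3, 1, 2]

# Concatenation splits as enc(u v) = enc(u) + 4^(-|u|) * enc(v).
print(enc(u + v) == enc(u) + Fraction(1, 4 ** len(u)) * enc(v))  # -> True

# The encoding lands strictly between 0 and 1.
print(0 < enc(u + v) < 1)  # -> True
```

The splitting property is what makes appending a pair of words expressible as a linear update on encodings and scaling factors.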
Let $(u_i, v_i)_{1 \leq i \leq 9}$ be an instance of the PCP problem. For all $1 \leq i \leq 9$, for readability, we denote $a_i = [u_i]_4$ and $b_i = [v_i]_4$. We build the matrices $M_1, \ldots, M_9$ where
$$M_i = \begin{pmatrix} 1 & a_i & -b_i \\ 0 & 4^{-|u_i|} & 0 \\ 0 & 0 & 4^{-|v_i|} \end{pmatrix}.$$
In the following, we write $M_w$ for the matrix $M_{w_{|w|}} \cdots M_{w_1}$, which can be checked to satisfy
$$M_w \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} [u(w)]_4 - [v(w)]_4 \\ 4^{-|u(w)|} \\ 4^{-|v(w)|} \end{pmatrix}.$$
Let us show that there exists a separating invariant for $((M_i)_{1 \leq i \leq 9}, x, y)$, where $x = (0, 1, 1)$ and $y = (0, 0, 0)$, iff the PCP instance has no solution.
Let us first assume the PCP instance has a solution $w$. Fix $n$, let $w^{(n)} = w_1 \cdots w_n$, and let $x_n = M_{w^{(n)}} x$. We have that $x_n = ([u(w^{(n)})]_4 - [v(w^{(n)})]_4,\ 4^{-|u(w^{(n)})|},\ 4^{-|v(w^{(n)})|})$ and, since $u(w) = v(w)$, it is clear that $x_n \to (0, 0, 0)$ as $n \to \infty$. Any separating invariant $I$ must contain this sequence, since $I$ contains the initial point $x$ and is stable under the $M_i$. Moreover, $I$ is closed, so it must contain the limit of the sequence, $(0, 0, 0)$, which is the target point. Thus $I$ cannot be a separating invariant. Therefore there is no separating invariant for this instance.
Now, let us assume the PCP instance has no solution. Then there exists a bound $N$ such that for every infinite word $w$ over the alphabet $\{1, \ldots, 9\}$ there exists $n \leq N$ such that neither of $u(w^{(n)})$ and $v(w^{(n)})$ is a prefix of the other. Indeed, consider the tree whose root is labelled by the empty word and in which, given a node labelled $w$, if for all $n \leq |w|$ one of $u(w^{(n)})$, $v(w^{(n)})$ is a prefix of the other, then this node has 9 children: the nodes $wi$ for $i \in \{1, \ldots, 9\}$. This tree is finitely branching and does not contain any infinite path (which would induce a solution to the PCP instance). Thus, according to König's lemma, it is finite. We can therefore choose the height of this tree as our $N$.
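The tree argument suggests a simple procedure for computing the bound $N$ on concrete instances: explore the tree of prefix-compatible index words and return the depth at which it dies out. A sketch on a toy instance of our own (not one of the nine-pair instances of [HH06]):

```python
def compatible(u, v):
    return u.startswith(v) or v.startswith(u)

def divergence_bound(pairs, max_depth=20):
    """Smallest n such that every index word of length n makes the two
    concatenations incompatible (neither a prefix of the other); None if
    some compatible word of length max_depth survives."""
    frontier = [("", "")]
    for depth in range(1, max_depth + 1):
        frontier = [(u + ui, v + vi)
                    for (u, v) in frontier for (ui, vi) in pairs
                    if compatible(u + ui, v + vi)]
        if not frontier:
            return depth
    return None

# Toy instance with no (finite or infinite) solution: after two tiles the
# concatenations always mismatch, so the tree from the proof has height 2.
print(divergence_bound([("ab", "a")]))  # -> 2
```

By König's lemma, the search terminates exactly when the instance has no infinite solution; when a solution exists, some compatible branch survives every depth.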
We define the invariant $I = I_1 \cup I_2$, where $I_1$ is the finite set of orbit points reachable in at most $N$ steps,
$$I_1 = \{ M_w x : w \in \{1, \ldots, 9\}^*,\ |w| \leq N \},$$
and $I_2$ is a semilinear set collecting the points whose first coordinate is sufficiently large in absolute value compared to the last two coordinates. (This is a semilinear set, since a constraint of the form $|z_1| \geq c$ holds if and only if $z_1 \geq c$ or $z_1 \leq -c$.)
Let us show that $I$ is a separating invariant for this instance. By definition, $I$ is closed, semilinear, contains $x$, and does not contain $y$. The difficult point is to show stability under the $M_i$.

Let $z \in I_1$, say $z = M_w x$ for some word $w$ with $|w| \leq N$: there are two cases. Either $|w| < N$, then $M_i z = M_{wi} x$ with $|wi| \leq N$, therefore $M_i z \in I_1$. Otherwise $|w| = N$, and by the choice of $N$ there exists $n \leq N$ such that neither of $u(w^{(n)})$ and $v(w^{(n)})$ is a prefix of the other. Let $n$ be the smallest such number; from that point on, the first coordinate of the orbit stays bounded away from $0$, while the last two coordinates keep shrinking by a factor of at least $4$ per step. Thus,
$M_i z \in I_2$. This shows that $M_i I_1 \subseteq I$.

Let $z \in I_2$. Without loss of generality, assume that the first coordinate of $z$ is positive (the argument is completely symmetric in the negative case). Applying $M_i$ decreases the first coordinate by at most a fixed fraction of the last two coordinates, while the last two coordinates themselves shrink by a factor of at least $4$; one checks that the defining inequalities of $I_2$ are therefore preserved. This shows that $M_i I_2 \subseteq I_2$.
This shows that $I$ is stable under each $M_i$, and concludes the reduction.
4.2 Proof of Theorem 3.2
We reduce the instances of Theorem 3.1 to two matrices of size 27. The first matrix shifts the position of the values in the point upwards by 3, while the second applies one of the matrices of the previous reduction, depending on the position of the values within the vector, and then places the result at the top. In other words, shifting $i$ times and then applying the second matrix intuitively has the same effect as $M_i$ had in the proof of Theorem 3.1. In the following, we reuse the notation and results of the proof of Theorem 3.1.
Define two matrices $S$ and $T$ of size 27: $S$, built out of copies of the identity matrix $I_3$ of size 3 and of zero blocks, is the block permutation matrix that shifts each 3-dimensional block of a vector up by one block position (cyclically); $T$ applies the matrices $M_1, \ldots, M_9$ blockwise and places the result in the top block. It follows that shifting and then applying $T$ simulates the selection of any of the $M_i$. Assume that there exists a separating invariant $I$ for the 27-dimensional instance, and consider its section $I'$ on the top block, which is a closed semilinear set. Then $I'$ contains the initial point by definition, and is stable under each $M_i$ by virtue of $I$ being stable under $S$ and $T$. Furthermore, $I'$ does not contain the target, for otherwise $I$ would contain the corresponding 27-dimensional target point. Therefore $I'$ is a separating invariant for the instance of Theorem 3.1.
Conversely, assume that there exists a separating invariant $I$ for the instance of Theorem 3.1, and consider a suitable product of copies of $I$, which is a closed semilinear set. It clearly contains the 27-dimensional initial point and does not contain the target, and one checks that it is stable under both $S$ and $T$, since $I$ is stable under each $M_i$. Therefore it is a separating (non-reachability) invariant for the 27-dimensional instance.
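The shifting mechanism can be made concrete; a minimal sketch of a permutation matrix that moves each 3-dimensional block of a vector up by one block position, cyclically (an illustration of the idea, not the exact matrices of the proof):

```python
def shift_matrix(num_blocks, block=3):
    """Permutation matrix moving block j to block j-1 (cyclically),
    i.e. shifting all coordinates up by `block` positions."""
    n = num_blocks * block
    S = [[0] * n for _ in range(n)]
    for i in range(n):
        S[i][(i + block) % n] = 1
    return S

def apply(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

S = shift_matrix(9)           # a 27 x 27 matrix, as in Theorem 3.2
v = list(range(27))           # blocks [0,1,2], [3,4,5], ..., [24,25,26]
print(apply(S, v)[:6])        # -> [3, 4, 5, 6, 7, 8]
```

Because the shift is a permutation, it is invertible and preserves any set defined blockwise, which is the property the reduction exploits.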
4.3 Proof sketch of Theorem 3.3
We give the proof of Theorem 3.3 twice: first, we use linear guards in order to limit the selection of the matrices. The added power of the guards allows for a relatively simple proof. This first proof can be seen as an extended sketch of the second one, in Appendix 0.A, where we remove the guards to obtain the result as claimed. We do so by emulating the guards using extra variables.
We reduce from the PCP problem and reuse some of the notation of the proof of Theorem 3.1. Let $(u_i, v_i)_{1 \leq i \leq 9}$ be an instance of the PCP problem. We build matrices that extend the matrices $M_1, \ldots, M_9$ from the proof of Theorem 3.1 with additional coordinates, together with one further matrix. Moreover, the matrices extending the $M_i$ can only be selected while a linear guard holds, and the further matrix can only be selected once the guard fails.
Informally, the first three coordinates have the same role as before: the first contains the difference of the values encoded using the $u_i$ and the $v_i$, while the second and third are used in order to help compute this value. In the proof of Theorem 3.1, we showed that when the PCP instance has no solution, there exists a value $N$ such that any pair of words created from the nine pairs differs on one of the first $N$ terms. A counter variable, call it $c$, is used together with the guards in order to detect this value $N$: if such an $N$ exists, then at most $N$ matrices can be selected before the guard stops holding. Moreover, firing a matrix adds 2 to $c$, ensuring that when the guard stops holding, $c$ is smaller than or equal to $2N$. Conversely, if no such $N$ exists, then there is a way to select matrices such that the guard always holds, allowing the variable $c$ to become an arbitrarily large even number. The existence of an upper bound on the value of $c$ is used to build an invariant, or to prove that there cannot exist an invariant. Finally, a constant coordinate equal to 1 is only there in order to allow for affine modifications of the values; it is never modified.
Let $x$ and $y$ be the initial and target points. Note that $y$ is not in the closure of the reachable set, as the fourth variable of any reachable point is an even number while $y$'s is an odd one.
Assume the PCP instance does not possess a solution. Then there exists $N$ such that any pair of words differs on one of the first $N$ letters. Define the invariant $I$ accordingly, as in the proof of Theorem 3.1, augmented with constraints on the counter. This invariant is clearly semilinear; it contains $x$ and does not contain $y$. Once the counter passes its bound, only the final matrix can be triggered, due to the guards; and if the counter is too large, the final matrix cannot be fired either, as its guard does not hold. If one fires the final matrix, then by construction of $I$ the counter value is an even number below the bound, and the resulting point stays in $I$. In order to fire one of the matrices extending the $M_i$, one needs the linear guard to hold. We showed in the proof of Theorem 3.1 that, from the initial configuration, after at most $N$ transitions using one of these matrices, the first coordinate is bounded away from 0. As a consequence, if the guard holds, the resulting point again belongs to $I$. Therefore, $I$ is a separating invariant for this instance.
Now assume the PCP instance possesses a solution $w$. For $n \in \mathbb{N}$, we denote by $w^{(n)}$ the prefix of length $n$ of $w$. The guard is then always satisfied along the corresponding run: indeed, assume that $u(w^{(n)})$ is longer than $v(w^{(n)})$. Then $u(w^{(n)}) = v(w^{(n)}) z$ for some word $z$, because $w$ is a solution, and the encoded difference is then small enough for the guard to hold. The symmetric case is similar, exchanging the roles of $u$ and $v$. Therefore the guard is satisfied and the corresponding point is reachable for all $n$. Let $I$ be a semilinear invariant containing the reachability set; the set of counter values occurring in $I$ is semilinear and contains arbitrarily large even numbers. This implies that it necessarily contains an unbounded interval, and there must exist a reachable point from which the final matrix can fire. Since $I$ is stable under the final matrix, $I$ contains the target $y$. Therefore, $I$ is not a separating invariant for this instance.
5 Decidability proofs
This section is aimed at sketching the main ideas of the proof of Theorem 3.4 while avoiding technicalities and details. We point to the appendix for full proofs. Recall that we only consider closed semilinear invariants.

We first normalize the Orbit instance, which amounts to putting the matrix $A$ in Jordan normal form, and eliminating some easy instances. This is described in Section 5.1.

We then eliminate some positive cases in Section 5.2. More precisely, we construct invariants whenever one of the three following conditions holds:
$A$ has an eigenvalue of modulus greater than $1$;
$A$ has an eigenvalue of modulus less than $1$;
$A$ has a Jordan block of size at least $2$ with an eigenvalue that is a root of unity.


We are now left with instances where all eigenvalues are of modulus 1 and are not roots of unity, which is the most involved part of the paper. In this setting, we exhibit the minimal semilinear invariant $I_{\min}$ containing the orbit of $x$. In particular, there exists a separating semilinear invariant (namely, $I_{\min}$) if and only if $y \notin I_{\min}$. This part is explained in Section 5.3.
5.1 Normalization
As a first step, recall that every matrix $A$ can be written in the form $A = P J P^{-1}$, where $P$ is invertible and $J$ is in Jordan normal form. The following lemma transfers semilinear invariants through the change-of-basis matrix $P$.
Lemma 4
Let $(A, x, y)$ be an Orbit instance, and $P$ an invertible matrix in $\overline{\mathbb{Q}}^{d \times d}$. Construct the Orbit instance $(P^{-1} A P, P^{-1} x, P^{-1} y)$. Then $I$ is a semilinear invariant for $(A, x, y)$ if, and only if, $P^{-1} I$ is a semilinear invariant for $(P^{-1} A P, P^{-1} x, P^{-1} y)$.

Proof
First of all, $I$ is semilinear if, and only if, $P^{-1} I$ is semilinear. We have:
$x \in I$ if, and only if, $P^{-1} x \in P^{-1} I$;
$A I \subseteq I$ if, and only if, $(P^{-1} A P)(P^{-1} I) \subseteq P^{-1} I$;
$y \notin I$ if, and only if, $P^{-1} y \notin P^{-1} I$.
This concludes the proof.
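The change-of-basis argument can be sanity-checked on a small example: orbits of $A = P J P^{-1}$ are the images under $P$ of orbits of $J$. A sketch with a hand-picked $2 \times 2$ Jordan block and integer data (the matrices are our illustration):

```python
# J = [[2, 1], [0, 2]] is a Jordan block; P is an invertible change of basis.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]

J = [[2, 1], [0, 2]]
P = [[1, 0], [1, 1]]
Pinv = [[1, 0], [-1, 1]]
A = matmul(matmul(P, J), Pinv)   # A = P J P^{-1}

x = [1, 1]
u, v = x, matvec(Pinv, x)
for _ in range(5):
    u = matvec(A, u)             # orbit of x under A
    v = matvec(J, v)             # orbit of P^{-1} x under J
    assert u == matvec(P, v)     # A^n x = P (J^n (P^{-1} x)), exactly

print(u)  # -> [32, 32]
```

Consequently, a set $I$ is stable under $A$ exactly when $P^{-1} I$ is stable under $J$, which is the content of Lemma 4.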
Thanks to Lemma 4, we can reduce the problem of the existence of semilinear invariants for Orbit instances to cases in which the matrix $A$ is in Jordan normal form, i.e., $A$ is a block diagonal matrix whose blocks (called Jordan blocks) are of the form:
$$J_k(\lambda) = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}$$
Note that this transformation can be achieved in polynomial time [Cai00, CLZ00]. Formally, a Jordan block is a matrix $J_k(\lambda) = \lambda I_k + N_k$ with $I_k$ the identity matrix of size $k$ and $N_k$ the matrix with $1$'s on the upper diagonal and $0$'s everywhere else. The number $\lambda$ is an eigenvalue of $A$. We will use the notation $J_k(\lambda)$ for the Jordan block of size $k$ with eigenvalue $\lambda$. A Jordan block of dimension one is called diagonal, and $A$ is diagonalisable if, and only if, all its Jordan blocks are diagonal.
The dimensions of the matrix $A$ are indexed by pairs $(J, i)$, where $J$ ranges over the Jordan blocks and where $1 \leq i \leq k$, with $k$ the dimension of the Jordan block $J$. For instance, if the matrix $A$ has two Jordan blocks, $J$ of dimension $2$ and $J'$ of dimension $1$, then the three dimensions of $A$ are $(J, 1), (J, 2)$ (corresponding to the Jordan block $J$) and $(J', 1)$ (corresponding to the Jordan block $J'$).
For a point $x$ and a subset $D$ of the dimensions, let $x_D$ be the projection of $x$ on the dimensions in $D$, and extend this notation to matrices. For instance, $x_J$ is the point corresponding to the dimensions of the Jordan block $J$, and $x_{(J, > i)}$ is $x$ projected on the coordinates of the Jordan block $J$ whose index is greater than $i$. We write $x_{\overline{D}}$ for the coordinates which are not in $D$.
There are a few degenerate cases which we handle now. We say that an Orbit instance in Jordan normal form is normalized if:

There is no Jordan block associated with the eigenvalue $0$; equivalently, $A$ is invertible.
For each Jordan block $J$, the last coordinate of the point $x_J$ is not zero.
There is no diagonal Jordan block with an eigenvalue which is a root of unity.
Any Jordan block with an eigenvalue of modulus $1$ has an eigenvalue which is not a root of unity.
Lemma 5
The existence of semilinear invariants for Orbit instances reduces to the same problem for normalized Orbit instances in Jordan normal form.
5.2 Positive cases
Many Orbit instances present a divergence which we can exploit to construct a semilinear invariant. Such behaviours are easily identified once the matrix $A$ is in Jordan normal form, as properties of its Jordan blocks. We isolate three such cases.

If there is an eigenvalue of modulus greater than $1$, call $J$ its Jordan block. Projected to the last coordinate of $J$, the orbit of $x$ diverges to infinity in modulus (see Example 1). A long enough “initial segment”, together with the complement of its convex hull (on the last coordinate of $J$), constitutes a semilinear invariant. See Appendix 0.C for details.

If there is an eigenvalue of modulus less than $1$ in block $J$, the situation is quite similar, with a convergence towards $0$. However, the construction we give is more involved, the reason being that we may not just concentrate on the last nonzero coordinate of $x_J$: the corresponding coordinate of $y$ may very well be $0$, which belongs to the closure of the orbit on this coordinate, even though on the full block $y_J$ lies outside the closure of the orbit. We show how to construct, for any $\varepsilon > 0$, a semilinear set, stable under $A$, contained in the union of the ball $B(0, \varepsilon)$ and an initial segment of the orbit. Picking $\varepsilon$ small enough ensures that the ball avoids $y_J$, and adding a long enough initial segment then yields a separating semilinear invariant. See Appendix 0.D for more details.

Finally, if there is an eigenvalue which is a root of unity on a Jordan block $J$ of size at least 2 (that is, a non-diagonal block), then the penultimate coordinate of $J$ along the orbit goes to infinity in modulus. In this case, the orbit on this coordinate is contained in a union of half-lines, which we cut far enough away from $0$, adding an initial segment, to build a semilinear invariant. See Appendix 0.E for details.
Note that in each of these cases, we concentrate on the corresponding (stable) eigenspace, construct a separating semilinear invariant for this restriction of the problem, and extend it to the full space by allowing any value on other coordinates.
5.3 Minimal invariants
We have now reduced to instances where all eigenvalues have modulus 1 and are not roots of unity. Intuitively, in this setting, semilinear invariants fail, as they are not precise enough to exploit subtle multiplicative relations that may hold among eigenvalues. However, it may be the case that some coarse information in the input can still be stabilised by a semilinear invariant, for instance if two synchronised blocks are exactly identical (see Examples 4 and 5 for more elaborate cases).
We start by identifying exactly where semilinear invariants fail. Call two eigenvalues equivalent if their quotient is a root of unity (that is, they have a multiplicative relationship of degree 1). We show that whenever no two distinct eigenvalues are equivalent, the only stable semilinear sets are trivial. As a consequence, computing the minimal semilinear invariant in this setting is easy, as it is basically the whole space (except where $x$ is $0$). However, this lower bound (nonexistence of a semilinear invariant) constitutes the most technically involved part. Our proof is inductive, with the diagonal case as base case, where it makes crucial use of the Skolem-Mahler-Lech theorem. This is the subject of Appendix 0.F.1.
When the matrix $A$ has several equivalent eigenvalues, we show how to iteratively reduce the dimension in order to eventually fall into the previous scenario. Roughly speaking, if $A$ is comprised of two identical blocks $B$, we show that it suffices to compute a minimal invariant for $B$, since (with obvious notation) the corresponding product construction yields a minimal invariant for $A$. This is achieved by first assuming that all equivalent eigenvalues are in fact equal, and then reducing to this case by considering a large enough power of $A$, in Appendix 0.F.1.
References
Appendix 0.A Proof of Theorem 3.3
We now turn to the second proof, without the use of guards. The idea is similar; however, the reduction is far more involved, as the use of a guard is now replaced by reachability tests on additional variables.
We reduce an instance of the PCP problem. Let be an instance of the PCP problem. We denote and