Efficient Algorithms for Asymptotic Bounds on Termination Time in VASS

04/29/2018 ∙ by Tomáš Brázdil, et al. ∙ Institute of Science and Technology Austria ∙ Masarykova univerzita

Vector Addition Systems with States (VASS) provide a well-known and fundamental model for the analysis of concurrent processes, parameterized systems, and are also used as abstract models of programs in resource bound analysis. In this paper we study the problem of obtaining asymptotic bounds on the termination time of a given VASS. In particular, we focus on the practically important case of obtaining polynomial bounds on termination time. Our main contributions are as follows: First, we present a polynomial-time algorithm for deciding whether a given VASS has a linear asymptotic complexity. We also show that if the complexity of a VASS is not linear, it is at least quadratic. Second, we classify VASS according to quantitative properties of their cycles. We show that certain singularities in these properties are the key reason for non-polynomial asymptotic complexity of VASS. In absence of singularities, we show that the asymptotic complexity is always polynomial and of the form Θ(n^k), for some integer k≤ d, where d is the dimension of the VASS. We present a polynomial-time algorithm computing the optimal k. For general VASS, the same algorithm, which is based on a complete technique for the construction of ranking functions in VASS, produces a valid lower bound, i.e., a k such that the termination complexity is Ω(n^k). Our results are based on new insights into the geometry of VASS dynamics, which hold the potential for further applicability to VASS analysis.


1. Introduction

Vector Addition Systems with States (VASS) are a fundamental model widely used in program analysis. Intuitively, a VASS consists of a finite set of control states and transitions between the control states, and a set of counters that hold non-negative integer values, where at every transition between the control states each counter is updated by a fixed integer value. A configuration of a given VASS is determined by the current control state and the vector of current counter values.

One of the most basic problems studied in program analysis is termination that, given a program, asks whether it always terminates. For VASS, the problem whether all paths initiated in a given configuration reach a terminal configuration is EXPSPACE-complete. Here, a terminal configuration is a configuration where the computation is “stuck” because all outgoing transitions would decrease some counter to a negative value. The EXPSPACE-hardness follows from (Lipton, 1976), and the upper bound from (Yen, 1992; Atig and Habermehl, 2011). Contrasting to this, the problem of structural VASS termination, which asks whether all configurations of a given VASS terminate, is solvable in polynomial time (Kosaraju and Sullivan, 1988). This is encouraging, because structural termination guarantees termination for all instances of the parameters represented by the counter values (i.e., all inputs, all instances of a given parameterized system, etc.).

The quantitative variant of the termination question asks whether a given program terminates in at most f(n) steps for every input of size n, where f is some function. A significant research effort has recently been devoted to this question in the program analysis literature: recent projects include SPEED (Gulwani et al., 2009; Gulwani and Zuleger, 2010), COSTA (Albert et al., 2009), RAML (Hoffmann et al., 2012), Rank (Alias et al., 2010), Loopus (Sinn et al., 2014, 2017), AProVE (Giesl et al., 2017), CoFloCo (F.-Montoya and Hähnle, 2014), C4B (Carbonneaux et al., 2017). The cited projects target general-purpose programming languages with the goal of designing sound (but incomplete) analyses that work well in practice. The question whether sound and complete techniques can be developed for restricted classes of programs (such as VASS), however, has received considerably less attention.

Our contribution. In this work, we study the quantitative variant of structural VASS termination. The termination complexity of a given VASS is a function ℒ such that ℒ(n) is the length of the longest computation initiated in a configuration in which all counter values are bounded by n. We concentrate on polynomial and particularly on linear asymptotic bounds for termination complexity, which seem most relevant for practical applications. Our main results can be summarized as follows:

Linear bounds. We show that the problem whether ℒ ∈ O(n) is decidable in polynomial time. Our proof reveals that if the termination complexity is not linear, then it is at least quadratic (or the VASS is non-terminating). Hence, there is no VASS with asymptotic termination complexity “between” Θ(n) and Θ(n²). In addition, for strongly connected linear VASS, we compute in polynomial time a constant c such that ℒ(n)/n converges to c for n → ∞. Further, a linear VASS always has a ranking function that witnesses the linear termination complexity; this ranking function is also computable in polynomial time.

Polynomial bounds. We show that the termination complexity of a given VASS is highly influenced by the properties of normals of quasi-ranking functions, see Section 4. We start with strongly connected VASS, and classify them into the following three types:

  • (A) Non-terminating VASS.

  • (B) Positive normal VASS: Terminating VASS for which there exists a quasi-ranking function such that each component of its normal is positive.

  • (C) Singular normal VASS: Terminating VASS for which there exists a quasi-ranking function such that each component of its normal is non-negative and (B) does not hold.

This classification is efficient, i.e., we can decide in polynomial time to which class a given VASS belongs. We show that each type (B) VASS of dimension d has termination complexity in Θ(n^k), where k ≤ d is an integer, and we show that this k is computable in polynomial time. Termination complexity of a type (C) VASS is not necessarily polynomial, and hence singularities in the normal are the key reason for high asymptotic bounds in VASS. For a given type (C) VASS, we show how to compute a valid lower bound, i.e., a k such that the termination complexity is Ω(n^k) (in general, this bound does not have to be tight). Our tight analysis for type (B) VASS extends to general (not necessarily strongly connected) VASS where each SCC determines a type (B) VASS.

Ranking Functions and Completeness. Algorithmically the result on polynomial bounds is established by a recursive procedure: the procedure computes quasi-ranking functions which establish that certain transitions can only be taken a linear number of times; these transitions are then removed and the algorithm recurses on the remaining strongly-connected components. We show that if there is no quasi-ranking function, then the VASS does not terminate, i.e., our ranking function construction is complete. To the best of our knowledge, this is the first completeness result for the construction of ranking functions for VASS.

Technically, our results are based on new insights into the geometry of VASS dynamics, some of which are perhaps interesting on their own and can enrich the standard toolbox of techniques applicable to VASS analysis.

  void main(uint n) {
      uint i = n, j = n;
  l1: while (i > 0) {
          i--;
          j++;
  l2:     while (j > 0 && *)
              j--;
      }
  }

Figure 1. (a) a program, (b) VASS; the transitions of the VASS are labelled by the counter updates (-1,1), (0,-1), and (0,0)

Figure 2. (a) a process template with transitions guarded by and updating the Boolean variable d (d==ff, d:=tt; d:=ff; d==tt), (b) VASS; the transitions of the VASS are labelled by the counter updates (-1,1,0), (-1,1,0), (-1,0,1), and (1,-1,0)

Motivation and Illustration of our Results.

In previous work we have described automated techniques for the complexity analysis of imperative programs, which use VASS (and extensions) as a backend (Sinn et al., 2014, 2017). For example, our techniques allow to abstract the program given in Fig. 1 (a) to the VASS 𝒜 in Fig. 1 (b). 𝒜 has two locations l1 and l2, which correspond to the loop headers of the program. 𝒜 has dimension two in order to represent the variables i and j. The transitions of 𝒜 correspond to the variable increments/decrements. In contrast to our previous approaches (Sinn et al., 2014, 2017), the analysis in this paper is guaranteed to compute tight bounds: we obtain the precise linear termination complexity Θ(n) for 𝒜 and can construct a linear ranking function (our construction is not guaranteed to return one particular ranking function, but it will always find a linear one).
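The linear behaviour of this VASS can be observed experimentally. The following minimal sketch (our own illustration, not the paper's algorithm) reads the state names and transitions off Fig. 1 (b) and computes the longest computation by memoized brute force; this is feasible here because the configuration graph of this particular VASS is acyclic:

```python
from functools import lru_cache

# Transitions of the VASS from Fig. 1 (b): (source, counter update, target).
TRANS = [
    ("l1", (-1, 1), "l2"),   # outer loop: i--, j++
    ("l2", (0, -1), "l2"),   # inner loop: j--
    ("l2", (0, 0), "l1"),    # leave the inner loop
]

@lru_cache(maxsize=None)
def longest(state, i, j):
    """Length of the longest computation from configuration state(i, j)."""
    best = 0
    for src, (di, dj), dst in TRANS:
        # A transition is enabled only if no counter would become negative.
        if src == state and i + di >= 0 and j + dj >= 0:
            best = max(best, 1 + longest(dst, i + di, j + dj))
    return best

print([longest("l1", n, n) for n in range(1, 6)])
```

Starting from l1 with both counters equal to n, the computed lengths grow linearly in n, in line with the Θ(n) termination complexity claimed above.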

We illustrate VASSs as models of concurrent systems: Fig. 2 (a) states a process template. A concurrent system consists of n copies of this process template. The processes communicate via the Boolean variable d. The concurrent system is equivalently represented by the VASS ℬ in Fig. 2 (b). ℬ has two locations, which represent the global state, i.e., the two values of the variable d. ℬ has dimension three in order to represent the number of processes in each of the three local states of the template. The transitions of ℬ reflect the transitions of the process template, e.g., the transition with update (1,-1,0) means that one process moves from the second local state to the first. We are interested in the parameterized verification problem, i.e., to study the termination of the concurrent system for all system sizes n. Our results in this paper establish ℒ ∈ O(n²), i.e., after quadratically many steps of the concurrent system there is no more process that can take another step.

Related Work. Results on VASS. The model of VASS (Karp and Miller, 1969), or equivalently Petri nets, is a fundamental model for parallel programs (Esparza and Nielsen, 1994; Karp and Miller, 1969) as well as parameterized systems (Bloem et al., 2016; Aminof et al., 2015b, a). The termination problems (counter-termination, control-state termination) as well as the related problems of boundedness and coverability have been a rich source of theoretical problems that have been widely studied (Lipton, 1976; Rackoff, 1978; Esparza, 1998; Esparza et al., 2014; Bozzelli and Ganty, 2011). The complexity of the termination problem with fixed initial configuration is EXPSPACE-complete (Lipton, 1976; Yen, 1992; Atig and Habermehl, 2011). Besides the termination problem, the more general reachability problem, which asks, given a VASS together with an initial and a final configuration, whether there exists a path between them, has also been studied (Mayr, 1984; Kosaraju, 1982; Leroux, 2011). The reachability problem is decidable (Mayr, 1984; Kosaraju, 1982; Leroux, 2011) and EXPSPACE-hard (Lipton, 1976), and the current best-known upper bound is cubic Ackermannian (Leroux and Schmitz, 2015), a complexity class belonging to the third level of a fast-growing complexity hierarchy introduced in (Schmitz, 2016). Functions (non)computable by VASS are studied in (Leroux and Schnoebelen, 2014). Our algorithm for computing polynomial bounds can be seen as the dual (in the sense of linear programming) of the algorithm of (Kosaraju and Sullivan, 1988); this connection is the basis for the completeness of our ranking function construction (we further comment on the connection to (Kosaraju and Sullivan, 1988) in Section 4).

Ranking functions and extensions. Ranking functions for intraprocedural analysis have been studied widely in the literature. We restrict ourselves here to approaches which present complete methods for the construction of linear/polynomial ranking functions (Podelski and Rybalchenko, 2004; Alias et al., 2010; Yang et al., 2010); in contrast to this paper these approaches target general programs and do not show that the non-existence of a linear/polynomial ranking function implies the non-termination of the program.

The problem of existence of infinite computations in VASS has been studied in the literature. Polynomial-time algorithms have been presented in (Chatterjee et al., 2010; Velner et al., 2015) using results of (Kosaraju and Sullivan, 1988). In the more general context of games played on VASS, even deciding the existence of an infinite computation is coNP-complete (Chatterjee et al., 2010; Velner et al., 2015), and various algorithmic approaches based on the hyperplane-separation technique have been studied (Chatterjee and Velner, 2013; Jurdzinski et al., 2015; Colcombet et al., 2017).

2. Preliminaries

We use ℕ, ℤ, ℚ, and ℝ to denote the sets of non-negative integers, integers, rational numbers, and real numbers, respectively. The subsets of all positive elements of ℕ, ℚ, and ℝ are denoted by ℕ⁺, ℚ⁺, and ℝ⁺. Further, we use ℕ∞ to denote the set ℕ ∪ {∞}, where ∞ is treated according to the standard conventions. The cardinality of a given set M is denoted by |M|. When no confusion arises, we also use |r| to denote the absolute value of a given number r.

Given a function f : ℕ → ℕ∞, we use O(f) and Ω(f) to denote the sets of all g : ℕ → ℕ∞ such that g(n) ≤ a·f(n) and g(n) ≥ b·f(n), respectively, for all sufficiently large n, where a, b > 0 are some constants. If g ∈ O(f) and g ∈ Ω(f), we write g ∈ Θ(f).

Let I be an arbitrary index set. Elements of ℝ^I are denoted by bold letters such as u, v. The component of v of index i ∈ I is denoted by v(i). For a matrix A we denote the element in the row of index i and the column of index j by A(i,j), and by Aᵀ the transpose of A. If the index set is of the form I = {1, 2, …, d} for some positive integer d, we write ℝ^d instead of ℝ^I, i.e., for v ∈ ℝ^d we have v = (v(1), …, v(d)). We use 𝟎 and 𝟏 to denote the constant vectors where all components are equal to 0 and 1, respectively. The scalar product of u, v ∈ ℝ^d is denoted by u·v, i.e., u·v = u(1)·v(1) + ⋯ + u(d)·v(d). The other standard operations and relations on ℝ such as +, ≤, or < are extended to ℝ^d in the component-wise way. In particular, v is positive if v > 𝟎, i.e., all components of v are positive. The norm of v is defined by ‖v‖ = maxᵢ |v(i)|.

Half-spaces and Cones.

An open half-space of ℝ^d determined by a normal vector n ∈ ℝ^d, where n ≠ 𝟎, is the set of all v ∈ ℝ^d such that v·n > 0. A closed half-space is defined in the same way but the above inequality is non-strict. Given a finite set of vectors V = {v₁, …, v_k}, we use cone(V) to denote the set of all vectors of the form a₁·v₁ + ⋯ + a_k·v_k, where aᵢ is a non-negative real constant for every 1 ≤ i ≤ k.

2.1. Syntax and semantics of VASS

In this subsection we present the syntax of VASS, which are represented as finite state graphs with transitions labelled by vectors of counter changes.

Definition 2.1.

Let d ≥ 1. A d-dimensional vector addition system with states (VASS) is a pair 𝒜 = (Q, T), where Q is a finite set of states and T ⊆ Q × ℤ^d × Q is a finite set of transitions such that for every q ∈ Q there exist p ∈ Q and u ∈ ℤ^d such that (q, u, p) ∈ T.

We denote by |𝒜| the number |Q| + |T|. The encoding size of 𝒜 is denoted by ‖𝒜‖ (the integers representing counter updates are written in binary).

In our discussion it is often beneficial to express constraints on transitions using matrix notation. We define the update matrix U by setting U(i,t) = u(i) for all 1 ≤ i ≤ d and all transitions t = (q, u, p) ∈ T. We also define the oriented incidence matrix F by setting F(p,t) = 1 resp. F(p,t) = -1, if t = (q, u, p) resp. t = (p, u, q) for q ≠ p, and F(p,t) = 0 otherwise. We note that every column of F, corresponding to a transition t, either contains exactly one 1 entry and exactly one -1 entry (in case the source and target of transition t are different) or only 0 entries (in case the source and target of transition t are the same).

Example 2.2.

The VASS 𝒜 from Fig. 1 (b) has two states l1, l2 and three transitions t1 = (l1, (-1,1), l2), t2 = (l2, (0,-1), l2), t3 = (l2, (0,0), l1). The matrices U and F look as follows:

U = ( -1  0  0 )      F = ( -1  0  1 )
    (  1 -1  0 )          (  1  0 -1 )

Here the rows of F correspond to the states l1, l2 and the columns of both matrices to the transitions t1, t2, t3.

Hence, the columns of U are the update vectors of the transitions t1, t2, t3.
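Both matrices can be assembled mechanically from the transition list. A small sketch (state and transition names as in the example; the sign convention for F — one -1 entry for the source, one 1 entry for the target, an all-zero column for a self-loop — follows the definition above):

```python
# Build the update matrix U and the oriented incidence matrix F
# for the VASS of Example 2.2 (states l1, l2; transitions t1, t2, t3).
states = ["l1", "l2"]
trans = [("l1", (-1, 1), "l2"),   # t1
         ("l2", (0, -1), "l2"),   # t2
         ("l2", (0, 0), "l1")]    # t3

d = 2
# Rows of U are the counters, columns are the transitions.
U = [[u[i] for (_, u, _) in trans] for i in range(d)]
# Rows of F are the states, columns are the transitions.
F = [[(1 if (dst == q != src) else -1 if (src == q != dst) else 0)
      for (src, _, dst) in trans] for q in states]

print(U)  # columns are the update vectors of t1, t2, t3
print(F)  # every column of F sums to 0
```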

Paths and cycles.

A finite path in 𝒜 of length k is a finite sequence α of the form p₀, u₁, p₁, u₂, …, u_k, p_k where (p_{i-1}, uᵢ, pᵢ) ∈ T for all 1 ≤ i ≤ k. If p₀ = p_k, then α is a cycle. A cycle is simple if all p₀, …, p_{k-1} are pairwise different. The effect of α, denoted by eff(α), is the sum u₁ + ⋯ + u_k. Given a set of paths P, we denote by eff(P) the sum of the effects of all paths in P. Let Inc = { eff(γ) | γ is a simple cycle of 𝒜 }. The elements of Inc are called increments.

Given two finite paths α = p₀, u₁, …, p_k and β = q₀, v₁, …, q_m such that p_k = q₀, we use α ⊙ β to denote the finite path p₀, u₁, …, p_k, v₁, …, q_m. A multi-cycle in 𝒜 is a multiset of simple cycles. The length of a multi-cycle is the sum of the lengths of all its cycles.

Let α be a finite path in 𝒜. A decomposition of α into simple cycles, denoted by decomp(α), is a multi-cycle, i.e., a multiset of simple cycles, defined recursively as follows:

  • If α does not contain any simple cycle, then decomp(α) is the empty multiset.

  • If α = β ⊙ γ ⊙ δ, where γ is the first simple cycle occurring in α, then decomp(α) = {γ} ⊎ decomp(β ⊙ δ).

Observe that if decomp(α) is empty, then the length of α is at most |Q|. Since the length of every simple cycle is bounded by |Q|, the length of α is asymptotically the same as the number of elements of decomp(α), assuming a fixed VASS 𝒜. Considering β to be the remainder of α after all simple cycles of decomp(α) have been removed by the above procedure, we obtain eff(α) = eff(β) + eff(decomp(α)).
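The recursive definition of the decomposition can be sketched iteratively. In the sketch below (our own illustration), a path is represented only by its sequence of visited states, which suffices for the decomposition itself; the first simple cycle is the one closed by the first repeated state:

```python
def decomp(path):
    """Repeatedly cut out the first simple cycle of `path`, as in the
    recursive definition of decomp.  Returns the list of extracted
    simple cycles and the cycle-free remainder of the path."""
    cycles = []
    path = list(path)
    while True:
        seen = {}
        cut = None
        for idx, state in enumerate(path):
            if state in seen:             # first repeated state closes
                cut = (seen[state], idx)  # the first simple cycle
                break
            seen[state] = idx
        if cut is None:
            return cycles, path           # no simple cycle left
        lo, hi = cut
        cycles.append(path[lo:hi + 1])
        path = path[:lo + 1] + path[hi + 1:]  # splice the cycle out

cycles, rest = decomp(["a", "b", "a", "b", "c", "a"])
print(cycles)
print(rest)
```

On the example path, the procedure first removes the cycle a,b,a, then the cycle a,b,c,a, and is left with the single-state remainder.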

Let 𝒜 = (Q, T) be a VASS. A sub-VASS of 𝒜 is a VASS 𝒜′ = (Q′, T′) such that Q′ ⊆ Q and T′ ⊆ T. A VASS is strongly connected if for all states p, q there is a finite path from p to q.

A strongly connected component (SCC) of 𝒜 is a maximal strongly connected sub-VASS of 𝒜.

Configurations and computation.

A configuration of 𝒜 is a pair pv, where p ∈ Q and v ∈ ℕ^d. The set of all configurations of 𝒜 is denoted by C(𝒜). The size of pv is defined as ‖v‖. Given n ∈ ℕ, we say that pv is n-bounded if ‖v‖ ≤ n.

A computation initiated in p₀v₀ is a finite sequence of configurations p₀v₀, p₁v₁, …, p_k v_k such that there exists a path p₀, u₁, p₁, …, u_k, p_k where vᵢ = v_{i-1} + uᵢ for all 1 ≤ i ≤ k. The length of a given computation is the length of its (unique) corresponding path.

2.2. Termination Complexity of VASS

Definition 2.3.

Let 𝒜 be a d-dimensional VASS. For every configuration pv of 𝒜, let L(pv) ∈ ℕ∞ be the least ℓ such that the length of every finite computation initiated in pv is bounded by ℓ. The termination complexity of 𝒜 is a function ℒ : ℕ → ℕ∞ defined by ℒ(n) = max { L(pv) | pv is an n-bounded configuration of 𝒜 }. If ℒ(n) = ∞ for some n, we say that 𝒜 is non-terminating, otherwise it is terminating.

Observe that if 𝒜 is non-terminating, then ℒ(n) = ∞ for all sufficiently large n. Further, if 𝒜 is terminating, then ℒ ∈ Ω(n). In particular, if ℒ ∈ O(n), we also have ℒ ∈ Θ(n).

3. Linear Termination Time

In this section, we give a complete and effective characterization of all VASS with linear termination complexity. Let us consider a VASS 𝒜 = (Q, T). We assume that 𝒜 is strongly connected unless explicitly stated otherwise.

Consider an integer solution x ≥ 𝟎 to the constraint F x = 𝟎, where F is the oriented incidence matrix of 𝒜 and x is indexed by the transitions. Note that x induces a multi-cycle of 𝒜. Indeed, if x(t) > 0 for some transition t, then, because F x = 𝟎, there is a transition t′ with x(t′) > 0 such that the source state of t′ is equal to the target state of t. Hence one may trace a path over transitions with positive value in x that eventually leads to a simple cycle. Subtracting one from x(t) for all t on the simple cycle, we obtain a vector still satisfying the above constraints. Repeating this process we eventually end up with the zero vector and the desired multi-cycle.
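The extraction procedure just described can be sketched directly. The helper below is our own illustration: it assumes its input x is a non-negative integer solution of F x = 𝟎 (so that flow conservation guarantees the walk can always continue), traces transitions with positive count until a state repeats, and subtracts the simple cycle found:

```python
def extract_multicycle(trans, x):
    """Turn a non-negative integer flow x (indexed like `trans`,
    with F.x = 0) into a multi-cycle: a list of simple cycles,
    each given as a list of transition indices."""
    x = list(x)
    cycles = []
    while any(c > 0 for c in x):
        start = next(i for i, c in enumerate(x) if c > 0)
        state = trans[start][0]
        walk, pos = [], {}
        while state not in pos:           # walk until a state repeats
            pos[state] = len(walk)
            t = next(i for i, c in enumerate(x)
                     if c > 0 and trans[i][0] == state)
            walk.append(t)
            state = trans[t][2]
        cycle = walk[pos[state]:]         # the closing simple cycle
        for t in cycle:
            x[t] -= 1                     # subtract it from the flow
        cycles.append(cycle)
    return cycles

trans = [("l1", (-1, 1), "l2"),
         ("l2", (0, -1), "l2"),
         ("l2", (0, 0), "l1")]
print(extract_multicycle(trans, [1, 2, 1]))
```

For the flow (1, 2, 1) on the VASS of Fig. 1 (b), the procedure yields the self-loop t2 twice and the simple cycle t1, t3 once.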

Note that 𝟏·x is equal to the number of transitions traced along the multi-cycle. So, roughly speaking, it suffices to add a constraint U x ≥ -n·𝟏 (here U is the update matrix) to characterize multi-cycles that, when appropriately executed in an n-bounded configuration, produce a zero-avoiding computation. However, there are several issues with such a formulation, namely the dependency of the constraints on the parameter n and the demand for an integer solution.

So we transform the constraints into the following relaxed optimization problem to completely characterize the linear computational complexity:


rational LP: maximize 𝟏·x subject to F x = 𝟎, U x ≥ -𝟏, x ≥ 𝟎
Theorem 3.1.

Let 𝒜 be a strongly connected VASS. We consider the above LP over ℚ.

  1. If the LP has an optimal solution with objective value c, then c·n is the precise asymptotic computational complexity of 𝒜, i.e., ℒ(n)/n converges to c for n → ∞.

  2. If the LP is unbounded, then the computational complexity of 𝒜 is at least quadratic.

Intuition: Let x be a rational solution of the LP with 𝟏·x = c, and consider a non-negative integer n. Let z = ℓ·x, where ℓ is the largest multiple of m with ℓ ≤ n and m is the least common multiple of the denominators of x. Since F z = 𝟎 and z ≥ 𝟎, the vector z specifies a multi-cycle of length 𝟏·z = c·ℓ. Moreover, z satisfies U z ≥ -ℓ·𝟏 ≥ -n·𝟏, which means that executing all transitions of the multi-cycle cannot decrease the counters by more than n. By executing cycles of the multi-cycle in a carefully arranged order initiated in an n-bounded configuration, we obtain a zero-avoiding computation whose length is, roughly, c·n.
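The LP can be solved concretely for the VASS of Fig. 1 (b). The sketch below assumes NumPy and SciPy are available and uses the matrices of Example 2.2; since linprog minimizes, the objective is negated and the constraint U x ≥ -𝟏 is rewritten as -U x ≤ 𝟏:

```python
import numpy as np
from scipy.optimize import linprog

# Matrices of the VASS from Fig. 1 (b) / Example 2.2.
U = np.array([[-1,  0, 0],
              [ 1, -1, 0]])   # update matrix (rows: counters i, j)
F = np.array([[-1,  0, 1],
              [ 1,  0, -1]])  # oriented incidence matrix (rows: l1, l2)

# maximize 1.x  subject to  F x = 0,  U x >= -1,  x >= 0
res = linprog(c=-np.ones(3), A_ub=-U, b_ub=np.ones(2),
              A_eq=F, b_eq=np.zeros(2), bounds=(0, None))
print(res.x, -res.fun)
```

For this VASS the optimum is 4 (attained at x = (1, 2, 1): once around the outer cycle t1, t3 and twice around the self-loop t2), which matches the slope of the linear termination complexity observed for the program of Fig. 1 (a).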

On the other hand, if the LP is unbounded, we show that there is then a solution y ≥ 𝟎 satisfying F y = 𝟎, U y ≥ 𝟎, and 𝟏·y > 0. From this we obtain multi-cycles of arbitrary length whose overall effect is non-negative. Note that this does not mean that the VASS is non-terminating, since the cycles need to be connected into a single computation. However, we show that they can always be connected into a computation of at least quadratic length.

Proof of Theorem 3.1 (A). Assume the LP is bounded. Let x* be an optimal solution. We set c = 𝟏·x*. We first show the upper bound. We fix some n. We consider the longest computation starting from some n-bounded configuration. Let α be the path associated to this computation. Because we are interested only in asymptotic behaviour, we can assume that α is a cycle. Let y(t) denote the number of occurrences of transition t on α. We note that U y ≥ -n·𝟏 because the starting configuration of the considered worst-case computation is n-bounded. Because α is a cycle, we have F y = 𝟎. Hence, y/n is a feasible point of the LP and we get 𝟏·y/n ≤ c. Thus, ℒ(n) = 𝟏·y ≤ c·n. Because this holds for all n, we can conclude ℒ ∈ O(n).

We show the lower bound. We fix some n. Let m be the least common multiple of the denominators of x*. We set z = m·x*. We have F z = 𝟎, U z ≥ -m·𝟏, and 𝟏·z = m·c. We consider the multi-cycle M associated to z. Let β be some cycle of 𝒜 which visits each state at least once. Because β visits every state at least once, we can combine β and copies of the multi-cycle M into a single cycle γ. Let g be the length of γ; we have g ≥ 𝟏·z = m·c. Let q be the start and end state of γ. Starting from a configuration with state q and all counters equal to n, we can execute the cycle γ on the order of n/m times (rounded down if needed), which yields a computation of length on the order of c·n; a more careful choice of the number of copies of M inserted into γ establishes the convergence of ℒ(n)/n to c.

We consider the configurations reached after successive executions of γ. We show by induction that γ can be executed one more time: in every step of γ each vector component decreases by at most a fixed constant, and the total effect of one execution of γ is at least eff(β) - m·𝟏; hence, for the chosen number of executions, all counters remain sufficiently large throughout.

Proof of Theorem 3.1 (B). Assume the LP is unbounded. We will show that there is then no open half-space H of ℝ^d such that Inc ⊆ H and the normal of H is non-positive. As we show later, this implies that the computational complexity of 𝒜 is at least quadratic. From the theory of linear programming we know that there is a direction in which the polyhedron given by F x = 𝟎, U x ≥ -𝟏 and x ≥ 𝟎 is unbounded and which increases the objective function 𝟏·x. Hence, there is a y ≥ 𝟎 with F y = 𝟎 and U y ≥ 𝟎 and 𝟏·y > 0. We consider the multi-cycle M extracted from the integer vector m·y, where m is the least common multiple of the denominators in y. Assume now for the sake of contradiction that there is an open half-space H of ℝ^d with non-positive normal n such that Inc ⊆ H. Let γ₁, …, γ_j be all simple cycles occurring in M. Because of eff(γᵢ) ∈ Inc ⊆ H we have n·eff(γᵢ) > 0 for all i, and hence

Σᵢ n·eff(γᵢ) > 0, which implies n·eff(M) > 0. On the other hand, we get n·eff(M) ≤ 0 from eff(M) = U·(m·y) ≥ 𝟎 and n ≤ 𝟎. A contradiction.

Now suppose there is no open half-space H of ℝ^d such that Inc ⊆ H and the normal of H is non-positive. We show that ℒ ∈ Ω(n²), i.e., there exist a state q and a constant c > 0 such that for all configurations qv, where all components of v are at least some sufficiently large n, there is a computation initiated in qv whose length is at least c·n².

The crucial point is that now there are increments m₁, …, m_k ∈ Inc and positive integers c₁, …, c_k such that

c₁·m₁ + ⋯ + c_k·m_k ≥ 𝟎 (1)

The above is a direct consequence of the following purely geometric lemma (proved in the appendix), instantiated with V = Inc.

Lemma 3.2.

Let V ⊆ ℝ^d be a finite set. If there is no open half-space H of ℝ^d such that V ⊆ H and the normal of H is non-positive, then there exist v₁, …, v_k ∈ V and positive integers c₁, …, c_k such that c₁·v₁ + ⋯ + c_k·v_k ≥ 𝟎.

As the individual simple cycles with effects m₁, …, m_k may proceed through disjoint sets of states, they cannot be trivially concatenated into one large cycle with non-negative effect. Instead, we fix a control state q and a cycle β initiated in q visiting all states of 𝒜. Further, for every 1 ≤ i ≤ k we fix a simple cycle γᵢ such that eff(γᵢ) = mᵢ. For every ℓ ∈ ℕ, let δ_ℓ be a cycle obtained from β by inserting precisely ℓ·cᵢ copies of every γᵢ, where 1 ≤ i ≤ k. Observe that the inequality (1) implies

eff(δ_ℓ) ≥ eff(β) (2)

For every configuration pv, let ℓ(pv) be the largest ℓ such that δ_ℓ is executable in pv. If such an ℓ does not exist, i.e. δ_ℓ is executable in pv for all ℓ, then 𝒜 is non-terminating (since, e.g. eff(δ_ℓ) must be non-negative in such a case), and the proof is finished. Hence, we can assume that ℓ(pv) is well-defined for each pv. Since the cycles β and γ₁, …, γ_k have fixed effects, there is a constant a > 0 such that for all configurations pv where all components of v (and thus also ℓ(pv)) are above some sufficiently large threshold we have that ℓ(pv) ≥ a·minᵢ v(i), i.e. ℓ(pv) grows asymptotically at least linearly with the minimal component of v. Now, for every configuration qv, consider a computation ρ initiated in qv defined inductively as follows: Initially, ρ consists just of qv; if the prefix of ρ constructed so far ends in a configuration qv′ such that all components of v′ are above the threshold (an event we call a successful hit), then the prefix is prolonged by executing the cycle δ_{ℓ(qv′)} (otherwise, the construction of ρ stops). Thus, ρ is obtained from qv by applying the inductive rule h times, where h is the number of successful hits before the construction of ρ stops. Denote by qvᵢ the configuration visited by ρ at the i-th successful hit. Now the inequality (2) implies that eff(δ_ℓ) ≥ eff(β), so there exists a constant b such that vᵢ₊₁ ≥ vᵢ - b·𝟏. In particular the decrease of all components of vᵢ is at most linear in i. This means that h ≥ e·n for all sufficiently large n, where e > 0 is a suitable constant and n is the minimal component of the initial vector v. But at the same time, upon each successful hit we have ℓ(qvᵢ) ≥ a·minⱼ vᵢ(j), so the length of the segment beginning with the i-th successful hit and ending with the (i+1)-th hit or with the last configuration of ρ is at least a·(n - b·i). Hence, the length of ρ is at least Σ_{i ≤ h} a·(n - b·i), i.e. quadratic. ∎

Finally, let us consider an arbitrary VASS 𝒜, not necessarily strongly connected. The following lemma allows us to characterize the linear complexity of termination for 𝒜 by applying Theorem 3.1 to its strongly connected components. The proof is straightforward.

Lemma 3.3.

Let d ≥ 1, and let 𝒜 be a d-dimensional VASS. Then ℒ ∈ O(n) iff the termination complexity of every SCC of 𝒜 is in O(n).

Corollary 3.4.

The problem whether the termination complexity of a given d-dimensional VASS 𝒜 is linear is solvable in time polynomial in ‖𝒜‖.

4. Polynomial Termination Time

We now concentrate on VASS with polynomial termination complexity. For simplicity, we restrict ourselves to strongly connected VASS. The general case is discussed at the end of the section.

A prominent notion in our analysis is that of a ranking function for VASS. Let 𝒜 = (Q, T) be a VASS. A linear map f for 𝒜 is a function assigning rational numbers to configurations of 𝒜 s.t. there exist a vector n ∈ ℚ^d and a weighting vector w ∈ ℚ^Q such that for each configuration pv of 𝒜 it holds that f(pv) = n·v + w(p). The vector n is called a normal of f. Given a linear map f, we say that a transition (p, u, q) of 𝒜 is f-ranked if n·u + w(q) - w(p) ≤ -1 and f-neutral if n·u + w(q) - w(p) = 0. A linear map f is a quasi-ranking function (QRF) for 𝒜 if n ≥ 𝟎 and if all transitions of 𝒜 are either f-ranked or f-neutral, and a ranking function (RF) if n ≥ 𝟎 and all transitions of 𝒜 are f-ranked. A quasi-ranking function is positive if each component of n is positive. Note that in the language of the update and incidence matrices U and F, the conditions can be phrased as follows: a linear map f is a QRF if and only if n ≥ 𝟎 and every entry of nᵀU + wᵀF is either 0 or, if there is a negative number in some column, it is at most -1. Similarly, a linear map f is a RF if and only if n ≥ 𝟎 and nᵀU + wᵀF ≤ -𝟏ᵀ.
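These conditions can be checked mechanically. The sketch below verifies a ranking function for the VASS of Fig. 1 (b); the particular normal (3, 1) and weights w(l1) = 0, w(l2) = 1 are our own illustrative choice, not necessarily the function the paper's construction would return:

```python
# Check the ranking-function condition for the VASS of Fig. 1 (b):
# a transition (p, u, q) is f-ranked when  n.u + w(q) - w(p) <= -1.
TRANS = [("l1", (-1, 1), "l2"),
         ("l2", (0, -1), "l2"),
         ("l2", (0, 0), "l1")]
n = (3, 1)                    # illustrative normal
w = {"l1": 0, "l2": 1}        # illustrative weighting vector

def drop(p, u, q):
    """Change of f along one transition."""
    return sum(ni * ui for ni, ui in zip(n, u)) + w[q] - w[p]

print([drop(*t) for t in TRANS])  # every entry is <= -1, so f is a RF
```

For this choice, f(l1, (n, n)) = 4n, which is consistent with the linear termination complexity of this VASS.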

The existence of ranking functions is already tightly connected to the question whether a given VASS has linear complexity, as shown in the following theorem.

Theorem 4.1.

A VASS 𝒜 has linear termination complexity if and only if there exists a ranking function for 𝒜.

Proof.

Consider the LP from Theorem 3.1. Its dual is the LP pictured in Figure 3.


minimize 𝟏·n subject to nᵀU + wᵀF ≤ -𝟏ᵀ, with n ≥ 𝟎
Figure 3. The rational LP that is dual to the LP of Theorem 3.1. Here the variables are the vectors n and w.

The dual LP has a feasible solution if and only if the original LP has an optimal solution (since the original LP always has a feasible solution, e.g., x = 𝟎), and that is the case if and only if the VASS is linear (due to Theorem 3.1). Assume there exists a feasible solution (n, w). Let f be the function such that

f(pv) = n·v + w(p),

i.e., its normal is n and its weighting vector is w. From the constraints of the dual LP we obtain for any transition (p, u, q)

n·u + w(q) - w(p) ≤ -1,

i.e., f is a RF. Conversely, let f be any RF with normal n and weighting vector w. Then (n, w) is a feasible solution for the dual LP. ∎

Below, we show that the complexity of a general VASS 𝒜 is highly influenced by the properties of the normals of QRFs for 𝒜. In particular, we classify each VASS into one of three types:

  • (A) Non-terminating VASS.

  • (B) Positive normal VASS: Terminating VASS for which there exists a QRF s.t. each component of the normal is positive.

  • (C) Singular normal VASS: Terminating VASS for which there exists a QRF s.t. each component of the normal is non-negative and (B) does not hold.

Results. We perform our complexity analysis on top of the above classification. We show that each non-trivial type (B) VASS of dimension d has termination complexity in Θ(n^k), where 1 ≤ k ≤ d is an integer. Condition (C) is not strong enough to guarantee polynomial termination complexity, and hence singularities in the QRF normals are the key reason for complex asymptotic bounds in VASS. On the algorithmic front, we present a polynomial-time algorithm which classifies VASS into one of the above classes. Moreover, for type (B) VASS the algorithm also computes the degree k such that the termination complexity of the VASS is Θ(n^k). Hence, we give a complete complexity classification of type (B) VASS. For type (C) VASS, the algorithm returns a valid lower bound: a k such that the termination complexity is Ω(n^k) (in general, such a bound does not have to be tight). In the following, we first present the algorithm and then formally state and prove its properties, which establish the above results.

Theorem 4.1 gives a complete classification of linear-complexity VASS. Note that the ranking function does not have to be positive. The following lemma shows that every linear VASS is actually of type (B).

Lemma 4.2.

Let 𝒜 be a VASS. There exists a ranking function for 𝒜 if and only if there exists a positive ranking function for 𝒜.

Proof.

One direction is trivial. For the other, assume we have some ranking function f for 𝒜 with normal n and weighting vector w. Then for any transition (p, u, q) we have n·u + w(q) - w(p) ≤ -1.

Let ε > 0 be such that for every transition (p, u, q) we have ε·|𝟏·u| ≤ 1/2 (there are only finitely many transitions so such an ε must exist). We define a linear map g with the positive normal n + ε·𝟏 and the weighting vector w.

Then for any transition (p, u, q) we have

(n + ε·𝟏)·u + w(q) - w(p) ≤ -1 + 1/2 = -1/2.

Therefore, 2·g is a positive RF. ∎

input : A strongly connected d-dimensional VASS 𝒜 with at least one transition.
output : A tuple (type, k), or "non-terminating".
1 if there is a positive QRF for 𝒜 then type := B else type := C
2 k := Decompose(𝒜)
3 if k = ∞ then return "non-terminating" else return (type, k)
  procedure Decompose(𝒜)
4       f := a QRF for 𝒜 maximizing the no. of f-ranked transitions
5       R := the set of f-ranked transitions of 𝒜
6       if R contains all transitions of 𝒜 then return 1
7       if R = ∅ then return ∞
8       return 1 + max { Decompose(𝒜′) : 𝒜′ is an SCC of 𝒜 restricted to the transitions not in R }
Algorithm 1 Computing polynomial upper/lower bounds on the termination complexity of 𝒜.

Algorithm. Our method is formalized in Algorithm 1. In the algorithm, for a VASS 𝒜 = (Q, T) and a set R ⊆ T, we denote by 𝒜↾R the pair (Q, R) obtained from 𝒜 by removing all transitions not belonging to R. Note that 𝒜↾R may not be a VASS (since some state might not have an outgoing transition). An SCC of 𝒜↾R is a maximal strongly connected VASS (Q′, T′) with Q′ ⊆ Q and T′ ⊆ R. We now formally state the properties of the algorithm, starting with bounds on its running time.

Theorem 4.3.

Algorithm 1 runs in time polynomial in ‖𝒜‖. In particular, when called on a VASS of dimension d, the overall depth of recursion is at most d.

We proceed with correctness of the algorithm w.r.t. non-termination.

Theorem 4.4.

Assume that on input 𝒜, Algorithm 1 returns “non-terminating.” Then 𝒜 is a non-terminating VASS.

Finally, the following two theorems show the correctness of the algorithm w.r.t. upper and lower bounds on the termination complexity of VASS.

Theorem 4.5.

Assume that on input 𝒜, Algorithm 1 returns a tuple (type, k). Then k ≤ d and 𝒜 is terminating. Moreover, if type = B, then ℒ ∈ O(n^k).

Theorem 4.6.

Assume that on input 𝒜, Algorithm 1 returns a tuple (type, k). Then k ≤ d and ℒ ∈ Ω(n^k).

Note that the algorithm indeed performs the required classification since type is set to B if and only if the check for the existence of a positive QRF at the beginning of the algorithm is successful. We now present the proofs of the above theorems.

Proof of Theorem 4.3. In order to analyze the termination of the algorithm we consider the cone of cycle effects cone(Inc). As usual we define the dimension of a cone C as the dimension of the smallest vector space containing C. We show that the dimension of the cone generated by the increments decreases with each recursive call:

Lemma 4.7.

Let 𝒜 be some VASS such that Decompose(𝒜) leads to some recursive call Decompose(𝒜′) for some SCC 𝒜′. Then the dimension of the cone of cycle effects of 𝒜′ is strictly smaller than that of 𝒜.

By Lemma 4.7 we have that the dimension of the cone decreases with every recursive call. With dim(cone(Inc)) ≤ d, we get that the recursion depth is bounded by d.

Now we focus on the complexity of computing a QRF that maximizes the number of ranked transitions. The computation of such a QRF can be directly encoded by the following linear optimization problem.

LP: maximize the sum of the variables x_t over all transitions t, subject to 0 ≤ x_t ≤ 1 for every transition t and the constraint that the candidate quasi-ranking function decreases by at least x_t along every transition t.
Lemma 4.8 ().

Let x be an optimal solution to the LP. Then the candidate function associated with x is a QRF that maximizes the number of ranked transitions.
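To illustrate what the LP certifies, the following sketch classifies transitions with respect to a candidate function. We assume here, purely for illustration and not as the paper's exact definition, quasi-ranking functions of the state-affine shape f(q, v) = y[q] + c·v with c ≥ 0:

```python
def classify(transition, c, y):
    # Change of f(q, v) = y[q] + c.v along a transition (q, u, q'):
    # 'ranked' if f is guaranteed to drop by at least 1,
    # 'neutral' if f cannot increase, 'increasing' otherwise.
    q, u, q2 = transition
    delta = sum(ci * ui for ci, ui in zip(c, u)) + y[q2] - y[q]
    if delta <= -1:
        return "ranked"
    if delta <= 0:
        return "neutral"
    return "increasing"
```

An optimal LP solution then corresponds to a candidate under which the number of transitions classified as "ranked" is maximal.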

Similarly, checking the existence of a positive QRF can be performed by a direct reduction to linear programming. The corresponding LP is analogous to the one above.

Lemma 4.9 ().

Checking the existence of a positive QRF can be done in polynomial time.

We now finish the proof of Theorem 4.3. We note that computing the QRFs in the algorithm can be done by linear programming. Next, consider the set of recursive calls made at a fixed recursion depth. The VASSs of these recursive calls are all disjoint sub-VASSs of the input VASS A. Thus, the complexity of solving all the optimization problems at one level is bounded by the complexity of solving the LP for A itself. Hence, the overall complexity of Algorithm 1 is the complexity of solving the LP times the dimension d.

Proof of Theorem 4.4. Let A be a VASS. Consider the constraint systems (P) and (Q) stated below. Both constraint systems are parameterized by a transition t0. Constraint system (P) is taken from Kosaraju and Sullivan (Kosaraju and Sullivan, 1988). Note that system (P) has a rational solution if and only if it has an integer solution.

constraint system (P), with a rational variable x_t for every transition t:
(3) x_t ≥ 0 for every transition t;
(4) for every state q, the total x-value of the transitions entering q equals the total x-value of the transitions leaving q;
(5) the combined effect, the sum over t of x_t · u(t), is non-negative in every component;
(6) x_t0 ≥ 1 for the parameter transition t0.
constraint system (Q): there exist a non-negative vector c and a value y(q) for every state q such that c · u(t) + y(q') − y(q) ≤ 0 for every transition t = (q, u(t), q'), with the strict requirement c · u(t0) + y(q') − y(q) ≤ −1 for the parameter transition t0.

The next lemma shows the connection between system (P) and multi-cycles in A. We call a multi-cycle non-negative if the sum of the effects of all its cycles is non-negative in every component.

Lemma 4.10 (Cited from (Kosaraju and Sullivan, 1988)).

There is a solution x to constraints (3)-(5) iff there exists a non-negative multi-cycle M such that the number of times a transition t appears in the cycles of M is at least x_t, for each transition t.
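The objects of Lemma 4.10 are easy to manipulate concretely. In the sketch below, a multi-cycle is a list of cycles, each cycle a list of (state, effect, state) transitions; the encoding and function names are ours:

```python
from collections import Counter

def effect(multicycle):
    # Componentwise sum of all transition effects over all cycles.
    dim = len(multicycle[0][0][1])
    total = [0] * dim
    for cycle in multicycle:
        for (_, u, _) in cycle:
            total = [a + b for a, b in zip(total, u)]
    return total

def is_nonnegative(multicycle):
    # A multi-cycle is non-negative if its combined effect is >= 0
    # in every component.
    return all(x >= 0 for x in effect(multicycle))

def multiplicities(multicycle):
    # How often each transition occurs in the cycles; Lemma 4.10
    # relates these counts to a solution x of constraints (3)-(5).
    return Counter(t for cycle in multicycle for t in cycle)
```

For instance, the multi-cycle consisting of the cycle p→q→p with effects (1, −1) and (−1, 1) together with the self-loop on q with effect (0, 1) has combined effect (0, 1) and is therefore non-negative.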

On the other hand, system (Q) is connected to QRFs.

Lemma 4.11 ().

Constraint system (Q) has a rational solution if and only if there exists a QRF under which transition t0 is ranked and every other transition is ranked or neutral.

The following result is an immediate consequence of Farkas’ Lemma.

Lemma 4.12 ().

For each transition t0, exactly one of the constraint systems (P) and (Q) has a solution.

We now finish the proof of Theorem 4.4. Because Algorithm 1 returns “non-terminating,” there is a sub-VASS B of the input, encountered during some recursive call, such that no transition of B is ranked for any QRF. Hence, constraint system (Q) is unsatisfiable for every transition t0 of B. By Lemma 4.12, constraint system (P) is satisfiable. We consider the non-negative multi-cycle associated with an integer solution of (P). This multi-cycle contains transition t0 at least once. Because such a multi-cycle exists for every transition t0, we can combine all these multi-cycles into a single non-negative multi-cycle, which shows that B is non-terminating.
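The combination step at the end of the proof rests on two simple facts: a sum of non-negative effect vectors is non-negative, and a transition covered by any of the combined multi-cycles is covered by their union. A self-contained sketch (with multi-cycles encoded as lists of cycles, each cycle a list of (state, effect, state) transitions; encoding and names ours):

```python
def combine(multicycles):
    # The union of several multi-cycles is again a multi-cycle.
    return [cycle for mc in multicycles for cycle in mc]

def combined_effect(multicycle):
    # Componentwise sum of all transition effects over all cycles.
    dim = len(multicycle[0][0][1])
    total = [0] * dim
    for cycle in multicycle:
        for (_, u, _) in cycle:
            total = [a + b for a, b in zip(total, u)]
    return total

def covers(multicycle, transitions):
    # Every transition of the sub-VASS must occur in the witness.
    used = {t for cycle in multicycle for t in cycle}
    return all(t in used for t in transitions)
```

Combining one non-negative multi-cycle per transition thus yields a single non-negative multi-cycle covering all transitions, the witness of non-termination used in the proof.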

Connection to (Kosaraju and Sullivan, 1988).

Algorithm 1 extends algorithm ZCYCLE of Kosaraju and Sullivan (Kosaraju and Sullivan, 1988) by a ranking function construction. Because of the duality stated in Lemma 4.12, the ranking function construction can be interpreted as the dual of algorithm ZCYCLE. Algorithm 1 makes use of this duality to achieve completeness: it either returns a ranking function, which witnesses termination, or it returns a non-negative multi-cycle, which witnesses non-termination. The duality also means that the ranking function construction comes essentially for free, as primal-dual LP solvers simultaneously generate solutions for both problems. An additional result is the improved analysis of the recursion depth: (Kosaraju and Sullivan, 1988) uses the fact that the number of locations is a trivial upper bound on the recursion depth, while we have shown the bound d (see Theorem 4.3). With this result and with the LP above, which simultaneously handles all constraint systems (P)/(Q) and thus avoids an iteration over the transitions, we affirmatively answer the open question of Kosaraju and Sullivan (Kosaraju and Sullivan, 1988) of whether the complexity can be expressed as a polynomial function in the dimension times the complexity of a linear program.

Proof of Theorem 4.5. First, we prove the upper bound by induction on the depth of recursion (of Decompose). More precisely, if the algorithm returns a tuple (k, exact) with exact = true and the depth of recursion (the number of nested calls) is i, then the termination complexity is in O(n^i).

  • If there is no recursive call of procedure Decompose, then the QRF obtained on line 1 is in fact a ranking function, because all transitions are ranked. Due to Theorem 4.1, the termination complexity is in O(n).

  • Let i be the recursion depth. Assume the claim is correct for every run of the algorithm with recursion depth smaller than i. By the induction hypothesis, every SCC handled in a recursive call has termination complexity in O(n^(i-1)).

    Consider an initial configuration. Now assume we have a VASS A and a QRF f. If a ranked transition is taken, the f-value of the next configuration decreases by at least 1. If a neutral transition is taken, the f-value does not increase. Notice that every configuration satisfies